
Reduce CPU overhead and performance degradation with larger striped files while reading uncached data in 1.8

Details

    • Type: Improvement
    • Resolution: Won't Fix
    • Priority: Minor
    • Fix Version/s: None
    • Affects Version/s: Lustre 1.8.7
    • Environment: Tested on RHEL 5.6, OFED 1.5.3.1, and Lustre 1.8.6
    • 9738

    Description

      While looking at functions with high CPU utilization for LU-1056, we came across lov_stripe_size as one of the top consumers for files with larger stripe counts. After looking into this, the reason can be summarized as follows:

      With the current read implementation in Lustre 1.8, ll_readpage->ll_readahead->obd_merge_lvb is called at least once for every page of a file. Even if readahead has already prefetched the data into cache, a subsequent call to ll_readpage is still required to mark the page uptodate. obd_merge_lvb calculates the known minimum size (kms) for an inode and calls lov_stripe_size for every stripe in the file, so the larger the stripe count, the larger the overhead of this calculation. To reduce the cost of calculating the kms, i_size_read() could be used on the inode instead of locking the stripes and calling obd_merge_lvb. The inode size should already be kept up to date by ll_extent_lock, which is called from ll_file_aio_read.
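      The per-page cost described above can be sketched with a small user-space model. This is not Lustre code: the RAID0 stripe-to-file offset mapping below is a simplified stand-in for lov_stripe_size, and merge_kms is a hypothetical stand-in for the per-stripe loop in obd_merge_lvb, just to illustrate why the work grows with the stripe count and why a cached inode size would be O(1) instead.

      ```c
      #include <assert.h>
      #include <stdint.h>
      #include <stdio.h>

      /* Simplified RAID0 mapping: for stripe stripe_idx, translate the length
       * of the backing object into the file offset just past its last byte.
       * (Illustrative stand-in for lov_stripe_size; not the real function.) */
      static uint64_t stripe_to_file_end(uint64_t obj_len, int stripe_idx,
                                         uint64_t stripe_size, int stripe_count)
      {
          if (obj_len == 0)
              return 0;
          uint64_t last   = obj_len - 1;          /* last byte in the object */
          uint64_t chunk  = last / stripe_size;   /* which chunk on this OST */
          uint64_t within = last % stripe_size;   /* offset inside the chunk */
          return chunk * stripe_size * stripe_count +
                 (uint64_t)stripe_idx * stripe_size + within + 1;
      }

      /* merge_kms walks every stripe: O(stripe_count) work per call, which is
       * the cost the ticket describes being paid for every page read. */
      static uint64_t merge_kms(const uint64_t *obj_len, int stripe_count,
                                uint64_t stripe_size)
      {
          uint64_t kms = 0;
          for (int i = 0; i < stripe_count; i++) {
              uint64_t end = stripe_to_file_end(obj_len[i], i,
                                                stripe_size, stripe_count);
              if (end > kms)
                  kms = end;
          }
          return kms;
      }

      int main(void)
      {
          const uint64_t MB = 1048576;
          /* 4-stripe file, 1 MiB stripe size: stripe 0 holds two chunks,
           * the others one each, so the file is 5 MiB long. */
          uint64_t obj_len[4] = { 2 * MB, MB, MB, MB };
          uint64_t kms = merge_kms(obj_len, 4, MB);
          assert(kms == 5 * MB);
          /* The proposed change amounts to caching this value in the inode
           * (read via i_size_read(), updated under ll_extent_lock), turning
           * the per-page O(stripe_count) loop into an O(1) read. */
          uint64_t cached_size = kms;
          assert(cached_size == 5 * MB);
          printf("kms = %llu\n", (unsigned long long)kms);
          return 0;
      }
      ```

      With 118 stripes, as in the benchmark attached to this ticket, the merge loop does 118 offset calculations per page, which is where the CPU overhead scales with stripe count.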

      Attachments

        Issue Links

          Activity


            jfc John Fuchs-Chesney (Inactive) added a comment:
            Issue may be raised again for 2.x at a future time.

            jfc John Fuchs-Chesney (Inactive) added a comment:
            Thank you Jeremy.
            ~ jfc.

            jfilizetti Jeremy Filizetti added a comment:
            Go ahead and close the issue, I'll open a new ticket for Lustre 2.x when we get to that point.

            jfc John Fuchs-Chesney (Inactive) added a comment:
            Jeremy – is this still an issue you are interested in?
            If you have moved on and we don't need to keep tracking this, I'd like to mark it as resolved.
            Please let me know what you would prefer.
            Thanks,
            ~ jfc.

            adilger Andreas Dilger added a comment:
            Oleg, you reworked a lot of the 1.8 page handling code to reduce the DLM overhead. Can you please comment on whether Jeremy's patch in http://review.whamcloud.com/2221 would work correctly?

            Jeremy, is the performance in the attached graph with or without data checksums enabled? In 2.2 clients the checksums are calculated on multiple cores (even for a single-threaded read/write), and this has been shown to improve single-client performance significantly.
            pjones Peter Jones added a comment:
            Andreas,

            Could you please answer Jeremy's follow-on question about this patch?

            Thanks
            Peter

            jfilizetti Jeremy Filizetti added a comment:
            Thanks for the input Peter, I replied to Andreas's comments. I don't know if the same issue exists in master, but we are still probably at least a year away from moving to Lustre 2+. Until then I still need to keep focusing on 1.8.
            pjones Peter Jones added a comment:
            Jeremy,

            Thanks for the patch. I know that Andreas has provided some comments on the patch in Gerrit. My additional comment is that this work would be far likelier to land if it were targeted on master, as that is the codeline under active development at the moment.

            Peter
            jfilizetti Jeremy Filizetti added a comment:
            Patch can be found at http://review.whamcloud.com/#change,2221

            jfilizetti Jeremy Filizetti added a comment:
            Performance before and after the patch when reading uncached files from a client. While there are still a lot of fluctuations with the patch, I was able to obtain almost the same performance with 118 stripes as with a single-striped file.

            People

              Assignee: adilger Andreas Dilger
              Reporter: jfilizetti Jeremy Filizetti
              Votes: 0
              Watchers: 5
