[LU-1147] Reduce CPU overhead and performance degradation with larger striped files while reading uncached data in 1.8 Created: 28/Feb/12  Updated: 05/Mar/14  Resolved: 05/Mar/14

Status: Resolved
Project: Lustre
Component/s: None
Affects Version/s: Lustre 1.8.7
Fix Version/s: None

Type: Improvement Priority: Minor
Reporter: Jeremy Filizetti Assignee: Andreas Dilger
Resolution: Won't Fix Votes: 0
Labels: llite, performance
Environment:

Tested on RHEL 5.6, OFED 1.5.3.1, and Lustre 1.8.6


Attachments: PNG File stripe_perf.png    
Rank (Obsolete): 9738

 Description   

While looking at functions with high CPU utilization for LU-1056, we came across lov_stripe_size as one of the top functions for files with larger stripe counts. After looking into this, the reason can be summarized as follows:

With the current read implementation in Lustre 1.8, ll_readpage->ll_readahead->obd_merge_lvb is called for every page of a file at least once. Even if readahead has already prefetched the data into cache, a subsequent call to ll_readpage is still required to set the page flags to uptodate. obd_merge_lvb is used to calculate the kms (known minimum size) for an inode and calls lov_stripe_size for every stripe in the file, so the larger the stripe count of a file, the larger the overhead of this calculation. To reduce the overhead of calculating the kms, i_size_read() could be used for the inode instead of locking the stripe and calling obd_merge_lvb. The inode size should already be updated by ll_extent_lock, which is called by ll_file_aio_read.
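To make the scaling argument concrete, below is a small user-space model, not the Lustre code or the proposed patch. model_stripe_size() and model_merge_kms() are hypothetical stand-ins for lov_stripe_size() and obd_merge_lvb() using a plain RAID0-style mapping, and the stripe sizes and page counts are arbitrary. It only illustrates that the per-page merge performs stripe_count size calculations, whereas an i_size_read()-style shortcut reads one already-cached value per page.

/* Simplified model of the per-page kms merge cost described above.
 * All names here are illustrative stand-ins, not Lustre functions. */
#include <stdio.h>
#include <stdint.h>

#define STRIPE_SIZE (1 << 20)          /* 1 MiB stripes, for illustration */

/* Map a per-stripe object size to the file size it implies (RAID0-style
 * approximation of what lov_stripe_size does). */
static uint64_t model_stripe_size(uint64_t ost_size, int stripe_idx, int stripe_count)
{
        uint64_t chunks = ost_size / STRIPE_SIZE;
        uint64_t rem    = ost_size % STRIPE_SIZE;

        if (ost_size == 0)
                return 0;
        if (rem)
                return (chunks * stripe_count + stripe_idx) * (uint64_t)STRIPE_SIZE + rem;
        return ((chunks - 1) * stripe_count + stripe_idx + 1) * (uint64_t)STRIPE_SIZE;
}

/* What the per-page merge effectively does: take the maximum implied size
 * over every stripe, i.e. O(stripe_count) work for each page read. */
static uint64_t model_merge_kms(const uint64_t *ost_sizes, int stripe_count)
{
        uint64_t kms = 0;

        for (int i = 0; i < stripe_count; i++) {
                uint64_t sz = model_stripe_size(ost_sizes[i], i, stripe_count);

                if (sz > kms)
                        kms = sz;
        }
        return kms;
}

int main(void)
{
        enum { STRIPES = 118 };        /* matches the widest test in the graph */
        uint64_t ost_sizes[STRIPES];
        uint64_t pages = 1 << 18;      /* 1 GiB of 4 KiB pages */
        uint64_t stripe_calls = 0;

        for (int i = 0; i < STRIPES; i++)
                ost_sizes[i] = 8 * (uint64_t)STRIPE_SIZE;   /* arbitrary sizes */

        for (uint64_t p = 0; p < pages; p++) {
                (void)model_merge_kms(ost_sizes, STRIPES);
                stripe_calls += STRIPES;
        }

        printf("per-page merge: %llu stripe-size calls for %llu pages\n",
               (unsigned long long)stripe_calls, (unsigned long long)pages);
        printf("i_size_read()-style shortcut: one cached size read per page\n");
        return 0;
}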



 Comments   
Comment by Jeremy Filizetti [ 28/Feb/12 ]

Performance before and after the patch when reading uncached files from a client. While there is still a lot of fluctuation with the patch, I was able to obtain almost the same performance with 118 stripes as with a single-striped file.

Comment by Jeremy Filizetti [ 28/Feb/12 ]

Patch can be found at http://review.whamcloud.com/#change,2221

Comment by Peter Jones [ 29/Feb/12 ]

Jeremy

Thanks for the patch. I know that Andreas has provided some comments on the patch in gerrit. My additional comment is that this work would be far more likely to land if it were targeted on master, as that is the codeline under active development at the moment.

Peter

Comment by Jeremy Filizetti [ 01/Mar/12 ]

Thanks for the input, Peter. I replied to Andreas's comments. I don't know if the same issue exists in master, but we are still probably at least a year away from moving to Lustre 2+. Until then I still need to keep focusing on 1.8.

Comment by Peter Jones [ 03/Apr/12 ]

Andreas

Could you please answer Jeremy's follow-on question about this patch?

Thanks

Peter

Comment by Andreas Dilger [ 03/Apr/12 ]

Oleg, you reworked a lot of the 1.8 page handling code to reduce the DLM overhead. Can you please comment on whether Jeremy's patch in http://review.whamcloud.com/2221 would work correctly?

Jeremy, is the performance in the attached graph with or without data checksums enabled? In 2.2 clients the checksums are calculated on multiple cores (even for a single-threaded read/write), and this has been shown to improve single-client performance significantly.

Comment by John Fuchs-Chesney (Inactive) [ 05/Mar/14 ]

Jeremy – is this still an issue you are interested in?
If you have moved on and we don't need to keep tracking this, I'd like to mark it as resolved.
Please let me know which you would prefer.
Thanks,
~ jfc.

Comment by Jeremy Filizetti [ 05/Mar/14 ]

Go ahead and close the issue, I'll open a new ticket for Lustre 2.x when we get to that point.

Comment by John Fuchs-Chesney (Inactive) [ 05/Mar/14 ]

Thank you Jeremy.
~ jfc.

Comment by John Fuchs-Chesney (Inactive) [ 05/Mar/14 ]

Issue may be raised again for 2.x at a future time.
