[LU-14412] changing max_read_ahead_whole_mb results in error "Numerical result out of range?" Created: 10/Feb/21 Updated: 18/Mar/21 Resolved: 18/Mar/21
|
| Status: | Resolved |
| Project: | Lustre |
| Component/s: | None |
| Affects Version/s: | Lustre 2.12.5 |
| Fix Version/s: | None |
| Type: | Bug | Priority: | Major |
| Reporter: | Mahmoud Hanafi | Assignee: | Yang Sheng |
| Resolution: | Fixed | Votes: | 0 |
| Labels: | None | ||
| Severity: | 3 |
| Rank (Obsolete): | 9223372036854775807 |
| Description |
|
Trying to change the client-side max_read_ahead_whole_mb setting returns an error:

```
r633i6n8 ~ # lctl get_param llite.nbp12-ffff95b4806d1800.max_read_ahead_mb
llite.nbp12-ffff95b4806d1800.max_read_ahead_mb=4096
r633i6n8 ~ # lctl set_param llite.nbp12-ffff95b4806d1800.max_read_ahead_whole_mb=512
error: set_param: setting /sys/kernel/debug/lustre/llite/nbp12-ffff95b4806d1800/max_read_ahead_whole_mb=512: Numerical result out of range?
```
|
| Comments |
| Comment by Andreas Dilger [ 10/Feb/21 ] |
|
It looks like this parameter can't be larger than llite.*.max_read_ahead_per_file_mb. It should have printed a message on the console like "nbp12: can't set max_read_ahead_whole_mb=512 > max_read_ahead_per_file_mb=256" or similar. |
| Comment by Andreas Dilger [ 10/Feb/21 ] |
|
One option to make this more usable would be, rather than returning an error in this case, to automatically increase max_read_ahead_per_file_mb to match the specified max_read_ahead_whole_mb value when it is not large enough. |
| Comment by James A Simmons [ 10/Feb/21 ] |
|
You can't set max_read_ahead_whole_mb to be larger than max_read_ahead_per_file_mb. That is why you see this error. |
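The constraint described above can be modeled with a short sketch. This is a minimal Python illustration of the sanity check, not the actual Lustre llite kernel code; the function and parameter names here are illustrative assumptions:

```python
import errno

def set_max_read_ahead_whole_mb(params, new_value_mb):
    """Illustrative model of the llite check: the whole-file readahead
    limit may not exceed the per-file readahead limit."""
    if new_value_mb > params["max_read_ahead_per_file_mb"]:
        # The kernel rejects the value with ERANGE, which lctl surfaces
        # as "Numerical result out of range"
        return -errno.ERANGE
    params["max_read_ahead_whole_mb"] = new_value_mb
    return 0

params = {"max_read_ahead_per_file_mb": 256, "max_read_ahead_whole_mb": 64}

# Mirrors the report: 512 > 256, so the set fails
print(set_max_read_ahead_whole_mb(params, 512) == -errno.ERANGE)  # True

# Raising the per-file limit first lets the same set succeed
params["max_read_ahead_per_file_mb"] = 1024
print(set_max_read_ahead_whole_mb(params, 512) == 0)  # True
```

In other words, the practical workaround is to raise llite.*.max_read_ahead_per_file_mb first and then set max_read_ahead_whole_mb, assuming the per-file value itself stays within its own allowed range.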
| Comment by Mahmoud Hanafi [ 10/Feb/21 ] |
|
Got it, thanks.
|
| Comment by Andreas Dilger [ 10/Feb/21 ] |
|
On a side note, I would be interested to understand your motivation for setting max_read_ahead_whole_mb=512. IMHO, this is probably not desirable as a general tuning, unless you have a random read workload that accesses the whole file, but the file is small enough to fit into cache (e.g. like LU-11416)? Or does your workload always access large files, so that the overhead of detecting sequential access is worse than the cost of fetching files of up to 512MB to the client? |
| Comment by Mahmoud Hanafi [ 10/Feb/21 ] |
|
We have a very specific workload that opens lots of small files (<100MB) and does small reads (4KB or less). The job's memory footprint is small, so the client has plenty of memory for cache. We are experimenting with these settings in the hope of increasing read performance. |
| Comment by Mahmoud Hanafi [ 18/Mar/21 ] |
|
We can close this.