[LU-947] ptlrpc dynamic service thread count handling Created: 20/Dec/11 Updated: 24/Nov/23 Resolved: 17/Mar/20 |
|
| Status: | Resolved |
| Project: | Lustre |
| Component/s: | None |
| Affects Version/s: | None |
| Fix Version/s: | Lustre 2.13.0 |
| Type: | Improvement | Priority: | Minor |
| Reporter: | Andreas Dilger | Assignee: | Andreas Dilger |
| Resolution: | Fixed | Votes: | 0 |
| Labels: | None |
| Issue Links: |
|
| Bugzilla ID: | 22516 |
| Rank (Obsolete): | 10749 |
| Description |
|
It should be possible to dynamically tune the number of ptlrpc threads at runtime for testing purposes. Currently it is possible to increase the maximum thread count, but it is not possible to stop threads that are already running. This was being worked on in bug 22417 and later enhanced in bug 22516. The latter patch includes code to dynamically tune the thread counts based on the threads in use over the past several minutes. |
| Comments |
| Comment by Andreas Dilger [ 07/Mar/19 ] |
|
We also want to reduce the default maximum number of service threads, as this is typically too high for most systems. Chris, can you please provide details about which threads should be reduced, and what the preferred thread count is? |
| Comment by Chris Hunter (Inactive) [ 11/Mar/19 ] |
|
For systems with a large number of clients (i.e. ~1000) we find the maximum number of OSS & MDS threads too high. This causes high system load & lost network connections. Too many service threads particularly impact Ethernet & OPA, since they have higher CPU utilization/system load. AFAIK the current default is a maximum of 512 threads. We usually set fixed values mds_num_threads=256 and oss_num_threads=256 via module options (i.e. half the default). To set kernel module options we have to stop Lustre & reload the Lustre modules. For virtual machines with limited CPU cores we often use smaller values. We also tested the tunable ost.OSS.ost_io.threads_max; I believe we also have to reload the Lustre modules to set this parameter.
|
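A minimal sketch of the load-time settings described above, assuming the module and parameter names documented in the Lustre Operations Manual (mds_num_threads in the mds module, oss_num_threads in the ost module); verify the owning module names for your Lustre version before applying:

```
# /etc/modprobe.d/lustre.conf on the MDS and OSS nodes
# (assumed module/parameter names; check with "modinfo mds" / "modinfo ost").
# Fix the service thread counts at half of the 512-thread default.
# Takes effect only after the Lustre modules are unloaded and reloaded.
options mds mds_num_threads=256
options ost oss_num_threads=256
```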
| Comment by Andreas Dilger [ 12/Mar/19 ] |
|
Could you comment on the core count for the 256-thread case vs. smaller systems? I'm wondering if that could be made automatic. The threads_max parameter can currently be increased to allow more threads to be started, if needed, but decreasing it does not stop the threads. I'm just looking at the code to determine if this is practical to change (it was previously not with the "obdfilter" code, but it seems the "ofd" code does not suffer the same limitation). |
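For reference, a sketch of inspecting and raising these limits at runtime with lctl (the ost.OSS.ost_io service is shown; the other ptlrpc services expose the same threads_min/threads_started/threads_max files). Before the patch on this ticket, lowering threads_max only prevents new threads from starting; it does not stop threads that are already running:

```
# On an OSS: show the configured limits and the number of threads running
lctl get_param ost.OSS.ost_io.threads_min ost.OSS.ost_io.threads_max \
               ost.OSS.ost_io.threads_started

# Raising the limit takes effect immediately (more threads may be started)
lctl set_param ost.OSS.ost_io.threads_max=512

# Lowering it does not terminate already-running threads
# (this is the behaviour the patch on this ticket changes)
lctl set_param ost.OSS.ost_io.threads_max=128
```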
| Comment by Andreas Dilger [ 12/Mar/19 ] |
|
Andreas Dilger (adilger@whamcloud.com) uploaded a new patch: https://review.whamcloud.com/34400 |
| Comment by Andreas Dilger [ 12/Mar/19 ] |
|
chunteraa, it looks like oss_max_threads=512 and MDS_NTHRS_MAX=1024 in the code. The oss_max_threads upper limit has been tunable since commit v2_8_50_0-44-gaa84d18864, but MDS_NTHRS_MAX=1024 is fixed. Rather than setting oss_num_threads and mds_num_threads (which set the minimum, maximum, and number of threads started), it might be better to set oss_max_threads=256, which only sets the upper limit of threads (and to add a tunable mds_max_threads as well); that allows a system to start fewer threads if more are not needed (e.g. only a few clients). It looks like LDLM_NTHRS_MAX already depends somewhat on the number of cores (num_online_cpus() == 1 ? 64 : 128), but this is probably a holdover from days gone by, or maybe for single-core VMs. It does show that it is possible to auto-tune based on the core count, however. |
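To make the distinction concrete, a sketch assuming oss_num_threads and oss_max_threads are both exposed as options of the ost module (verify the owning module with modinfo for your version); the mds_max_threads tunable mentioned above does not exist yet and is only being proposed:

```
# /etc/modprobe.d/lustre.conf on an OSS (assumed module/parameter names)

# Pins the OSS service at exactly 256 threads, whether busy or idle:
#options ost oss_num_threads=256

# Only caps the service at 256 threads, so a lightly loaded system
# (e.g. only a few clients) may start fewer:
options ost oss_max_threads=256
```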
| Comment by Chris Hunter (Inactive) [ 12/Mar/19 ] |
example VM environments:
|
| Comment by Gerrit Updater [ 14/Mar/19 ] |
|
Andreas Dilger (adilger@whamcloud.com) uploaded a new patch: https://review.whamcloud.com/34418 |
| Comment by Chris Hunter (Inactive) [ 14/Mar/19 ] |
|
A general rule of mdt_threads_max = 16 * num_cpus has been a good starting point, with caveats if processor hyperthreading is enabled. For ost & ost_io threads, the Lustre Operations Manual (LOM) states: "You may want to start with a number of OST threads equal to the number of actual disk spindles on the node." That makes sense for spinning media but is perhaps not relevant for flash storage. With flash storage, I suspect ost_io_threads_max = N * num_cpus, with N in the range 10-20, is a good starting point. However, I am not sure whether storage blk-mq support means more or fewer ost_io threads. Of course, the ability to reduce the active thread count would help tuning. |
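A quick illustration of these rules of thumb applied as runtime settings; the multipliers are only the starting points suggested above (not validated values), and mds.MDS.mdt.* / ost.OSS.ost_io.* are the usual parameter paths on the MDS and OSS respectively:

```
ncpus=$(nproc)

# MDS: roughly 16 service threads per core as a starting point
lctl set_param mds.MDS.mdt.threads_max=$((16 * ncpus))

# OSS on flash: roughly 10-20 ost_io threads per core; 16 used here
lctl set_param ost.OSS.ost_io.threads_max=$((16 * ncpus))
```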
| Comment by Shuichi Ihara [ 15/Mar/19 ] |
|
I agree that reducing the number of threads might help when a large number of clients send messages simultaneously, but we need to keep maximum performance with a small number of clients and large network bandwidth, e.g. 8 clients with EDR need to reach 80GB/sec too. Regarding "For systems with a large number of clients (i.e. ~1000) we find the maximum number of OSS & MDS threads too high. This causes high system load & lost network connections. Too many service threads particularly impact Ethernet & OPA, since they have higher CPU utilization/system load.": did you disable 16MB RPCs here? In most large installations the memory pressure comes from large RPCs, and we have been reducing the RPC size to 8MB or 4MB. |
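For context, a sketch of how the bulk RPC size is typically reduced on the server side (obdfilter.*.brw_size takes a value in MB; clients pick up the smaller limit when they reconnect). Treat the exact parameter path and the persistent form as assumptions to check against your Lustre version:

```
# On each OSS: cap bulk I/O RPCs at 4 MB instead of 16 MB
lctl set_param obdfilter.*.brw_size=4

# Optionally make the setting persistent (run on the MGS)
lctl set_param -P obdfilter.*.brw_size=4
```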
| Comment by Chris Hunter (Inactive) [ 16/Apr/19 ] |
Reducing brw_size doesn't help much with many clients.
I suspect the main factors:
Ideally we would have the ability to set max_threads for different environments and change the value if the workload changes.
|
| Comment by Andreas Dilger [ 16/Apr/19 ] |
That ability already exists: it is a module parameter, but it can also be increased at runtime if needed. What is new in the first patch is the ability to reduce it at runtime. That said, changing it based on workload seems impractical, since there may be many different jobs running at the same time. What I'd like is a reasonable out-of-the-box value, possibly a function of some well-known parameters (RAM, core count, possibly OST count, maybe client count, though I'm not sure that is right). |
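Purely as an illustration of such an out-of-the-box heuristic, a sketch that takes the smaller of a per-core and a per-GiB-of-RAM budget so that small VMs get few threads and large servers stay below the old 512-thread default. The formula, multipliers, and caps here are hypothetical and are not taken from the patches on this ticket:

```
ncpus=$(nproc)
ram_gib=$(awk '/MemTotal/ {print int($2 / 1024 / 1024)}' /proc/meminfo)

# Hypothetical heuristic: 16 threads per core, but no more than
# 4 threads per GiB of RAM, and never above 512.
by_cpu=$((16 * ncpus))
by_ram=$((4 * ram_gib))
suggested=$(( by_cpu < by_ram ? by_cpu : by_ram ))
[ "$suggested" -gt 512 ] && suggested=512

echo "suggested ost_io threads_max: $suggested"
```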
| Comment by Chris Hunter (Inactive) [ 18/Apr/19 ] |
This would help with managing larger systems.
Perhaps we could provide guidelines in the documentation? |
| Comment by Gerrit Updater [ 21/Apr/19 ] |
|
Oleg Drokin (green@whamcloud.com) merged in patch https://review.whamcloud.com/34400/ |