[LU-947] ptlrpc dynamic service thread count handling Created: 20/Dec/11  Updated: 24/Nov/23  Resolved: 17/Mar/20

Status: Resolved
Project: Lustre
Component/s: None
Affects Version/s: None
Fix Version/s: Lustre 2.13.0

Type: Improvement Priority: Minor
Reporter: Andreas Dilger Assignee: Andreas Dilger
Resolution: Fixed Votes: 0
Labels: None

Issue Links:
Related
is related to LU-17312 interop conf-sanity test_53b: Asserti... Open
Bugzilla ID: 22516
Rank (Obsolete): 10749

 Description   

It should be possible to dynamically tune the number of ptlrpc threads at runtime for testing purposes. Currently it is possible to increase the maximum thread count, but it is not possible to stop threads that are already running.

This was being worked on in bug 22417:
https://bugzilla.lustre.org/attachment.cgi?id=32351

and later enhanced in bug 22516:
https://bugzilla.lustre.org/attachment.cgi?id=32510

The latter patch includes code to dynamically tune the thread counts based on the threads in use over the past several minutes.
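As a rough sketch of the behavior described above (using the ost.OSS.ost_io.threads_max tunable discussed in the comments below; exact parameter paths and the threads_started name may differ by Lustre version):

  # Show the current cap and how many service threads have been started.
  lctl get_param ost.OSS.ost_io.threads_max ost.OSS.ost_io.threads_started

  # Raising the cap takes effect at runtime; extra threads start on demand.
  lctl set_param ost.OSS.ost_io.threads_max=512

  # Lowering the cap does not stop already-running threads; making this
  # work is the goal of this ticket.
  lctl set_param ost.OSS.ost_io.threads_max=128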



 Comments   
Comment by Andreas Dilger [ 07/Mar/19 ]

We also want to reduce the default maximum number of service threads, as this is typically too high for most systems.

Chris, can you please provide details about which threads should be reduced, and what the preferred thread count is?

Comment by Chris Hunter (Inactive) [ 11/Mar/19 ]

For systems with a large number of clients (i.e. ~1000) we find the maximum number of OSS & MDS threads too high. This causes high system load & lost network connections. Too many service threads particularly impact Ethernet & OPA, since they have higher CPU utilization/system load.

AFAIK the current default is a maximum of 512 threads. We usually set fixed values mds_num_threads=256 and oss_num_threads=256 via module options (i.e. half the default). To set kernel module options we have to stop Lustre & reload the Lustre modules. For virtual machines with limited CPU cores we often use smaller values.

We also tested the tunable ost.OSS.ost_io.threads_max. I believe we also have to reload the Lustre modules to set this parameter.
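A minimal sketch of the fixed-value setup described above, assuming the conventional mds/ost module option names (they may differ by Lustre version); a module reload is needed for the change to take effect:

  # /etc/modprobe.d/lustre.conf
  options mds mds_num_threads=256
  options ost oss_num_threads=256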

 

Comment by Andreas Dilger [ 12/Mar/19 ]

Could you comment on the core count for 256 threads vs. smaller systems? I'm wondering if that could be made automatic?

The threads_max parameter can currently be increased to allow more threads to be started, if needed, but decreasing it does not stop the threads. I'm just looking at the code to determine if this is practical to change (it was previously not practical with the "obdfilter" code, but it seems the "ofd" code does not suffer the same limitations).

Comment by Andreas Dilger [ 12/Mar/19 ]

Andreas Dilger (adilger@whamcloud.com) uploaded a new patch: https://review.whamcloud.com/34400
Subject: LU-947 ptlrpc: stop threads if more than threads_max
Project: fs/lustre-release
Branch: master
Current Patch Set: 1
Commit: 039e8c72f5ace8b7d9b6b1d3ec543f1dbf95335b

Comment by Andreas Dilger [ 12/Mar/19 ]

chunteraa it looks like oss_max_threads=512 and MDS_NTHRS_MAX=1024 in the code. The oss_max_threads upper limit is tunable since commit v2_8_50_0-44-gaa84d18864, but MDS_NTHRS_MAX=1024 is fixed. It seems that rather than setting oss_num_threads and mds_num_threads (which set the minimum, maximum, and number of threads started), it might be better to set oss_max_threads=256, which sets the upper limit on threads (and to add a mds_max_threads tunable as well), since this allows a system to start fewer threads if more are not needed (e.g. with only a few clients).

It looks like LDLM_NTHRS_MAX is already somewhat dependent on the number of cores (num_online_cpus() == 1 ? 64 : 128), but this is probably a holdover from days gone by, or maybe single-core VMs?  It does show that it is possible to auto-tune based on the core count, however.
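A sketch of the suggested alternative, again assuming the conventional ost module option name; note that mds_max_threads is only proposed here and does not exist yet:

  # /etc/modprobe.d/lustre.conf
  # Cap the OSS thread count instead of pinning it, so that a lightly
  # loaded server starts fewer threads.
  options ost oss_max_threads=256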

Comment by Chris Hunter (Inactive) [ 12/Mar/19 ]

Could you comment on the core count for 256 threads vs. smaller systems? I'm wondering if that could be made automatic?

example VM environments:

  • 6 CPU cores, 32 GB memory: oss_num_threads=192 or mds_num_threads=128; we also used ost.OSS.ost_io.threads_max=150 when there are many disks installed.

 

  • 16 CPU cores, 90 GB memory: oss_num_threads=256 or mds_num_threads=192

 

Comment by Gerrit Updater [ 14/Mar/19 ]

Andreas Dilger (adilger@whamcloud.com) uploaded a new patch: https://review.whamcloud.com/34418
Subject: LU-947 ptlrpc: reduce default MDS/OSS thread count
Project: fs/lustre-release
Branch: master
Current Patch Set: 1
Commit: 52ebfd503a17fac1a02cb63a8807d3d289a1c853

Comment by Chris Hunter (Inactive) [ 14/Mar/19 ]

A general rule of mdt_threads_max=16*num_cpus has been a good starting point, with caveats if processor hyperthreading is enabled.

For ost & ost_io threads, the LOM states "You may want to start with a number of OST threads equal to the number of actual disk spindles on the node." That makes sense for spinning media but is perhaps not relevant for flash storage. With flash storage, I suspect ost_io_threads_max=N*num_cpus, with N in the range 10-20, is a good starting point. However, I am not sure whether storage blk_mq support means more or fewer ost_io threads.
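A quick sizing sketch based on the rules of thumb above; the multipliers are the suggested starting points from this comment, not hard recommendations:

  ncpus=$(nproc)
  echo "mdt threads_max    ~ $((16 * ncpus))"                      # 16 * num_cpus
  echo "ost_io threads_max ~ $((10 * ncpus)) to $((20 * ncpus))"   # N = 10..20 for flash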

Of course, the ability to reduce the active thread count would help tuning.

Comment by Shuichi Ihara [ 15/Mar/19 ]

I agree that reducing the number of threads might help if a large number of clients send messages simultaneously, but we need to keep maximum performance with a small number of clients on a network with large bandwidth, e.g. 8 clients with EDR need to get 80GB/sec too.

For systems with a large number of clients (i.e. ~1000) we find the maximum number of OSS & MDS threads too high. This causes high system load & lost network connections. Too many service threads particularly impact Ethernet & OPA, since they have higher CPU utilization/system load.

Did you disable the 16MB RPC size here? In most large installations, memory pressure comes from large RPCs, and we have been reducing the RPC size to 8MB or 4MB.
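For reference, a sketch of capping the RPC size as described (brw_size is the tunable mentioned in the reply below; the value is assumed to be in MB and the parameter path may vary by version):

  # On the OSS, reduce the maximum bulk RPC size offered to clients.
  lctl set_param obdfilter.*.brw_size=4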

Comment by Chris Hunter (Inactive) [ 16/Apr/19 ]

Did you disable the 16MB RPC size here? In most large installations, memory pressure comes from large RPCs, and we have been reducing the RPC size to 8MB or 4MB.

Reducing brw_size doesn't help much with many clients.

but we need to keep maximum performance with a small number of clients on a network with large bandwidth, e.g. 8 clients with EDR need to get 80GB/sec too.

I suspect the main factors:
1. number of clients (i.e. the number of simultaneous messages to an OST target)
2. number of OSTs per server
3. performance difference between flash and spinning disk storage

Ideally we would have the ability to set max_threads based on different environments and change the value if the workload changes.

 

Comment by Andreas Dilger [ 16/Apr/19 ]

Ideally we would have the ability to set max_threads based on different environments and change the value if the workload changes.

That ability already exists: it is a module parameter, but it can also be increased at runtime if needed. What is new in the first patch is the ability to reduce it at runtime. That said, changing it based on workload seems impractical since there may be many different jobs running at the same time. What I'd like is to have a reasonable out-of-the-box value, possibly a function of some well-known parameters (RAM, core count, possibly OST count, maybe client count though I'm not sure that is right).

Comment by Chris Hunter (Inactive) [ 18/Apr/19 ]

What is new in the first patch is the ability to reduce it at runtime.

This would help for managing larger systems.

reasonable out-of-the-box value, possibly a function of some well-known parameters (RAM, core count, possibly OST count, maybe client count though I'm not sure that is right).

Perhaps we could provide guidelines in the documentation?

Comment by Gerrit Updater [ 21/Apr/19 ]

Oleg Drokin (green@whamcloud.com) merged in patch https://review.whamcloud.com/34400/
Subject: LU-947 ptlrpc: allow stopping threads above threads_max
Project: fs/lustre-release
Branch: master
Current Patch Set:
Commit: 183cb1e3cdd2de93aca5dff79b3d56bbadc00178

Generated at Sat Feb 10 01:11:58 UTC 2024 using Jira 9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c.