[LU-12064] Adaptive timeout at_min adjustment & granularity Created: 12/Mar/19  Updated: 07/Feb/24

Status: Reopened
Project: Lustre
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Improvement Priority: Major
Reporter: Patrick Farrell (Inactive) Assignee: WC Triage
Resolution: Unresolved Votes: 0
Labels: None

Attachments: PDF File LAD2022-Scaling_Up_to_20k_Client-Cedeyn_Delbary.pdf    
Issue Links:
Related
is related to LU-4942 lock callback timeout is not per-export Resolved
is related to LU-15246 Add per device adaptive timeout param... Resolved
is related to LU-11989 Global filesystem hangs in 2.12 Resolved
is related to LU-17514 parameter hint for expected number of... Open
Rank (Obsolete): 9223372036854775807

 Description   

The adaptive timeout code currently works at a granularity of full seconds and ignores measured service times of "0".  This means the MDS adaptive timeout code doesn't really adjust the timeouts there.

For example, the bl_ast timeout stays at the default value of 100 seconds * 1.5 (ldlm_bl_timeout), i.e. 150 seconds.

That is a very long time to wait, and the AT code is supposed to shorten it.

There are two obvious approaches here.

  1. Stop ignoring "0" values in the adaptive timeout code, and set a default non-zero at_min (setting it to 1 second should mean no behavioral change, since that is the current minimum real value).  This solution should be simple and should not affect existing installs much, since configuring at_min is fairly common anyway.
  2. Update the adaptive timeout code to use more precise time intervals than 1 second.
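Option #1 amounts to clamping every measured service time to a non-zero floor instead of discarding zero samples. A minimal sketch, assuming an illustrative helper name (at_clamp_sample is not the real Lustre API):

```c
/* Sketch of approach #1: keep 0-second samples as evidence of a fast
 * service and clamp them up to at_min, rather than ignoring them.
 * AT_MIN_DEFAULT and at_clamp_sample() are hypothetical names. */
#define AT_MIN_DEFAULT 1  /* seconds; matches the current effective minimum */

static unsigned int at_clamp_sample(unsigned int measured, unsigned int at_min)
{
        return measured > at_min ? measured : at_min;
}
```

With at_min = 1 a zero-length reply feeds a 1-second sample into the estimator instead of being dropped, which is why the default should not change observed behavior.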

 

I'm inclined to #1.  But in real configs, at_min is generally recommended to be something like 40 seconds.  So perhaps we should default to that instead.

 

Note specifically that ldlm_bl_timeout() takes the max() of this estimate and ldlm_enqueue_min (default is OBD_TIMEOUT_DEFAULT, 100 seconds), so we'll only get down to that value there.
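The floor described above can be sketched as follows, using the default values mentioned in this ticket (the names approximate the Lustre code; at_est2timeout() is modeled here as estimate plus a 50% margin, which is an assumption):

```c
/* Sketch of the ldlm_bl_timeout() floor: the AT estimate (with margin)
 * can never take the timeout below ldlm_enqueue_min. */
#define OBD_TIMEOUT_DEFAULT 100  /* seconds */
static const unsigned int ldlm_enqueue_min = OBD_TIMEOUT_DEFAULT;

static unsigned int bl_timeout_sketch(unsigned int at_estimate)
{
        /* at_est2timeout() ~ estimate * 1.5 */
        unsigned int est = at_estimate + (at_estimate >> 1);

        return est > ldlm_enqueue_min ? est : ldlm_enqueue_min;
}
```

So even a very small per-export estimate still yields a 100-second blocking timeout, which is the point being made here.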

 

A few open questions here.



 Comments   
Comment by Andreas Dilger [ 12/Mar/19 ]

It seems patch http://review.whamcloud.com/9336 "LU-4942 at: per-export lock callback timeout" changed prolog_timeout() significantly:

-       return max(at_est2timeout(at_get(&svcpt->scp_at_estimate)), ldlm_timeout);
+       /* We are in the middle of the process - BL AST is sent, CANCEL
+        * is ahead. Take half of AT + IO process time. */
+       return at_est2timeout(at_get(&svcpt->scp_at_estimate)) +
+               (ldlm_bl_timeout(lock) >> 1);

which I'm not sure I agree with. Definitely if a client is responsive and sending IO it should be allowed to complete, but there should be a shorter timeout for the initial AST if the client is not responsive.

It looks like there was an incremental smearing of logic over several patches. Initially, ldlm_get_rq_timeout() returned min(ldlm_timeout, obd_timeout / 3), which seems reasonable - we want to allow a timeout and a retry before evicting a client. Then AT came in and disabled this if AT_OFF, in favor of ptlrpc_at_set_req_timeout() set when the request is allocated, which uses only obd_timeout internally. We currently don't use ldlm_timeout anywhere in the code when AT is enabled.

The ldlm_server_blocking_ast() code should not use ldlm_bl_timeout() for the initial BL AST reply timeout, since we don't know at this point if the client is responsive or not, but rather something like max(ldlm_timeout, at_est2timeout(at_get(&lock->l_export->exp_bl_lock_at))). This is OK for later bulk IO timeouts when doing prolong_timeout() after we know the client has replied to the initial blocking AST and is busy doing writes under the lock.
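The proposal for the initial BL AST timeout could be sketched as below. The 20-second ldlm_timeout default and the 1.5x at_est2timeout() margin are assumptions for illustration, not values taken from the source:

```c
/* Sketch of the proposed *initial* BL AST timeout: bound it below by
 * ldlm_timeout rather than by the full ldlm_bl_timeout(), since client
 * responsiveness is unknown at this point. Names are illustrative. */
static const unsigned int ldlm_timeout_def = 20;  /* assumed default, seconds */

static unsigned int initial_bl_ast_timeout(unsigned int bl_lock_at_est)
{
        /* at_est2timeout() ~ per-export estimate * 1.5 */
        unsigned int est = bl_lock_at_est + (bl_lock_at_est >> 1);

        return est > ldlm_timeout_def ? est : ldlm_timeout_def;
}
```

A responsive export with a small exp_bl_lock_at history would then see a short initial timeout, while the longer ldlm_bl_timeout() is reserved for prolong_timeout() once the client has proven responsive.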

Comment by Andreas Dilger [ 14/Mar/19 ]

In any case, Patrick I agree that setting at_min = 1 by default and allowing zero elapsed time replies makes sense and is relatively easy and low risk to implement.

Comment by Andreas Dilger [ 29/Jul/21 ]

What else would be useful here is to tune at_min as a function of the number of clients connected to the servers. For systems with ~200 clients, having at_min=15 is typical, and with ~1500 clients at_min=30 is typical, so a function like the following seems reasonable:

        at_min = ilog(num_clients) * 3;

though any explicitly-specified at_min value should take precedence. That probably means having a separate flag that indicates whether at_min has been explicitly set or not, and otherwise recalculating it when clients connect and disconnect.
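The scaling rule plus the explicit-override flag could look like the sketch below, where ilog2_u32() is a portable stand-in for the kernel's ilog2() and at_min_set < 0 marks "not explicitly configured" (both are illustrative choices, not the real interface):

```c
/* Portable integer log2 (floor), standing in for the kernel's ilog2(). */
static unsigned int ilog2_u32(unsigned int v)
{
        unsigned int r = 0;

        while (v >>= 1)
                r++;
        return r;
}

/* Sketch: at_min = ilog2(num_clients) * 3, unless explicitly set. */
static unsigned int auto_at_min(unsigned int num_clients, int at_min_set)
{
        if (at_min_set >= 0)  /* explicit setting takes precedence */
                return (unsigned int)at_min_set;
        return ilog2_u32(num_clients) * 3;
}
```

This reproduces the data points above reasonably well: ~1500 clients gives ilog2(1500) * 3 = 10 * 3 = 30, and ~200 clients gives 7 * 3 = 21, close to the typical at_min=15.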

Comment by Andreas Dilger [ 28/Sep/22 ]

Presentation from CEA set at_min=55 for a cluster with 20,000 clients, so this would also match approximately the ilog(num_clients) * 3 formula (16 * 3 = 48).

Comment by DELBARY Gael [ 28/Sep/22 ]

The main concern about at_min is that if you set a value lower than your LNet transaction timeout (modulo the number of LNet retries, if configured), the probability of flooding your LNet networks increases drastically, because an RPC not acknowledged by the server will be retransmitted by the client (not directly, but through a high-priority RPC) according to the at_min value. I don't remember the client-side code by heart, but this is what we have observed. In any case, other Lustre timeouts like ldlm_timeout and obd_timeout are generally initialized in the code with constant values higher than the default transaction timeout. Based on what we have seen at large scale, we could set the default at_min value to lnet_transaction_timeout + 1 (for non-routed configurations), or max((lnet_transaction_timeout + 1), ilog(num_clients) * 3). I think we have to rely on the lower layers. Does that make sense?
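The suggested LNet floor can be sketched independently of how at_min is otherwise computed (the function name is hypothetical; the idea is only that at_min must not undercut the transport's own give-up time):

```c
/* Sketch: whatever at_min is otherwise computed to be, keep it above
 * lnet_transaction_timeout + 1 so a client cannot retransmit before
 * LNet itself has given up on the message. Illustrative names only. */
static unsigned int at_min_with_lnet_floor(unsigned int computed_at_min,
                                           unsigned int lnet_txn_timeout)
{
        unsigned int floor = lnet_txn_timeout + 1;

        return computed_at_min > floor ? computed_at_min : floor;
}
```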

Comment by Andreas Dilger [ 30/Sep/22 ]

I think one of the current issues is that at_min=0 allows the ping flood to happen. If there is at_min > 0 it would significantly reduce the flood. Having a reasonable at_min for the cluster size will help significantly.

Comment by Andreas Dilger [ 20/Jan/23 ]

With ever-increasing core counts on the client, it makes sense to scale the number of "clients" by max_mod_rpcs_in_flight when computing at_min so that a multi-threaded workload on a smaller number of clients is handled similarly to a larger number of clients with max_mod_rpcs_in_flight=1. With the default max_mod_rpcs_in_flight=8 this would be a multiplier of 3 to the calculated at_min value.
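One possible reading of this comment is to scale the client count by max_mod_rpcs_in_flight before applying the ilog2-based formula; since ilog2(N * 8) = ilog2(N) + 3, the default of 8 RPCs in flight adds 3 to the log term (and thus 9 seconds under the "* 3" formula). A self-contained, purely illustrative sketch:

```c
/* Sketch: treat each client as max_mod_rpcs_in_flight "effective clients"
 * when computing at_min. Interpretation and names are assumptions. */
static unsigned int at_min_scaled(unsigned int num_clients,
                                  unsigned int max_mod_rpcs_in_flight)
{
        unsigned int n = num_clients * max_mod_rpcs_in_flight;
        unsigned int lg = 0;

        while (n >>= 1)  /* floor(log2(n)) */
                lg++;
        return lg * 3;
}
```

For example, 1500 clients with the default 8 modifying RPCs in flight would behave like 12000 single-stream clients: ilog2(12000) * 3 = 13 * 3 = 39 instead of 30.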

Comment by Gerrit Updater [ 12/Apr/23 ]

"Andreas Dilger <adilger@whamcloud.com>" uploaded a new patch: https://review.whamcloud.com/c/fs/lustre-release/+/50609
Subject: LU-12064 ptlrpc: set at_min=5 by default
Project: fs/lustre-release
Branch: master
Current Patch Set: 1
Commit: 1805e4dfa8fcba712b5fa3868f26988e7635dbcb

Comment by Gerrit Updater [ 06/Sep/23 ]

"Oleg Drokin <green@whamcloud.com>" merged in patch https://review.whamcloud.com/c/fs/lustre-release/+/50609/
Subject: LU-12064 ptlrpc: set at_min=5 by default
Project: fs/lustre-release
Branch: master
Current Patch Set:
Commit: 46804c2230cc0f72d4472cddd5a37456e1f2fb00

Comment by Peter Jones [ 06/Sep/23 ]

Landed for 2.16

Comment by Andreas Dilger [ 13/Sep/23 ]

The patch that landed is only increasing at_min to a reasonable minimum value. The work to implement a dynamic at_min/at_max based on the number of connected clients has not been done.

Comment by Andreas Dilger [ 13/Sep/23 ]

One proposal that might help here (and in other places) is for the servers to persistently track the maximum number of connected clients, so that the MDS/OSS knows after a restart how many clients might connect and can set at_min to an appropriate value right from the start.

Generated at Sat Feb 10 02:49:21 UTC 2024 using Jira 9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c.