[LU-5183] If adaptive timeouts are set with at_max = 600, does ldlm_timeout still take effect or is it overruled? Created: 12/Jun/14  Updated: 21/Jul/14  Resolved: 21/Jul/14

Status: Resolved
Project: Lustre
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Question/Request Priority: Minor
Reporter: Manish Patel (Inactive) Assignee: Emoly Liu
Resolution: Duplicate Votes: 0
Labels: None
Environment:

Lustre Server 2.1.6
Lustre Client 1.8.9


Issue Links:
Related
is related to LUDOC-250 More explanation of several kinds of ... Open
Rank (Obsolete): 14384

 Description   

Hi

I'd like an explanation of which timeout value is being exceeded to cause these evictions. What does the "227 seconds" refer to, i.e. which timeout is being applied: ldlm_timeout, obd_timeout, /proc/sys/lustre/timeout, at_min, or at_max?

May 14 05:37:59 dc2oss15 kernel: : Lustre: dc2-OST009c: haven't heard from client ac9ef944-83f6-a453-c821-f0067101d2ca (at 149.165.229.28@tcp) in 227 seconds. I think it's dead, and I am evicting it. exp ffff880bb8ff2800, cur 1400060279 expire 1400060129 last 1400060052
May 14 05:37:59 dc2oss15 kernel: : Lustre: Skipped 9 previous similar messages
May 14 05:38:02 dc2oss12 kernel: : Lustre: dc2-OST007b: haven't heard from client ac9ef944-83f6-a453-c821-f0067101d2ca (at 149.165.229.28@tcp) in 227 seconds. I think it's dead, and I am evicting it. exp ffff88169eecb400, cur 1400060282 expire 1400060132 last 1400060055
May 14 05:38:02 dc2oss12 kernel: : Lustre: Skipped 8 previous similar messages
May 14 05:37:53 dc2oss04 kernel: : Lustre: dc2-OST0021: haven't heard from client ac9ef944-83f6-a453-c821-f0067101d2ca (at 149.165.229.28@tcp) in 227 seconds. I think it's dead, and I am evicting it. exp ffff880bd683c400, cur 1400060273 expire 1400060123 last 1400060046
May 14 05:37:53 dc2oss04 kernel: : Lustre: Skipped 8 previous similar messages
May 14 05:37:58 dc2oss05 kernel: : Lustre: dc2-OST002c: haven't heard from client ac9ef944-83f6-a453-c821-f0067101d2ca (at 149.165.229.28@tcp) in 227 seconds. I think it's dead, and I am evicting it. exp ffff88154cc64000, cur 1400060278 expire 1400060128 last 1400060051
May 14 05:37:58 dc2oss05 kernel: : Lustre: Skipped 9 previous similar messages

In particular, I'm interested in whether the ldlm_timeout values, which are 20s on the OSTs and 6s on the MDT, are in play given that we have adaptive timeouts enabled (at_max = 600) and /proc/sys/lustre/timeout = 100.
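
For reference, here is how we read these values on the servers (a sketch; these are the usual /proc paths on our 2.1 servers):

        cat /proc/sys/lustre/timeout        # obd_timeout (100 here)
        cat /proc/sys/lustre/ldlm_timeout   # 20 on the OSTs, 6 on the MDT
        cat /proc/sys/lustre/at_min
        cat /proc/sys/lustre/at_max         # 600 here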

Should we consider increasing ldlm_timeout if it is in fact being used? Should we consider setting at_min to 60-70s to allow time for slow client responses?

If so, how do those settings help and what difference do they make?

See sections 2.2.2 and 2.2.8 in Cory Spitz's paper here:
https://cug.org/5-publications/proceedings_attendee_lists/CUG11CD/pages/1-program/final_program/Wednesday/12A-Spitz-Paper.pdf

Thank You,
Manish



 Comments   
Comment by Peter Jones [ 13/Jun/14 ]

Emoly

Could you please help on this one?

Thanks

Peter

Comment by Emoly Liu [ 16/Jun/14 ]

Hi Manish,

The first question about "227 seconds" eviction is related to obd_timeout. The code is here:

        expire_time = cfs_time_current_sec() - PING_EVICT_TIMEOUT;
        ...
        if (expire_time > exp->exp_last_request_time) {
                class_export_get(exp);
                cfs_spin_unlock(&obd->obd_dev_lock);
                LCONSOLE_WARN("%s: haven't heard from client %s"
                              " (at %s) in %ld seconds. I think"
                              " it's dead, and I am evicting"
                              " it. exp %p, cur %ld expire %ld"
                              " last %ld\n",
                              obd->obd_name,
                              obd_uuid2str(&exp->exp_client_uuid),
                              obd_export_nid2str(exp),
                              (long)(cfs_time_current_sec() -
                                     exp->exp_last_request_time),
                              exp, (long)cfs_time_current_sec(),
                              (long)expire_time,
                              (long)exp->exp_last_request_time);

and

#define PING_INTERVAL max(obd_timeout / 4, 1U)
/* Client may skip 1 ping; we must wait at least 2.5. But for multiple
 * failover targets the client only pings one server at a time, and pings
 * can be lost on a loaded network. Since eviction has serious consequences,
 * and there's no urgent need to evict a client just because it's idle, we
 * should be very conservative here. */
#define PING_EVICT_TIMEOUT (PING_INTERVAL * 6)

From the log above and your setting obd_timeout = 100: PING_INTERVAL = max(100 / 4, 1) = 25, so PING_EVICT_TIMEOUT = 25 * 6 = 150 seconds. The "227 seconds" is the time between the last request received from this client and the ping evictor's check.
For example,

May 14 05:37:59 dc2oss15 kernel: : Lustre: dc2-OST009c: haven't heard from client ac9ef944-83f6-a453-c821-f0067101d2ca (at 149.165.229.28@tcp) in 227 seconds. I think it's dead, and I am evicting it. exp ffff880bb8ff2800, cur 1400060279 expire 1400060129 last 1400060052

PING_EVICT_TIMEOUT = cur - expire = 1400060279 - 1400060129 = 150 seconds
227 seconds = cur - last = 1400060279 - 1400060052

As for the other question about adaptive timeout and ldlm_timeout, I need to check the code and the document, and then give a reply.

Comment by Manish Patel (Inactive) [ 20/Jun/14 ]

Hi Emoly,

If that 227 seconds is related to obd_timeout, which setting needs to be tweaked to extend that limit to 300 seconds, and what are the options for increasing PING_EVICT_TIMEOUT to 300 seconds?

On the second question, let me know if you have any updates on ldlm_timeout and adaptive timeouts after looking at the code.

Thank you,
Manish

Comment by Emoly Liu [ 23/Jun/14 ]

Hi Manish,
According to the following code, obd_timeout is what is exposed as /proc/sys/lustre/timeout:

        {
                .ctl_name = OBD_TIMEOUT,
                .procname = "timeout",
                .data     = &obd_timeout,
                .maxlen   = sizeof(int),
                .mode     = 0644,
                .proc_handler = &proc_set_timeout
        },

So increasing /proc/sys/lustre/timeout should work to increase PING_EVICT_TIMEOUT.
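
For example, since PING_EVICT_TIMEOUT = (obd_timeout / 4) * 6 = 1.5 * obd_timeout, a 300-second eviction window means obd_timeout = 200. A sketch of the usual ways to set it (the persistent form assumes your fsname is "dc2", going by the log messages):

        # temporary, on each server (lost on reboot):
        echo 200 > /proc/sys/lustre/timeout
        # or equivalently:
        lctl set_param timeout=200

        # persistent, run once on the MGS:
        lctl conf_param dc2.sys.timeout=200

Note that obd_timeout also controls how often clients ping (every obd_timeout / 4 seconds), so it should be set consistently on servers and clients.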

About the second question: ldlm_timeout is a static timeout, i.e. the time a server waits for a client to reply to an initial lock cancellation request, and it should be smaller than obd_timeout:

        if (ldlm_timeout >= obd_timeout)
                ldlm_timeout = max(obd_timeout / 3, 1U);

So yes, ldlm_timeout is still in play even though you have adaptive timeouts enabled: it acts as a lower bound on the lock callback timeout. In the OST code:

static inline int prolong_timeout(struct ptlrpc_request *req)
{
        struct ptlrpc_service *svc = req->rq_rqbd->rqbd_service;

        /* With adaptive timeouts disabled, use a fixed fraction of
         * obd_timeout. */
        if (AT_OFF)
                return obd_timeout / 2;

        /* With adaptive timeouts enabled, use the adaptive service-time
         * estimate, but never less than ldlm_timeout. */
        return max(at_est2timeout(at_get(&svc->srv_at_estimate)), ldlm_timeout);
}
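
As a worked example with your values: with adaptive timeouts off, an OST would use obd_timeout / 2 = 50 seconds here; with adaptive timeouts on, it uses the adaptive service estimate (itself kept at or above at_min), but never less than ldlm_timeout = 20 seconds. So raising either at_min or ldlm_timeout raises this floor.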

In the mdt_init0() code:

        /* Reduce the initial timeout on an MDS because it doesn't need such
         * a long timeout as an OST does. Adaptive timeouts will adjust this
         * value appropriately. */
        if (ldlm_timeout == LDLM_TIMEOUT_DEFAULT)
                ldlm_timeout = MDS_LDLM_TIMEOUT_DEFAULT;

We can see in Cory's paper that Cray configures both the minimum adaptive timeout, at_min, and ldlm_timeout to 70 seconds to allow Lustre to “ride through” the re-route for the Gemini network. (In the code above, LDLM_TIMEOUT_DEFAULT is 20 seconds and MDS_LDLM_TIMEOUT_DEFAULT is 6 seconds, matching the 20s/6s values you observed.) And yes, you can set at_min to 60-70s to allow time for slow client responses.
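
A sketch of the corresponding settings (70 seconds, per the paper), applied on each OSS/MDS:

        echo 70 > /proc/sys/lustre/at_min
        echo 70 > /proc/sys/lustre/ldlm_timeout

Per the check in the code above, ldlm_timeout must stay below obd_timeout (100 in your case), or the server will silently reset it to obd_timeout / 3.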

Comment by Peter Jones [ 18/Jul/14 ]

Emoly

DDN have suggested that we include this material in the Lustre manual. Could you please create an LUDOC ticket to track that?

Thanks

Peter

Comment by Emoly Liu [ 21/Jul/14 ]

LUDOC-250 has been created to track the Lustre manual update.

Comment by Peter Jones [ 21/Jul/14 ]

Closing this ticket as the remaining doc work will be handled under LUDOC-250.
