[LU-11951] sanity: test_231a failure, idle disconnect & import generation disagreement Created: 08/Feb/19  Updated: 19/Dec/23  Resolved: 27/Feb/19

Status: Resolved
Project: Lustre
Component/s: None
Affects Version/s: Lustre 2.13.0
Fix Version/s: Lustre 2.13.0, Lustre 2.12.1

Type: Bug Priority: Major
Reporter: Patrick Farrell (Inactive) Assignee: Alex Zhuravlev
Resolution: Fixed Votes: 0
Labels: None

Issue Links:
Duplicate
is duplicated by LU-11976 req wrong generation leading to I/O e... Resolved
is duplicated by LU-11958 sanity test_244: FAIL: sendfile+group... Closed
Related
is related to LU-11964 Heavy load and soft lockups on MDS wi... Resolved
is related to LU-11559 sanity-sec test_21: req wrong generat... Resolved
Severity: 3
Rank (Obsolete): 9223372036854775807

 Description   

These observations are drawn from this test failure/logs (master + a few unrelated patches):

https://testing.whamcloud.com/test_sets/7ed37e66-2b3a-11e9-90fb-52540065bddc

sanity 231a.

 

dd fails for no apparent reason in the middle of a pretty innocuous test.

 

A disconnect/idle request is sent:

00000100:00100000:0.0:1549584426.013412:0:3658:0:(client.c:1631:ptlrpc_send_new_req()) Sending RPC pname:cluuid:pid:xid:nid:opc ptlrpcd_rcv:6a3d5a57-20e1-3c31-d0be-69db5e11ff93:3658:1624855787560768:10.9.5.221@tcp:9

 

It has not completed yet.

 

A lock request is made:

 00010000:00010000:1.0:1549584426.079672:0:25756:0:(ldlm_request.c:1019:ldlm_cli_enqueue()) ### sending request ns: lustre-OST0006-osc-ffff8853e612d000 lock: ffff8853c1acd200/0x600e7a448b87755c lrc: 3/0,1 mode: -/PW res: [0x75c0:0x0:0x0].0x0 rrc: 2 type: EXT [0->4194303] (req 0>4194303) flags: 0x0 nid: local remote: 0x0 expref: -99 pid: 25756 timeout: 0 lvb_type: 1
00000100:00080000:1.0:1549584426.079679:0:25756:0:(client.c:1575:ptlrpc_send_new_req()) @@@ req waiting for recovery: (FULL != CONNECTING)  req@ffff8853e63e3a80

 

 

The disconnect comes back:

 00000100:00080000:0.0:1549584426.491344:0:3658:0:(import.c:1677:ptlrpc_disconnect_idle_interpret()) @@@ inflight=2, refcount=6: rc = 0
  req@ffff8853e63e2400 x1624855787560768/t0(0) o9->lustre-OST0006-osc-ffff8853e612d000@10.9.5.221@tcp:28/4 lens 224/192 e 0 to 0 dl 1549584432 ref 1 fl Interpret:RN/0/0 rc 0/0

 

 

It notices another request is in flight and reconnects immediately:

 00000100:00080000:0.0:1549584426.491350:0:3658:0:(import.c:1693:ptlrpc_disconnect_idle_interpret()) ffff8853fbff1800 lustre-OST0006_UUID: changing import state from IDLE to NEW
00000100:00080000:0.0:1549584426.491361:0:3658:0:(import.c:683:ptlrpc_connect_import()) ffff8853fbff1800 lustre-OST0006_UUID: changing import state from NEW to CONNECTING

 

 

The delayed lock request is rejected due to the import generation mismatch:

 00000100:00080000:0.0:1549584426.491919:0:3658:0:(recover.c:223:ptlrpc_wake_delayed()) @@@ waking (set ffff88538df3fc80):  req@ffff8853e63e3a80 x1624855787560912/t0(0) o101->lustre-OST0006-osc-ffff8853e612d000@10.9.5.221@tcp:28/4 lens 328/400 e 0 to 0 dl 0 ref 2 fl Rpc:W/0/ffffffff rc 0/-1
00000100:00100000:0.0:1549584426.491929:0:3658:0:(client.c:2061:ptlrpc_check_set()) Completed RPC pname:cluuid:pid:xid:nid:opc ptlrpcd_rcv:6a3d5a57-20e1-3c31-d0be-69db5e11ff93:3658:1624855787560928:10.9.5.221@tcp:8
00000100:00020000:1.0:1549584426.491944:0:25756:0:(client.c:1193:ptlrpc_import_delay_req()) @@@ req wrong generation:  req@ffff8853e63e3a80 x1624855787560912/t0(0) o101->lustre-OST0006-osc-ffff8853e612d000@10.9.5.221@tcp:28/4 lens 328/400 e 0 to 0 dl 0 ref 2 fl Rpc:W/0/ffffffff rc 0/-1
00000100:00100000:1.0:1549584426.496060:0:25756:0:(client.c:2061:ptlrpc_check_set()) Completed RPC pname:cluuid:pid:xid:nid:opc dd:6a3d5a57-20e1-3c31-d0be-69db5e11ff93:25756:1624855787560912:10.9.5.221@tcp:101
00010000:00010000:1.0:1549584426.496066:0:25756:0:(ldlm_request.c:592:ldlm_cli_enqueue_fini()) ### client-side enqueue END (FAILED) ns: lustre-OST0006-osc-ffff8853e612d000 lock: ffff8853c1acd200/0x600e7a448b87755c lrc: 4/0,1 mode: -/PW res: [0x75c0:0x0:0x0].0x0 rrc: 2 type: EXT [0->4194303] (req 0>4194303) flags: 0x0 nid: local remote: 0x0 expref: -99 pid: 25756 timeout: 0 lvb_type: 1
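
The "req wrong generation" message above comes from ptlrpc_import_delay_req() in client.c; the relevant check is roughly of this form (simplified excerpt, surrounding cases elided):

        /* a queued request stamped with an older generation than the
         * import's current one is failed outright */
        if (req->rq_import_generation != imp->imp_generation) {
                DEBUG_REQ(D_ERROR, req, "req wrong generation:");
                *status = -EIO;
        }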

 

 

Here's the problem.

 

ptlrpc_request_alloc_internal uses IMP_IDLE as a way to tell if it should start a new connection:

        if (unlikely(imp->imp_state == LUSTRE_IMP_IDLE)) {

And increments the generation number as part of that:

        if (imp->imp_state == LUSTRE_IMP_IDLE) {
                imp->imp_generation++;
                imp->imp_initiated_at = imp->imp_generation;
                imp->imp_state = LUSTRE_IMP_NEW;
                connect = 1;

 

 

After this, the generation stored in the request is set to match that of the import.  So if this reconnect path is not taken, the request keeps the existing import generation.
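
For reference, that stamping happens at send time; roughly (simplified from ptlrpc_send_new_req() in client.c, the same path that logged "req waiting for recovery" above):

        spin_lock(&imp->imp_lock);
        req->rq_import_generation = imp->imp_generation;
        if (ptlrpc_import_delay_req(imp, req, &rc)) {
                /* import not FULL: park the request on the delayed list; it
                 * keeps the generation it was stamped with just above */
                list_add_tail(&req->rq_list, &imp->imp_delayed_list);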

 

However, the idle code (ptlrpc_disconnect_prep_req) sets the import state to connecting:

IMPORT_SET_STATE(imp, LUSTRE_IMP_CONNECTING);

 

Then it sends the request, and sets IMP_IDLE only once the import is fully idle, in ptlrpc_disconnect_idle_interpret:

        /* DISCONNECT reply can be late and another connection can just
         * be initiated. so we have to abort disconnection. */
        if (req->rq_import_generation == imp->imp_generation &&
            imp->imp_state != LUSTRE_IMP_CLOSED) {
                LASSERTF(imp->imp_state == LUSTRE_IMP_CONNECTING,
                         "%s\n", ptlrpc_import_state_name(imp->imp_state));
                imp->imp_state = LUSTRE_IMP_IDLE;

 

So ptlrpc_request_alloc_internal does not notice that the import is in the process of disconnecting, and therefore does not start the reconnect early, as intended.

 

But we increment the generation in disconnect_idle_interpret:

        if (atomic_read(&imp->imp_inflight) > 1) {
                imp->imp_generation++;
                imp->imp_initiated_at = imp->imp_generation;
                IMPORT_SET_STATE_NOLOCK(imp, LUSTRE_IMP_NEW);
                connect = 1;

I am not really sure of the correct handling of import generation overall.  My first instinct is that we should not increment it in this case, as we connect immediately...

But perhaps that's incorrect as well.  I'm hoping that Alex will take a look at this.
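
To make the ordering concrete, here is a minimal user-space model of the sequence above (illustration only, not Lustre code; the types, values and helper are invented):

#include <stdio.h>

enum imp_state { IMP_FULL, IMP_CONNECTING, IMP_IDLE, IMP_NEW };

struct import { enum imp_state state; int generation; };
struct request { int import_generation; };

/* stand-in for the "req wrong generation" check in ptlrpc_import_delay_req() */
static int delayed_req_status(const struct import *imp, const struct request *req)
{
        return req->import_generation != imp->generation ? -5 /* -EIO */ : 0;
}

int main(void)
{
        struct import imp = { .state = IMP_FULL, .generation = 1 };
        struct request lock_req;

        /* 1. idle disconnect is sent: FULL -> CONNECTING */
        imp.state = IMP_CONNECTING;

        /* 2. dd's lock request is sent: the state is not IDLE, so no
         *    generation bump happens; the request is stamped with the
         *    current generation and parked ("req waiting for recovery") */
        lock_req.import_generation = imp.generation;

        /* 3. the DISCONNECT reply finds inflight > 1: bump the generation
         *    and reconnect immediately (ptlrpc_disconnect_idle_interpret) */
        imp.generation++;
        imp.state = IMP_NEW;    /* then NEW -> CONNECTING -> FULL */

        /* 4. the delayed request is woken and re-checked: stale generation */
        printf("delayed req status %d (req gen %d vs import gen %d)\n",
               delayed_req_status(&imp, &lock_req),
               lock_req.import_generation, imp.generation);
        return 0;
}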



 Comments   
Comment by Patrick Farrell (Inactive) [ 08/Feb/19 ]

I note also:
The use of imp_lock for checking and setting the import state is very inconsistent.  In at least ptlrpc_request_alloc_internal, I think this could also cause a similar bug to the one I just noted.

Comment by James A Simmons [ 08/Feb/19 ]

I had this discussion some time ago with Oleg. There are many places where imp_lock is not taken. For the case of imp_generation I suggested we make it an atomic, since it appears to be the most accessed item. I have a patch already. Interested?

Comment by Alex Zhuravlev [ 09/Feb/19 ]

thanks for the analysis. checking the details..

Comment by Patrick Farrell (Inactive) [ 10/Feb/19 ]

James,

Making imp_gen an atomic won't work.  imp_gen is used to reflect that the import state has cycled, so changes of imp_gen and import state need to be atomic together.  So an atomic for import gen isn't enough; we need locking so we can change them together.  (And we have that locking, it's just not used everywhere.)
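
To illustrate that point with a user-space model (not Lustre code; the names below are invented stand-ins for imp_lock, imp_state and imp_generation): an atomic counter would still let a reader observe a new generation paired with a stale state, whereas one lock held across both updates keeps the pair consistent:

#include <pthread.h>

struct import_model {
        pthread_mutex_t lock;       /* stand-in for imp->imp_lock */
        int             state;      /* stand-in for imp->imp_state */
        int             generation; /* stand-in for imp->imp_generation */
};

/* state and generation only ever change together, under the same lock */
static void begin_new_connection(struct import_model *imp, int new_state)
{
        pthread_mutex_lock(&imp->lock);
        imp->generation++;
        imp->state = new_state;
        pthread_mutex_unlock(&imp->lock);
}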

Comment by Alex Zhuravlev [ 11/Feb/19 ]

I think I've got quite a reliable reproducer.

Comment by Gerrit Updater [ 11/Feb/19 ]

Alex Zhuravlev (bzzz@whamcloud.com) uploaded a new patch: https://review.whamcloud.com/34221
Subject: LU-11951 ptlrpc: reset generation for old requests
Project: fs/lustre-release
Branch: master
Current Patch Set: 1
Commit: d5de95f1439ff51411f4f40dea4d8f5bdc61ab14

Comment by Alex Zhuravlev [ 11/Feb/19 ]

the patch above should fix the issue, though I'm still thinking of a more elegant solution.
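
As a purely illustrative sketch of that direction (not the actual contents of change 34221): one way to avoid failing the already-queued requests is to refresh their stored generation when the idle path reconnects immediately, e.g. something like the following, with imp_lock assumed held and using the list names referenced above:

        /* hypothetical sketch: walk the requests parked while the idle
         * disconnect was in flight and re-stamp them with the new generation
         * so ptlrpc_import_delay_req() does not fail them with -EIO */
        list_for_each_entry(req, &imp->imp_delayed_list, rq_list)
                req->rq_import_generation = imp->imp_generation;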

Comment by Patrick Farrell (Inactive) [ 13/Feb/19 ]

Huh.

Same issue caused test_0 in sanity-pfl to fail: https://testing.whamcloud.com/test_sets/62cb3e5c-2e55-11e9-be61-52540065bddc

Some timing thing must have changed or something; this is hitting a bunch of tests.
(https://jira.whamcloud.com/browse/LU-11958 is this as well)

Comment by Gerrit Updater [ 27/Feb/19 ]

Oleg Drokin (green@whamcloud.com) merged in patch https://review.whamcloud.com/34221/
Subject: LU-11951 ptlrpc: reset generation for old requests
Project: fs/lustre-release
Branch: master
Current Patch Set:
Commit: 42d8cb04637b62d91deb9ef047f1d9ef8473eca6

Comment by Peter Jones [ 27/Feb/19 ]

Landed for 2.13

Comment by Gerrit Updater [ 16/Mar/19 ]

Andreas Dilger (adilger@whamcloud.com) uploaded a new patch: https://review.whamcloud.com/34435
Subject: LU-11951 ptlrpc: reset generation for old requests
Project: fs/lustre-release
Branch: b2_12
Current Patch Set: 1
Commit: 667d1bd495eadf210075673443780313efed864d

Comment by Gerrit Updater [ 15/Apr/19 ]

Oleg Drokin (green@whamcloud.com) merged in patch https://review.whamcloud.com/34435/
Subject: LU-11951 ptlrpc: reset generation for old requests
Project: fs/lustre-release
Branch: b2_12
Current Patch Set:
Commit: 78ddbc595302af9ec7b4512148990571e217fe64
