Lustre / LU-12120

LustreError: 15069:0:(tgt_grant.c:561:tgt_grant_incoming()) LBUG

Details

    • Type: Bug
    • Resolution: Fixed
    • Priority: Major
    • Fix Version/s: Lustre 2.13.0, Lustre 2.12.3
    • Affects Version/s: Lustre 2.10.5

    Description

      This is the first time we have seen this LBUG. We have a crash dump.

      [11168227.609396] Lustre: Skipped 4149 previous similar messages
      [11168546.559600] LustreError: 15069:0:(tgt_grant.c:559:tgt_grant_incoming()) nbp2-OST0121: cli 811dc111-6b22-b447-9144-bcd1173a571d/ffff881d57bff000 dirty -610055168 pend 0 grant -643806208
      [11168546.609562] LustreError: 15069:0:(tgt_grant.c:561:tgt_grant_incoming()) LBUG
      [11168546.631425] Pid: 15069, comm: ll_ost_io01_084 3.10.0-693.21.1.el7.20180508.x86_64.lustre2105 #1 SMP Mon Aug 27 23:04:41 UTC 2018
      [11168546.631428] Call Trace:
      [11168546.631439]  [<ffffffff8103a1f2>] save_stack_trace_tsk+0x22/0x40
      [11168546.650223]  [<ffffffffa08597cc>] libcfs_call_trace+0x8c/0xc0 [libcfs]
      [11168546.650228]  [<ffffffffa085987c>] lbug_with_loc+0x4c/0xa0 [libcfs]
      [11168546.650292]  [<ffffffffa0e8c270>] tgt_grant_prepare_read+0x0/0x3b0 [ptlrpc]
      [11168546.650325]  [<ffffffffa0e8c37b>] tgt_grant_prepare_read+0x10b/0x3b0 [ptlrpc]
      [11168546.650335]  [<ffffffffa0c86590>] ofd_preprw+0x450/0x1170 [ofd]
      [11168546.650365]  [<ffffffffa0e702a5>] tgt_brw_read+0x975/0x1860 [ptlrpc]
      [11168546.650394]  [<ffffffffa0e6deca>] tgt_request_handle+0x92a/0x1370 [ptlrpc]
      [11168546.650420]  [<ffffffffa0e164bb>] ptlrpc_server_handle_request+0x23b/0xaa0 [ptlrpc]
      [11168546.650445]  [<ffffffffa0e1a4a2>] ptlrpc_main+0xa92/0x1e40 [ptlrpc]
      [11168546.650449]  [<ffffffff810b1131>] kthread+0xd1/0xe0
      [11168546.650452]  [<ffffffff816a14f7>] ret_from_fork+0x77/0xb0
      [11168546.650468]  [<ffffffffffffffff>] 0xffffffffffffffff
      [11168546.650469] Kernel panic - not syncing: LBUG 
       

    Attachments

    Issue Links

    Activity

            gerrit Gerrit Updater added a comment -

            Minh Diep (mdiep@whamcloud.com) uploaded a new patch: https://review.whamcloud.com/35084
            Subject: LU-12120 grants: prevent negative ted_grant value
            Project: fs/lustre-release
            Branch: b2_12
            Current Patch Set: 1
            Commit: 82ad1cf4fd9298f95a8f233ee18feef363b10ebb
            pjones Peter Jones added a comment -

            Landed for 2.13


            gerrit Gerrit Updater added a comment -

            Oleg Drokin (green@whamcloud.com) merged in patch https://review.whamcloud.com/34996/
            Subject: LU-12120 grants: prevent negative ted_grant value
            Project: fs/lustre-release
            Branch: master
            Current Patch Set:
            Commit: 7e08317ef5cbed5cd587017cbe343eb4cc52822c

            gerrit Gerrit Updater added a comment -

            Mike Pershin (mpershin@whamcloud.com) uploaded a new patch: https://review.whamcloud.com/34996
            Subject: LU-12120 grants: prevent negative ted_grant value
            Project: fs/lustre-release
            Branch: master
            Current Patch Set: 1
            Commit: 296ac6b910880dc65c9cee8be2552bba759cea57

            bougetq Quentin Bouget (Inactive) added a comment -

            Ok, thanks Mikhail!

            From what I understand, clients and servers maintain separate views of grants, and this LU relates to (client, server) pairs whose views have drifted apart. At the next shrink request, that results in the client asking to give back more space than the server remembers granting.

            Does anyone know if there is a valid reason why, in tgt_grant_incoming():

            static void tgt_grant_incoming(...)
            {
                    ...
                    CDEBUG(D_CACHE, "%s: cli %s/%p reports grant %llu dropped %u, local %lu\n",
                           obd->obd_name, exp->exp_client_uuid.uuid, exp, oa->o_grant,
                           oa->o_dropped, ted->ted_grant);
                    ...
            }
            

            the value printed after "reports grant" would not be the same as the value printed after "local"?
            Can this be used to track the desynchronization?

            tappro Mikhail Pershin added a comment - edited

            Quentin, I agree with your analysis, and I also think that tgt_grant_shrink should not decrease ted_grant blindly but should do sanity checks there. I will prepare a patch today.
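
            A minimal stand-alone sketch of such a sanity check, as a model rather than the actual patch (the name ted_grant is borrowed from this ticket; struct export_grant_model and grant_shrink_clamped are made up for the sketch): the amount a shrink request may remove is clamped to what the server believes it granted, so the counter can never go negative.

            #include <stdio.h>

            /* Hypothetical model of the server-side per-export grant counter. */
            struct export_grant_model { long ted_grant; };

            /* Clamp the client-requested shrink to what the server actually granted. */
            static long grant_shrink_clamped(struct export_grant_model *ted,
                                             long requested_shrink)
            {
                    long shrink = requested_shrink;

                    if (shrink > ted->ted_grant)    /* client asks back more than we granted */
                            shrink = ted->ted_grant;
                    if (shrink < 0)
                            shrink = 0;

                    ted->ted_grant -= shrink;       /* never drops below zero */
                    return shrink;
            }

            int main(void)
            {
                    struct export_grant_model ted = { .ted_grant = 32 << 20 };

                    /* A desynchronized client asks to give back 64 MiB. */
                    long shrunk = grant_shrink_clamped(&ted, 64 << 20);

                    printf("shrunk %ld bytes, ted_grant now %ld\n", shrunk, ted.ted_grant);
                    return 0;
            }

            Clamping at the point where the counter is decremented keeps the invariant ted_grant >= 0 regardless of what a desynchronized client reports; the landed patches linked in this ticket may implement the check differently, so treat this only as an illustration of the idea.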


            bougetq Quentin Bouget (Inactive) added a comment - edited

            I am fairly certain that the code that generates the negative value is in tgt_grant_shrink().

            @Stephane, as a temporary fix, you can try deactivating grant shrinking on clients with:

            lctl set_param osc.*.grant_shrink=0
            

            I do not know how much of an impact this has on OSSes, but considering the feature was disabled by default until July 2018 (cf. LU-8708), I assume it should be bearable.

            pjones Peter Jones added a comment -

            Note that some comments that may assist in the analysis have been added to the resolved unassigned ticket LU-12322


            sthiell Stephane Thiell added a comment -

            Patrick, I think this may actually be due to something else, likely an OSS deadlock. Please see LU-12215.


            pfarrell Patrick Farrell (Inactive) added a comment - edited

            Stephane,

            If true, that would seem to put you in exactly the same situation as NASA, who are also hitting that LBUG because of the LU-11644 evictions.

            By the way, I meant to close this bug, was just waiting on a reply from Mahmoud - the LBUG is tracked under LU-11939 if you want to follow that.


            sthiell Stephane Thiell added a comment -

            We're actually investigating whether this could be caused by LU-11644 reported by NASA, as I noticed a few similar call traces:

            Apr 22 10:48:49 fir-io3-s2 kernel: LustreError: 78496:0:(ldlm_lockd.c:682:ldlm_handle_ast_error()) ### client (nid 10.9.101.28@o2ib4) failed to reply to blocking AST (req@ffff981d18ba8c00 x1631526698598368 status 0 rc -110), evict it ns: filter-fir-OST0023_UUID lock: ffff98016d218b40/0xd80148300515f99f lrc: 4/0,0 mo
            Apr 22 10:48:49 fir-io3-s2 kernel: LustreError: 138-a: fir-OST001b: A client on nid 10.9.101.28@o2ib4 was evicted due to a lock blocking callback time out: rc -110
            Apr 22 10:48:49 fir-io3-s2 kernel: LustreError: 77026:0:(ldlm_lockd.c:256:expired_lock_main()) ### lock callback timer expired after 105s: evicting client at 10.9.101.28@o2ib4  ns: filter-fir-OST001b_UUID lock: ffff97f619e7af40/0xd80148300516127a lrc: 3/0,0 mode: PR/PR res: [0xd00000401:0x91e8cf:0x0].0x0 rrc: 3 type
            Apr 22 10:48:49 fir-io3-s2 kernel: LustreError: 78496:0:(ldlm_lockd.c:682:ldlm_handle_ast_error()) Skipped 1 previous similar message
            Apr 22 10:49:49 fir-io3-s2 kernel: LNet: Service thread pid 77058 was inactive for 200.48s. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes:
            Apr 22 10:49:49 fir-io3-s2 kernel: Pid: 77058, comm: ll_ost00_000 3.10.0-957.1.3.el7_lustre.x86_64 #1 SMP Fri Dec 7 14:50:35 PST 2018
            Apr 22 10:49:49 fir-io3-s2 kernel: Call Trace:
            Apr 22 10:49:49 fir-io3-s2 kernel:  [<ffffffffc0f9b640>] ptlrpc_set_wait+0x500/0x8d0 [ptlrpc]
            Apr 22 10:49:49 fir-io3-s2 kernel:  [<ffffffffc0f58fe5>] ldlm_run_ast_work+0xd5/0x3a0 [ptlrpc]
            Apr 22 10:49:49 fir-io3-s2 kernel:  [<ffffffffc0f7a69b>] ldlm_glimpse_locks+0x3b/0x100 [ptlrpc]
            Apr 22 10:49:49 fir-io3-s2 kernel:  [<ffffffffc162910b>] ofd_intent_policy+0x69b/0x920 [ofd]
            Apr 22 10:49:49 fir-io3-s2 kernel:  [<ffffffffc0f59d26>] ldlm_lock_enqueue+0x366/0xa60 [ptlrpc]
            Apr 22 10:49:49 fir-io3-s2 kernel:  [<ffffffffc0f826d7>] ldlm_handle_enqueue0+0xa47/0x15a0 [ptlrpc]
            Apr 22 10:49:49 fir-io3-s2 kernel:  [<ffffffffc10090b2>] tgt_enqueue+0x62/0x210 [ptlrpc]
            Apr 22 10:49:49 fir-io3-s2 kernel:  [<ffffffffc101010a>] tgt_request_handle+0xaea/0x1580 [ptlrpc]
            Apr 22 10:49:49 fir-io3-s2 kernel:  [<ffffffffc0fb46db>] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc]
            Apr 22 10:49:49 fir-io3-s2 kernel:  [<ffffffffc0fb800c>] ptlrpc_main+0xafc/0x1fc0 [ptlrpc]
            Apr 22 10:49:49 fir-io3-s2 kernel:  [<ffffffff8b8c1c31>] kthread+0xd1/0xe0
            Apr 22 10:49:49 fir-io3-s2 kernel:  [<ffffffff8bf74c24>] ret_from_fork_nospec_begin+0xe/0x21
            Apr 22 10:49:49 fir-io3-s2 kernel:  [<ffffffffffffffff>] 0xffffffffffffffff
            Apr 22 10:49:49 fir-io3-s2 kernel: LustreError: dumping log to /tmp/lustre-log.1555955389.77058
            Apr 22 10:49:49 fir-io3-s2 kernel: Pid: 78291, comm: ll_ost00_019 3.10.0-957.1.3.el7_lustre.x86_64 #1 SMP Fri Dec 7 14:50:35 PST 2018
            Apr 22 10:49:49 fir-io3-s2 kernel: Call Trace:
            Apr 22 10:49:49 fir-io3-s2 kernel:  [<ffffffffc0f9b640>] ptlrpc_set_wait+0x500/0x8d0 [ptlrpc]
            Apr 22 10:49:49 fir-io3-s2 kernel:  [<ffffffffc0f58fe5>] ldlm_run_ast_work+0xd5/0x3a0 [ptlrpc]
            Apr 22 10:49:49 fir-io3-s2 kernel:  [<ffffffffc0f7a69b>] ldlm_glimpse_locks+0x3b/0x100 [ptlrpc]
            Apr 22 10:49:49 fir-io3-s2 kernel:  [<ffffffffc162910b>] ofd_intent_policy+0x69b/0x920 [ofd]
            Apr 22 10:49:49 fir-io3-s2 kernel:  [<ffffffffc0f59d26>] ldlm_lock_enqueue+0x366/0xa60 [ptlrpc]
            Apr 22 10:49:49 fir-io3-s2 kernel:  [<ffffffffc0f826d7>] ldlm_handle_enqueue0+0xa47/0x15a0 [ptlrpc]
            Apr 22 10:49:49 fir-io3-s2 kernel:  [<ffffffffc10090b2>] tgt_enqueue+0x62/0x210 [ptlrpc]
            Apr 22 10:49:49 fir-io3-s2 kernel:  [<ffffffffc101010a>] tgt_request_handle+0xaea/0x1580 [ptlrpc]
            Apr 22 10:49:49 fir-io3-s2 kernel:  [<ffffffffc0fb46db>] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc]
            Apr 22 10:49:49 fir-io3-s2 kernel:  [<ffffffffc0fb800c>] ptlrpc_main+0xafc/0x1fc0 [ptlrpc]
            Apr 22 10:49:49 fir-io3-s2 kernel:  [<ffffffff8b8c1c31>] kthread+0xd1/0xe0
            Apr 22 10:49:49 fir-io3-s2 kernel:  [<ffffffff8bf74c24>] ret_from_fork_nospec_begin+0xe/0x21
            Apr 22 10:49:49 fir-io3-s2 kernel:  [<ffffffffffffffff>] 0xffffffffffffffff
            Apr 22 10:49:49 fir-io3-s2 kernel: Lustre: fir-OST0019: Connection restored to 708f469b-f504-7275-d236-e90aa8bcec67 (at 10.9.0.64@o2ib4)
            Apr 22 10:49:49 fir-io3-s2 kernel: Lustre: Skipped 24 previous similar messages
            Apr 22 10:49:55 fir-io3-s2 kernel: LNet: Service thread pid 78318 was inactive for 200.37s. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes:
            Apr 22 10:49:55 fir-io3-s2 kernel: LNet: Skipped 1 previous similar message
            Apr 22 10:49:55 fir-io3-s2 kernel: Pid: 78318, comm: ll_ost02_032 3.10.0-957.1.3.el7_lustre.x86_64 #1 SMP Fri Dec 7 14:50:35 PST 2018
            Apr 22 10:49:55 fir-io3-s2 kernel: Call Trace:
            Apr 22 10:49:55 fir-io3-s2 kernel:  [<ffffffffc0f9b640>] ptlrpc_set_wait+0x500/0x8d0 [ptlrpc]
            Apr 22 10:49:55 fir-io3-s2 kernel:  [<ffffffffc0f58fe5>] ldlm_run_ast_work+0xd5/0x3a0 [ptlrpc]
            Apr 22 10:49:55 fir-io3-s2 kernel:  [<ffffffffc0f7a69b>] ldlm_glimpse_locks+0x3b/0x100 [ptlrpc]
            Apr 22 10:49:55 fir-io3-s2 kernel:  [<ffffffffc162910b>] ofd_intent_policy+0x69b/0x920 [ofd]
            Apr 22 10:49:55 fir-io3-s2 kernel:  [<ffffffffc0f59d26>] ldlm_lock_enqueue+0x366/0xa60 [ptlrpc]
            Apr 22 10:49:55 fir-io3-s2 kernel:  [<ffffffffc0f826d7>] ldlm_handle_enqueue0+0xa47/0x15a0 [ptlrpc]
            Apr 22 10:49:55 fir-io3-s2 kernel:  [<ffffffffc10090b2>] tgt_enqueue+0x62/0x210 [ptlrpc]
            Apr 22 10:49:55 fir-io3-s2 kernel:  [<ffffffffc101010a>] tgt_request_handle+0xaea/0x1580 [ptlrpc]
            Apr 22 10:49:55 fir-io3-s2 kernel:  [<ffffffffc0fb46db>] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc]
            Apr 22 10:49:55 fir-io3-s2 kernel:  [<ffffffffc0fb800c>] ptlrpc_main+0xafc/0x1fc0 [ptlrpc]
            Apr 22 10:49:55 fir-io3-s2 kernel:  [<ffffffff8b8c1c31>] kthread+0xd1/0xe0
            Apr 22 10:49:55 fir-io3-s2 kernel:  [<ffffffff8bf74c24>] ret_from_fork_nospec_begin+0xe/0x21
            Apr 22 10:49:55 fir-io3-s2 kernel:  [<ffffffffffffffff>] 0xffffffffffffffff
            Apr 22 10:49:55 fir-io3-s2 kernel: LustreError: dumping log to /tmp/lustre-log.1555955395.78318
            Apr 22 10:49:55 fir-io3-s2 kernel: Pid: 78868, comm: ll_ost03_105 3.10.0-957.1.3.el7_lustre.x86_64 #1 SMP Fri Dec 7 14:50:35 PST 2018
            Apr 22 10:49:55 fir-io3-s2 kernel: Call Trace:
            Apr 22 10:49:55 fir-io3-s2 kernel:  [<ffffffffc0f9b640>] ptlrpc_set_wait+0x500/0x8d0 [ptlrpc]
            Apr 22 10:49:55 fir-io3-s2 kernel:  [<ffffffffc0f58fe5>] ldlm_run_ast_work+0xd5/0x3a0 [ptlrpc]
            Apr 22 10:49:55 fir-io3-s2 kernel:  [<ffffffffc0f7a69b>] ldlm_glimpse_locks+0x3b/0x100 [ptlrpc]
            Apr 22 10:49:55 fir-io3-s2 kernel:  [<ffffffffc162910b>] ofd_intent_policy+0x69b/0x920 [ofd]
            Apr 22 10:49:55 fir-io3-s2 kernel:  [<ffffffffc0f59d26>] ldlm_lock_enqueue+0x366/0xa60 [ptlrpc]
            Apr 22 10:49:55 fir-io3-s2 kernel:  [<ffffffffc0f826d7>] ldlm_handle_enqueue0+0xa47/0x15a0 [ptlrpc]
            Apr 22 10:49:55 fir-io3-s2 kernel:  [<ffffffffc10090b2>] tgt_enqueue+0x62/0x210 [ptlrpc]
            Apr 22 10:49:55 fir-io3-s2 kernel:  [<ffffffffc101010a>] tgt_request_handle+0xaea/0x1580 [ptlrpc]
            Apr 22 10:49:55 fir-io3-s2 kernel:  [<ffffffffc0fb46db>] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc]
            Apr 22 10:49:56 fir-io3-s2 kernel:  [<ffffffffc0fb800c>] ptlrpc_main+0xafc/0x1fc0 [ptlrpc]
            Apr 22 10:49:56 fir-io3-s2 kernel:  [<ffffffff8b8c1c31>] kthread+0xd1/0xe0
            Apr 22 10:49:56 fir-io3-s2 kernel:  [<ffffffff8bf74c24>] ret_from_fork_nospec_begin+0xe/0x21
            Apr 22 10:49:56 fir-io3-s2 kernel:  [<ffffffffffffffff>] 0xffffffffffffffff
            Apr 22 10:50:12 fir-io3-s2 kernel: Lustre: fir-OST0021: haven't heard from client 83316b44-aad9-bbe2-1580-9c149ecd135e (at 10.9.101.28@o2ib4) in 227 seconds. I think it's dead, and I am evicting it. exp ffff97fd659af000, cur 1555955412 expire 1555955262 last 1555955185
            Apr 22 10:50:12 fir-io3-s2 kernel: Lustre: Skipped 19 previous similar messages
            Apr 22 10:50:12 fir-io3-s2 kernel: LNet: Service thread pid 77058 completed after 223.36s. This indicates the system was overloaded (too many service threads, or there were not enough hardware resources).
            Apr 22 10:50:12 fir-io3-s2 kernel: LustreError: 78624:0:(client.c:1175:ptlrpc_import_delay_req()) @@@ IMP_CLOSED   req@ffff981b3d9bad00 x1631526704122896/t0(0) o104->fir-OST0021@10.9.101.28@o2ib4:15/16 lens 296/224 e 0 to 0 dl 0 ref 1 fl Rpc:/0/ffffffff rc 0/-1
            

            I have enabled +lnet debug on all servers to see how it goes.


            People

              Assignee: pfarrell Patrick Farrell (Inactive)
              Reporter: mhanafi Mahmoud Hanafi
              Votes: 0
              Watchers: 14
