Details

    • Type: Bug
    • Resolution: Duplicate
    • Priority: Critical
    • Affects Version/s: Lustre 2.7.0
    • 2.7.1-fe
    • Severity: 3

    Description

      The OSS deadlocked; we were unable to ping its Ethernet or IB interfaces, and the console showed no errors.
      Attaching a full trace of all threads. Most notable are the kiblnd threads:

      PID: 8711   TASK: ffff882020b12ab0  CPU: 4   COMMAND: "kiblnd_sd_01_01"
       #0 [ffff880060c86e90] crash_nmi_callback at ffffffff81032256
       #1 [ffff880060c86ea0] notifier_call_chain at ffffffff81568515
       #2 [ffff880060c86ee0] atomic_notifier_call_chain at ffffffff8156857a
       #3 [ffff880060c86ef0] notify_die at ffffffff810a44fe
       #4 [ffff880060c86f20] do_nmi at ffffffff8156618f
       #5 [ffff880060c86f50] nmi at ffffffff815659f0
          [exception RIP: _spin_lock+33]
          RIP: ffffffff81565261  RSP: ffff882021b75b70  RFLAGS: 00000293
          RAX: 0000000000002b8e  RBX: ffff880ffe7dd240  RCX: 0000000000000000
          RDX: 0000000000002b8b  RSI: 0000000000000003  RDI: ffff88201ee3f140
          RBP: ffff882021b75b70   R8: 6950000000000000   R9: 4a80000000000000
          R10: 0000000000000001  R11: 0000000000000001  R12: 0000000000000018
          R13: ffff881013262e40  R14: ffff8820268ecac0  R15: 0000000000000004
          ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
      --- <NMI exception stack> ---
       #6 [ffff882021b75b70] _spin_lock at ffffffff81565261
       #7 [ffff882021b75b78] cfs_percpt_lock at ffffffffa049edab [libcfs]
       #8 [ffff882021b75bb8] lnet_ptl_match_md at ffffffffa0529605 [lnet]
       #9 [ffff882021b75c38] lnet_parse_local at ffffffffa05306e7 [lnet]
      #10 [ffff882021b75cd8] lnet_parse at ffffffffa05316da [lnet]
      #11 [ffff882021b75d68] kiblnd_handle_rx at ffffffffa0a16f3b [ko2iblnd]
      #12 [ffff882021b75db8] kiblnd_scheduler at ffffffffa0a182be [ko2iblnd]
      #13 [ffff882021b75ee8] kthread at ffffffff8109dc8e
      #14 [ffff882021b75f48] kernel_thread at ffffffff8100c28a
      

          Activity

            [LU-8334] OSS lockup

            bfaccini Bruno Faccini (Inactive) added a comment -

            Oleg, in case it may help: I have analyzed the full stack listing originally provided more closely, and I am now convinced, as I already strongly suspected, that the "python" thread parsing /proc/slabinfo is the one blocking all the others.

            As per its stack content, which I have already pointed out, it should have taken (spin_lock()'ed) a kmem_cache->nodelists[]->list_lock (located at 0xffff88203fc00500) that is blocking several other threads, in particular two kiblnd threads (PIDs 8713/8715) that I found with the following similar stack:

            PID: 8713   TASK: ffff882027d55520  CPU: 9   COMMAND: "kiblnd_sd_02_01"
             #0 [ffff881078826e90] crash_nmi_callback at ffffffff81032256
             #1 [ffff881078826ea0] notifier_call_chain at ffffffff81568515
             #2 [ffff881078826ee0] atomic_notifier_call_chain at ffffffff8156857a
             #3 [ffff881078826ef0] notify_die at ffffffff810a44fe
             #4 [ffff881078826f20] do_nmi at ffffffff8156618f
             #5 [ffff881078826f50] nmi at ffffffff815659f0
                [exception RIP: _spin_lock+33]
                RIP: ffffffff81565261  RSP: ffff88201e8c5ba0  RFLAGS: 00000097
                RAX: 00000000000056fb  RBX: ffff881376245200  RCX: 0000000000000001
                RDX: 00000000000056f7  RSI: 000000000000005a  RDI: ffff88203fc00500
                RBP: ffff88201e8c5ba0   R8: 0000000000000000   R9: ffff881376245200
                R10: 0000000000000001  R11: 0000000000000000  R12: ffff8820294f8c00
                R13: 0000000000000286  R14: ffff88103fcb02c0  R15: ffff88203fc004c0
                ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
            --- <NMI exception stack> ---
             #6 [ffff88201e8c5ba0] _spin_lock at ffffffff81565261
             #7 [ffff88201e8c5ba8] kfree at ffffffff811704b9
             #8 [ffff88201e8c5c18] null_free_rs at ffffffffa08412b0 [ptlrpc]
             #9 [ffff88201e8c5c38] sptlrpc_svc_free_rs at ffffffffa082fded [ptlrpc]
            #10 [ffff88201e8c5c58] lustre_free_reply_state at ffffffffa0802f64 [ptlrpc]
            #11 [ffff88201e8c5c68] reply_out_callback at ffffffffa0809030 [ptlrpc]
            #12 [ffff88201e8c5c98] ptlrpc_master_callback at ffffffffa08085ca [ptlrpc]
            #13 [ffff88201e8c5ca8] lnet_eq_enqueue_event at ffffffffa0525b55 [lnet]
            #14 [ffff88201e8c5cc8] lnet_msg_detach_md at ffffffffa0523cef [lnet]
            #15 [ffff88201e8c5cf8] lnet_finalize at ffffffffa0524dd6 [lnet]
            #16 [ffff88201e8c5d68] kiblnd_tx_done at ffffffffa0a11fd7 [ko2iblnd]
            #17 [ffff88201e8c5db8] kiblnd_scheduler at ffffffffa0a18170 [ko2iblnd]
            #18 [ffff88201e8c5ee8] kthread at ffffffff8109dc8e
            #19 [ffff88201e8c5f48] kernel_thread at ffffffff8100c28a
            

            According to this stack's content, these two threads should have spin_lock()'ed (via lnet_res_lock()/cfs_percpt_lock(), before calling lnet_msg_detach_md()) the lock(s) that all the other threads spinning in cfs_percpt_lock() are waiting for.
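
            To make that lock concrete: cfs_percpt_lock() guards an array of per-CPU-partition spinlocks, and a caller takes either the lock of a single partition or, in exclusive mode, every partition lock in order. Below is a hedged userspace model of that scheme (plain C with POSIX spinlocks; the names, partition count and LOCK_EX convention are illustrative assumptions, not the actual libcfs code). It shows why one thread parked on a single partition's lock stalls both the other users of that partition and any exclusive locker.

            /*
             * Hedged userspace model of a per-CPU-partition lock, loosely
             * patterned after libcfs' cfs_percpt_lock(): one spinlock per
             * partition, plus an "exclusive" index that takes every partition
             * lock in order.  Names, NPARTS and LOCK_EX are assumptions.
             */
            #include <pthread.h>
            #include <stdio.h>

            #define NPARTS  4
            #define LOCK_EX (-1)

            struct percpt_lock {
                pthread_spinlock_t locks[NPARTS];
            };

            static void percpt_init(struct percpt_lock *pcl)
            {
                for (int i = 0; i < NPARTS; i++)
                    pthread_spin_init(&pcl->locks[i], PTHREAD_PROCESS_PRIVATE);
            }

            static void percpt_lock(struct percpt_lock *pcl, int index)
            {
                if (index == LOCK_EX) {
                    for (int i = 0; i < NPARTS; i++)   /* exclusive: all partitions */
                        pthread_spin_lock(&pcl->locks[i]);
                } else {
                    pthread_spin_lock(&pcl->locks[index]);
                }
            }

            static void percpt_unlock(struct percpt_lock *pcl, int index)
            {
                if (index == LOCK_EX) {
                    for (int i = NPARTS - 1; i >= 0; i--)
                        pthread_spin_unlock(&pcl->locks[i]);
                } else {
                    pthread_spin_unlock(&pcl->locks[index]);
                }
            }

            int main(void)
            {
                struct percpt_lock pcl;

                percpt_init(&pcl);

                /*
                 * A thread that holds partition 2 and then blocks elsewhere
                 * (e.g. spinning on an unrelated slab list_lock inside kfree())
                 * stalls every other locker of partition 2 and any LOCK_EX caller.
                 */
                percpt_lock(&pcl, 2);
                printf("partition 2 held: CPT-2 and LOCK_EX lockers would spin\n");
                percpt_unlock(&pcl, 2);
                return 0;
            }

            Build with "cc -pthread". Per the analysis above, the holders of the contended partition lock would be the two kiblnd threads stuck in kfree(), and the waiters all the threads shown spinning in cfs_percpt_lock().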

            Now the question is: why does s_show() take so long to release its kmem_cache->nodelists[]->list_lock? Could there be a loop in one of the associated slabs_full/slabs_partial lists?
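
            For illustration, the mechanics of that contention are simple: s_show() walks the cache's slabs_full/slabs_partial/slabs_free lists and counts their objects while holding the node's list_lock, and every kfree() returning memory to the same node has to take the same lock, so anything that keeps s_show() on those lists (a huge cache, or indeed a looped list) keeps the kiblnd threads spinning. The toy program below (userspace POSIX spinlocks, not kernel code; the list length and names are made up) reproduces the shape of that contention and prints how long a kfree()-like path spins while a slabinfo-style reader holds the lock.

            /*
             * Toy model of the suspected contention: a slabinfo-style reader
             * counts "objects" while holding a list_lock stand-in, and a
             * kfree()-like path measures how long it spins on the same lock.
             * Userspace sketch only; not the kernel's mm/slab.c.
             */
            #include <pthread.h>
            #include <stdio.h>
            #include <time.h>

            #define NSLABS (1 << 22)                    /* a deliberately long list */

            static pthread_spinlock_t list_lock;        /* stands in for l3->list_lock */
            static volatile unsigned long slabs[NSLABS];
            static volatile int reader_holds_lock;

            static void *slabinfo_reader(void *arg)
            {
                unsigned long active = 0;
                (void)arg;

                pthread_spin_lock(&list_lock);          /* s_show() takes the lock */
                reader_holds_lock = 1;
                for (long i = 0; i < NSLABS; i++)       /* walk and count objects */
                    active += slabs[i];
                pthread_spin_unlock(&list_lock);
                printf("reader done, counted %lu objects\n", active);
                return NULL;
            }

            static void *freer(void *arg)
            {
                struct timespec t0, t1;
                (void)arg;

                clock_gettime(CLOCK_MONOTONIC, &t0);
                pthread_spin_lock(&list_lock);          /* kfree() needs the same lock */
                pthread_spin_unlock(&list_lock);
                clock_gettime(CLOCK_MONOTONIC, &t1);
                printf("kfree-like path spun for %.3f ms\n",
                       (t1.tv_sec - t0.tv_sec) * 1e3 +
                       (t1.tv_nsec - t0.tv_nsec) / 1e6);
                return NULL;
            }

            int main(void)
            {
                pthread_t r, f;

                pthread_spin_init(&list_lock, PTHREAD_PROCESS_PRIVATE);
                pthread_create(&r, NULL, slabinfo_reader, NULL);
                while (!reader_holds_lock)              /* let the reader win the lock */
                    ;
                pthread_create(&f, NULL, freer, NULL);
                pthread_join(r, NULL);
                pthread_join(f, NULL);
                return 0;
            }

            On a healthy list the measured wait is a few milliseconds; a list that never terminates would turn it into exactly the NMI-visible spin seen in the traces above.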


            mhanafi Mahmoud Hanafi added a comment -

            Core file uploaded and an email was sent to Oleg.

            pjones Peter Jones added a comment -

            Oleg


            mhanafi Mahmoud Hanafi added a comment -

            I can upload the encrypted crash dump. Who should I send the password to?

            pjones Peter Jones added a comment -

            Mahmoud

            Bruno is not a US citizen, but if you are able to provide access to the crash dump, we can have it examined by someone who is.

            Peter


            mhanafi Mahmoud Hanafi added a comment -

            We do have a crash dump. I can upload it, but it can only be viewed by a US citizen. We have done this before.

            The python script runs every minute and saves several files from /proc/, one of which is /proc/slabinfo. We have been running it for some time on all our Lustre servers.
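
            For reference, here is a minimal sketch of what such a collector presumably does for /proc/slabinfo, written in C for consistency with the other examples (the actual script is Python and is not shown; the output file naming and the fixed one-minute cadence are assumptions). The relevant point is that each slabinfo line returned by the read is produced by an s_show() call that takes the corresponding cache's node list_lock while it counts objects.

            #include <stdio.h>
            #include <time.h>
            #include <unistd.h>

            int main(void)
            {
                char out[64], buf[8192];
                size_t n;

                for (;;) {
                    FILE *in = fopen("/proc/slabinfo", "r");
                    if (in == NULL)
                        return 1;

                    /* hypothetical output naming: slabinfo.<epoch seconds> */
                    snprintf(out, sizeof(out), "slabinfo.%ld", (long)time(NULL));
                    FILE *dst = fopen(out, "w");
                    if (dst == NULL) {
                        fclose(in);
                        return 1;
                    }

                    /* plain sequential read of the seq_file, as the script does */
                    while ((n = fread(buf, 1, sizeof(buf), in)) > 0)
                        fwrite(buf, 1, n, dst);

                    fclose(dst);
                    fclose(in);
                    sleep(60);      /* assumed one-minute interval */
                }
            }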


            bfaccini Bruno Faccini (Inactive) added a comment -

            Hello Mahmoud,
            I wonder if you have had the chance to get a crash dump from one of these occurrences?
            If not, do you have more details about the thread with the following stack?

            PID: 61350  TASK: ffff88145f9d1520  CPU: 14  COMMAND: "python"
             #0 [ffff8810788c6e90] crash_nmi_callback at ffffffff81032256
             #1 [ffff8810788c6ea0] notifier_call_chain at ffffffff81568515
             #2 [ffff8810788c6ee0] atomic_notifier_call_chain at ffffffff8156857a
             #3 [ffff8810788c6ef0] notify_die at ffffffff810a44fe
             #4 [ffff8810788c6f20] do_nmi at ffffffff8156618f
             #5 [ffff8810788c6f50] nmi at ffffffff815659f0
                [exception RIP: s_show+231]
                RIP: ffffffff8116ea67  RSP: ffff8817f2c83d98  RFLAGS: 00000086
                RAX: ffff88203fc004c0  RBX: 0000001b547b089e  RCX: ffff88203fc00500
                RDX: 0000000000000001  RSI: ffff880b7de4b800  RDI: ffff88203fc004d0
                RBP: ffff8817f2c83e18   R8: 0000000000000008   R9: 0000000000000008
                R10: ffffffff817dd61e  R11: 0000000000000004  R12: 000000daa3d7c8d1
                R13: ffffffff817dd61e  R14: 0000000000000801  R15: 0000000000000000
                ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
            --- <NMI exception stack> ---
             #6 [ffff8817f2c83d98] s_show at ffffffff8116ea67
             #7 [ffff8817f2c83e20] seq_read at ffffffff811ae5b9
             #8 [ffff8817f2c83ea0] proc_reg_read at ffffffff811f4dbe
             #9 [ffff8817f2c83ef0] vfs_read at ffffffff81188fe5
            #10 [ffff8817f2c83f30] sys_read at ffffffff81189121
            #11 [ffff8817f2c83f80] system_call_fastpath at ffffffff8100b0d2
                RIP: 00007fffec9de500  RSP: 00007fffffffe220  RFLAGS: 00010202
                RAX: 0000000000000000  RBX: ffffffff8100b0d2  RCX: 0000000000008050
                RDX: 0000000000002c00  RSI: 00000000007989f9  RDI: 0000000000000004
                RBP: 0000000000000000   R8: 00000000007979e4   R9: 00007fffedaee700
                R10: 3834202020202030  R11: 0000000000000246  R12: 00000000007989f9
                R13: 0000000000002feb  R14: 0000000000793780  R15: 0000000000002feb
                ORIG_RAX: 0000000000000000  CS: 0033  SS: 002b
            

            Could it be doing something similar to "slabinfo"?

            About your parallel with LU-8249, you may be right, but for that ticket we are also missing information about the exact state and possible cause of the deadlock, so it is difficult to be sure that the patch provided there would be the solution here as well.

            mhanafi Mahmoud Hanafi added a comment (edited) -

            One more stack trace. It looks like this could be a dup of LU-8249:

            PID: 18374  TASK: ffff880ffe79a040  CPU: 6   COMMAND: "ptlrpcd_01_00"
             #0 [ffff880060cc6e90] crash_nmi_callback at ffffffff81032256
             #1 [ffff880060cc6ea0] notifier_call_chain at ffffffff81568515
             #2 [ffff880060cc6ee0] atomic_notifier_call_chain at ffffffff8156857a
             #3 [ffff880060cc6ef0] notify_die at ffffffff810a44fe
             #4 [ffff880060cc6f20] do_nmi at ffffffff8156618f
             #5 [ffff880060cc6f50] nmi at ffffffff815659f0
                [exception RIP: _spin_lock+33]
                RIP: ffffffff81565261  RSP: ffff880f2be33ad0  RFLAGS: 00000287
                RAX: 0000000000002b92  RBX: ffff880ffe7dd240  RCX: 0000000000000000
                RDX: 0000000000002b8b  RSI: 0000000000000003  RDI: ffff88201ee3f140
                RBP: ffff880f2be33ad0   R8: ffff880bda1ea340   R9: 0000000000000000
                R10: 0000000000000047  R11: 00000000000000aa  R12: 0000000000000018
                R13: 0000000000000001  R14: 00057407d4831d60  R15: ffff882021aebb80
                ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
            --- <NMI exception stack> ---
             #6 [ffff880f2be33ad0] _spin_lock at ffffffff81565261
             #7 [ffff880f2be33ad8] cfs_percpt_lock at ffffffffa049edab [libcfs]
             #8 [ffff880f2be33b18] LNetMEAttach at ffffffffa0523466 [lnet]
             #9 [ffff880f2be33b78] ptl_send_rpc at ffffffffa07ffa65 [ptlrpc]
            #10 [ffff880f2be33c48] ptlrpc_send_new_req at ffffffffa07f39e3 [ptlrpc]
            #11 [ffff880f2be33cb8] ptlrpc_check_set at ffffffffa07f7cb0 [ptlrpc]
            #12 [ffff880f2be33d78] ptlrpcd_check at ffffffffa0826ad3 [ptlrpc]
            #13 [ffff880f2be33dd8] ptlrpcd at ffffffffa0826f82 [ptlrpc]
            #14 [ffff880f2be33ee8] kthread at ffffffff8109dc8e
            #15 [ffff880f2be33f48] kernel_thread at ffffffff8100c28a
            

            People

              Assignee: bfaccini Bruno Faccini (Inactive)
              Reporter: mhanafi Mahmoud Hanafi
              Votes: 0
              Watchers: 7
