[LU-8334] OSS lockup Created: 27/Jun/16  Updated: 14/Jun/18  Resolved: 22/Sep/16

Status: Resolved
Project: Lustre
Component/s: None
Affects Version/s: Lustre 2.7.0
Fix Version/s: None

Type: Bug Priority: Critical
Reporter: Mahmoud Hanafi Assignee: Bruno Faccini (Inactive)
Resolution: Duplicate Votes: 0
Labels: None
Environment:

2.7.1-fe


Attachments: File oss.bt.all    
Issue Links:
Related
is related to LU-7980 Overrun in generic <size-128> kmem_ca... Resolved
Severity: 3
Rank (Obsolete): 9223372036854775807

 Description   

OSS deadlocked; we were unable to ping its Ethernet or IB interfaces, and the console showed no errors.
Attaching a full trace of all threads. Most notable are the kiblnd threads:

PID: 8711   TASK: ffff882020b12ab0  CPU: 4   COMMAND: "kiblnd_sd_01_01"
 #0 [ffff880060c86e90] crash_nmi_callback at ffffffff81032256
 #1 [ffff880060c86ea0] notifier_call_chain at ffffffff81568515
 #2 [ffff880060c86ee0] atomic_notifier_call_chain at ffffffff8156857a
 #3 [ffff880060c86ef0] notify_die at ffffffff810a44fe
 #4 [ffff880060c86f20] do_nmi at ffffffff8156618f
 #5 [ffff880060c86f50] nmi at ffffffff815659f0
    [exception RIP: _spin_lock+33]
    RIP: ffffffff81565261  RSP: ffff882021b75b70  RFLAGS: 00000293
    RAX: 0000000000002b8e  RBX: ffff880ffe7dd240  RCX: 0000000000000000
    RDX: 0000000000002b8b  RSI: 0000000000000003  RDI: ffff88201ee3f140
    RBP: ffff882021b75b70   R8: 6950000000000000   R9: 4a80000000000000
    R10: 0000000000000001  R11: 0000000000000001  R12: 0000000000000018
    R13: ffff881013262e40  R14: ffff8820268ecac0  R15: 0000000000000004
    ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
--- <NMI exception stack> ---
 #6 [ffff882021b75b70] _spin_lock at ffffffff81565261
 #7 [ffff882021b75b78] cfs_percpt_lock at ffffffffa049edab [libcfs]
 #8 [ffff882021b75bb8] lnet_ptl_match_md at ffffffffa0529605 [lnet]
 #9 [ffff882021b75c38] lnet_parse_local at ffffffffa05306e7 [lnet]
#10 [ffff882021b75cd8] lnet_parse at ffffffffa05316da [lnet]
#11 [ffff882021b75d68] kiblnd_handle_rx at ffffffffa0a16f3b [ko2iblnd]
#12 [ffff882021b75db8] kiblnd_scheduler at ffffffffa0a182be [ko2iblnd]
#13 [ffff882021b75ee8] kthread at ffffffff8109dc8e
#14 [ffff882021b75f48] kernel_thread at ffffffff8100c28a


 Comments   
Comment by Mahmoud Hanafi [ 27/Jun/16 ]

One more stack trace. Looks like it could be a dup of LU-8249

PID: 18374  TASK: ffff880ffe79a040  CPU: 6   COMMAND: "ptlrpcd_01_00"
 #0 [ffff880060cc6e90] crash_nmi_callback at ffffffff81032256
 #1 [ffff880060cc6ea0] notifier_call_chain at ffffffff81568515
 #2 [ffff880060cc6ee0] atomic_notifier_call_chain at ffffffff8156857a
 #3 [ffff880060cc6ef0] notify_die at ffffffff810a44fe
 #4 [ffff880060cc6f20] do_nmi at ffffffff8156618f
 #5 [ffff880060cc6f50] nmi at ffffffff815659f0
    [exception RIP: _spin_lock+33]
    RIP: ffffffff81565261  RSP: ffff880f2be33ad0  RFLAGS: 00000287
    RAX: 0000000000002b92  RBX: ffff880ffe7dd240  RCX: 0000000000000000
    RDX: 0000000000002b8b  RSI: 0000000000000003  RDI: ffff88201ee3f140
    RBP: ffff880f2be33ad0   R8: ffff880bda1ea340   R9: 0000000000000000
    R10: 0000000000000047  R11: 00000000000000aa  R12: 0000000000000018
    R13: 0000000000000001  R14: 00057407d4831d60  R15: ffff882021aebb80
    ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
--- <NMI exception stack> ---
 #6 [ffff880f2be33ad0] _spin_lock at ffffffff81565261
 #7 [ffff880f2be33ad8] cfs_percpt_lock at ffffffffa049edab [libcfs]
 #8 [ffff880f2be33b18] LNetMEAttach at ffffffffa0523466 [lnet]
 #9 [ffff880f2be33b78] ptl_send_rpc at ffffffffa07ffa65 [ptlrpc]
#10 [ffff880f2be33c48] ptlrpc_send_new_req at ffffffffa07f39e3 [ptlrpc]
#11 [ffff880f2be33cb8] ptlrpc_check_set at ffffffffa07f7cb0 [ptlrpc]
#12 [ffff880f2be33d78] ptlrpcd_check at ffffffffa0826ad3 [ptlrpc]
#13 [ffff880f2be33dd8] ptlrpcd at ffffffffa0826f82 [ptlrpc]
#14 [ffff880f2be33ee8] kthread at ffffffff8109dc8e
#15 [ffff880f2be33f48] kernel_thread at ffffffff8100c28a
Comment by Bruno Faccini (Inactive) [ 28/Jun/16 ]

Hello Mahmoud,
I wonder if you had the chance to get a crash-dump from one of these occurrences?
If not, do you have more details about the thread with the following stack?

PID: 61350  TASK: ffff88145f9d1520  CPU: 14  COMMAND: "python"
 #0 [ffff8810788c6e90] crash_nmi_callback at ffffffff81032256
 #1 [ffff8810788c6ea0] notifier_call_chain at ffffffff81568515
 #2 [ffff8810788c6ee0] atomic_notifier_call_chain at ffffffff8156857a
 #3 [ffff8810788c6ef0] notify_die at ffffffff810a44fe
 #4 [ffff8810788c6f20] do_nmi at ffffffff8156618f
 #5 [ffff8810788c6f50] nmi at ffffffff815659f0
    [exception RIP: s_show+231]
    RIP: ffffffff8116ea67  RSP: ffff8817f2c83d98  RFLAGS: 00000086
    RAX: ffff88203fc004c0  RBX: 0000001b547b089e  RCX: ffff88203fc00500
    RDX: 0000000000000001  RSI: ffff880b7de4b800  RDI: ffff88203fc004d0
    RBP: ffff8817f2c83e18   R8: 0000000000000008   R9: 0000000000000008
    R10: ffffffff817dd61e  R11: 0000000000000004  R12: 000000daa3d7c8d1
    R13: ffffffff817dd61e  R14: 0000000000000801  R15: 0000000000000000
    ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
--- <NMI exception stack> ---
 #6 [ffff8817f2c83d98] s_show at ffffffff8116ea67
 #7 [ffff8817f2c83e20] seq_read at ffffffff811ae5b9
 #8 [ffff8817f2c83ea0] proc_reg_read at ffffffff811f4dbe
 #9 [ffff8817f2c83ef0] vfs_read at ffffffff81188fe5
#10 [ffff8817f2c83f30] sys_read at ffffffff81189121
#11 [ffff8817f2c83f80] system_call_fastpath at ffffffff8100b0d2
    RIP: 00007fffec9de500  RSP: 00007fffffffe220  RFLAGS: 00010202
    RAX: 0000000000000000  RBX: ffffffff8100b0d2  RCX: 0000000000008050
    RDX: 0000000000002c00  RSI: 00000000007989f9  RDI: 0000000000000004
    RBP: 0000000000000000   R8: 00000000007979e4   R9: 00007fffedaee700
    R10: 3834202020202030  R11: 0000000000000246  R12: 00000000007989f9
    R13: 0000000000002feb  R14: 0000000000793780  R15: 0000000000002feb
    ORIG_RAX: 0000000000000000  CS: 0033  SS: 002b

Could it be doing something similar to "slabinfo"?

Regarding your parallel with LU-8249, you may be right, but for this ticket we are also missing information about the exact state and possible cause of the deadlock, so it is difficult to be sure that the patch provided there would be the solution.

Comment by Mahmoud Hanafi [ 28/Jun/16 ]

We do have a crash dump. I can upload it, but it can only be viewed by a US citizen. We have done this before.

The python script runs every minute and saves several files from /proc/, one of which is /proc/slabinfo. We have been running this for some time on all our Lustre servers.

Comment by Peter Jones [ 28/Jun/16 ]

Mahmoud

Bruno is not a US citizen, but if you are able to provide access to the crash dump we can have it examined by someone who is.

Peter

Comment by Mahmoud Hanafi [ 28/Jun/16 ]

I can upload the encrypted crash dump. Who should I send the password to?

Comment by Peter Jones [ 28/Jun/16 ]

Oleg

Comment by Mahmoud Hanafi [ 29/Jun/16 ]

Core file uploaded and an email was sent to Oleg.

Comment by Bruno Faccini (Inactive) [ 01/Jul/16 ]

Oleg, in case it helps, I have analyzed the full stack listing originally provided in more detail, and I am now convinced that, as I already strongly suspected, this "python" thread parsing /proc/slabinfo is the one blocking all the others.

As per its stack content, which I already pointed out before, it should have taken/spin_lock()'ed a kmem_cache->nodelists[]->list_lock (located at 0xffff88203fc00500), and that is blocking several other threads, in particular 2 kiblnd threads (PIDs 8713/8715) that I found with the following similar stack:

PID: 8713   TASK: ffff882027d55520  CPU: 9   COMMAND: "kiblnd_sd_02_01"
 #0 [ffff881078826e90] crash_nmi_callback at ffffffff81032256
 #1 [ffff881078826ea0] notifier_call_chain at ffffffff81568515
 #2 [ffff881078826ee0] atomic_notifier_call_chain at ffffffff8156857a
 #3 [ffff881078826ef0] notify_die at ffffffff810a44fe
 #4 [ffff881078826f20] do_nmi at ffffffff8156618f
 #5 [ffff881078826f50] nmi at ffffffff815659f0
    [exception RIP: _spin_lock+33]
    RIP: ffffffff81565261  RSP: ffff88201e8c5ba0  RFLAGS: 00000097
    RAX: 00000000000056fb  RBX: ffff881376245200  RCX: 0000000000000001
    RDX: 00000000000056f7  RSI: 000000000000005a  RDI: ffff88203fc00500
    RBP: ffff88201e8c5ba0   R8: 0000000000000000   R9: ffff881376245200
    R10: 0000000000000001  R11: 0000000000000000  R12: ffff8820294f8c00
    R13: 0000000000000286  R14: ffff88103fcb02c0  R15: ffff88203fc004c0
    ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
--- <NMI exception stack> ---
 #6 [ffff88201e8c5ba0] _spin_lock at ffffffff81565261
 #7 [ffff88201e8c5ba8] kfree at ffffffff811704b9
 #8 [ffff88201e8c5c18] null_free_rs at ffffffffa08412b0 [ptlrpc]
 #9 [ffff88201e8c5c38] sptlrpc_svc_free_rs at ffffffffa082fded [ptlrpc]
#10 [ffff88201e8c5c58] lustre_free_reply_state at ffffffffa0802f64 [ptlrpc]
#11 [ffff88201e8c5c68] reply_out_callback at ffffffffa0809030 [ptlrpc]
#12 [ffff88201e8c5c98] ptlrpc_master_callback at ffffffffa08085ca [ptlrpc]
#13 [ffff88201e8c5ca8] lnet_eq_enqueue_event at ffffffffa0525b55 [lnet]
#14 [ffff88201e8c5cc8] lnet_msg_detach_md at ffffffffa0523cef [lnet]
#15 [ffff88201e8c5cf8] lnet_finalize at ffffffffa0524dd6 [lnet]
#16 [ffff88201e8c5d68] kiblnd_tx_done at ffffffffa0a11fd7 [ko2iblnd]
#17 [ffff88201e8c5db8] kiblnd_scheduler at ffffffffa0a18170 [ko2iblnd]
#18 [ffff88201e8c5ee8] kthread at ffffffff8109dc8e
#19 [ffff88201e8c5f48] kernel_thread at ffffffff8100c28a

According to this stack's content, these 2 threads should have spin_lock()'ed (via lnet_res_lock()/cfs_percpt_lock(), before calling lnet_msg_detach_md()) the lock(s) that all the other threads spinning in cfs_percpt_lock() are waiting for.
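
To illustrate the fan-out, here is a minimal user-space model of the per-CPU partition lock pattern that cfs_percpt_lock() provides; the names, the partition count and the lock-one/lock-all split are assumptions for this sketch, not the libcfs implementation. The point is only the failure mode seen above: once any partition lock is held and never released, every caller that needs the exclusive ("lock all") path, and every caller of that particular partition, spins behind it.

/*
 * Simplified model of a per-CPU-partition lock (hypothetical, user-space):
 * one spinlock per partition; callers lock either their own partition or,
 * for exclusive operations, all of them in a fixed order.
 */
#include <pthread.h>
#include <stdio.h>

#define NPART 4                                  /* pretend CPU partitions */
static pthread_spinlock_t part_lock[NPART];

/* lock one partition, or all of them when index < 0 (exclusive) */
static void percpt_lock(int index)
{
	if (index >= 0) {
		pthread_spin_lock(&part_lock[index]);
		return;
	}
	for (int i = 0; i < NPART; i++)          /* fixed order, no ABBA */
		pthread_spin_lock(&part_lock[i]);
}

static void percpt_unlock(int index)
{
	if (index >= 0) {
		pthread_spin_unlock(&part_lock[index]);
		return;
	}
	for (int i = NPART - 1; i >= 0; i--)
		pthread_spin_unlock(&part_lock[i]);
}

int main(void)
{
	for (int i = 0; i < NPART; i++)
		pthread_spin_init(&part_lock[i], PTHREAD_PROCESS_PRIVATE);

	percpt_lock(1);          /* a per-partition caller */
	percpt_unlock(1);

	percpt_lock(-1);         /* an exclusive caller must acquire every
				  * partition, so it blocks on whichever
				  * one is stuck */
	percpt_unlock(-1);

	puts("per-CPU partition lock model ran");
	return 0;
}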

Now the question is, why does s_show() take so long to release its kmem_cache->nodelists[]->list_lock? Could there be a loop in any of the associated slabs_full/slabs_partial lists?
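
As a self-contained sketch of why such a loop would produce exactly this hang (hypothetical structures and field names, not the RHEL6 mm/slab.c source): the /proc/slabinfo show path walks the per-node slabs_full/slabs_partial lists while holding that node's list_lock, so if a slab's list pointers no longer lead back to the list head, the walk never terminates and the lock is never released. The demo below builds such a corrupted list and bounds the walk instead of hanging.

#include <stdio.h>
#include <stddef.h>

struct list_head { struct list_head *next, *prev; };

struct fake_slab {                       /* stand-in for struct slab */
	struct list_head list;
	unsigned int inuse;
};

int main(void)
{
	struct list_head slabs_full = { &slabs_full, &slabs_full };
	struct fake_slab a, b, c;
	struct fake_slab *slabs[] = { &a, &b, &c };
	struct list_head *prev = &slabs_full;

	/* build a correct circular list: head -> a -> b -> c -> head */
	for (int i = 0; i < 3; i++) {
		slabs[i]->inuse = i;
		prev->next = &slabs[i]->list;
		slabs[i]->list.prev = prev;
		prev = &slabs[i]->list;
	}
	prev->next = &slabs_full;
	slabs_full.prev = prev;

	/* simulate the corruption: c's next no longer returns to the head */
	c.list.next = &b.list;

	/* s_show()-style walk; in the kernel this runs with list_lock held */
	unsigned long active = 0, steps = 0;
	for (struct list_head *p = slabs_full.next;
	     p != &slabs_full && steps < 10; p = p->next, steps++) {
		struct fake_slab *s = (struct fake_slab *)
			((char *)p - offsetof(struct fake_slab, list));
		active += s->inuse;
	}

	printf("walked %lu steps (active=%lu) without reaching the head; "
	       "a real s_show() would spin forever, still holding list_lock\n",
	       steps, active);
	return 0;
}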

Comment by Mahmoud Hanafi [ 05/Jul/16 ]

Could I get an update on the progress of this issue?

Thanks,

Comment by Oleg Drokin [ 05/Jul/16 ]

Bruno: it looks like you are on to something.

When I run kmem -s on the crash dump, I get these errors:

kmem: size-512: full list: slab: ffff8813e0ab36c0  bad prev pointer: ffff881ae71d4540
kmem: size-512: full list: slab: ffff8813e0ab36c0  bad inuse counter: 0
kmem: size-512: full list: slab: ffff8813e0ab36c0  bad s_mem pointer: ffff881ae71d4000

Indeed, ffff881ae71d4540 is an invalid kernel address that I cannot access. Same for ffff881ae71d4000.

Comment by Bruno Faccini (Inactive) [ 06/Jul/16 ]

Well, the slab at ffff8813e0ab36c0 may be corrupted and cause s_show() to loop... Also, for which kmem_cache has it been allocated?

Comment by Oleg Drokin [ 06/Jul/16 ]

it's "size-512"

Comment by Bruno Faccini (Inactive) [ 08/Jul/16 ]

Right, "crash/kmem -s" had already indicated this!
But then, what about the full content of this slab and even "kmem -S size-512" output ?

Comment by Oleg Drokin [ 08/Jul/16 ]
crash> p *(struct slab *)0xffff8813e0ab36c0
$2 = {
  list = {
    next = 0xffff8817213f0000, 
    prev = 0xffff881ae71d4540
  }, 
  colouroff = 0, 
  s_mem = 0xffff881ae71d4000, 
  inuse = 0, 
  free = 0, 
  nodeid = 1
}

kmem -S size-512 does not list the content of this particular slab because it is corrupted.
The s_mem location is also invalid, so I cannot peek inside.

Comment by Bruno Faccini (Inactive) [ 12/Jul/16 ]

And what about the next = 0xffff8817213f0000 pointer: is it pointing to a valid area? And if yes, of which type/family?
Also, does it lead to a loop that may be the cause of the looping/pseudo-hung execution of s_show()?
Last, does the corruption of the slab at 0xffff8813e0ab36c0 seem to come from an overrun from preceding locations?

Comment by Oleg Drokin [ 12/Jul/16 ]

Yes, the next pointer does point to a valid area:

crash> p *(struct slab *)0xffff8817213f0000
$4 = {
  list = {
    next = 0xffff880b7de4b800, 
    prev = 0xffff880b01a70800
  }, 
  colouroff = 1214182228249458060, 
  s_mem = 0x0, 
  inuse = 2690861296, 
  free = 4294967295, 
  nodeid = 0
}

I suspect it's the same type, but I am not sure how to check that easily.
I do not see a real loop in there with some light probing, but I see that there's an alternative valid path around this next node:

crash> p *(struct slab *)0xffff880b7de4b800
$5 = {
  list = {
    next = 0xffff880c60693800, 
    prev = 0xffff8817213f0000
  }, 
  colouroff = 1214182228245329292, 
  s_mem = 0x0, 
  inuse = 2690860912, 
  free = 4294967295, 
  nodeid = 0
}
crash> p *(struct slab *)0xffff880b01a70800
$6 = {
  list = {
    next = 0xffff8817213f0000, 
    prev = 0xffff88180be3e340
  }, 
  colouroff = 1214182229118793100, 
  s_mem = 0x0, 
  inuse = 2690860912, 
  free = 4294967295, 
  nodeid = 0
}
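
For what it is worth, the loop question can also be probed offline with a classic tortoise-and-hare walk over the recorded ->next values. The sketch below reuses the four addresses quoted in this ticket purely as opaque (address, next) pairs (they obviously cannot be dereferenced outside the crash dump); with only these nodes recorded it simply runs off the end, which matches the "no real loop with some light probing" observation.

#include <stdio.h>
#include <stdint.h>

struct node { uint64_t addr, next; };

/* ->list.next chain as dumped so far in this ticket */
static const struct node chain[] = {
	{ 0xffff8813e0ab36c0ULL, 0xffff8817213f0000ULL },
	{ 0xffff8817213f0000ULL, 0xffff880b7de4b800ULL },
	{ 0xffff880b7de4b800ULL, 0xffff880c60693800ULL },
	{ 0xffff880b01a70800ULL, 0xffff8817213f0000ULL },
};

static const struct node *lookup(uint64_t addr)
{
	for (int i = 0; i < (int)(sizeof(chain) / sizeof(chain[0])); i++)
		if (chain[i].addr == addr)
			return &chain[i];
	return NULL;                        /* not dumped yet */
}

int main(void)
{
	const struct node *slow = &chain[0], *fast = &chain[0];

	while (slow && fast) {
		slow = lookup(slow->next);              /* one step  */
		fast = lookup(fast->next);              /* two steps */
		fast = fast ? lookup(fast->next) : NULL;
		if (slow && fast && slow == fast) {
			printf("cycle detected at 0x%llx\n",
			       (unsigned long long)slow->addr);
			return 1;
		}
	}
	puts("chain runs off the recorded nodes before any cycle is found");
	return 0;
}
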
Comment by Bruno Faccini (Inactive) [ 01/Aug/16 ]

After working on the crash-dump for this ticket with Oleg's help, there is strong evidence (a similar slab corruption signature was seen during the LU-7980 investigation) to suggest that this is a duplicate of LU-7980, and it would make sense to use the fix from that ticket and see if it is sufficient.
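
As a rough user-space analogy of the LU-7980 mechanism (hypothetical structures, not the Lustre code path): slab objects sit back to back in the same page, so writing a few bytes past the end of one object lands in the leading fields of its neighbour, which is the kind of "bad prev pointer"/"bad s_mem pointer" signature kmem -s reported above.

#include <stdio.h>
#include <string.h>

struct fake_meta {                  /* neighbour's leading pointer fields */
	void *next, *prev, *s_mem;
};

struct page_like {                  /* two objects packed back to back */
	char buf[32];               /* the allocation that gets overrun */
	struct fake_meta neighbour;
};

int main(void)
{
	struct page_like page;

	memset(&page, 0, sizeof(page));
	page.neighbour.next  = &page;   /* originally valid pointers */
	page.neighbour.prev  = &page;
	page.neighbour.s_mem = &page;

	/* a 48-byte write into a 32-byte buffer: the overrun spills into
	 * the neighbour's next/prev, just as an overrun of one slab object
	 * can clobber the header of the object that follows it */
	memset(page.buf, 0xAE, sizeof(page.buf) + 16);

	printf("neighbour.next  = %p\n", page.neighbour.next);
	printf("neighbour.prev  = %p\n", page.neighbour.prev);
	printf("neighbour.s_mem = %p (a deeper overrun would reach this too)\n",
	       page.neighbour.s_mem);
	return 0;
}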

Comment by Mahmoud Hanafi [ 22/Sep/16 ]

Close this case; we will track LU-7980.
