|
Hello,
At TGCC, running Lustre 2.1.0, we have encountered multiple occurrences where an OSS became so loaded that many clients were complaining about it and we were unable to log on to it.
A crash-dump was forced, and its analysis always shows the same situation: almost all CPUs/cores are stuck running kiblnd_sd_<id> and/or ib_cm/<id> threads, all spinning on the "kiblnd_data.kib_global_lock" RW-spinlock trying to write-lock it, with kernel stacks/back-traces like the following:
=====================================================================
PID: 10493 TASK: ffff88087a4ae790 CPU: 6 COMMAND: "kiblnd_sd_23"
#0 [ffff88088e447e90] crash_nmi_callback at ffffffff8101fd06
#1 [ffff88088e447ea0] notifier_call_chain at ffffffff814837f5
#2 [ffff88088e447ee0] atomic_notifier_call_chain at ffffffff8148385a
#3 [ffff88088e447ef0] notify_die at ffffffff8108026e
#4 [ffff88088e447f20] do_nmi at ffffffff81481443
#5 [ffff88088e447f50] nmi at ffffffff81480d50
[exception RIP: __write_lock_failed+9]
RIP: ffffffff81264919 RSP: ffff880874453d68 RFLAGS: 00000087
RAX: 0000000000000246 RBX: ffff880f5a797ad8 RCX: 0000000000000078
RDX: 0000000000000246 RSI: 00000000000000d1 RDI: ffffffffa0572ecc
RBP: ffff880874453d70 R8: ffff880d9a408000 R9: 0000000000000012
R10: dead000000100100 R11: 0000000000000001 R12: ffff880ec7518b80
R13: ffff88087cd4a8c0 R14: 0000000000000000 R15: ffff88087d31ee00
ORIG_RAX: ffffffffffffffff CS: 0010 SS: 0018
--- <NMI exception stack> ---
#6 [ffff880874453d68] __write_lock_failed at ffffffff81264919
#7 [ffff880874453d68] _write_lock_irqsave at ffffffff814805e8
#8 [ffff880874453d78] kiblnd_rx_complete at ffffffffa056682d [ko2iblnd]
#9 [ffff880874453df8] kiblnd_complete at ffffffffa05669f2 [ko2iblnd]
#10 [ffff880874453e38] kiblnd_scheduler at ffffffffa0566d94 [ko2iblnd]
#11 [ffff880874453f48] kernel_thread at ffffffff810041aa
PID: 9312 TASK: ffff88087cd5c080 CPU: 25 COMMAND: "ib_cm/25"
#0 [ffff88048e587e90] crash_nmi_callback at ffffffff8101fd06
#1 [ffff88048e587ea0] notifier_call_chain at ffffffff814837f5
#2 [ffff88048e587ee0] atomic_notifier_call_chain at ffffffff8148385a
#3 [ffff88048e587ef0] notify_die at ffffffff8108026e
#4 [ffff88048e587f20] do_nmi at ffffffff81481443
#5 [ffff88048e587f50] nmi at ffffffff81480d50
[exception RIP: __write_lock_failed+9]
RIP: ffffffff81264919 RSP: ffff88087a033c58 RFLAGS: 00000087
RAX: 0000000000000282 RBX: ffff8810681b41c0 RCX: 0000000000000002
RDX: 0000000000000282 RSI: 0000000000000000 RDI: ffffffffa0572ecc
RBP: ffff88087a033c60 R8: 0000000000000000 R9: 0000000000000000
R10: 0000000000000000 R11: 0000000000000001 R12: ffff880f2291a000
R13: 0000000000000000 R14: ffff8807a4210200 R15: ffffe8f7ff997988
ORIG_RAX: ffffffffffffffff CS: 0010 SS: 0018
--- <NMI exception stack> ---
#6 [ffff88087a033c58] __write_lock_failed at ffffffff81264919
#7 [ffff88087a033c58] _write_lock_irqsave at ffffffff814805e8
#8 [ffff88087a033c68] kiblnd_close_conn at ffffffffa055fa4b [ko2iblnd]
#9 [ffff88087a033c98] kiblnd_cm_callback at ffffffffa056a910 [ko2iblnd]
#10 [ffff88087a033d08] cma_ib_handler at ffffffffa03686e9 [rdma_cm]
#11 [ffff88087a033d88] cm_process_work at ffffffffa0346e67 [ib_cm]
#12 [ffff88087a033dd8] cm_work_handler at ffffffffa03493c9 [ib_cm]
#13 [ffff88087a033e38] worker_thread at ffffffff810749a0
#14 [ffff88087a033ee8] kthread at ffffffff81079f36
#15 [ffff88087a033f48] kernel_thread at ffffffff810041aa
=====================================================================
and this because the "kiblnd_data.kib_global_lock" read-locker is awaiting to be scheduled on its CPU/Core sinc a very long time with the following Kernel-stack :
=====================================================================
PID: 10494 TASK: ffff88087a4ae040 CPU: 19 COMMAND: "kiblnd_connd"
#0 [ffff88083b577c10] schedule at ffffffff8147dddc
#1 [ffff88083b577cd8] cfs_schedule at ffffffffa03fd74e [libcfs]
#2 [ffff88083b577ce8] kiblnd_pool_alloc_node at ffffffffa05569b3 [ko2iblnd]
#3 [ffff88083b577d58] kiblnd_get_idle_tx at ffffffffa056063d [ko2iblnd]
#4 [ffff88083b577d88] kiblnd_check_sends at ffffffffa0562ed6 [ko2iblnd]
#5 [ffff88083b577e18] kiblnd_check_conns at ffffffffa0563123 [ko2iblnd]
#6 [ffff88083b577e98] kiblnd_connd at ffffffffa05664f7 [ko2iblnd]
#7 [ffff88083b577f48] kernel_thread at ffffffff810041aa
=====================================================================
and this is because the CPU/core it has been scheduled on is starved by one of its write-lock competitors.
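To make the failure mode explicit, here is a minimal, purely illustrative C sketch of the pattern (generic names, not the actual ko2iblnd code): a thread yields the CPU while still holding the read side of an IRQ-safe RW-spinlock, while the writers spin for it with interrupts disabled on every other core, so the reader can never run again to release the lock:
=====================================================================
#include <linux/spinlock.h>
#include <linux/sched.h>

static DEFINE_RWLOCK(global_lock);      /* stands in for kib_global_lock */

static void reader_path(void)
{
        read_lock(&global_lock);
        /* ... needs a resource, none is free, so it yields ... */
        schedule();     /* BUG: gives up the CPU with the read lock held */
        read_unlock(&global_lock);
}

static void writer_path(void)
{
        unsigned long flags;

        /* Every other CPU ends up here, spinning with IRQs off, so the
         * scheduler can never put reader_path()'s thread back on a CPU
         * to release the lock: a system-wide deadlock. */
        write_lock_irqsave(&global_lock, flags);
        /* ... */
        write_unlock_irqrestore(&global_lock, flags);
}
=====================================================================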
This scenario definitely seems to be a specific case/consequence of the bug/limitation described in LU-78, but since it now happens in real life on a customer site running a production workload, we would like its priority to be raised.
Do you agree with both my analysis and request?
|
|
Possible ways to fix this problem could be to:
_ avoid re-schedule()/yielding the CPU in kiblnd_pool_alloc_node(), and actively wait/spin instead;
_ change the kiblnd_data.kib_global_lock RW-spinlock to a semaphore (see the sketch below).
What do you think?
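For the second option, a rough sketch of what the conversion could look like, assuming none of the lock users runs in IRQ/atomic context (which is exactly what would have to be audited first; all names below are illustrative, not the real code):
=====================================================================
/* Hypothetical conversion sketch: replace the RW-spinlock with an
 * rw_semaphore so that readers may legally sleep while holding it.
 * Only valid if no lock user runs in atomic/IRQ context. */
#include <linux/rwsem.h>

static DECLARE_RWSEM(kib_global_sem);   /* would replace kib_global_lock */

static void reader_side(void)
{
        down_read(&kib_global_sem);
        /* ... may now block/schedule safely, e.g. to grow a tx pool ... */
        up_read(&kib_global_sem);
}

static void writer_side(void)
{
        down_write(&kib_global_sem);    /* sleeps instead of spinning */
        /* ... modify peer/connection tables ... */
        up_write(&kib_global_sem);
}
=====================================================================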
|
|
I've posted a patch here:
http://review.whamcloud.com/#change,2166
|
|
Since it will allow kiblnd_pool_alloc_node() to re-schedule() without holding a read-lock on kiblnd_data.kib_global_lock, this patch looks good to me.
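As I understand it (a rough sketch of the idea only, not the actual patch; helper and field names are approximations), the key point is to stop entering the allocation path with the global read-lock held, e.g. by taking references on the connections to check under the lock and only calling kiblnd_check_sends() after dropping it:
=====================================================================
/* Illustrative restructuring only, not the actual change: take conn
 * references under the global read-lock, then do the potentially
 * blocking work (kiblnd_check_sends() -> pool allocation) with no
 * lock held. */
static void check_conns_sketch(void)
{
        CFS_LIST_HEAD(checks);          /* local list of conns to check */
        kib_conn_t   *conn;
        unsigned long flags;

        cfs_read_lock_irqsave(&kiblnd_data.kib_global_lock, flags);
        /* for each live conn that needs attention:
         *         kiblnd_conn_addref(conn);
         *         cfs_list_add(&conn->ibc_connd_list, &checks);
         */
        cfs_read_unlock_irqrestore(&kiblnd_data.kib_global_lock, flags);

        while (!cfs_list_empty(&checks)) {
                conn = cfs_list_entry(checks.next, kib_conn_t,
                                      ibc_connd_list);
                cfs_list_del(&conn->ibc_connd_list);

                kiblnd_check_sends(conn);   /* may block: no lock held */
                kiblnd_conn_decref(conn);
        }
}
=====================================================================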
On the other hand, this problem is very reproducible/frequent at TGCC, so do you think this patch will be exercised/tested soon?
|
|
Integrated in lustre-master » x86_64,server,el5,inkernel #493
LU-78 o2iblnd: kiblnd_check_conns can deadlock (Revision cc875104bb81313415167425ce21c562ddf540c9)
Result = SUCCESS
Oleg Drokin : cc875104bb81313415167425ce21c562ddf540c9
Files :
- lnet/klnds/o2iblnd/o2iblnd_cb.c
- lnet/klnds/o2iblnd/o2iblnd.h
|
|
Integrated in lustre-master » x86_64,client,el5,inkernel #493
LU-78 o2iblnd: kiblnd_check_conns can deadlock (Revision cc875104bb81313415167425ce21c562ddf540c9)
Result = SUCCESS
Oleg Drokin : cc875104bb81313415167425ce21c562ddf540c9
Files :
- lnet/klnds/o2iblnd/o2iblnd.h
- lnet/klnds/o2iblnd/o2iblnd_cb.c
|
|
Integrated in lustre-master » i686,server,el5,ofa #493
LU-78 o2iblnd: kiblnd_check_conns can deadlock (Revision cc875104bb81313415167425ce21c562ddf540c9)
Result = SUCCESS
Oleg Drokin : cc875104bb81313415167425ce21c562ddf540c9
Files :
- lnet/klnds/o2iblnd/o2iblnd_cb.c
- lnet/klnds/o2iblnd/o2iblnd.h
|
|
Integrated in lustre-master » x86_64,client,el5,ofa #493
LU-78 o2iblnd: kiblnd_check_conns can deadlock (Revision cc875104bb81313415167425ce21c562ddf540c9)
Result = SUCCESS
Oleg Drokin : cc875104bb81313415167425ce21c562ddf540c9
Files :
- lnet/klnds/o2iblnd/o2iblnd.h
- lnet/klnds/o2iblnd/o2iblnd_cb.c
|
|
Integrated in lustre-master » x86_64,client,ubuntu1004,inkernel #493
LU-78 o2iblnd: kiblnd_check_conns can deadlock (Revision cc875104bb81313415167425ce21c562ddf540c9)
Result = SUCCESS
Oleg Drokin : cc875104bb81313415167425ce21c562ddf540c9
Files :
- lnet/klnds/o2iblnd/o2iblnd.h
- lnet/klnds/o2iblnd/o2iblnd_cb.c
|
|
Integrated in lustre-master » x86_64,client,el6,ofa #493
LU-78 o2iblnd: kiblnd_check_conns can deadlock (Revision cc875104bb81313415167425ce21c562ddf540c9)
Result = SUCCESS
Oleg Drokin : cc875104bb81313415167425ce21c562ddf540c9
Files :
- lnet/klnds/o2iblnd/o2iblnd.h
- lnet/klnds/o2iblnd/o2iblnd_cb.c
|
|
Integrated in lustre-master » i686,client,el5,ofa #493
LU-78 o2iblnd: kiblnd_check_conns can deadlock (Revision cc875104bb81313415167425ce21c562ddf540c9)
Result = SUCCESS
Oleg Drokin : cc875104bb81313415167425ce21c562ddf540c9
Files :
- lnet/klnds/o2iblnd/o2iblnd.h
- lnet/klnds/o2iblnd/o2iblnd_cb.c
|
|
Integrated in lustre-master » x86_64,client,sles11,inkernel #493
LU-78 o2iblnd: kiblnd_check_conns can deadlock (Revision cc875104bb81313415167425ce21c562ddf540c9)
Result = SUCCESS
Oleg Drokin : cc875104bb81313415167425ce21c562ddf540c9
Files :
- lnet/klnds/o2iblnd/o2iblnd.h
- lnet/klnds/o2iblnd/o2iblnd_cb.c
|
|
Integrated in lustre-master » x86_64,server,el6,ofa #493
LU-78 o2iblnd: kiblnd_check_conns can deadlock (Revision cc875104bb81313415167425ce21c562ddf540c9)
Result = SUCCESS
Oleg Drokin : cc875104bb81313415167425ce21c562ddf540c9
Files :
- lnet/klnds/o2iblnd/o2iblnd_cb.c
- lnet/klnds/o2iblnd/o2iblnd.h
|
|
Integrated in lustre-master » x86_64,server,el5,ofa #493
LU-78 o2iblnd: kiblnd_check_conns can deadlock (Revision cc875104bb81313415167425ce21c562ddf540c9)
Result = SUCCESS
Oleg Drokin : cc875104bb81313415167425ce21c562ddf540c9
Files :
- lnet/klnds/o2iblnd/o2iblnd.h
- lnet/klnds/o2iblnd/o2iblnd_cb.c
|
|
Integrated in lustre-master » i686,server,el6,inkernel #493
LU-78 o2iblnd: kiblnd_check_conns can deadlock (Revision cc875104bb81313415167425ce21c562ddf540c9)
Result = SUCCESS
Oleg Drokin : cc875104bb81313415167425ce21c562ddf540c9
Files :
- lnet/klnds/o2iblnd/o2iblnd_cb.c
- lnet/klnds/o2iblnd/o2iblnd.h
|
|
Integrated in lustre-master » i686,client,el6,inkernel #493
LU-78 o2iblnd: kiblnd_check_conns can deadlock (Revision cc875104bb81313415167425ce21c562ddf540c9)
Result = SUCCESS
Oleg Drokin : cc875104bb81313415167425ce21c562ddf540c9
Files :
- lnet/klnds/o2iblnd/o2iblnd_cb.c
- lnet/klnds/o2iblnd/o2iblnd.h
|
|
Integrated in lustre-master » i686,client,el5,inkernel #493
LU-78 o2iblnd: kiblnd_check_conns can deadlock (Revision cc875104bb81313415167425ce21c562ddf540c9)
Result = SUCCESS
Oleg Drokin : cc875104bb81313415167425ce21c562ddf540c9
Files :
- lnet/klnds/o2iblnd/o2iblnd_cb.c
- lnet/klnds/o2iblnd/o2iblnd.h
|
|
Integrated in lustre-master » x86_64,server,el6,inkernel #493
LU-78 o2iblnd: kiblnd_check_conns can deadlock (Revision cc875104bb81313415167425ce21c562ddf540c9)
Result = SUCCESS
Oleg Drokin : cc875104bb81313415167425ce21c562ddf540c9
Files :
- lnet/klnds/o2iblnd/o2iblnd_cb.c
- lnet/klnds/o2iblnd/o2iblnd.h
|
|
Integrated in lustre-master » x86_64,client,el6,inkernel #493
LU-78 o2iblnd: kiblnd_check_conns can deadlock (Revision cc875104bb81313415167425ce21c562ddf540c9)
Result = SUCCESS
Oleg Drokin : cc875104bb81313415167425ce21c562ddf540c9
Files :
- lnet/klnds/o2iblnd/o2iblnd_cb.c
- lnet/klnds/o2iblnd/o2iblnd.h
|
|
Integrated in lustre-master » i686,server,el5,inkernel #493
LU-78 o2iblnd: kiblnd_check_conns can deadlock (Revision cc875104bb81313415167425ce21c562ddf540c9)
Result = SUCCESS
Oleg Drokin : cc875104bb81313415167425ce21c562ddf540c9
Files :
- lnet/klnds/o2iblnd/o2iblnd.h
- lnet/klnds/o2iblnd/o2iblnd_cb.c
|
|
Integrated in lustre-master » i686,client,el6,ofa #493
LU-78 o2iblnd: kiblnd_check_conns can deadlock (Revision cc875104bb81313415167425ce21c562ddf540c9)
Result = SUCCESS
Oleg Drokin : cc875104bb81313415167425ce21c562ddf540c9
Files :
- lnet/klnds/o2iblnd/o2iblnd_cb.c
- lnet/klnds/o2iblnd/o2iblnd.h
|
|
Integrated in lustre-master » i686,server,el6,ofa #493
LU-78 o2iblnd: kiblnd_check_conns can deadlock (Revision cc875104bb81313415167425ce21c562ddf540c9)
Result = SUCCESS
Oleg Drokin : cc875104bb81313415167425ce21c562ddf540c9
Files :
- lnet/klnds/o2iblnd/o2iblnd_cb.c
- lnet/klnds/o2iblnd/o2iblnd.h
|
|
Our production systems were also hit by this problem.
|
|
Jay, the patch is already in master now and I'm going to close this ticket.
|
|
Patch landed; closing it.
|
|
Ah, I missed your "review" link and thought it had not gone through maloo yet.
Cool, I will pick it up! Thanks!
|
|
The cherry-pick into b2_1 was smooth, but it failed in the b1_8 tree.
Do you plan to backport the change to b1_8? We were hit by this problem on our 1.8.6 servers twice last week.
|
|
Not sure this is the ideal way to do so, but I was not sure it needed a new JIRA ticket since this update is mainly for information ...
So just for info, during a multi-OSS hang situation mainly reproducing LU-78's problem, I found another, quite similar but different, deadlock scenario with the following details:
_ all CPUs are busy running either ib_cm/<id> or kiblnd_sd_<id> threads, plus kiblnd_connd, which this time is not hung waiting to be re-scheduled on a starved CPU but is instead live-lock spinning on a (struct kib_poolset *)->ps_lock, like several other ib_cm/<id> or kiblnd_sd_<id> threads, because (struct kib_poolset *)->ps_increasing is set, while still owning kiblnd_data.kib_global_lock from kiblnd_check_conns():
==========================================================================
PID: 15371 TASK: ffff881067135100 CPU: 26 COMMAND: "kiblnd_connd"
#0 [ffff88088e587e90] crash_nmi_callback at ffffffff8101fd06
#1 [ffff88088e587ea0] notifier_call_chain at ffffffff814837f5
#2 [ffff88088e587ee0] atomic_notifier_call_chain at ffffffff8148385a
#3 [ffff88088e587ef0] notify_die at ffffffff8108026e
#4 [ffff88088e587f20] do_nmi at ffffffff81481443
#5 [ffff88088e587f50] nmi at ffffffff81480d50
[exception RIP: _spin_lock+30]
RIP: ffffffff8148062e RSP: ffff88107cfb3ce0 RFLAGS: 00000287
RAX: 0000000000000987 RBX: ffff88107c04ec30 RCX: 0000000000000000
RDX: 000000000000097c RSI: ffff88088e592e10 RDI: ffff88107c04ec30
RBP: ffff88107cfb3ce0 R8: 0000000000000001 R9: 00000000ffffffff
R10: 0000000000000000 R11: 0000000000000001 R12: ffff88107c04ec60
R13: ffff88107c04ec40 R14: ffff88107cfb3d18 R15: 0000000000000001
ORIG_RAX: ffffffffffffffff CS: 0010 SS: 0018
--- <NMI exception stack> ---
#6 [ffff88107cfb3ce0] _spin_lock at ffffffff8148062e
#7 [ffff88107cfb3ce8] kiblnd_pool_alloc_node at ffffffffa055c7c8 [ko2iblnd]
#8 [ffff88107cfb3d58] kiblnd_get_idle_tx at ffffffffa056663d [ko2iblnd]
#9 [ffff88107cfb3d88] kiblnd_check_sends at ffffffffa0568ed6 [ko2iblnd]
#10 [ffff88107cfb3e18] kiblnd_check_conns at ffffffffa0569123 [ko2iblnd]
#11 [ffff88107cfb3e98] kiblnd_connd at ffffffffa056c4f7 [ko2iblnd]
#12 [ffff88107cfb3f48] kernel_thread at ffffffff810041aa
==========================================================================
_ the thread that set "ps_increasing" in kiblnd_pool_alloc_node() has been re-scheduled during kmalloc(), but is unable to get back on its CPU/core because the "ib_cm/29" thread is now write-lock spinning on kiblnd_data.kib_global_lock; both have the following stacks:
========================================================================
PID: 15354 TASK: ffff881067145850 CPU: 29 COMMAND: "kiblnd_sd_07"
#0 [ffff88107d0b7590] schedule at ffffffff8147dddc
#1 [ffff88107d0b7658] __cond_resched at ffffffff8104d44a
#2 [ffff88107d0b7678] _cond_resched at ffffffff8147e680
#3 [ffff88107d0b7688] kmem_cache_alloc_node_notrace at ffffffff811466d8
#4 [ffff88107d0b76c8] __kmalloc_node at ffffffff811468eb
#5 [ffff88107d0b7718] __vmalloc_area_node at ffffffff81133c2f
#6 [ffff88107d0b7778] __vmalloc_node at ffffffff81133bc2
#7 [ffff88107d0b77b8] vmalloc at ffffffff81133e7c
#8 [ffff88107d0b77c8] cfs_alloc_large at ffffffffa040399e [libcfs]
#9 [ffff88107d0b77d8] kiblnd_create_tx_pool at ffffffffa055e649 [ko2iblnd]
#10 [ffff88107d0b7888] kiblnd_pool_alloc_node at ffffffffa055c90f [ko2iblnd]
#11 [ffff88107d0b78f8] kiblnd_get_idle_tx at ffffffffa056663d [ko2iblnd]
#12 [ffff88107d0b7928] kiblnd_check_sends at ffffffffa0568ed6 [ko2iblnd]
#13 [ffff88107d0b79b8] kiblnd_post_rx at ffffffffa056b170 [ko2iblnd]
#14 [ffff88107d0b7a48] kiblnd_recv at ffffffffa056b46a [ko2iblnd]
#15 [ffff88107d0b7b08] lnet_ni_recv at ffffffffa0464608 [lnet]
#16 [ffff88107d0b7b98] lnet_recv_put at ffffffffa0464966 [lnet]
#17 [ffff88107d0b7be8] lnet_parse at ffffffffa046b32f [lnet]
#18 [ffff88107d0b7ce8] kiblnd_handle_rx at ffffffffa056bb8b [ko2iblnd]
#19 [ffff88107d0b7d78] kiblnd_rx_complete at ffffffffa056c850 [ko2iblnd]
#20 [ffff88107d0b7df8] kiblnd_complete at ffffffffa056c9f2 [ko2iblnd]
#21 [ffff88107d0b7e38] kiblnd_scheduler at ffffffffa056cd94 [ko2iblnd]
#22 [ffff88107d0b7f48] kernel_thread at ffffffff810041aa
PID: 9529 TASK: ffff88087dad7850 CPU: 29 COMMAND: "ib_cm/29"
#0 [ffff88048e5c7e90] crash_nmi_callback at ffffffff8101fd06
#1 [ffff88048e5c7ea0] notifier_call_chain at ffffffff814837f5
#2 [ffff88048e5c7ee0] atomic_notifier_call_chain at ffffffff8148385a
#3 [ffff88048e5c7ef0] notify_die at ffffffff8108026e
#4 [ffff88048e5c7f20] do_nmi at ffffffff81481443
#5 [ffff88048e5c7f50] nmi at ffffffff81480d50
[exception RIP: __write_lock_failed+9]
RIP: ffffffff81264919 RSP: ffff880879e3ba28 RFLAGS: 00000087
RAX: 0000000000000246 RBX: ffff880804b90840 RCX: 0000000000000000
RDX: 0000000000000246 RSI: 0000000000000050 RDI: ffffffffa0578ecc
RBP: ffff880879e3ba30 R8: 0000000000000246 R9: 0000000000000010
R10: 00000000787c3001 R11: 0000000000000001 R12: ffff88107cc11600
R13: ffff88107d28fd80 R14: 000500030a653047 R15: ffff880879e3bb90
ORIG_RAX: ffffffffffffffff CS: 0010 SS: 0018
--- <NMI exception stack> ---
#6 [ffff880879e3ba28] __write_lock_failed at ffffffff81264919
#7 [ffff880879e3ba28] _write_lock_irqsave at ffffffff814805e8
#8 [ffff880879e3ba38] kiblnd_create_peer at ffffffffa05597a0 [ko2iblnd]
#9 [ffff880879e3bab8] kiblnd_passive_connect at ffffffffa056e9df [ko2iblnd]
#10 [ffff880879e3bbd8] kiblnd_cm_callback at ffffffffa057059d [ko2iblnd]
#11 [ffff880879e3bc48] cma_req_handler at ffffffffa03741c0 [rdma_cm]
#12 [ffff880879e3bd08] cm_process_work at ffffffffa0350e67 [ib_cm]
#13 [ffff880879e3bd58] cm_req_handler at ffffffffa03527e0 [ib_cm]
#14 [ffff880879e3bdd8] cm_work_handler at ffffffffa03532d5 [ib_cm]
#15 [ffff880879e3be38] worker_thread at ffffffff810749a0
#16 [ffff880879e3bee8] kthread at ffffffff81079f36
#17 [ffff880879e3bf48] kernel_thread at ffffffff810041aa
========================================================================
So, my assumption is that the fix already provided in LU-78 for the original situation may also fix/avoid this one, since kiblnd_check_sends() is now called from kiblnd_check_conns() without holding kiblnd_data.kib_global_lock anymore ...
But anyway, I also wanted to report/document this 2nd scenario, and the potential "live-lock" situation that was encountered with the following code path in kiblnd_pool_alloc_node():
=======================================
cfs_list_t *
kiblnd_pool_alloc_node(kib_poolset_t *ps)
{
        cfs_list_t *node;
        kib_pool_t *pool;
        int         rc;
again:
        cfs_spin_lock(&ps->ps_lock);
        cfs_list_for_each_entry(pool, &ps->ps_pool_list, po_list) {
                ......
        }
        /* no available tx pool and ... */
        if (ps->ps_increasing) {
                /* another thread is allocating a new pool */
                cfs_spin_unlock(&ps->ps_lock);
                CDEBUG(D_NET, "Another thread is allocating new "
                              "%s pool, waiting for her to complete\n",
                       ps->ps_name);
                cfs_schedule();
                goto again;
        }
        .......
=======================================
which should probably be changed to use a semaphore/wait-queue mechanism? ...
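For reference, a hypothetical sketch of such a wait-queue based retry (illustrative names; ps_waitq would naturally live inside kib_poolset_t, and the real code would use the libcfs equivalents):
=======================================
/* Hypothetical replacement for the cfs_schedule() retry loop:
 * waiters sleep on a wait queue and the pool grower wakes them,
 * instead of every waiter burning its timeslice and looping. */
#include <linux/wait.h>

static DECLARE_WAIT_QUEUE_HEAD(ps_waitq);

/* waiter side, inside kiblnd_pool_alloc_node(): */
        if (ps->ps_increasing) {
                /* another thread is allocating a new pool: sleep until
                 * it is done instead of rescheduling in a tight loop
                 * (ps_increasing is re-checked under ps_lock at 'again') */
                cfs_spin_unlock(&ps->ps_lock);
                wait_event(ps_waitq, !ps->ps_increasing);
                goto again;
        }

/* grower side, once the new pool has been linked in: */
        cfs_spin_lock(&ps->ps_lock);
        ps->ps_increasing = 0;
        cfs_spin_unlock(&ps->ps_lock);
        wake_up_all(&ps_waitq);
=======================================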
|
|
Integrated in lustre-b2_1 » x86_64,client,sles11,inkernel #41
LU-78 o2iblnd: kiblnd_check_conns can deadlock (Revision dcf1b2e6e2f22935f823cb7610b33b8f9c3ef155)
Result = SUCCESS
Oleg Drokin : dcf1b2e6e2f22935f823cb7610b33b8f9c3ef155
Files :
- lnet/klnds/o2iblnd/o2iblnd.h
- lnet/klnds/o2iblnd/o2iblnd_cb.c
|
|
Integrated in lustre-b2_1 » i686,client,el6,inkernel #41
LU-78 o2iblnd: kiblnd_check_conns can deadlock (Revision dcf1b2e6e2f22935f823cb7610b33b8f9c3ef155)
Result = SUCCESS
Oleg Drokin : dcf1b2e6e2f22935f823cb7610b33b8f9c3ef155
Files :
- lnet/klnds/o2iblnd/o2iblnd.h
- lnet/klnds/o2iblnd/o2iblnd_cb.c
|
|
Integrated in lustre-b2_1 » x86_64,server,el6,inkernel #41
LU-78 o2iblnd: kiblnd_check_conns can deadlock (Revision dcf1b2e6e2f22935f823cb7610b33b8f9c3ef155)
Result = SUCCESS
Oleg Drokin : dcf1b2e6e2f22935f823cb7610b33b8f9c3ef155
Files :
- lnet/klnds/o2iblnd/o2iblnd.h
- lnet/klnds/o2iblnd/o2iblnd_cb.c
|
|
Integrated in lustre-b2_1 » i686,client,el5,ofa #41
LU-78 o2iblnd: kiblnd_check_conns can deadlock (Revision dcf1b2e6e2f22935f823cb7610b33b8f9c3ef155)
Result = SUCCESS
Oleg Drokin : dcf1b2e6e2f22935f823cb7610b33b8f9c3ef155
Files :
- lnet/klnds/o2iblnd/o2iblnd_cb.c
- lnet/klnds/o2iblnd/o2iblnd.h
|
|
Integrated in lustre-b2_1 » x86_64,server,el5,ofa #41
LU-78 o2iblnd: kiblnd_check_conns can deadlock (Revision dcf1b2e6e2f22935f823cb7610b33b8f9c3ef155)
Result = SUCCESS
Oleg Drokin : dcf1b2e6e2f22935f823cb7610b33b8f9c3ef155
Files :
- lnet/klnds/o2iblnd/o2iblnd_cb.c
- lnet/klnds/o2iblnd/o2iblnd.h
|
|
Integrated in lustre-b2_1 » x86_64,client,el6,inkernel #41
LU-78 o2iblnd: kiblnd_check_conns can deadlock (Revision dcf1b2e6e2f22935f823cb7610b33b8f9c3ef155)
Result = SUCCESS
Oleg Drokin : dcf1b2e6e2f22935f823cb7610b33b8f9c3ef155
Files :
- lnet/klnds/o2iblnd/o2iblnd_cb.c
- lnet/klnds/o2iblnd/o2iblnd.h
|
|
Integrated in lustre-b2_1 » i686,server,el6,inkernel #41
LU-78 o2iblnd: kiblnd_check_conns can deadlock (Revision dcf1b2e6e2f22935f823cb7610b33b8f9c3ef155)
Result = SUCCESS
Oleg Drokin : dcf1b2e6e2f22935f823cb7610b33b8f9c3ef155
Files :
- lnet/klnds/o2iblnd/o2iblnd_cb.c
- lnet/klnds/o2iblnd/o2iblnd.h
|
|
Integrated in lustre-b2_1 » x86_64,client,el5,inkernel #41
LU-78 o2iblnd: kiblnd_check_conns can deadlock (Revision dcf1b2e6e2f22935f823cb7610b33b8f9c3ef155)
Result = SUCCESS
Oleg Drokin : dcf1b2e6e2f22935f823cb7610b33b8f9c3ef155
Files :
- lnet/klnds/o2iblnd/o2iblnd.h
- lnet/klnds/o2iblnd/o2iblnd_cb.c
|
|
Integrated in lustre-b2_1 » i686,server,el5,inkernel #41
LU-78 o2iblnd: kiblnd_check_conns can deadlock (Revision dcf1b2e6e2f22935f823cb7610b33b8f9c3ef155)
Result = SUCCESS
Oleg Drokin : dcf1b2e6e2f22935f823cb7610b33b8f9c3ef155
Files :
- lnet/klnds/o2iblnd/o2iblnd_cb.c
- lnet/klnds/o2iblnd/o2iblnd.h
|
|
Integrated in lustre-b2_1 » x86_64,server,el5,inkernel #41
LU-78 o2iblnd: kiblnd_check_conns can deadlock (Revision dcf1b2e6e2f22935f823cb7610b33b8f9c3ef155)
Result = SUCCESS
Oleg Drokin : dcf1b2e6e2f22935f823cb7610b33b8f9c3ef155
Files :
- lnet/klnds/o2iblnd/o2iblnd_cb.c
- lnet/klnds/o2iblnd/o2iblnd.h
|
|
Integrated in lustre-b2_1 » i686,server,el5,ofa #41
LU-78 o2iblnd: kiblnd_check_conns can deadlock (Revision dcf1b2e6e2f22935f823cb7610b33b8f9c3ef155)
Result = SUCCESS
Oleg Drokin : dcf1b2e6e2f22935f823cb7610b33b8f9c3ef155
Files :
- lnet/klnds/o2iblnd/o2iblnd.h
- lnet/klnds/o2iblnd/o2iblnd_cb.c
|
|
Integrated in lustre-b2_1 » x86_64,client,el5,ofa #41
LU-78 o2iblnd: kiblnd_check_conns can deadlock (Revision dcf1b2e6e2f22935f823cb7610b33b8f9c3ef155)
Result = SUCCESS
Oleg Drokin : dcf1b2e6e2f22935f823cb7610b33b8f9c3ef155
Files :
- lnet/klnds/o2iblnd/o2iblnd_cb.c
- lnet/klnds/o2iblnd/o2iblnd.h
|
|
Integrated in lustre-b2_1 » i686,client,el5,inkernel #41
LU-78 o2iblnd: kiblnd_check_conns can deadlock (Revision dcf1b2e6e2f22935f823cb7610b33b8f9c3ef155)
Result = SUCCESS
Oleg Drokin : dcf1b2e6e2f22935f823cb7610b33b8f9c3ef155
Files :
- lnet/klnds/o2iblnd/o2iblnd.h
- lnet/klnds/o2iblnd/o2iblnd_cb.c
|