[LU-3766] ASSERTION( stripe < lio->lis_stripe_count ) Created: 15/Aug/13  Updated: 09/Oct/21  Resolved: 09/Oct/21

Status: Resolved
Project: Lustre
Component/s: None
Affects Version/s: Lustre 2.1.5
Fix Version/s: None

Type: Bug Priority: Critical
Reporter: Rustem Bikboulatov Assignee: WC Triage
Resolution: Cannot Reproduce Votes: 0
Labels: None
Environment:

Linux 2.6.32-279.19.1.el6_lustre.x86_64 #1 SMP


Attachments: GIF File 20140113 - Hardware Diagram v0.1_R3.gif    
Severity: 3
Rank (Obsolete): 9700

 Description   

We have a kernel crash on Lustre Client 2.1.5 with the following assertion:

LustreError: 31091:0:(lov_io.c:214:lov_sub_get()) ASSERTION( stripe < lio->lis_stripe_count ) failed:
LustreError: 31091:0:(lov_io.c:214:lov_sub_get()) LBUG

It is very similar to:

LU-2652
LU-3524

Has this bug been fixed in 2.4? If so, are there any plans to fix it in 2.1? And is there a way to work around the error (perhaps via configuration) without updating?
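If the assertion is related to the file layout seen by the client, one configuration-level check is to inspect and explicitly pin the striping of the affected directory with standard `lfs` commands. This is only a sketch of such a check, not a confirmed workaround; `/array1` is the mount point from this cluster, and the task-file path is illustrative:

```shell
# Show the default striping layout of the mount point.
lfs getstripe -d /array1

# Show the layout of a specific task file (path is illustrative).
lfs getstripe /array1/tasks/task001

# Explicitly pin the directory default to a single stripe, matching the
# current configuration, so newly created files inherit exactly one stripe.
lfs setstripe -c 1 /array1
```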

[root@r03 lustre_2.1.5]# crash /usr/lib/debug/lib/modules/2.6.32-279.19.1.el6_lustre.x86_64/vmlinux /var/crash/127.0.0.1-2013-08-13-10\:15\:56/vmcore

crash 6.0.4-2.el6
Copyright (C) 2002-2012 Red Hat, Inc.
Copyright (C) 2004, 2005, 2006 IBM Corporation
Copyright (C) 1999-2006 Hewlett-Packard Co
Copyright (C) 2005, 2006 Fujitsu Limited
Copyright (C) 2006, 2007 VA Linux Systems Japan K.K.
Copyright (C) 2005 NEC Corporation
Copyright (C) 1999, 2002, 2007 Silicon Graphics, Inc.
Copyright (C) 1999, 2000, 2001, 2002 Mission Critical Linux, Inc.
This program is free software, covered by the GNU General Public License,
and you are welcome to change it and/or distribute copies of it under
certain conditions. Enter "help copying" to see the conditions.
This program has absolutely no warranty. Enter "help warranty" for details.

GNU gdb (GDB) 7.3.1
Copyright (C) 2011 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law. Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-unknown-linux-gnu"...

KERNEL: /usr/lib/debug/lib/modules/2.6.32-279.19.1.el6_lustre.x86_64/vmlinux
DUMPFILE: /var/crash/127.0.0.1-2013-08-13-10:15:56/vmcore [PARTIAL DUMP]
CPUS: 16
DATE: Tue Aug 13 10:14:51 2013
UPTIME: 4 days, 12:04:11
LOAD AVERAGE: 0.00, 0.11, 0.12
TASKS: 513
NODENAME: r03
RELEASE: 2.6.32-279.19.1.el6_lustre.x86_64
VERSION: #1 SMP Wed Mar 20 16:37:18 PDT 2013
MACHINE: x86_64 (2400 Mhz)
MEMORY: 12 GB
PANIC: "Kernel panic - not syncing: LBUG"
PID: 31091
COMMAND: "lrvfarmd"
TASK: ffff88013cd3b500 [THREAD_INFO: ffff880149fd4000]
CPU: 1
STATE: TASK_RUNNING (PANIC)

crash> log

LustreError: 31091:0:(lov_io.c:214:lov_sub_get()) ASSERTION( stripe < lio->lis_stripe_count ) failed:
LustreError: 31091:0:(lov_io.c:214:lov_sub_get()) LBUG
Pid: 31091, comm: lrvfarmd

Call Trace:
[<ffffffffa034a785>] libcfs_debug_dumpstack+0x55/0x80 [libcfs]
[<ffffffffa034ad97>] lbug_with_loc+0x47/0xb0 [libcfs]
[<ffffffffa099e93f>] lov_sub_get+0x47f/0x6f0 [lov]
[<ffffffffa0998cfc>] lov_page_init_raid0+0x14c/0x770 [lov]
[<ffffffff812754b4>] ? call_rwsem_down_read_failed+0x14/0x30
[<ffffffffa0995a54>] lov_page_init+0x54/0xe0 [lov]
[<ffffffffa04a415c>] cl_page_find0+0x1cc/0x850 [obdclass]
[<ffffffffa04a4811>] cl_page_find+0x11/0x20 [obdclass]
[<ffffffffa0a591d2>] ll_cl_init+0x152/0x560 [lustre]
[<ffffffff8116b858>] ? mem_cgroup_cache_charge+0x118/0x130
[<ffffffffa0a5962a>] ll_readpage+0x4a/0x200 [lustre]
[<ffffffff811117ec>] generic_file_aio_read+0x1fc/0x700
[<ffffffff8109672f>] ? up+0x2f/0x50
[<ffffffffa0a80cdb>] vvp_io_read_start+0x13b/0x3e0 [lustre]
[<ffffffffa04ac23a>] cl_io_start+0x6a/0x140 [obdclass]
[<ffffffffa04b0a7c>] cl_io_loop+0xcc/0x190 [obdclass]
[<ffffffffa0a31047>] ll_file_io_generic+0x3a7/0x560 [lustre]
[<ffffffffa0a31339>] ll_file_aio_read+0x139/0x2c0 [lustre]
[<ffffffffa0a317f9>] ll_file_read+0x169/0x2a0 [lustre]
[<ffffffff81176cb5>] vfs_read+0xb5/0x1a0
[<ffffffff81176df1>] sys_read+0x51/0x90
[<ffffffff814ed03e>] ? do_device_not_available+0xe/0x10
[<ffffffff8100b072>] system_call_fastpath+0x16/0x1b

Kernel panic - not syncing: LBUG
Pid: 31091, comm: lrvfarmd Not tainted 2.6.32-279.19.1.el6_lustre.x86_64 #1
Call Trace:
[<ffffffff814e9811>] ? panic+0xa0/0x168
[<ffffffffa034adeb>] ? lbug_with_loc+0x9b/0xb0 [libcfs]
[<ffffffffa099e93f>] ? lov_sub_get+0x47f/0x6f0 [lov]
[<ffffffffa0998cfc>] ? lov_page_init_raid0+0x14c/0x770 [lov]
[<ffffffff812754b4>] ? call_rwsem_down_read_failed+0x14/0x30
[<ffffffffa0995a54>] ? lov_page_init+0x54/0xe0 [lov]
[<ffffffffa04a415c>] ? cl_page_find0+0x1cc/0x850 [obdclass]
[<ffffffffa04a4811>] ? cl_page_find+0x11/0x20 [obdclass]
[<ffffffffa0a591d2>] ? ll_cl_init+0x152/0x560 [lustre]
[<ffffffff8116b858>] ? mem_cgroup_cache_charge+0x118/0x130
[<ffffffffa0a5962a>] ? ll_readpage+0x4a/0x200 [lustre]
[<ffffffff811117ec>] ? generic_file_aio_read+0x1fc/0x700
[<ffffffff8109672f>] ? up+0x2f/0x50
[<ffffffffa0a80cdb>] ? vvp_io_read_start+0x13b/0x3e0 [lustre]
[<ffffffffa04ac23a>] ? cl_io_start+0x6a/0x140 [obdclass]
[<ffffffffa04b0a7c>] ? cl_io_loop+0xcc/0x190 [obdclass]
[<ffffffffa0a31047>] ? ll_file_io_generic+0x3a7/0x560 [lustre]
[<ffffffffa0a31339>] ? ll_file_aio_read+0x139/0x2c0 [lustre]
[<ffffffffa0a317f9>] ? ll_file_read+0x169/0x2a0 [lustre]
[<ffffffff81176cb5>] ? vfs_read+0xb5/0x1a0
[<ffffffff81176df1>] ? sys_read+0x51/0x90
[<ffffffff814ed03e>] ? do_device_not_available+0xe/0x10
[<ffffffff8100b072>] ? system_call_fastpath+0x16/0x1b



 Comments   
Comment by Rustem Bikboulatov [ 19/Aug/13 ]

Today we had another Lustre client crash:

[root@r03 ~]# crash /usr/lib/debug/lib/modules/2.6.32-279.19.1.el6_lustre.x86_64/vmlinux /var/crash/127.0.0.1-2013-08-19-01\:19\:55/vmcore

crash 6.0.4-2.el6
Copyright (C) 2002-2012 Red Hat, Inc.
Copyright (C) 2004, 2005, 2006 IBM Corporation
Copyright (C) 1999-2006 Hewlett-Packard Co
Copyright (C) 2005, 2006 Fujitsu Limited
Copyright (C) 2006, 2007 VA Linux Systems Japan K.K.
Copyright (C) 2005 NEC Corporation
Copyright (C) 1999, 2002, 2007 Silicon Graphics, Inc.
Copyright (C) 1999, 2000, 2001, 2002 Mission Critical Linux, Inc.
This program is free software, covered by the GNU General Public License,
and you are welcome to change it and/or distribute copies of it under
certain conditions. Enter "help copying" to see the conditions.
This program has absolutely no warranty. Enter "help warranty" for details.

GNU gdb (GDB) 7.3.1
Copyright (C) 2011 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law. Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-unknown-linux-gnu"...

KERNEL: /usr/lib/debug/lib/modules/2.6.32-279.19.1.el6_lustre.x86_64/vmlinux
DUMPFILE: /var/crash/127.0.0.1-2013-08-19-01:19:55/vmcore [PARTIAL DUMP]
CPUS: 16
DATE: Mon Aug 19 01:18:50 2013
UPTIME: 3 days, 06:55:45
LOAD AVERAGE: 1.48, 2.67, 3.51
TASKS: 572
NODENAME: r03
RELEASE: 2.6.32-279.19.1.el6_lustre.x86_64
VERSION: #1 SMP Wed Mar 20 16:37:18 PDT 2013
MACHINE: x86_64 (2400 Mhz)
MEMORY: 12 GB
PANIC: "Kernel panic - not syncing: LBUG"
PID: 9099
COMMAND: "lrvfarmd"
TASK: ffff8802b063f500 [THREAD_INFO: ffff8802b0640000]
CPU: 6
STATE: TASK_RUNNING (PANIC)

crash> log

LustreError: 9099:0:(lov_io.c:214:lov_sub_get()) ASSERTION( stripe < lio->lis_stripe_count ) failed:
LustreError: 9099:0:(lov_io.c:214:lov_sub_get()) LBUG
Pid: 9099, comm: lrvfarmd

Call Trace:
[<ffffffffa0372785>] libcfs_debug_dumpstack+0x55/0x80 [libcfs]
[<ffffffffa0372d97>] lbug_with_loc+0x47/0xb0 [libcfs]
[<ffffffffa099a93f>] lov_sub_get+0x47f/0x6f0 [lov]
[<ffffffffa0994cfc>] lov_page_init_raid0+0x14c/0x770 [lov]
[<ffffffff812754b4>] ? call_rwsem_down_read_failed+0x14/0x30
[<ffffffffa0991a54>] lov_page_init+0x54/0xe0 [lov]
[<ffffffffa04ba15c>] cl_page_find0+0x1cc/0x850 [obdclass]
[<ffffffffa04ba811>] cl_page_find+0x11/0x20 [obdclass]
[<ffffffffa0a551d2>] ll_cl_init+0x152/0x560 [lustre]
[<ffffffff8116b858>] ? mem_cgroup_cache_charge+0x118/0x130
[<ffffffffa0a5562a>] ll_readpage+0x4a/0x200 [lustre]
[<ffffffff811117ec>] generic_file_aio_read+0x1fc/0x700
[<ffffffff8109672f>] ? up+0x2f/0x50
[<ffffffffa0a7ccdb>] vvp_io_read_start+0x13b/0x3e0 [lustre]
[<ffffffffa04c223a>] cl_io_start+0x6a/0x140 [obdclass]
[<ffffffffa04c6a7c>] cl_io_loop+0xcc/0x190 [obdclass]
[<ffffffffa0a2d047>] ll_file_io_generic+0x3a7/0x560 [lustre]
[<ffffffffa0a2d339>] ll_file_aio_read+0x139/0x2c0 [lustre]
[<ffffffffa0a2d7f9>] ll_file_read+0x169/0x2a0 [lustre]
[<ffffffff81176cb5>] vfs_read+0xb5/0x1a0
[<ffffffff81176df1>] sys_read+0x51/0x90
[<ffffffff814ed03e>] ? do_device_not_available+0xe/0x10
[<ffffffff8100b072>] system_call_fastpath+0x16/0x1b

Kernel panic - not syncing: LBUG
Pid: 9099, comm: lrvfarmd Not tainted 2.6.32-279.19.1.el6_lustre.x86_64 #1
Call Trace:
[<ffffffff814e9811>] ? panic+0xa0/0x168
[<ffffffffa0372deb>] ? lbug_with_loc+0x9b/0xb0 [libcfs]
[<ffffffffa099a93f>] ? lov_sub_get+0x47f/0x6f0 [lov]
[<ffffffffa0994cfc>] ? lov_page_init_raid0+0x14c/0x770 [lov]
[<ffffffff812754b4>] ? call_rwsem_down_read_failed+0x14/0x30
[<ffffffffa0991a54>] ? lov_page_init+0x54/0xe0 [lov]
[<ffffffffa04ba15c>] ? cl_page_find0+0x1cc/0x850 [obdclass]
[<ffffffffa04ba811>] ? cl_page_find+0x11/0x20 [obdclass]
[<ffffffffa0a551d2>] ? ll_cl_init+0x152/0x560 [lustre]
[<ffffffff8116b858>] ? mem_cgroup_cache_charge+0x118/0x130
[<ffffffffa0a5562a>] ? ll_readpage+0x4a/0x200 [lustre]
[<ffffffff811117ec>] ? generic_file_aio_read+0x1fc/0x700
[<ffffffff8109672f>] ? up+0x2f/0x50
[<ffffffffa0a7ccdb>] ? vvp_io_read_start+0x13b/0x3e0 [lustre]
[<ffffffffa04c223a>] ? cl_io_start+0x6a/0x140 [obdclass]
[<ffffffffa04c6a7c>] ? cl_io_loop+0xcc/0x190 [obdclass]
[<ffffffffa0a2d047>] ? ll_file_io_generic+0x3a7/0x560 [lustre]
[<ffffffffa0a2d339>] ? ll_file_aio_read+0x139/0x2c0 [lustre]
[<ffffffffa0a2d7f9>] ? ll_file_read+0x169/0x2a0 [lustre]
[<ffffffff81176cb5>] ? vfs_read+0xb5/0x1a0
[<ffffffff81176df1>] ? sys_read+0x51/0x90
[<ffffffff814ed03e>] ? do_device_not_available+0xe/0x10
[<ffffffff8100b072>] ? system_call_fastpath+0x16/0x1b

crash> kmem -i
                 PAGES        TOTAL      PERCENTAGE
    TOTAL MEM  3014311      11.5 GB         ----
         FREE  2394588       9.1 GB    79% of TOTAL MEM
         USED   619723       2.4 GB    20% of TOTAL MEM
       SHARED    90534     353.6 MB     3% of TOTAL MEM
      BUFFERS      114       456 KB     0% of TOTAL MEM
       CACHED    87771     342.9 MB     2% of TOTAL MEM
         SLAB   381764       1.5 GB    12% of TOTAL MEM

   TOTAL SWAP   524286         2 GB         ----
    SWAP USED      426       1.7 MB     0% of TOTAL SWAP
    SWAP FREE   523860         2 GB    99% of TOTAL SWAP

Comment by Rustem Bikboulatov [ 29/Aug/13 ]

And another crash:

LustreError: 3447:0:(lov_io.c:214:lov_sub_get()) ASSERTION( stripe < lio->lis_stripe_count ) failed:
LustreError: 3447:0:(lov_io.c:214:lov_sub_get()) LBUG
Pid: 3447, comm: lrvfarmd

Call Trace:
[<ffffffffa0374785>] libcfs_debug_dumpstack+0x55/0x80 [libcfs]
[<ffffffffa0374d97>] lbug_with_loc+0x47/0xb0 [libcfs]
[<ffffffffa099493f>] lov_sub_get+0x47f/0x6f0 [lov]
[<ffffffffa098ecfc>] lov_page_init_raid0+0x14c/0x770 [lov]
[<ffffffff812754b4>] ? call_rwsem_down_read_failed+0x14/0x30
[<ffffffffa098ba54>] lov_page_init+0x54/0xe0 [lov]
[<ffffffffa04bc15c>] cl_page_find0+0x1cc/0x850 [obdclass]
[<ffffffffa04bc811>] cl_page_find+0x11/0x20 [obdclass]
[<ffffffffa0a4f1d2>] ll_cl_init+0x152/0x560 [lustre]
[<ffffffff8116b858>] ? mem_cgroup_cache_charge+0x118/0x130
[<ffffffffa0a4f62a>] ll_readpage+0x4a/0x200 [lustre]
[<ffffffff811117ec>] generic_file_aio_read+0x1fc/0x700
[<ffffffff8109672f>] ? up+0x2f/0x50
[<ffffffffa0a76cdb>] vvp_io_read_start+0x13b/0x3e0 [lustre]
[<ffffffffa04c423a>] cl_io_start+0x6a/0x140 [obdclass]
[<ffffffffa04c8a7c>] cl_io_loop+0xcc/0x190 [obdclass]
[<ffffffffa0a27047>] ll_file_io_generic+0x3a7/0x560 [lustre]
[<ffffffffa0a27339>] ll_file_aio_read+0x139/0x2c0 [lustre]
[<ffffffffa0a277f9>] ll_file_read+0x169/0x2a0 [lustre]
[<ffffffff81176cb5>] vfs_read+0xb5/0x1a0
[<ffffffff81176df1>] sys_read+0x51/0x90
[<ffffffff814ed03e>] ? do_device_not_available+0xe/0x10
[<ffffffff8100b072>] system_call_fastpath+0x16/0x1b

Kernel panic - not syncing: LBUG
Pid: 3447, comm: lrvfarmd Not tainted 2.6.32-279.19.1.el6_lustre.x86_64 #1
Call Trace:
[<ffffffff814e9811>] ? panic+0xa0/0x168
[<ffffffffa0374deb>] ? lbug_with_loc+0x9b/0xb0 [libcfs]
[<ffffffffa099493f>] ? lov_sub_get+0x47f/0x6f0 [lov]
[<ffffffffa098ecfc>] ? lov_page_init_raid0+0x14c/0x770 [lov]
[<ffffffff812754b4>] ? call_rwsem_down_read_failed+0x14/0x30
[<ffffffffa098ba54>] ? lov_page_init+0x54/0xe0 [lov]
[<ffffffffa04bc15c>] ? cl_page_find0+0x1cc/0x850 [obdclass]
[<ffffffffa04bc811>] ? cl_page_find+0x11/0x20 [obdclass]
[<ffffffffa0a4f1d2>] ? ll_cl_init+0x152/0x560 [lustre]
[<ffffffff8116b858>] ? mem_cgroup_cache_charge+0x118/0x130
[<ffffffffa0a4f62a>] ? ll_readpage+0x4a/0x200 [lustre]
[<ffffffff811117ec>] ? generic_file_aio_read+0x1fc/0x700
[<ffffffff8109672f>] ? up+0x2f/0x50
[<ffffffffa0a76cdb>] ? vvp_io_read_start+0x13b/0x3e0 [lustre]
[<ffffffffa04c423a>] ? cl_io_start+0x6a/0x140 [obdclass]
[<ffffffffa04c8a7c>] ? cl_io_loop+0xcc/0x190 [obdclass]
[<ffffffffa0a27047>] ? ll_file_io_generic+0x3a7/0x560 [lustre]
[<ffffffffa0a27339>] ? ll_file_aio_read+0x139/0x2c0 [lustre]
[<ffffffffa0a277f9>] ? ll_file_read+0x169/0x2a0 [lustre]
[<ffffffff81176cb5>] ? vfs_read+0xb5/0x1a0
[<ffffffff81176df1>] ? sys_read+0x51/0x90
[<ffffffff814ed03e>] ? do_device_not_available+0xe/0x10
[<ffffffff8100b072>] ? system_call_fastpath+0x16/0x1b

Comment by Rustem Bikboulatov [ 14/Jan/14 ]

Lustre Cluster Diagram

Comment by Rustem Bikboulatov [ 14/Jan/14 ]

Here is the cluster configuration:

Lustre Server MGS/MDS - mmp-2
Lustre Servers OSS - n11, n12, n13, n14, n15, n21, n22, n23, n24, n25
Lustre Clients - r01, r02, r03, r04, mmp-1, vn-1, cln01, cln02, cln03, cln04

(refer to the attached diagram "20140113 - Hardware Diagram v0.1_R3.gif")

Environment:
Linux 2.6.32-279.19.1.el6_lustre.x86_64 #1 SMP

Mount points:

OSS:
/dev/md11 on /lustre/ost type lustre (rw,noauto,_netdev,abort_recov)

MGS/MDS:
/dev/lustre_mgs on /lustre/mgs type lustre (rw,noauto,_netdev,abort_recov)
/dev/lustre_mdt1 on /lustre/mdt1 type lustre (rw,noauto,_netdev,abort_recov)

Clients:
mmp-2@tcp:mmp-1@tcp:/lustre1 on /array1 type lustre (rw,noauto,_netdev,flock,abort_recov,lazystatfs)

Stripe config:

[root@mmp-1 ~]# lfs getstripe /array1/.
/array1/.
stripe_count: 1 stripe_size: 1048576 stripe_offset: -1

kdump config:

core_collector makedumpfile -c --message-level 1 -d 31

Application Software Description (LRVfarm):

LRVfarm is software that processes media files (video + audio) and creates proxy video (low-resolution video). Each running task has a small task file, which is located on the Lustre file system. LRVfarm runs several processes: 8 on each client (r01, r02, r03, r04), for a total of 32 LRVfarm processes in the cluster. Each process uses file locking when performing tasks: an LRVfarm process locks a task file and performs the task. Other LRVfarm processes (on the local client and on remote clients) also attempt to lock that task file, but fail while the task is already running, because the file is held locked by another LRVfarm process. Periodically, the clients crash (kernel panic) with different errors. Here are the crash statistics for the recent period:
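The locking pattern described above can be reproduced in miniature with flock(1). This is a hypothetical sketch of what the workers do, not LRVfarm's actual code: one worker holds an exclusive lock on a task file while "processing", and a second worker's non-blocking lock attempt fails, just as the report describes for contending LRVfarm processes:

```shell
taskfile=$(mktemp)

# Worker 1: take an exclusive lock on the task file and hold it
# while "processing" (the sleep stands in for real work).
(
  flock -x 9
  sleep 2
) 9>"$taskfile" &
worker1=$!
sleep 0.5   # give worker 1 time to acquire the lock

# Worker 2: a non-blocking lock attempt fails while the task is running,
# which is the contention pattern the clients see across the cluster.
if ( flock -n -x 9 ) 9>"$taskfile"; then
  result="worker 2 got the lock"
else
  result="worker 2: task file already locked"
fi
echo "$result"

wait "$worker1"
rm -f "$taskfile"
```

In the real deployment the locks are taken over Lustre (the mount uses the `flock` option), so lock state is coordinated between clients rather than on one node.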

Client r01
==================
2013-12-30-13:27:09 ASSERTION( stripe < lio->lis_stripe_count ) failed:
2013-12-18-17:53:58 ASSERTION( stripe < lio->lis_stripe_count ) failed:
2013-11-27-11:33:23 ASSERTION( stripe < lio->lis_stripe_count ) failed:
2013-11-22-19:13:07 ASSERTION( stripe < lio->lis_stripe_count ) failed:
2013-10-29-18:38:51 ASSERTION( stripe < lio->lis_stripe_count ) failed:
2013-10-06-13:47:27 osc_lock_detach+0x51/0x1b0 [osc]
2013-09-29-07:10:08 ASSERTION( stripe < lio->lis_stripe_count ) failed:
2013-08-29-16:34:36 ASSERTION( stripe < lio->lis_stripe_count ) failed:
2013-08-08-20:42:27 osc_lock_detach+0x51/0x1b0 [osc]

Client r02
==================
2014-01-13-16:50:05 ASSERTION( stripe < lio->lis_stripe_count ) failed:
2014-01-07-23:05:13 ASSERTION( stripe < lio->lis_stripe_count ) failed:
2013-12-25-10:01:04 ASSERTION( stripe < lio->lis_stripe_count ) failed:
2013-12-16-00:34:43 ASSERTION( stripe < lio->lis_stripe_count ) failed:
2013-11-25-12:54:33 ASSERTION( stripe < lio->lis_stripe_count ) failed:
2013-11-14-21:59:24 ASSERTION( stripe < lio->lis_stripe_count ) failed:
2013-09-11-15:57:46 ASSERTION( stripe < lio->lis_stripe_count ) failed:

Client r03
==================
2013-11-06-20:25:26 ASSERTION( stripe < lio->lis_stripe_count ) failed:
2013-10-16-19:04:01 ASSERTION( stripe < lio->lis_stripe_count ) failed:
2013-10-14-18:38:01 ASSERTION( stripe < lio->lis_stripe_count ) failed:
2013-09-30-19:26:50 ASSERTION( stripe < lio->lis_stripe_count ) failed:
2013-09-12-19:55:17 ASSERTION( stripe < lio->lis_stripe_count ) failed:
2013-09-10-11:22:25 ASSERTION( stripe < lio->lis_stripe_count ) failed:
2013-08-19-01:19:55 ASSERTION( stripe < lio->lis_stripe_count ) failed:
2013-08-13-10:15:56 ASSERTION( stripe < lio->lis_stripe_count ) failed:

Client r04
==================
2013-12-08-06:19:11 cl_lock_mutex_get+0x2e/0xe0 [obdclass]
2013-12-03-13:32:04 ASSERTION( stripe < lio->lis_stripe_count ) failed:
2013-11-16-08:49:25 ASSERTION( stripe < lio->lis_stripe_count ) failed:
2013-10-21-22:14:46 ASSERTION( stripe < lio->lis_stripe_count ) failed:
2013-10-13-18:48:27 osc_lock_detach+0x51/0x1b0 [osc]
2013-09-01-14:26:14 osc_lock_detach+0x51/0x1b0 [osc]

Generated at Sat Feb 10 01:36:43 UTC 2024 using Jira 9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c.