[LU-3433] Encountered an assertion for the ols_state being set to an impossible state Created: 03/Jun/13 Updated: 12/Nov/14 Resolved: 20/Feb/14 |
|
| Status: | Resolved |
| Project: | Lustre |
| Component/s: | None |
| Affects Version/s: | Lustre 2.4.0, Lustre 2.4.1 |
| Fix Version/s: | Lustre 2.5.0 |
| Type: | Bug | Priority: | Major |
| Reporter: | James A Simmons | Assignee: | Niu Yawei (Inactive) |
| Resolution: | Fixed | Votes: | 0 |
| Labels: | mn4 |
| Environment: | Cray Lustre 2.4 clients running on SLES11 SP2. |
| Attachments: | |
| Issue Links: | |
| Severity: | 3 |
| Rank (Obsolete): | 8552 |
| Description |
|
During a test run using Lustre 2.4, one of our clients encountered this LBUG:

[2013-05-29 18:39:47][c7-1c1s3n3]LustreError:16573:0:(osc_lock.c:1165:osc_lock_enqueue()) ASSERTION( ols->ols_state == OLS_NEW ) failed: Impossible state: 4 |
| Comments |
| Comment by Peter Jones [ 03/Jun/13 ] |
|
Niu, could you please look into this one? Thanks, Peter |
| Comment by Patrick Farrell (Inactive) [ 03/Jun/13 ] |
|
Cray has recently noticed this LBUG while testing for a possible different source for the EINTR reported in

I'll attach the reproducer for this bug for write/truncate. Briefly, the reproducer spawns a child thread; the parent then sets up a signal handler for SIGINT with SA_RESTART set and loops the I/O operation in question (it's set up for write or truncate, and could be modified to use read), while the child spams SIGINT at the parent.

This bug is much easier to reproduce with +trace debugging enabled. Without it, reproduction usually requires looping the reproducer, sometimes for several minutes. The provided shell script loops the reproducer. |
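For reference, a minimal sketch of the reproducer described in the comment above, under stated assumptions: this is a reconstruction rather than the attachment itself, and the file path, buffer size, and signal interval are placeholders.

```c
/*
 * Minimal sketch of the reproducer described above (reconstruction,
 * not the attached reproducer; path, buffer size and signal interval
 * are placeholders).  Build with: gcc -o repro repro.c -lpthread
 */
#include <fcntl.h>
#include <pthread.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static pthread_t main_thread;

/* Empty handler: with SA_RESTART the interrupted syscall is restarted. */
static void sigint_handler(int sig)
{
	(void)sig;
}

/* Child thread: spam SIGINT at the parent (main) thread forever. */
static void *spammer(void *arg)
{
	(void)arg;
	for (;;) {
		pthread_kill(main_thread, SIGINT);
		usleep(100);	/* placeholder interval */
	}
	return NULL;
}

int main(int argc, char **argv)
{
	const char *path = argc > 1 ? argv[1] : "/mnt/lustre/repro_file";
	char buf[4096];
	struct sigaction sa;
	pthread_t tid;
	int fd;

	memset(buf, 'x', sizeof(buf));

	/* Parent installs a SIGINT handler with SA_RESTART set. */
	memset(&sa, 0, sizeof(sa));
	sa.sa_handler = sigint_handler;
	sa.sa_flags = SA_RESTART;
	sigaction(SIGINT, &sa, NULL);

	fd = open(path, O_CREAT | O_WRONLY, 0644);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	main_thread = pthread_self();
	pthread_create(&tid, NULL, spammer, NULL);

	/* Loop the I/O operation in question; switch pwrite() to
	 * ftruncate() to exercise the truncate variant.  Runs until
	 * killed (or until the client LBUGs). */
	for (;;) {
		if (pwrite(fd, buf, sizeof(buf), 0) < 0)
			perror("pwrite");
	}
	return 0;
}
```

A driving shell script, as mentioned in the comment, could simply rerun the binary in a while loop until the node crashes.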
| Comment by Niu Yawei (Inactive) [ 04/Jun/13 ] |
|
This looks like a regression introduced by |
| Comment by James A Simmons [ 04/Jun/13 ] |
|
Do you have a reproducer for the |
| Comment by Patrick Farrell (Inactive) [ 04/Jun/13 ] |
|
James,

It's the same reproducer, actually. This one - as long as it's in write mode and not truncate [it might work in truncate, I haven't tested] - should reproduce

Niu, I'm not sure if I could reproduce it - I can try a bit later. Without

Presumably, when a system call is restarted, sometimes we wind up in this nasty state. If EINTR is returned instead (
|
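For context on the restart-vs-EINTR distinction drawn above, here is a generic POSIX sketch in ordinary user-space C, not Lustre code; the helper name write_retry_eintr is hypothetical.

```c
/*
 * Generic POSIX sketch (not Lustre code) of the two interruption
 * behaviours discussed above.  With SA_RESTART the kernel transparently
 * restarts a write() interrupted by the handler; without SA_RESTART the
 * write() fails with errno == EINTR and user space decides whether to
 * retry.
 */
#include <errno.h>
#include <signal.h>
#include <string.h>
#include <unistd.h>

static void on_sigint(int sig)
{
	(void)sig;
}

/* Caller-side retry loop, used when the handler is installed without
 * SA_RESTART and interrupted writes surface as EINTR. */
static ssize_t write_retry_eintr(int fd, const void *buf, size_t len)
{
	ssize_t rc;

	do {
		rc = write(fd, buf, len);
	} while (rc < 0 && errno == EINTR);
	return rc;
}

int main(void)
{
	const char msg[] = "hello\n";
	struct sigaction sa;

	memset(&sa, 0, sizeof(sa));
	sa.sa_handler = on_sigint;

	/* Variant 1: SA_RESTART, as used by the reproducer above; an
	 * interrupted write() is restarted inside the kernel. */
	sa.sa_flags = SA_RESTART;
	sigaction(SIGINT, &sa, NULL);
	write(STDOUT_FILENO, msg, sizeof(msg) - 1);

	/* Variant 2: no SA_RESTART; an interrupted write() returns -1
	 * with EINTR and the caller retries explicitly. */
	sa.sa_flags = 0;
	sigaction(SIGINT, &sa, NULL);
	write_retry_eintr(STDOUT_FILENO, msg, sizeof(msg) - 1);

	return 0;
}
```

Per the comment above, the reproducer uses the first variant; the syscall-restart path is presumably what drives the lock back through enqueue in an unexpected state.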
| Comment by Patrick Farrell (Inactive) [ 07/Jun/13 ] |
|
Niu, James, We have a candidate patch for this from Xyratex that we're testing on our end. It appears to resolve the issue, and once I'm sure it's solid, we'll make sure it's pushed upstream.
|
| Comment by Vitaly Fertman [ 19/Jun/13 ] |
| Comment by Niu Yawei (Inactive) [ 27/Aug/13 ] |
|
patch landed for 2.5 |
| Comment by Jian Yu [ 10/Dec/13 ] |
|
Lustre Build: http://build.whamcloud.com/job/lustre-b2_4/66/

The same failure occurred while running parallel-scale test write_disjoint. Console log on client node:

17:35:46:Lustre: DEBUG MARKER: == parallel-scale test write_disjoint: write_disjoint == 17:31:44 (1386639104)
17:35:46:LustreError: 12360:0:(osc_lock.c:1165:osc_lock_enqueue()) ASSERTION( ols->ols_state == OLS_NEW ) failed: Impossible state: 6
17:35:46:LustreError: 12360:0:(osc_lock.c:1165:osc_lock_enqueue()) LBUG
17:35:46:Pid: 12360, comm: write_disjoint
17:35:47:
17:35:47:Call Trace:
17:35:47: [<ffffffffa19ac895>] libcfs_debug_dumpstack+0x55/0x80 [libcfs]
17:35:47: [<ffffffffa19ace97>] lbug_with_loc+0x47/0xb0 [libcfs]
17:35:47: [<ffffffffa05e4fc0>] ? osc_lock_enqueue+0x0/0x870 [osc]
17:35:47: [<ffffffffa05e5674>] osc_lock_enqueue+0x6b4/0x870 [osc]
17:35:47: [<ffffffffa19ad717>] ? cfs_waitq_broadcast+0x17/0x20 [libcfs]
17:35:47: [<ffffffffa1b15f5e>] ? cl_lock_state_signal+0x7e/0x160 [obdclass]
17:35:47: [<ffffffffa1b19d4c>] cl_enqueue_try+0xfc/0x300 [obdclass]
17:35:47: [<ffffffffa06620da>] lov_lock_enqueue+0x22a/0x850 [lov]
17:35:47: [<ffffffffa1b19d4c>] cl_enqueue_try+0xfc/0x300 [obdclass]
17:35:47: [<ffffffffa1b1b13f>] cl_enqueue_locked+0x6f/0x1f0 [obdclass]
17:35:47: [<ffffffffa1b1bdae>] cl_lock_request+0x7e/0x270 [obdclass]
17:35:47: [<ffffffffa1b2104c>] cl_io_lock+0x3cc/0x560 [obdclass]
17:35:48: [<ffffffffa1b21282>] cl_io_loop+0xa2/0x1b0 [obdclass]
17:35:48: [<ffffffffa06e09e0>] ll_file_io_generic+0x450/0x600 [lustre]
17:35:48: [<ffffffffa06e1922>] ll_file_aio_write+0x142/0x2c0 [lustre]
17:35:48: [<ffffffffa06e1c0c>] ll_file_write+0x16c/0x2a0 [lustre]
17:35:48: [<ffffffff81181398>] vfs_write+0xb8/0x1a0
17:35:48: [<ffffffff81181c91>] sys_write+0x51/0x90
17:35:48: [<ffffffff810dc685>] ? __audit_syscall_exit+0x265/0x290
17:35:48: [<ffffffff8100b072>] system_call_fastpath+0x16/0x1b
17:35:48:
17:35:48:Kernel panic - not syncing: LBUG

Patch back-ported to Lustre b2_4 branch: http://review.whamcloud.com/8530 |
| Comment by Jian Yu [ 17/Dec/13 ] |
|
Patch landed on Lustre b2_4 branch for 2.4.2. |
| Comment by Andreas Dilger [ 19/Dec/13 ] |
|
Patch is being reverted from b2_4 due to test failures: http://review.whamcloud.com/8621 |
| Comment by Christopher Morrone [ 30/Jan/14 ] |
|
We saw the same assertion in production at LLNL with Lustre 2.4.0-19chaos. Unfortunately it did not dump the backtrace to the console, and the crash dump failed. But the assertion and particular unknown state (4) were the same as in this ticket, so I am assuming that it is the same bug. Since the fix needed to be reverted, it would appear that this issue needs to be reopened. |
| Comment by Patrick Farrell (Inactive) [ 07/Feb/14 ] |
|
Christopher - I'm not sure what exactly your 2.4.0-19 consists of, but Intel's 2.4.0 does not have this patch reverted. Only if you're pulling updated sources from the b2.4 (or whatever it's called - sorry) branch in the repo would you have the patch reverted. If you've got 2.4.0 (or even 2.4.1, I believe), you wouldn't have the patch to begin with. |
| Comment by Christopher Morrone [ 07/Feb/14 ] |
|
Patrick - I understand that. We hit the bug described in this ticket, not the bug caused by the fix for this ticket. We would like to be able to cherry-pick the fix so that our clients do not hit this assertion, but if the patch needed to be reverted, it is not yet safe for me to do so. |
| Comment by Niu Yawei (Inactive) [ 08/Feb/14 ] |
|
The fix of

Bob, what's your opinion? I see the revert patch was uploaded by you; why did we revert this patch instead of backporting the |
| Comment by Peter Jones [ 08/Feb/14 ] |
|
Just timing. At the time the fix for |
| Comment by Niu Yawei (Inactive) [ 10/Feb/14 ] |
|
ported |
| Comment by Karsten Weiss [ 25/Mar/14 ] |
|
Niu/Peter, will this backport be merged into b2_4 (for 2.4.4)? I saw this bug on three Lustre 2.4.2 clients and AFAICS your patch is not included in 2.4.3 (we're using this version now) or b2_4. |
| Comment by Peter Jones [ 25/Mar/14 ] |
|
Karsten

This would certainly be under consideration for a fuller 2.4.x maintenance release. 2.4.3 had limited content because it was an unscheduled release driven by the need to issue a 2.4.x release to address the security vulnerability discovered (

Peter |