[LU-4290] osp_sync_threads encounters EIO on mount Created: 22/Nov/13 Updated: 20/Mar/14 Resolved: 20/Mar/14 |
|
| Status: | Resolved |
| Project: | Lustre |
| Component/s: | None |
| Affects Version/s: | Lustre 2.4.1 |
| Fix Version/s: | Lustre 2.6.0 |
| Type: | Bug | Priority: | Critical |
| Reporter: | Blake Caldwell | Assignee: | Alex Zhuravlev |
| Resolution: | Fixed | Votes: | 0 |
| Labels: | None | ||
| Environment: | RHEL 6.4/distro IB |
| Attachments: |
|
| Severity: | 2 |
| Rank (Obsolete): | 11773 |
| Description |
|
We encountered this assertion in production; libcfs_panic_on_lbug was set to 1, so the server rebooted. On mount, the same assertion and LBUG would occur. The filesystem will mount with panic_on_lbug set to 0. We have captured a crash dump and Lustre log messages with the debug flags:
[root@atlas-mds3 ~]# cat /proc/sys/lnet/debug
We ran e2fsck, and it only fixed the quota inconsistencies it found. At the moment, we are back in production after the osp_sync_threads LBUGs on mount. There are hung task messages about osp_sync_threads, as would be expected. We want to fix the root issue that is causing the assertions.
Kernel messages during one of the failed mounts:
We also see this message: |
| Comments |
| Comment by Peter Jones [ 22/Nov/13 ] |
|
Alex, could you please comment on this one? Thanks, Peter |
| Comment by Alex Zhuravlev [ 22/Nov/13 ] |
|
Hi Blake, do you have lustre logs for the case? |
| Comment by Blake Caldwell [ 22/Nov/13 ] |
|
Similar to |
| Comment by Alex Zhuravlev [ 22/Nov/13 ] |
|
strange.. it looks like the log file is short. the last claimed record is 64767:
00000040:00001000:15.0:1385095225.257089:0:19973:0:(llog.c:344:llog_process_thread()) index: 3430 last_index 64767
but we aren't able to find 3429 even:
00000040:00001000:23.0:1385095225.346339:0:19973:0:(llog_osd.c:553:llog_osd_next_block()) looking for log index 3430 (cur idx 3430 off 227712)
what kind of storage do you use? is it a software raid? have you encountered power loss issues or something similar? |
| Comment by Blake Caldwell [ 22/Nov/13 ] |
|
It's a SAS-attached NetApp 5524 with redundant controllers. There were no power issues. However, there were problems on the hardware side: at the time, some code was being tested that was read-only but triggered an error on the storage device. The messages below are representative of the ones seen. The EIO errors below should have been returned to the testing code, not a Lustre process; however, something related caused an EIO to reach osp_sync_threads and trip the LBUG around 20:20.
Nov 21 20:01:18 atlas-mds3 kernel: [3408100.021270] sd 6:0:13:0: [sdp] Result: hostbyte=DID_SOFT_ERROR driverbyte=DRIVER_OK
The system rebooted at ~20:20 and the last messages in syslog were: |
| Comment by Alex Zhuravlev [ 22/Nov/13 ] |
|
I can't say for sure, but llog code detected missing records in the file, which might be a result of I/O issues. |
| Comment by Blake Caldwell [ 22/Nov/13 ] |
|
Which file should we delete for the OST llog? |
| Comment by Blake Caldwell [ 22/Nov/13 ] |
|
We will have an opportunity next Tuesday if more work is needed (truncate the llog?). While running with hung osp_sync_threads tasks, are we turning away client I/O for that OST? |
| Comment by Alex Zhuravlev [ 22/Nov/13 ] |
|
the responsibility of osp_sync_thread is to destroy OST objects once the corresponding files are unlinked. technically it should be OK to run the MDS with an OSP thread stuck, but I'd rather remove the file manually...
00000040:00080000:15.0:1385095225.257044:0:19973:0:(llog_cat.c:558:llog_cat_process_cb()) processing log 0x18a3:1:0 at index 6 of catalog 0x204:1
so it should be /O/1/d3/6307. I'd verify this with the llog_reader utility; the file's size should be a bit more than 227712. it'd be great if you can attach the output of llog_reader here. once the file is confirmed you can remove it manually and start the MDS. |
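A minimal sketch of that check, run against a copy of the llog once it has been pulled off the MDT (the /tmp path is only a placeholder; one way to extract the file is shown in the debugfs sketch further below):
stat -c %s /tmp/6307.llog    # expect a size somewhat above 227712, per the offset in the log line above
llog_reader /tmp/6307.llog   # decode and print the plain llog records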
| Comment by David Dillow [ 22/Nov/13 ] |
|
We'll need to find four files IIRC, since we had four separate OST processes LBUG, correct? Blake, we should see how many of the llog_cat.c:558 messages we can find and correlate with the 4 threads that died; we may have to iterate a few times to get them all. |
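One way to collect those llog_cat.c:558 lines from the captured Lustre debug log (the dump file name is only a placeholder):
grep 'llog_cat_process_cb' /tmp/lustre-debug.log
# each matching 'processing log 0x....:1:0' line names one plain llog to check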
| Comment by David Dillow [ 22/Nov/13 ] |
|
Alex, can you comment on the mapping from 0x18a3:1:0 to /O/1/d3/6307 so we can replicate it for the other logs? |
| Comment by David Dillow [ 22/Nov/13 ] |
|
Here are the logs being processed by the four threads that LBUG'd: |
| Comment by Alex Zhuravlev [ 22/Nov/13 ] |
|
0x18a3:1:0 - 1 is a sequence, so /O/1 is a hierarchy storing all the objects from sequence 1. 0x18a3 is the object id; in decimal:
(gdb) p 0x18a3
$1 = 6307
hence /O/1/d3/6307. I'd suggest renaming the files to something like 6307.B rather than just removing them, so they can be recovered if needed. |
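A small sketch of replicating that mapping for the other log ids (the d<N> subdirectory being the object id modulo 32 is an assumption about the ldiskfs object layout, not something stated above):
# hex log id -> decimal object id, placed under the d<objid % 32> subdirectory
printf '/O/1/d%d/%d\n' $((0x18a3 % 32)) $((0x18a3))
# prints /O/1/d3/6307; substitute the other log ids accordingly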
| Comment by Blake Caldwell [ 22/Nov/13 ] |
|
Alex, we pulled the files below from debugfs and put them through llog_reader. The output of each (0x18a4, 0x18a3, 0x18a2, 0x18a1) is attached. We compared them to another log file that was processed successfully, which was empty. We will wait to hear whether these look good, then make a copy and remove them. |
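For reference, a hedged sketch of that extraction path (the MDT device name and output paths are placeholders, not taken from this system):
# dump a suspect llog object out of the unmounted MDT with debugfs (e2fsprogs)
debugfs -c -R 'dump /O/1/d3/6307 /tmp/6307.llog' /dev/mdt_device
llog_reader /tmp/6307.llog > /tmp/6307.llog.txt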
| Comment by Blake Caldwell [ 22/Nov/13 ] |
|
If someone could please confirm whether the attached llogs contain the expected information, we will proceed with renaming them. Until then we will operate in this non-optimal state. Thanks. |
| Comment by Alex Zhuravlev [ 25/Nov/13 ] |
|
looks so. I suggest renaming them using a direct ldiskfs mount and restarting the MDS. |
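A sketch of that step, assuming the MDT is first unmounted from Lustre (device path, mount point, and the .B suffix are illustrative, following the earlier suggestion; repeat the rename for each confirmed llog):
mount -t ldiskfs /dev/mdt_device /mnt/mdt
# rename rather than delete, so the file can be recovered if needed
mv /mnt/mdt/O/1/d3/6307 /mnt/mdt/O/1/d3/6307.B
umount /mnt/mdt
# then remount the MDT as type lustre to restart the MDS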
| Comment by Blake Caldwell [ 26/Nov/13 ] |
|
Renaming the files in ldiskfs resolved the mount issues. The mount was successful; it reported a failure reading 2 llogs, and the messages for the other 2 were likely rate-limited by rsyslog.
Nov 26 09:33:17 atlas-mds3 kernel: [ 251.406572] Lustre: Lustre: Build Version: 2.4.1--CHANGED-2.6.32-358.18.1.el6.atlas.x86_64 |
| Comment by Blake Caldwell [ 26/Nov/13 ] |
|
When looking at the mtimes of the other llogs, we noticed that they corresponded to the time of the last successful mount. Note that 6305 was Nov 5 12:07. The output of llog_reader contains the same:
Now the question is why these logs got truncated during normal operation. We identified that the MDT returned an error code not handled by the SCSI layer. So if the other llogs were consumed and cleared on Lustre mount (is that correct?), they don't appear to get appended/committed to in normal operation. Why would an EIO affect the llogs?
[root@atlas-mds3 d1]# ls -l |
| Comment by Andreas Dilger [ 27/Nov/13 ] |
|
Alex, it is never good to assert on IO errors from the disk. Should this be converted to an error and handled more gracefully? |
| Comment by Alex Zhuravlev [ 28/Nov/13 ] |
|
yes, definitely. I'm thinking on what would be a good reaction here. just skip such a log? remove it? |
| Comment by Andreas Dilger [ 29/Nov/13 ] |
|
Blake, Alex, |
| Comment by Jason Hill (Inactive) [ 10/Feb/14 ] |
|
So the last discussion point was about a forward-looking fix - has any work occurred? Should we create another LU for that effort and go ahead and close this issue out? |
| Comment by Alex Zhuravlev [ 12/Feb/14 ] |
|
Jason, I've been working on an automated test for this and similar issues. |
| Comment by Alex Zhuravlev [ 14/Feb/14 ] |
|
a preliminary patch: http://review.whamcloud.com/#/c/9281/ |
| Comment by James Nunez (Inactive) [ 13/Mar/14 ] |
|
The patch http://review.whamcloud.com/#/c/9281/ landed for 2.6, though it was described as "preliminary". Does anything else need to be done to complete this ticket? |
| Comment by Blake Caldwell [ 13/Mar/14 ] |
|
Great! This ticket can be closed now. |
| Comment by Peter Jones [ 20/Mar/14 ] |
|
I checked with Alex and he agrees to close this ticket and track any further similar work under new tickets. |