[LU-4290] osp_sync_threads encounters EIO on mount Created: 22/Nov/13  Updated: 20/Mar/14  Resolved: 20/Mar/14

Status: Resolved
Project: Lustre
Component/s: None
Affects Version/s: Lustre 2.4.1
Fix Version/s: Lustre 2.6.0

Type: Bug Priority: Critical
Reporter: Blake Caldwell Assignee: Alex Zhuravlev
Resolution: Fixed Votes: 0
Labels: None
Environment:

RHEL 6.4/distro IB


Attachments: File 6305.llog_out     File 6306.llog_out     File 6307.llog_out     File 6308.llog_out     File lustre-log.1385095225.19969.gz     File lustre-log.1385095225.19971.gz     File lustre-log.1385095225.19973.gz     File lustre-log.1385095225.19975.gz    
Severity: 2
Rank (Obsolete): 11773

 Description   

We encountered this assertion in production; libcfs_panic_on_lbug was set to 1, so the server rebooted. On mount, the same assertion and LBUG would occur. The filesystem will mount with panic_on_lbug set to 0. We've captured a crash dump and Lustre log messages with the following debug flags:

[root@atlas-mds3 ~]# cat /proc/sys/lnet/debug
trace ioctl neterror warning other error emerg ha config console

We ran e2fsck:
e2fsck -f -j /dev/mapper/atlas2-mdt1-journal /dev/mapper/atlas2-mdt1

which only fixed the quota inconsistencies it found.

At the moment, we are back in production after the osp_sync_thread LBUGs on mount. There are hung task messages about osp_sync_threads, as would be expected. We want to fix the root issue that is causing the assertions.

Kernel messages during one of the failed mounts:
Nov 21 21:16:44 atlas-mds3 kernel: [ 911.319839] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. quota=on. Opts:
Nov 21 21:16:44 atlas-mds3 kernel: [ 911.986208] Lustre: mdt_num_threads module parameter is deprecated, use mds_num_threads instead or unset both for dynamic thread startup
Nov 21 21:16:46 atlas-mds3 kernel: [ 913.069371] Lustre: atlas2-MDT0000: used disk, loading
Nov 21 21:16:47 atlas-mds3 kernel: [ 914.261572] LustreError: 18945:0:(osp_sync.c:862:osp_sync_thread()) ASSERTION( rc == 0 || rc == LLOG_PROC_BREAK ) failed: 0 changes, 0 in progress, 0 in flight: -5
Nov 21 21:16:47 atlas-mds3 kernel: [ 914.278318] LustreError: 18945:0:(osp_sync.c:862:osp_sync_thread()) LBUG
Nov 21 21:16:47 atlas-mds3 kernel: [ 914.286036] Pid: 18945, comm: osp-syn-256
Nov 21 21:16:47 atlas-mds3 kernel: [ 914.290841]
Nov 21 21:16:47 atlas-mds3 kernel: [ 914.290844] Call Trace:

We also see this message:
Nov 21 23:01:01 atlas-mds3 kernel: [ 1512.633528] ERST: NVRAM ERST Log Address Range is not implemented yet



 Comments   
Comment by Peter Jones [ 22/Nov/13 ]

Alex

Could you please comment on this one?

Thanks

Peter

Comment by Alex Zhuravlev [ 22/Nov/13 ]

Hi Blake, do you have lustre logs for the case?

Comment by Blake Caldwell [ 22/Nov/13 ]

This is similar to the LU-3063 assertion with EIO, but here it was on the MDT and there was no memory pressure.

Comment by Alex Zhuravlev [ 22/Nov/13 ]

Strange... it looks like the log file is short. The last claimed record is 64767:

00000040:00001000:15.0:1385095225.257089:0:19973:0:(llog.c:344:llog_process_thread()) index: 3430 last_index 64767

but we aren't able to find even 3429:

00000040:00001000:23.0:1385095225.346339:0:19973:0:(llog_osd.c:553:llog_osd_next_block()) looking for log index 3430 (cur idx 3430 off 227712)
00000040:00000001:23.0:1385095225.346342:0:19973:0:(llog_osd.c:656:llog_osd_next_block()) Process leaving via out (rc=18446744073709551611 : -5 : 0xfffffffffffffffb)

what kind of storage do you use? is it a software raid? have you encountered power loss issues or something similar?

Comment by Blake Caldwell [ 22/Nov/13 ]

It's a SAS-attached NetApp 5524 with redundant controllers. There were no power issues. However, there were problems on the hardware side: at the time, some code was being tested that was read-only but triggered an error on the storage device. The messages below are representative of the ones seen. The EIO errors below should have been returned to the testing code, not to a Lustre process. However, something related caused EIO to be returned to the osp_sync threads, tripping the LBUG around 20:20.

Nov 21 20:01:18 atlas-mds3 kernel: [3408100.021270] sd 6:0:13:0: [sdp] Result: hostbyte=DID_SOFT_ERROR driverbyte=DRIVER_OK
Nov 21 20:01:18 atlas-mds3 kernel: [3408100.030243] sd 6:0:13:0: [sdp] CDB: Read(10): 28 00 dc 60 01 00 00 3c 00 00
Nov 21 20:01:18 atlas-mds3 kernel: [3408100.040522] device-mapper: multipath: Failing path 8:240.
Nov 21 20:01:19 atlas-mds3 kernel: [3408100.389700] sd 6:0:12:0: [sdk] Unhandled error code
Nov 21 20:01:19 atlas-mds3 kernel: [3408100.395458] sd 6:0:12:0: [sdk] Result: hostbyte=DID_SOFT_ERROR driverbyte=DRIVER_OK
Nov 21 20:01:19 atlas-mds3 kernel: [3408100.404427] sd 6:0:12:0: [sdk] CDB: Read(10): 28 00 dc 60 01 00 00 3c 00 00
Nov 21 20:01:19 atlas-mds3 kernel: [3408100.412758] device-mapper: multipath: Failing path 8:160.
Nov 21 20:01:19 atlas-mds3 kernel: [3408100.766397] sd 6:0:8:0: [sdaj] Unhandled error code
Nov 21 20:01:19 atlas-mds3 kernel: [3408100.772150] sd 6:0:8:0: [sdaj] Result: hostbyte=DID_SOFT_ERROR driverbyte=DRIVER_OK
Nov 21 20:01:19 atlas-mds3 kernel: [3408100.781131] sd 6:0:8:0: [sdaj] CDB: Read(10): 28 00 dc 60 01 00 00 3c 00 00
Nov 21 20:01:19 atlas-mds3 kernel: [3408100.789487] device-mapper: multipath: Failing path 66:48.
Nov 21 20:01:21 atlas-mds3 kernel: [3408102.801256] Buffer I/O error on device dm-2, logical block 404822017
Nov 21 20:01:21 atlas-mds3 kernel: [3408102.808691] lost page write due to I/O error on dm-2
Nov 21 20:01:21 atlas-mds3 kernel: [3408102.814580] end_request: I/O error, dev dm-2, sector 3239118328
Nov 21 20:01:21 atlas-mds3 kernel: [3408102.975980] end_request: I/O error, dev dm-2, sector 3239442680
Nov 21 20:01:21 atlas-mds3 kernel: [3408102.982901] end_request: I/O error, dev dm-2, sector 3239442944
Nov 21 20:01:21 atlas-mds3 kernel: [3408102.989845] end_request: I/O error, dev dm-2, sector 3239447520
Nov 21 20:01:21 atlas-mds3 kernel: [3408102.996781] end_request: I/O error, dev dm-2, sector 3239462088

The system rebooted at ~20:20 and the last messages in syslog were:
Nov 21 20:13:48 atlas-mds3 kernel: [3408848.596067] sd 6:0:8:0: [sdaj] CDB: Read(10): 28 00 dc 60 01 08 00 18 00 00
Nov 21 20:13:48 atlas-mds3 kernel: [3408848.604397] device-mapper: multipath: Failing path 66:48.

Comment by Alex Zhuravlev [ 22/Nov/13 ]

I can't say for sure, but the llog code detected missing records in the file, which might be a result of I/O issues.
The simplest solution is to remove that file, at the cost of orphaned objects on the OST (i.e. lost disk space) until LFSCK is run.

Comment by Blake Caldwell [ 22/Nov/13 ]

Which file do we delete for the OST llog?

Comment by Blake Caldwell [ 22/Nov/13 ]

We will have an opportunity next Tuesday if more work is needed (truncate the llog?). While running with hung osp_sync_thread tasks, are we turning away client I/O for that OST?

Comment by Alex Zhuravlev [ 22/Nov/13 ]

The responsibility of osp_sync_thread is to destroy OST objects once the corresponding files are unlinked. Technically it should be OK to run the MDS with an OSP thread stuck, but I'd rather remove the file manually...

00000040:00080000:15.0:1385095225.257044:0:19973:0:(llog_cat.c:558:llog_cat_process_cb()) processing log 0x18a3:1:0 at index 6 of catalog 0x204:1

So it should be /O/1/d3/6307. I'd verify this with the llog_reader utility; the file's size should be a bit more than 227712 bytes. It would be great if you could attach the llog_reader output here. Once the file is confirmed, you can remove it manually and start the MDS.
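
A minimal sketch of how the file could be pulled off the device and inspected, assuming the MDT is not mounted as Lustre at the time; the /tmp file names are illustrative and the device path is the one from the e2fsck run above:

# Dump the suspect llog object from the MDT backing device (debugfs opens it read-only).
debugfs -R 'dump /O/1/d3/6307 /tmp/6307.llog' /dev/mapper/atlas2-mdt1

# The file size is expected to be only a bit over 227712 bytes.
ls -l /tmp/6307.llog

# Decode the records so the output can be attached to the ticket.
llog_reader /tmp/6307.llog > /tmp/6307.llog_out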

Comment by David Dillow [ 22/Nov/13 ]

We'll need to find four files IIRC, since we had four separate OST processes LBUG, correct?

Blake, we should see how many of the llog_cat.c:558 messages we can find and correlate with the 4 threads that died; we may have to iterate a few times to get them all.

Comment by David Dillow [ 22/Nov/13 ]

Alex, can you comment on the mapping from 0x18a3:1:0 to /O/1/d3/6307 so we can replicate it for the other logs?

Comment by David Dillow [ 22/Nov/13 ]

Here are the logs being processed by the four threads that LBUG'd:
00000040:00080000:15.0:1385095225.247128:0:19969:0:(llog_cat.c:558:llog_cat_process_cb()) processing log 0x18a1:1:0 at index 6 of catalog 0x200:1
00000040:00080000:15.0:1385095225.253880:0:19971:0:(llog_cat.c:558:llog_cat_process_cb()) processing log 0x18a2:1:0 at index 6 of catalog 0x202:1
00000040:00080000:15.0:1385095225.257044:0:19973:0:(llog_cat.c:558:llog_cat_process_cb()) processing log 0x18a3:1:0 at index 6 of catalog 0x204:1
00000040:00080000:15.0:1385095225.261700:0:19975:0:(llog_cat.c:558:llog_cat_process_cb()) processing log 0x18a4:1:0 at index 6 of catalog 0x206:1

Comment by Alex Zhuravlev [ 22/Nov/13 ]

In 0x18a3:1:0, the 1 is the sequence, so /O/1 is the hierarchy storing all the objects from sequence 1.

(gdb) p 0x18a3
$1 = 6307
(gdb) p 0x18a3 & 31
$1 = 3

/O/1/d3/6307
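
The same arithmetic works for the other three log ids reported above; a quick shell sketch, assuming the directory index is simply the object id masked with 31 as in the gdb session above:

# Map each object id to its /O/<seq>/d<oid & 31>/<oid> path for sequence 1.
for oid in 0x18a1 0x18a2 0x18a3 0x18a4; do
    printf '/O/1/d%d/%d\n' $((oid & 31)) $((oid))
done
# Prints /O/1/d1/6305, /O/1/d2/6306, /O/1/d3/6307 and /O/1/d4/6308.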

I'd suggest renaming the files to something like 6307.B rather than just removing them, so they can be recovered if needed.

Comment by Blake Caldwell [ 22/Nov/13 ]

Alex,

We pulled the files below out with debugfs and ran them through llog_reader. The output for each (0x18a4, 0x18a3, 0x18a2, 0x18a1) is attached. We compared them to another log file that was processed successfully, which was empty.

We will wait to hear whether these look good, then make a copy and remove them.

Comment by Blake Caldwell [ 22/Nov/13 ]

If someone could please confirm whether the attached llogs contain the expected information, we will proceed with renaming them. Until then we will operate in this non-optimal state. Thanks.

Comment by Alex Zhuravlev [ 25/Nov/13 ]

Looks so. I suggest renaming them using a direct ldiskfs mount and restarting the MDS.
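
A rough sketch of that procedure, assuming the MDT service is stopped and the ldiskfs module is available; the mount point is illustrative and the four object names are the ones derived from the mapping above:

# Mount the MDT backing device as plain ldiskfs (not as Lustre).
mkdir -p /mnt/mdt-ldiskfs
mount -t ldiskfs /dev/mapper/atlas2-mdt1 /mnt/mdt-ldiskfs

# Rename rather than delete, so the llogs can be recovered if needed.
cd /mnt/mdt-ldiskfs
mv O/1/d1/6305 O/1/d1/6305.B
mv O/1/d2/6306 O/1/d2/6306.B
mv O/1/d3/6307 O/1/d3/6307.B
mv O/1/d4/6308 O/1/d4/6308.B

# Unmount and restart the MDS as usual.
cd /
umount /mnt/mdt-ldiskfs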

Comment by Blake Caldwell [ 26/Nov/13 ]

Renaming the files in ldiskfs resolved the mount issues. The mount reported failures reading 2 of the llogs but succeeded; the messages for the other 2 were likely rate-limited by rsyslog.

Nov 26 09:33:17 atlas-mds3 kernel: [ 251.406572] Lustre: Lustre: Build Version: 2.4.1--CHANGED-2.6.32-358.18.1.el6.atlas.x86_64
Nov 26 09:33:27 atlas-mds3 kernel: [ 261.291971] LDISKFS-fs (dm-3): recovery complete
Nov 26 09:33:27 atlas-mds3 kernel: [ 261.343137] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. quota=on. Opts:
Nov 26 09:33:28 atlas-mds3 kernel: [ 262.967910] Lustre: atlas2-MDT0000: used disk, loading
Nov 26 09:33:29 atlas-mds3 kernel: [ 263.695531] LustreError: 14567:0:(llog_cat.c:192:llog_cat_id2handle()) atlas2-OST00ff-osc-MDT0000: error opening log id 0x18a1:1:0: rc = -2
Nov 26 09:33:29 atlas-mds3 kernel: [ 263.709646] LustreError: 14567:0:(llog_cat.c:795:cat_cancel_cb()) atlas2-OST00ff-osc-MDT0000: cannot find handle for llog 0x18a1:1: -2
Nov 26 09:33:29 atlas-mds3 kernel: [ 263.723280] LustreError: 14567:0:(llog_cat.c:833:llog_cat_init_and_process()) atlas2-OST00ff-osc-MDT0000: llog_process() with cat_cancel_cb failed: rc = -2
Nov 26 09:33:29 atlas-mds3 kernel: [ 263.746477] LustreError: 14567:0:(llog_cat.c:192:llog_cat_id2handle()) atlas2-OST0100-osc-MDT0000: error opening log id 0x18a2:1:0: rc = -2
Nov 26 09:33:29 atlas-mds3 kernel: [ 263.760683] LustreError: 14567:0:(llog_cat.c:795:cat_cancel_cb()) atlas2-OST0100-osc-MDT0000: cannot find handle for llog 0x18a2:1: -2
Nov 26 09:33:29 atlas-mds3 kernel: [ 263.774522] LustreError: 14567:0:(llog_cat.c:833:llog_cat_init_and_process()) atlas2-OST0100-osc-MDT0000: llog_process() with cat_cancel_cb failed: rc = -2
Nov 26 09:33:31 atlas-mds3 kernel: [ 265.647379] LustreError: 11-0: atlas2-MDT0000-lwp-MDT0000: Communicating with 0@lo, operation mds_connect failed with -11.
Nov 26 09:33:31 atlas-mds3 kernel: [ 265.655134] Lustre: atlas2-MDT0000: Imperative Recovery enabled, recovery window shrunk from 1800-5400 down to 900-2700

Comment by Blake Caldwell [ 26/Nov/13 ]

When looking at the mtimes of the other llogs, we noticed that they corresponded to the time of the last successful mount. Note that 6305 was Nov 5 12:07. The output of llog_reader shows the same:
Time : Tue Nov 5 12:07:45 2013

Now the question is why these logs got truncated during normal operation. We identified that the MDT storage returned an error code that was not handled by the SCSI layer.
Nov 26 10:58:27 atlas-mds3 kernel: [ 3820.152740] mpt2sas0: #011handle(0x000c), ioc_status(scsi data underrun)(0x0045), smid(1750)

So if the other llogs were consumed and cleared on Lustre mount (is that correct?), they don't appear to be appended to or committed during normal operation. Why would an EIO affect the llogs?

[root@atlas-mds3 d1]# ls -l
total 740
-rw-r--r-- 1 root root  19776 Nov 21 23:40 10017
-rw-r--r-- 1 root root  19584 Nov 21 23:40 10049
-rw-r--r-- 1 root root  19712 Nov 21 23:40 10081
-rw-r--r-- 1 root root 229120 Nov  5 12:07 6305
-rw-r--r-- 1 root root   8320 Nov 21 22:42 8321
-rw-r--r-- 1 root root  24192 Nov 21 23:40 9345
-rw-r--r-- 1 root root  24192 Nov 21 23:40 9377

Comment by Andreas Dilger [ 27/Nov/13 ]

Alex, it is never good to assert on IO errors from the disk. Should this be converted to an error and handled more gracefully?

Comment by Alex Zhuravlev [ 28/Nov/13 ]

Yes, definitely. I'm thinking about what a good reaction would be here: just skip such a log? Remove it?

Comment by Andreas Dilger [ 29/Nov/13 ]

Blake,
The timestamps on the logs are not updated by the Lustre code, so that is why it appears they are not modified after mount. Also, logs are only used once and then deleted, so new ones are created each mount.

Alex,
I think that if there is an error looking up a record in the llog, the unlink should be skipped and the next record processed. Once all the records are processed (for good or bad), the log file will be deleted anyway. I don't think this should be handled by the llog code internally, since we don't necessarily want to delete a config file if there is a bad block on disk or some other toor set problem. For the object unlink case, it would eventually be cleaned up by LFSCK, so I don't think it is terrible if some records are not processed.

Comment by Jason Hill (Inactive) [ 10/Feb/14 ]

So the last discussion point was about a future-looking fix. Has any work occurred? Should we create another LU for that effort and close this issue out?


-Jason

Comment by Alex Zhuravlev [ 12/Feb/14 ]

Jason, I've been working on an automated test for this and similar issues.

Comment by Alex Zhuravlev [ 14/Feb/14 ]

a preliminary patch: http://review.whamcloud.com/#/c/9281/

Comment by James Nunez (Inactive) [ 13/Mar/14 ]

The patch http://review.whamcloud.com/#/c/9281/ landed for 2.6.

The patch that landed was "preliminary". Does anything else need to be done to complete this ticket?

Comment by Blake Caldwell [ 13/Mar/14 ]

Great! This ticket can be closed now.

Comment by Peter Jones [ 20/Mar/14 ]

I checked with Alex and he agrees to close this ticket and track any further similar work under new tickets.
