[LU-5148] OSTs won't mount following upgrade to 2.4.2 Created: 05/Jun/14 Updated: 18/Jul/16 Resolved: 29/Apr/16 |
|
| Status: | Resolved |
| Project: | Lustre |
| Component/s: | None |
| Affects Version/s: | None |
| Fix Version/s: | None |
| Type: | Bug | Priority: | Major |
| Reporter: | Ned Bass | Assignee: | Hongchao Zhang |
| Resolution: | Cannot Reproduce | Votes: | 0 |
| Labels: | llnl |
| Attachments: | lustre.log.porter1.1402001330.gz, lustre.log.porter-mds1.1402001323.gz, porter-mds1.console.txt |
| Issue Links: | |
| Severity: | 3 |
| Rank (Obsolete): | 14209 |
| Description |
|
A production lustre cluster "porter" was upgraded from 2.4.0-28chaos to lustre-2.4.2-11chaos today. OSTs now will not start.

# porter1 /root > /etc/init.d/lustre start
Stopping snmpd: [ OK ]
Shutting down cerebrod: [ OK ]
Mounting porter1/lse-ost0 on /mnt/lustre/local/lse-OST0001
mount.lustre: mount porter1/lse-ost0 at /mnt/lustre/local/lse-OST0001 failed: Input/output error
Is the MGS running?
# porter1 /root >

Lustre: Lustre: Build Version: 2.4.2-11chaos-11chaos--PRISTINE-2.6.32-431.17.2.1chaos.ch5.2.x86_64
LustreError: 137-5: lse-OST0002_UUID: not available for connect from 192.168.115.67@o2ib10 (no target)
LustreError: 137-5: lse-OST0002_UUID: not available for connect from 192.168.120.38@o2ib7 (no target)
LustreError: 137-5: lse-OST0002_UUID: not available for connect from 192.168.120.101@o2ib7 (no target)
LustreError: 5426:0:(client.c:1053:ptlrpc_import_delay_req()) @@@ send limit expired req@ffff881026873800 x1470103660003336/t0(0) o253->MGC172.19.1.165@o2ib100@172.19.1.165@o2ib100:26/25 lens 4768/4768 e 0 to 0 dl 0 ref 2 fl Rpc:W/0/ffffffff rc 0/-1
LustreError: 5426:0:(obd_mount_server.c:1140:server_register_target()) lse-OST0001: error registering with the MGS: rc = -5 (not fatal)
LustreError: 137-5: lse-OST0002_UUID: not available for connect from 192.168.116.205@o2ib5 (no target)
LustreError: 137-5: lse-OST0002_UUID: not available for connect from 192.168.114.162@o2ib5 (no target)
LustreError: Skipped 19 previous similar messages
LustreError: 5426:0:(client.c:1053:ptlrpc_import_delay_req()) @@@ send limit expired req@ffff881026873800 x1470103660003340/t0(0) o101->MGC172.19.1.165@o2ib100@172.19.1.165@o2ib100:26/25 lens 328/344 e 0 to 0 dl 0 ref 2 fl Rpc:W/0/ffffffff rc 0/-1
LustreError: 137-5: lse-OST0002_UUID: not available for connect from 192.168.120.162@o2ib7 (no target)
LustreError: Skipped 23 previous similar messages
LustreError: 5426:0:(client.c:1053:ptlrpc_import_delay_req()) @@@ send limit expired req@ffff881026873800 x1470103660003344/t0(0) o101->MGC172.19.1.165@o2ib100@172.19.1.165@o2ib100:26/25 lens 328/344 e 0 to 0 dl 0 ref 2 fl Rpc:W/0/ffffffff rc 0/-1
LustreError: 15c-8: MGC172.19.1.165@o2ib100: The configuration from log 'lse-OST0001' failed (-5). This may be the result of communication errors between this node and the MGS, a bad configuration, or other errors. See the syslog for more information.
LustreError: 5426:0:(obd_mount_server.c:1273:server_start_targets()) failed to start server lse-OST0001: -5
Lustre: lse-OST0001: Unable to start target: -5
LustreError: 5426:0:(obd_mount_server.c:865:lustre_disconnect_lwp()) lse-MDT0000-lwp-OST0001: Can't end config log lse-client.
LustreError: 5426:0:(obd_mount_server.c:1442:server_put_super()) lse-OST0001: failed to disconnect lwp. (rc=-2)
LustreError: 5426:0:(obd_mount_server.c:1472:server_put_super()) no obd lse-OST0001
Lustre: server umount lse-OST0001 complete
LustreError: 5426:0:(obd_mount.c:1290:lustre_fill_super()) Unable to mount (-5)

# porter1 /root > lctl ping 172.19.1.165@o2ib100 # <-- MGS NID
12345-0@lo
12345-172.19.1.165@o2ib100
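Since the LNet ping succeeds, a natural next check is whether the MGS device is actually set up on the MGS node. A minimal sketch, assuming access to the MGS node:

# On the MGS node: list configured OBD devices; a running MGS appears as
# an "mgs" device in the UP state
lctl dl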
| Comments |
| Comment by Ned Bass [ 05/Jun/14 ] |
|
Attached debug logs (captured with debug=-1) from the OSS: lustre.log.porter1.1402001330.gz
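For reference, full-debug logs like this one are typically captured with lctl; a minimal sketch, assuming standard lctl usage and a hypothetical output path:

# Enable all Lustre debug message types, reproduce the failed mount, then
# dump the kernel debug buffer to a file
lctl set_param debug=-1
lctl dk > /tmp/lustre.log.porter1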
| Comment by Bruno Faccini (Inactive) [ 06/Jun/14 ] |
|
Hello Ned, |
| Comment by Peter Jones [ 06/Jun/14 ] |
|
Hongchao, is there anything else that you can add? Thanks, Peter
| Comment by Ned Bass [ 06/Jun/14 ] |
|
Hi Bruno, |
| Comment by Ned Bass [ 06/Jun/14 ] |
|
Also, I was finally able to get the OSTs to mount by unmounting and remounting the MDT, but leaving the MGT (which is a separate dataset) mounted. |
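The workaround amounted to something like the following sketch; the MDT dataset and mount point names here are hypothetical, patterned on the OST names above:

# On the MDS: unmount only the MDT, leaving the separate MGT dataset mounted
umount /mnt/lustre/local/lse-MDT0000
# Remount the MDT; after this the OSTs were able to register and mount
mount -t lustre porter-mds1/lse-mdt0 /mnt/lustre/local/lse-MDT0000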
| Comment by Hongchao Zhang [ 06/Jun/14 ] |
|
there is no "MGS_CONNECT" request found in MGS/MGS in the log "lustre.log.porter-mds1.1402001323.gz", and there is even no Hi Ned, could you please attach the logs containing the stack traces of the mgs service mentioned above, thanks! |
| Comment by Ned Bass [ 09/Jun/14 ] |
|
Hongchao Zhang, I've attached the MDS console log: porter-mds1.console.txt |
| Comment by Hongchao Zhang [ 11/Jun/14 ] |
|
There is a similar ZFS issue reported at https://github.com/zfsonlinux/zfs/issues/542. What version of ZFS is installed at your site?
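For reference, a quick way to check the installed ZFS version, assuming the ZFS on Linux packaging:

# Version of the loaded kernel module
cat /sys/module/zfs/version
# Version of the installed package (RPM-based systems)
rpm -q zfs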
| Comment by Christopher Morrone [ 11/Jun/14 ] |
|
The version of ZFS installed at our site is quite a bit newer than 0.6.0.*; it is very close to the tip of master, and to what will soon be tagged as 0.6.3.
| Comment by Hongchao Zhang [ 26/Jun/14 ] |
|
Hi, could you please print the actual source lines for the following addresses?

2014-06-05 11:25:55 [<ffffffffa0db44b4>] mgs_ir_update+0x244/0xb00 [mgs]
2014-06-05 11:25:55 [<ffffffffa0d9287c>] mgs_handle_target_reg+0x40c/0xe30 [mgs]

I can find the related code for these functions at https://github.com/chaos/lustre/blob/2.4.2-11chaos/lustre/mgs/. Thanks very much!
| Comment by Ned Bass [ 26/Jun/14 ] |
|
Here you go.

(gdb) l *(mgs_handle_target_reg+0x40c)
0x18ac is in mgs_handle_target_reg (/usr/src/debug/lustre-2.4.2/lustre/mgs/mgs_handler.c:322).
317
318 if (opc == LDD_F_OPC_READY) {
319 CDEBUG(D_MGS, "fs: %s index: %d is ready to reconnect.\n",
320 mti->mti_fsname, mti->mti_stripe_index);
321 rc = mgs_ir_update(env, mgs, mti);
322 if (rc) {
323 LASSERT(!(mti->mti_flags & LDD_F_IR_CAPABLE));
324 CERROR("Update IR return with %d(ignore and IR "
325 "disabled)\n", rc);
326 }
(gdb) l *(mgs_ir_update+0x244)
0x234e4 is in mgs_ir_update (/usr/src/debug/lustre-2.4.2/lustre/mgs/mgs_nids.c:270).
265             rc = dt_record_write(env, fsdb, &buf, &off, th);
266
267     out:
268             dt_trans_stop(env, mgs->mgs_bottom, th);
269     out_put:
270             lu_object_put(env, &fsdb->do_lu);
271             RETURN(rc);
272     }
273
274     #define MGS_NIDTBL_VERSION_INIT 2
| Comment by Hongchao Zhang [ 27/Jun/14 ] |
|
Hi, could you please try the debug patch at http://review.whamcloud.com/#/c/10869/ to check whether this issue occurs again? Thanks very much!
| Comment by Peter Jones [ 30/Jun/14 ] |
|
Hongchao, could you please elaborate on how this patch works? Thanks, Peter
| Comment by Hongchao Zhang [ 01/Jul/14 ] |
|
This debug patch changes the IR (Imperative Recovery) operations in the MGS to update asynchronously; if the issue no longer occurs with the patch applied, we can isolate the problem.
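For context, the IR state that this update path maintains can be inspected on the MGS node; a minimal sketch, assuming the standard Lustre proc interface and the filesystem name "lse" from the logs above:

# Show imperative recovery state and NID table version for filesystem "lse"
lctl get_param mgs.MGS.live.lse
# IR can also be disabled for a filesystem through the same entry, which may
# help confirm whether the IR update path is the trigger (assumed syntax):
echo "state=disabled" > /proc/fs/lustre/mgs/MGS/live/lse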
| Comment by Oleg Drokin [ 15/Jul/14 ] |
|
Looking at the stack traces in the logs, it seems everybody is either blocked on the transaction commit wait inside ZFS, or on a semaphore held by somebody who is waiting on the transaction commit. So it really looks like some sort of in-ZFS wait to me. There's no dump of all thread stacks in here, so I wonder if you have one where a Lustre-induced deadlock of some sort above ZFS is visible?
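One way to check the in-ZFS theory from the OSS side; a sketch assuming the ZFS on Linux kstat interface and the pool name "porter1" from the mount output above:

# Recent transaction group history for the pool; a txg that stays in the
# syncing state for a long time points at ZFS rather than Lustre
cat /proc/spl/kstat/zfs/porter1/txgs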
| Comment by Cliff White (Inactive) [ 25/Aug/14 ] |
|
I attempted to reproduce this on Hyperion by starting with 2.4.0, running some IO tests, and upgrading to 2.4.2. I could not reproduce the failure; however, I was using the Whamcloud 2.4.2 release, which may behave differently.
| Comment by John Fuchs-Chesney (Inactive) [ 29/Apr/16 ] |
|
Hello Ned, do you have any update for us on this elderly ticket? Has this issue been resolved by later versions, for example? We would like to mark it as resolved, if you have no objection. Thanks,
| Comment by Ned Bass [ 29/Apr/16 ] |
|
Closing as stale. |