[LU-4282] some OSTs reported as inactive in lfs df, UP with lctl dl, data not accessible Created: 20/Nov/13 Updated: 20/Feb/14 Resolved: 24/Jan/14 |
|
| Status: | Resolved |
| Project: | Lustre |
| Component/s: | None |
| Affects Version/s: | Lustre 2.4.1 |
| Fix Version/s: | None |
| Type: | Bug | Priority: | Blocker |
| Reporter: | Frederik Ferner (Inactive) | Assignee: | Minh Diep |
| Resolution: | Duplicate | Votes: | 0 |
| Labels: | None | ||
| Environment: | MDS and OSS on Lustre 2.4.1, clients Lustre 1.8.9, all Red Hat Enterprise Linux. |
| Attachments: | |
| Issue Links: | |
| Severity: | 3 |
| Rank (Obsolete): | 11756 |
| Description |
|
As indicated in After upgrading the servers from 2.3 to 2.4.1 (MDT build #51 of b2_4 from Jenkins), our clients can no longer fully access this file system. The clients can mount the file system and can access one OST on each of the two OSSes, but the other OSTs are not accessible: they are shown as inactive in the lfs df output and in /proc/fs/lustre/lov/*/target_obd, yet are shown as UP in lctl dl.

[bnh65367@cs04r-sc-serv-07 ~]$ lctl dl | grep play01
91 UP lov play01-clilov-ffff810076ae2000 9186608e-d432-283c-0e6e-47b800427d3e 4
92 UP mdc play01-MDT0000-mdc-ffff810076ae2000 9186608e-d432-283c-0e6e-47b800427d3e 5
93 UP osc play01-OST0000-osc-ffff810076ae2000 9186608e-d432-283c-0e6e-47b800427d3e 5
94 UP osc play01-OST0001-osc-ffff810076ae2000 9186608e-d432-283c-0e6e-47b800427d3e 5
95 UP osc play01-OST0002-osc-ffff810076ae2000 9186608e-d432-283c-0e6e-47b800427d3e 5
96 UP osc play01-OST0003-osc-ffff810076ae2000 9186608e-d432-283c-0e6e-47b800427d3e 5
97 UP osc play01-OST0004-osc-ffff810076ae2000 9186608e-d432-283c-0e6e-47b800427d3e 5
98 UP osc play01-OST0005-osc-ffff810076ae2000 9186608e-d432-283c-0e6e-47b800427d3e 5
[bnh65367@cs04r-sc-serv-07 ~]$ lfs df /mnt/play01
UUID                   1K-blocks        Used   Available Use% Mounted on
play01-MDT0000_UUID     78636320     3502948    75133372   4% /mnt/play01[MDT:0]
play01-OST0000_UUID   7691221300  4506865920  3184355380  59% /mnt/play01[OST:0]
play01-OST0001_UUID   7691221300  3765688064  3925533236  49% /mnt/play01[OST:1]
play01-OST0002_UUID           : inactive device
play01-OST0003_UUID           : inactive device
play01-OST0004_UUID           : inactive device
play01-OST0005_UUID           : inactive device
filesystem summary:  15382442600  8272553984  7109888616  54% /mnt/play01
[bnh65367@cs04r-sc-serv-07 ~]$ cat /proc/fs/lustre/lov/play01-clilov-ffff810076ae2000/target_obd
0: play01-OST0000_UUID ACTIVE
1: play01-OST0001_UUID ACTIVE
2: play01-OST0002_UUID INACTIVE
3: play01-OST0003_UUID INACTIVE
4: play01-OST0004_UUID INACTIVE
5: play01-OST0005_UUID INACTIVE

As expected, the fail-over OSS for each OST does see connection attempts and reports (correctly) that the OST is not available on this OSS. I have confirmed that the OSTs are mounted on the OSSes correctly.

For the other client that I have tried to bring back the situation is similar, but the OSTs that are inactive are slightly different:

[bnh65367@cs04r-sc-serv-06 ~]$ lfs df /mnt/play01
UUID                   1K-blocks        Used   Available Use% Mounted on
play01-MDT0000_UUID     78636320     3502948    75133372   4% /mnt/play01[MDT:0]
play01-OST0000_UUID           : inactive device
play01-OST0001_UUID   7691221300  3765688064  3925533236  49% /mnt/play01[OST:1]
play01-OST0002_UUID   7691221300  1763305508  5927915792  23% /mnt/play01[OST:2]
play01-OST0003_UUID           : inactive device
play01-OST0004_UUID           : inactive device
play01-OST0005_UUID           : inactive device
filesystem summary:  15382442600  5528993572  9853449028  36% /mnt/play01
[bnh65367@cs04r-sc-serv-06 ~]$

play01-OST0000, play01-OST0002 and play01-OST0004 are on one OSS.

I have tested the network and don't see any errors; lnet_selftest between the clients and the OSSes works at line rate, at least for the first client (1GigE client...), and nothing obvious on the second client either.

For completeness I should probably mention that all the servers (MDS and OSSes) changed IP addresses at the same time as the upgrade. I have verified the information is correctly changed on the targets, and both clients have been rebooted multiple times since the IP address change, without any change in behaviour. |
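A quick cross-check of which connection each client OSC currently has, as a minimal diagnostic sketch only: it assumes the standard osc /proc layout and reuses the device names from the lctl dl output above, and is not output from this system.

# On an affected client: which server UUID/NID each OSC is (or was last) connected to
cat /proc/fs/lustre/osc/play01-OST*-osc-*/ost_conn_uuid
# And the LOV's view of which targets are active (the same file quoted above)
cat /proc/fs/lustre/lov/play01-clilov-*/target_obd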
| Comments |
| Comment by Minh Diep [ 20/Nov/13 ] |
|
Hi Frederik, could you show us the commands you used to change the IP addresses on the servers? |
| Comment by Frederik Ferner (Inactive) [ 20/Nov/13 ] |
|
I unmounted the MDT, the MGT, all OSTs, and the two clients I'm currently trying to use (other clients were left up to be rebooted later), changed the options etc. in /etc/modprobe.d/lustre.conf to bring up the correct NIDs on the servers, and confirmed with lctl list_nids that the correct IPs were identified. Then I ran this on the MDS/MGS:

tunefs.lustre --erase-params --writeconf /dev/vg_play01/mgs
tunefs.lustre --erase-params --writeconf --mgsnode=172.23.144.5@tcp0 --mgsnode=172.23.144.6@tcp0 --servicenode=172.23.144.5@tcp0 --servicenode=172.23.144.6@tcp0 --param mdt.quota_type=ug --param mdt.group_upcall=/usr/sbin/l_getgroups --mountfsoptions=iopen_nopriv,user_xattr,errors=remount-ro,acl /dev/vg_play01/mdt

On the OSSes I ran the following for each OST:

tunefs.lustre --erase-params --writeconf --mgsnode=172.23.144.5@tcp0 --mgsnode=172.23.144.6@tcp0 --servicenode=172.23.144.14@tcp0 --servicenode=172.23.144.18@tcp0 --param ost.quota_type=ug /dev/mapper/ost_play01_0

Then I mounted first the MGS, then the MDT, then all OSTs, and then tried to bring those two clients back... I've actually run those commands a few more times now, partly because the LBUG on the MDT seemed to confuse things... |
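The NID change itself can be sanity-checked from both ends; a small sketch, using the new addresses quoted above (these exact invocations are not part of the original report):

# on each server, after reloading the LNet/Lustre modules:
lctl list_nids
# from one of the clients, confirm the new server NIDs are reachable:
lctl ping 172.23.144.5@tcp
lctl ping 172.23.144.14@tcp
lctl ping 172.23.144.18@tcp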
| Comment by Minh Diep [ 20/Nov/13 ] |
|
Why did you include two mgsnode entries? Could you show the output of tunefs.lustre --dryrun <mdt> and tunefs.lustre --dryrun <ost>? |
| Comment by Frederik Ferner (Inactive) [ 21/Nov/13 ] |
|
Two mgsnode entries because the MGS will fail over between the same two machines as the MDT, even though it is on a separate partition.

tunefs.lustre --dryrun for the MDT and OSTs:

[bnh65367@cs04r-sc-mds02-03 ~]$ sudo tunefs.lustre --dryrun /dev/mapper/vg_play01-mdt
checking for existing Lustre data: found
Reading CONFIGS/mountdata
Read previous values:
Target: play01-MDT0000
Index: 0
Lustre FS: play01
Mount type: ldiskfs
Flags: 0x1001
(MDT no_primnode )
Persistent mount opts: iopen_nopriv,user_xattr,errors=remount-ro,acl
Parameters: mgsnode=172.23.144.5@tcp mgsnode=172.23.144.6@tcp failover.node=172.23.144.5@tcp failover.node=172.23.144.6@tcp mdt.quota_type=ug mdt.group_upcall=/usr/sbin/l_getgroups
Permanent disk data:
Target: play01-MDT0000
Index: 0
Lustre FS: play01
Mount type: ldiskfs
Flags: 0x1001
(MDT no_primnode )
Persistent mount opts: iopen_nopriv,user_xattr,errors=remount-ro,acl
Parameters: mgsnode=172.23.144.5@tcp mgsnode=172.23.144.6@tcp failover.node=172.23.144.5@tcp failover.node=172.23.144.6@tcp mdt.quota_type=ug mdt.group_upcall=/usr/sbin/l_getgroups
exiting before disk write.
[bnh65367@cs04r-sc-mds02-03 ~]$
OSTs:
[bnh65367@cs04r-sc-oss01-04 ~]$ for i in /dev/mapper/ost_play01_* ; do sudo tunefs.lustre --dryrun $i ; done
checking for existing Lustre data: found
Reading CONFIGS/mountdata
Read previous values:
Target: play01-OST0000
Index: 0
Lustre FS: play01
Mount type: ldiskfs
Flags: 0x1402
(OST no_primnode )
Persistent mount opts: errors=remount-ro,extents,mballoc
Parameters: mgsnode=172.23.144.5@tcp mgsnode=172.23.144.6@tcp failover.node=172.23.144.14@tcp failover.node=172.23.144.18@tcp ost.quota_type=ug
Permanent disk data:
Target: play01-OST0000
Index: 0
Lustre FS: play01
Mount type: ldiskfs
Flags: 0x1402
(OST no_primnode )
Persistent mount opts: errors=remount-ro,extents,mballoc
Parameters: mgsnode=172.23.144.5@tcp mgsnode=172.23.144.6@tcp failover.node=172.23.144.14@tcp failover.node=172.23.144.18@tcp ost.quota_type=ug
exiting before disk write.
checking for existing Lustre data: found
Reading CONFIGS/mountdata
Read previous values:
Target: play01-OST0001
Index: 1
Lustre FS: play01
Mount type: ldiskfs
Flags: 0x1002
(OST no_primnode )
Persistent mount opts: errors=remount-ro,extents,mballoc
Parameters: mgsnode=172.23.144.5@tcp mgsnode=172.23.144.6@tcp failover.node=172.23.144.18@tcp failover.node=172.23.144.14@tcp ost.quota_type=ug
Permanent disk data:
Target: play01-OST0001
Index: 1
Lustre FS: play01
Mount type: ldiskfs
Flags: 0x1002
(OST no_primnode )
Persistent mount opts: errors=remount-ro,extents,mballoc
Parameters: mgsnode=172.23.144.5@tcp mgsnode=172.23.144.6@tcp failover.node=172.23.144.18@tcp failover.node=172.23.144.14@tcp ost.quota_type=ug
exiting before disk write.
checking for existing Lustre data: found
Reading CONFIGS/mountdata
Read previous values:
Target: play01-OST0002
Index: 2
Lustre FS: play01
Mount type: ldiskfs
Flags: 0x1002
(OST no_primnode )
Persistent mount opts: errors=remount-ro,extents,mballoc
Parameters: mgsnode=172.23.144.5@tcp mgsnode=172.23.144.6@tcp failover.node=172.23.144.14@tcp failover.node=172.23.144.18@tcp ost.quota_type=ug
Permanent disk data:
Target: play01-OST0002
Index: 2
Lustre FS: play01
Mount type: ldiskfs
Flags: 0x1002
(OST no_primnode )
Persistent mount opts: errors=remount-ro,extents,mballoc
Parameters: mgsnode=172.23.144.5@tcp mgsnode=172.23.144.6@tcp failover.node=172.23.144.14@tcp failover.node=172.23.144.18@tcp ost.quota_type=ug
exiting before disk write.
checking for existing Lustre data: found
Reading CONFIGS/mountdata
Read previous values:
Target: play01-OST0003
Index: 3
Lustre FS: play01
Mount type: ldiskfs
Flags: 0x1002
(OST no_primnode )
Persistent mount opts: errors=remount-ro,extents,mballoc
Parameters: mgsnode=172.23.144.5@tcp mgsnode=172.23.144.6@tcp failover.node=172.23.144.18@tcp failover.node=172.23.144.14@tcp ost.quota_type=ug
Permanent disk data:
Target: play01-OST0003
Index: 3
Lustre FS: play01
Mount type: ldiskfs
Flags: 0x1002
(OST no_primnode )
Persistent mount opts: errors=remount-ro,extents,mballoc
Parameters: mgsnode=172.23.144.5@tcp mgsnode=172.23.144.6@tcp failover.node=172.23.144.18@tcp failover.node=172.23.144.14@tcp ost.quota_type=ug
exiting before disk write.
checking for existing Lustre data: found
Reading CONFIGS/mountdata
Read previous values:
Target: play01-OST0004
Index: 4
Lustre FS: play01
Mount type: ldiskfs
Flags: 0x1002
(OST no_primnode )
Persistent mount opts: errors=remount-ro,extents,mballoc
Parameters: mgsnode=172.23.144.5@tcp mgsnode=172.23.144.6@tcp failover.node=172.23.144.14@tcp failover.node=172.23.144.18@tcp ost.quota_type=ug
Permanent disk data:
Target: play01-OST0004
Index: 4
Lustre FS: play01
Mount type: ldiskfs
Flags: 0x1002
(OST no_primnode )
Persistent mount opts: errors=remount-ro,extents,mballoc
Parameters: mgsnode=172.23.144.5@tcp mgsnode=172.23.144.6@tcp failover.node=172.23.144.14@tcp failover.node=172.23.144.18@tcp ost.quota_type=ug
exiting before disk write.
checking for existing Lustre data: found
Reading CONFIGS/mountdata
Read previous values:
Target: play01-OST0005
Index: 5
Lustre FS: play01
Mount type: ldiskfs
Flags: 0x1002
(OST no_primnode )
Persistent mount opts: errors=remount-ro,extents,mballoc
Parameters: mgsnode=172.23.144.5@tcp mgsnode=172.23.144.6@tcp failover.node=172.23.144.18@tcp failover.node=172.23.144.14@tcp ost.quota_type=ug
Permanent disk data:
Target: play01-OST0005
Index: 5
Lustre FS: play01
Mount type: ldiskfs
Flags: 0x1002
(OST no_primnode )
Persistent mount opts: errors=remount-ro,extents,mballoc
Parameters: mgsnode=172.23.144.5@tcp mgsnode=172.23.144.6@tcp failover.node=172.23.144.18@tcp failover.node=172.23.144.14@tcp ost.quota_type=ug
exiting before disk write.
[bnh65367@cs04r-sc-oss01-04 ~]$
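To eyeball just the target names and parameter lines across all OSTs in one pass, a short loop like the following could be used (a sketch only, reusing the same device paths as above):

for i in /dev/mapper/ost_play01_* ; do
    echo "== $i =="
    sudo tunefs.lustre --dryrun $i | grep -E 'Target:|Parameters:'
done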
|
| Comment by Minh Diep [ 21/Nov/13 ] |
|
I don't understand "MGS will fail over between the same two machines as the MDT even though it is on a separate partition." I assume that cs04r-sc-mds02-03 = 172.23.144.5@tcp and the other MDS host = 172.23.144.6@tcp; are these two sharing the same storage/device? |
| Comment by Frederik Ferner (Inactive) [ 21/Nov/13 ] |
|
Sorry, I should have provided a bit more background... The file system has two OSSes in an active-active configuration, with the new IPs 172.23.144.14 and 172.23.144.18, sharing a storage array. For the MDS we also have two servers sharing a storage array; the new IPs for those are indeed 172.23.144.5 (cs04r-sc-mds02-03) and 172.23.144.6 (cs04r-sc-mds02-04). Cheers, |
| Comment by Minh Diep [ 21/Nov/13 ] |
|
OK, thanks. Ah, I also see you share the MGS? Or is that a typo?

tunefs.lustre --erase-params --writeconf /dev/vg_play01/mgs   <<<<<

If you have a combined MGS/MDT, then you should only have one mgsnode.

Note: no --erase-params on the second tunefs.lustre command. |
| Comment by Minh Diep [ 21/Nov/13 ] |
|
Don't forget to unmount all clients and OSTs while you --writeconf the MDS. Then run |
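A rough sketch of the usual writeconf sequence for a setup like this (not the exact commands referred to above), using the device names from the earlier comments:

# 1. unmount all clients, then all OSTs, then the MDT, then the MGS
# 2. regenerate the configuration logs on every target
tunefs.lustre --writeconf /dev/vg_play01/mgs          # on the MGS node
tunefs.lustre --writeconf /dev/vg_play01/mdt          # on the MDS node
tunefs.lustre --writeconf /dev/mapper/ost_play01_0    # on each OSS, repeated for every OST
# 3. remount in order: MGS first, then the MDT, then all OSTs, then the clients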
| Comment by Frederik Ferner (Inactive) [ 21/Nov/13 ] |
|
The MGS is on the same shared storage as the MDT, same LVM volume group but a separate logical volume. So I think I need your first set of commands, though I don't see how they are much different from mine. In any case, I've run them again after unmounting everything and brought everything back up; no change. This time I noticed the following -16 errors in the logs. I assume they are because the OSTs are still in recovery, but thought I'd mention them. Also there is this initial error about communication with 0@lo that I don't recall seeing before.

Nov 21 17:30:06 cs04r-sc-mds02-03 kernel: Lustre: MGS: Logs for fs play01 were removed by user request. All servers must be restarted in order to regenerate the logs.
Nov 21 17:30:06 cs04r-sc-mds02-03 kernel: Lustre: Setting parameter play01-MDT0000.mdt.quota_type in log play01-MDT0000
Nov 21 17:30:06 cs04r-sc-mds02-03 kernel: Lustre: Skipped 1 previous similar message
Nov 21 17:30:06 cs04r-sc-mds02-03 kernel: Lustre: play01-MDT0000: used disk, loading
Nov 21 17:30:06 cs04r-sc-mds02-03 kernel: Lustre: 4012:0:(mdt_handler.c:4948:mdt_process_config()) For interoperability, skip this mdt.quota_type. It is obsolete.
Nov 21 17:30:06 cs04r-sc-mds02-03 kernel: LustreError: 11-0: play01-MDT0000-lwp-MDT0000: Communicating with 0@lo, operation mds_connect failed with -11.
Nov 21 17:30:32 cs04r-sc-mds02-03 kernel: Lustre: MGS: Regenerating play01-OST0000 log by user request.
Nov 21 17:30:32 cs04r-sc-mds02-03 kernel: Lustre: Setting parameter play01-OST0000.ost.quota_type in log play01-OST0000
Nov 21 17:30:32 cs04r-sc-mds02-03 kernel: Lustre: Skipped 1 previous similar message
Nov 21 17:30:38 cs04r-sc-mds02-03 kernel: Lustre: MGS: Regenerating play01-OST0001 log by user request.
Nov 21 17:30:38 cs04r-sc-mds02-03 kernel: Lustre: Setting parameter play01-OST0001.ost.quota_type in log play01-OST0001
Nov 21 17:30:39 cs04r-sc-mds02-03 kernel: LustreError: 11-0: play01-OST0000-osc-MDT0000: Communicating with 172.23.144.14@tcp, operation ost_connect failed with -16.
Nov 21 17:30:54 cs04r-sc-mds02-03 kernel: Lustre: MGS: Regenerating play01-OST0002 log by user request.
Nov 21 17:30:54 cs04r-sc-mds02-03 kernel: Lustre: Setting parameter play01-OST0002.ost.quota_type in log play01-OST0002
Nov 21 17:31:02 cs04r-sc-mds02-03 kernel: LustreError: 11-0: play01-OST0003-osc-MDT0000: Communicating with 172.23.144.18@tcp, operation ost_connect failed with -16.
Nov 21 17:31:02 cs04r-sc-mds02-03 kernel: LustreError: Skipped 1 previous similar message
Nov 21 17:31:07 cs04r-sc-mds02-03 kernel: Lustre: MGS: Regenerating play01-OST0004 log by user request.
Nov 21 17:31:07 cs04r-sc-mds02-03 kernel: Lustre: Skipped 1 previous similar message
Nov 21 17:31:10 cs04r-sc-mds02-03 kernel: LustreError: 11-0: play01-OST0001-osc-MDT0000: Communicating with 172.23.144.18@tcp, operation ost_connect failed with -16.
Nov 21 17:31:11 cs04r-sc-mds02-03 kernel: Lustre: Setting parameter play01-OST0005.ost.quota_type in log play01-OST0005
Nov 21 17:31:11 cs04r-sc-mds02-03 kernel: Lustre: Skipped 2 previous similar messages
Nov 21 17:31:44 cs04r-sc-mds02-03 kernel: LustreError: 11-0: play01-OST0000-osc-MDT0000: Communicating with 172.23.144.14@tcp, operation ost_connect failed with -16.
Nov 21 17:31:44 cs04r-sc-mds02-03 kernel: LustreError: Skipped 2 previous similar messages
Nov 21 17:32:34 cs04r-sc-mds02-03 kernel: Lustre: play01-MDT0000: Will be in recovery for at least 5:00, or until 1 client reconnects
Nov 21 17:32:34 cs04r-sc-mds02-03 kernel: Lustre: play01-MDT0000: Denying connection for new client 45cd72fa-56c3-f257-0ed7-154d629ee603 (at 172.23.136.7@tcp), waiting for all 1 known clients (0 recovered, 0 in progress, and 0 evicted) to recover in 4:59 |
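Since the -16 (EBUSY) connect errors were suspected to be recovery-related, one way to confirm would be to watch recovery status on the servers; a sketch assuming the standard 2.4 parameter names, not commands taken from this thread:

# on the MDS:
lctl get_param mdt.play01-MDT0000.recovery_status
# on each OSS:
lctl get_param obdfilter.play01-OST*.recovery_status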
| Comment by Frederik Ferner (Inactive) [ 21/Nov/13 ] |
|
OK, more testing with this showed that for all OSTs that were shown as inactive on the clients, the fail-over OSS was seeing connection attempts. So failing over those OSTs to the other node makes them available on the clients; however, failing over OSTs that were active makes them unavailable on the clients.

To make this clearer, initially the OSTs were distributed like this:

OSTs mounted on 172.23.144.14: play01-OST0000, play01-OST0002, play01-OST0004

In this configuration the clients were only able to access play01-OST0000 and play01-OST0001.

The following distribution of OSTs makes them available on both clients I tested today:

OSTs mounted on 172.23.144.14: play01-OST0000, play01-OST0003, play01-OST0005

As soon as any of the OSTs is mounted on the other OSS, it appears that none of the clients will connect to it (with a possible exception of clients that have not been rebooted recently; unloading the Lustre modules and starting again on the client seems to bring them into the first category).

The same parameters/failnode setup works without problems so far on our other file systems where all servers are still running 1.8. |
| Comment by Minh Diep [ 25/Nov/13 ] |
|
Hi Frederik, Is it working now? I believe there might be a small step that we missed somewhere during the process. Please let me know if everything is working. Thanks |
| Comment by Frederik Ferner (Inactive) [ 25/Nov/13 ] |
|
Minh, it is sort of working. I have one configuration/setup where all OSTs can be accessed by all clients I've tried to bring up. However, if I try to bring any of the OSTs up on a different OSS than they are on now, none of my clients even tries to contact that OSS. Recovery doesn't even start... So I'd not say everything is working, but the urgency is lower as we have a workaround (which is valid until one of the servers fails...). I would appreciate help in fully resolving this. Let me know if there are any diagnostics that I should provide... Kind regards, |
| Comment by Minh Diep [ 25/Nov/13 ] |
|
This seems to relate to |
| Comment by Frederik Ferner (Inactive) [ 27/Nov/13 ] |
|
CONFIGS directories for MDT and MGS, including llog_reader output |
| Comment by Frederik Ferner (Inactive) [ 27/Nov/13 ] |
|
Minh, I wasn't quite sure which logs you wanted, so I remounted both the MDT and the MGS with ldiskfs, copied all files in the CONFIGS directories to a different location, and ran llog_reader over them. The result is in the attached file. Thanks, |
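For reference, the extraction described above could look roughly like this; a sketch with hypothetical mount point and output paths, since the exact paths used are not in the ticket (the target must be stopped before it is mounted as ldiskfs):

mount -t ldiskfs -o ro /dev/vg_play01/mgs /mnt/mgs_ldiskfs
cp /mnt/mgs_ldiskfs/CONFIGS/play01-client /tmp/
umount /mnt/mgs_ldiskfs
llog_reader /tmp/play01-client > /tmp/play01-client.txt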
| Comment by Minh Diep [ 30/Nov/13 ] |
|
Hongchao, Could you check if this is a dump of |
| Comment by Minh Diep [ 30/Nov/13 ] |
|
I looked at the client log (Header size: 8192). At line #15, the add_uuid record should have 172.23.144.6 as the second NID instead of *5. This is a dup of |
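To spot the suspect records without reading the whole dump, something like the following could be run against the copy of the client log extracted earlier (a sketch; /tmp/play01-client is the hypothetical path used above, and exact record numbering may differ between dumps):

llog_reader /tmp/play01-client | grep -n 'add_uuid'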
| Comment by Hongchao Zhang [ 20/Dec/13 ] |
|
Yes, it should be a duplicate of
Hi Frederik, could you please try with the patch http://review.whamcloud.com/#/c/8372/? Thanks |
| Comment by Minh Diep [ 24/Jan/14 ] |
|
dup of |
| Comment by John Fuchs-Chesney (Inactive) [ 20/Feb/14 ] |
|
Frederick – can I check if this is now resolved? If so, I will mark it as such. Thanks ~ jfc. |
| Comment by John Fuchs-Chesney (Inactive) [ 20/Feb/14 ] |
|
Frederick – my error. I see this is already resolved, so no action required. ~ jfc. |