Details
- Type: Bug
- Resolution: Fixed
- Priority: Minor
- Fix Version/s: Lustre 2.10.0
- Severity: 3
Description
conf-sanity test_103 hangs when the MGS and MDS are on separate nodes.
Looking at the test_renamefs() routine in conf-sanity.sh, the first thing it does is check whether the MGS and MDS are combined. If they are not, it renames the MGS separately; after that, it renames each MDT:
7197 test_renamefs() {
7198 	local newname=$1
7199
7200 	echo "rename $FSNAME to $newname"
7201
7202 	if [ ! combined_mgs_mds ]; then
7203 		local facet=$(mgsdevname)
7204
7205 		do_facet mgs \
7206 			"$TUNEFS --fsname=$newname --rename=$FSNAME -v $facet" ||
7207 			error "(7) Fail to rename MGS"
7208 		if [ "$(facet_fstype $facet)" = "zfs" ]; then
7209 			reimport_zpool mgs $newname-mgs
7210 		fi
7211 	fi
7212
7213 	for num in $(seq $MDSCOUNT); do
7214 		local facet=$(mdsdevname $num)
7215
7216 		do_facet mds${num} \
7217 			"$TUNEFS --fsname=$newname --rename=$FSNAME -v $facet" ||
7218 			error "(8) Fail to rename MDT $num"
7219 		if [ "$(facet_fstype $facet)" = "zfs" ]; then
7220 			reimport_zpool mds${num} $newname-mdt${num}
7221 		fi
7222 	done
…
Yet the console output shows that, even with a separate MGS and MDS, test_renamefs() skips renaming the MGS entirely and goes straight to renaming the MDT:
rename scratch to mylustre
checking for existing Lustre data: found
Reading CONFIGS/mountdata
Read previous values:
Target: scratch-MDT0000
Index: 0
Lustre FS: scratch
Mount type: ldiskfs
Flags: 0x1
(MDT )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: mgsnode=10.100.4.154@tcp sys.timeout=20 lov.stripesize=1048576 lov.stripecount=0 mdt.identity_upcall=/usr/sbin/l_getidentity
…
The problem is that test_renamefs() wraps the combined_mgs_mds check in "test" brackets. Inside [ ... ], combined_mgs_mds is treated as a literal, non-empty string rather than executed as a function, so [ ! combined_mgs_mds ] always evaluates to false and the MGS-rename branch is never entered.
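To illustrate the shell semantics, here is a minimal standalone sketch; the combined_mgs_mds body below is a stand-in that always reports a separate MGS and MDS, not the real helper from test-framework.sh:

#!/bin/bash
# Stand-in for the real combined_mgs_mds from test-framework.sh:
# pretend the MGS and MDS are on separate nodes (non-zero return).
combined_mgs_mds() {
	return 1
}

# Inside test brackets, "combined_mgs_mds" is just a non-empty word;
# the function is never called and [ ! word ] is always false.
if [ ! combined_mgs_mds ]; then
	echo "bracketed form: would rename the MGS"	# never printed
fi

# Without the brackets the function actually runs, and its exit
# status controls the branch, so the MGS-rename path is taken here.
if ! combined_mgs_mds; then
	echo "bare form: would rename the MGS"		# printed
fi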
The fix is to remove the "[" and "]" from line 7202 of conf-sanity.sh so that combined_mgs_mds is actually invoked.
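With the brackets removed, line 7202 would read:

7202 	if ! combined_mgs_mds; then

so combined_mgs_mds is executed and the MGS gets renamed whenever it lives on its own node.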
Attachments
Issue Links
- is related to LU-8688 All Lustre test suites should run/PASS with separate MDS and MGS (Open)