Details
- Bug
- Resolution: Fixed
- Blocker
- Lustre 2.7.0
- FSTYPE=zfs
- 3
- 16703
Description
Patch http://review.whamcloud.com/9078 for LU-4119 introduced a regression that causes conf-sanity test 83 to fail under ZFS configurations:
start ost1 service on onyx-36vm8
CMD: onyx-36vm8 mkdir -p /mnt/ost1
CMD: onyx-36vm8 zpool list -H lustre-ost1 >/dev/null 2>&1 || zpool import -f -o cachefile=none -d /dev/lvm-Role_OSS lustre-ost1
Starting ost1: lustre-ost1/ost1 /mnt/ost1
CMD: onyx-36vm8 mkdir -p /mnt/ost1; mount -t lustre lustre-ost1/ost1 /mnt/ost1
onyx-36vm8: mount.lustre: mount lustre-ost1/ost1 at /mnt/ost1 failed: No such file or directory
onyx-36vm8: Is the MGS specification correct?
onyx-36vm8: Is the filesystem name correct?
onyx-36vm8: If upgrading, is the copied client log valid? (see upgrade docs)
Start of lustre-ost1/ost1 on ost1 failed 2
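The failure can be retried by running the affected subtest directly. A minimal sketch, assuming a standard Lustre test-framework setup where conf-sanity.sh honours the usual ONLY and FSTYPE environment variables and the node configuration (cfg/local.sh or equivalent) already describes the MGS/MDS/OSS hosts:

# From the lustre/tests directory of a built source tree on the test driver node
cd lustre/tests
# FSTYPE=zfs selects ZFS-backed targets; ONLY=83 restricts the run to the failing subtest
FSTYPE=zfs ONLY=83 sh conf-sanity.sh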
On OSS node:
07:55:10:Lustre: DEBUG MARKER: mkdir -p /mnt/ost1; mount -t lustre lustre-ost1/ost1 /mnt/ost1
07:55:10:LustreError: 2367:0:(obd_mount_server.c:1168:server_register_target()) lustre-OST0000: error registering with the MGS: rc = -2 (not fatal)
07:55:10:LustreError: 13a-8: Failed to get MGS log lustre-OST0000 and no local copy.
On MDS node:
08:56:17:Lustre: DEBUG MARKER: zfs get -H -o value lustre:svname lustre-mdt1/mdt1 2>/dev/null
08:56:17:Lustre: DEBUG MARKER: lctl set_param -n mdt.lustre*.enable_remote_dir=1
08:56:17:LustreError: 13b-9: lustre-OST0000 claims to have registered, but this MGS does not know about it, preventing registration.
08:56:17:LustreError: 13b-9: lustre-OST0001 claims to have registered, but this MGS does not know about it, preventing registration.
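The 13b-9 messages indicate the OSTs still carry a "registered" flag for a configuration the MGS no longer knows about. A rough recovery sketch to get a stuck test node back to a clean state (not a fix for the regression itself), assuming tunefs.lustre --writeconf is usable on these ZFS targets and using the device names from the logs; the /mnt/mds1 mount point is a hypothetical example:

# On the MDS node (also hosting the MGS in this test setup): regenerate the config logs
umount /mnt/mds1
tunefs.lustre --writeconf lustre-mdt1/mdt1
# On the OSS node, for each OST reporting 13b-9:
umount /mnt/ost1
tunefs.lustre --writeconf lustre-ost1/ost1
# Remount the MGS/MDT first, then the OSTs, so the targets re-register with the MGS.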
Maloo reports:
https://testing.hpdd.intel.com/test_sets/77dccc26-6ccf-11e4-960c-5254006e85c2
https://testing.hpdd.intel.com/test_sets/f6eaf896-717d-11e4-b5de-5254006e85c2
https://testing.hpdd.intel.com/test_sets/04731068-7c08-11e4-bdab-5254006e85c2
https://testing.hpdd.intel.com/test_sets/bb09e096-7c0c-11e4-bdab-5254006e85c2
Info required for matching: conf-sanity 83