Details
- Type: Bug
- Resolution: Fixed
- Priority: Critical
- Affects Version: Lustre 2.4.0
- Environment: FSTYPE=zfs, FAILURE_MODE=HARD
- Severity: 3
- Rank: 8083
Description
While running the recovery-*-scale tests with FSTYPE=zfs and FAILURE_MODE=HARD in a failover configuration, the tests failed as follows:
    Failing mds1 on wtm-9vm3
    + pm -h powerman --off wtm-9vm3
    Command completed successfully
    waiting ! ping -w 3 -c 1 wtm-9vm3, 4 secs left ...
    waiting ! ping -w 3 -c 1 wtm-9vm3, 3 secs left ...
    waiting ! ping -w 3 -c 1 wtm-9vm3, 2 secs left ...
    waiting ! ping -w 3 -c 1 wtm-9vm3, 1 secs left ...
    waiting for wtm-9vm3 to fail attempts=3
    + pm -h powerman --off wtm-9vm3
    Command completed successfully
    reboot facets: mds1
    + pm -h powerman --on wtm-9vm3
    Command completed successfully
    Failover mds1 to wtm-9vm7
    04:28:49 (1367234929) waiting for wtm-9vm7 network 900 secs ...
    04:28:49 (1367234929) network interface is UP
    CMD: wtm-9vm7 hostname
    mount facets: mds1
    Starting mds1: lustre-mdt1/mdt1 /mnt/mds1
    CMD: wtm-9vm7 mkdir -p /mnt/mds1; mount -t lustre lustre-mdt1/mdt1 /mnt/mds1
    wtm-9vm7: mount.lustre: lustre-mdt1/mdt1 has not been formatted with mkfs.lustre or the backend filesystem type is not supported by this tool
    Start of lustre-mdt1/mdt1 on mds1 failed 19
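For context, the "waiting ! ping ... secs left" lines come from the test framework's poll loop that confirms the powered-off node has actually stopped responding before the failover proceeds. A minimal sketch of that loop is below; the function name wait_node_down is illustrative only (the real logic lives in the Lustre test framework, not under this name), and the per-iteration countdown is simplified:

```shell
#!/bin/sh
# Hedged sketch of the node-down poll loop visible in the log above.
# Assumption: one countdown tick per ping attempt; the real framework's
# timing and helper names differ.

# wait_node_down HOST SECS: poll until HOST stops answering ping,
# or give up after SECS attempts.
wait_node_down() {
    host=$1
    secs=$2
    while [ "$secs" -gt 0 ]; do
        # -w 3: overall 3-second deadline; -c 1: send a single probe
        if ! ping -w 3 -c 1 "$host" >/dev/null 2>&1; then
            echo "$host is down"
            return 0
        fi
        echo "waiting ! ping -w 3 -c 1 $host, $secs secs left ..."
        secs=$((secs - 1))
    done
    echo "$host still up after timeout"
    return 1
}
```

Only once this loop reports the node down does the framework power the node back on and attempt the failover mount on the backup server (wtm-9vm7 in the log), which is where the mount.lustre error appears.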
Maloo report: https://maloo.whamcloud.com/test_sets/ac7cbc10-b0e3-11e2-b2c4-52540035b04c