Details
- Type: Bug
- Resolution: Unresolved
- Priority: Minor
- Affects Version: Lustre 2.9.0
- Environment: autotest review-zfs-part-2
Description
conf-sanity test 21e fails on review-zfs-part-2 with 'add fs2mgs failed'.
In the test log we see that 'zpool create' cannot complete because one of the devices is less than 64M:
== conf-sanity test 21e: separate MGS and MDS == 00:31:10 (1461198670)
CMD: trevis-4vm11 grep -c /mnt/fs3ost' ' /proc/mounts
CMD: trevis-4vm11 lsmod | grep lnet > /dev/null && lctl dl | grep ' ST '
CMD: trevis-4vm11 ! zpool list -H lustre-ost2_2 >/dev/null 2>&1 || grep -q ^lustre-ost2_2/ /proc/mounts || zpool export lustre-ost2_2
CMD: trevis-4vm11 mkfs.lustre --mgs --param=sys.timeout=20 --backfstype=zfs --device-size=3145728 --fsname=test1234 --reformat lustre-ost2_2/ost2_2 /dev/lvm-Role_OSS/S2
trevis-4vm11:
trevis-4vm11: mkfs.lustre FATAL: Unable to create pool lustre-ost2_2 (256)
trevis-4vm11:
trevis-4vm11: mkfs.lustre FATAL: mkfs failed 256

   Permanent disk data:
Target:     MGS
Index:      unassigned
Lustre FS:  test1234
Mount type: zfs
Flags:      0x64 (MGS first_time update)
Persistent mount opts:
Parameters: sys.timeout=20

mkfs_cmd = zpool create -f -O canmount=off lustre-ost2_2 /dev/lvm-Role_OSS/S2
cannot create 'lustre-ost2_2': one or more devices is less than the minimum size (64M)

 conf-sanity test_21e: @@@@@@ FAIL: add fs2mgs failed
We’ve only seen two instances of this failure so far. If this is an issue with our tests running in a small VM, then wouldn’t we expect this test to fail most (all?) of the time?
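The 64M floor comes from ZFS itself (SPA_MINDEVSIZE is 64 MiB), so one way to make the failure deterministic rather than intermittent would be a pre-flight size check on each vdev before calling 'zpool create'. A minimal sketch of such a check, using throwaway sparse files instead of the real LVM devices (the helper name and /tmp paths are illustrative, not part of the Lustre test suite):

```shell
#!/bin/sh
# Sketch: reject vdevs smaller than the ZFS 64 MiB minimum (SPA_MINDEVSIZE)
# before attempting 'zpool create'.
MIN_BYTES=$((64 * 1024 * 1024))

vdev_big_enough() {
    dev="$1"
    if [ -b "$dev" ]; then
        # Block device: ask the kernel for its size in bytes.
        size=$(blockdev --getsize64 "$dev")
    else
        # File-backed vdev: use its apparent size.
        size=$(stat -c %s "$dev")
    fi
    [ "$size" -ge "$MIN_BYTES" ]
}

# Demonstrate with sparse files (illustrative paths):
truncate -s 32M /tmp/too_small.img
truncate -s 128M /tmp/big_enough.img

vdev_big_enough /tmp/too_small.img \
    && echo "too_small: ok" || echo "too_small: below 64M minimum"
vdev_big_enough /tmp/big_enough.img \
    && echo "big_enough: ok" || echo "big_enough: below 64M minimum"
```

If mkfs.lustre (or the test framework) ran a check like this, the test would fail with a clear message every time the VM hands it an undersized device, instead of failing only when 'zpool create' happens to see one.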
Recent test failure logs are at
https://testing.hpdd.intel.com/test_sets/77cbf5d2-0773-11e6-b5f1-5254006e85c2
https://testing.hpdd.intel.com/test_sets/cb7f6462-0bf5-11e6-b5f1-5254006e85c2
Issue Links
- is related to LU-10424: FSTYPE=zfs bash lustre/tests/llmount.sh fails unless /tmp/lustre-{mdt1,ost1,ost2} all exist and have size at least 64M (Open)