[LU-10315] conf-sanity test_32c: FAIL: Mkfs new MDT failed Created: 01/Dec/17  Updated: 28/Jul/20  Resolved: 28/Jul/20

Status: Resolved
Project: Lustre
Component/s: None
Affects Version/s: Lustre 2.10.2
Fix Version/s: None

Type: Bug Priority: Minor
Reporter: Maloo Assignee: WC Triage
Resolution: Cannot Reproduce Votes: 0
Labels: None
Environment:

client and server: 2.10.2 RC1 RHEL7.4 zfs DNE


Severity: 3
Rank (Obsolete): 9223372036854775807

 Description   

This issue was created by maloo for sarah_lw <wei3.liu@intel.com>

This issue relates to the following test suite run: https://testing.hpdd.intel.com/test_sets/191d5366-d69b-11e7-9c63-52540065bddc.

The sub-test test_32c failed with the following error:

test_32c failed with 1
wait for devices to go
CMD: onyx-47vm4 PATH=/usr/lib64/lustre/tests:/usr/lib/lustre/tests:/usr/lib64/lustre/tests:/opt/iozone/bin:/usr/lib64/lustre/tests//usr/lib64/lustre/tests:/usr/lib64/lustre/tests:/usr/lib64/lustre/tests/../utils:/opt/iozone/bin:/usr/lib64/lustre/tests/mpi:/usr/lib64/lustre/tests/racer:/usr/lib64/lustre/../lustre-iokit/sgpdd-survey:/usr/lib64/lustre/tests:/usr/lib64/lustre/utils/gss:/usr/lib64/lustre/utils:/usr/lib64/qt-3.3/bin:/usr/lib64/compat-openmpi16/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/usr/sbin:/sbin:/bin::/sbin:/bin:/usr/sbin: NAME=autotest_config sh rpc.sh /usr/sbin/lctl device_list 
onyx-47vm4: onyx-47vm4.onyx.hpdd.intel.com: executing /usr/sbin/lctl device_list
CMD: onyx-47vm4 PATH=/usr/lib64/lustre/tests:/usr/lib/lustre/tests:/usr/lib64/lustre/tests:/opt/iozone/bin:/usr/lib64/lustre/tests//usr/lib64/lustre/tests:/usr/lib64/lustre/tests:/usr/lib64/lustre/tests/../utils:/opt/iozone/bin:/usr/lib64/lustre/tests/mpi:/usr/lib64/lustre/tests/racer:/usr/lib64/lustre/../lustre-iokit/sgpdd-survey:/usr/lib64/lustre/tests:/usr/lib64/lustre/utils/gss:/usr/lib64/lustre/utils:/usr/lib64/qt-3.3/bin:/usr/lib64/compat-openmpi16/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/usr/sbin:/sbin:/bin::/sbin:/bin:/usr/sbin: NAME=autotest_config sh rpc.sh losetup -a 
onyx-47vm4: onyx-47vm4.onyx.hpdd.intel.com: executing losetup -a
CMD: onyx-47vm4 mount -t lustre -o writeconf t32fs-mdt1/mdt1 /tmp/t32/mnt/mdt
mkfs new MDT on lustre-mdt1_2/mdt1_2....
CMD: onyx-47vm4 grep -c /mnt/lustre-mds1' ' /proc/mounts
CMD: onyx-47vm4 lsmod | grep lnet > /dev/null && lctl dl | grep ' ST '
CMD: onyx-47vm4 ! zpool list -H lustre-mdt1 >/dev/null 2>&1 ||
			grep -q ^lustre-mdt1/ /proc/mounts ||
			zpool export  lustre-mdt1
onyx-47vm4: cannot export 'lustre-mdt1': pool I/O is currently suspended
CMD: onyx-47vm4 mkfs.lustre --mgsnode=onyx-47vm4@tcp --fsname=t32fs --mdt --index=1 --param=sys.timeout=20 --param=lov.stripesize=1048576 --param=lov.stripecount=0 --param=mdt.identity_upcall=/usr/sbin/l_getidentity --backfstype=zfs --device-size=200000 --reformat lustre-mdt1_2/mdt1_2 /dev/lvm-Role_MDS/S1
onyx-47vm4: 
onyx-47vm4: mkfs.lustre FATAL: Unable to create filesystem lustre-mdt1_2/mdt1_2 (256)
onyx-47vm4: 
onyx-47vm4: mkfs.lustre FATAL: mkfs failed 256
 conf-sanity test_32c: @@@@@@ FAIL: Mkfs new MDT failed 
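The failure sequence in the log is: the framework tries to export the old `lustre-mdt1` pool before reformatting, the export is refused because the pool's I/O is suspended, and the subsequent `mkfs.lustre` on `lustre-mdt1_2/mdt1_2` then fails with status 256. The cleanup guard the test runs (export the pool only if it exists and none of its datasets are mounted) can be sketched as a standalone snippet; the ZFS utilities are mocked here so the control flow is runnable anywhere, and the mock behavior (pool present, nothing mounted) is an assumption for illustration, not taken from the log:

```shell
#!/bin/sh
# Sketch of the pool-cleanup guard seen in the log above.
# The real test calls `zpool list`, greps /proc/mounts, and calls
# `zpool export`; these mocks stand in so the logic runs without ZFS.
pool=lustre-mdt1

pool_exists()  { [ "$1" = "lustre-mdt1" ]; }   # stands in for: zpool list -H "$1" >/dev/null 2>&1
pool_mounted() { false; }                      # stands in for: grep -q "^$1/" /proc/mounts
pool_export()  { echo "exported $1"; }         # stands in for: zpool export "$1"

# Mirrors the `! zpool list || grep -q ... || zpool export` chain:
# export is attempted only when the pool exists and is not mounted.
! pool_exists "$pool" || pool_mounted "$pool" || pool_export "$pool"
```

In the failing run this last step is reached but `zpool export` itself errors out ("pool I/O is currently suspended", ZFS's state when the pool has lost access to its vdevs), so the stale pool lingers and the reformat fails.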


 Comments   
Comment by Andreas Dilger [ 28/Jul/20 ]

This failure has not been seen since the original report.

Generated at Sat Feb 10 02:33:57 UTC 2024 using Jira 9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c.