Details
- Type: Bug
- Resolution: Fixed
- Priority: Minor
- Components: None
- Labels: None
- Severity: 3
Description
x:lustre-release# export MGSDEV=/tmp/lustre-mgs
x:lustre-release# llmount.sh
...
x:lustre-release# ONLY=72 bash lustre/tests/conf-sanity.sh
Logging to shared log directory: /tmp/test_logs/1484229307
Client: Lustre version: 2.9.51_25_g31aa2bc
MDS: Lustre version: 2.9.51_25_g31aa2bc
OSS: Lustre version: 2.9.51_25_g31aa2bc
excepting tests: 32newtarball 101 24b
skipping tests SLOW=no: 45 69
Stopping clients: x /mnt/lustre (opts:)
Stopping client x /mnt/lustre opts:
Stopping clients: x /mnt/lustre2 (opts:)
Stopping /mnt/lustre-mds1 (opts:-f) on x
Stopping /mnt/lustre-ost1 (opts:-f) on x
Stopping /mnt/lustre-ost2 (opts:-f) on x
waited 0 for 11 ST ost OSS OSS_uuid 0
Stopping /mnt/lustre-mgs (opts:) on x
Loading modules from /root/lustre-release/lustre
detected 8 online CPUs by sysfs
libcfs will create CPU partition based on online CPUs
debug=-1
subsystem_debug=all
Formatting mgs, mds, osts
Format mgs: /tmp/lustre-mgs
Format mds1: /tmp/lustre-mdt1
Format ost1: /tmp/lustre-ost1
Format ost2: /tmp/lustre-ost2
start mgs
Starting mgs: -o loop /tmp/lustre-mgs /mnt/lustre-mgs
Started MGS
start mds service on x
Starting mds1: -o loop /tmp/lustre-mdt1 /mnt/lustre-mds1
Commit the device label on /tmp/lustre-mdt1
Started lustre-MDT0000
start ost1 service on x
Starting ost1: -o loop /tmp/lustre-ost1 /mnt/lustre-ost1
Commit the device label on /tmp/lustre-ost1
Started lustre-OST0000
osc.lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 1 sec
stop ost1 service on x
Stopping /mnt/lustre-ost1 (opts:-f) on x
stop mds service on x
Stopping /mnt/lustre-mds1 (opts:-f) on x
umount lustre on /mnt/lustre.....
stop ost1 service on x
stop mds service on x
umount lustre on /mnt/lustre.....
stop ost1 service on x
stop mds service on x
== conf-sanity test 72: test fast symlink with extents flag enabled ================================== 07:55:45 (1484229345)

   Permanent disk data:
Target:     lustre:MDT0000
Index:      0
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x61
            (MDT first_time update )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: mgsnode=192.168.122.111@tcp sys.timeout=20 lov.stripesize=1048576 lov.stripecount=0 mdt.identity_upcall=/root/lustre-release/lustre/utils/l_getidentity

formatting backing filesystem ldiskfs on /dev/loop1
        target name   lustre:MDT0000
        4k blocks     50000
        options       -I 512 -i 2048 -q -O dirdata,uninit_bg,^extents,dir_nlink,quota,huge_file,flex_bg -E lazy_itable_init,lazy_journal_init -F
mkfs_cmd = mke2fs -j -b 4096 -L lustre:MDT0000 -I 512 -i 2048 -q -O dirdata,uninit_bg,^extents,dir_nlink,quota,huge_file,flex_bg -E lazy_itable_init,lazy_journal_init -F /dev/loop1 50000
Writing CONFIGS/mountdata
tune2fs 1.42.12.wc1 (15-Sep-2014)

   Permanent disk data:
Target:     lustre:OST0000
Index:      0
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x62
            (OST first_time update )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.122.111@tcp sys.timeout=20

formatting backing filesystem ldiskfs on /dev/loop1
        target name   lustre:OST0000
        4k blocks     50000
        options       -I 256 -q -O extents,uninit_bg,dir_nlink,quota,huge_file,flex_bg -G 256 -E lazy_itable_init,resize="4290772992",lazy_journal_init -F
mkfs_cmd = mke2fs -j -b 4096 -L lustre:OST0000 -I 256 -q -O extents,uninit_bg,dir_nlink,quota,huge_file,flex_bg -G 256 -E lazy_itable_init,resize="4290772992",lazy_journal_init -F /dev/loop1 50000
Writing CONFIGS/mountdata
start mds service on x
Starting mds1: -o loop /tmp/lustre-mdt1 /mnt/lustre-mds1
mount.lustre: mount /dev/loop1 at /mnt/lustre-mds1 failed: Address already in use
The target service's index is already in use. (/dev/loop1)
Start of /tmp/lustre-mdt1 on mds1 failed 98
 conf-sanity test_72: @@@@@@ FAIL: start mds failed
  Trace dump:
  = /root/lustre-release/lustre/tests/test-framework.sh:4826:error()
  = lustre/tests/conf-sanity.sh:5142:test_72()
  = /root/lustre-release/lustre/tests/test-framework.sh:5102:run_one()
  = /root/lustre-release/lustre/tests/test-framework.sh:5141:run_one_logged()
  = /root/lustre-release/lustre/tests/test-framework.sh:4940:run_test()
  = lustre/tests/conf-sanity.sh:5163:main()
Dumping lctl log to /tmp/test_logs/1484229307/conf-sanity.test_72.*.1484229346.log
Dumping logs only on local client.
Resetting fail_loc on all nodes...done.
 FAIL 72 (2s)
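The mount failure reported above exits with status 98 ("Start of /tmp/lustre-mdt1 on mds1 failed 98"). On Linux, errno 98 is EADDRINUSE ("Address already in use"), which mount.lustre translates to "The target service's index is already in use" — here the freshly reformatted MDT0000 collides with index 0 still registered at the separately running MGS. A quick check of the errno mapping (Python used purely for illustration; errno numbering is Linux-specific):

```python
import errno
import os

# On Linux, errno 98 is EADDRINUSE -- the code behind the
# "Address already in use" message in the log above.
assert errno.EADDRINUSE == 98
print(os.strerror(errno.EADDRINUSE))  # Address already in use
```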
Attachments
Issue Links
- is related to: LU-8688 All Lustre test suites should run/PASS with separate MDS and MGS (Open)