Logging to shared log directory: /tmp/test_logs/1449534691
excepting tests: 32newtarball 59 64 80
skipping tests SLOW=no: 30a 31 45 69
Stopping clients: eagle-36vm6 /mnt/t32fs (opts:)
Stopping clients: eagle-36vm6 /mnt/t32fs2 (opts:)
Loading modules from /usr/lib64/lustre/tests/..
detected 1 online CPUs by sysfs
libcfs will create CPU partition based on online CPUs
debug=-1
subsystem_debug=all -lnet -lnd -pinger
Formatting mgs, mds, osts
Format mds1: mdt
Format mds2: mdt2
Format ost1: ost
start mds service on eagle-36vm6
Starting mds1: -o loop mdt /mnt/mds1
Started t32fs-MDT0000
start mds service on eagle-36vm6
Starting mds2: -o loop mdt2 /mnt/mds2
Started t32fs-MDT0001
start ost1 service on eagle-36vm6
Starting ost1: -o loop ost /mnt/ost1
Started t32fs-OST0000
osc.t32fs-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 7 sec
osc.t32fs-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec
stop ost1 service on eagle-36vm6
Stopping /mnt/ost1 (opts:-f) on eagle-36vm6
stop mds service on eagle-36vm6
Stopping /mnt/mds1 (opts:-f) on eagle-36vm6
stop mds service on eagle-36vm6
Stopping /mnt/mds2 (opts:-f) on eagle-36vm6
umount lustre on /mnt/t32fs.....
stop ost1 service on eagle-36vm6
stop mds service on eagle-36vm6
stop mds service on eagle-36vm6
modules unloaded.
== conf-sanity test 32newtarball: Create a new test_32 disk image tarball for this version == 00:32:05 (1449534725)
Loading modules from /usr/lib64/lustre/tests/..
detected 1 online CPUs by sysfs
libcfs will create CPU partition based on online CPUs
debug=-1
subsystem_debug=all -lnet -lnd -pinger
quota/lquota options: 'hash_lqs_cur_bits=3'
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.0109229 s, 960 MB/s
Stopping clients: eagle-36vm6 /mnt/t32fs (opts:)
Stopping clients: eagle-36vm6 /mnt/t32fs2 (opts:)
Loading modules from /usr/lib64/lustre/tests/..
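The `10+0 records in/out` lines above are standard dd(1) output from the test writing a 10 MB file of test data. A minimal sketch that reproduces that output (the destination path here is an example, not the test's actual target inside the mounted filesystem):

```shell
# Write 10 blocks of 1 MiB each; dd reports "10+0 records in/out" and
# "10485760 bytes (10 MB) copied" on stderr, matching the log above.
# /tmp/t32_testfile is a placeholder path for illustration only.
dd if=/dev/zero of=/tmp/t32_testfile bs=1M count=10
```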
detected 1 online CPUs by sysfs
libcfs will create CPU partition based on online CPUs
debug=-1
subsystem_debug=all -lnet -lnd -pinger
Formatting mgs, mds, osts
Format mds1: mdt
Format mds2: mdt2
Format ost1: ost
Checking servers environments
Checking clients eagle-36vm6 environments
Loading modules from /usr/lib64/lustre/tests/..
detected 1 online CPUs by sysfs
libcfs will create CPU partition based on online CPUs
debug=-1
subsystem_debug=all -lnet -lnd -pinger
Setup mgs, mdt, osts
Starting mds1: -o loop mdt /mnt/mds1
Started t32fs-MDT0000
Starting mds2: -o loop mdt2 /mnt/mds2
Started t32fs-MDT0001
Starting ost1: -o loop ost /mnt/ost1
Started t32fs-OST0000
mount t32fs on /mnt/t32fs.....
Starting client: eagle-36vm6: -o user_xattr,flock eagle-36vm6@tcp:/t32fs /mnt/t32fs
Starting client eagle-36vm6: -o user_xattr,flock eagle-36vm6@tcp:/t32fs /mnt/t32fs
Started clients eagle-36vm6: eagle-36vm6@tcp:/t32fs on /mnt/t32fs type lustre (rw,user_xattr,flock)
Using TIMEOUT=20
setting jobstats to procname_uid
Setting t32fs.sys.jobid_var from disable to procname_uid
Waiting 90 secs for update
Updated after 7s: wanted 'procname_uid' got 'procname_uid'
disable quota as required
+ /usr/bin/lfs setquota -u 60000 -b 0 -B 20480 -i 0 -I 2 /mnt/t32fs
warning: inode hardlimit is smaller than the minimal qunit size, please see the help of setquota or the Lustre manual for details.
+ set +x
Stopping clients: eagle-36vm6 /mnt/t32fs (opts:)
Stopping client eagle-36vm6 /mnt/t32fs opts:
Stopping clients: eagle-36vm6 /mnt/t32fs2 (opts:)
Stopping /mnt/mds1 (opts:-f) on eagle-36vm6
Stopping /mnt/mds2 (opts:-f) on eagle-36vm6
Stopping /mnt/ost1 (opts:-f) on eagle-36vm6
Checking servers environments
Checking clients eagle-36vm6 environments
Loading modules from /usr/lib64/lustre/tests/..
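The "Waiting 90 secs for update ... Updated after 7s" lines above reflect a poll-until-match pattern: repeatedly read a parameter until it reports the wanted value or a timeout expires. A generic sketch of that pattern (the function name and the file read in place of the real parameter query are assumptions for illustration):

```shell
# Poll a value until it matches $1 or $2 seconds elapse.
# Reading /tmp/param_state stands in for the real parameter query;
# the actual test reads a Lustre tunable instead.
wait_update() {
    local wanted=$1 timeout=${2:-90} elapsed=0 got
    while [ "$elapsed" -lt "$timeout" ]; do
        got=$(cat /tmp/param_state 2>/dev/null)
        if [ "$got" = "$wanted" ]; then
            echo "Updated after ${elapsed}s: wanted '$wanted' got '$got'"
            return 0
        fi
        sleep 1
        elapsed=$((elapsed + 1))
    done
    echo "Update not seen after ${timeout}s: wanted '$wanted' got '$got'"
    return 1
}
```

A call such as `wait_update procname_uid 90` then produces output in the same shape as the log line above once the value appears.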
detected 1 online CPUs by sysfs
libcfs will create CPU partition based on online CPUs
debug=-1
subsystem_debug=all -lnet -lnd -pinger
Setup mgs, mdt, osts
Starting mds1: -o loop mdt /mnt/mds1
Started t32fs-MDT0000
Starting mds2: -o loop mdt2 /mnt/mds2
Started t32fs-MDT0001
Starting ost1: -o loop ost /mnt/ost1
Started t32fs-OST0000
mount t32fs on /mnt/t32fs.....
Starting client: eagle-36vm6: -o user_xattr,flock eagle-36vm6@tcp:/t32fs /mnt/t32fs
Starting client eagle-36vm6: -o user_xattr,flock eagle-36vm6@tcp:/t32fs /mnt/t32fs
Started clients eagle-36vm6: eagle-36vm6@tcp:/t32fs on /mnt/t32fs type lustre (rw,user_xattr,flock)
Using TIMEOUT=20
disable quota as required
/mnt/t32fs /usr/lib64/lustre/tests
/usr/lib64/lustre/tests
Disk quotas for user 60000 (uid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
     /mnt/t32fs   10240       0   20480       -       1       0       2       -
t32fs-MDT0000_UUID    0       -       0       -       1       -       0       -
t32fs-MDT0001_UUID    0       -       0       -       0       -       0       -
t32fs-OST0000_UUID  10240     -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Stopping clients: eagle-36vm6 /mnt/t32fs (opts:)
Stopping client eagle-36vm6 /mnt/t32fs opts:
Stopping clients: eagle-36vm6 /mnt/t32fs2 (opts:)
Stopping /mnt/mds1 (opts:-f) on eagle-36vm6
Stopping /mnt/mds2 (opts:-f) on eagle-36vm6
Stopping /mnt/ost1 (opts:-f) on eagle-36vm6
/tmp/t32_image_create/src /usr/lib64/lustre/tests
/usr/lib64/lustre/tests
/tmp/t32_image_create/img /usr/lib64/lustre/tests
arch  bspace  commit  ispace  kernel  list  mdt  mdt2  ost  sha1sums
/usr/lib64/lustre/tests
Resetting fail_loc on all nodes...done.
PASS 32newtarball (174s)
umount lustre on /mnt/t32fs.....
stop ost1 service on eagle-36vm6
stop mds service on eagle-36vm6
stop mds service on eagle-36vm6
modules unloaded.
Stopping clients: eagle-36vm6 /mnt/t32fs (opts:)
Stopping clients: eagle-36vm6 /mnt/t32fs2 (opts:)
Loading modules from /usr/lib64/lustre/tests/..
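The image directory listing above includes a `sha1sums` file alongside the `mdt`, `mdt2`, and `ost` images. Assuming that file is a standard sha1sum(1) manifest used to verify the images on unpack, a minimal sketch of creating and checking such a manifest (paths and file contents here are examples, not the real image data):

```shell
# Build a tiny demo "image" directory with a sha1sums manifest.
# /tmp/t32_demo and the file contents are placeholders for illustration.
mkdir -p /tmp/t32_demo
echo "payload" > /tmp/t32_demo/mdt
( cd /tmp/t32_demo && sha1sum mdt > sha1sums )
# Verify the manifest; sha1sum -c prints "mdt: OK" when the file is intact.
( cd /tmp/t32_demo && sha1sum -c sha1sums )
```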
detected 1 online CPUs by sysfs
libcfs will create CPU partition based on online CPUs
debug=-1
subsystem_debug=all -lnet -lnd -pinger
quota/lquota options: 'hash_lqs_cur_bits=3'
Formatting mgs, mds, osts
Format mds1: mdt
Format mds2: mdt2
Format ost1: ost
== conf-sanity test complete, duration 225 sec == 00:35:16 (1449534916)