Formatting '/tmp/tmp.20a7BdrJQP/root.img', fmt=qcow2 cluster_size=4096 extended_l2=off compression_type=zlib size=781488128 backing_file=/lt/images/centos8.img backing_fmt=raw lazy_refcounts=off refcount_bits=16
RAM: 5496, ENV: FSTYPE=ldiskfs MDSCOUNT=2
VM ready in 11
Reading test skip list from /tmp/ltest.config
EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing check_logdir /tmp/ltest-logs
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
Logging to shared log directory: /tmp/ltest-logs
tmp.20a7BdrJQP: executing yml_node
Writer error: failed to resolve Netlink family id
IOC_LIBCFS_GET_NI error 22: Invalid argument
Client: 2.15.54
MDS: 2.15.54
OSS: 2.15.54
excepting tests: 32 53 63 102 115 119 123F 32newtarball 110
skipping tests SLOW=no: 45 69 106 111 114
Stopping clients: tmp.20a7BdrJQP /mnt/lustre (opts:-f)
Stopping clients: tmp.20a7BdrJQP /mnt/lustre2 (opts:-f)
tmp.20a7BdrJQP: executing set_hostid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
gss/krb5 is not supported
quota/lquota options: 'hash_lqs_cur_bits=3'
Formatting mgs, mds, osts
Format mds1: /tmp/lustre-mdt1
Format mds2: /tmp/lustre-mdt2
Format ost1: /tmp/lustre-ost1
Format ost2: /tmp/lustre-ost2
start mds service on tmp.20a7BdrJQP
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Commit the device label on /tmp/lustre-mdt1
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Commit the device label on /tmp/lustre-mdt2
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Commit the device label on /tmp/lustre-ost1
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 40
os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 3 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 40
os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP

== conf-sanity test 0: single mount setup ================ 15:19:54 (1679930394)
start mds service on tmp.20a7BdrJQP
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
setup single mount lustre success
umount lustre on /mnt/lustre.....
Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:)
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
modules unloaded.
PASS 0 (30s)

== conf-sanity test 1: start up ost twice (should return errors) ========================================================== 15:20:25 (1679930425)
start mds service on tmp.20a7BdrJQP
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
gss/krb5 is not supported
quota/lquota options: 'hash_lqs_cur_bits=3'
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost second time...
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
mount.lustre: according to /etc/mtab /dev/mapper/ost1_flakey is already mounted on /mnt/lustre-ost1
Start of /dev/mapper/ost1_flakey on ost1 failed 17
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
setup single mount lustre success
umount lustre on /mnt/lustre.....
Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:)
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
modules unloaded.
PASS 1 (40s)

== conf-sanity test 2: start up mds twice (should return err) ========================================================== 15:21:04 (1679930464)
start mds service on tmp.20a7BdrJQP
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
gss/krb5 is not supported
quota/lquota options: 'hash_lqs_cur_bits=3'
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start mds second time..
start mds service on tmp.20a7BdrJQP
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
mount.lustre: according to /etc/mtab /dev/mapper/mds1_flakey is already mounted on /mnt/lustre-mds1
Start of /dev/mapper/mds1_flakey on mds1 failed 17
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
setup single mount lustre success
umount lustre on /mnt/lustre.....
Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:)
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
modules unloaded.
PASS 2 (31s)

== conf-sanity test 3: mount client twice (should return err) ========================================================== 15:21:35 (1679930495)
start mds service on tmp.20a7BdrJQP
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
gss/krb5 is not supported
quota/lquota options: 'hash_lqs_cur_bits=3'
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
mount.lustre: according to /etc/mtab tmp.20a7BdrJQP@tcp:/lustre is already mounted on /mnt/lustre
setup single mount lustre success
umount lustre on /mnt/lustre.....
Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:)
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
modules unloaded.
PASS 3 (41s)

== conf-sanity test 4: force cleanup ost, then cleanup === 15:22:16 (1679930536)
start mds service on tmp.20a7BdrJQP
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
gss/krb5 is not supported
quota/lquota options: 'hash_lqs_cur_bits=3'
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
umount lustre on /mnt/lustre.....
Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:-f)
stop ost1 service on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
modules unloaded.
PASS 4 (66s)

== conf-sanity test 5a: force cleanup mds, then cleanup == 15:23:22 (1679930602)
start mds service on tmp.20a7BdrJQP
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
gss/krb5 is not supported
quota/lquota options: 'hash_lqs_cur_bits=3'
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
USER PID ACCESS COMMAND
/mnt/lustre: root kernel mount /mnt/lustre
/mnt/lustre is in use by user space process.
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
killing umount
waiting for umount to finish
conf-sanity.sh: line 361: 20228 Terminated $UMOUNT -f $MOUNT
manual umount lustre on /mnt/lustre....
umount: /mnt/lustre: not mounted.
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
modules unloaded.
manual umount lustre on /mnt/lustre....
umount: /mnt/lustre: not mounted.
/etc/mtab updated in 0 secs
PASS 5a (59s)

== conf-sanity test 5b: Try to start a client with no MGS (should return errs) ========================================================== 15:24:21 (1679930661)
start ost1 service on tmp.20a7BdrJQP
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
gss/krb5 is not supported
quota/lquota options: 'hash_lqs_cur_bits=3'
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
mount.lustre: mount tmp.20a7BdrJQP@tcp:/lustre at /mnt/lustre failed: Input/output error
Is the MGS running?
umount lustre on /mnt/lustre.....
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
modules unloaded.
PASS 5b (96s)

== conf-sanity test 5c: cleanup after failed mount (bug 2712) (should return errs) ========================================================== 15:25:57 (1679930757)
start mds service on tmp.20a7BdrJQP
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
gss/krb5 is not supported
quota/lquota options: 'hash_lqs_cur_bits=3'
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
mount wrong.lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/wrong.lustre /mnt/lustre
mount.lustre: mount tmp.20a7BdrJQP@tcp:/wrong.lustre at /mnt/lustre failed: File name too long
umount lustre on /mnt/lustre.....
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
modules unloaded.
PASS 5c (18s)

== conf-sanity test 5d: mount with ost down ============== 15:26:15 (1679930775)
start ost1 service on tmp.20a7BdrJQP
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
gss/krb5 is not supported
quota/lquota options: 'hash_lqs_cur_bits=3'
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start mds service on tmp.20a7BdrJQP
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
umount lustre on /mnt/lustre.....
Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:-f)
stop ost1 service on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
modules unloaded.
PASS 5d (139s)

== conf-sanity test 5e: delayed connect, don't crash (bug 10268) ========================================================== 15:28:34 (1679930914)
start mds service on tmp.20a7BdrJQP
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
gss/krb5 is not supported
quota/lquota options: 'hash_lqs_cur_bits=3'
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
fail_loc=0x80000506
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
umount lustre on /mnt/lustre.....
Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:)
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
modules unloaded.
PASS
PASS 5e (37s)

== conf-sanity test 5f: mds down, cleanup after failed mount (bug 2712) ========================================================== 15:29:11 (1679930951)
SKIP: conf-sanity test_5f needs separate mgs and mds
SKIP 5f (1s)

== conf-sanity test 5g: handle missing debugfs =========== 15:29:13 (1679930953)
modprobe: FATAL: Module lustre not found in directory /lib/modules/4.18.0
error: get_param: param_path 'devices': No such file or directory
none /sys/kernel/debug debugfs rw,relatime 0 0
PASS 5g (0s)

== conf-sanity test 5h: start mdt failure at mdt_fs_setup() ========================================================== 15:29:13 (1679930953)
start mds service on tmp.20a7BdrJQP
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
gss/krb5 is not supported
quota/lquota options: 'hash_lqs_cur_bits=3'
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
Stopping /mnt/lustre-mds1 (opts:) on tmp.20a7BdrJQP
fail_loc=0x80000135
start mds service on tmp.20a7BdrJQP
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
mount.lustre: mount /dev/mapper/mds1_flakey at /mnt/lustre-mds1 failed: No such file or directory
Is the MGS specification correct?
Is the filesystem name correct?
If upgrading, is the copied client log valid? (see upgrade docs)
Start of /dev/mapper/mds1_flakey on mds1 failed 2
start mds service on tmp.20a7BdrJQP
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
umount lustre on /mnt/lustre.....
Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:)
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
modules unloaded.
PASS 5h (48s)

== conf-sanity test 5i: start mdt failure at mdt_quota_init() ========================================================== 15:30:01 (1679931001)
start mds service on tmp.20a7BdrJQP
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
gss/krb5 is not supported
quota/lquota options: 'hash_lqs_cur_bits=3'
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
Stopping /mnt/lustre-mds1 (opts:) on tmp.20a7BdrJQP
fail_loc=0x80000A05
start mds service on tmp.20a7BdrJQP
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
mount.lustre: mount /dev/mapper/mds1_flakey at /mnt/lustre-mds1 failed: Bad file descriptor
Start of /dev/mapper/mds1_flakey on mds1 failed 9
start mds service on tmp.20a7BdrJQP
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
umount lustre on /mnt/lustre.....
Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:)
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
modules unloaded.
PASS 5i (46s)

== conf-sanity test 6: manual umount, then mount again === 15:30:47 (1679931047)
start mds service on tmp.20a7BdrJQP
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
gss/krb5 is not supported
quota/lquota options: 'hash_lqs_cur_bits=3'
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
manual umount lustre on /mnt/lustre....
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
umount lustre on /mnt/lustre.....
Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:)
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
modules unloaded.
PASS 6 (40s)

== conf-sanity test 7: manual umount, then cleanup ======= 15:31:27 (1679931087)
start mds service on tmp.20a7BdrJQP
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
gss/krb5 is not supported
quota/lquota options: 'hash_lqs_cur_bits=3'
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre manual umount lustre on /mnt/lustre.... stop ost1 service on tmp.20a7BdrJQP Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP stop mds service on tmp.20a7BdrJQP Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP stop mds service on tmp.20a7BdrJQP Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP modules unloaded. PASS 7 (38s) == conf-sanity test 8: double mount setup ================ 15:32:05 (1679931125) start mds service on tmp.20a7BdrJQP Loading modules from /mnt/build/lustre/tests/.. detected 2 online CPUs by sysfs Force libcfs to create 2 CPU partitions ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' gss/krb5 is not supported quota/lquota options: 'hash_lqs_cur_bits=3' Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 Started lustre-MDT0000 start mds service on tmp.20a7BdrJQP Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 Started lustre-MDT0001 tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F" tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F" start ost1 service on tmp.20a7BdrJQP Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 Started lustre-OST0000 tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F" mount lustre on /mnt/lustre..... Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre mount lustre on /mnt/lustre2..... 
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre2
setup double mount lustre success
umount lustre on /mnt/lustre2.....
Stopping client tmp.20a7BdrJQP /mnt/lustre2 (opts:)
umount lustre on /mnt/lustre.....
Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:)
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
modules unloaded.
PASS 8 (35s)

== conf-sanity test 9: test ptldebug and subsystem for mkfs ========================================================== 15:32:40 (1679931160)
start ost1 service on tmp.20a7BdrJQP
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
gss/krb5 is not supported
quota/lquota options: 'hash_lqs_cur_bits=3'
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
debug=inode trace
subsystem_debug=mds ost
lnet.debug success
lnet.subsystem_debug success
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
PASS 9 (84s)

== conf-sanity test 10a: find lctl param broken symlinks ========================================================== 15:34:04 (1679931244)
start mds service on tmp.20a7BdrJQP
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
umount lustre on /mnt/lustre.....
Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:)
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
modules unloaded.
PASS 10a (34s)

== conf-sanity test 17: Verify failed mds_postsetup won't fail assertion (2936) (should return errs) ========================================================== 15:34:38 (1679931278)
start mds service on tmp.20a7BdrJQP
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
gss/krb5 is not supported
quota/lquota options: 'hash_lqs_cur_bits=3'
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
setup single mount lustre success
umount lustre on /mnt/lustre.....
Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:)
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
modules unloaded.
Remove mds config log
debugfs 1.45.6.wc3 (28-Sep-2020)
start ost1 service on tmp.20a7BdrJQP
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
../libcfs/libcfs/libcfs options: 'cpu_npartitions=2'
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
gss/krb5 is not supported
quota/lquota options: 'hash_lqs_cur_bits=3'
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start mds service on tmp.20a7BdrJQP
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
mount.lustre: mount /dev/mapper/mds1_flakey at /mnt/lustre-mds1 failed: No such file or directory
Is the MGS specification correct?
Is the filesystem name correct?
If upgrading, is the copied client log valid? (see upgrade docs)
Start of /dev/mapper/mds1_flakey on mds1 failed 2
Stopping clients: tmp.20a7BdrJQP /mnt/lustre (opts:-f)
Stopping clients: tmp.20a7BdrJQP /mnt/lustre2 (opts:-f)
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
tmp.20a7BdrJQP: executing set_hostid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
gss/krb5 is not supported
Formatting mgs, mds, osts
Format mds1: /dev/mapper/mds1_flakey
Format mds2: /dev/mapper/mds2_flakey
Format ost1: /dev/mapper/ost1_flakey
Format ost2: /tmp/lustre-ost2
start mds service on tmp.20a7BdrJQP
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Commit the device label on /dev/mapper/mds1_flakey
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Commit the device label on /dev/mapper/mds2_flakey
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Commit the device label on /dev/mapper/ost1_flakey
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 40
os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 0 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 40
os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
PASS 17 (208s)

== conf-sanity test 18: check mkfs creates large journals ========================================================== 15:38:06 (1679931486)
SKIP: conf-sanity test_18 /dev/mapper/mds1_flakey too small for 2000000kB MDS
SKIP 18 (0s)

== conf-sanity test 19a: start/stop MDS without OSTs ===== 15:38:06 (1679931486)
start mds service on tmp.20a7BdrJQP
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
PASS 19a (13s)

== conf-sanity test 19b: start/stop OSTs without MDS ===== 15:38:19 (1679931499)
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
PASS 19b (60s)

== conf-sanity test 20: remount ro,rw mounts work and doesn't break /etc/mtab ========================================================== 15:39:19 (1679931559)
start mds service on tmp.20a7BdrJQP
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
setup single mount lustre success
mount lustre with opts remount,ro on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o remount,ro tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
touch: cannot touch '/mnt/lustre/f20.conf-sanity': Read-only file system
mount lustre with opts remount,rw on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o remount,rw tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
umount lustre on /mnt/lustre.....
Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:)
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
PASS 20 (26s)

== conf-sanity test 21a: start mds before ost, stop ost first ========================================================== 15:39:45 (1679931585)
start mds service on tmp.20a7BdrJQP
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 40
os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 1 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 40
os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
PASS 21a (22s)

== conf-sanity test 21b: start ost before mds, stop mds first ========================================================== 15:40:07 (1679931607)
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start mds service on tmp.20a7BdrJQP
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 40
os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 0 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 40
os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
PASS 21b (103s)

== conf-sanity test 21c: start mds between two osts, stop mds last ========================================================== 15:41:50 (1679931710)
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start mds service on tmp.20a7BdrJQP
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost2 service on tmp.20a7BdrJQP
Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2
Commit the device label on /tmp/lustre-ost2
Started lustre-OST0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0001-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0001-osc-MDT0000.ost_server_uuid 40
os[cp].lustre-OST0001-osc-MDT0000.ost_server_uuid in FULL state after 1 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0001-osc-MDT0001.ost_server_uuid 40
os[cp].lustre-OST0001-osc-MDT0001.ost_server_uuid in FULL state after 0 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop ost2 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost2 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
checking for existing Lustre data: found
Read previous values:
Target: lustre-MDT0000
Index: 0
Lustre FS: lustre
Mount type: ldiskfs
Flags: 0x5 (MDT MGS )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: sys.timeout=20 mdt.identity_upcall=/mnt/build/lustre/tests/../utils/l_getidentity

Permanent disk data:
Target: lustre=MDT0000
Index: 0
Lustre FS: lustre
Mount type: ldiskfs
Flags: 0x105 (MDT MGS writeconf )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: sys.timeout=20 mdt.identity_upcall=/mnt/build/lustre/tests/../utils/l_getidentity

checking for existing Lustre data: found
Read previous values:
Target: lustre-MDT0001
Index: 1
Lustre FS: lustre
Mount type: ldiskfs
Flags: 0x1 (MDT )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20 mdt.identity_upcall=/mnt/build/lustre/tests/../utils/l_getidentity

Permanent disk data:
Target: lustre=MDT0001
Index: 1
Lustre FS: lustre
Mount type: ldiskfs
Flags: 0x101 (MDT writeconf )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20 mdt.identity_upcall=/mnt/build/lustre/tests/../utils/l_getidentity

checking for existing Lustre data: found
Read previous values:
Target: lustre-OST0000
Index: 0
Lustre FS: lustre
Mount type: ldiskfs
Flags: 0x2 (OST )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20

Permanent disk data:
Target: lustre=OST0000
Index: 0
Lustre FS: lustre
Mount type: ldiskfs
Flags: 0x102 (OST writeconf )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20

checking for existing Lustre data: found
Read previous values:
Target: lustre-OST0001
Index: 1
Lustre FS: lustre
Mount type: ldiskfs
Flags: 0x2 (OST )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20

Permanent disk data:
Target: lustre=OST0001
Index: 1
Lustre FS: lustre
Mount type: ldiskfs
Flags: 0x102 (OST writeconf )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20
PASS 21c (117s)

== conf-sanity test 21d: start mgs then ost and then mds ========================================================== 15:43:47 (1679931827)
SKIP: conf-sanity test_21d need separate mgs device
SKIP 21d (0s)

== conf-sanity test 21e: separate MGS and MDS ============ 15:43:47 (1679931827)
SKIP: conf-sanity test_21e mixed loopback and real device not working
SKIP 21e (1s)

== conf-sanity test 22: start a client before osts (should return errs) ========================================================== 15:43:48 (1679931828)
start mds service on tmp.20a7BdrJQP
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
Client mount with ost in logs, but none running
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 40
os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 0 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 40
os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
umount lustre on /mnt/lustre.....
Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:-f)
PASS
Client mount with a running ost
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 40
os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 12 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 40
os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state (FULL|IDLE) osc.lustre-OST0000-osc-ffff8b7863bc2000.ost_server_uuid 40
osc.lustre-OST0000-osc-ffff8b7863bc2000.ost_server_uuid in FULL state after 0 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
setup single mount lustre success
PASS
umount lustre on /mnt/lustre.....
Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:)
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
modules unloaded.
PASS 22 (65s)

== conf-sanity test 23a: interrupt client during recovery mount delay ========================================================== 15:44:53 (1679931893)
start mds service on tmp.20a7BdrJQP
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
gss/krb5 is not supported
quota/lquota options: 'hash_lqs_cur_bits=3'
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
Stopping /mnt/lustre-mds1 (opts:) on tmp.20a7BdrJQP
Stopping client /mnt/lustre (opts: -f)
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
mount pid is 66684, mount.lustre pid is 66685
PID TTY TIME CMD
66685 ? 00:00:00 mount.lustre
PID TTY TIME CMD
waiting for mount to finish
root 66684 66683 0 15:45 ? 00:00:00 mount -t lustre -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
root 66685 66684 0 15:45 ? 00:00:00 /sbin/mount.lustre tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre -o rw,user_xattr,flock
root 66719 64787 0 15:45 ? 00:00:00 grep mount
./../tests/test-framework.sh: line 6120: 66684 Terminated LUSTRE="/mnt/build/lustre/tests/.." bash -c "mount -t lustre -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre"
PID1=
PID2=
umount lustre on /mnt/lustre.....
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
modules unloaded.
PASS 23a (55s)

== conf-sanity test 23b: Simulate -EINTR during mount ==== 15:45:48 (1679931948)
start mds service on tmp.20a7BdrJQP
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs Force libcfs to create 2 CPU partitions ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' gss/krb5 is not supported quota/lquota options: 'hash_lqs_cur_bits=3' Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 Started lustre-MDT0000 start mds service on tmp.20a7BdrJQP Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 Started lustre-MDT0001 tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F" tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F" start ost1 service on tmp.20a7BdrJQP Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 Started lustre-OST0000 tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F" fail_loc=0x80000313 mount lustre on /mnt/lustre..... Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre umount lustre on /mnt/lustre..... Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:) stop ost1 service on tmp.20a7BdrJQP Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP stop mds service on tmp.20a7BdrJQP Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP stop mds service on tmp.20a7BdrJQP Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP modules unloaded. 
PASS 23b (45s)

== conf-sanity test 24a: Multiple MDTs on a single node == 15:46:33 (1679931993)
SKIP: conf-sanity test_24a mixed loopback and real device not working
SKIP 24a (1s)

== conf-sanity test 24b: Multiple MGSs on a single node (should return err) ========================================================== 15:46:34 (1679931994)
SKIP: conf-sanity test_24b mixed loopback and real device not working
SKIP 24b (0s)

== conf-sanity test 25: Verify modules are referenced ==== 15:46:34 (1679931994)
start mds service on tmp.20a7BdrJQP
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
gss/krb5 is not supported
quota/lquota options: 'hash_lqs_cur_bits=3'
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
setup single mount lustre success
umount lustre on /mnt/lustre.....
Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:)
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
modules unloaded.
PASS 25 (37s)

== conf-sanity test 26: MDT startup failure cleans LOV (should return errs) ========================================================== 15:47:12 (1679932032)
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
gss/krb5 is not supported
quota/lquota options: 'hash_lqs_cur_bits=3'
fail_loc=0x80000135
start mds service on tmp.20a7BdrJQP
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
mount.lustre: mount /dev/mapper/mds1_flakey at /mnt/lustre-mds1 failed: No such file or directory
Is the MGS specification correct?
Is the filesystem name correct?
If upgrading, is the copied client log valid? (see upgrade docs)
Start of /dev/mapper/mds1_flakey on mds1 failed 2
start mds service on tmp.20a7BdrJQP
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
modules unloaded.
PASS 26 (25s)

== conf-sanity test 27a: Reacquire MGS lock if OST started first ========================================================== 15:47:37 (1679932057)
umount lustre on /mnt/lustre.....
stop ost1 service on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
modules unloaded.
start ost1 service on tmp.20a7BdrJQP
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
../libcfs/libcfs/libcfs options: 'cpu_npartitions=2'
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
gss/krb5 is not supported
quota/lquota options: 'hash_lqs_cur_bits=3'
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start mds service on tmp.20a7BdrJQP
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
Requeue thread should have started: 76709 ? 00:00:00 ll_cfg_requeue
Setting lustre-OST0000.ost.client_cache_seconds from 110 to 115
Waiting 90s for '115'
Updated after 2s: want '115' got '115'
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
modules unloaded.
PASS 27a (129s)

== conf-sanity test 27b: Reacquire MGS lock after failover ========================================================== 15:49:46 (1679932186)
start mds service on tmp.20a7BdrJQP
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
gss/krb5 is not supported
quota/lquota options: 'hash_lqs_cur_bits=3'
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
Failing mds1 on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:) on tmp.20a7BdrJQP
15:50:15 (1679932215) shut down
Failover mds1 to tmp.20a7BdrJQP
mount facets: mds1
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
15:50:26 (1679932226) targets are mounted
15:50:27 (1679932227) facet_failover done
Setting lustre-MDT0000.mdt.identity_acquire_expire from 30 to 35
Waiting 90s for '35'
Updated after 6s: want '35' got '35'
Setting lustre-MDT0000.mdc.max_rpcs_in_flight from 8 to 13
Waiting 90s for '13'
Updated after 7s: want '13' got '13'
setup single mount lustre success
umount lustre on /mnt/lustre.....
Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:)
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
modules unloaded.
PASS 27b (76s)

== conf-sanity test 28A: permanent parameter setting ===== 15:51:02 (1679932262)
start mds service on tmp.20a7BdrJQP
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
gss/krb5 is not supported
quota/lquota options: 'hash_lqs_cur_bits=3'
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
ORIG:41 MAX:41
ORIG:38 MAX:41
Setting lustre.llite.max_read_ahead_whole_mb from 41 to 39
Waiting 90s for '39'
Updated after 3s: want '39' got '39'
Setting lustre.llite.max_read_ahead_whole_mb from 39 to 40
Waiting 90s for '40'
Updated after 5s: want '40' got '40'
umount lustre on /mnt/lustre.....
Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:)
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
New config success: got 40
Setting lustre.llite.max_read_ahead_whole_mb from 40 to 38
Waiting 90s for '38'
Updated after 6s: want '38' got '38'
umount lustre on /mnt/lustre.....
Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:)
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
modules unloaded.
PASS 28A (53s)

== conf-sanity test 28a: set symlink parameters permanently with lctl ========================================================== 15:51:55 (1679932315)
start mds service on tmp.20a7BdrJQP
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
gss/krb5 is not supported
quota/lquota options: 'hash_lqs_cur_bits=3'
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
Setting lustre-OST0000.ost.client_cache_seconds from 115 to 230
Waiting 90s for '230'
Updated after 7s: want '230' got '230'
Setting lustre-OST0000.ost.client_cache_seconds from 230 to 115
Waiting 90s for '115'
Updated after 6s: want '115' got '115'
Setting lustre-OST0000.osd.auto_scrub from 2592000 to 1
Waiting 90s for '1'
Updated after 6s: want '1' got '1'
Setting lustre-OST0000.osd.auto_scrub from 1 to 2592000
Waiting 90s for '2592000'
Updated after 5s: want '2592000' got '2592000'
umount lustre on /mnt/lustre.....
Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:)
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
modules unloaded.
PASS 28a (61s)

== conf-sanity test 29: permanently remove an OST ======== 15:52:56 (1679932376)
start ost2 service on tmp.20a7BdrJQP
Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2
Started lustre-OST0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0001-osc-[-0-9a-f]*.ost_server_uuid
osc.lustre-OST0001-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
Setting lustre-OST0001.osc.active from 1 to 0
Waiting 90s for '0'
Waiting 80s for '0'
Updated after 11s: want '0' got '0'
Live client success: got lustre-OST0001_UUID FULL DEACTIVATED
check osc.lustre-OST0001-osc-MDT0000.active target updated after 0 sec (got 0)
check osc.lustre-OST0001-osc-MDT0001.active target updated after 0 sec (got 0)
umount lustre on /mnt/lustre.....
Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:)
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
New client success: got 'lustre-OST0001_UUID NEW DEACTIVATED'
Setting lustre-OST0001.osc.active from 0 to 1
Waiting 90s for '1'
Updated after 7s: want '1' got '1'
umount lustre on /mnt/lustre.....
Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:)
stop ost2 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost2 (opts:-f) on tmp.20a7BdrJQP
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
modules unloaded.
tunefs.lustre: Unable to mount /dev/mapper/mds1_flakey: No such device
Is the ldiskfs module available?
tunefs.lustre FATAL: failed to write local files
tunefs.lustre: exiting with 19 (No such device)
checking for existing Lustre data: found
Read previous values:
Target:     lustre-MDT0000
Index:      0
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x5 (MDT MGS )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: sys.timeout=20 mdt.identity_upcall=/mnt/build/lustre/tests/../utils/l_getidentity
Permanent disk data:
Target:     lustre=MDT0000
Index:      0
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x105 (MDT MGS writeconf )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: sys.timeout=20 mdt.identity_upcall=/mnt/build/lustre/tests/../utils/l_getidentity
tunefs.lustre: Unable to mount /dev/mapper/mds2_flakey: No such device
Is the ldiskfs module available?
tunefs.lustre FATAL: failed to write local files
tunefs.lustre: exiting with 19 (No such device)
checking for existing Lustre data: found
Read previous values:
Target:     lustre-MDT0001
Index:      1
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x1 (MDT )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20 mdt.identity_upcall=/mnt/build/lustre/tests/../utils/l_getidentity
Permanent disk data:
Target:     lustre=MDT0001
Index:      1
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x101 (MDT writeconf )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20 mdt.identity_upcall=/mnt/build/lustre/tests/../utils/l_getidentity
tunefs.lustre: Unable to mount /dev/mapper/ost1_flakey: No such device
Is the ldiskfs module available?
tunefs.lustre FATAL: failed to write local files
tunefs.lustre: exiting with 19 (No such device)
checking for existing Lustre data: found
Read previous values:
Target:     lustre-OST0000
Index:      0
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x2 (OST )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20
Permanent disk data:
Target:     lustre=OST0000
Index:      0
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x102 (OST writeconf )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20
tunefs.lustre: Unable to mount /dev/mapper/ost2_flakey: No such device
Is the ldiskfs module available?
tunefs.lustre FATAL: failed to write local files
tunefs.lustre: exiting with 19 (No such device)
checking for existing Lustre data: found
Read previous values:
Target:     lustre-OST0001
Index:      1
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x2 (OST )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20
Permanent disk data:
Target:     lustre=OST0001
Index:      1
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x102 (OST writeconf )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20
tunefs failed, reformatting instead
Stopping clients: tmp.20a7BdrJQP /mnt/lustre (opts:-f)
Stopping clients: tmp.20a7BdrJQP /mnt/lustre2 (opts:-f)
tmp.20a7BdrJQP: executing set_hostid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
gss/krb5 is not supported
quota/lquota options: 'hash_lqs_cur_bits=3'
Formatting mgs, mds, osts
Format mds1: /dev/mapper/mds1_flakey
Format mds2: /dev/mapper/mds2_flakey
Format ost1: /dev/mapper/ost1_flakey
Format ost2: /dev/mapper/ost2_flakey
start mds service on tmp.20a7BdrJQP
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Commit the device label on /dev/mapper/mds1_flakey
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Commit the device label on /dev/mapper/mds2_flakey
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Commit the device label on /dev/mapper/ost1_flakey
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 40
os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 4 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 40
os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
PASS 29 (152s)

== conf-sanity test 30a: Big config llog and permanent parameter deletion ========================================================== 15:55:28 (1679932528)
start mds service on tmp.20a7BdrJQP
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
Big config llog
Setting lustre.llite.max_read_ahead_whole_mb from 41 to 1
Waiting 90s for '1'
Updated after 3s: want '1' got '1'
Setting lustre.llite.max_read_ahead_whole_mb from 1 to 2
Waiting 90s for '2'
Updated after 5s: want '2' got '2'
Setting lustre.llite.max_read_ahead_whole_mb from 2 to 3
Waiting 90s for '3'
Updated after 9s: want '3' got '3'
Setting lustre.llite.max_read_ahead_whole_mb from 3 to 4
Waiting 90s for '4'
Updated after 8s: want '4' got '4'
Setting lustre.llite.max_read_ahead_whole_mb from 4 to 5
Waiting 90s for '5'
Updated after 7s: want '5' got '5'
Setting lustre.llite.max_read_ahead_whole_mb from 5 to 4
Waiting 90s for '4'
Updated after 7s: want '4' got '4'
Setting lustre.llite.max_read_ahead_whole_mb from 4 to 3
Waiting 90s for '3'
Updated after 8s: want '3' got '3'
Setting lustre.llite.max_read_ahead_whole_mb from 3 to 2
Waiting 90s for '2'
Updated after 7s: want '2' got '2'
Setting lustre.llite.max_read_ahead_whole_mb from 2 to 1
Waiting 90s for '1'
Updated after 5s: want '1' got '1'
Setting lustre.llite.max_read_ahead_whole_mb from 1 to 2
Waiting 90s for '2'
Updated after 7s: want '2' got '2'
Setting lustre.llite.max_read_ahead_whole_mb from 2 to 3
Waiting 90s for '3'
Updated after 8s: want '3' got '3'
Setting lustre.llite.max_read_ahead_whole_mb from 3 to 4
Waiting 90s for '4'
Updated after 9s: want '4' got '4'
Setting lustre.llite.max_read_ahead_whole_mb from 4 to 5
Waiting 90s for '5'
Updated after 6s: want '5' got '5'
Setting lustre.llite.max_read_ahead_whole_mb from 5 to 4
Waiting 90s for '4'
Waiting 80s for '4'
Updated after 11s: want '4' got '4'
Setting lustre.llite.max_read_ahead_whole_mb from 4 to 3
Waiting 90s for '3'
Updated after 4s: want '3' got '3'
Setting lustre.llite.max_read_ahead_whole_mb from 3 to 2
Waiting 90s for '2'
Updated after 10s: want '2' got '2'
Setting lustre.llite.max_read_ahead_whole_mb from 2 to 1
Waiting 90s for '1'
Updated after 10s: want '1' got '1'
Setting lustre.llite.max_read_ahead_whole_mb from 1 to 2
Waiting 90s for '2'
Updated after 8s: want '2' got '2'
Setting lustre.llite.max_read_ahead_whole_mb from 2 to 3
Waiting 90s for '3'
Updated after 7s: want '3' got '3'
Setting lustre.llite.max_read_ahead_whole_mb from 3 to 4
Waiting 90s for '4'
Updated after 6s: want '4' got '4'
Setting lustre.llite.max_read_ahead_whole_mb from 4 to 5
Waiting 90s for '5'
Waiting 80s for '5'
Updated after 11s: want '5' got '5'
umount lustre on /mnt/lustre.....
Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:)
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
PASS
Erase parameter setting
umount lustre on /mnt/lustre.....
Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:)
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
deleted (default) value=41, orig=41
umount lustre on /mnt/lustre.....
Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:)
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
modules unloaded.
PASS 30a (190s)

== conf-sanity test 30b: Remove failover nids ============ 15:58:38 (1679932718)
start mds service on tmp.20a7BdrJQP
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
gss/krb5 is not supported
quota/lquota options: 'hash_lqs_cur_bits=3'
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
Using fake nid 192.168.125.50@tcp
Setting lustre-OST0000.failover.node from /mnt/build/lustre/tests/../utils/lctl get_param -n osc.lustre-OST0000-osc-[^M]*.import | grep failover_nids | sed -n 's/.*\(192.168.125.50@tcp\).*/\1/p' to 192.168.125.50@tcp
Waiting 90s for '192.168.125.50@tcp'
should have 5 entries in failover nids string, have 5
umount lustre on /mnt/lustre.....
Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:)
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
only 4 final entries should remain in failover nids string, have 4
umount lustre on /mnt/lustre.....
Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:)
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
modules unloaded.
PASS 30b (40s)

== conf-sanity test 31: Connect to non-existent node (shouldn't crash) ========================================================== 15:59:19 (1679932759)
umount lustre on /mnt/lustre.....
stop ost1 service on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
modules unloaded.
PASS 31 (4s)
SKIP: conf-sanity test_32a skipping excluded test 32a (base 32)
SKIP: conf-sanity test_32b skipping excluded test 32b (base 32)
SKIP: conf-sanity test_32c skipping excluded test 32c (base 32)
SKIP: conf-sanity test_32d skipping excluded test 32d (base 32)
SKIP: conf-sanity test_32e skipping excluded test 32e (base 32)
SKIP: conf-sanity test_32f skipping excluded test 32f (base 32)
SKIP: conf-sanity test_32g skipping excluded test 32g (base 32)

== conf-sanity test 33a: Mount ost with a large index number ========================================================== 15:59:23 (1679932763)
SKIP: conf-sanity test_33a mixed loopback and real device not working
SKIP 33a (1s)

== conf-sanity test 33b: Drop cancel during umount ======= 15:59:24 (1679932764)
start mds service on tmp.20a7BdrJQP
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
gss/krb5 is not supported
quota/lquota options: 'hash_lqs_cur_bits=3'
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
1+0 records in
1+0 records out
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0651426 s, 16.1 MB/s
fail_loc=0x80000304
umount lustre on /mnt/lustre.....
Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:)
umount lustre on /mnt/lustre.....
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
modules unloaded.
PASS 33b (42s)

== conf-sanity test 34a: umount with opened file should be fail ========================================================== 16:00:06 (1679932806)
start mds service on tmp.20a7BdrJQP
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
gss/krb5 is not supported
quota/lquota options: 'hash_lqs_cur_bits=3'
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
Reading test skip list from /tmp/ltest.config
EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
multiop /mnt/lustre/file vO_c
TMPPIPE=/tmp/multiop_open_wait_pipe.105527
manual umount lustre on /mnt/lustre....
umount: /mnt/lustre: target is busy.
umount lustre on /mnt/lustre.....
Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:)
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
modules unloaded.
PASS 34a (40s)
== conf-sanity test 34b: force umount with failed mds should be normal ========================================================== 16:00:46 (1679932846)
start mds service on tmp.20a7BdrJQP
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
gss/krb5 is not supported
quota/lquota options: 'hash_lqs_cur_bits=3'
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
manual umount lustre on /mnt/lustre....
umount lustre on /mnt/lustre.....
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
modules unloaded.
PASS 34b (55s)
== conf-sanity test 34c: force umount with failed ost should be normal ========================================================== 16:01:41 (1679932901)
start mds service on tmp.20a7BdrJQP
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
gss/krb5 is not supported
quota/lquota options: 'hash_lqs_cur_bits=3'
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
manual umount lustre on /mnt/lustre....
umount lustre on /mnt/lustre.....
stop ost1 service on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
modules unloaded.
PASS 34c (55s)
== conf-sanity test 35a: Reconnect to the last active server first ========================================================== 16:02:36 (1679932956)
start mds service on tmp.20a7BdrJQP
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
gss/krb5 is not supported
quota/lquota options: 'hash_lqs_cur_bits=3'
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
debug=ha
Set up a fake failnode for the MDS
Wait for RECONNECT_INTERVAL seconds (10s)
conf-sanity.sh test_35a 2023-03-2716h03m06s
Stopping the MDT: lustre-MDT0000
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
Restarting the MDT: lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
Wait for df (113547) ... done
debug=trace inode super iotrace malloc cache info ioctl neterror net warning buffs other dentry nettrace page dlmtrace error emerg ha rpctrace vfstrace reada mmap config console quota sec lfsck hsm snapshot layout
Debug log: 339 lines, 339 kept, 0 dropped, 0 bad.
umount lustre on /mnt/lustre.....
Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:)
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
modules unloaded.
tunefs.lustre: Unable to mount /dev/mapper/mds1_flakey: No such device
Is the ldiskfs module available?
tunefs.lustre FATAL: failed to write local files
tunefs.lustre: exiting with 19 (No such device)
checking for existing Lustre data: found
Read previous values:
Target:     lustre-MDT0000
Index:      0
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x5
            (MDT MGS )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: sys.timeout=20 mdt.identity_upcall=/mnt/build/lustre/tests/../utils/l_getidentity

Permanent disk data:
Target:     lustre=MDT0000
Index:      0
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x105
            (MDT MGS writeconf )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: sys.timeout=20 mdt.identity_upcall=/mnt/build/lustre/tests/../utils/l_getidentity

tunefs.lustre: Unable to mount /dev/mapper/mds2_flakey: No such device
Is the ldiskfs module available?
tunefs.lustre FATAL: failed to write local files
tunefs.lustre: exiting with 19 (No such device)
checking for existing Lustre data: found
Read previous values:
Target:     lustre-MDT0001
Index:      1
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x1
            (MDT )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20 mdt.identity_upcall=/mnt/build/lustre/tests/../utils/l_getidentity

Permanent disk data:
Target:     lustre=MDT0001
Index:      1
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x101
            (MDT writeconf )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20 mdt.identity_upcall=/mnt/build/lustre/tests/../utils/l_getidentity

tunefs.lustre: Unable to mount /dev/mapper/ost1_flakey: No such device
Is the ldiskfs module available?
tunefs.lustre FATAL: failed to write local files
tunefs.lustre: exiting with 19 (No such device)
checking for existing Lustre data: found
Read previous values:
Target:     lustre-OST0000
Index:      0
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x2
            (OST )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20

Permanent disk data:
Target:     lustre=OST0000
Index:      0
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x102
            (OST writeconf )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20

tunefs.lustre: Unable to mount /dev/loop4: No such device
Is the ldiskfs module available?
tunefs.lustre FATAL: failed to write local files
tunefs.lustre: exiting with 19 (No such device)
checking for existing Lustre data: found
Read previous values:
Target:     lustre-OST0001
Index:      1
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x62
            (OST first_time update )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20

Permanent disk data:
Target:     lustre=OST0001
Index:      1
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x162
            (OST first_time update writeconf )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20

tunefs failed, reformatting instead
Stopping clients: tmp.20a7BdrJQP /mnt/lustre (opts:-f)
Stopping clients: tmp.20a7BdrJQP /mnt/lustre2 (opts:-f)
tmp.20a7BdrJQP: executing set_hostid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
Loading modules from /mnt/build/lustre/tests/..
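The `Flags:` values printed by tunefs.lustre above are bitmasks over per-target feature bits. As an annotation only (the bit values are assumed from Lustre's on-disk label header, lustre_disk.h, and are not stated in this log), a small sh sketch that reproduces exactly the combinations seen here:

```shell
# Decode a tunefs.lustre "Flags:" bitmask. ASSUMPTION: LDD_F_* bit values
# taken from lustre_disk.h (0x01 MDT, 0x02 OST, 0x04 MGS, 0x20 first_time,
# 0x40 update, 0x100 writeconf). Illustrative helper, not part of the suite.
decode_ldd_flags() {
    f=$(($1)); out=""
    [ $((f & 0x001)) -ne 0 ] && out="$out MDT"
    [ $((f & 0x002)) -ne 0 ] && out="$out OST"
    [ $((f & 0x004)) -ne 0 ] && out="$out MGS"
    [ $((f & 0x020)) -ne 0 ] && out="$out first_time"
    [ $((f & 0x040)) -ne 0 ] && out="$out update"
    [ $((f & 0x100)) -ne 0 ] && out="$out writeconf"
    echo "${out# }"
}

decode_ldd_flags 0x105   # MDT MGS writeconf
decode_ldd_flags 0x62    # OST first_time update
```

This matches the transitions in the dump: writeconf adds 0x100 to each target's previous flags (0x5 to 0x105, 0x1 to 0x101, 0x2 to 0x102, 0x62 to 0x162).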
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
gss/krb5 is not supported
quota/lquota options: 'hash_lqs_cur_bits=3'
Formatting mgs, mds, osts
Format mds1: /dev/mapper/mds1_flakey
Format mds2: /dev/mapper/mds2_flakey
Format ost1: /dev/mapper/ost1_flakey
Format ost2: /tmp/lustre-ost2
start mds service on tmp.20a7BdrJQP
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Commit the device label on /dev/mapper/mds1_flakey
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Commit the device label on /dev/mapper/mds2_flakey
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Commit the device label on /dev/mapper/ost1_flakey
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 40
os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 0 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 40
os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
PASS 35a (98s)
== conf-sanity test 35b: Continue reconnection retries, if the active server is busy ========================================================== 16:04:14 (1679933054)
SKIP: conf-sanity test_35b local MDS
SKIP 35b (1s)
== conf-sanity test 36: df report consistency on OSTs with different block size ========================================================== 16:04:15 (1679933055)
SKIP: conf-sanity test_36 mixed loopback and real device not working
SKIP 36 (0s)
== conf-sanity test 37: verify set tunables works for symlink device ========================================================== 16:04:16 (1679933056)
MDS : /dev/mapper/mds1_flakey
SYMLINK : /tmp/sym_mdt.img
mount symlink device - /tmp/sym_mdt.img
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
gss/krb5 is not supported
mount_op=arg[0] = /sbin/mount.lustre
arg[1] = -v
arg[2] = -o
arg[3] = rw,localrecov
arg[4] = /dev/mapper/mds1_flakey
arg[5] = /mnt/lustre-mds1
source = /dev/mapper/mds1_flakey (/dev/mapper/mds1_flakey), target = /mnt/lustre-mds1
options = rw,localrecov
checking for existing Lustre data: found
mounting device /dev/mapper/mds1_flakey at /mnt/lustre-mds1, flags=0x1000000 options=user_xattr,errors=remount-ro,localrecov,osd=osd-ldiskfs,mgs,param=sys.timeout=20,param=mdt.identity_upcall=/mnt/build/lustre/tests/../utils/l_getidentity,svname=lustre-MDT0000,device=/dev/mapper/mds1_flakey
PASS 37 (4s)
== conf-sanity test 38: MDS recreates missing lov_objid file from OST data ========================================================== 16:04:20 (1679933060)
start mds service on tmp.20a7BdrJQP
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
copying 10 files to /mnt/lustre/d38.conf-sanity
tar: Removing leading `/' from member names
tar: Removing leading `/' from hard link targets
umount lustre on /mnt/lustre.....
Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:)
osp.lustre-OST0000-osc-MDT0000.prealloc_next_id=43
osp.lustre-OST0000-osc-MDT0001.prealloc_next_id=3
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
delete lov_objid file on MDS
000000 42
000008
start mds service on tmp.20a7BdrJQP
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
osp.lustre-OST0000-osc-MDT0000.prealloc_next_id=2
osp.lustre-OST0000-osc-MDT0001.prealloc_next_id=34
umount lustre on /mnt/lustre.....
Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:)
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
000000 2
000008
8+0 records in
8+0 records out
4096 bytes (4.1 kB, 4.0 KiB) copied, 9.932e-05 s, 41.2 MB/s
start mds service on tmp.20a7BdrJQP
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
osp.lustre-OST0000-osc-MDT0000.prealloc_next_id=35
osp.lustre-OST0000-osc-MDT0001.prealloc_next_id=66
umount lustre on /mnt/lustre.....
Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:)
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
000000 34
000008
files compared the same
umount lustre on /mnt/lustre.....
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
modules unloaded.
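The bare `000000 42` / `000008` style dumps in test 38 above are octal-dump views of the MDT's lov_objid file, which the test deletes and then expects the MDS to rebuild from OST data. Assuming the file is an array of little-endian u64 object ids, one entry per OST index (an assumption about the on-disk format, not stated in this log), the dump can be reproduced against a forged file; the path and od flags here are illustrative:

```shell
# Forge a one-entry lov_objid-style file holding object id 42 (0x2a) as a
# little-endian u64, then dump it as 8-byte decimal words the way the test
# output looks. /tmp/lov_objid.demo is a hypothetical scratch path.
printf '\052\000\000\000\000\000\000\000' > /tmp/lov_objid.demo
od -Ax -td8 /tmp/lov_objid.demo
```

The `000008` trailer is just od reporting the file length (8 bytes, one u64), which is why a single-OST filesystem produces exactly one value per dump.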
PASS 38 (71s)
== conf-sanity test 39: leak_finder recognizes both LUSTRE and LNET malloc messages ========================================================== 16:05:31 (1679933131)
start mds service on tmp.20a7BdrJQP
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
gss/krb5 is not supported
quota/lquota options: 'hash_lqs_cur_bits=3'
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
umount lustre on /mnt/lustre.....
Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:)
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
modules unloaded.
PASS 39 (36s)
== conf-sanity test 40: race during service thread startup ========================================================== 16:06:07 (1679933167)
start ost1 service on tmp.20a7BdrJQP
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
gss/krb5 is not supported
quota/lquota options: 'hash_lqs_cur_bits=3'
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
fail_loc=0x80000706
start mds service on tmp.20a7BdrJQP
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
umount lustre on /mnt/lustre.....
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
modules unloaded.
PASS 40 (129s)
== conf-sanity test 41a: mount mds with --nosvc and --nomgs ========================================================== 16:08:16 (1679933296)
start mds service on tmp.20a7BdrJQP
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
gss/krb5 is not supported
quota/lquota options: 'hash_lqs_cur_bits=3'
Starting mds1: -o localrecov -o nosvc -n /dev/mapper/mds1_flakey /mnt/lustre-mds1
nomtab: 1
Start /dev/mapper/mds1_flakey without service
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Started lustre-MDT0001
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
start mds service on tmp.20a7BdrJQP
Starting mds1: -o localrecov -o nomgs,force /dev/mapper/mds1_flakey /mnt/lustre-mds1
force: 1
Started lustre-MDT0000
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
blah blah
umount lustre on /mnt/lustre.....
Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:)
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
PASS 41a (32s)
== conf-sanity test 41b: mount mds with --nosvc and --nomgs on first mount ========================================================== 16:08:48 (1679933328)
Stopping clients: tmp.20a7BdrJQP /mnt/lustre (opts:)
Stopping clients: tmp.20a7BdrJQP /mnt/lustre2 (opts:)
Stopping clients: tmp.20a7BdrJQP /mnt/lustre (opts:-f)
Stopping clients: tmp.20a7BdrJQP /mnt/lustre2 (opts:-f)
tmp.20a7BdrJQP: executing set_hostid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
gss/krb5 is not supported
Formatting mgs, mds, osts
Format mds1: /dev/mapper/mds1_flakey
Format mds2: /dev/mapper/mds2_flakey
Format ost1: /dev/mapper/ost1_flakey
Format ost2: /tmp/lustre-ost2
start mds service on tmp.20a7BdrJQP
Starting mds1: -o localrecov -o nosvc -n /dev/mapper/mds1_flakey /mnt/lustre-mds1
nomtab: 1
Start /dev/mapper/mds1_flakey without service
Commit the device label on /dev/mapper/mds1_flakey
Started lustre:MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Commit the device label on /dev/mapper/mds2_flakey
Started lustre-MDT0001
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Commit the device label on /dev/mapper/ost1_flakey
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start mds service on tmp.20a7BdrJQP
Starting mds1: -o localrecov -o nomgs,force /dev/mapper/mds1_flakey /mnt/lustre-mds1
force: 1
Commit the device label on /dev/mapper/mds1_flakey
Started lustre-MDT0000
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
blah blah
umount lustre on /mnt/lustre.....
Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:-f)
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
PASS 41b (67s)
== conf-sanity test 41c: concurrent mounts of MDT/OST should all fail but one ========================================================== 16:09:55 (1679933395)
umount lustre on /mnt/lustre.....
stop ost1 service on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
modules unloaded.
start mds service on tmp.20a7BdrJQP
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
../libcfs/libcfs/libcfs options: 'cpu_npartitions=2'
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
gss/krb5 is not supported
quota/lquota options: 'hash_lqs_cur_bits=3'
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
umount lustre on /mnt/lustre.....
Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:)
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
modules unloaded.
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
../libcfs/libcfs/libcfs options: 'cpu_npartitions=2'
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
gss/krb5 is not supported
quota/lquota options: 'hash_lqs_cur_bits=3'
fail_loc=0x80000716
mount.lustre: mount /dev/mapper/mds1_flakey at /mnt/lustre-mds1 failed: Operation already in progress
The target service is already running. (/dev/mapper/mds1_flakey)
fail_loc=0x0
1st MDT start succeed
2nd MDT start failed with 114
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Started lustre-MDT0001
tmp.20a7BdrJQP: executing lsmod
fail_loc=0x80000716
mount.lustre: mount /dev/mapper/ost1_flakey at /mnt/lustre-ost1 failed: Operation already in progress
The target service is already running. (/dev/mapper/ost1_flakey)
fail_loc=0x0
1st OST start succeed
2nd OST start failed with 114
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
start mds service on tmp.20a7BdrJQP
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
setup single mount lustre success
umount lustre on /mnt/lustre.....
Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:)
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
modules unloaded.
PASS 41c (114s)
== conf-sanity test 42: allow client/server mount/unmount with invalid config param ========================================================== 16:11:49 (1679933509)
start mds service on tmp.20a7BdrJQP
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
gss/krb5 is not supported
quota/lquota options: 'hash_lqs_cur_bits=3'
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
setup single mount lustre success
umount lustre on /mnt/lustre.....
Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:)
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
umount lustre on /mnt/lustre.....
Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:)
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
modules unloaded.
start mds service on tmp.20a7BdrJQP
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
../libcfs/libcfs/libcfs options: 'cpu_npartitions=2'
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
gss/krb5 is not supported
quota/lquota options: 'hash_lqs_cur_bits=3'
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
setup single mount lustre success
umount lustre on /mnt/lustre.....
Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:)
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
modules unloaded.
PASS 42 (75s)
== conf-sanity test 43a: check root_squash and nosquash_nids ========================================================== 16:13:04 (1679933584)
SKIP: conf-sanity test_43a missing user with uid=501 gid=501
SKIP 43a (1s)
== conf-sanity test 43b: parse nosquash_nids with commas in expr_list ========================================================== 16:13:05 (1679933585)
SKIP: conf-sanity test_43b mixed loopback and real device not working
SKIP 43b (0s)
== conf-sanity test 44: mounted client proc entry exists ========================================================== 16:13:05 (1679933585)
start mds service on tmp.20a7BdrJQP
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
gss/krb5 is not supported
quota/lquota options: 'hash_lqs_cur_bits=3'
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
setup single mount lustre success
umount lustre on /mnt/lustre.....
Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:)
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
modules unloaded.
PASS 44 (37s)
SKIP: conf-sanity test_45 skipping SLOW test 45
== conf-sanity test 46a: handle ost additional - wide striped file ========================================================== 16:13:43 (1679933623)
Testing with 2 OSTs
Stopping clients: tmp.20a7BdrJQP /mnt/lustre (opts:-f)
Stopping clients: tmp.20a7BdrJQP /mnt/lustre2 (opts:-f)
tmp.20a7BdrJQP: executing set_hostid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
gss/krb5 is not supported
quota/lquota options: 'hash_lqs_cur_bits=3'
Formatting mgs, mds, osts
Format mds1: /dev/mapper/mds1_flakey
Format mds2: /dev/mapper/mds2_flakey
Format ost1: /dev/mapper/ost1_flakey
Format ost2: /tmp/lustre-ost2
start mds service on tmp.20a7BdrJQP
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Commit the device label on /dev/mapper/mds1_flakey
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Commit the device label on /dev/mapper/mds2_flakey
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Commit the device label on /dev/mapper/ost1_flakey
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 40
os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 0 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 40
os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
start mds service on tmp.20a7BdrJQP
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 40
os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 0 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 40
os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2
Commit the device label on /tmp/lustre-ost2
Started lustre-OST0001
tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0001-osc-MDT0000.ost_server_uuid 40
os[cp].lustre-OST0001-osc-MDT0000.ost_server_uuid in FULL state after 0 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0001-osc-MDT0001.ost_server_uuid 40
os[cp].lustre-OST0001-osc-MDT0001.ost_server_uuid in FULL state after 0 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state (FULL|IDLE) osc.lustre-OST0001-osc-ffff8b7859afd000.ost_server_uuid 40
osc.lustre-OST0001-osc-ffff8b7859afd000.ost_server_uuid in FULL state after 0 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
mount lustre on /mnt/lustre2.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre2
/mnt/lustre2
stripe_count: -1 stripe_size: 1048576 pattern: raid0 stripe_offset: -1
/mnt/lustre2/widestripe
lmm_stripe_count:  2
lmm_stripe_size:   1048576
lmm_pattern:       raid0
lmm_layout_gen:    0
lmm_stripe_offset: 1
	obdidx	objid	objid	group
	1	2	0x2	0x2c0000401
	0	34	0x22	0x280000401
  File: /mnt/lustre/widestripe
  Size: 3	Blocks: 1	IO Block: 4194304	regular file
Device: 2c54f966h/743766374d	Inode: 144115238826934273	Links: 1
Access: (0674/-rw-rwxr--)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2023-03-27 16:14:37.000000000 +0000
Modify: 2023-03-27 16:14:37.000000000 +0000
Change: 2023-03-27 16:14:37.000000000 +0000
 Birth: -
umount lustre on /mnt/lustre2.....
Stopping client tmp.20a7BdrJQP /mnt/lustre2 (opts:)
umount lustre on /mnt/lustre.....
Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:)
Stopping /mnt/lustre-ost2 (opts:-f) on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
stop ost1 service on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
modules unloaded.
tunefs.lustre: Unable to mount /dev/mapper/mds1_flakey: No such device
Is the ldiskfs module available?
tunefs.lustre FATAL: failed to write local files
tunefs.lustre: exiting with 19 (No such device)
checking for existing Lustre data: found
Read previous values:
Target:     lustre-MDT0000
Index:      0
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x5 (MDT MGS )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: sys.timeout=20 mdt.identity_upcall=/mnt/build/lustre/tests/../utils/l_getidentity
Permanent disk data:
Target:     lustre=MDT0000
Index:      0
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x105 (MDT MGS writeconf )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: sys.timeout=20 mdt.identity_upcall=/mnt/build/lustre/tests/../utils/l_getidentity
tunefs.lustre: Unable to mount /dev/mapper/mds2_flakey: No such device
Is the ldiskfs module available?
tunefs.lustre FATAL: failed to write local files
tunefs.lustre: exiting with 19 (No such device)
checking for existing Lustre data: found
Read previous values:
Target:     lustre-MDT0001
Index:      1
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x1 (MDT )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20 mdt.identity_upcall=/mnt/build/lustre/tests/../utils/l_getidentity
Permanent disk data:
Target:     lustre=MDT0001
Index:      1
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x101 (MDT writeconf )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20 mdt.identity_upcall=/mnt/build/lustre/tests/../utils/l_getidentity
tunefs.lustre: Unable to mount /dev/mapper/ost1_flakey: No such device
Is the ldiskfs module available?
tunefs.lustre FATAL: failed to write local files
tunefs.lustre: exiting with 19 (No such device)
checking for existing Lustre data: found
Read previous values:
Target:     lustre-OST0000
Index:      0
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x2 (OST )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20
Permanent disk data:
Target:     lustre=OST0000
Index:      0
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x102 (OST writeconf )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20
tunefs.lustre: Unable to mount /dev/mapper/ost2_flakey: No such device
Is the ldiskfs module available?
tunefs.lustre FATAL: failed to write local files
tunefs.lustre: exiting with 19 (No such device)
checking for existing Lustre data: found
Read previous values:
Target:     lustre-OST0001
Index:      1
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x2 (OST )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20
Permanent disk data:
Target:     lustre=OST0001
Index:      1
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x102 (OST writeconf )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20
tunefs failed, reformatting instead
Stopping clients: tmp.20a7BdrJQP /mnt/lustre (opts:-f)
Stopping clients: tmp.20a7BdrJQP /mnt/lustre2 (opts:-f)
tmp.20a7BdrJQP: executing set_hostid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
gss/krb5 is not supported
quota/lquota options: 'hash_lqs_cur_bits=3'
Formatting mgs, mds, osts
Format mds1: /dev/mapper/mds1_flakey
Format mds2: /dev/mapper/mds2_flakey
Format ost1: /dev/mapper/ost1_flakey
Format ost2: /dev/mapper/ost2_flakey
start mds service on tmp.20a7BdrJQP
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Commit the device label on /dev/mapper/mds1_flakey
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Commit the device label on /dev/mapper/mds2_flakey
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Commit the device label on /dev/mapper/ost1_flakey
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 40
os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 2 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 40
os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
PASS 46a (120s)
== conf-sanity test 47: server restart does not make client loss lru_resize settings ========================================================== 16:15:44 (1679933744)
Stopping clients: tmp.20a7BdrJQP /mnt/lustre (opts:-f)
Stopping clients: tmp.20a7BdrJQP /mnt/lustre2 (opts:-f)
tmp.20a7BdrJQP: executing set_hostid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
gss/krb5 is not supported
Formatting mgs, mds, osts
Format mds1: /dev/mapper/mds1_flakey
Format mds2: /dev/mapper/mds2_flakey
Format ost1: /dev/mapper/ost1_flakey
Format ost2: /tmp/lustre-ost2
start mds service on tmp.20a7BdrJQP
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Commit the device label on /dev/mapper/mds1_flakey
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Commit the device label on /dev/mapper/mds2_flakey
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Commit the device label on /dev/mapper/ost1_flakey
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
setup single mount lustre success
ldlm.namespaces.lustre-MDT0000-lwp-MDT0000.lru_size=100
ldlm.namespaces.lustre-MDT0000-lwp-MDT0001.lru_size=100
ldlm.namespaces.lustre-MDT0000-lwp-OST0000.lru_size=100
ldlm.namespaces.lustre-MDT0000-mdc-ffff8b782676b000.lru_size=100
ldlm.namespaces.lustre-MDT0000-osp-MDT0001.lru_size=100
ldlm.namespaces.lustre-MDT0001-lwp-OST0000.lru_size=100
ldlm.namespaces.lustre-MDT0001-mdc-ffff8b782676b000.lru_size=100
ldlm.namespaces.lustre-MDT0001-osp-MDT0000.lru_size=100
ldlm.namespaces.lustre-OST0000-osc-MDT0000.lru_size=100
ldlm.namespaces.lustre-OST0000-osc-MDT0001.lru_size=100
ldlm.namespaces.lustre-OST0000-osc-ffff8b782676b000.lru_size=100
ldlm.namespaces.lustre-MDT0000-lwp-MDT0000.lru_size=100
ldlm.namespaces.lustre-MDT0000-lwp-MDT0001.lru_size=100
ldlm.namespaces.lustre-MDT0000-lwp-OST0000.lru_size=100
ldlm.namespaces.lustre-MDT0000-mdc-ffff8b782676b000.lru_size=100
ldlm.namespaces.lustre-MDT0000-osp-MDT0001.lru_size=100
ldlm.namespaces.lustre-MDT0001-lwp-OST0000.lru_size=100
ldlm.namespaces.lustre-MDT0001-mdc-ffff8b782676b000.lru_size=100
ldlm.namespaces.lustre-MDT0001-osp-MDT0000.lru_size=100
ldlm.namespaces.lustre-OST0000-osc-MDT0000.lru_size=100
ldlm.namespaces.lustre-OST0000-osc-MDT0001.lru_size=100
Failing ost1 on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:) on tmp.20a7BdrJQP
16:16:12 (1679933772) shut down
Failover ost1 to tmp.20a7BdrJQP
mount facets: ost1
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
16:16:23 (1679933783) targets are mounted
16:16:23 (1679933783) facet_failover done
Failing mds1 on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:) on tmp.20a7BdrJQP
16:16:24 (1679933784) shut down
Failover mds1 to tmp.20a7BdrJQP
mount facets: mds1
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
16:16:39 (1679933799) targets are mounted
16:16:39 (1679933799) facet_failover done
ldlm.namespaces.lustre-MDT0000-lwp-MDT0000.lru_size=0
ldlm.namespaces.lustre-MDT0000-lwp-MDT0001.lru_size=100
ldlm.namespaces.lustre-MDT0000-lwp-OST0000.lru_size=0
ldlm.namespaces.lustre-MDT0000-mdc-ffff8b782676b000.lru_size=100
ldlm.namespaces.lustre-MDT0000-osp-MDT0001.lru_size=100
ldlm.namespaces.lustre-MDT0001-lwp-OST0000.lru_size=0
ldlm.namespaces.lustre-MDT0001-mdc-ffff8b782676b000.lru_size=100
ldlm.namespaces.lustre-MDT0001-osp-MDT0000.lru_size=200
ldlm.namespaces.lustre-OST0000-osc-MDT0000.lru_size=0
ldlm.namespaces.lustre-OST0000-osc-MDT0001.lru_size=100
umount lustre on /mnt/lustre.....
Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:)
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
modules unloaded.
PASS 47 (80s)
== conf-sanity test 48: too many acls on file ============ 16:17:03 (1679933823)
start mds service on tmp.20a7BdrJQP
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
gss/krb5 is not supported
quota/lquota options: 'hash_lqs_cur_bits=3'
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
setup single mount lustre success
/mnt/lustre
stripe_count: -1 stripe_size: 1048576 pattern: raid0 stripe_offset: -1
/mnt/lustre/widestripe
lmm_stripe_count:  1
lmm_stripe_size:   1048576
lmm_pattern:       raid0
lmm_layout_gen:    0
lmm_stripe_offset: 0
	obdidx	objid	objid	group
	0	67	0x43	0x280000401
It is expected to hold at least 4500 ACL entries
  File: /mnt/lustre/widestripe
  Size: 3	Blocks: 8	IO Block: 4194304	regular file
Device: 2c54f966h/743766374d	Inode: 144115272381366274	Links: 1
Access: (0664/-rw-rw-r--)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2023-03-27 16:17:18.000000000 +0000
Modify: 2023-03-27 16:17:18.000000000 +0000
Change: 2023-03-27 16:18:54.000000000 +0000
 Birth: -
getfacl: Removing leading '/' from absolute path names
Failing mds1 on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:) on tmp.20a7BdrJQP
16:18:55 (1679933935) shut down
Failover mds1 to tmp.20a7BdrJQP
mount facets: mds1
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
16:19:06 (1679933946) targets are mounted
16:19:06 (1679933946) facet_failover done
tmp.20a7BdrJQP: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
Stopping clients: tmp.20a7BdrJQP /mnt/lustre (opts:-f)
Stopping client tmp.20a7BdrJQP /mnt/lustre opts:-f
Stopping clients: tmp.20a7BdrJQP /mnt/lustre2 (opts:-f)
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
tmp.20a7BdrJQP: executing set_hostid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
gss/krb5 is not supported
Formatting mgs, mds, osts
Format mds1: /dev/mapper/mds1_flakey
Format mds2: /dev/mapper/mds2_flakey
Format ost1: /dev/mapper/ost1_flakey
Format ost2: /tmp/lustre-ost2
start mds service on tmp.20a7BdrJQP
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Commit the device label on /dev/mapper/mds1_flakey
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Commit the device label on /dev/mapper/mds2_flakey
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Commit the device label on /dev/mapper/ost1_flakey
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 40
os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 0 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 40
os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
PASS 48 (201s)
== conf-sanity test 49a: check PARAM_SYS_LDLM_TIMEOUT option of mkfs.lustre ========================================================== 16:20:24 (1679934024)
Stopping clients: tmp.20a7BdrJQP /mnt/lustre (opts:-f)
Stopping clients: tmp.20a7BdrJQP /mnt/lustre2 (opts:-f)
tmp.20a7BdrJQP: executing set_hostid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
gss/krb5 is not supported
Formatting mgs, mds, osts
Format mds1: /dev/mapper/mds1_flakey
Format mds2: /dev/mapper/mds2_flakey
Format ost1: /dev/mapper/ost1_flakey
Format ost2: /tmp/lustre-ost2
start mds service on tmp.20a7BdrJQP
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Commit the device label on /dev/mapper/mds1_flakey
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Commit the device label on /dev/mapper/mds2_flakey
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Commit the device label on /dev/mapper/ost1_flakey
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
setup single mount lustre success
check ldlm_timout...
umount lustre on /mnt/lustre.....
Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:)
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
PASS 49a (31s)
== conf-sanity test 49b: check PARAM_SYS_LDLM_TIMEOUT option of mkfs.lustre ========================================================== 16:20:55 (1679934055)
Stopping clients: tmp.20a7BdrJQP /mnt/lustre (opts:-f)
Stopping clients: tmp.20a7BdrJQP /mnt/lustre2 (opts:-f)
tmp.20a7BdrJQP: executing set_hostid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
gss/krb5 is not supported
Formatting mgs, mds, osts
Format mds1: /dev/mapper/mds1_flakey
Format mds2: /dev/mapper/mds2_flakey
Format ost1: /dev/mapper/ost1_flakey
Format ost2: /tmp/lustre-ost2
start mds service on tmp.20a7BdrJQP
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Commit the device label on /dev/mapper/mds1_flakey
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Commit the device label on /dev/mapper/mds2_flakey
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Commit the device label on /dev/mapper/ost1_flakey
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
setup single mount lustre success
umount lustre on /mnt/lustre.....
Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:)
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
modules unloaded.
PASS 49b (40s)

== conf-sanity test 50a: lazystatfs all servers available ========================================================== 16:21:35 (1679934095)
start mds service on tmp.20a7BdrJQP
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
gss/krb5 is not supported
quota/lquota options: 'hash_lqs_cur_bits=3'
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
llite.lustre-ffff8b7834581000.lazystatfs=1
multiop /mnt/lustre vf_
TMPPIPE=/tmp/multiop_open_wait_pipe.1481
UUID                   1K-blocks        Used   Available Use% Mounted on
lustre-MDT0000_UUID        95248        1632       84960   2% /mnt/lustre[MDT:0]
lustre-MDT0001_UUID        95248        1508       85084   2% /mnt/lustre[MDT:1]
lustre-OST0000_UUID       142216        1524      126692   2% /mnt/lustre[OST:0]
filesystem_summary:       142216        1524      126692   2% /mnt/lustre
conf-sanity.sh: line 4174: kill: (180202) - No such process
umount lustre on /mnt/lustre.....
Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:)
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
modules unloaded.
PASS 50a (44s)

== conf-sanity test 50b: lazystatfs all servers down ===== 16:22:19 (1679934139)
start mds service on tmp.20a7BdrJQP
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
gss/krb5 is not supported
quota/lquota options: 'hash_lqs_cur_bits=3'
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
llite.lustre-ffff8b7840e3f000.lazystatfs=1
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
tmp.20a7BdrJQP: executing wait_import_state DISCONN osc.lustre-OST0000-osc-ffff8b7840e3f000.ost_server_uuid 40
osc.lustre-OST0000-osc-ffff8b7840e3f000.ost_server_uuid in DISCONN state after 0 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
  0 UP osd-ldiskfs lustre-MDT0000-osd lustre-MDT0000-osd_UUID 9
  1 UP mgs MGS MGS 4
  2 UP mgc MGC192.168.125.30@tcp ec1b5175 3
  3 UP mds MDS MDS_uuid 2
  4 UP lod lustre-MDT0000-mdtlov lustre-MDT0000-mdtlov_UUID 3
  5 UP mdt lustre-MDT0000 lustre-MDT0000_UUID 10
  6 UP mdd lustre-MDD0000 lustre-MDD0000_UUID 3
  7 UP qmt lustre-QMT0000 lustre-QMT0000_UUID 3
  8 UP osp lustre-MDT0001-osp-MDT0000 lustre-MDT0000-mdtlov_UUID 4
  9 UP osp lustre-OST0000-osc-MDT0000 lustre-MDT0000-mdtlov_UUID 4
 10 UP lwp lustre-MDT0000-lwp-MDT0000 lustre-MDT0000-lwp-MDT0000_UUID 4
 11 UP osd-ldiskfs lustre-MDT0001-osd lustre-MDT0001-osd_UUID 7
 12 UP lod lustre-MDT0001-mdtlov lustre-MDT0001-mdtlov_UUID 3
 13 UP mdt lustre-MDT0001 lustre-MDT0001_UUID 6
 14 UP mdd lustre-MDD0001 lustre-MDD0001_UUID 3
 15 UP osp lustre-MDT0000-osp-MDT0001 lustre-MDT0001-mdtlov_UUID 4
 16 UP osp lustre-OST0000-osc-MDT0001 lustre-MDT0001-mdtlov_UUID 4
 17 UP lwp lustre-MDT0000-lwp-MDT0001 lustre-MDT0000-lwp-MDT0001_UUID 4
 23 UP lov lustre-clilov-ffff8b7840e3f000 a1581850-24c8-4a88-9333-694dbcf710d6 3
 24 UP lmv lustre-clilmv-ffff8b7840e3f000 a1581850-24c8-4a88-9333-694dbcf710d6 4
 25 UP mdc lustre-MDT0000-mdc-ffff8b7840e3f000 a1581850-24c8-4a88-9333-694dbcf710d6 4
 26 UP mdc lustre-MDT0001-mdc-ffff8b7840e3f000 a1581850-24c8-4a88-9333-694dbcf710d6 4
 27 UP osc lustre-OST0000-osc-ffff8b7840e3f000 a1581850-24c8-4a88-9333-694dbcf710d6 4
OSCs should all be DISCONN
multiop /mnt/lustre vf_
TMPPIPE=/tmp/multiop_open_wait_pipe.1481
UUID                   1K-blocks        Used   Available Use% Mounted on
lustre-MDT0000_UUID        95248        1632       84960   2% /mnt/lustre[MDT:0]
lustre-MDT0001_UUID        95248        1508       85084   2% /mnt/lustre[MDT:1]
filesystem_summary:       190496        3140      170044   2% /mnt/lustre
conf-sanity.sh: line 4174: kill: (182972) - No such process
umount lustre on /mnt/lustre.....
Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:)
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
PASS 50b (48s)

== conf-sanity test 50c: lazystatfs one server down ====== 16:23:07 (1679934187)
start mds service on tmp.20a7BdrJQP
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost2 service on tmp.20a7BdrJQP
Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2
Commit the device label on /tmp/lustre-ost2
Started lustre-OST0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0001-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
llite.lustre-ffff8b7847728000.lazystatfs=1
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
tmp.20a7BdrJQP: executing wait_import_state DISCONN os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 40
os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in DISCONN state after 0 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state DISCONN os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 40
os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in DISCONN state after 0 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
multiop /mnt/lustre vf_
TMPPIPE=/tmp/multiop_open_wait_pipe.1481
UUID                   1K-blocks        Used   Available Use% Mounted on
lustre-MDT0000_UUID        95248        1680       84912   2% /mnt/lustre[MDT:0]
lustre-MDT0001_UUID        95248        1532       85060   2% /mnt/lustre[MDT:1]
lustre-OST0001_UUID       142216        1528      126688   2% /mnt/lustre[OST:1]
filesystem_summary:       142216        1528      126688   2% /mnt/lustre
conf-sanity.sh: line 4174: kill: (184648) - No such process
umount lustre on /mnt/lustre.....
Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:)
stop ost2 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost2 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
checking for existing Lustre data: found
Read previous values:
Target:     lustre-MDT0000
Index:      0
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x5 (MDT MGS )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: sys.timeout=20 sys.ldlm_timeout=19 mdt.identity_upcall=/mnt/build/lustre/tests/../utils/l_getidentity

Permanent disk data:
Target:     lustre=MDT0000
Index:      0
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x105 (MDT MGS writeconf )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: sys.timeout=20 sys.ldlm_timeout=19 mdt.identity_upcall=/mnt/build/lustre/tests/../utils/l_getidentity

checking for existing Lustre data: found
Read previous values:
Target:     lustre-MDT0001
Index:      1
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x1 (MDT )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20 sys.ldlm_timeout=19 mdt.identity_upcall=/mnt/build/lustre/tests/../utils/l_getidentity

Permanent disk data:
Target:     lustre=MDT0001
Index:      1
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x101 (MDT writeconf )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20 sys.ldlm_timeout=19 mdt.identity_upcall=/mnt/build/lustre/tests/../utils/l_getidentity

checking for existing Lustre data: found
Read previous values:
Target:     lustre-OST0000
Index:      0
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x2 (OST )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20 sys.ldlm_timeout=19

Permanent disk data:
Target:     lustre=OST0000
Index:      0
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x102 (OST writeconf )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20 sys.ldlm_timeout=19

checking for existing Lustre data: found
Read previous values:
Target:     lustre-OST0001
Index:      1
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x2 (OST )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20 sys.ldlm_timeout=19

Permanent disk data:
Target:     lustre=OST0001
Index:      1
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x102 (OST writeconf )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20 sys.ldlm_timeout=19

PASS 50c (47s)

== conf-sanity test 50d: lazystatfs client/server conn race ========================================================== 16:23:54 (1679934234)
start mds service on tmp.20a7BdrJQP
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost2 service on tmp.20a7BdrJQP
Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2
Started lustre-OST0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0001-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
llite.lustre-ffff8b783aa9f000.lazystatfs=1
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
multiop /mnt/lustre vf_
TMPPIPE=/tmp/multiop_open_wait_pipe.1481
UUID                   1K-blocks        Used   Available Use% Mounted on
lustre-MDT0000_UUID        95248        1680       84912   2% /mnt/lustre[MDT:0]
lustre-MDT0001_UUID        95248        1532       85060   2% /mnt/lustre[MDT:1]
lustre-OST0001_UUID       142216        1528      126688   2% /mnt/lustre[OST:1]
filesystem_summary:       142216        1528      126688   2% /mnt/lustre
conf-sanity.sh: line 4174: kill: (186355) - No such process
umount lustre on /mnt/lustre.....
Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:)
stop ost2 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost2 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
checking for existing Lustre data: found
Read previous values:
Target:     lustre-MDT0000
Index:      0
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x5 (MDT MGS )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: sys.timeout=20 sys.ldlm_timeout=19 mdt.identity_upcall=/mnt/build/lustre/tests/../utils/l_getidentity

Permanent disk data:
Target:     lustre=MDT0000
Index:      0
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x105 (MDT MGS writeconf )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: sys.timeout=20 sys.ldlm_timeout=19 mdt.identity_upcall=/mnt/build/lustre/tests/../utils/l_getidentity

checking for existing Lustre data: found
Read previous values:
Target:     lustre-MDT0001
Index:      1
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x1 (MDT )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20 sys.ldlm_timeout=19 mdt.identity_upcall=/mnt/build/lustre/tests/../utils/l_getidentity

Permanent disk data:
Target:     lustre=MDT0001
Index:      1
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x101 (MDT writeconf )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20 sys.ldlm_timeout=19 mdt.identity_upcall=/mnt/build/lustre/tests/../utils/l_getidentity

checking for existing Lustre data: found
Read previous values:
Target:     lustre-OST0000
Index:      0
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x2 (OST )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20 sys.ldlm_timeout=19

Permanent disk data:
Target:     lustre=OST0000
Index:      0
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x102 (OST writeconf )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20 sys.ldlm_timeout=19

checking for existing Lustre data: found
Read previous values:
Target:     lustre-OST0001
Index:      1
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x2 (OST )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20 sys.ldlm_timeout=19

Permanent disk data:
Target:     lustre=OST0001
Index:      1
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x102 (OST writeconf )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20 sys.ldlm_timeout=19

PASS 50d (46s)

== conf-sanity test 50e: normal statfs all servers down == 16:24:40 (1679934280)
Stopping clients: tmp.20a7BdrJQP /mnt/lustre (opts:-f)
Stopping clients: tmp.20a7BdrJQP /mnt/lustre2 (opts:-f)
tmp.20a7BdrJQP: executing set_hostid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
gss/krb5 is not supported
Formatting mgs, mds, osts
Format mds1: /dev/mapper/mds1_flakey
Format mds2: /dev/mapper/mds2_flakey
Format ost1: /dev/mapper/ost1_flakey
Format ost2: /tmp/lustre-ost2
start mds service on tmp.20a7BdrJQP
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Commit the device label on /dev/mapper/mds1_flakey
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Commit the device label on /dev/mapper/mds2_flakey
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Commit the device label on /dev/mapper/ost1_flakey
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 40
os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 3 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 40
os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
start mds service on tmp.20a7BdrJQP
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 40
os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 2 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 40
os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
tmp.20a7BdrJQP: executing wait_import_state DISCONN os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 40
os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in DISCONN state after 0 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state DISCONN os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 40
os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in DISCONN state after 0 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
llite.lustre-ffff8b7829a55000.lazystatfs=0
multiop /mnt/lustre v_f
TMPPIPE=/tmp/multiop_open_wait_pipe.1481
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 8 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
umount lustre on /mnt/lustre.....
Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:)
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
PASS 50e (90s)

== conf-sanity test 50f: normal statfs one server in down ========================================================== 16:26:10 (1679934370)
start mds service on tmp.20a7BdrJQP
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 40
os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 2 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 40
os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost2 service on tmp.20a7BdrJQP
Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2
Commit the device label on /tmp/lustre-ost2
Started lustre-OST0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0001-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0001-osc-MDT0000.ost_server_uuid 40
os[cp].lustre-OST0001-osc-MDT0000.ost_server_uuid in FULL state after 5 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0001-osc-MDT0001.ost_server_uuid 40
os[cp].lustre-OST0001-osc-MDT0001.ost_server_uuid in FULL state after 0 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
stop ost2 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost2 (opts:-f) on tmp.20a7BdrJQP
tmp.20a7BdrJQP: executing wait_import_state DISCONN os[cp].lustre-OST0001-osc-MDT0000.ost_server_uuid 40
os[cp].lustre-OST0001-osc-MDT0000.ost_server_uuid in DISCONN state after 0 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state DISCONN os[cp].lustre-OST0001-osc-MDT0001.ost_server_uuid 40
os[cp].lustre-OST0001-osc-MDT0001.ost_server_uuid in DISCONN state after 0 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
llite.lustre-ffff8b783fb48000.lazystatfs=0
multiop /mnt/lustre v_f
TMPPIPE=/tmp/multiop_open_wait_pipe.1481
start ost2 service on tmp.20a7BdrJQP
Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2
Started lustre-OST0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0001-osc-[-0-9a-f]*.ost_server_uuid
osc.lustre-OST0001-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 4 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
stop ost2 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost2 (opts:-f) on tmp.20a7BdrJQP
umount lustre on /mnt/lustre.....
Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:-f)
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
checking for existing Lustre data: found
Read previous values:
Target:     lustre-MDT0000
Index:      0
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x5 (MDT MGS )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: sys.timeout=20 mdt.identity_upcall=/mnt/build/lustre/tests/../utils/l_getidentity

Permanent disk data:
Target:     lustre=MDT0000
Index:      0
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x105 (MDT MGS writeconf )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: sys.timeout=20 mdt.identity_upcall=/mnt/build/lustre/tests/../utils/l_getidentity

checking for existing Lustre data: found
Read previous values:
Target:     lustre-MDT0001
Index:      1
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x1 (MDT )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20 mdt.identity_upcall=/mnt/build/lustre/tests/../utils/l_getidentity

Permanent disk data:
Target:     lustre=MDT0001
Index:      1
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x101 (MDT writeconf )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20 mdt.identity_upcall=/mnt/build/lustre/tests/../utils/l_getidentity

checking for existing Lustre data: found
Read previous values:
Target:     lustre-OST0000
Index:      0
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x2 (OST )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20

Permanent disk data:
Target:     lustre=OST0000
Index:      0
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x102 (OST writeconf )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20

checking for existing Lustre data: found
Read previous values:
Target:     lustre-OST0001
Index:      1
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x2 (OST )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20

Permanent disk data:
Target:     lustre=OST0001
Index:      1
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x102 (OST writeconf )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20

PASS 50f (100s)

== conf-sanity test 50g: deactivated OST should not cause panic ========================================================== 16:27:50 (1679934470)
start mds service on tmp.20a7BdrJQP
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
start ost2 service on tmp.20a7BdrJQP
Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2
Started lustre-OST0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0001-osc-[-0-9a-f]*.ost_server_uuid
osc.lustre-OST0001-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0001-osc-MDT0000.ost_server_uuid 40
os[cp].lustre-OST0001-osc-MDT0000.ost_server_uuid in FULL state after 0 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0001-osc-MDT0001.ost_server_uuid 40
os[cp].lustre-OST0001-osc-MDT0001.ost_server_uuid in FULL state after 0 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state (FULL|IDLE) osc.lustre-OST0001-osc-ffff8b7842437000.ost_server_uuid 40
osc.lustre-OST0001-osc-ffff8b7842437000.ost_server_uuid in FULL state after 0 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
umount lustre on /mnt/lustre.....
Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:)
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
Filesystem 1K-blocks Used Available Use% Mounted on
192.168.125.30@tcp:/lustre 142216 1524 126692 2% /mnt/lustre
umount lustre on /mnt/lustre.....
Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:)
stop ost2 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost2 (opts:-f) on tmp.20a7BdrJQP
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
checking for existing Lustre data: found
Read previous values:
Target: lustre-MDT0000
Index: 0
Lustre FS: lustre
Mount type: ldiskfs
Flags: 0x5 (MDT MGS )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: sys.timeout=20 mdt.identity_upcall=/mnt/build/lustre/tests/../utils/l_getidentity

Permanent disk data:
Target: lustre=MDT0000
Index: 0
Lustre FS: lustre
Mount type: ldiskfs
Flags: 0x105 (MDT MGS writeconf )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: sys.timeout=20 mdt.identity_upcall=/mnt/build/lustre/tests/../utils/l_getidentity

checking for existing Lustre data: found
Read previous values:
Target: lustre-MDT0001
Index: 1
Lustre FS: lustre
Mount type: ldiskfs
Flags: 0x1 (MDT )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20 mdt.identity_upcall=/mnt/build/lustre/tests/../utils/l_getidentity

Permanent disk data:
Target: lustre=MDT0001
Index: 1
Lustre FS: lustre
Mount type: ldiskfs
Flags: 0x101 (MDT writeconf )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20 mdt.identity_upcall=/mnt/build/lustre/tests/../utils/l_getidentity

checking for existing Lustre data: found
Read previous values:
Target: lustre-OST0000
Index: 0
Lustre FS: lustre
Mount type: ldiskfs
Flags: 0x2 (OST )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20

Permanent disk data:
Target: lustre=OST0000
Index: 0
Lustre FS: lustre
Mount type: ldiskfs
Flags: 0x102 (OST writeconf )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20

checking for existing Lustre data: found
Read previous values:
Target: lustre-OST0001
Index: 1
Lustre FS: lustre
Mount type: ldiskfs
Flags: 0x2 (OST )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20

Permanent disk data:
Target: lustre=OST0001
Index: 1
Lustre FS: lustre
Mount type: ldiskfs
Flags: 0x102 (OST writeconf )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20

PASS 50g (48s)

== conf-sanity test 50h: LU-642: activate deactivated OST ========================================================== 16:28:38 (1679934518)
checking for existing Lustre data: found
Read previous values:
Target: lustre-OST0000
Index: 0
Lustre FS: lustre
Mount type: ldiskfs
Flags: 0x102 (OST writeconf )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20

Permanent disk data:
Target: lustre=OST0000
Index: 0
Lustre FS: lustre
Mount type: ldiskfs
Flags: 0x142 (OST update writeconf )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20 osc.active=0

Writing CONFIGS/mountdata
start mds service on tmp.20a7BdrJQP
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost2 service on tmp.20a7BdrJQP
Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2
Started lustre-OST0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0001-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
Setting lustre-OST0000.osc.active from 0 to 1
Waiting 90s for '1'
Updated after 2s: want '1' got '1'
create a file after OST1 is activated
10+0 records in
10+0 records out
10485760 bytes (10 MB, 10 MiB) copied, 0.550794 s, 19.0 MB/s
umount lustre on /mnt/lustre.....
Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:)
stop ost2 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost2 (opts:-f) on tmp.20a7BdrJQP
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
modules unloaded.
PASS 50h (44s)

== conf-sanity test 50i: activate deactivated MDT ======== 16:29:22 (1679934562)
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
gss/krb5 is not supported
quota/lquota options: 'hash_lqs_cur_bits=3'
checking for existing Lustre data: found
Read previous values:
Target: lustre-MDT0001
Index: 1
Lustre FS: lustre
Mount type: ldiskfs
Flags: 0x1 (MDT )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20 mdt.identity_upcall=/mnt/build/lustre/tests/../utils/l_getidentity

Permanent disk data:
Target: lustre-MDT0001
Index: 1
Lustre FS: lustre
Mount type: ldiskfs
Flags: 0x41 (MDT update )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20 mdt.identity_upcall=/mnt/build/lustre/tests/../utils/l_getidentity mdc.active=0

Writing CONFIGS/mountdata
start mds service on tmp.20a7BdrJQP
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost2 service on tmp.20a7BdrJQP
Starting ost2: -o localrecov /dev/mapper/ost2_flakey
Started lustre-OST0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0001-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
error: conf_param: Operation not permitted
Setting lustre-MDT0001.mdc.active from 0 to 1
Waiting 90s for '1'
Updated after 3s: want '1' got '1'
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
mdc.lustre-MDT0001-mdc-*.mds_server_uuid in FULL state after 0 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state (FULL|IDLE) osp.lustre-MDT0000-osp-MDT0001.mdt_server_uuid 40
osp.lustre-MDT0000-osp-MDT0001.mdt_server_uuid in FULL state after 0 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state (FULL|IDLE) osp.lustre-MDT0001-osp-MDT0000.mdt_server_uuid 40
osp.lustre-MDT0001-osp-MDT0000.mdt_server_uuid in FULL state after 0 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
total: 1 open/close in 0.02 seconds: 49.58 ops/second
Setting lustre-MDT0001.mdc.active from 1 to 0
Waiting 90s for '0'
Updated after 5s: want '0' got '0'
check osp.lustre-MDT0001-osp-MDT0000.active target updated after 0 sec (got 0)
lfs mkdir: dirstripe error on '/mnt/lustre/d50i.conf-sanity/2': No such device
lfs setdirstripe: cannot create dir '/mnt/lustre/d50i.conf-sanity/2': No such device
umount lustre on /mnt/lustre.....
Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:)
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop ost1 service on tmp.20a7BdrJQP
PASS 50i (45s)

== conf-sanity test 51: Verify that mdt_reint handles RMF_MDT_MD correctly when an OST is added ========================================================== 16:30:07 (1679934607)
Stopping clients: tmp.20a7BdrJQP /mnt/lustre (opts:-f)
Stopping clients: tmp.20a7BdrJQP /mnt/lustre2 (opts:-f)
Stopping /mnt/lustre-ost2 (opts:-f) on tmp.20a7BdrJQP
tmp.20a7BdrJQP: executing set_hostid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
gss/krb5 is not supported
Formatting mgs, mds, osts
Format mds1: /dev/mapper/mds1_flakey
Format mds2: /dev/mapper/mds2_flakey
Format ost1: /dev/mapper/ost1_flakey
Format ost2: /tmp/lustre-ost2
start mds service on tmp.20a7BdrJQP
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Commit the device label on /dev/mapper/mds1_flakey
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Commit the device label on /dev/mapper/mds2_flakey
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Commit the device label on /dev/mapper/ost1_flakey
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
setup single mount lustre success
fail_loc=0x142
start ost2 service on tmp.20a7BdrJQP
Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2
Commit the device label on /tmp/lustre-ost2
Started lustre-OST0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0001-osc-[-0-9a-f]*.ost_server_uuid
osc.lustre-OST0001-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 1 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
stop ost2 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost2 (opts:-f) on tmp.20a7BdrJQP
umount lustre on /mnt/lustre.....
Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:-f)
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
modules unloaded.
tunefs.lustre: Unable to mount /dev/mapper/mds1_flakey: No such device
Is the ldiskfs module available?
tunefs.lustre FATAL: failed to write local files
tunefs.lustre: exiting with 19 (No such device)
checking for existing Lustre data: found
Read previous values:
Target: lustre-MDT0000
Index: 0
Lustre FS: lustre
Mount type: ldiskfs
Flags: 0x5 (MDT MGS )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: sys.timeout=20 mdt.identity_upcall=/mnt/build/lustre/tests/../utils/l_getidentity

Permanent disk data:
Target: lustre=MDT0000
Index: 0
Lustre FS: lustre
Mount type: ldiskfs
Flags: 0x105 (MDT MGS writeconf )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: sys.timeout=20 mdt.identity_upcall=/mnt/build/lustre/tests/../utils/l_getidentity

tunefs.lustre: Unable to mount /dev/mapper/mds2_flakey: No such device
Is the ldiskfs module available?
tunefs.lustre FATAL: failed to write local files
tunefs.lustre: exiting with 19 (No such device)
checking for existing Lustre data: found
Read previous values:
Target: lustre-MDT0001
Index: 1
Lustre FS: lustre
Mount type: ldiskfs
Flags: 0x1 (MDT )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20 mdt.identity_upcall=/mnt/build/lustre/tests/../utils/l_getidentity

Permanent disk data:
Target: lustre=MDT0001
Index: 1
Lustre FS: lustre
Mount type: ldiskfs
Flags: 0x101 (MDT writeconf )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20 mdt.identity_upcall=/mnt/build/lustre/tests/../utils/l_getidentity

tunefs.lustre: Unable to mount /dev/mapper/ost1_flakey: No such device
Is the ldiskfs module available?
tunefs.lustre FATAL: failed to write local files
tunefs.lustre: exiting with 19 (No such device)
checking for existing Lustre data: found
Read previous values:
Target: lustre-OST0000
Index: 0
Lustre FS: lustre
Mount type: ldiskfs
Flags: 0x2 (OST )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20

Permanent disk data:
Target: lustre=OST0000
Index: 0
Lustre FS: lustre
Mount type: ldiskfs
Flags: 0x102 (OST writeconf )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20

tunefs.lustre: Unable to mount /dev/mapper/ost2_flakey: No such device
Is the ldiskfs module available?
tunefs.lustre FATAL: failed to write local files
tunefs.lustre: exiting with 19 (No such device)
checking for existing Lustre data: found
Read previous values:
Target: lustre-OST0001
Index: 1
Lustre FS: lustre
Mount type: ldiskfs
Flags: 0x2 (OST )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20

Permanent disk data:
Target: lustre=OST0001
Index: 1
Lustre FS: lustre
Mount type: ldiskfs
Flags: 0x102 (OST writeconf )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20

tunefs failed, reformatting instead
Stopping clients: tmp.20a7BdrJQP /mnt/lustre (opts:-f)
Stopping clients: tmp.20a7BdrJQP /mnt/lustre2 (opts:-f)
tmp.20a7BdrJQP: executing set_hostid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
gss/krb5 is not supported
quota/lquota options: 'hash_lqs_cur_bits=3'
Formatting mgs, mds, osts
Format mds1: /dev/mapper/mds1_flakey
Format mds2: /dev/mapper/mds2_flakey
Format ost1: /dev/mapper/ost1_flakey
Format ost2: /dev/mapper/ost2_flakey
start mds service on tmp.20a7BdrJQP
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Commit the device label on /dev/mapper/mds1_flakey
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Commit the device label on /dev/mapper/mds2_flakey
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Commit the device label on /dev/mapper/ost1_flakey
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 40
os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 0 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 40
os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
PASS 51 (139s)

== conf-sanity test 52: check recovering objects from lost+found ========================================================== 16:32:26 (1679934746)
start mds service on tmp.20a7BdrJQP
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
........
backup files to /tmp/d52.conf-sanity
tar: Removing leading `/' from member names
tar: Removing leading `/' from hard link targets
getfattr: Removing leading '/' from absolute path names
umount lustre on /mnt/lustre.....
Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:)
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
mount ost1 as ldiskfs
backup objects to /tmp/conf52/objects
tar: Removing leading `/' from member names
tar: Removing leading `/' from hard link targets
getfattr: Removing leading '/' from absolute path names
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
tar: Removing leading `/' from member names
tar: Removing leading `/' from hard link targets
getfattr: Removing leading '/' from absolute path names
umount lustre on /mnt/lustre.....
Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:)
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
modules unloaded.
PASS 52 (44s)

SKIP: conf-sanity test_53a skipping excluded test 53a (base 53)
SKIP: conf-sanity test_53b skipping excluded test 53b (base 53)

== conf-sanity test 54a: test llverdev and partial verify of device ========================================================== 16:33:11 (1679934791)
tmp.20a7BdrJQP: executing run_llverdev /dev/mapper/ost1_flakey -p
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: llverdev: /dev/mapper/ost1_flakey is 204800000 bytes (0.190735 GB) in size
tmp.20a7BdrJQP: timestamp: 1679934792 chunksize: 1048576 size: 204800000
tmp.20a7BdrJQP: write offset: 200000kB inf MB/s
tmp.20a7BdrJQP: read offset: 200000kB inf MB/s
tmp.20a7BdrJQP: llverdev: data verified successfully
Stopping clients: tmp.20a7BdrJQP /mnt/lustre (opts:-f)
Stopping clients: tmp.20a7BdrJQP /mnt/lustre2 (opts:-f)
tmp.20a7BdrJQP: executing set_hostid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
gss/krb5 is not supported
quota/lquota options: 'hash_lqs_cur_bits=3'
Formatting mgs, mds, osts
Format mds1: /dev/mapper/mds1_flakey
Format mds2: /dev/mapper/mds2_flakey
Format ost1: /dev/mapper/ost1_flakey
Format ost2: /tmp/lustre-ost2
start mds service on tmp.20a7BdrJQP
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Commit the device label on /dev/mapper/mds1_flakey
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Commit the device label on /dev/mapper/mds2_flakey
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Commit the device label on /dev/mapper/ost1_flakey
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 40
os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 1 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 40
os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
PASS 54a (41s)

== conf-sanity test 54b: test llverfs and partial verify of filesystem ========================================================== 16:33:52 (1679934832)
start mds service on tmp.20a7BdrJQP
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
llverfs: unable to open ext3 fs on '/mnt/lustre'
Timestamp: 1679934840
dirs: 2, fs blocks: 35554
write_done: /mnt/lustre/llverfs_dir00001/file000, current: 8.42758 MB/s, overall: 8.42758 MB/s, ETA: 0:00:00
read_done: /mnt/lustre/llverfs_dir00001/file000, current: 225.81 MB/s, overall: 225.81 MB/s, ETA: 0:00:00
umount lustre on /mnt/lustre.....
Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:)
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
modules unloaded.
PASS 54b (29s)

== conf-sanity test 55: check lov_objid size ============= 16:34:21 (1679934861)

Permanent disk data:
Target: lustre:MDT0000
Index: 0
Lustre FS: lustre
Mount type: ldiskfs
Flags: 0x65 (MDT MGS first_time update )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: sys.timeout=20 mdt.identity_upcall=/mnt/build/lustre/tests/../utils/l_getidentity

device size = 195MB
formatting backing filesystem ldiskfs on /dev/mapper/mds1_flakey
target name lustre:MDT0000
kilobytes 200000
options -b 4096 -I 1024 -i 2560 -q -O uninit_bg,^extents,dirdata,dir_nlink,quota,project,huge_file,ea_inode,large_dir,flex_bg -E lazy_itable_init,lazy_journal_init="0" -F
mkfs_cmd = mke2fs -j -b 4096 -L lustre:MDT0000 -b 4096 -I 1024 -i 2560 -q -O uninit_bg,^extents,dirdata,dir_nlink,quota,project,huge_file,ea_inode,large_dir,flex_bg -E lazy_itable_init,lazy_journal_init="0" -F /dev/mapper/mds1_flakey 200000k
Writing CONFIGS/mountdata

Permanent disk data:
Target: lustre:OST03ff
Index: 1023
Lustre FS: lustre
Mount type: ldiskfs
Flags: 0x62 (OST first_time update )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20

device size = 195MB
formatting backing filesystem ldiskfs on /dev/mapper/ost1_flakey
target name lustre:OST03ff
kilobytes 200000
options -b 4096 -I 512 -q -O uninit_bg,extents,dir_nlink,quota,project,huge_file,large_dir,flex_bg -G 256 -E lazy_itable_init,resize="4290772992",lazy_journal_init="0" -F
mkfs_cmd = mke2fs -j -b 4096 -L lustre:OST03ff -b 4096 -I 512 -q -O uninit_bg,extents,dir_nlink,quota,project,huge_file,large_dir,flex_bg -G 256 -E lazy_itable_init,resize="4290772992",lazy_journal_init="0" -F /dev/mapper/ost1_flakey 200000k
Writing CONFIGS/mountdata
start mds service on tmp.20a7BdrJQP
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
gss/krb5 is not supported
quota/lquota options: 'hash_lqs_cur_bits=3'
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Commit the device label on /dev/mapper/mds1_flakey
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
mount.lustre: mount /dev/mapper/mds2_flakey at /mnt/lustre-mds2 failed: No such file or directory
Is the MGS specification correct?
Is the filesystem name correct?
If upgrading, is the copied client log valid? (see upgrade docs)
Start of /dev/mapper/mds2_flakey on mds2 failed 2
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST03ff
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST03ff-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
Stopping clients: tmp.20a7BdrJQP /mnt/lustre (opts:)
Stopping client tmp.20a7BdrJQP /mnt/lustre opts:
Stopping clients: tmp.20a7BdrJQP /mnt/lustre2 (opts:)
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
start mds service on tmp.20a7BdrJQP
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
mount.lustre: mount /dev/mapper/mds2_flakey at /mnt/lustre-mds2 failed: No such file or directory
Is the MGS specification correct?
Is the filesystem name correct?
If upgrading, is the copied client log valid? (see upgrade docs)
Start of /dev/mapper/mds2_flakey on mds2 failed 2
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST03ff
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST03ff-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
checking size of lov_objid for ost index 1023
ok, lov_objid size is correct: 8192
Stopping clients: tmp.20a7BdrJQP /mnt/lustre (opts:)
Stopping client tmp.20a7BdrJQP /mnt/lustre opts:
Stopping clients: tmp.20a7BdrJQP /mnt/lustre2 (opts:)
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP

Permanent disk data:
Target: lustre:MDT0000
Index: 0
Lustre FS: lustre
Mount type: ldiskfs
Flags: 0x65 (MDT MGS first_time update )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: sys.timeout=20 mdt.identity_upcall=/mnt/build/lustre/tests/../utils/l_getidentity

device size = 195MB
formatting backing filesystem ldiskfs on /dev/mapper/mds1_flakey
target name lustre:MDT0000
kilobytes 200000
options -b 4096 -I 1024 -i 2560 -q -O uninit_bg,^extents,dirdata,dir_nlink,quota,project,huge_file,ea_inode,large_dir,flex_bg -E lazy_itable_init,lazy_journal_init="0" -F
mkfs_cmd = mke2fs -j -b 4096 -L lustre:MDT0000 -b 4096 -I 1024 -i 2560 -q -O uninit_bg,^extents,dirdata,dir_nlink,quota,project,huge_file,ea_inode,large_dir,flex_bg -E lazy_itable_init,lazy_journal_init="0" -F /dev/mapper/mds1_flakey 200000k
Writing CONFIGS/mountdata

Permanent disk data:
Target: lustre:OST0800
Index: 2048
Lustre FS: lustre
Mount type: ldiskfs
Flags: 0x62 (OST first_time update )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20

device size = 195MB
formatting backing filesystem ldiskfs on /dev/mapper/ost1_flakey
target name lustre:OST0800
kilobytes 200000
options -b 4096 -I 512 -q -O uninit_bg,extents,dir_nlink,quota,project,huge_file,large_dir,flex_bg -G 256 -E lazy_itable_init,resize="4290772992",lazy_journal_init="0" -F
mkfs_cmd = mke2fs -j -b 4096 -L lustre:OST0800 -b 4096 -I 512 -q -O uninit_bg,extents,dir_nlink,quota,project,huge_file,large_dir,flex_bg -G 256 -E lazy_itable_init,resize="4290772992",lazy_journal_init="0" -F /dev/mapper/ost1_flakey 200000k
Writing CONFIGS/mountdata
start mds service on tmp.20a7BdrJQP
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Commit the device label on /dev/mapper/mds1_flakey
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
mount.lustre: mount /dev/mapper/mds2_flakey at /mnt/lustre-mds2 failed: No such file or directory
Is the MGS specification correct?
Is the filesystem name correct?
If upgrading, is the copied client log valid? (see upgrade docs)
Start of /dev/mapper/mds2_flakey on mds2 failed 2
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Commit the device label on /dev/mapper/ost1_flakey
Started lustre-OST0800
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0800-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
Stopping clients: tmp.20a7BdrJQP /mnt/lustre (opts:)
Stopping client tmp.20a7BdrJQP /mnt/lustre opts:
Stopping clients: tmp.20a7BdrJQP /mnt/lustre2 (opts:)
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
start mds service on tmp.20a7BdrJQP
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
mount.lustre: mount /dev/mapper/mds2_flakey at /mnt/lustre-mds2 failed: No such file or directory
Is the MGS specification correct?
Is the filesystem name correct?
If upgrading, is the copied client log valid? (see upgrade docs)
Start of /dev/mapper/mds2_flakey on mds2 failed 2
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0800
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0800-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
checking size of lov_objid for ost index 2048
ok, lov_objid size is correct: 16392
Stopping clients: tmp.20a7BdrJQP /mnt/lustre (opts:)
Stopping client tmp.20a7BdrJQP /mnt/lustre opts:
Stopping clients: tmp.20a7BdrJQP /mnt/lustre2 (opts:)
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
Stopping clients: tmp.20a7BdrJQP /mnt/lustre (opts:-f)
Stopping clients: tmp.20a7BdrJQP /mnt/lustre2 (opts:-f)
tmp.20a7BdrJQP: executing set_hostid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
gss/krb5 is not supported
Formatting mgs, mds, osts
Format mds1: /dev/mapper/mds1_flakey
Format mds2: /dev/mapper/mds2_flakey
Format ost1: /dev/mapper/ost1_flakey
Format ost2: /tmp/lustre-ost2
start mds service on tmp.20a7BdrJQP
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Commit the device label on /dev/mapper/mds1_flakey
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Commit the device label on /dev/mapper/mds2_flakey
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Commit the device label on /dev/mapper/ost1_flakey
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 40
os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 1 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 40
os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
PASS 55 (136s)

== conf-sanity test 56a: check big OST indexes and out-of-index-order start ========================================================== 16:36:37 (1679934997)
Stopping clients: tmp.20a7BdrJQP /mnt/lustre (opts:-f)
Stopping clients: tmp.20a7BdrJQP /mnt/lustre2 (opts:-f)
tmp.20a7BdrJQP: executing set_hostid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
gss/krb5 is not supported
Formatting mgs, mds, osts
Format mds1: /dev/mapper/mds1_flakey
Format mds2: /dev/mapper/mds2_flakey
Format ost1: /dev/mapper/ost1_flakey
Format ost2: /tmp/lustre-ost2

Permanent disk data:
Target: lustre:OST2710
Index: 10000
Lustre FS: lustre
Mount type: ldiskfs
Flags: 0x62 (OST first_time update )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20

device size = 195MB
formatting backing filesystem ldiskfs on /dev/mapper/ost1_flakey
target name lustre:OST2710
kilobytes 200000
options -b 4096 -I 512 -q -O uninit_bg,extents,dir_nlink,quota,project,huge_file,large_dir,flex_bg -G 256 -E lazy_itable_init,resize="4290772992",lazy_journal_init="0" -F
mkfs_cmd = mke2fs -j -b 4096 -L lustre:OST2710 -b 4096 -I 512 -q -O uninit_bg,extents,dir_nlink,quota,project,huge_file,large_dir,flex_bg -G 256 -E lazy_itable_init,resize="4290772992",lazy_journal_init="0" -F /dev/mapper/ost1_flakey 200000k
Writing CONFIGS/mountdata

Permanent disk data:
Target: lustre:OST03e8
Index: 1000
Lustre FS: lustre
Mount type: ldiskfs
Flags: 0x62 (OST first_time update )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20

formatting backing filesystem ldiskfs on /dev/loop4
target name lustre:OST03e8
kilobytes 200000
options -b 4096 -I 512 -q -O uninit_bg,extents,dir_nlink,quota,project,huge_file,large_dir,flex_bg -G 256 -E lazy_itable_init,resize="4290772992",lazy_journal_init="0" -F
mkfs_cmd = mke2fs -j -b 4096 -L lustre:OST03e8 -b 4096 -I 512 -q -O uninit_bg,extents,dir_nlink,quota,project,huge_file,large_dir,flex_bg -G 256 -E lazy_itable_init,resize="4290772992",lazy_journal_init="0" -F /dev/loop4 200000k
Writing CONFIGS/mountdata
start mds service on tmp.20a7BdrJQP
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Commit the device label on /dev/mapper/mds1_flakey
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Commit the device label on /dev/mapper/mds2_flakey
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Commit the device label on /dev/mapper/ost1_flakey
Started lustre-OST2710
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST2710-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost2 service on tmp.20a7BdrJQP
Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2
Started lustre-OST03e8
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST03e8-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
ok
OBDS:
1000: lustre-OST03e8_UUID ACTIVE
10000: lustre-OST2710_UUID ACTIVE
1+0 records in
1+0 records out
4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00122522 s, 3.3 MB/s
tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST2710-osc-MDT0000.ost_server_uuid 40
os[cp].lustre-OST2710-osc-MDT0000.ost_server_uuid in FULL state after 0 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST2710-osc-MDT0001.ost_server_uuid 40
os[cp].lustre-OST2710-osc-MDT0001.ost_server_uuid in FULL state after 0 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST03e8-osc-MDT0000.ost_server_uuid 40
os[cp].lustre-OST03e8-osc-MDT0000.ost_server_uuid in FULL state after 5 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST03e8-osc-MDT0001.ost_server_uuid 40
os[cp].lustre-OST03e8-osc-MDT0001.ost_server_uuid in FULL state after 0 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
Stopping clients: tmp.20a7BdrJQP /mnt/lustre (opts:)
Stopping client tmp.20a7BdrJQP /mnt/lustre opts:
Stopping clients: tmp.20a7BdrJQP /mnt/lustre2 (opts:)
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost2 (opts:-f) on tmp.20a7BdrJQP
Stopping clients: tmp.20a7BdrJQP /mnt/lustre (opts:-f)
Stopping clients: tmp.20a7BdrJQP /mnt/lustre2 (opts:-f)
tmp.20a7BdrJQP: executing set_hostid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
gss/krb5 is not supported
Formatting mgs, mds, osts
Format mds1: /dev/mapper/mds1_flakey
Format mds2: /dev/mapper/mds2_flakey
Format ost1: /dev/mapper/ost1_flakey
Format ost2: /dev/mapper/ost2_flakey
start mds service on tmp.20a7BdrJQP
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Commit the device label on /dev/mapper/mds1_flakey
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Commit the device label on /dev/mapper/mds2_flakey
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Commit the device label on /dev/mapper/ost1_flakey
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 40
os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 5 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 40
os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
PASS 56a (103s)

== conf-sanity test 56b: test target_obd correctness with nonconsecutive MDTs ========================================================== 16:38:20 (1679935100)
SKIP: conf-sanity test_56b needs >= 3 MDTs
SKIP 56b (1s)

== conf-sanity test 57a: initial registration from failnode should fail (should return errs) ========================================================== 16:38:21 (1679935101)
tmp.20a7BdrJQP: executing load_modules_local
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: Loading modules from /mnt/build/lustre/tests/..
tmp.20a7BdrJQP: detected 2 online CPUs by sysfs
tmp.20a7BdrJQP: Force libcfs to create 2 CPU partitions
tmp.20a7BdrJQP: gss/krb5 is not supported
checking for existing Lustre data: found

Read previous values:
Target: lustre-MDT0000
Index: 0
Lustre FS: lustre
Mount type: ldiskfs
Flags: 0x5 (MDT MGS )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: sys.timeout=20 mdt.identity_upcall=/mnt/build/lustre/tests/../utils/l_getidentity

Permanent disk data:
Target: lustre=MDT0000
Index: 0
Lustre FS: lustre
Mount type: ldiskfs
Flags: 0x105 (MDT MGS writeconf )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: sys.timeout=20 mdt.identity_upcall=/mnt/build/lustre/tests/../utils/l_getidentity

checking for existing Lustre data: found

Read previous values:
Target: lustre-MDT0001
Index: 1
Lustre FS: lustre
Mount type: ldiskfs
Flags: 0x1 (MDT )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20 mdt.identity_upcall=/mnt/build/lustre/tests/../utils/l_getidentity

Permanent disk data:
Target: lustre=MDT0001
Index: 1
Lustre FS: lustre
Mount type: ldiskfs
Flags: 0x101 (MDT writeconf )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20 mdt.identity_upcall=/mnt/build/lustre/tests/../utils/l_getidentity

checking for existing Lustre data: found

Read previous values:
Target: lustre-OST0000
Index: 0
Lustre FS: lustre
Mount type: ldiskfs
Flags: 0x2 (OST )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20

Permanent disk data:
Target: lustre=OST0000
Index: 0
Lustre FS: lustre
Mount type: ldiskfs
Flags: 0x102 (OST writeconf )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20

checking for existing Lustre data: found

Read previous values:
Target: lustre-OST0001
Index: 1
Lustre FS: lustre
Mount type: ldiskfs
Flags: 0x62 (OST first_time update )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20

Permanent disk data:
Target: lustre=OST0001
Index: 1
Lustre FS: lustre
Mount type: ldiskfs
Flags: 0x162 (OST first_time update writeconf )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20

checking for existing Lustre data: found

Read previous values:
Target: lustre-OST0000
Index: 0
Lustre FS: lustre
Mount type: ldiskfs
Flags: 0x102 (OST writeconf )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20

Permanent disk data:
Target: lustre=OST0000
Index: 0
Lustre FS: lustre
Mount type: ldiskfs
Flags: 0x142 (OST update writeconf )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20 failover.node=192.168.125.30@tcp

Writing CONFIGS/mountdata
start mds service on tmp.20a7BdrJQP
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
mount.lustre: mount /dev/mapper/ost1_flakey at /mnt/lustre-ost1 failed: Cannot assign requested address
Start of /dev/mapper/ost1_flakey on ost1 failed 99
umount lustre on /mnt/lustre.....
stop ost1 service on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
modules unloaded.
PASS 57a (22s)

== conf-sanity test 57b: initial registration from servicenode should not fail ========================================================== 16:38:43 (1679935123)
tmp.20a7BdrJQP: executing load_modules_local
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: Loading modules from /mnt/build/lustre/tests/..
tmp.20a7BdrJQP: detected 2 online CPUs by sysfs
tmp.20a7BdrJQP: Force libcfs to create 2 CPU partitions
tmp.20a7BdrJQP: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
tmp.20a7BdrJQP: gss/krb5 is not supported
tmp.20a7BdrJQP: quota/lquota options: 'hash_lqs_cur_bits=3'
checking for existing Lustre data: found

Read previous values:
Target: lustre-MDT0000
Index: 0
Lustre FS: lustre
Mount type: ldiskfs
Flags: 0x5 (MDT MGS )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: sys.timeout=20 mdt.identity_upcall=/mnt/build/lustre/tests/../utils/l_getidentity

Permanent disk data:
Target: lustre=MDT0000
Index: 0
Lustre FS: lustre
Mount type: ldiskfs
Flags: 0x105 (MDT MGS writeconf )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: sys.timeout=20 mdt.identity_upcall=/mnt/build/lustre/tests/../utils/l_getidentity

checking for existing Lustre data: found

Read previous values:
Target: lustre-MDT0001
Index: 1
Lustre FS: lustre
Mount type: ldiskfs
Flags: 0x1 (MDT )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20 mdt.identity_upcall=/mnt/build/lustre/tests/../utils/l_getidentity

Permanent disk data:
Target: lustre=MDT0001
Index: 1
Lustre FS: lustre
Mount type: ldiskfs
Flags: 0x101 (MDT writeconf )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20 mdt.identity_upcall=/mnt/build/lustre/tests/../utils/l_getidentity

checking for existing Lustre data: found

Read previous values:
Target: lustre-OST0000
Index: 0
Lustre FS: lustre
Mount type: ldiskfs
Flags: 0x102 (OST writeconf )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20 failover.node=192.168.125.30@tcp

Permanent disk data:
Target: lustre=OST0000
Index: 0
Lustre FS: lustre
Mount type: ldiskfs
Flags: 0x102 (OST writeconf )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20 failover.node=192.168.125.30@tcp

checking for existing Lustre data: found

Read previous values:
Target: lustre-OST0001
Index: 1
Lustre FS: lustre
Mount type: ldiskfs
Flags: 0x142 (OST update writeconf )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20

Permanent disk data:
Target: lustre=OST0001
Index: 1
Lustre FS: lustre
Mount type: ldiskfs
Flags: 0x142 (OST update writeconf )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20

checking for existing Lustre data: found

Read previous values:
Target: lustre-OST0000
Index: 0
Lustre FS: lustre
Mount type: ldiskfs
Flags: 0x102 (OST writeconf )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20 failover.node=192.168.125.30@tcp

Permanent disk data:
Target: lustre=OST0000
Index: 0
Lustre FS: lustre
Mount type: ldiskfs
Flags: 0x1142 (OST update writeconf no_primnode )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20 failover.node=192.168.125.30@tcp:192.168.125.30@tcp

Writing CONFIGS/mountdata
start mds service on tmp.20a7BdrJQP
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
umount lustre on /mnt/lustre.....
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
modules unloaded.
PASS 57b (28s)

== conf-sanity test 58: missing llog files must not prevent MDT from mounting ========================================================== 16:39:11 (1679935151)
start mds service on tmp.20a7BdrJQP
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
gss/krb5 is not supported
quota/lquota options: 'hash_lqs_cur_bits=3'
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
total: 100 open/close in 0.63 seconds: 158.99 ops/second
 - unlinked 0 (time 1679935166 ; total 0 ; last 0)
total: 100 unlinks in 1 seconds: 100.000000 unlinks/second
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
start mds service on tmp.20a7BdrJQP
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 16 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
mdc.lustre-MDT0001-mdc-*.mds_server_uuid in FULL state after 0 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
fail_loc=0
umount lustre on /mnt/lustre.....
Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:)
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
modules unloaded.
PASS 58 (82s)

== conf-sanity test 59: writeconf mount option =========== 16:40:33 (1679935233)
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
Stopping clients: tmp.20a7BdrJQP /mnt/lustre (opts:)
Stopping clients: tmp.20a7BdrJQP /mnt/lustre2 (opts:)
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
original ost count: 1 (expect > 0)
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
after mdt writeconf count: 0 (expect 0)
OST start without writeconf should fail:
mount.lustre: mount /dev/mapper/ost1_flakey at /mnt/lustre-ost1 failed: No such file or directory
Is the MGS specification correct?
Is the filesystem name correct?
If upgrading, is the copied client log valid? (see upgrade docs)
OST start with writeconf should succeed:
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
after ost writeconf count: 1 (expect 1)
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0001-osc-[-0-9a-f]*.ost_server_uuid
after ost2 writeconf count: 2 (expect 2)
tunefs.lustre: Unable to mount /dev/mapper/mds1_flakey: No such device
Is the ldiskfs module available?
tunefs.lustre FATAL: failed to write local files
tunefs.lustre: exiting with 19 (No such device)
checking for existing Lustre data: found

Read previous values:
Target: lustre-MDT0000
Index: 0
Lustre FS: lustre
Mount type: ldiskfs
Flags: 0x5 (MDT MGS )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: sys.timeout=20 mdt.identity_upcall=/mnt/build/lustre/tests/../utils/l_getidentity

Permanent disk data:
Target: lustre=MDT0000
Index: 0
Lustre FS: lustre
Mount type: ldiskfs
Flags: 0x105 (MDT MGS writeconf )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: sys.timeout=20 mdt.identity_upcall=/mnt/build/lustre/tests/../utils/l_getidentity

tunefs.lustre: Unable to mount /dev/mapper/mds2_flakey: No such device
Is the ldiskfs module available?
tunefs.lustre FATAL: failed to write local files
tunefs.lustre: exiting with 19 (No such device)
checking for existing Lustre data: found

Read previous values:
Target: lustre-MDT0001
Index: 1
Lustre FS: lustre
Mount type: ldiskfs
Flags: 0x1 (MDT )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20 mdt.identity_upcall=/mnt/build/lustre/tests/../utils/l_getidentity

Permanent disk data:
Target: lustre=MDT0001
Index: 1
Lustre FS: lustre
Mount type: ldiskfs
Flags: 0x101 (MDT writeconf )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20 mdt.identity_upcall=/mnt/build/lustre/tests/../utils/l_getidentity

tunefs.lustre: Unable to mount /dev/mapper/ost1_flakey: No such device
Is the ldiskfs module available?
tunefs.lustre FATAL: failed to write local files
tunefs.lustre: exiting with 19 (No such device)
checking for existing Lustre data: found

Read previous values:
Target: lustre-OST0000
Index: 0
Lustre FS: lustre
Mount type: ldiskfs
Flags: 0x1002 (OST no_primnode )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20 failover.node=192.168.125.30@tcp:192.168.125.30@tcp

Permanent disk data:
Target: lustre=OST0000
Index: 0
Lustre FS: lustre
Mount type: ldiskfs
Flags: 0x1102 (OST writeconf no_primnode )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20 failover.node=192.168.125.30@tcp:192.168.125.30@tcp

tunefs.lustre: Unable to mount /dev/mapper/ost2_flakey: No such device
Is the ldiskfs module available?
tunefs.lustre FATAL: failed to write local files
tunefs.lustre: exiting with 19 (No such device)
checking for existing Lustre data: found

Read previous values:
Target: lustre-OST0001
Index: 1
Lustre FS: lustre
Mount type: ldiskfs
Flags: 0x2 (OST )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20

Permanent disk data:
Target: lustre=OST0001
Index: 1
Lustre FS: lustre
Mount type: ldiskfs
Flags: 0x102 (OST writeconf )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20

tunefs failed, reformatting instead
Stopping clients: tmp.20a7BdrJQP /mnt/lustre (opts:-f)
Stopping clients: tmp.20a7BdrJQP /mnt/lustre2 (opts:-f)
tmp.20a7BdrJQP: executing set_hostid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs Force libcfs to create 2 CPU partitions ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' gss/krb5 is not supported quota/lquota options: 'hash_lqs_cur_bits=3' Formatting mgs, mds, osts Format mds1: /dev/mapper/mds1_flakey Format mds2: /dev/mapper/mds2_flakey Format ost1: /dev/mapper/ost1_flakey Format ost2: /dev/mapper/ost2_flakey start mds service on tmp.20a7BdrJQP Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 Commit the device label on /dev/mapper/mds1_flakey Started lustre-MDT0000 start mds service on tmp.20a7BdrJQP Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 Commit the device label on /dev/mapper/mds2_flakey Started lustre-MDT0001 tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F" tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F" start ost1 service on tmp.20a7BdrJQP Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 Commit the device label on /dev/mapper/ost1_flakey Started lustre-OST0000 tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F" tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 40 os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 0 sec tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F" tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 40 
os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
PASS 59 (84s)

== conf-sanity test 60a: check mkfs.lustre --mkfsoptions -E -O options setting ========================================================== 16:41:57 (1679935317)

Permanent disk data:
Target:     lustre:MDT0000
Index:      0
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x65 (MDT MGS first_time update )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: sys.timeout=20 mdt.identity_upcall=/mnt/build/lustre/tests/../utils/l_getidentity

device size = 195MB
formatting backing filesystem ldiskfs on /dev/mapper/mds1_flakey
	target name   lustre:MDT0000
	kilobytes     200000
	options       -I 1024 -i 2560 -q -O ^uninit_bg,^extents,dirdata,dir_nlink,quota,project,huge_file,ea_inode,large_dir,flex_bg -E stride=64,lazy_journal_init="0",lazy_itable_init="0" -F
mkfs_cmd = mke2fs -j -b 4096 -L lustre:MDT0000 -I 1024 -i 2560 -q -O ^uninit_bg,^extents,dirdata,dir_nlink,quota,project,huge_file,ea_inode,large_dir,flex_bg -E stride=64,lazy_journal_init="0",lazy_itable_init="0" -F /dev/mapper/mds1_flakey 200000k
Writing CONFIGS/mountdata

Permanent disk data:
Target:     lustre:MDT0001
Index:      1
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x61 (MDT first_time update )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20 mdt.identity_upcall=/mnt/build/lustre/tests/../utils/l_getidentity

device size = 195MB
formatting backing filesystem ldiskfs on /dev/mapper/mds2_flakey
	target name   lustre:MDT0001
	kilobytes     200000
	options       -I 1024 -i 2560 -q -O ^uninit_bg,^extents,dirdata,dir_nlink,quota,project,huge_file,ea_inode,large_dir,flex_bg -E stride=64,lazy_journal_init="0",lazy_itable_init="0" -F
mkfs_cmd = mke2fs -j -b 4096 -L lustre:MDT0001 -I 1024 -i 2560 -q -O ^uninit_bg,^extents,dirdata,dir_nlink,quota,project,huge_file,ea_inode,large_dir,flex_bg -E stride=64,lazy_journal_init="0",lazy_itable_init="0" -F /dev/mapper/mds2_flakey 200000k
Writing CONFIGS/mountdata
dumpe2fs 1.45.6.wc3 (28-Sep-2020)
stop mds service on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping clients: tmp.20a7BdrJQP /mnt/lustre (opts:-f)
Stopping clients: tmp.20a7BdrJQP /mnt/lustre2 (opts:-f)
tmp.20a7BdrJQP: executing set_hostid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
gss/krb5 is not supported
Formatting mgs, mds, osts
Format mds1: /dev/mapper/mds1_flakey
Format mds2: /dev/mapper/mds2_flakey
Format ost1: /dev/mapper/ost1_flakey
Format ost2: /tmp/lustre-ost2
start mds service on tmp.20a7BdrJQP
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Commit the device label on /dev/mapper/mds1_flakey
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Commit the device label on /dev/mapper/mds2_flakey
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey
/mnt/lustre-ost1 Commit the device label on /dev/mapper/ost1_flakey Started lustre-OST0000 tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F" tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 40 os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 0 sec tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F" tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 40 os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F" stop ost1 service on tmp.20a7BdrJQP Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP stop mds service on tmp.20a7BdrJQP Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP stop mds service on tmp.20a7BdrJQP Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP PASS 60a (50s) == conf-sanity test 60b: check mkfs.lustre MDT default features ========================================================== 16:42:47 (1679935367) dumpe2fs 1.45.6.wc3 (28-Sep-2020) Filesystem features: has_journal ext_attr resize_inode dir_index filetype flex_bg ea_inode dirdata large_dir sparse_super large_file huge_file uninit_bg dir_nlink quota project Journal features: journal_incompat_revoke PASS 60b (0s) == conf-sanity test 61a: large xattr ===================== 16:42:47 (1679935367) start mds service on tmp.20a7BdrJQP Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 Started lustre-MDT0000 start mds service on tmp.20a7BdrJQP Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 Started lustre-MDT0001 tmp.20a7BdrJQP: executing wait_import_state_mount 
FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F" tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F" start ost1 service on tmp.20a7BdrJQP Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 Started lustre-OST0000 tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F" mount lustre on /mnt/lustre..... Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre save large xattr of 65536 bytes on trusted.big on /mnt/lustre/f61a.conf-sanity shrink value of trusted.big on /mnt/lustre/f61a.conf-sanity grow value of trusted.big on /mnt/lustre/f61a.conf-sanity check value of trusted.big on /mnt/lustre/f61a.conf-sanity after remounting MDS Failing mds1 on tmp.20a7BdrJQP Stopping /mnt/lustre-mds1 (opts:) on tmp.20a7BdrJQP 16:43:02 (1679935382) shut down Failover mds1 to tmp.20a7BdrJQP mount facets: mds1 Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 Started lustre-MDT0000 16:43:17 (1679935397) targets are mounted 16:43:17 (1679935397) facet_failover done tmp.20a7BdrJQP: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F" remove large xattr trusted.big from /mnt/lustre/f61a.conf-sanity Failing mds1 on tmp.20a7BdrJQP Stopping /mnt/lustre-mds1 (opts:) on tmp.20a7BdrJQP 16:43:25 (1679935405) shut down Failover mds1 to tmp.20a7BdrJQP mount 
facets: mds1 Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 Started lustre-MDT0000 16:43:41 (1679935421) targets are mounted 16:43:41 (1679935421) facet_failover done tmp.20a7BdrJQP: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F" Stopping clients: tmp.20a7BdrJQP /mnt/lustre (opts:) Stopping client tmp.20a7BdrJQP /mnt/lustre opts: Stopping clients: tmp.20a7BdrJQP /mnt/lustre2 (opts:) Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP e2fsck -d -v -t -t -f -y /dev/mapper/mds1_flakey -m8 e2fsck 1.45.6.wc3 (28-Sep-2020) Use max possible thread num: 1 instead thread 0 jumping to group 0 e2fsck_pass1_run:2517: increase inode 81 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 82 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 83 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 84 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 85 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 86 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 87 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 88 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 89 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 90 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 91 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 92 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 93 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 94 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 95 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 96 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 97 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 98 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 99 
badness 0 to 2 e2fsck_pass1_run:2517: increase inode 100 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 101 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 102 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 103 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 104 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 105 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 106 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 107 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 108 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 109 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 110 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 111 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 112 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 113 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 114 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 115 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 116 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 117 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 118 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 119 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 120 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 121 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 122 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 123 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 124 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 125 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 126 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 127 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 128 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 129 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 130 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 131 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 132 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 133 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 134 
badness 0 to 2 e2fsck_pass1_run:2517: increase inode 135 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 136 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 137 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 138 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 139 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 140 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 141 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 142 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 143 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 144 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 145 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 146 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 147 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 148 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 149 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 150 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 151 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 152 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 153 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 154 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 155 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 156 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 157 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 158 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 159 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 160 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 26697 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 26724 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 26726 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 26727 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 26728 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 26729 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 26730 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 53372 badness 0 to 2 e2fsck_pass1_run:2517: 
increase inode 53373 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 53374 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 53375 badness 0 to 2 Pass 1: Checking inodes, blocks, and sizes [Thread 0] Scan group range [0, 3) [Thread 0] group 1 finished [Thread 0] group 2 finished [Thread 0] group 3 finished [Thread 0] Pass 1: Memory used: 268k/0k (148k/121k), time: 0.00/ 0.00/ 0.00 [Thread 0] Pass 1: I/O read: 1MB, write: 0MB, rate: 693.48MB/s [Thread 0] Scanned group range [0, 3), inodes 276 Pass 2: Checking directory structure Pass 2: Memory used: 268k/0k (115k/154k), time: 0.01/ 0.00/ 0.01 Pass 2: I/O read: 1MB, write: 0MB, rate: 124.67MB/s Pass 3: Checking directory connectivity Peak memory: Memory used: 268k/0k (115k/154k), time: 0.03/ 0.02/ 0.01 Pass 3A: Memory used: 268k/0k (115k/154k), time: 0.00/ 0.00/ 0.00 Pass 3A: I/O read: 0MB, write: 0MB, rate: 0.00MB/s Pass 3: Memory used: 268k/0k (113k/156k), time: 0.00/ 0.00/ 0.00 Pass 3: I/O read: 1MB, write: 0MB, rate: 7936.51MB/s Pass 4: Checking reference counts Pass 4: Memory used: 268k/0k (73k/196k), time: 0.04/ 0.00/ 0.04 Pass 4: I/O read: 0MB, write: 0MB, rate: 0.00MB/s Pass 5: Checking group summary information Pass 5: Memory used: 268k/0k (72k/197k), time: 0.01/ 0.00/ 0.01 Pass 5: I/O read: 1MB, write: 0MB, rate: 67.91MB/s 274 inodes used (0.34%, out of 79992) 5 non-contiguous files (1.8%) 0 non-contiguous directories (0.0%) # of inodes with ind/dind/tind blocks: 0/0/0 24545 blocks used (49.09%, out of 50000) 0 bad blocks 1 large file 147 regular files 117 directories 0 character device files 0 block device files 0 fifos 0 links 0 symbolic links (0 fast symbolic links) 0 sockets ------------ 264 files Memory used: 268k/0k (74k/194k), time: 0.11/ 0.02/ 0.06 I/O read: 1MB, write: 1MB, rate: 9.33MB/s e2fsck -d -v -t -t -f -y /dev/mapper/mds2_flakey -m8 e2fsck 1.45.6.wc3 (28-Sep-2020) Use max possible thread num: 1 instead thread 0 jumping to group 0 e2fsck_pass1_run:2517: increase inode 81 badness 0 to 
2 e2fsck_pass1_run:2517: increase inode 82 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 83 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 84 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 85 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 86 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 87 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 88 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 89 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 90 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 91 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 92 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 93 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 94 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 95 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 96 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 97 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 98 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 99 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 100 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 101 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 102 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 103 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 104 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 105 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 106 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 107 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 108 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 109 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 110 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 111 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 112 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 113 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 114 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 115 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 116 badness 0 to 2 
e2fsck_pass1_run:2517: increase inode 117 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 118 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 119 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 120 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 121 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 122 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 123 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 124 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 125 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 126 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 127 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 128 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 129 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 130 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 131 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 132 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 133 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 134 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 135 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 136 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 137 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 138 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 139 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 140 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 141 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 142 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 143 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 144 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 145 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 146 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 147 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 148 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 150 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 26719 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 26720 badness 0 to 2 
e2fsck_pass1_run:2517: increase inode 26721 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 26722 badness 0 to 2 Pass 1: Checking inodes, blocks, and sizes [Thread 0] Scan group range [0, 3) [Thread 0] group 1 finished [Thread 0] group 2 finished [Thread 0] group 3 finished [Thread 0] Pass 1: Memory used: 268k/0k (148k/121k), time: 0.00/ 0.00/ 0.00 [Thread 0] Pass 1: I/O read: 1MB, write: 0MB, rate: 651.89MB/s [Thread 0] Scanned group range [0, 3), inodes 256 Pass 2: Checking directory structure Pass 2: Memory used: 268k/0k (115k/154k), time: 0.01/ 0.00/ 0.00 Pass 2: I/O read: 1MB, write: 0MB, rate: 133.82MB/s Pass 3: Checking directory connectivity Peak memory: Memory used: 268k/0k (115k/154k), time: 0.02/ 0.01/ 0.00 Pass 3A: Memory used: 268k/0k (115k/154k), time: 0.00/ 0.00/ 0.00 Pass 3A: I/O read: 0MB, write: 0MB, rate: 0.00MB/s Pass 3: Memory used: 268k/0k (113k/156k), time: 0.00/ 0.00/ 0.00 Pass 3: I/O read: 1MB, write: 0MB, rate: 8403.36MB/s Pass 4: Checking reference counts Pass 4: Memory used: 268k/0k (73k/196k), time: 0.00/ 0.00/ 0.00 Pass 4: I/O read: 0MB, write: 0MB, rate: 0.00MB/s Pass 5: Checking group summary information Pass 5: Memory used: 268k/0k (72k/197k), time: 0.01/ 0.00/ 0.00 Pass 5: I/O read: 1MB, write: 0MB, rate: 113.44MB/s 254 inodes used (0.32%, out of 79992) 2 non-contiguous files (0.8%) 0 non-contiguous directories (0.0%) # of inodes with ind/dind/tind blocks: 0/0/0 24514 blocks used (49.03%, out of 50000) 0 bad blocks 1 large file 135 regular files 109 directories 0 character device files 0 block device files 0 fifos 0 links 0 symbolic links (0 fast symbolic links) 0 sockets ------------ 244 files Memory used: 268k/0k (75k/194k), time: 0.05/ 0.02/ 0.00 I/O read: 1MB, write: 1MB, rate: 20.13MB/s start mds service on tmp.20a7BdrJQP Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 Started lustre-MDT0000 start mds service on tmp.20a7BdrJQP Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 
Started lustre-MDT0001 tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F" tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F" start ost1 service on tmp.20a7BdrJQP Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 Started lustre-OST0000 tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F" mount lustre on /mnt/lustre..... Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre umount lustre on /mnt/lustre..... Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:) stop ost1 service on tmp.20a7BdrJQP Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP stop mds service on tmp.20a7BdrJQP Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP stop mds service on tmp.20a7BdrJQP Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP modules unloaded. PASS 61a (113s) == conf-sanity test 61b: large xattr ===================== 16:44:40 (1679935480) start mds service on tmp.20a7BdrJQP Loading modules from /mnt/build/lustre/tests/.. 
detected 2 online CPUs by sysfs Force libcfs to create 2 CPU partitions ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' gss/krb5 is not supported quota/lquota options: 'hash_lqs_cur_bits=3' Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 Started lustre-MDT0000 start mds service on tmp.20a7BdrJQP Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 Started lustre-MDT0001 tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F" tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F" start ost1 service on tmp.20a7BdrJQP Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 Started lustre-OST0000 tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F" mount lustre on /mnt/lustre..... 
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre Stopping clients: tmp.20a7BdrJQP /mnt/lustre (opts:) Stopping client tmp.20a7BdrJQP /mnt/lustre opts: Stopping clients: tmp.20a7BdrJQP /mnt/lustre2 (opts:) Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP debugfs 1.45.6.wc3 (28-Sep-2020) large ea <163> debugfs 1.45.6.wc3 (28-Sep-2020) start mds service on tmp.20a7BdrJQP Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 Started lustre-MDT0000 start mds service on tmp.20a7BdrJQP Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 Started lustre-MDT0001 tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F" tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F" start ost1 service on tmp.20a7BdrJQP Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 Started lustre-OST0000 tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F" mount lustre on /mnt/lustre..... 
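Tests 61a/61b above store, shrink, grow, and re-verify a 65536-byte trusted.big xattr; the `large ea <163>` debugfs output is that value spilled into a separate EA inode (the ea_inode feature formatted in earlier). A hedged sketch of the client-side operations: the file name and mount point match the log, but the commands are illustrative, not the suite's exact code:

```shell
# Build a 65536-byte xattr value, matching "save large xattr of 65536 bytes"
# in the log above.
big=$(head -c 65536 /dev/zero | tr '\0' 'x')

# On a live Lustre client mount these would exercise the same path as 61a
# (commented out here because they need the mounted filesystem):
#   setfattr -n trusted.big -v "$big" /mnt/lustre/f61a.conf-sanity
#   getfattr --only-values -n trusted.big /mnt/lustre/f61a.conf-sanity | wc -c

echo "${#big}"   # prints 65536, the generated value length
```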
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre Started LFSCK on the device lustre-MDT0000: scrub namespace Stopping clients: tmp.20a7BdrJQP /mnt/lustre (opts:) Stopping client tmp.20a7BdrJQP /mnt/lustre opts: Stopping clients: tmp.20a7BdrJQP /mnt/lustre2 (opts:) Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP e2fsck -d -v -t -t -f -y /dev/mapper/mds1_flakey -m8 e2fsck 1.45.6.wc3 (28-Sep-2020) Use max possible thread num: 1 instead thread 0 jumping to group 0 e2fsck_pass1_run:2517: increase inode 81 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 82 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 83 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 84 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 85 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 86 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 87 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 88 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 89 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 90 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 91 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 92 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 93 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 94 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 95 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 96 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 97 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 98 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 99 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 100 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 101 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 102 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 103 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 104 badness 0 to 2 e2fsck_pass1_run:2517: 
increase inode 105 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 106 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 107 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 108 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 109 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 110 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 111 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 112 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 113 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 114 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 115 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 116 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 117 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 118 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 119 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 120 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 121 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 122 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 123 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 124 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 125 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 126 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 127 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 128 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 129 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 130 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 131 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 132 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 133 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 134 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 135 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 136 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 137 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 138 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 139 badness 0 to 2 e2fsck_pass1_run:2517: 
increase inode 140 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 141 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 142 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 143 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 144 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 145 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 146 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 147 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 148 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 149 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 150 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 151 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 152 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 153 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 154 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 155 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 156 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 157 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 158 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 159 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 160 badness 0 to 2 e2fsck_pass1_run:2507: increase inode 163 badness 0 to 1 e2fsck_pass1_run:2517: increase inode 163 badness 1 to 3 e2fsck_pass1_run:2517: increase inode 26697 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 26724 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 26726 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 26727 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 26728 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 26729 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 26730 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 53372 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 53373 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 53374 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 53375 badness 0 to 2 Pass 1: Checking inodes, blocks, and sizes [Thread 0] Scan 
group range [0, 3) [Thread 0] group 1 finished [Thread 0] group 2 finished [Thread 0] group 3 finished [Thread 0] Pass 1: Memory used: 276k/0k (156k/121k), time: 0.00/ 0.00/ 0.00 [Thread 0] Pass 1: I/O read: 1MB, write: 0MB, rate: 710.73MB/s [Thread 0] Scanned group range [0, 3), inodes 277 Pass 2: Checking directory structure Pass 2: Memory used: 276k/0k (123k/154k), time: 0.00/ 0.00/ 0.00 Pass 2: I/O read: 1MB, write: 0MB, rate: 611.62MB/s Pass 3: Checking directory connectivity Peak memory: Memory used: 276k/0k (123k/154k), time: 0.01/ 0.00/ 0.00 Pass 3A: Memory used: 276k/0k (123k/154k), time: 0.00/ 0.00/ 0.00 Pass 3A: I/O read: 0MB, write: 0MB, rate: 0.00MB/s Pass 3: Memory used: 276k/0k (121k/156k), time: 0.00/ 0.00/ 0.00 Pass 3: I/O read: 1MB, write: 0MB, rate: 16949.15MB/s Pass 4: Checking reference counts Pass 4: Memory used: 276k/0k (73k/204k), time: 0.03/ 0.02/ 0.00 Pass 4: I/O read: 0MB, write: 0MB, rate: 0.00MB/s Pass 5: Checking group summary information Pass 5: Memory used: 276k/0k (72k/205k), time: 0.00/ 0.00/ 0.00 Pass 5: I/O read: 1MB, write: 0MB, rate: 545.26MB/s 276 inodes used (0.35%, out of 79992) 6 non-contiguous files (2.2%) 0 non-contiguous directories (0.0%) # of inodes with ind/dind/tind blocks: 1/0/0 24563 blocks used (49.13%, out of 50000) 0 bad blocks 1 large file 148 regular files 118 directories 0 character device files 0 block device files 0 fifos 0 links 0 symbolic links (0 fast symbolic links) 0 sockets ------------ 266 files Memory used: 276k/0k (75k/202k), time: 0.05/ 0.04/ 0.01 I/O read: 1MB, write: 1MB, rate: 18.46MB/s e2fsck -d -v -t -t -f -y /dev/mapper/mds2_flakey -m8 e2fsck 1.45.6.wc3 (28-Sep-2020) Use max possible thread num: 1 instead thread 0 jumping to group 0 e2fsck_pass1_run:2517: increase inode 81 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 82 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 83 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 84 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 
85 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 86 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 87 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 88 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 89 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 90 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 91 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 92 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 93 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 94 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 95 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 96 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 97 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 98 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 99 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 100 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 101 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 102 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 103 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 104 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 105 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 106 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 107 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 108 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 109 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 110 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 111 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 112 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 113 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 114 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 115 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 116 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 117 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 118 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 119 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 120 badness 0 to 2 
e2fsck_pass1_run:2517: increase inode 121 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 122 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 123 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 124 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 125 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 126 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 127 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 128 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 129 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 130 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 131 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 132 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 133 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 134 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 135 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 136 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 137 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 138 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 139 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 140 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 141 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 142 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 143 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 144 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 145 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 146 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 147 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 148 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 150 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 26719 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 26720 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 26721 badness 0 to 2 e2fsck_pass1_run:2517: increase inode 26722 badness 0 to 2 Pass 1: Checking inodes, blocks, and sizes [Thread 0] Scan group range [0, 3) [Thread 0] group 1 finished 
[Thread 0] group 2 finished [Thread 0] group 3 finished [Thread 0] Pass 1: Memory used: 268k/0k (148k/121k), time: 0.01/ 0.00/ 0.00 [Thread 0] Pass 1: I/O read: 1MB, write: 0MB, rate: 99.75MB/s [Thread 0] Scanned group range [0, 3), inodes 256 Pass 2: Checking directory structure Pass 2: Memory used: 268k/0k (115k/154k), time: 0.00/ 0.00/ 0.00 Pass 2: I/O read: 1MB, write: 0MB, rate: 339.56MB/s Pass 3: Checking directory connectivity Peak memory: Memory used: 268k/0k (115k/154k), time: 0.05/ 0.02/ 0.00 Pass 3A: Memory used: 268k/0k (115k/154k), time: 0.00/ 0.00/ 0.00 Pass 3A: I/O read: 0MB, write: 0MB, rate: 0.00MB/s Pass 3: Memory used: 268k/0k (113k/156k), time: 0.00/ 0.00/ 0.00 Pass 3: I/O read: 1MB, write: 0MB, rate: 15873.02MB/s Pass 4: Checking reference counts Pass 4: Memory used: 268k/0k (73k/196k), time: 0.01/ 0.01/ 0.00 Pass 4: I/O read: 0MB, write: 0MB, rate: 0.00MB/s Pass 5: Checking group summary information Pass 5: Memory used: 268k/0k (72k/197k), time: 0.00/ 0.00/ 0.00 Pass 5: I/O read: 1MB, write: 0MB, rate: 535.33MB/s 254 inodes used (0.32%, out of 79992) 2 non-contiguous files (0.8%) 0 non-contiguous directories (0.0%) # of inodes with ind/dind/tind blocks: 0/0/0 24514 blocks used (49.03%, out of 50000) 0 bad blocks 1 large file 135 regular files 109 directories 0 character device files 0 block device files 0 fifos 0 links 0 symbolic links (0 fast symbolic links) 0 sockets ------------ 244 files Memory used: 268k/0k (75k/194k), time: 0.07/ 0.04/ 0.00 I/O read: 1MB, write: 1MB, rate: 14.81MB/s start mds service on tmp.20a7BdrJQP Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 Started lustre-MDT0000 start mds service on tmp.20a7BdrJQP Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 Started lustre-MDT0001 tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 
115 119 123F" tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F" start ost1 service on tmp.20a7BdrJQP Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 Started lustre-OST0000 tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F" mount lustre on /mnt/lustre..... Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre umount lustre on /mnt/lustre..... Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:) stop ost1 service on tmp.20a7BdrJQP Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP stop mds service on tmp.20a7BdrJQP Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP stop mds service on tmp.20a7BdrJQP Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP modules unloaded. PASS 61b (90s) == conf-sanity test 62: start with disabled journal ====== 16:46:10 (1679935570) disable journal for mds tune2fs 1.45.6.wc3 (28-Sep-2020) start mds service on tmp.20a7BdrJQP Loading modules from /mnt/build/lustre/tests/.. detected 2 online CPUs by sysfs Force libcfs to create 2 CPU partitions ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' gss/krb5 is not supported quota/lquota options: 'hash_lqs_cur_bits=3' Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 mount.lustre: mount /dev/mapper/mds1_flakey at /mnt/lustre-mds1 failed: Invalid argument This may have multiple causes. Are the mount options correct? Check the syslog for more info. 
Start of /dev/mapper/mds1_flakey on mds1 failed 22 disable journal for ost tune2fs 1.45.6.wc3 (28-Sep-2020) start ost1 service on tmp.20a7BdrJQP Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 mount.lustre: mount /dev/mapper/ost1_flakey at /mnt/lustre-ost1 failed: Invalid argument This may have multiple causes. Are the mount options correct? Check the syslog for more info. Start of /dev/mapper/ost1_flakey on ost1 failed 22 umount lustre on /mnt/lustre..... stop ost1 service on tmp.20a7BdrJQP stop mds service on tmp.20a7BdrJQP stop mds service on tmp.20a7BdrJQP modules unloaded. Stopping clients: tmp.20a7BdrJQP /mnt/lustre (opts:-f) Stopping clients: tmp.20a7BdrJQP /mnt/lustre2 (opts:-f) tmp.20a7BdrJQP: executing set_hostid tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F" Loading modules from /mnt/build/lustre/tests/.. detected 2 online CPUs by sysfs Force libcfs to create 2 CPU partitions ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' gss/krb5 is not supported quota/lquota options: 'hash_lqs_cur_bits=3' Formatting mgs, mds, osts Format mds1: /dev/mapper/mds1_flakey Format mds2: /dev/mapper/mds2_flakey Format ost1: /dev/mapper/ost1_flakey Format ost2: /tmp/lustre-ost2 start mds service on tmp.20a7BdrJQP Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 Commit the device label on /dev/mapper/mds1_flakey Started lustre-MDT0000 start mds service on tmp.20a7BdrJQP Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 Commit the device label on /dev/mapper/mds2_flakey Started lustre-MDT0001 tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F" tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid tmp.20a7BdrJQP: Reading test skip list from 
/tmp/ltest.config tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F" start ost1 service on tmp.20a7BdrJQP Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 Commit the device label on /dev/mapper/ost1_flakey Started lustre-OST0000 tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F" tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 40 os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 4 sec tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F" tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 40 os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F" stop ost1 service on tmp.20a7BdrJQP Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP stop mds service on tmp.20a7BdrJQP Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP stop mds service on tmp.20a7BdrJQP Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP PASS 62 (64s) SKIP: conf-sanity test_63 skipping excluded test 63 == conf-sanity test 64: check lfs df --lazy ============== 16:47:15 (1679935635) start mds service on tmp.20a7BdrJQP Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 Started lustre-MDT0000 start mds service on tmp.20a7BdrJQP Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 Started lustre-MDT0001 tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F" tmp.20a7BdrJQP: executing 
wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F" start ost1 service on tmp.20a7BdrJQP Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 Started lustre-OST0000 tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F" start ost2 service on tmp.20a7BdrJQP Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2 Commit the device label on /tmp/lustre-ost2 Started lustre-OST0001 tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0001-osc-[-0-9a-f]*.ost_server_uuid tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F" mount lustre on /mnt/lustre..... Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre stop ost2 service on tmp.20a7BdrJQP Stopping /mnt/lustre-ost2 (opts:-f) on tmp.20a7BdrJQP /mnt/build/lustre/tests/../utils/lfs df
UUID                 1K-blocks      Used  Available  Use%  Mounted on
lustre-MDT0000_UUID      95248      1668      84924    2%  /mnt/lustre[MDT:0]
lustre-MDT0001_UUID      95248      1532      85060    2%  /mnt/lustre[MDT:1]
lustre-OST0000_UUID     142216      1524     126692    2%  /mnt/lustre[OST:0]
filesystem_summary:     142216      1524     126692    2%  /mnt/lustre
umount lustre on /mnt/lustre..... Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:-f) stop ost1 service on tmp.20a7BdrJQP Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP stop mds service on tmp.20a7BdrJQP Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP stop mds service on tmp.20a7BdrJQP Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP modules unloaded. tunefs.lustre: Unable to mount /dev/mapper/mds1_flakey: No such device Is the ldiskfs module available? 
tunefs.lustre FATAL: failed to write local files tunefs.lustre: exiting with 19 (No such device) checking for existing Lustre data: found Read previous values: Target: lustre-MDT0000 Index: 0 Lustre FS: lustre Mount type: ldiskfs Flags: 0x5 (MDT MGS ) Persistent mount opts: user_xattr,errors=remount-ro Parameters: sys.timeout=20 mdt.identity_upcall=/mnt/build/lustre/tests/../utils/l_getidentity Permanent disk data: Target: lustre=MDT0000 Index: 0 Lustre FS: lustre Mount type: ldiskfs Flags: 0x105 (MDT MGS writeconf ) Persistent mount opts: user_xattr,errors=remount-ro Parameters: sys.timeout=20 mdt.identity_upcall=/mnt/build/lustre/tests/../utils/l_getidentity tunefs.lustre: Unable to mount /dev/mapper/mds2_flakey: No such device Is the ldiskfs module available? tunefs.lustre FATAL: failed to write local files tunefs.lustre: exiting with 19 (No such device) checking for existing Lustre data: found Read previous values: Target: lustre-MDT0001 Index: 1 Lustre FS: lustre Mount type: ldiskfs Flags: 0x1 (MDT ) Persistent mount opts: user_xattr,errors=remount-ro Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20 mdt.identity_upcall=/mnt/build/lustre/tests/../utils/l_getidentity Permanent disk data: Target: lustre=MDT0001 Index: 1 Lustre FS: lustre Mount type: ldiskfs Flags: 0x101 (MDT writeconf ) Persistent mount opts: user_xattr,errors=remount-ro Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20 mdt.identity_upcall=/mnt/build/lustre/tests/../utils/l_getidentity tunefs.lustre: Unable to mount /dev/mapper/ost1_flakey: No such device Is the ldiskfs module available? 
tunefs.lustre FATAL: failed to write local files tunefs.lustre: exiting with 19 (No such device) checking for existing Lustre data: found Read previous values: Target: lustre-OST0000 Index: 0 Lustre FS: lustre Mount type: ldiskfs Flags: 0x2 (OST ) Persistent mount opts: ,errors=remount-ro Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20 Permanent disk data: Target: lustre=OST0000 Index: 0 Lustre FS: lustre Mount type: ldiskfs Flags: 0x102 (OST writeconf ) Persistent mount opts: ,errors=remount-ro Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20 tunefs.lustre: Unable to mount /dev/mapper/ost2_flakey: No such device Is the ldiskfs module available? tunefs.lustre FATAL: failed to write local files tunefs.lustre: exiting with 19 (No such device) checking for existing Lustre data: found Read previous values: Target: lustre-OST0001 Index: 1 Lustre FS: lustre Mount type: ldiskfs Flags: 0x2 (OST ) Persistent mount opts: ,errors=remount-ro Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20 Permanent disk data: Target: lustre=OST0001 Index: 1 Lustre FS: lustre Mount type: ldiskfs Flags: 0x102 (OST writeconf ) Persistent mount opts: ,errors=remount-ro Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20 tunefs failed, reformatting instead Stopping clients: tmp.20a7BdrJQP /mnt/lustre (opts:-f) Stopping clients: tmp.20a7BdrJQP /mnt/lustre2 (opts:-f) tmp.20a7BdrJQP: executing set_hostid tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F" Loading modules from /mnt/build/lustre/tests/.. 
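The `Flags:` values in the tunefs.lustre output above form a simple bitmask, and the mapping can be read off the printed combinations themselves (0x1 prints as MDT, 0x2 as OST, 0x5 as "MDT MGS", 0x105 as "MDT MGS writeconf"). A minimal sketch decoding them, with the bit names inferred from this log rather than taken from the Lustre headers:

```python
# Flag bits as implied by the tunefs.lustre lines in this log:
#   0x1 -> MDT, 0x2 -> OST, 0x4 -> MGS, 0x100 -> writeconf
FLAGS = {0x1: "MDT", 0x2: "OST", 0x4: "MGS", 0x100: "writeconf"}

def decode(flags):
    """Render a flags word the way tunefs.lustre prints it."""
    return " ".join(name for bit, name in sorted(FLAGS.items()) if flags & bit)

print(decode(0x105))  # prints: MDT MGS writeconf
print(decode(0x102))  # prints: OST writeconf
```

This reproduces each `Flags: 0x... ( ... )` pair seen in the log, e.g. `0x101` decodes to "MDT writeconf" for the second MDT.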
detected 2 online CPUs by sysfs Force libcfs to create 2 CPU partitions ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' gss/krb5 is not supported quota/lquota options: 'hash_lqs_cur_bits=3' Formatting mgs, mds, osts Format mds1: /dev/mapper/mds1_flakey Format mds2: /dev/mapper/mds2_flakey Format ost1: /dev/mapper/ost1_flakey Format ost2: /dev/mapper/ost2_flakey start mds service on tmp.20a7BdrJQP Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 Commit the device label on /dev/mapper/mds1_flakey Started lustre-MDT0000 start mds service on tmp.20a7BdrJQP Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 Commit the device label on /dev/mapper/mds2_flakey Started lustre-MDT0001 tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F" tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F" start ost1 service on tmp.20a7BdrJQP Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 Commit the device label on /dev/mapper/ost1_flakey Started lustre-OST0000 tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F" tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 40 os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 5 sec tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F" tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 40 
os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F" stop ost1 service on tmp.20a7BdrJQP Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP stop mds service on tmp.20a7BdrJQP Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP stop mds service on tmp.20a7BdrJQP Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP PASS 64 (106s) == conf-sanity test 65: re-create the lost last_rcvd file when server mount ========================================================== 16:49:00 (1679935740) stop mds service on tmp.20a7BdrJQP stop mds service on tmp.20a7BdrJQP debugfs 1.45.6.wc3 (28-Sep-2020) /dev/mapper/mds1_flakey: catastrophic mode - not reading inode or group bitmaps start mds service on tmp.20a7BdrJQP Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 Started lustre-MDT0000 start mds service on tmp.20a7BdrJQP Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 Started lustre-MDT0001 tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F" tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F" stop mds service on tmp.20a7BdrJQP Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP stop mds service on tmp.20a7BdrJQP Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP debugfs 1.45.6.wc3 (28-Sep-2020) /dev/mapper/mds1_flakey: catastrophic mode - not reading inode or group bitmaps PASS 65 (13s) == conf-sanity test 66: replace nids ===================== 16:49:13 (1679935753) start mds service on tmp.20a7BdrJQP Starting mds1: -o localrecov /dev/mapper/mds1_flakey 
/mnt/lustre-mds1 Started lustre-MDT0000 start mds service on tmp.20a7BdrJQP Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 Started lustre-MDT0001 tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F" tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F" start ost1 service on tmp.20a7BdrJQP Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 Started lustre-OST0000 tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F" mount lustre on /mnt/lustre..... Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre Setting lustre-OST0000.osc.active from 1 to 0 Waiting 90s for '0' Updated after 2s: want '0' got '0' replace_nids should fail if MDS, OSTs and clients are UP error: replace_nids: Operation now in progress umount lustre on /mnt/lustre..... 
Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:) replace_nids should fail if MDS and OSTs are UP error: replace_nids: Operation now in progress stop ost1 service on tmp.20a7BdrJQP Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP replace_nids should fail if MDS is UP error: replace_nids: Operation now in progress stop mds service on tmp.20a7BdrJQP Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP stop mds service on tmp.20a7BdrJQP Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP start mds service on tmp.20a7BdrJQP Starting mds1: -o localrecov -o nosvc /dev/mapper/mds1_flakey /mnt/lustre-mds1 Start /dev/mapper/mds1_flakey without service Started lustre-MDT0000 command should accept two parameters replace primary NIDs for a device usage: replace_nids <devicename> <nid1>[,nid2,nid3:nid4,nid5:nid6] correct device name should be passed error: replace_nids: No such device or address wrong nids list should not destroy the system replace primary NIDs for a device usage: replace_nids <devicename> <nid1>[,nid2,nid3:nid4,nid5:nid6] replace primary NIDs for a device usage: replace_nids <devicename> <nid1>[,nid2,nid3:nid4,nid5:nid6] replace OST nid command should accept two parameters replace primary NIDs for a device usage: replace_nids <devicename> <nid1>[,nid2,nid3:nid4,nid5:nid6] wrong nids list should not destroy the system replace primary NIDs for a device usage: replace_nids <devicename> <nid1>[,nid2,nid3:nid4,nid5:nid6] set NIDs with failover replace MDS nid stop mds service on tmp.20a7BdrJQP Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP stop mds service on tmp.20a7BdrJQP start mds service on tmp.20a7BdrJQP Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 Started lustre-MDT0000 start mds service on tmp.20a7BdrJQP Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 Started lustre-MDT0001 tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F" 
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F" Setting lustre-OST0000.osc.active from 0 to 1 Waiting 90s for '1' Waiting 80s for '1' Waiting 70s for '1' Updated after 25s: want '1' got '1' start ost1 service on tmp.20a7BdrJQP Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 Started lustre-OST0000 tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F" mount lustre on /mnt/lustre..... Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre setup single mount lustre success umount lustre on /mnt/lustre..... Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:) stop ost1 service on tmp.20a7BdrJQP Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP stop mds service on tmp.20a7BdrJQP Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP stop mds service on tmp.20a7BdrJQP Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP modules unloaded. Stopping clients: tmp.20a7BdrJQP /mnt/lustre (opts:-f) Stopping clients: tmp.20a7BdrJQP /mnt/lustre2 (opts:-f) tmp.20a7BdrJQP: executing set_hostid tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F" Loading modules from /mnt/build/lustre/tests/.. 
detected 2 online CPUs by sysfs Force libcfs to create 2 CPU partitions ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' gss/krb5 is not supported quota/lquota options: 'hash_lqs_cur_bits=3' Formatting mgs, mds, osts Format mds1: /dev/mapper/mds1_flakey Format mds2: /dev/mapper/mds2_flakey Format ost1: /dev/mapper/ost1_flakey Format ost2: /tmp/lustre-ost2 start mds service on tmp.20a7BdrJQP Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 Commit the device label on /dev/mapper/mds1_flakey Started lustre-MDT0000 start mds service on tmp.20a7BdrJQP Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 Commit the device label on /dev/mapper/mds2_flakey Started lustre-MDT0001 tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F" tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F" start ost1 service on tmp.20a7BdrJQP Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 Commit the device label on /dev/mapper/ost1_flakey Started lustre-OST0000 tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F" tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 40 os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 0 sec tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F" tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 40 
os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F" stop ost1 service on tmp.20a7BdrJQP Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP stop mds service on tmp.20a7BdrJQP Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP stop mds service on tmp.20a7BdrJQP Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP PASS 66 (137s) == conf-sanity test 67: test routes conversion and configuration ========================================================== 16:51:30 (1679935890) PASS 67 (1s) == conf-sanity test 68: be able to reserve specific sequences in FLDB ========================================================== 16:51:31 (1679935891) umount lustre on /mnt/lustre..... start mds service on tmp.20a7BdrJQP Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 Started lustre-MDT0000 start mds service on tmp.20a7BdrJQP Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 Started lustre-MDT0001 tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F" tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F" start ost1 service on tmp.20a7BdrJQP Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 Started lustre-OST0000 tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F" seq.ctl-lustre-MDT0000.fldb=[0x2c0000400-12884902912):0:mdt seq.srv-lustre-MDT0000.space=clear mount lustre on /mnt/lustre..... 
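The FLDB entry above, `seq.ctl-lustre-MDT0000.fldb=[0x2c0000400-12884902912):0:mdt`, prints its range start in hexadecimal but its end in decimal. A minimal sketch (not part of the test suite) showing that the decimal end is the same 0x300000400 boundary the subsequent `seq.srv-lustre-MDT0000.space` allocation starts from:

```python
# Range endpoints exactly as printed in the log line above.
start = 0x2c0000400   # hexadecimal form used for the range start
end = 12884902912     # decimal form used for the range end

# The decimal end converts back to the 0x300000400 boundary that the
# later seq.srv-lustre-MDT0000.space output begins at.
print(hex(end))       # prints: 0x300000400
print(end - start)    # size of the half-open [start, end) range
```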
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre seq.srv-lustre-MDT0000.space=[0x300000400 - 0x340000400]:0:mdt [0x200000bd1:0x1:0x0] umount lustre on /mnt/lustre..... Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:) stop ost1 service on tmp.20a7BdrJQP Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP stop mds service on tmp.20a7BdrJQP Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP stop mds service on tmp.20a7BdrJQP Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP modules unloaded. PASS 68 (33s) SKIP: conf-sanity test_69 skipping SLOW test 69 == conf-sanity test 70a: start MDT0, then OST, then MDT1 ========================================================== 16:52:05 (1679935925) umount lustre on /mnt/lustre..... stop ost1 service on tmp.20a7BdrJQP stop mds service on tmp.20a7BdrJQP stop mds service on tmp.20a7BdrJQP modules unloaded. start mds service on tmp.20a7BdrJQP Loading modules from /mnt/build/lustre/tests/.. detected 2 online CPUs by sysfs Force libcfs to create 2 CPU partitions ../libcfs/libcfs/libcfs options: 'cpu_npartitions=2' ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' gss/krb5 is not supported quota/lquota options: 'hash_lqs_cur_bits=3' Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 Started lustre-MDT0000 start ost1 service on tmp.20a7BdrJQP Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 Started lustre-OST0000 tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F" start mds service on tmp.20a7BdrJQP Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 Started lustre-MDT0001 mount lustre on /mnt/lustre..... Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre umount lustre on /mnt/lustre..... 
Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:)
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
modules unloaded.
PASS 70a (36s)
== conf-sanity test 70b: start OST, MDT1, MDT0 =========== 16:52:41 (1679935961)
start ost1 service on tmp.20a7BdrJQP
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
gss/krb5 is not supported
quota/lquota options: 'hash_lqs_cur_bits=3'
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start mds service on tmp.20a7BdrJQP
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
umount lustre on /mnt/lustre.....
Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:)
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
modules unloaded.
PASS 70b (136s)
== conf-sanity test 70c: stop MDT0, mkdir fail, create remote dir fail ========================================================== 16:54:57 (1679936097)
start mds service on tmp.20a7BdrJQP
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
gss/krb5 is not supported
quota/lquota options: 'hash_lqs_cur_bits=3'
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
deactivate lustre-MDT0000-mdc-ffff8b7818be7000
mkdir: cannot create directory '/mnt/lustre/d70c.conf-sanity': Cannot send after transport endpoint shutdown
lfs mkdir: cannot resolve path '/mnt/lustre/d70c.conf-sanity/remote_dir': Cannot send after transport endpoint shutdown (108)
lfs mkdir: '/mnt/lustre/d70c.conf-sanity/remote_dir' is not on a Lustre filesystem: Cannot send after transport endpoint shutdown (108)
lfs setdirstripe: cannot create dir '/mnt/lustre/d70c.conf-sanity/remote_dir': Cannot send after transport endpoint shutdown
umount lustre on /mnt/lustre.....
Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:)
lsof: status error on /mnt/lustre: Cannot send after transport endpoint shutdown
lsof 4.93.2
 latest revision: https://github.com/lsof-org/lsof
 latest FAQ: https://github.com/lsof-org/lsof/blob/master/00FAQ
 latest (non-formatted) man page: https://github.com/lsof-org/lsof/blob/master/Lsof.8
usage: [-?abhKlnNoOPRtUvVX] [+|-c c] [+|-d s] [+D D] [+|-E] [+|-e s] [+|-f[gG]]
 [-F [f]] [-g [s]] [-i [i]] [+|-L [l]] [+m [m]] [+|-M] [-o [o]] [-p s]
 [+|-r [t]] [-s [p:s]] [-S [t]] [-T [t]] [-u s] [+|-w] [-x [fl]] [--] [names]
Use the ``-h'' option to get more help information.
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
modules unloaded.
PASS 70c (34s)
== conf-sanity test 70d: stop MDT1, mkdir succeed, create remote dir fail ========================================================== 16:55:31 (1679936131)
start mds service on tmp.20a7BdrJQP
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
gss/krb5 is not supported
quota/lquota options: 'hash_lqs_cur_bits=3'
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
deactivate lustre-MDT0001-mdc-ffff8b783c082000
lfs mkdir: dirstripe error on '/mnt/lustre/d70d.conf-sanity/remote_dir': No such device
lfs setdirstripe: cannot create dir '/mnt/lustre/d70d.conf-sanity/remote_dir': No such device
umount lustre on /mnt/lustre.....
Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:)
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
modules unloaded.
PASS 70d (34s)
== conf-sanity test 70e: Sync-on-Cancel will be enabled by default on DNE ========================================================== 16:56:05 (1679936165)
Stopping clients: tmp.20a7BdrJQP /mnt/lustre (opts:-f)
Stopping clients: tmp.20a7BdrJQP /mnt/lustre2 (opts:-f)
tmp.20a7BdrJQP: executing set_hostid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
gss/krb5 is not supported
quota/lquota options: 'hash_lqs_cur_bits=3'
Formatting mgs, mds, osts
Format mds1: /dev/mapper/mds1_flakey
Format mds2: /dev/mapper/mds2_flakey
Format ost1: /dev/mapper/ost1_flakey
Format ost2: /tmp/lustre-ost2
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
gss/krb5 is not supported

Permanent disk data:
Target:     lustre:MDT0000
Index:      0
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x65 (MDT MGS first_time update )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: sys.timeout=20 mdt.identity_upcall=/mnt/build/lustre/tests/../utils/l_getidentity

device size = 195MB
formatting backing filesystem ldiskfs on /dev/mapper/mds1_flakey
	target name   lustre:MDT0000
	kilobytes     200000
	options       -b 4096 -I 1024 -i 2560 -q -O uninit_bg,^extents,dirdata,dir_nlink,quota,project,huge_file,ea_inode,large_dir,flex_bg -E lazy_itable_init,lazy_journal_init="0" -F
mkfs_cmd = mke2fs -j -b 4096 -L lustre:MDT0000 -b 4096 -I 1024 -i 2560 -q -O uninit_bg,^extents,dirdata,dir_nlink,quota,project,huge_file,ea_inode,large_dir,flex_bg -E lazy_itable_init,lazy_journal_init="0" -F /dev/mapper/mds1_flakey 200000k
Writing CONFIGS/mountdata
start mds service on tmp.20a7BdrJQP
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Commit the device label on /dev/mapper/mds1_flakey
Started lustre-MDT0000

Permanent disk data:
Target:     lustre:OST0000
Index:      0
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x62 (OST first_time update )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20

device size = 195MB
formatting backing filesystem ldiskfs on /dev/mapper/ost1_flakey
	target name   lustre:OST0000
	kilobytes     200000
	options       -b 4096 -I 512 -q -O uninit_bg,extents,dir_nlink,quota,project,huge_file,large_dir,flex_bg -G 256 -E lazy_itable_init,resize="4290772992",lazy_journal_init="0" -F
mkfs_cmd = mke2fs -j -b 4096 -L lustre:OST0000 -b 4096 -I 512 -q -O uninit_bg,extents,dir_nlink,quota,project,huge_file,large_dir,flex_bg -G 256 -E lazy_itable_init,resize="4290772992",lazy_journal_init="0" -F /dev/mapper/ost1_flakey 200000k
Writing CONFIGS/mountdata
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Commit the device label on /dev/mapper/ost1_flakey
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"

Permanent disk data:
Target:     lustre:MDT0001
Index:      1
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x61 (MDT first_time update )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20 mdt.identity_upcall=/mnt/build/lustre/tests/../utils/l_getidentity

device size = 195MB
formatting backing filesystem ldiskfs on /dev/mapper/mds2_flakey
	target name   lustre:MDT0001
	kilobytes     200000
	options       -b 4096 -I 1024 -i 2560 -q -O uninit_bg,^extents,dirdata,dir_nlink,quota,project,huge_file,ea_inode,large_dir,flex_bg -E lazy_itable_init,lazy_journal_init="0" -F
mkfs_cmd = mke2fs -j -b 4096 -L lustre:MDT0001 -b 4096 -I 1024 -i 2560 -q -O uninit_bg,^extents,dirdata,dir_nlink,quota,project,huge_file,ea_inode,large_dir,flex_bg -E lazy_itable_init,lazy_journal_init="0" -F /dev/mapper/mds2_flakey 200000k
Writing CONFIGS/mountdata
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Commit the device label on /dev/mapper/mds2_flakey
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state (FULL|IDLE) osp.lustre-MDT0000-osp-MDT0001.mdt_server_uuid 40
osp.lustre-MDT0000-osp-MDT0001.mdt_server_uuid in FULL state after 0 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state (FULL|IDLE) osp.lustre-MDT0001-osp-MDT0000.mdt_server_uuid 40
osp.lustre-MDT0001-osp-MDT0000.mdt_server_uuid in FULL state after 0 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
umount lustre on /mnt/lustre.....
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
modules unloaded.
PASS 70e (95s)
== conf-sanity test 71a: start MDT0 OST0, MDT1, OST1 ===== 16:57:40 (1679936260)
SKIP: conf-sanity test_71a needs separate MGS/MDT
SKIP 71a (1s)
== conf-sanity test 71b: start MDT1, OST0, MDT0, OST1 ==== 16:57:41 (1679936261)
SKIP: conf-sanity test_71b needs separate MGS/MDT
SKIP 71b (1s)
== conf-sanity test 71c: start OST0, OST1, MDT1, MDT0 ==== 16:57:42 (1679936262)
SKIP: conf-sanity test_71c needs separate MGS/MDT
SKIP 71c (0s)
== conf-sanity test 71d: start OST0, MDT1, MDT0, OST1 ==== 16:57:42 (1679936262)
SKIP: conf-sanity test_71d needs separate MGS/MDT
SKIP 71d (1s)
== conf-sanity test 71e: start OST0, MDT1, OST1, MDT0 ==== 16:57:43 (1679936263)
SKIP: conf-sanity test_71e needs separate MGS/MDT
SKIP 71e (0s)
== conf-sanity test 72: test fast symlink with extents flag enabled ========================================================== 16:57:43 (1679936263)
umount lustre on /mnt/lustre.....
stop ost1 service on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
modules unloaded.
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
../libcfs/libcfs/libcfs options: 'cpu_npartitions=2'
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
gss/krb5 is not supported
quota/lquota options: 'hash_lqs_cur_bits=3'

Permanent disk data:
Target:     lustre:MDT0000
Index:      0
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x65 (MDT MGS first_time update )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: sys.timeout=20 mdt.identity_upcall=/mnt/build/lustre/tests/../utils/l_getidentity

device size = 195MB
formatting backing filesystem ldiskfs on /dev/mapper/mds1_flakey
	target name   lustre:MDT0000
	kilobytes     200000
	options       -b 4096 -I 1024 -i 2560 -q -O uninit_bg,^extents,dirdata,dir_nlink,quota,project,huge_file,ea_inode,large_dir,flex_bg -E lazy_itable_init,lazy_journal_init="0" -F
mkfs_cmd = mke2fs -j -b 4096 -L lustre:MDT0000 -b 4096 -I 1024 -i 2560 -q -O uninit_bg,^extents,dirdata,dir_nlink,quota,project,huge_file,ea_inode,large_dir,flex_bg -E lazy_itable_init,lazy_journal_init="0" -F /dev/mapper/mds1_flakey 200000k
Writing CONFIGS/mountdata
tune2fs 1.45.6.wc3 (28-Sep-2020)

Permanent disk data:
Target:     lustre:MDT0001
Index:      1
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x61 (MDT first_time update )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20 mdt.identity_upcall=/mnt/build/lustre/tests/../utils/l_getidentity

device size = 195MB
formatting backing filesystem ldiskfs on /dev/mapper/mds2_flakey
	target name   lustre:MDT0001
	kilobytes     200000
	options       -b 4096 -I 1024 -i 2560 -q -O uninit_bg,^extents,dirdata,dir_nlink,quota,project,huge_file,ea_inode,large_dir,flex_bg -E lazy_itable_init,lazy_journal_init="0" -F
mkfs_cmd = mke2fs -j -b 4096 -L lustre:MDT0001 -b 4096 -I 1024 -i 2560 -q -O uninit_bg,^extents,dirdata,dir_nlink,quota,project,huge_file,ea_inode,large_dir,flex_bg -E lazy_itable_init,lazy_journal_init="0" -F /dev/mapper/mds2_flakey 200000k
Writing CONFIGS/mountdata
tune2fs 1.45.6.wc3 (28-Sep-2020)

Permanent disk data:
Target:     lustre:OST0000
Index:      0
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x62 (OST first_time update )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20

device size = 195MB
formatting backing filesystem ldiskfs on /dev/mapper/ost1_flakey
	target name   lustre:OST0000
	kilobytes     200000
	options       -b 4096 -I 512 -q -O uninit_bg,extents,dir_nlink,quota,project,huge_file,large_dir,flex_bg -G 256 -E lazy_itable_init,resize="4290772992",lazy_journal_init="0" -F
mkfs_cmd = mke2fs -j -b 4096 -L lustre:OST0000 -b 4096 -I 512 -q -O uninit_bg,extents,dir_nlink,quota,project,huge_file,large_dir,flex_bg -G 256 -E lazy_itable_init,resize="4290772992",lazy_journal_init="0" -F /dev/mapper/ost1_flakey 200000k
Writing CONFIGS/mountdata
start mds service on tmp.20a7BdrJQP
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Commit the device label on /dev/mapper/mds1_flakey
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Commit the device label on /dev/mapper/mds2_flakey
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Commit the device label on /dev/mapper/ost1_flakey
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
total: 3 open/close in 0.02 seconds: 121.24 ops/second
create 3 short symlinks
total 8
drwxr-xr-x  4 root root 4096 Mar 27 16:58 .
drwxr-xr-x 10 root root  200 Mar 27 16:49 ..
drwxr-xr-x  2 root root 4096 Mar 27 16:58 d72.conf-sanity
lrwxrwxrwx  1 root root   45 Mar 27 16:58 f72.conf-sanity-1 -> /mnt/lustre/d72.conf-sanity/f72.conf-sanity-1
lrwxrwxrwx  1 root root   45 Mar 27 16:58 f72.conf-sanity-2 -> /mnt/lustre/d72.conf-sanity/f72.conf-sanity-2
lrwxrwxrwx  1 root root   45 Mar 27 16:58 f72.conf-sanity-3 -> /mnt/lustre/d72.conf-sanity/f72.conf-sanity-3
umount lustre on /mnt/lustre.....
Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:)
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
e2fsck -d -v -t -t -f -n /dev/mapper/mds1_flakey -m8
e2fsck 1.45.6.wc3 (28-Sep-2020)
Use max possible thread num: 1 instead
thread 0 jumping to group 0
e2fsck_pass1_run:2517: increase inode 81 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 82 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 83 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 84 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 85 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 86 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 87 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 88 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 89 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 90 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 91 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 92 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 93 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 94 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 95 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 96 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 97 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 98 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 99 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 100 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 101 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 102 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 103 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 104 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 105 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 106 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 107 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 108 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 109 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 110 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 111 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 112 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 113 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 114 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 115 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 116 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 117 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 118 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 119 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 120 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 121 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 122 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 123 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 124 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 125 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 126 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 127 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 128 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 129 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 130 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 131 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 132 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 133 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 134 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 135 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 136 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 137 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 138 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 139 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 140 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 141 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 142 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 143 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 144 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 145 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 146 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 147 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 148 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 149 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 150 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 151 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 152 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 153 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 154 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 155 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 156 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 157 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 158 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 159 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 160 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 26697 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 26724 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 26725 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 26726 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 53372 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 53373 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 53374 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 53375 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 53376 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 53378 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 53379 badness 0 to 2
Pass 1: Checking inodes, blocks, and sizes
[Thread 0] Scan group range [0, 3)
[Thread 0] group 1 finished
[Thread 0] group 2 finished
[Thread 0] group 3 finished
[Thread 0] Pass 1: Memory used: 268k/0k (150k/119k), time: 0.00/ 0.00/ 0.00
[Thread 0] Pass 1: I/O read: 1MB, write: 0MB, rate: 299.40MB/s
[Thread 0] Scanned group range [0, 3), inodes 281
Pass 2: Checking directory structure
Pass 2: Memory used: 268k/0k (114k/155k), time: 0.01/ 0.00/ 0.00
Pass 2: I/O read: 1MB, write: 0MB, rate: 75.71MB/s
Pass 3: Checking directory connectivity
Peak memory: Memory used: 268k/0k (114k/155k), time: 0.02/ 0.00/ 0.00
Pass 3: Memory used: 268k/0k (113k/156k), time: 0.00/ 0.00/ 0.00
Pass 3: I/O read: 0MB, write: 0MB, rate: 0.00MB/s
Pass 4: Checking reference counts
Pass 4: Memory used: 268k/0k (72k/197k), time: 0.00/ 0.00/ 0.00
Pass 4: I/O read: 0MB, write: 0MB, rate: 0.00MB/s
Pass 5: Checking group summary information
Pass 5: Memory used: 268k/0k (72k/197k), time: 0.00/ 0.00/ 0.00
Pass 5: I/O read: 1MB, write: 0MB, rate: 542.01MB/s

         280 inodes used (0.35%, out of 79992)
           5 non-contiguous files (1.8%)
           0 non-contiguous directories (0.0%)
             # of inodes with ind/dind/tind blocks: 0/0/0
             Extent depth histogram: 263
       24546 blocks used (49.09%, out of 50000)
           0 bad blocks
           1 large file

         149 regular files
         118 directories
           0 character device files
           0 block device files
           0 fifos
           0 links
           3 symbolic links (3 fast symbolic links)
           0 sockets
------------
         270 files
Memory used: 268k/0k (74k/195k), time: 0.03/ 0.01/ 0.00
I/O read: 1MB, write: 0MB, rate: 32.29MB/s
PASS 72 (51s)
== conf-sanity test 73: failnode to update from mountdata properly ========================================================== 16:58:34 (1679936314)
checking for existing Lustre data: found

Read previous values:
Target:     lustre-OST0000
Index:      0
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x2 (OST )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20

Permanent disk data:
Target:     lustre-OST0000
Index:      0
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x42 (OST update )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20 failover.node=1.2.3.4@tcp

Writing CONFIGS/mountdata
start mds service on tmp.20a7BdrJQP
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
failover_nids: [ 0@lo, 1.2.3.4@tcp ]
umount lustre on /mnt/lustre.....
Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:)
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
PASS 73 (65s)
== conf-sanity test 75: The order of --index should be irrelevant ========================================================== 16:59:39 (1679936379)

Permanent disk data:
Target:     lustre-MDT0000
Index:      0
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x45 (MDT MGS update )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: sys.timeout=20 mdt.identity_upcall=/mnt/build/lustre/tests/../utils/l_getidentity

device size = 195MB
formatting backing filesystem ldiskfs on /dev/mapper/mds1_flakey
	target name   lustre-MDT0000
	kilobytes     200000
	options       -b 4096 -I 1024 -i 2560 -q -O uninit_bg,^extents,dirdata,dir_nlink,quota,project,huge_file,ea_inode,large_dir,flex_bg -E lazy_itable_init,lazy_journal_init="0" -F
mkfs_cmd = mke2fs -j -b 4096 -L lustre-MDT0000 -b 4096 -I 1024 -i 2560 -q -O uninit_bg,^extents,dirdata,dir_nlink,quota,project,huge_file,ea_inode,large_dir,flex_bg -E lazy_itable_init,lazy_journal_init="0" -F /dev/mapper/mds1_flakey 200000k
Writing CONFIGS/mountdata

Permanent disk data:
Target:     lustre-OST0000
Index:      0
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x42 (OST update )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20

device size = 195MB
formatting backing filesystem ldiskfs on /dev/mapper/ost1_flakey
	target name   lustre-OST0000
	kilobytes     200000
	options       -b 4096 -I 512 -q -O uninit_bg,extents,dir_nlink,quota,project,huge_file,large_dir,flex_bg -G 256 -E lazy_itable_init,resize="4290772992",lazy_journal_init="0" -F
mkfs_cmd = mke2fs -j -b 4096 -L lustre-OST0000 -b 4096 -I 512 -q -O uninit_bg,extents,dir_nlink,quota,project,huge_file,large_dir,flex_bg -G 256 -E lazy_itable_init,resize="4290772992",lazy_journal_init="0" -F /dev/mapper/ost1_flakey 200000k
Writing CONFIGS/mountdata

Permanent disk data:
Target:     lustre-MDT0000
Index:      0
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x45 (MDT MGS update )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: sys.timeout=20 mdt.identity_upcall=/mnt/build/lustre/tests/../utils/l_getidentity

device size = 195MB
formatting backing filesystem ldiskfs on /dev/mapper/mds1_flakey
	target name   lustre-MDT0000
	kilobytes     200000
	options       -b 4096 -I 1024 -i 2560 -q -O uninit_bg,^extents,dirdata,dir_nlink,quota,project,huge_file,ea_inode,large_dir,flex_bg -E lazy_itable_init,lazy_journal_init="0" -F
mkfs_cmd = mke2fs -j -b 4096 -L lustre-MDT0000 -b 4096 -I 1024 -i 2560 -q -O uninit_bg,^extents,dirdata,dir_nlink,quota,project,huge_file,ea_inode,large_dir,flex_bg -E lazy_itable_init,lazy_journal_init="0" -F /dev/mapper/mds1_flakey 200000k
Writing CONFIGS/mountdata

Permanent disk data:
Target:     lustre-OST0000
Index:      0
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x42 (OST update )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20

device size = 195MB
formatting backing filesystem ldiskfs on /dev/mapper/ost1_flakey
	target name   lustre-OST0000
	kilobytes     200000
	options       -b 4096 -I 512 -q -O uninit_bg,extents,dir_nlink,quota,project,huge_file,large_dir,flex_bg -G 256 -E lazy_itable_init,resize="4290772992",lazy_journal_init="0" -F
mkfs_cmd = mke2fs -j -b 4096 -L lustre-OST0000 -b 4096 -I 512 -q -O uninit_bg,extents,dir_nlink,quota,project,huge_file,large_dir,flex_bg -G 256 -E lazy_itable_init,resize="4290772992",lazy_journal_init="0" -F /dev/mapper/ost1_flakey 200000k
Writing CONFIGS/mountdata
Stopping clients: tmp.20a7BdrJQP /mnt/lustre (opts:-f)
Stopping clients: tmp.20a7BdrJQP /mnt/lustre2 (opts:-f)
tmp.20a7BdrJQP: executing set_hostid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
gss/krb5 is not supported
Formatting mgs, mds, osts
Format mds1: /dev/mapper/mds1_flakey
Format mds2: /dev/mapper/mds2_flakey
Format ost1: /dev/mapper/ost1_flakey
Format ost2: /tmp/lustre-ost2
start mds service on tmp.20a7BdrJQP
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Commit the device label on /dev/mapper/mds1_flakey
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Commit the device label on /dev/mapper/mds2_flakey
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Commit the device label on /dev/mapper/ost1_flakey
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 40
os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 0 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 40
os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
PASS 75 (50s)

== conf-sanity test 76a: set permanent params with lctl across mounts ========================================================== 17:00:29 (1679936429)
start mds service on tmp.20a7BdrJQP
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
Change MGS params
max_dirty_mb: 662
new_max_dirty_mb: 652
Waiting 90s for '652'
Updated after 9s: want '652' got '652'
652
Check the value is stored after remount
Stopping clients: tmp.20a7BdrJQP /mnt/lustre (opts:)
Stopping client tmp.20a7BdrJQP /mnt/lustre opts:
Stopping clients: tmp.20a7BdrJQP /mnt/lustre2 (opts:)
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
Checking servers environments
Checking clients tmp.20a7BdrJQP environments
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
gss/krb5 is not supported
Setup mgs, mdt, osts
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Started lustre-MDT0001
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2
Commit the device label on /tmp/lustre-ost2
Started lustre-OST0001
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
Starting client tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
Started clients tmp.20a7BdrJQP: 192.168.125.30@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt)
Using TIMEOUT=20
osc.lustre-OST0000-osc-ffff8b783e34c000.idle_timeout=debug
osc.lustre-OST0001-osc-ffff8b783e34c000.idle_timeout=debug
setting jobstats to procname_uid
Setting lustre.sys.jobid_var from disable to procname_uid
Waiting 90s for 'procname_uid'
Updated after 6s: want 'procname_uid' got 'procname_uid'
disable quota as required
Change OST params
client_cache_count: 128
new_client_cache_count: 256
Waiting 90s for '256'
Updated after 2s: want '256' got '256'
256
Check the value is stored after remount
Stopping clients: tmp.20a7BdrJQP /mnt/lustre (opts:)
Stopping client tmp.20a7BdrJQP /mnt/lustre opts:
Stopping clients: tmp.20a7BdrJQP /mnt/lustre2 (opts:)
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost2 (opts:-f) on tmp.20a7BdrJQP
Checking servers environments
Checking clients tmp.20a7BdrJQP environments
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
gss/krb5 is not supported
Setup mgs, mdt, osts
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Started lustre-MDT0001
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2
Started lustre-OST0001
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
Starting client tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
Started clients tmp.20a7BdrJQP: 192.168.125.30@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt)
Using TIMEOUT=20
osc.lustre-OST0000-osc-ffff8b7867025000.idle_timeout=debug
osc.lustre-OST0001-osc-ffff8b7867025000.idle_timeout=debug
disable quota as required
256
Stopping clients: tmp.20a7BdrJQP /mnt/lustre (opts:)
Stopping client tmp.20a7BdrJQP /mnt/lustre opts:
Stopping clients: tmp.20a7BdrJQP /mnt/lustre2 (opts:)
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost2 (opts:-f) on tmp.20a7BdrJQP
PASS 76a (122s)

== conf-sanity test 76b: verify params log setup correctly ========================================================== 17:02:31 (1679936551)
Stopping clients: tmp.20a7BdrJQP /mnt/lustre (opts:)
Stopping clients: tmp.20a7BdrJQP /mnt/lustre2 (opts:)
Checking servers environments
Checking clients tmp.20a7BdrJQP environments
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
gss/krb5 is not supported
Setup mgs, mdt, osts
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Started lustre-MDT0001
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2
Started lustre-OST0001
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
Starting client tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
Started clients tmp.20a7BdrJQP: 192.168.125.30@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt)
Using TIMEOUT=20
osc.lustre-OST0000-osc-ffff8b783ea12000.idle_timeout=debug
osc.lustre-OST0001-osc-ffff8b783ea12000.idle_timeout=debug
disable quota as required
mgs.MGS.live.params=
fsname: params
flags: 0x20     gen: 2
Secure RPC Config Rules:
imperative_recovery_state:
    state: startup
    nonir_clients: 0
    nidtbl_version: 2
    notify_duration_total: 0.000000000
    notify_duation_max: 0.000000000
    notify_count: 0
Stopping clients: tmp.20a7BdrJQP /mnt/lustre (opts:)
Stopping client tmp.20a7BdrJQP /mnt/lustre opts:
Stopping clients: tmp.20a7BdrJQP /mnt/lustre2 (opts:)
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost2 (opts:-f) on tmp.20a7BdrJQP
PASS 76b (40s)

== conf-sanity test 76c: verify changelog_mask is applied with lctl set_param -P ========================================================== 17:03:11 (1679936591)
Checking servers environments
Checking clients tmp.20a7BdrJQP environments
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
gss/krb5 is not supported
Setup mgs, mdt, osts
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Started lustre-MDT0001
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2
Started lustre-OST0001
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
Starting client tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
Started clients tmp.20a7BdrJQP: 192.168.125.30@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt)
Using TIMEOUT=20
osc.lustre-OST0000-osc-ffff8b782b66f000.idle_timeout=debug
osc.lustre-OST0001-osc-ffff8b782b66f000.idle_timeout=debug
disable quota as required
Change changelog_mask
Check the value is stored after mds remount
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
start mds service on tmp.20a7BdrJQP
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 21 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
mdc.lustre-MDT0001-mdc-*.mds_server_uuid in FULL state after 0 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
Stopping clients: tmp.20a7BdrJQP /mnt/lustre (opts:)
Stopping client tmp.20a7BdrJQP /mnt/lustre opts:
Stopping clients: tmp.20a7BdrJQP /mnt/lustre2 (opts:)
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost2 (opts:-f) on tmp.20a7BdrJQP
PASS 76c (88s)

== conf-sanity test 76d: verify llite.*.xattr_cache can be set by 'lctl set_param -P' correctly ========================================================== 17:04:39 (1679936679)
Checking servers environments
Checking clients tmp.20a7BdrJQP environments
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
gss/krb5 is not supported
Setup mgs, mdt, osts
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Started lustre-MDT0001
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2
Started lustre-OST0001
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
Starting client tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
Started clients tmp.20a7BdrJQP: 192.168.125.30@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt)
Using TIMEOUT=20
osc.lustre-OST0000-osc-ffff8b7859ecf000.idle_timeout=debug
osc.lustre-OST0001-osc-ffff8b7859ecf000.idle_timeout=debug
disable quota as required
lctl set_param -P llite.*.xattr_cache=0
Waiting 90s for '0'
Updated after 2s: want '0' got '0'
Check llite.*.xattr_cache on client /mnt/lustre
umount lustre on /mnt/lustre.....
Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:)
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
Check llite.*.xattr_cache on the new client /mnt/lustre2
mount lustre on /mnt/lustre2.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre2
umount lustre on /mnt/lustre2.....
Stopping client tmp.20a7BdrJQP /mnt/lustre2 (opts:)
Stopping clients: tmp.20a7BdrJQP /mnt/lustre (opts:)
Stopping client tmp.20a7BdrJQP /mnt/lustre opts:
Stopping clients: tmp.20a7BdrJQP /mnt/lustre2 (opts:)
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost2 (opts:-f) on tmp.20a7BdrJQP
PASS 76d (45s)

== conf-sanity test 77: comma-separated MGS NIDs and failover node NIDs ========================================================== 17:05:24 (1679936724)
SKIP: conf-sanity test_77 mixed loopback and real device not working
SKIP 77 (1s)

== conf-sanity test 78: run resize2fs on MDT and OST filesystems ========================================================== 17:05:25 (1679936725)
Stopping clients: tmp.20a7BdrJQP /mnt/lustre (opts:-f)
Stopping clients: tmp.20a7BdrJQP /mnt/lustre2 (opts:-f)
tmp.20a7BdrJQP: executing set_hostid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
gss/krb5 is not supported
Formatting mgs, mds, osts
Format mds1: /dev/mapper/mds1_flakey
Format ost1: /dev/mapper/ost1_flakey
start mds service on tmp.20a7BdrJQP
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Commit the device label on /dev/mapper/mds1_flakey
Started lustre-MDT0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Commit the device label on /dev/mapper/ost1_flakey
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
create test files
UUID                   1K-blocks        Used   Available Use% Mounted on
lustre-MDT0000_UUID        83240        1616       73832   3% /mnt/lustre[MDT:0]
lustre-OST0000_UUID       124712        1388      110724   2% /mnt/lustre[OST:0]

filesystem_summary:       124712        1388      110724   2% /mnt/lustre

UUID                      Inodes       IUsed       IFree IUse% Mounted on
lustre-MDT0000_UUID        72000         272       71728   1% /mnt/lustre[MDT:0]
lustre-OST0000_UUID        45008         302       44706   1% /mnt/lustre[OST:0]

filesystem_summary:        44978         272       44706   1% /mnt/lustre

1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.117951 s, 8.9 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0909723 s, 11.5 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.078318 s, 13.4 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0707876 s, 14.8 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0357201 s, 29.4 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0227744 s, 46.0 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0588217 s, 17.8 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.145467 s, 7.2 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0867332 s, 12.1 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0962656 s, 10.9 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0639178 s, 16.4 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.039068 s, 26.8 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0759483 s, 13.8 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0709295 s, 14.8 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0880696 s, 11.9 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0624624 s, 16.8 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0688047 s, 15.2 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.05921 s, 17.7 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0542108 s, 19.3 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0663671 s, 15.8 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.070445 s, 14.9 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0539338 s, 19.4 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0553702 s, 18.9 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.151879 s, 6.9 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0583921 s, 18.0 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0772126 s, 13.6 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0478468 s, 21.9 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0324801 s, 32.3 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0312275 s, 33.6 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0552904 s, 19.0 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0436334 s, 24.0 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0420595 s, 24.9 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.043632 s, 24.0 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0491421 s, 21.3 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0479774 s, 21.9 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0511139 s, 20.5 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0420019 s, 25.0 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0559374 s, 18.7 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0334625 s, 31.3 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0440845 s, 23.8 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.060638 s, 17.3 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0761192 s, 13.8 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0570778 s, 18.4 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0539304 s, 19.4 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0392928 s, 26.7 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0854263 s, 12.3 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0312466 s, 33.6 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0461269 s, 22.7 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0706362 s, 14.8 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0647618 s, 16.2 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0725894 s, 14.4 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0666559 s, 15.7 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0693899 s, 15.1 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0628201 s, 16.7 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0546167 s, 19.2 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0680115 s, 15.4 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0657262 s, 16.0 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0737229 s, 14.2 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0522691 s, 20.1 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0700827 s, 15.0 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0747703 s, 14.0 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0810769 s, 12.9 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0557588 s, 18.8 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.091364 s, 11.5 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.136422 s, 7.7 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0576827 s, 18.2 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0794911 s, 13.2 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0462614 s, 22.7 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0669772 s, 15.7 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.050216 s, 20.9 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0748442 s, 14.0 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0536682 s, 19.5 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0666956 s, 15.7 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.044503 s, 23.6 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0724422 s, 14.5 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0434067 s, 24.2 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0317176 s, 33.1 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0377676 s, 27.8 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0444526 s, 23.6 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0443074 s, 23.7 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0354895 s, 29.5 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0374268 s, 28.0 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0358867 s, 29.2 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0347116 s, 30.2 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0431499 s, 24.3 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0261052 s, 40.2 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0530922 s, 19.8 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0351175 s, 29.9 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0346503 s, 30.3 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0842318 s, 12.4 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.056289 s, 18.6 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0529553 s, 19.8 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0624029 s, 16.8 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0848364 s, 12.4 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 1.03645 s, 1.0 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0559535 s, 18.7 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0657775 s, 15.9 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0692772 s, 15.1 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0755548 s, 13.9 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.133443 s, 7.9 MB/s
umount lustre on /mnt/lustre.....
Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:)
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
modules unloaded.
e2fsck -d -v -t -t -f -y /dev/mapper/mds1_flakey -m8
e2fsck 1.45.6.wc3 (28-Sep-2020)
Use max possible thread num: 1 instead
thread 0 jumping to group 0
e2fsck_pass1_run:2517: increase inode 81 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 82 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 83 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 84 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 85 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 86 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 87 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 88 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 89 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 90 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 91 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 92 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 93 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 94 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 95 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 96 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 97 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 98 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 99 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 100 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 101 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 102 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 103 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 104 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 105 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 106 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 107 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 108 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 109 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 110 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 111 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 112 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 113 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 114 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 115 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 116 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 117 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 118 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 119 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 120 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 121 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 122 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 123 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 124 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 125 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 126 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 127 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 128 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 129 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 130 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 131 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 132 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 133 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 134 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 135 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 136 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 137 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 138 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 139 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 140 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 141 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 142 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 143 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 144 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 145 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 146 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 147 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 148 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 149 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 150 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 151 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 152 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 153 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 154 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 155 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 156 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 157 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 158 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 24029 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 24057 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 24058 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 24059 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 24061 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 24062 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 24063 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 24064 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 24065 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 48047 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 48048 badness 0 to 2
Pass 1: Checking inodes, blocks, and sizes
[Thread 0] Scan group range [0, 3)
[Thread 0] group 1 finished
[Thread 0] group 2 finished
[Thread 0] group 3 finished
[Thread 0] Pass 1: Memory used: 272k/0k (148k/125k), time: 0.00/ 0.00/ 0.00
[Thread 0] Pass 1: I/O read: 1MB, write: 0MB, rate: 508.39MB/s
[Thread 0] Scanned group range [0, 3), inodes 373
Pass 2: Checking directory structure
Pass 2: Memory used: 272k/0k (111k/162k), time: 0.00/ 0.00/ 0.00
Pass 2: I/O read: 1MB, write: 0MB, rate: 568.18MB/s
Pass 3: Checking directory connectivity
Peak memory: Memory used: 272k/0k (111k/162k), time: 0.02/ 0.01/ 0.00
Pass 3A: Memory used: 272k/0k (111k/162k), time: 0.00/ 0.00/ 0.00
Pass 3A: I/O read: 0MB, write: 0MB, rate: 0.00MB/s
Pass 3: Memory used: 272k/0k (110k/163k), time: 0.00/ 0.00/ 0.00
Pass 3: I/O read: 1MB, write: 0MB, rate: 9009.01MB/s
Pass 4: Checking reference counts
Pass 4: Memory used: 272k/0k (73k/200k), time: 0.00/ 0.00/ 0.00
Pass 4: I/O read: 0MB, write: 0MB, rate: 0.00MB/s
Pass 5: Checking group summary information
Pass 5: Memory used: 272k/0k (72k/201k), time: 0.00/ 0.00/ 0.00
Pass 5: I/O read: 1MB, write: 0MB, rate: 509.94MB/s

         372 inodes used (0.52%, out of 72000)
           4 non-contiguous files (1.1%)
           0 non-contiguous directories (0.0%)
             # of inodes with ind/dind/tind blocks: 0/0/0
       22546 blocks used (50.10%, out of 45000)
           0 bad blocks
           1 large file

         244 regular files
         118 directories
           0 character device files
           0 block device files
           0 fifos
           0 links
           0 symbolic links (0 fast symbolic links)
           0 sockets
------------
         362 files
Memory used: 272k/0k (75k/198k), time: 0.03/ 0.01/ 0.00
I/O read: 1MB, write: 1MB, rate: 34.11MB/s
e2fsck -d -v -t -t -f -y /dev/mapper/ost1_flakey -m8
e2fsck 1.45.6.wc3 (28-Sep-2020)
Use max possible thread num: 1 instead
thread 0 jumping to group 0
e2fsck_pass1_run:2517: increase inode 81 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 82 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 89 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 90 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 91 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 92 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 93 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 94 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 95 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 96 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 97 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 98 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 99 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 100 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 101 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 102 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 103 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 104 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 105 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 106 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 107 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 132 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 134 badness 0 to 2
Pass 1: Checking inodes, blocks, and sizes
[Thread 0] Scan group range [0, 2)
[Thread 0] group 1 finished
[Thread 0] group 2 finished
[Thread 0] Pass 1: Memory used: 264k/0k (141k/124k), time: 0.01/ 0.00/ 0.00
[Thread 0] Pass 1: I/O read: 5MB, write: 0MB, rate: 882.30MB/s
[Thread 0] Scanned group range [0, 2), inodes 398
Pass 2: Checking directory structure
Pass 2: Memory used: 264k/0k (103k/162k), time: 0.00/ 0.00/ 0.00
Pass 2: I/O read: 1MB, write: 0MB, rate: 649.35MB/s
Pass 3: Checking directory connectivity
Peak memory: Memory used: 264k/0k (109k/156k), time: 0.02/ 0.01/ 0.01
Pass 3A: Memory used: 264k/0k (109k/156k), time: 0.00/ 0.00/ 0.00
Pass 3A: I/O read: 0MB, write: 0MB, rate: 0.00MB/s
Pass 3: Memory used: 264k/0k (101k/164k), time: 0.00/ 0.00/ 0.00
Pass 3: I/O read: 1MB, write: 0MB, rate: 9174.31MB/s
Pass 4: Checking reference counts
Pass 4: Memory used: 264k/0k (75k/190k), time: 0.00/ 0.00/ 0.00
Pass 4: I/O read: 0MB, write: 0MB, rate: 0.00MB/s
Pass 5: Checking group summary information
Pass 5: Memory used: 264k/0k (74k/191k), time: 0.00/ 0.00/ 0.00
Pass 5: I/O read: 1MB, write: 0MB, rate: 575.37MB/s

         398 inodes used (0.88%, out of 45008)
           2 non-contiguous files (0.5%)
           0 non-contiguous directories (0.0%)
             # of inodes with ind/dind/tind blocks: 0/0/0
             Extent depth histogram: 392
       37721 blocks used (83.82%, out of 45000)
           0 bad blocks
           1 large file

         216 regular files
         172 directories
           0 character device files
           0 block device files
           0 fifos
           0 links
           0 symbolic links (0 fast symbolic links)
           0 sockets
------------
         388 files
Memory used: 264k/0k (74k/191k), time: 0.03/ 0.01/ 0.01
I/O read: 2MB, write: 1MB, rate: 71.59MB/s
resize2fs 1.45.6.wc3 (28-Sep-2020)
Resizing the filesystem on /dev/mapper/mds1_flakey to 50000 (4k) blocks.
The filesystem on /dev/mapper/mds1_flakey is now 50000 (4k) blocks long.

resize2fs 1.45.6.wc3 (28-Sep-2020)
Resizing the filesystem on /dev/mapper/ost1_flakey to 50000 (4k) blocks.
The filesystem on /dev/mapper/ost1_flakey is now 50000 (4k) blocks long.

e2fsck -d -v -t -t -f -y /dev/mapper/mds1_flakey -m8
e2fsck 1.45.6.wc3 (28-Sep-2020)
Use max possible thread num: 1 instead
thread 0 jumping to group 0
e2fsck_pass1_run:2517: increase inode 81 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 82 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 83 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 84 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 85 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 86 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 87 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 88 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 89 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 90 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 91 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 92 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 93 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 94 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 95 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 96 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 97 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 98 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 99 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 100 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 101 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 102 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 103 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 104 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 105 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 106 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 107 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 108 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 109 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 110 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 111 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 112 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 113 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 114 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 115 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 116 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 117 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 118 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 119 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 120 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 121 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 122 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 123 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 124 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 125 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 126 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 127 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 128 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 129 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 130 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 131 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 132 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 133 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 134 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 135 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 136 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 137 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 138 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 139 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 140 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 141 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 142 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 143 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 144 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 145 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 146 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 147 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 148 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 149 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 150 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 151 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 152 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 153 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 154 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 155 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 156 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 157 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 158 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 24029 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 24057 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 24058 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 24059 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 24061 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 24062 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 24063 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 24064 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 24065 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 48047 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 48048 badness 0 to 2
Pass 1: Checking inodes, blocks, and sizes
[Thread 0] Scan group range [0, 3)
[Thread 0] group 1 finished
[Thread 0] group 2 finished
[Thread 0] group 3 finished
[Thread 0] Pass 1: Memory used: 272k/0k (148k/125k), time: 0.00/ 0.00/ 0.00
[Thread 0] Pass 1: I/O read: 1MB, write: 0MB, rate: 227.84MB/s
[Thread 0] Scanned group range [0, 3), inodes 373
Pass 2: Checking directory structure
Pass 2: Memory used: 272k/0k (111k/162k), time: 0.00/ 0.00/ 0.00
Pass 2: I/O read: 1MB, write: 0MB, rate: 669.34MB/s
Pass 3: Checking directory connectivity
Peak memory: Memory used: 272k/0k (111k/162k), time: 0.01/ 0.00/ 0.00
Pass 3A: Memory used: 272k/0k (111k/162k), time: 0.00/ 0.00/ 0.00
Pass 3A: I/O read: 0MB, write: 0MB, rate: 0.00MB/s
Pass 3: Memory used: 272k/0k (110k/163k), time: 0.00/ 0.00/ 0.00
Pass 3: I/O read: 1MB, write: 0MB, rate: 16949.15MB/s
Pass 4: Checking reference counts
Pass 4: Memory used: 272k/0k (73k/200k), time: 0.00/ 0.00/ 0.00
Pass 4: I/O read: 0MB, write: 0MB, rate: 0.00MB/s
Pass 5: Checking group summary information
Pass 5: Memory used: 272k/0k (72k/201k), time: 0.00/ 0.00/ 0.00
Pass 5: I/O read: 1MB, write: 0MB, rate: 557.41MB/s
372 inodes used (0.52%, out of 72000)
4 non-contiguous files (1.1%)
0 non-contiguous directories (0.0%)
# of inodes with ind/dind/tind blocks: 0/0/0
22546 blocks used (45.09%, out of 50000)
0 bad blocks
1 large file
244 regular files
118 directories
0 character device files
0 block device files
0 fifos
0 links
0 symbolic links (0 fast symbolic links)
0 sockets
------------
362 files
Memory used: 272k/0k (75k/198k), time: 0.03/ 0.00/ 0.02
I/O read: 1MB, write: 1MB, rate: 35.84MB/s
e2fsck -d -v -t -t -f -y /dev/mapper/ost1_flakey -m8
e2fsck 1.45.6.wc3 (28-Sep-2020)
Use max possible thread num: 1 instead
thread 0 jumping to group 0
e2fsck_pass1_run:2517: increase inode 81 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 82 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 89 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 90 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 91 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 92 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 93 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 94 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 95 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 96 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 97 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 98 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 99 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 100 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 101 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 102 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 103 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 104 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 105 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 106 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 107 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 132 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 134 badness 0 to 2
Pass 1: Checking inodes, blocks, and sizes
[Thread 0] Scan group range [0, 2)
[Thread 0] group 1 finished
[Thread 0] group 2 finished
[Thread 0] Pass 1: Memory used: 264k/0k (141k/124k), time: 0.01/ 0.00/ 0.01
[Thread 0] Pass 1: I/O read: 5MB, write: 0MB, rate: 454.30MB/s
[Thread 0] Scanned group range [0, 2), inodes 398
Pass 2: Checking directory structure
Pass 2: Memory used: 264k/0k (103k/162k), time: 0.00/ 0.00/ 0.00
Pass 2: I/O read: 1MB, write: 0MB, rate: 351.99MB/s
Pass 3: Checking directory connectivity
Peak memory: Memory used: 264k/0k (109k/156k), time: 0.03/ 0.01/ 0.01
Pass 3A: Memory used: 264k/0k (109k/156k), time: 0.00/ 0.00/ 0.00
Pass 3A: I/O read: 0MB, write: 0MB, rate: 0.00MB/s
Pass 3: Memory used: 264k/0k (101k/164k), time: 0.00/ 0.00/ 0.00
Pass 3: I/O read: 1MB, write: 0MB, rate: 15625.00MB/s
Pass 4: Checking reference counts
Pass 4: Memory used: 264k/0k (75k/190k), time: 0.00/ 0.00/ 0.00
Pass 4: I/O read: 0MB, write: 0MB, rate: 0.00MB/s
Pass 5: Checking group summary information
Pass 5: Memory used: 264k/0k (74k/191k), time: 0.00/ 0.00/ 0.00
Pass 5: I/O read: 1MB, write: 0MB, rate: 233.48MB/s
398 inodes used (0.88%, out of 45008)
2 non-contiguous files (0.5%)
0 non-contiguous directories (0.0%)
# of inodes with ind/dind/tind blocks: 0/0/0
Extent depth histogram: 392
37721 blocks used (75.44%, out of 50000)
0 bad blocks
1 large file
216 regular files
172 directories
0 character device files
0 block device files
0 fifos
0 links
0 symbolic links (0 fast symbolic links)
0 sockets
------------
388 files
Memory used: 264k/0k (74k/191k), time: 0.04/ 0.01/ 0.02
I/O read: 2MB, write: 1MB, rate: 52.12MB/s
start mds service on tmp.20a7BdrJQP
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
../libcfs/libcfs/libcfs options: 'cpu_npartitions=2'
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
gss/krb5 is not supported
quota/lquota options: 'hash_lqs_cur_bits=3'
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
check files after expanding the MDT and OST filesystems
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-1 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-1 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-2 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-2 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-3 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-3 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-4 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-4 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-5 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-5 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-6 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-6 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-7 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-7 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-8 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-8 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-9 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-9 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-10 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-10 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-11 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-11 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-12 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-12 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-13 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-13 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-14 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-14 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-15 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-15 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-16 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-16 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-17 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-17 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-18 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-18 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-19 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-19 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-20 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-20 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-21 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-21 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-22 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-22 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-23 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-23 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-24 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-24 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-25 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-25 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-26 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-26 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-27 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-27 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-28 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-28 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-29 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-29 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-30 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-30 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-31 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-31 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-32 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-32 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-33 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-33 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-34 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-34 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-35 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-35 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-36 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-36 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-37 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-37 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-38 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-38 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-39 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-39 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-40 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-40 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-41 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-41 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-42 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-42 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-43 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-43 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-44 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-44 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-45 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-45 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-46 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-46 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-47 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-47 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-48 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-48 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-49 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-49 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-50 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-50 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-51 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-51 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-52 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-52 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-53 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-53 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-54 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-54 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-55 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-55 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-56 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-56 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-57 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-57 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-58 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-58 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-59 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-59 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-60 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-60 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-61 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-61 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-62 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-62 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-63 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-63 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-64 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-64 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-65 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-65 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-66 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-66 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-67 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-67 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-68 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-68 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-69 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-69 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-70 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-70 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-71 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-71 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-72 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-72 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-73 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-73 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-74 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-74 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-75 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-75 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-76 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-76 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-77 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-77 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-78 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-78 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-79 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-79 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-80 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-80 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-81 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-81 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-82 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-82 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-83 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-83 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-84 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-84 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-85 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-85 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-86 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-86 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-87 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-87 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-88 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-88 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-89 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-89 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-90 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-90 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-91 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-91 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-92 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-92 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-93 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-93 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-94 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-94 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-95 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-95 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-96 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-96 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-97 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-97 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-98 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-98 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-99 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-99 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-100 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-100 has size 1048576 OK
create more files after expanding the MDT and OST filesystems
1+0 records in
1+0 records out
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0553001 s, 19.0 MB/s
1+0 records in
1+0 records out
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0349771 s, 30.0 MB/s
1+0 records in
1+0 records out
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0360704 s, 29.1 MB/s
1+0 records in
1+0 records out
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0658502 s, 15.9 MB/s
1+0 records in
1+0 records out
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0446806 s, 23.5 MB/s
1+0 records in
1+0 records out
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0406595 s, 25.8 MB/s
1+0 records in
1+0 records out
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0315284 s, 33.3 MB/s
1+0 records in
1+0 records out
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0233469 s, 44.9 MB/s
1+0 records in
1+0 records out
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0539964 s, 19.4 MB/s
1+0 records in
1+0 records out
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0333638 s, 31.4 MB/s
umount lustre on /mnt/lustre.....
Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:)
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
modules unloaded.
e2fsck -d -v -t -t -f -y /dev/mapper/mds1_flakey -m8
e2fsck 1.45.6.wc3 (28-Sep-2020)
Use max possible thread num: 1 instead
thread 0 jumping to group 0
e2fsck_pass1_run:2517: increase inode 81 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 82 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 83 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 84 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 85 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 86 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 87 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 88 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 89 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 90 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 91 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 92 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 93 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 94 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 95 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 96 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 97 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 98 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 99 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 100 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 101 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 102 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 103 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 104 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 105 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 106 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 107 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 108 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 109 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 110 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 111 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 112 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 113 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 114 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 115 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 116 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 117 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 118 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 119 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 120 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 121 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 122 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 123 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 124 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 125 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 126 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 127 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 128 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 129 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 130 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 131 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 132 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 133 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 134 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 135 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 136 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 137 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 138 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 139 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 140 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 141 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 142 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 143 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 144 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 145 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 146 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 147 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 148 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 149 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 150 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 151 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 152 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 153 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 154 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 155 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 156 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 157 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 158 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 24029 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 24057 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 24058 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 24059 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 24061 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 24062 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 24063 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 24064 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 24065 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 48047 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 48048 badness 0 to 2
Pass 1: Checking inodes, blocks, and sizes
[Thread 0] Scan group range [0, 3)
[Thread 0] group 1 finished
[Thread 0] group 2 finished
[Thread 0] group 3 finished
[Thread 0] Pass 1: Memory used: 272k/0k (148k/125k), time: 0.00/ 0.00/ 0.00
[Thread 0] Pass 1: I/O read: 1MB, write: 0MB, rate: 406.17MB/s
[Thread 0] Scanned group range [0, 3), inodes 383
Pass 2: Checking directory structure
Pass 2: Memory used: 272k/0k (111k/162k), time: 0.02/ 0.00/ 0.00
Pass 2: I/O read: 1MB, write: 0MB, rate: 63.37MB/s
Pass 3: Checking directory connectivity
Peak memory: Memory used: 272k/0k (111k/162k), time: 0.04/ 0.01/ 0.00
Pass 3A: Memory used: 272k/0k (111k/162k), time: 0.00/ 0.00/ 0.00
Pass 3A: I/O read: 0MB, write: 0MB, rate: 0.00MB/s
Pass 3: Memory used: 272k/0k (110k/163k), time: 0.00/ 0.00/ 0.00
Pass 3: I/O read: 1MB, write: 0MB, rate: 4065.04MB/s
Pass 4: Checking reference counts
Pass 4: Memory used: 272k/0k (73k/200k), time: 0.00/ 0.00/ 0.00
Pass 4: I/O read: 0MB, write: 0MB, rate: 0.00MB/s
Pass 5: Checking group summary information
Pass 5: Memory used: 272k/0k (72k/201k), time: 0.00/ 0.00/ 0.00
Pass 5: I/O read: 1MB, write: 0MB, rate: 552.18MB/s
382 inodes used (0.53%, out of 72000)
4 non-contiguous files (1.0%)
0 non-contiguous directories (0.0%)
# of inodes with ind/dind/tind blocks: 0/0/0
22546 blocks used (45.09%, out of 50000)
0 bad blocks
1 large file
254 regular files
118 directories
0 character device files
0 block device files
0 fifos
0 links
0 symbolic links (0 fast symbolic links)
0 sockets
------------
372 files
Memory used: 272k/0k (75k/198k), time: 0.04/ 0.02/ 0.00
I/O read: 1MB, write: 1MB, rate: 22.67MB/s
e2fsck -d -v -t -t -f -y /dev/mapper/ost1_flakey -m8
e2fsck 1.45.6.wc3 (28-Sep-2020)
Use max possible thread num: 1 instead
thread 0 jumping to group 0
e2fsck_pass1_run:2517: increase inode 81 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 82 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 89 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 90 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 91 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 92 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 93 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 94 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 95 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 96 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 97 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 98 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 99 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 100 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 101 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 102 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 103 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 104 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 105 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 106 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 107 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 132 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 134 badness 0 to 2
Pass 1: Checking inodes, blocks, and sizes
[Thread 0] Scan group range [0, 2)
[Thread 0] group 1 finished
[Thread 0] group 2 finished
[Thread 0] Pass 1: Memory used: 264k/0k (141k/124k), time: 0.01/ 0.00/ 0.00
[Thread 0] Pass 1: I/O read: 5MB, write: 0MB, rate: 405.55MB/s
[Thread 0] Scanned group range [0, 2), inodes 402
Pass 2: Checking directory structure
Pass 2: Memory used: 264k/0k (103k/162k), time: 0.00/ 0.00/ 0.00
Pass 2: I/O read: 1MB, write: 0MB, rate: 645.99MB/s
Pass 3: Checking directory connectivity
Peak memory: Memory used: 264k/0k (109k/156k), time: 0.02/ 0.01/ 0.00
Pass 3A: Memory used: 264k/0k (109k/156k), time: 0.00/ 0.00/ 0.00
Pass 3A: I/O read: 0MB, write: 0MB, rate: 0.00MB/s
Pass 3: Memory used: 264k/0k (101k/164k), time: 0.00/ 0.00/ 0.00
Pass 3: I/O read: 1MB, write: 0MB, rate: 9174.31MB/s
Pass 4: Checking reference counts
Pass 4: Memory used: 264k/0k (75k/190k), time: 0.00/ 0.00/ 0.00
Pass
4: I/O read: 0MB, write: 0MB, rate: 0.00MB/s Pass 5: Checking group summary information Pass 5: Memory used: 264k/0k (74k/191k), time: 0.00/ 0.00/ 0.00 Pass 5: I/O read: 1MB, write: 0MB, rate: 511.25MB/s 402 inodes used (0.89%, out of 45008) 2 non-contiguous files (0.5%) 0 non-contiguous directories (0.0%) # of inodes with ind/dind/tind blocks: 0/0/0 Extent depth histogram: 396 40281 blocks used (80.56%, out of 50000) 0 bad blocks 1 large file 220 regular files 172 directories 0 character device files 0 block device files 0 fifos 0 links 0 symbolic links (0 fast symbolic links) 0 sockets ------------ 392 files Memory used: 264k/0k (74k/191k), time: 0.03/ 0.01/ 0.01 I/O read: 2MB, write: 1MB, rate: 76.04MB/s resize2fs 1.45.6.wc3 (28-Sep-2020) Resizing the filesystem on /dev/mapper/mds1_flakey to 47500 (4k) blocks. The filesystem on /dev/mapper/mds1_flakey is now 47500 (4k) blocks long. resize2fs 1.45.6.wc3 (28-Sep-2020) Resizing the filesystem on /dev/mapper/ost1_flakey to 47500 (4k) blocks. The filesystem on /dev/mapper/ost1_flakey is now 47500 (4k) blocks long. 
e2fsck -d -v -t -t -f -y /dev/mapper/mds1_flakey -m8
e2fsck 1.45.6.wc3 (28-Sep-2020)
Use max possible thread num: 1 instead
thread 0 jumping to group 0
e2fsck_pass1_run:2517: increase inode 81 badness 0 to 2
[... identical "increase inode <N> badness 0 to 2" messages for inodes 82-158, 24029, 24057-24059, 24061-24065, 48047 and 48048 ...]
Pass 1: Checking inodes, blocks, and sizes
[Thread 0] Scan group range [0, 3)
[Thread 0] group 1 finished
[Thread 0] group 2 finished
[Thread 0] group 3 finished
[Thread 0] Pass 1: Memory used: 272k/0k (148k/125k), time: 0.00/ 0.00/ 0.00
[Thread 0] Pass 1: I/O read: 1MB, write: 0MB, rate: 414.59MB/s
[Thread 0] Scanned group range [0, 3), inodes 383
Pass 2: Checking directory structure
Pass 2: Memory used: 272k/0k (111k/162k), time: 0.01/ 0.00/ 0.00
Pass 2: I/O read: 1MB, write: 0MB, rate: 169.78MB/s
Pass 3: Checking directory connectivity
Peak memory: Memory used: 272k/0k (111k/162k), time: 0.03/ 0.01/ 0.01
Pass 3A: Memory used: 272k/0k (111k/162k), time: 0.00/ 0.00/ 0.00
Pass 3A: I/O read: 0MB, write: 0MB, rate: 0.00MB/s
Pass 3: Memory used: 272k/0k (110k/163k), time: 0.00/ 0.00/ 0.00
Pass 3: I/O read: 1MB, write: 0MB, rate: 15151.52MB/s
Pass 4: Checking reference counts
Pass 4: Memory used: 272k/0k (73k/200k), time: 0.00/ 0.00/ 0.00
Pass 4: I/O read: 0MB, write: 0MB, rate: 0.00MB/s
Pass 5: Checking group summary information
Pass 5: Memory used: 272k/0k (72k/201k), time: 0.00/ 0.00/ 0.00
Pass 5: I/O read: 1MB, write: 0MB, rate: 319.18MB/s
         382 inodes used (0.53%, out of 72000)
           4 non-contiguous files (1.0%)
           0 non-contiguous directories (0.0%)
             # of inodes with ind/dind/tind blocks: 0/0/0
       22546 blocks used (47.47%, out of 47500)
           0 bad blocks
           1 large file
         254 regular files
         118 directories
           0 character device files
           0 block device files
           0 fifos
           0 links
           0 symbolic links (0 fast symbolic links)
           0 sockets
------------
         372 files
Memory used: 272k/0k (75k/198k), time: 0.03/ 0.01/ 0.01
I/O read: 1MB, write: 1MB, rate: 30.93MB/s
e2fsck -d -v -t -t -f -y /dev/mapper/ost1_flakey -m8
e2fsck 1.45.6.wc3 (28-Sep-2020)
Use max possible thread num: 1 instead
thread 0 jumping to group 0
e2fsck_pass1_run:2517: increase inode 81 badness 0 to 2
[... identical "increase inode <N> badness 0 to 2" messages for inodes 82, 89-107, 132 and 134 ...]
Pass 1: Checking inodes, blocks, and sizes
[Thread 0] Scan group range [0, 2)
[Thread 0] group 1 finished
[Thread 0] group 2 finished
[Thread 0] Pass 1: Memory used: 264k/0k (141k/124k), time: 0.01/ 0.00/ 0.00
[Thread 0] Pass 1: I/O read: 5MB, write: 0MB, rate: 682.59MB/s
[Thread 0] Scanned group range [0, 2), inodes 402
Pass 2: Checking directory structure
Pass 2: Memory used: 264k/0k (103k/162k), time: 0.00/ 0.00/ 0.00
Pass 2: I/O read: 1MB, write: 0MB, rate: 661.38MB/s
Pass 3: Checking directory connectivity
Peak memory: Memory used: 264k/0k (109k/156k), time: 0.04/ 0.00/ 0.01
Pass 3A: Memory used: 264k/0k (109k/156k), time: 0.00/ 0.00/ 0.00
Pass 3A: I/O read: 0MB, write: 0MB, rate: 0.00MB/s
Pass 3: Memory used: 264k/0k (101k/164k), time: 0.00/ 0.00/ 0.00
Pass 3: I/O read: 1MB, write: 0MB, rate: 9174.31MB/s
Pass 4: Checking reference counts
Pass 4: Memory used: 264k/0k (75k/190k), time: 0.00/ 0.00/ 0.00
Pass 4: I/O read: 0MB, write: 0MB, rate: 0.00MB/s
Pass 5: Checking group summary information
Pass 5: Memory used: 264k/0k (74k/191k), time: 0.00/ 0.00/ 0.00
Pass 5: I/O read: 1MB, write: 0MB, rate: 693.00MB/s
         402 inodes used (0.89%, out of 45008)
           2 non-contiguous files (0.5%)
           0 non-contiguous directories (0.0%)
             # of inodes with ind/dind/tind blocks: 0/0/0
             Extent depth histogram: 396
       40281 blocks used (84.80%, out of 47500)
           0 bad blocks
           1 large file
         220 regular files
         172 directories
           0 character device files
           0 block device files
           0 fifos
           0 links
           0 symbolic links (0 fast symbolic links)
           0 sockets
------------
         392 files
Memory used: 264k/0k (74k/191k), time: 0.04/ 0.00/ 0.01
I/O read: 2MB, write: 1MB, rate: 46.15MB/s
start mds service on tmp.20a7BdrJQP
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
../libcfs/libcfs/libcfs options: 'cpu_npartitions=2'
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
gss/krb5 is not supported
quota/lquota options: 'hash_lqs_cur_bits=3'
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
check files after shrinking the MDT and OST filesystems
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-1 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-1 has size 1048576 OK
[... identical "has type file OK" / "has size 1048576 OK" pairs for f78.conf-sanity-2 through f78.conf-sanity-109 ...]
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-110 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-110 has size 1048576 OK
umount lustre on /mnt/lustre.....
Stopping client tmp.20a7BdrJQP /mnt/lustre (opts:)
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
modules unloaded.
Stopping clients: tmp.20a7BdrJQP /mnt/lustre (opts:-f)
Stopping clients: tmp.20a7BdrJQP /mnt/lustre2 (opts:-f)
tmp.20a7BdrJQP: executing set_hostid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs Force libcfs to create 2 CPU partitions ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' gss/krb5 is not supported quota/lquota options: 'hash_lqs_cur_bits=3' Formatting mgs, mds, osts Format mds1: /dev/mapper/mds1_flakey Format mds2: /dev/mapper/mds2_flakey Format ost1: /dev/mapper/ost1_flakey Format ost2: /tmp/lustre-ost2 start mds service on tmp.20a7BdrJQP Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 Commit the device label on /dev/mapper/mds1_flakey Started lustre-MDT0000 start mds service on tmp.20a7BdrJQP Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 Commit the device label on /dev/mapper/mds2_flakey Started lustre-MDT0001 tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F" tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F" start ost1 service on tmp.20a7BdrJQP Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 Commit the device label on /dev/mapper/ost1_flakey Started lustre-OST0000 tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F" tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 40 os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 0 sec tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F" tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 40 
os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
PASS 78 (136s)

== conf-sanity test 79: format MDT/OST without mgs option (should return errors) ========================================================== 17:07:41 (1679936861)
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
gss/krb5 is not supported
mkfs.lustre FATAL: Must specify --mgs or --mgsnode
mkfs.lustre: exiting with 22 (Invalid argument)
mkfs.lustre FATAL: Must specify --mgs or --mgsnode
mkfs.lustre: exiting with 22 (Invalid argument)
mkfs.lustre FATAL: Must specify --mgsnode
mkfs.lustre: exiting with 22 (Invalid argument)
Stopping clients: tmp.20a7BdrJQP /mnt/lustre (opts:-f)
Stopping clients: tmp.20a7BdrJQP /mnt/lustre2 (opts:-f)
tmp.20a7BdrJQP: executing set_hostid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
Loading modules from /mnt/build/lustre/tests/..
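The `wait_import_state ... 40` lines throughout this log poll an import until it reaches FULL, with a 40-second cap, then report how long the wait took. A stub-based sketch of that polling pattern — the real helper reads the state via `lctl get_param`; here a fake `get_state` flips to FULL on the third poll, and the one-second sleep between polls is omitted so the sketch runs instantly:

```shell
#!/bin/sh
# Hypothetical sketch of the wait_import_state polling pattern.
# get_state is a stand-in for querying the import state via lctl;
# it reports FULL from poll index 2 onward (an assumed stub).
get_state() {
	[ "$1" -ge 2 ] && echo FULL || echo CONNECTING
}

# Poll up to $1 times for the FULL state, reporting the elapsed count.
wait_import_state() {
	max=$1 i=0
	while [ "$i" -lt "$max" ]; do
		if [ "$(get_state "$i")" = "FULL" ]; then
			echo "in FULL state after $i sec"
			return 0
		fi
		i=$((i + 1))
		# the real helper sleeps 1 second between polls
	done
	echo "import never reached FULL" >&2
	return 1
}

wait_import_state 40
```

This mirrors why the log can report "in FULL state after 0 sec": the first poll already sees FULL, so the loop exits before any sleep.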
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
gss/krb5 is not supported
Formatting mgs, mds, osts
Format mds1: /dev/mapper/mds1_flakey
Format mds2: /dev/mapper/mds2_flakey
Format ost1: /dev/mapper/ost1_flakey
Format ost2: /tmp/lustre-ost2
start mds service on tmp.20a7BdrJQP
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Commit the device label on /dev/mapper/mds1_flakey
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Commit the device label on /dev/mapper/mds2_flakey
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Commit the device label on /dev/mapper/ost1_flakey
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 40
os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 0 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 40
os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
PASS 79 (39s)

== conf-sanity test 80: mgc import reconnect race ======== 17:08:20 (1679936900)
start mds service on tmp.20a7BdrJQP
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
fail_val=10
fail_loc=0x906
fail_val=10
fail_loc=0x906
start ost2 service on tmp.20a7BdrJQP
Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2
Commit the device label on /tmp/lustre-ost2
Started lustre-OST0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0001-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
fail_loc=0
stop ost2 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost2 (opts:-f) on tmp.20a7BdrJQP
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
PASS 80 (58s)

== conf-sanity test 81: sparse OST indexing ============== 17:09:18 (1679936958)
SKIP: conf-sanity test_81 needs >= 3 OSTs
SKIP 81 (1s)

== conf-sanity test 82a: specify OSTs for file (succeed) or directory (succeed) ========================================================== 17:09:19 (1679936959)
SKIP: conf-sanity test_82a needs >= 3 OSTs
SKIP 82a (0s)

== conf-sanity test 82b: specify OSTs for file with --pool and --ost-list options ========================================================== 17:09:19 (1679936959)
SKIP: conf-sanity test_82b needs >= 4 OSTs
SKIP 82b (1s)

== conf-sanity test 83: ENOSPACE on OST doesn't cause message VFS: Busy inodes after unmount ... ========================================================== 17:09:20 (1679936960)
mount the OST /dev/mapper/ost1_flakey as a ldiskfs filesystem
mnt_opts
run llverfs in partial mode on the OST ldiskfs /mnt/lustre-ost1
tmp.20a7BdrJQP: executing run_llverfs /mnt/lustre-ost1 -vpl no
llverfs: write /mnt/lustre-ost1/llverfs_dir00142/file000@0+1048576 short: 368640 written
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: Timestamp: 1679936962
tmp.20a7BdrJQP: dirs: 147, fs blocks: 37602
tmp.20a7BdrJQP: write_done: /mnt/lustre-ost1/llverfs_dir00142/file000, current: 213.564 MB/s, overall: 213.564 MB/s, ETA: 0:00:00
tmp.20a7BdrJQP:
tmp.20a7BdrJQP: read_done: /mnt/lustre-ost1/llverfs_dir00141/file000, current: 1551.2 MB/s, overall: 1551.2 MB/s, ETA: 0:00:00
tmp.20a7BdrJQP:
unmount the OST /dev/mapper/ost1_flakey
Stopping /mnt/lustre-ost1 (opts:) on tmp.20a7BdrJQP
checking for existing Lustre data: found
Read previous values:
Target:     lustre-MDT0000
Index:      0
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x5 (MDT MGS )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: sys.timeout=20 mdt.identity_upcall=/mnt/build/lustre/tests/../utils/l_getidentity

Permanent disk data:
Target:     lustre=MDT0000
Index:      0
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x105 (MDT MGS writeconf )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: sys.timeout=20 mdt.identity_upcall=/mnt/build/lustre/tests/../utils/l_getidentity

checking for existing Lustre data: found
Read previous values:
Target:     lustre-MDT0001
Index:      1
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x1 (MDT )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20 mdt.identity_upcall=/mnt/build/lustre/tests/../utils/l_getidentity

Permanent disk data:
Target:     lustre=MDT0001
Index:      1
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x101 (MDT writeconf )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20 mdt.identity_upcall=/mnt/build/lustre/tests/../utils/l_getidentity

checking for existing Lustre data: found
Read previous values:
Target:     lustre-OST0000
Index:      0
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x62 (OST first_time update )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20

Permanent disk data:
Target:     lustre=OST0000
Index:      0
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x162 (OST first_time update writeconf )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20

checking for existing Lustre data: found
Read previous values:
Target:     lustre-OST0001
Index:      1
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x2 (OST )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20

Permanent disk data:
Target:     lustre=OST0001
Index:      1
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x102 (OST writeconf )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.125.30@tcp sys.timeout=20

start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
mount.lustre: mount /dev/mapper/ost1_flakey at /mnt/lustre-ost1 failed: No space left on device
Start of /dev/mapper/ost1_flakey on ost1 failed 28
string err
Stopping clients: tmp.20a7BdrJQP /mnt/lustre (opts:-f)
Stopping clients: tmp.20a7BdrJQP /mnt/lustre2 (opts:-f)
tmp.20a7BdrJQP: executing set_hostid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
Loading modules from /mnt/build/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
gss/krb5 is not supported
Formatting mgs, mds, osts
Format mds1: /dev/mapper/mds1_flakey
Format mds2: /dev/mapper/mds2_flakey
Format ost1: /dev/mapper/ost1_flakey
Format ost2: /tmp/lustre-ost2
start mds service on tmp.20a7BdrJQP
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Commit the device label on /dev/mapper/mds1_flakey
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
Commit the device label on /dev/mapper/mds2_flakey
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Commit the device label on /dev/mapper/ost1_flakey
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 40
os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 1 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 40
os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
stop ost1 service on tmp.20a7BdrJQP
Stopping /mnt/lustre-ost1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:-f) on tmp.20a7BdrJQP
stop mds service on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds2 (opts:-f) on tmp.20a7BdrJQP
PASS 83 (45s)

== conf-sanity test 84: check recovery_hard_time ========= 17:10:05 (1679937005)
start mds service on tmp.20a7BdrJQP
start mds service on tmp.20a7BdrJQP
Starting mds1: -o localrecov -o recovery_time_hard=60,recovery_time_soft=60 /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
start mds service on tmp.20a7BdrJQP
Starting mds2: -o localrecov -o recovery_time_hard=60,recovery_time_soft=60 /dev/mapper/mds2_flakey /mnt/lustre-mds2
Started lustre-MDT0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost1 service on tmp.20a7BdrJQP
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Started lustre-OST0000
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
start ost2 service on tmp.20a7BdrJQP
Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2
Commit the device label on /tmp/lustre-ost2
Started lustre-OST0001
tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0001-osc-[-0-9a-f]*.ost_server_uuid
tmp.20a7BdrJQP: Reading test skip list from /tmp/ltest.config
tmp.20a7BdrJQP: EXCEPT="$EXCEPT 32 53 63 102 115 119 123F"
recovery_time=60, timeout=20, wrap_up=5
mount lustre on /mnt/lustre.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre
mount lustre on /mnt/lustre2.....
Starting client: tmp.20a7BdrJQP: -o user_xattr,flock tmp.20a7BdrJQP@tcp:/lustre /mnt/lustre2
UUID                   1K-blocks        Used   Available Use% Mounted on
lustre-MDT0000_UUID        95248        1668       84924   2% /mnt/lustre[MDT:0]
lustre-MDT0001_UUID        95248        1532       85060   2% /mnt/lustre[MDT:1]
lustre-OST0000_UUID       142216        1524      126692   2% /mnt/lustre[OST:0]
lustre-OST0001_UUID       142216        1388      126828   2% /mnt/lustre[OST:1]
filesystem_summary:       284432        2912      253520   2% /mnt/lustre
total: 1000 open/close in 1.47 seconds: 679.11 ops/second
fail_loc=0x20000709
fail_val=5
Failing mds1 on tmp.20a7BdrJQP
Stopping /mnt/lustre-mds1 (opts:) on tmp.20a7BdrJQP
17:10:22 (1679937022) shut down
Failover mds1 to tmp.20a7BdrJQP
e2fsck -d -v -t -t -f -n /dev/mapper/mds1_flakey -m8
e2fsck 1.45.6.wc3 (28-Sep-2020)
Use max possible thread num: 1 instead
thread 0 jumping to group 0
e2fsck_pass1_run:2517: increase inode 81 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 82 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 83 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 84 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 85 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 86 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 87 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 88 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 89 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 90 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 91 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 92 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 93 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 94 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 95 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 96 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 97 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 98 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 99 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 100 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 101 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 102 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 103 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 104 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 105 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 106 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 107 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 108 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 109 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 110 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 111 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 112 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 113 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 114 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 115 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 116 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 117 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 118 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 119 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 120 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 121 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 122 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 123 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 124 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 125 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 126 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 127 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 128 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 129 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 130 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 131 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 132 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 133 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 134 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 135 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 136 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 137 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 138 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 139 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 140 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 141 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 142 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 143 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 144 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 145 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 146 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 147 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 148 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 149 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 150 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 151 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 152 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 153 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 154 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 155 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 156 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 157 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 158 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 159 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 160 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 161 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 162 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 163 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 164 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 26693 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 26721 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 26722 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 26723 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 53375 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 53376 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 53377 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 53378 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 53379 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 53380 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 53381 badness 0 to 2
e2fsck_pass1_run:2517: increase inode 53382 badness 0 to 2
Warning: skipping journal recovery because doing a read-only filesystem check.
Pass 1: Checking inodes, blocks, and sizes
[Thread 0] Scan group range [0, 3)
[Thread 0] group 1 finished
[Thread 0] group 2 finished
[Thread 0] group 3 finished
[Thread 0] Pass 1: Memory used: 268k/0k (148k/121k), time: 0.01/ 0.00/ 0.00
[Thread 0] Pass 1: I/O read: 1MB, write: 0MB, rate: 143.76MB/s
[Thread 0] Scanned group range [0, 3), inodes 277
Pass 2: Checking directory structure
Pass 2: Memory used: 268k/0k (114k/155k), time: 0.00/ 0.00/ 0.00
Pass 2: I/O read: 1MB, write: 0MB, rate: 575.71MB/s
Pass 3: Checking directory connectivity
Peak memory: Memory used: 268k/0k (114k/155k), time: 0.03/ 0.00/ 0.01
Pass 3: Memory used: 268k/0k (113k/156k), time: 0.00/ 0.00/ 0.00
Pass 3: I/O read: 0MB, write: 0MB, rate: 0.00MB/s
Pass 4: Checking reference counts
Pass 4: Memory used: 268k/0k (72k/196k), time: 0.00/ 0.00/ 0.00
Pass 4: I/O read: 0MB, write: 0MB, rate: 0.00MB/s
Pass 5: Checking group summary information
Free blocks count wrong (25455, counted=25443). Fix? no
Free inodes count wrong (79719, counted=79715). Fix? no
Pass 5: Memory used: 268k/0k (72k/197k), time: 0.00/ 0.00/ 0.00
Pass 5: I/O read: 1MB, write: 0MB, rate: 557.72MB/s

273 inodes used (0.34%, out of 79992)
5 non-contiguous files (1.8%)
0 non-contiguous directories (0.0%)
# of inodes with ind/dind/tind blocks: 0/0/0
24545 blocks used (49.09%, out of 50000)
0 bad blocks
1 large file
150 regular files
117 directories
0 character device files
0 block device files
0 fifos
0 links
0 symbolic links (0 fast symbolic links)
0 sockets
------------
267 files
Memory used: 268k/0k (74k/195k), time: 0.03/ 0.00/ 0.01
I/O read: 1MB, write: 0MB, rate: 34.27MB/s
mount facets: mds1
Starting mds1: -o localrecov -o recovery_time_hard=60,recovery_time_soft=60 /dev/mapper/mds1_flakey /mnt/lustre-mds1
Started lustre-MDT0000
17:10:37 (1679937037) targets are mounted
17:10:37 (1679937037) facet_failover done
!!!!!!!!!! [ 6727.432199] LustreError: 343158:0:(osp_internal.h:530:osp_fid_diff()) ASSERTION( fid_seq(fid1) == fid_seq(fid2) ) failed: fid1:[0x2c0000401:0x2:0x0], fid2:[0x100010000:0x1:0x0] !!!!!!!!!!
!!!!!!!!!! [ 6727.436720] LustreError: 343158:0:(osp_internal.h:530:osp_fid_diff()) LBUG !!!!!!!!!!
!!!!!!!!!! [ 6727.436896] [<0>] lbug_with_loc+0x3e/0x80 [libcfs] !!!!!!!!!!
!!!!!!!!!! [ 6727.437978] Kernel panic - not syncing: LBUG !!!!!!!!!!
!!!!!!!!!! [ 6727.438184] ? lbug_with_loc+0x3e/0x80 [libcfs] !!!!!!!!!!
!!!!!!!!!! [ 6727.438221] lbug_with_loc.cold.6+0x18/0x18 [libcfs] !!!!!!!!!!
LTEST: stop requested