Details
- Type: Bug
- Resolution: Unresolved
- Priority: Medium
Description
This issue was created by maloo for Frederick Dilger <fdilger@whamcloud.com>
This issue relates to the following test suite run: https://testing.whamcloud.com/test_sets/8dda9a2c-e69f-4de7-b700-8e26334860e2
test_31 failed with the following error:
stat process stuck due to unavailable OSTs
Test session details:
clients: https://build.whamcloud.com/job/lustre-b_es-reviews/24383 - 4.18.0-553.51.1.el8_10.x86_64
servers: https://build.whamcloud.com/job/lustre-b_es-reviews/24383 - 4.18.0-553.53.1.el8_lustre.ddn17.x86_64
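The reported failure is a stat() that never returns while the OST is stopped; with an FLR-mirrored file the glimpse request should instead be retried against an available mirror. A minimal, self-contained sketch of the hang check (the file path is a placeholder created locally, not the mirrored file on the flakey OST used by the real test):

```shell
#!/bin/sh
# Sketch: detect whether stat gets stuck, as the "stat process stuck"
# check in test_31 does. FILE is a local placeholder; in the test it
# would live on the stopped OST (ost1, backed by /dev/mapper/ost1_flakey).
FILE=$(mktemp)
if timeout 30 stat "$FILE" >/dev/null 2>&1; then
    echo "stat completed"       # glimpse retried; test would pass
else
    echo "stat stuck or failed" # reproduces the reported failure mode
fi
rm -f "$FILE"
```

`timeout` exits non-zero if stat has not returned within the budget, which is how a stuck process can be detected without blocking the whole test script.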
== sanity-flr test 31: make sure glimpse request can be retried ====================================== 01:30:38 (1752024638)
CMD: onyx-139vm6 grep -c /mnt/lustre-ost1' ' /proc/mounts || true
Stopping /mnt/lustre-ost1 (opts:) on onyx-139vm6
CMD: onyx-139vm6 umount -d /mnt/lustre-ost1
CMD: onyx-139vm6 lsmod | grep lnet > /dev/null &&
lctl dl | grep ' ST ' || true
CMD: onyx-139vm4.onyx.whamcloud.com lctl get_param -n at_min
CMD: onyx-139vm4.onyx.whamcloud.com PATH=/usr/lib64/lustre/tests:/usr/lib/lustre/tests:/usr/lib64/lustre/tests:/opt/iozone/bin:/opt/iozone/bin:/usr/lib64/lustre/tests/mpi:/usr/lib64/lustre/tests/racer:/usr/lib64/lustre/../lustre-iokit/sgpdd-survey:/usr/lib64/lustre/tests:/usr/lib64/lustre/utils/gss:/usr/lib64/lustre/utils:/opt/iozone/bin:/usr/lib64/openmpi/bin:/usr/share/Modules/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/usr/sbin:/sbin::/sbin:/bin:/usr/sbin: NAME=autotest_config bash rpc.sh wait_import_state (DISCONN|IDLE) osc.lustre-OST0000-osc-ffff9be403473800.ost_server_uuid 40
onyx-139vm4.onyx.whamcloud.com: executing wait_import_state (DISCONN|IDLE) osc.lustre-OST0000-osc-ffff9be403473800.ost_server_uuid 40
osc.lustre-OST0000-osc-ffff9be403473800.ost_server_uuid in DISCONN state after 0 sec
CMD: onyx-139vm6 mkdir -p /mnt/lustre-ost1
CMD: onyx-139vm6 dmsetup status /dev/mapper/ost1_flakey >/dev/null 2>&1
CMD: onyx-139vm6 dmsetup status /dev/mapper/ost1_flakey 2>&1
CMD: onyx-139vm6 test -b /dev/mapper/ost1_flakey
CMD: onyx-139vm6 e2label /dev/mapper/ost1_flakey
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
CMD: onyx-139vm6 mkdir -p /mnt/lustre-ost1; mount -t lustre -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
CMD: onyx-139vm6 e2label /dev/mapper/ost1_flakey 2>/dev/null
CMD: onyx-139vm6 /usr/sbin/lctl set_param seq.cli-lustre-OST0000-super.width=16384
seq.cli-lustre-OST0000-super.width=16384
CMD: onyx-139vm6 /usr/sbin/lctl get_param -n health_check
CMD: onyx-139vm6 PATH=/usr/lib64/lustre/tests:/usr/lib/lustre/tests:/usr/lib64/lustre/tests:/opt/iozone/bin:/opt/iozone/bin:/usr/lib64/lustre/tests/mpi:/usr/lib64/lustre/tests/racer:/usr/lib64/lustre/../lustre-iokit/sgpdd-survey:/usr/lib64/lustre/tests:/usr/lib64/lustre/utils/gss:/usr/lib64/lustre/utils:/opt/iozone/bin:/usr/lib64/openmpi/bin:/usr/share/Modules/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/usr/sbin:/sbin::/sbin:/bin:/usr/sbin: NAME=autotest_config bash rpc.sh set_default_debug \"vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck\" \"all\" 4
onyx-139vm6: onyx-139vm6.onyx.whamcloud.com: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 4
CMD: onyx-139vm6 e2label /dev/mapper/ost1_flakey 2>/dev/null | grep -E ':[a-zA-Z][0-9]{4}'
pdsh@onyx-139vm4: onyx-139vm6: ssh exited with exit code 1
CMD: onyx-139vm6 e2label /dev/mapper/ost1_flakey 2>/dev/null
Started lustre-OST0000
CMD: onyx-139vm4.onyx.whamcloud.com lctl get_param -n at_max
affected facets: ost1
CMD: onyx-139vm6 PATH=/usr/lib64/lustre/tests:/usr/lib/lustre/tests:/usr/lib64/lustre/tests:/opt/iozone/bin:/opt/iozone/bin:/usr/lib64/lustre/tests/mpi:/usr/lib64/lustre/tests/racer:/usr/lib64/lustre/../lustre-iokit/sgpdd-survey:/usr/lib64/lustre/tests:/usr/lib64/lustre/utils/gss:/usr/lib64/lustre/utils:/opt/iozone/bin:/usr/lib64/openmpi/bin:/usr/share/Modules/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/usr/sbin:/sbin::/sbin:/bin:/usr/sbin: NAME=autotest_config bash rpc.sh _wait_recovery_complete *.lustre-OST0000.recovery_status 1475
onyx-139vm6: onyx-139vm6.onyx.whamcloud.com: executing _wait_recovery_complete *.lustre-OST0000.recovery_status 1475
onyx-139vm6: *.lustre-OST0000.recovery_status status: COMPLETE
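The `wait_import_state` and `_wait_recovery_complete` helpers invoked via rpc.sh above both reduce to polling a parameter until it matches an expected state within a deadline. A self-contained sketch of that loop, where `check_state` is a stand-in for `lctl get_param -n <param>` (here hard-wired to COMPLETE so the sketch runs anywhere):

```shell
#!/bin/sh
# Generic poll-until-state loop, mirroring what wait_import_state and
# _wait_recovery_complete do. check_state is a placeholder for
# `lctl get_param -n ...`; the real helpers read live Lustre state.
check_state() { echo COMPLETE; }
deadline=$(( $(date +%s) + 40 ))   # 40s budget, as in the log above
while [ "$(date +%s)" -lt "$deadline" ]; do
    state=$(check_state)
    case "$state" in
        COMPLETE|DISCONN|IDLE) echo "reached $state"; exit 0 ;;
    esac
    sleep 1
done
echo "timeout waiting for state"
exit 1
```

In the log, OST0000 reports DISCONN after 0 seconds and recovery_status COMPLETE, so both waits return immediately; the failure occurs later, in the stat check itself.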
CMD: onyx-139vm6 grep -c /mnt/lustre-ost2' ' /proc/mounts || true
Stopping /mnt/lustre-ost2 (opts:) on onyx-139vm6
CMD: onyx-139vm6 umount -d /mnt/lustre-ost2
CMD: onyx-139vm6 lsmod | grep lnet > /dev/null &&
lctl dl | grep ' ST ' || true
CMD: onyx-139vm4.onyx.whamcloud.com lctl get_param -n at_min
CMD: onyx-139vm4.onyx.whamcloud.com PATH=/usr/lib64/lustre/tests:/usr/lib/lustre/tests:/usr/lib64/lustre/tests:/opt/iozone/bin:/opt/iozone/bin:/usr/lib64/lustre/tests/mpi:/usr/lib64/lustre/tests/racer:/usr/lib64/lustre/../lustre-iokit/sgpdd-survey:/usr/lib64/lustre/tests:/usr/lib64/lustre/utils/gss:/usr/lib64/lustre/utils:/opt/iozone/bin:/usr/lib64/openmpi/bin:/usr/share/Modules/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/usr/sbin:/sbin::/sbin:/bin:/usr/sbin: NAME=autotest_config bash rpc.sh wait_import_state (DISCONN|IDLE) osc.lustre-OST0001-osc-ffff9be403473800.ost_server_uuid 40
onyx-139vm4.onyx.whamcloud.com: executing wait_import_state (DISCONN|IDLE) osc.lustre-OST0001-osc-ffff9be403473800.ost_server_uuid 40
osc.lustre-OST0001-osc-ffff9be403473800.ost_server_uuid in DISCONN state after 0 sec
sanity-flr test_31: @@@@@@ FAIL: stat process stuck due to unavailable OSTs
VVVVVVV DO NOT REMOVE LINES BELOW, Added by Maloo for auto-association VVVVVVV
sanity-flr test_31 - stat process stuck due to unavailable OSTs