Details
- Type: Bug
- Resolution: Fixed
- Priority: Critical
- Fix Version: Lustre 2.10.0
- Labels: None
- Environment: onyx-48, full, DNE/ldiskfs; EL7, master branch, v2.9.56.11, b3565
- Severity: 3
Description
https://testing.hpdd.intel.com/test_sessions/cb12c60c-613a-44b3-bfef-03c0651d2607
Note: This was also seen in v2.8: LU-8279
This sanity-scrub subtest failure in test_6 was followed by the same failure in roughly the next 375 subtests (in sanity-scrub, sanity-benchmark, sanity-lfsck, sanityn, and sanity-hsm).
From test_log:
CMD: onyx-48vm1.onyx.hpdd.intel.com,onyx-48vm2,onyx-48vm3,onyx-48vm7,onyx-48vm8 dmesg
Kernel error detected: [11155.947772] VFS: Busy inodes after unmount of dm-1. Self-destruct in 5 seconds. Have a nice day...
sanity-scrub test_6: @@@@@@ FAIL: Error in dmesg detected
Trace dump:
= /usr/lib64/lustre/tests/test-framework.sh:4931:error()
= /usr/lib64/lustre/tests/test-framework.sh:5212:run_one()
= /usr/lib64/lustre/tests/test-framework.sh:5246:run_one_logged()
= /usr/lib64/lustre/tests/test-framework.sh:5093:run_test()
= /usr/lib64/lustre/tests/sanity-scrub.sh:773:main()
Note: The "VFS: Busy inodes after unmount" message is also present in the MDS2/MDS4 console log.
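The failure mechanism shown in the log — the framework runs dmesg on each test node after a subtest and fails the test when a kernel error string appears — can be sketched as follows. This is a minimal illustration, not Lustre's actual test-framework.sh code; the helper name and log path are hypothetical:

```shell
#!/bin/sh
# Sketch of a dmesg error check like the one that fails these subtests.
# check_busy_inodes is a hypothetical helper, not a Lustre function.
check_busy_inodes() {
    # $1: path to a captured dmesg log for one node
    if grep -q "VFS: Busy inodes after unmount" "$1"; then
        echo "FAIL: Error in dmesg detected"
        return 1
    fi
    echo "PASS"
    return 0
}

# Feed it the exact kernel message from this ticket's log as sample input.
printf '%s\n' "[11155.947772] VFS: Busy inodes after unmount of dm-1. Self-destruct in 5 seconds. Have a nice day..." > /tmp/dmesg.sample
check_busy_inodes /tmp/dmesg.sample || true
```

Because the message stays in the kernel ring buffer, every later subtest that re-runs such a check on the same node reports the same failure, which matches the ~375 cascading failures described above.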