Details

Type: Bug
Resolution: Cannot Reproduce
Priority: Minor
Fix Version/s: None
Affects Version/s: Lustre 2.4.0
Labels: None
Environment: My local single-node VM "cluster". Latest master at the time of writing. Using default local.sh MDSSIZE.
Severity: 3
Rank: 6106
Description
It looks like this is fairly easy to reproduce.
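As a minimal sketch, this is roughly how I trigger it (assuming the stock scripts under lustre/tests and the default local.sh MDSSIZE; the paths are from my VM):

    cd /root/lustre-release/lustre/tests
    sh llmount.sh          # format and mount the single-node test filesystem
    ONLY=51b sh sanity.sh  # run just sanity test 51b

The run then fails like this: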
== sanity test 51b: mkdir .../t-0 --- .../t-70000 ====================== 10:55:03 (1346122503)
 - created 10000 (time 1346122511.69 total 8.47 last 8.47)
 - created 20000 (time 1346122520.57 total 17.35 last 8.88)
 - created 30000 (time 1346122529.69 total 26.46 last 9.11)
mkdir(/mnt/lustre/d51b/t-32343) error: No space left on device
total: 32343 creates in 28.63 seconds: 1129.77 creates/second
 sanity test_51b: @@@@@@ FAIL: test_51b failed with 28
  Trace dump:
  = /root/lustre-release/lustre/tests/test-framework.sh:3638:error_noexit()
  = /root/lustre-release/lustre/tests/test-framework.sh:3660:error()
  = /root/lustre-release/lustre/tests/test-framework.sh:3893:run_one()
  = /root/lustre-release/lustre/tests/test-framework.sh:3922:run_one_logged()
  = /root/lustre-release/lustre/tests/test-framework.sh:3742:run_test()
  = sanity.sh:3150:main()
Dumping lctl log to /tmp/test_logs/1346122501/sanity.test_51b.*.1346122532.log
Dumping logs only on local client.
FAIL 51b (36s)
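(The 28 that test_51b fails with is just errno 28, i.e. ENOSPC.) To look closer, I then remounted the MDT backing file as plain ldiskfs and checked usage: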
[root@h221f tests]# umount /mnt/mds1
[root@h221f tests]# mount -t ldiskfs -o loop /tmp/lustre-mdt1 lustre-mdt1 lustre-mdt1_2
[root@h221f tests]# mount -t ldiskfs -o loop /tmp/lustre-mdt1 /mnt/mds1
[root@h221f tests]# df /mnt/mds1
Filesystem       1K-blocks  Used Available Use% Mounted on
/tmp/lustre-mdt1    149944 30856    109088  23% /mnt/mds1
[root@h221f tests]# df -i /mnt/mds1
Filesystem       Inodes IUsed IFree IUse% Mounted on
/tmp/lustre-mdt1 100000  2678 97322     3% /mnt/mds1
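So after the failed run the remounted MDT shows only 23% of blocks and 3% of inodes in use, which at first glance does not square with the ENOSPC, though this df was taken after the test's cleanup, which may already have removed the directories. One hedged back-of-the-envelope, assuming ldiskfs spends at least one 4 KB block per new directory (an assumption, not verified on this image):

    # 32343 directories * 4 KB/directory ~= 129372 KB (~126 MB) for the
    # dirs alone, versus 109088 KB (~107 MB) reported available on the
    # 150 MB MDT, so exhausting blocks (rather than inodes) partway to
    # the 70000 target is at least plausible.

On a live run, something like the following should show which resource the MDT actually runs out of (standard tools, nothing specific to this ticket):

    lfs df /mnt/lustre       # per-target block usage as the client sees it
    lfs df -i /mnt/lustre    # per-target inode usage
    dumpe2fs -h /tmp/lustre-mdt1 | grep -i free   # free blocks/inodes in the MDT image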