Details
- Type: Bug
- Resolution: Unresolved
- Priority: Minor
- Affects Version: Lustre 2.16.0
Description
sanity-quota test_1a, 1b, and 1c failed in the 2.16.0 RC1 full-zfs-part-2 test session:
https://testing.whamcloud.com/test_sets/0d7fdd5a-b404-4b82-bd32-aaf538cee475
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre  10245*       0   10240       -       1       0       0       -
 lustre-MDT0000       2       -       0       -       1       -       0       -
 lustre-OST0000       0       -       0       -       -       -       -       -
 lustre-OST0001       0       -       0       -       -       -       -       -
 lustre-OST0002       0       -       0       -       -       -       -       -
 lustre-OST0003   10244       -       0       -       -       -       -       -
 lustre-OST0004       0       -       0       -       -       -       -       -
 lustre-OST0005       0       -       0       -       -       -       -       -
 lustre-OST0006       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Files for group (quota_usr), count=1:
  File: /mnt/lustre/d1a.sanity-quota/f1a.sanity-quota-1
  Size: 11534336   Blocks: 20487   IO Block: 4194304   regular file
Device: 2c54f966h/743766374d   Inode: 144117285664063501   Links: 1
Access: (0644/-rw-r--r--)   Uid: (60000/quota_usr)   Gid: (60000/quota_usr)
Access: 2024-10-01 10:01:26.000000000 +0000
Modify: 2024-10-01 10:01:36.000000000 +0000
Change: 2024-10-01 10:01:36.000000000 +0000
 Birth: 2024-10-01 10:01:26.000000000 +0000
sanity-quota test_1a: @@@@@@ FAIL: user write success, but expect EDQUOT
Issue Links
- is related to LU-15299: sanity-quota test_71a: FAIL: user write failure, but expect success (Open)
Note that there is a new test failure rate report that can be used to see the change in subtest failure rates over time, even if a subtest is already failing intermittently:
https://testing.whamcloud.com/reports?test_set_script_id=61149410-4a46-11e0-a7f6-52540025f9af&sub_test_script_id=91a1b9eb-e7d9-491e-9cdb-b6ccf3a4a53a&source=fail_rate_trend#redirect
This link shows sanity-quota test_73 going from a 0% failure rate to over 75% in the past few days. The earlier failure spikes in January and March were caused by patches that failed almost all of the subtests.