[LU-2836] Test failure on test suite sanity-quota, subtest test_3 Created: 19/Feb/13 Updated: 09/Mar/21 Resolved: 09/Mar/21 |
|
| Status: | Resolved |
| Project: | Lustre |
| Component/s: | None |
| Affects Version/s: | Lustre 2.4.0 |
| Fix Version/s: | Lustre 2.4.0 |
| Type: | Bug | Priority: | Major |
| Reporter: | Maloo | Assignee: | WC Triage |
| Resolution: | Fixed | Votes: | 0 |
| Labels: | HB, zfs |
| Issue Links: | |
| Severity: | 3 |
| Rank (Obsolete): | 6870 |
| Description |
|
This issue was created by maloo for Li Wei <liwei@whamcloud.com>.

This issue relates to the following test suite run: https://maloo.whamcloud.com/test_sets/03c8681a-7af8-11e2-b916-52540035b04c.

The sub-test test_3 failed with the following error:
Info required for matching: sanity-quota 3 |
| Comments |
| Comment by Niu Yawei (Inactive) [ 20/Feb/13 ] |
14:39:16:running as uid/gid/euid/egid 60000/60000/60000/60000, groups:
14:39:16: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d0.sanity-quota/d3/f.sanity-quota.3-0] [count=1]
14:39:16:1+0 records in
14:39:16:1+0 records out
14:39:16:1048576 bytes (1.0 MB) copied, 0.0296897 s, 35.3 MB/s
14:39:27:Write to exceed soft limit
14:39:27:running as uid/gid/euid/egid 60000/60000/60000/60000, groups:
14:39:27: [dd] [if=/dev/zero] [of=/mnt/lustre/d0.sanity-quota/d3/f.sanity-quota.3-0] [bs=1K] [count=10] [seek=1024]
14:39:27:10+0 records in
14:39:27:10+0 records out
14:39:27:10240 bytes (10 kB) copied, 9.24401 s, 1.1 kB/s
14:39:28:Disk quotas for user quota_usr (uid 60000):

It took 11 seconds to flush the data, so the grace period had already expired before the "Write before timer goes off" step:

14:39:30:Write before timer goes off
14:39:30:running as uid/gid/euid/egid 60000/60000/60000/60000, groups:
14:39:30: [dd] [if=/dev/zero] [of=/mnt/lustre/d0.sanity-quota/d3/f.sanity-quota.3-0] [bs=1K] [count=10] [seek=2048]
14:39:30:dd: writing `/mnt/lustre/d0.sanity-quota/d3/f.sanity-quota.3-0': Disk quota exceeded

Not sure why flushing 1 MB of data takes so long.
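For context, here is a minimal sketch, not the actual test script, of the sequence test_3 exercises, assuming a /mnt/lustre mount, the quota_usr user with uid/gid 60000, and illustrative grace/limit values:

```bash
TESTFILE=/mnt/lustre/d0.sanity-quota/d3/f.sanity-quota.3-0

# Short block/inode grace time and a 1MB block soft limit (0 = no hard limit):
lfs setquota -t -u --block-grace 20 --inode-grace 20 /mnt/lustre
lfs setquota -u quota_usr -b 1024 -B 0 -i 0 -I 0 /mnt/lustre

# Fill up to the soft limit, then exceed it; the grace timer starts ticking:
runas -u 60000 -g 60000 dd if=/dev/zero of=$TESTFILE bs=1M count=1
runas -u 60000 -g 60000 dd if=/dev/zero of=$TESTFILE bs=1K count=10 seek=1024

# This write is meant to land before the grace timer goes off, but if the
# flush above outlived the grace time it fails with EDQUOT instead:
runas -u 60000 -g 60000 dd if=/dev/zero of=$TESTFILE bs=1K count=10 seek=2048
```

With the sync flush of the second dd alone taking over 9 seconds, as in the log above, most of the grace window is already consumed before the final write starts. |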
| Comment by Niu Yawei (Inactive) [ 26/Feb/13 ] |
|
When approaching the quota limit, the client switches to sync writes (one page at a time). On a slow test system the data flush can therefore take a very long time, longer than the grace time, so the grace period can expire before we start the next step of the test. I think we should enlarge the grace time for test_3, as sketched below.
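A minimal sketch of that adjustment, assuming the grace time is set with lfs setquota -t; the 60-second value is illustrative, not necessarily what the landed patch uses:

```bash
# Enlarge the block/inode grace period so a slow sync flush cannot outlive it:
lfs setquota -t -u --block-grace 60 --inode-grace 60 /mnt/lustre

# While the soft limit is exceeded, the remaining grace can be checked with:
lfs quota -u quota_usr /mnt/lustre
```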
|
| Comment by Niu Yawei (Inactive) [ 26/Feb/13 ] |
| Comment by Nathaniel Clark [ 27/Feb/13 ] |
|
This is a slow dd command, which is related to other slow dd commands. |
| Comment by Niu Yawei (Inactive) [ 06/Mar/13 ] |
|
This one is for ldiskfs; it's a little bit different from the other failures, which are on zfs. |
| Comment by Peter Jones [ 18/Mar/13 ] |
|
Landed for 2.4 |
| Comment by Andreas Dilger [ 01/Oct/14 ] |
|
Bug was closed, but sanity-quota test_3 and test_6 are still being skipped for ZFS filesystems.
| Comment by James A Simmons [ 14/Aug/16 ] |
|
Old blocker for unsupported version |
| Comment by Andreas Dilger [ 22/Dec/17 ] |
|
This test is still being skipped for ZFS. A patch needs to be submitted that removes it from ALWAYS_EXCEPT, with Test-Parameters: sanity-quota ostfilesystemtype=zfs run enough times that we are confident it passes consistently.
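For illustration only, a sketch of what such a patch might touch, assuming the skip is implemented through the ALWAYS_EXCEPT list in lustre/tests/sanity-quota.sh (the real condition and test numbers may differ):

```bash
# Stop excepting test_3 when the OSTs run on ZFS:
if [ "$(facet_fstype ost1)" = "zfs" ]; then
	# was: ALWAYS_EXCEPT="$ALWAYS_EXCEPT 3 6"
	ALWAYS_EXCEPT="$ALWAYS_EXCEPT 6"	# test_3 re-enabled
fi
```

The commit message would then repeat a line like "Test-Parameters: testlist=sanity-quota ostfilesystemtype=zfs" once per requested session, enough times to show the test passes consistently. |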
| Comment by Gerrit Updater [ 07/Aug/18 ] |
|
Nathaniel Clark (nclark@whamcloud.com) uploaded a new patch: https://review.whamcloud.com/32956 |
| Comment by Gerrit Updater [ 17/Nov/18 ] |
|
Oleg Drokin (green@whamcloud.com) merged in patch https://review.whamcloud.com/32956/ |