[LU-9609] sanity-quota test_1: project write success, but expect edquot Created: 06/Jun/17  Updated: 22/Nov/18  Resolved: 22/Nov/18

Status: Resolved
Project: Lustre
Component/s: None
Affects Version/s: Lustre 2.10.0
Fix Version/s: Lustre 2.10.0

Type: Bug Priority: Minor
Reporter: Maloo Assignee: Wang Shilong (Inactive)
Resolution: Fixed Votes: 0
Labels: None

Issue Links:
Duplicate
duplicates LU-11678 sanity-quota test 1 fails with 'user ... Resolved
Severity: 3
Rank (Obsolete): 9223372036854775807

 Description   

This issue was created by maloo for sarah_lw <wei3.liu@intel.com>

This issue relates to the following test suite run: https://testing.hpdd.intel.com/test_sets/21b731e2-4a5d-11e7-bc6c-5254006e85c2.

The sub-test test_1 failed with the following error:

project write success, but expect edquot

Multiple project-quota-related sub-tests failed in this session. It may be related to LU-5245.

The test log shows:

--------------------------------------
project quota (block hardlimit:10 mb)
Usage: chattr [-RVf] [-+=aAcCdDeijsStTu] [-v version] files...
write ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups:
 [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1.sanity-quota/f1.sanity-quota-2] [count=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.0806863 s, 65.0 MB/s
write out of block quota ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups:
 [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1.sanity-quota/f1.sanity-quota-2] [count=5] [seek=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.0704575 s, 74.4 MB/s
CMD: trevis-36vm3,trevis-36vm7 lctl set_param -n osd*.*MDT*.force_sync=1
CMD: trevis-36vm8 lctl set_param -n osd*.*OS*.force_sync=1
running as uid/gid/euid/egid 60000/60000/60000/60000, groups:
 [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1.sanity-quota/f1.sanity-quota-2] [count=10] [seek=10]
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.0735773 s, 143 MB/s
Disk quotas for prj 1000 (pid 1000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre       0       0   10240       -       0       0       0       -
lustre-MDT0000_UUID
                      0       -       0       -       0       -       0       -
lustre-MDT0001_UUID
                      0       -       0       -       0       -       0       -
lustre-MDT0002_UUID
                      0       -       0       -       0       -       0       -
lustre-MDT0003_UUID
                      0       -       0       -       0       -       0       -
lustre-OST0000_UUID
                      0       -       0       -       -       -       -       -
lustre-OST0001_UUID
                      0       -       0       -       -       -       -       -
lustre-OST0002_UUID
                      0       -       0       -       -       -       -       -
lustre-OST0003_UUID
                      0       -       0       -       -       -       -       -
lustre-OST0004_UUID
                      0       -       0       -       -       -       -       -
lustre-OST0005_UUID
                      0       -       0       -       -       -       -       -
lustre-OST0006_UUID
                      0       -       0       -       -       -       -       -
lustre-OST0007_UUID
                      0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Files for project (1000):
 sanity-quota test_1: @@@@@@ FAIL: project write success, but expect edquot 


 Comments   
Comment by Sarah Liu [ 06/Jun/17 ]

Hi Wang Shilong,

This seems to be a project quota issue; can you please take a look?

Thanks

Comment by Wang Shilong (Inactive) [ 07/Jun/17 ]

This might be because the installed e2fsprogs version is too old to support project quota.

See the following errors:
project quota (block hardlimit:10 mb)
Usage: chattr [-RVf] [-+=aAcCdDeijsStTu] [-v version] files...

I will cook a patch to check whether e2fsprogs supports project quota; thanks.
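The usage line in the log hints at the root cause: the old chattr has no project-ID option, so the test never actually assigned a project ID and there was nothing for quota to enforce. A minimal sketch of such a capability check (a hypothetical helper, not the actual patch in change 27425; it assumes that a project-aware chattr, e2fsprogs 1.43 and later, advertises "[-p project]" in its usage string, while the old build in the log above does not):

```shell
#!/bin/sh
# Hedged sketch, not the actual fix: detect whether the local chattr
# can set project IDs before running project quota sub-tests.
chattr_supports_project() {
	# chattr with no arguments prints its usage string on stderr;
	# a project-aware build mentions the "-p project" option there.
	if chattr 2>&1 | grep -q -- "-p project"; then
		echo yes
	else
		echo no
	fi
}

if [ "$(chattr_supports_project)" = yes ]; then
	echo "chattr supports project IDs; running project quota tests"
else
	echo "chattr too old for project quota; skipping"
fi
```

With a check like this, the test suite would skip the project quota sub-tests on nodes with old e2fsprogs instead of failing with "project write success, but expect edquot".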

Comment by Wang Shilong (Inactive) [ 07/Jun/17 ]

The fix is wrapped inside this patch:

https://review.whamcloud.com/#/c/27425/

Comment by Wang Shilong (Inactive) [ 16/Aug/18 ]

We can close this ticket.

Generated at Sat Feb 10 02:27:42 UTC 2024 using Jira 9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c.