[LU-7467] Quota accounting wrong Created: 23/Nov/15  Updated: 08/Sep/16  Resolved: 08/Sep/16

Status: Resolved
Project: Lustre
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Minor
Reporter: Mahmoud Hanafi Assignee: Niu Yawei (Inactive)
Resolution: Cannot Reproduce Votes: 0
Labels: None

Issue Links:
Related
is related to LU-7459 Incorrect file count with lfs quota Resolved
Severity: 3
Rank (Obsolete): 9223372036854775807

 Description   

Quota is reporting zero inode usage for a user.

root.pfe21 # lfs quota -u mcantiel /nobackupp8
Disk quotas for user mcantiel (uid 30403):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
     /nobackupp8 59604412  530000000 1100000000       -       0  100000  150000       

But the user has >74K files:

root.pfe21 # find /nobackupp8/yjiang2/StarMidMHD3 -user mcantiel 2>/dev/null | wc -l
74223

Example file

-rw-r--r-- 1 mcantiel s1439 819753 Nov 17 22:20 /nobackupp8/yjiang2/StarMidMHD3/Data/id1569/Star-id1569.0029.vtk
debugfs:  stat Star-id1569.0029.vtk
Inode: 189531663   Type: regular    Mode:  0644   Flags: 0x0
Generation: 3904475524    Version: 0x000000c0:2479d74b
User: 30403   Group: 41439   Size: 0
File ACL: 0    Directory ACL: 0
Links: 1   Blockcount: 0
Fragment:  Address: 0    Number: 0    Size: 0
 ctime: 0x564c909d:00000000 -- Wed Nov 18 06:52:13 2015
 atime: 0x5653822b:00000000 -- Mon Nov 23 13:16:27 2015
 mtime: 0x564c18c1:72aae770 -- Tue Nov 17 22:20:49 2015
crtime: 0x564c18c1:72aae770 -- Tue Nov 17 22:20:49 2015
Size of extra inode fields: 28
Extended attributes stored in inode body: 
  lma = "00 00 00 00 00 00 00 00 77 6e 3b 60 03 00 00 00 69 53 00 00 00 00 00 00 " (24)
  lma: fid=[0x3603b6e77:0x5369:0x0] compat=0 incompat=0
  lov = "d0 0b d1 0b 01 00 00 00 69 53 00 00 00 00 00 00 77 6e 3b 60 03 00 00 00 00 00 10 00 01 00 00 00 2c 43 d0 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 5b 00 00 00 " (56)
  link = "df f1 ea 11 01 00 00 00 3e 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 26 00 00 00 03 60 3b 46 a9 00 01 3a 2f 00 00 00 00 53 74 61 72 2d 69 64 31 35 36 39 2e 30 30 32 39 2e 76 74 6b " (62)
BLOCKS:



 Comments   
Comment by Niu Yawei (Inactive) [ 24/Nov/15 ]

This looks similar to LU-7459: the inode accounting for a certain user has somehow become broken.

Mahmoud, could you provide more details about the problem? What are the Lustre and kernel versions? Does the accounting change when you add or remove files? Are there any error messages in dmesg when you add/remove files (or query quota)?
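
For reference, a minimal way to check this might look like the following, run from a client (the test file name is hypothetical):

# create one file as the affected user, then re-query the quota
su - mcantiel -c 'touch /nobackupp8/quota_test_file'
lfs quota -u mcantiel /nobackupp8
# look for quota-related errors on the client and servers
dmesg | tail -50

If the files count stays at 0 after the create, the on-disk inode accounting is likely stale rather than a one-off reporting glitch.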

Comment by Mahmoud Hanafi [ 24/Nov/15 ]

The server is running kernel 2.6.32-431.29.2.el6 with Lustre 2.5.3.
The clients are running SLES kernel 3.0.101-0.47.67.2 with Lustre 2.5.3.

No errors are seen when querying quota. Adding a new file/directory does change the usage:
lfs quota -u mcantiel /nobackupp8
Disk quotas for user mcantiel (uid 30403):
     Filesystem  kbytes       quota        limit   grace   files    quota    limit   grace
     /nobackupp8 59609128 8589934592 10737418240       -      11  1000000  2000000       -

I suspect that this discrepancy may have happened when the MDS crashed a few days ago. We had this happen once before to a number of users when the MDS crashed due to a power outage.

Comment by Niu Yawei (Inactive) [ 25/Nov/15 ]

In that case, we need to run quotacheck (by disabling and then re-enabling the quota feature on the MDT device; the MDT needs to be offline) to fix the inconsistent data.
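
A minimal sketch of that procedure, assuming an ldiskfs MDT whose backend device is /dev/mdtdev (device name hypothetical; please check the Lustre manual for your release before running this):

# unmount the MDT so the backend filesystem is offline
umount /mnt/mdt
# drop and re-add the quota feature; re-enabling it rescans the filesystem
# and rebuilds the per-user/group accounting files
tune2fs -O ^quota /dev/mdtdev
tune2fs -O quota /dev/mdtdev
# optionally let e2fsck verify the result before restarting
e2fsck -f /dev/mdtdev
# start the MDT again
mount -t lustre /dev/mdtdev /mnt/mdt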

Comment by Mahmoud Hanafi [ 01/Dec/15 ]

This method of fixing these discrepancies is not preferred. We are not able to take downtime every time the quotas are inconsistent.

Comment by Niu Yawei (Inactive) [ 02/Dec/15 ]

Yes, I fully understand your concern, but so far it's the only way to fix an already corrupted accounting file (it's actually part of the e2fsck functionality, so it must be done offline).

The reason for the corrupted accounting file is unclear at the moment. If there is any evidence or clue indicating that the corruption was caused by a certain operation or race, we need to trace it down and try to find the bug (probably in the kernel or e2fsprogs).

Did you run e2fsck when the MDT recovered from the power outage? What version of e2fsprogs is on the server?
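
On an el6 server, the installed e2fsprogs version can usually be checked with, for example:

rpm -q e2fsprogs
debugfs -V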

Comment by Mahmoud Hanafi [ 08/Sep/16 ]

Close.

Comment by Peter Jones [ 08/Sep/16 ]

ok Mahmoud
