[LU-9192] lfs quota reports wrong file count when using ZFS backend Created: 07/Mar/17  Updated: 17/Mar/17  Resolved: 14/Mar/17

Status: Resolved
Project: Lustre
Component/s: None
Affects Version/s: Lustre 2.8.0
Fix Version/s: None

Type: Bug Priority: Major
Reporter: Rick Mohr Assignee: WC Triage
Resolution: Duplicate Votes: 0
Labels: None
Environment:

CentOS Linux release 7.2.1511
Kernel 3.10.0-327.4.4.el7.x86_64
Lustre 2.8.0


Issue Links:
Related
is related to LU-2435 inode accounting in osd-zfs is racy Resolved
Epic/Theme: Quota, zfs
Severity: 3
Rank (Obsolete): 9223372036854775807

 Description   

I recently set up a Lustre 2.8 file system that uses ZFS for the backend storage (both on the MDT and OSTs). When I was doing some testing, I noticed that the output from lfs quota seemed odd. While the quota information for the amount of used space seemed correct, the info on the number of files was off. For example, when I started with no files, “lfs quota” showed this:

Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
/lustre/xxxxx
      33       0       0       - 18446744073709551595       0       0       -

As I created empty files one by one, the number of files kept incrementing. Once I had created 21 files, lfs quota reported that I had zero files:

Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
/lustre/xxxxx
     211       0       0       -       0       0       0       -

It looks like an unsigned counter underflow (18446744073709551595 is 2^64 - 21, so the counter effectively started at -21 and reached zero once the 21 files were created), but I am not sure whether this is an issue with Lustre or with ZFS.
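A minimal sketch of that wraparound, using nothing but plain unsigned 64-bit arithmetic (this is not Lustre or ZFS code; the loop count of 21 simply mirrors the files created in this report):

/* A 64-bit unsigned counter decremented 21 more times than it was
 * incremented wraps around to 2^64 - 21, which is exactly the value
 * lfs quota printed. Creating 21 files then brings it back to 0.
 */
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
    uint64_t files = 0;

    /* 21 "extra" decrements wrap the counter below zero. */
    for (int i = 0; i < 21; i++)
        files--;
    printf("after underflow: %" PRIu64 "\n", files);   /* 18446744073709551595 */

    /* Creating 21 files increments it back to zero. */
    for (int i = 0; i < 21; i++)
        files++;
    printf("after creating 21 files: %" PRIu64 "\n", files);   /* 0 */

    return 0;
}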



 Comments   
Comment by Andreas Dilger [ 14/Mar/17 ]

Rick, this is (unfortunately) a known issue. New inode quota accounting for ZFS is coming in the upcoming 0.7.0 release, along with a patch for Lustre in LU-2435.

Comment by Rick Mohr [ 14/Mar/17 ]

Thanks for the information. When the fix comes, if we upgrade Lustre/ZFS, will it automatically fix the quotas or will we need to run some command to correct the current quota info?

Comment by Andreas Dilger [ 17/Mar/17 ]

If you upgrade both ZFS and Lustre, I believe it will transparently upgrade the quotas.
