[LU-14068] quota transfer failed: rc = -75. Is project enforcement enabled on the ldiskfs filesystem Created: 22/Oct/20  Updated: 30/Oct/20  Resolved: 30/Oct/20

Status: Resolved
Project: Lustre
Component/s: None
Affects Version/s: Lustre 2.12.5
Fix Version/s: None

Type: Bug Priority: Major
Reporter: Stephane Thiell Assignee: Wang Shilong (Inactive)
Resolution: Duplicate Votes: 0
Labels: LTS12
Environment:

2.12.5_7.srcc (https://github.com/stanford-rc/lustre/commits/b2_12_5)


Attachments: oak-md1-s2.dk.log.gz
Issue Links:
Related
is related to LU-13519 expand inode if possible for project ... Resolved
Severity: 3
Rank (Obsolete): 9223372036854775807

 Description   

After upgrading the Oak servers from 2.10.8 to 2.12.5, we are planning to enable project quotas as we did on Fir. However, I'm hitting the following problem:

[root@oak-gw02 ~]# lfs project -p $PROJID -s -r $PROJPATH
lfs: failed to set xattr for '/oak/stanford/groups/ruthm/sthiell/from_krb5_lookout2': Value too large for defined data type
lfs: failed to set xattr for '/oak/stanford/groups/ruthm/sthiell/.blah.txt.swp': Value too large for defined data type

MDS:

Oct 22 16:00:56 oak-md1-s2 kernel: LustreError: 7947:0:(osd_handler.c:2998:osd_quota_transfer()) dm-4: quota transfer failed: rc = -75. Is project enforcement enabled on the ldiskfs filesystem?
Oct 22 16:00:56 oak-md1-s2 kernel: LustreError: 7947:0:(osd_handler.c:2998:osd_quota_transfer()) Skipped 1072 previous similar messages

A full debug log from the MDS when this happens is attached as oak-md1-s2.dk.log.gz.
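
For reference, errno 75 on Linux is EOVERFLOW, which is the same "Value too large for defined data type" error that lfs reports on the client. A quick way to confirm the mapping, assuming python3 is available on the node:

[root@oak-gw02 ~]# python3 -c 'import errno, os; print(errno.EOVERFLOW, os.strerror(errno.EOVERFLOW))'
75 Value too large for defined data type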

Project quotas are enabled and also enforced:

[root@oak-md1-s2 ~]# lctl get_param osd-*.*.quota_slave.info
osd-ldiskfs.oak-MDT0000.quota_slave.info=
target name:    oak-MDT0000
pool ID:        0
type:           md
quota enabled:  gp
conn to master: setup
space acct:     ugp
user uptodate:  glb[0],slv[0],reint[0]
group uptodate: glb[1],slv[1],reint[0]
project uptodate: glb[1],slv[1],reint[0]
osd-ldiskfs.oak-MDT0003.quota_slave.info=
target name:    oak-MDT0003
pool ID:        0
type:           md
quota enabled:  gp
conn to master: setup
space acct:     ugp
user uptodate:  glb[0],slv[0],reint[0]
group uptodate: glb[1],slv[1],reint[0]
project uptodate: glb[1],slv[1],reint[0]

Note that this MDT has only 512-byte inodes, and we used resize2fs once in the past to grow it:

[root@oak-md1-s2 ~]# dumpe2fs -h /dev/mapper/md1-rbod1-ssd-mdt0 
dumpe2fs 1.45.6.wc2 (28-Sep-2020)
Filesystem volume name:   oak-MDT0000
Last mounted on:          /
Filesystem UUID:          0ed1cfdd-8e25-4b6b-9cb9-7be1e89d70ad
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery mmp flex_bg dirdata sparse_super large_file huge_file uninit_bg dir_nlink quota project
Filesystem flags:         signed_directory_hash 
Default mount options:    user_xattr acl
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              1869611008
Block count:              934803456
Reserved block count:     46740172
Free blocks:              481199422
Free inodes:              1155160366
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      787
Blocks per group:         16384
Fragments per group:      16384
Inodes per group:         32768
Inode blocks per group:   4096
Flex block group size:    16
Filesystem created:       Mon Feb 13 12:36:07 2017
Last mount time:          Mon Oct 19 09:37:19 2020
Last write time:          Mon Oct 19 09:37:19 2020
Mount count:              10
Maximum mount count:      -1
Last checked:             Tue Sep 10 06:37:13 2019
Check interval:           0 (<none>)
Lifetime writes:          168 TB
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:	          512
Required extra isize:     28
Desired extra isize:      28
Journal inode:            8
Default directory hash:   half_md4
Directory Hash Seed:      be3bd996-8da4-4d22-80e4-e7a4c8ce22a0
Journal backup:           inode blocks
MMP block number:         13560
MMP update interval:      5
User quota inode:         3
Group quota inode:        4
Project quota inode:      325
Journal features:         journal_incompat_revoke
Journal size:             4096M
Journal length:           1048576
Journal sequence:         0x40cddf6c
Journal start:            54810
MMP_block:
    mmp_magic: 0x4d4d50
    mmp_check_interval: 10
    mmp_sequence: 0x00dd28
    mmp_update_date: Thu Oct 22 16:15:18 2020
    mmp_update_time: 1603408518
    mmp_node_name: oak-md1-s2
    mmp_device_name: dm-4


 Comments   
Comment by Stephane Thiell [ 22/Oct/20 ]

So far, I haven't seen the same problem when setting project IDs on files hosted on oak-MDT0001, which uses 1,024-byte inodes. That's the main difference I can see.

[root@oak-md1-s1 ~]# dumpe2fs -h /dev/mapper/md1-rbod1-ssd-mdt1
dumpe2fs 1.45.6.wc2 (28-Sep-2020)
Filesystem volume name:   oak-MDT0001
Last mounted on:          /
Filesystem UUID:          169de89e-6b5d-4480-b118-8f726d7af07b
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery mmp flex_bg dirdata sparse_super large_file huge_file uninit_bg dir_nlink quota project
Filesystem flags:         signed_directory_hash 
Default mount options:    user_xattr acl
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              1495554576
Block count:              934803456
Reserved block count:     46739624
Free blocks:              549513350
Free inodes:              1357906233
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      834
Blocks per group:         20472
Fragments per group:      20472
Inodes per group:         32752
Inode blocks per group:   8188
Flex block group size:    16
Filesystem created:       Thu Oct 18 11:43:21 2018
Last mount time:          Wed Oct 21 08:32:20 2020
Last write time:          Wed Oct 21 08:32:20 2020
Mount count:              9
Maximum mount count:      -1
Last checked:             Wed Sep 30 11:32:31 2020
Check interval:           0 (<none>)
Lifetime writes:          21 TB
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:	          1024
Required extra isize:     32
Desired extra isize:      32
Journal inode:            8
Default directory hash:   half_md4
Directory Hash Seed:      c48690f3-5625-496d-adb1-9c3288cc8b00
Journal backup:           inode blocks
MMP block number:         17606
MMP update interval:      5
User quota inode:         3
Group quota inode:        4
Project quota inode:      113
Journal features:         journal_incompat_revoke
Journal size:             4096M
Journal length:           1048576
Journal sequence:         0x018fbaff
Journal start:            268485
MMP_block:
    mmp_magic: 0x4d4d50
    mmp_check_interval: 10
    mmp_sequence: 0x0059dd
    mmp_update_date: Thu Oct 22 16:29:21 2020
    mmp_update_time: 1603409361
    mmp_node_name: oak-md1-s1
    mmp_device_name: dm-3 
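
To compare just the relevant fields from the two dumps above (the values below are reproduced from those outputs; dumpe2fs prints its version banner to stderr, hence the redirect):

[root@oak-md1-s2 ~]# dumpe2fs -h /dev/mapper/md1-rbod1-ssd-mdt0 2>/dev/null | grep -E 'Inode size|extra isize'
Inode size:               512
Required extra isize:     28
Desired extra isize:      28
[root@oak-md1-s1 ~]# dumpe2fs -h /dev/mapper/md1-rbod1-ssd-mdt1 2>/dev/null | grep -E 'Inode size|extra isize'
Inode size:               1024
Required extra isize:     32
Desired extra isize:      32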
Comment by Peter Jones [ 23/Oct/20 ]

Shilong

Can you please advise?

Thanks

Peter

Comment by Wang Shilong (Inactive) [ 23/Oct/20 ]

Yup, the problem is:

"Required extra isize: 28"

We need "Required extra isize: 32" at least, there is a patch to fix this,

https://review.whamcloud.com/#/c/38505/
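
For context, the on-disk project ID lives in the inode's extra space just past the first 28 bytes, so inodes formatted with only 28 bytes of extra fields have no room for it and the quota transfer fails with -EOVERFLOW. One way to inspect an individual inode's extra-field size, assuming debugfs from e2fsprogs is available (the inode number and output below are illustrative, and values read from a mounted, live MDT may be stale):

[root@oak-md1-s2 ~]# debugfs -R 'stat <12345>' /dev/mapper/md1-rbod1-ssd-mdt0 2>/dev/null | grep 'extra inode fields'
Size of extra inode fields: 28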

Comment by Stephane Thiell [ 24/Oct/20 ]

Thank you Shilong! This is very helpful. Can this patch be added to b2_12?

Comment by Stephane Thiell [ 30/Oct/20 ]

We have the patch running in production and it did fix our issue. I have been able to assign project IDs to many files on the MDT with 512-byte inodes. We have seen only a single occurrence of "quota transfer failed: rc = -75" since then; I'm not sure why, but it's probably an edge case. Thanks again for the patch!

Comment by Peter Jones [ 30/Oct/20 ]

That's good, Stephane. We're tracking the landing of the patch under LU-13519.
