Lustre / LU-14068

quota transfer failed: rc = -75. Is project enforcement enabled on the ldiskfs filesystem?


    Description

      After the 2.10.8 to 2.12.5 upgrade on the Oak servers, we are planning to enable project quotas like we did on Fir. However, I'm hitting the following problem:

      [root@oak-gw02 ~]# lfs project -p $PROJID -s -r $PROJPATH
      lfs: failed to set xattr for '/oak/stanford/groups/ruthm/sthiell/from_krb5_lookout2': Value too large for defined data type
      lfs: failed to set xattr for '/oak/stanford/groups/ruthm/sthiell/.blah.txt.swp': Value too large for defined data type
      
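      For reference, errno 75 on Linux is EOVERFLOW, i.e. the same "Value too large for defined data type" string that lfs prints above and the rc = -75 logged on the MDS below. A quick way to double-check the mapping (not from the original report, and assuming python3 is installed on the node):

      python3 -c 'import errno, os; print(errno.errorcode[75], os.strerror(75))'
      # EOVERFLOW Value too large for defined data type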

      MDS:

      Oct 22 16:00:56 oak-md1-s2 kernel: LustreError: 7947:0:(osd_handler.c:2998:osd_quota_transfer()) dm-4: quota transfer failed: rc = -75. Is project enforcement enabled on the ldiskfs filesystem?
      Oct 22 16:00:56 oak-md1-s2 kernel: LustreError: 7947:0:(osd_handler.c:2998:osd_quota_transfer()) Skipped 1072 previous similar messages
      
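      The message asks whether project enforcement is enabled on the ldiskfs filesystem itself. As a sanity check (command not from the original report; the device path is the one shown in the dumpe2fs output further down), the feature flag and quota inode can be read directly from the backing device:

      # read-only; confirms the "project" feature flag and the project quota inode
      tune2fs -l /dev/mapper/md1-rbod1-ssd-mdt0 | grep -Ei 'features|project'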

      Full debug log from the MDS, captured when this happens, is attached as oak-md1-s2.dk.log.gz.

      Project quotas are enabled and also enforced:

      [root@oak-md1-s2 ~]# lctl get_param osd-*.*.quota_slave.info
      osd-ldiskfs.oak-MDT0000.quota_slave.info=
      target name:    oak-MDT0000
      pool ID:        0
      type:           md
      quota enabled:  gp
      conn to master: setup
      space acct:     ugp
      user uptodate:  glb[0],slv[0],reint[0]
      group uptodate: glb[1],slv[1],reint[0]
      project uptodate: glb[1],slv[1],reint[0]
      osd-ldiskfs.oak-MDT0003.quota_slave.info=
      target name:    oak-MDT0003
      pool ID:        0
      type:           md
      quota enabled:  gp
      conn to master: setup
      space acct:     ugp
      user uptodate:  glb[0],slv[0],reint[0]
      group uptodate: glb[1],slv[1],reint[0]
      project uptodate: glb[1],slv[1],reint[0]
      
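      For completeness, "quota enabled: gp" above means group and project quota enforcement are active on these MDTs (user enforcement is off). A sketch of how that state is set and re-checked, assuming the filesystem name oak (commands not from the original report):

      # run once on the MGS: enable group+project quota enforcement on the MDTs
      lctl conf_param oak.quota.mdt=gp
      # then on each MDS, confirm that "quota enabled:" lists g and p
      lctl get_param osd-*.*.quota_slave.info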

      Note that this MDT only has 512-byte inodes, and we have used resize2fs once in the past to grow it:

      [root@oak-md1-s2 ~]# dumpe2fs -h /dev/mapper/md1-rbod1-ssd-mdt0 
      dumpe2fs 1.45.6.wc2 (28-Sep-2020)
      Filesystem volume name:   oak-MDT0000
      Last mounted on:          /
      Filesystem UUID:          0ed1cfdd-8e25-4b6b-9cb9-7be1e89d70ad
      Filesystem magic number:  0xEF53
      Filesystem revision #:    1 (dynamic)
      Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery mmp flex_bg dirdata sparse_super large_file huge_file uninit_bg dir_nlink quota project
      Filesystem flags:         signed_directory_hash 
      Default mount options:    user_xattr acl
      Filesystem state:         clean
      Errors behavior:          Continue
      Filesystem OS type:       Linux
      Inode count:              1869611008
      Block count:              934803456
      Reserved block count:     46740172
      Free blocks:              481199422
      Free inodes:              1155160366
      First block:              0
      Block size:               4096
      Fragment size:            4096
      Reserved GDT blocks:      787
      Blocks per group:         16384
      Fragments per group:      16384
      Inodes per group:         32768
      Inode blocks per group:   4096
      Flex block group size:    16
      Filesystem created:       Mon Feb 13 12:36:07 2017
      Last mount time:          Mon Oct 19 09:37:19 2020
      Last write time:          Mon Oct 19 09:37:19 2020
      Mount count:              10
      Maximum mount count:      -1
      Last checked:             Tue Sep 10 06:37:13 2019
      Check interval:           0 (<none>)
      Lifetime writes:          168 TB
      Reserved blocks uid:      0 (user root)
      Reserved blocks gid:      0 (group root)
      First inode:              11
      Inode size:	          512
      Required extra isize:     28
      Desired extra isize:      28
      Journal inode:            8
      Default directory hash:   half_md4
      Directory Hash Seed:      be3bd996-8da4-4d22-80e4-e7a4c8ce22a0
      Journal backup:           inode blocks
      MMP block number:         13560
      MMP update interval:      5
      User quota inode:         3
      Group quota inode:        4
      Project quota inode:      325
      Journal features:         journal_incompat_revoke
      Journal size:             4096M
      Journal length:           1048576
      Journal sequence:         0x40cddf6c
      Journal start:            54810
      MMP_block:
          mmp_magic: 0x4d4d50
          mmp_check_interval: 10
          mmp_sequence: 0x00dd28
          mmp_update_date: Thu Oct 22 16:15:18 2020
          mmp_update_time: 1603408518
          mmp_node_name: oak-md1-s2
          mmp_device_name: dm-4
      
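      Since ldiskfs stores the project ID in the extended portion of the inode, the amount of extra inode space actually used by older inodes on this resize2fs-grown, 512-byte-inode MDT may be worth inspecting. One hedged way to do that for an affected file (command not part of the original report; the path is a placeholder for the file's location under ROOT/ in the MDT's ldiskfs namespace):

      # debugfs opens the device read-only by default; check the
      # "Size of extra inode fields:" line in the stat output
      debugfs -R 'stat ROOT/<path-to-affected-file>' /dev/mapper/md1-rbod1-ssd-mdt0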

      Attachments

        oak-md1-s2.dk.log.gz

      People

        Assignee: Wang Shilong (Inactive)
        Reporter: Stephane Thiell