  Lustre / LU-5192

upgrade 2.1 -> 2.4.3 quota errors


Details

    • Type: Bug
    • Resolution: Not a Bug
    • Priority: Major
    • None
    • Affects Version/s: Lustre 2.4.3
    • None
    • 3
    • 14510

    Description

      We upgraded our 2.1 servers to 2.4.3,

      then ran
      lctl --quota on all OSTs and the MDT.

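      For reference, in 2.4 the enforcement switch is set once from the MGS with conf_param; the following is a sketch based on the 2.4 manual, not necessarily the exact commands our upgrade procedure used:

      # enable user/group quota enforcement for metadata and data (run on the MGS node)
      lctl conf_param nbp8.quota.mdt=ug
      lctl conf_param nbp8.quota.ost=ug
      # then check the slave state on each server
      lctl get_param osd-*.*.quota_slave.info
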
      We are getting the following errors:

      pfe21 /nobackupp8/mhanafi # lfs  quota -v -u mhanafi /nobackupp8
      Disk quotas for user mhanafi (uid 11312):
           Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
          /nobackupp8 [60908]  1275423612 75000000000       -  108850  100000  200000       -
      nbp8-MDT0000_UUID
                        60908       -       0       -  108850       -       0       -
      nbp8-OST0000_UUID
                        83168       -       0       -       -       -       -       -
      nbp8-OST0001_UUID
                        14892       -       0       -       -       -       -       -
      nbp8-OST0002_UUID
                        41212       -       0       -       -       -       -       -
      nbp8-OST0003_UUID
                        70332       -       0       -       -       -       -       -
      nbp8-OST0004_UUID
                        60488       -       0       -       -       -       -       -
      nbp8-OST0005_UUID
                        39652       -       0       -       -       -       -       -
      nbp8-OST0006_UUID
                        60868       -       0       -       -       -       -       -
      Total allocated inode limit: 0, total allocated block limit: 0
      Some errors happened when getting quota info. Some devices may be not working or deactivated. The data in "[]" is inaccurate.
      

      MDS

      nbp8-mds1 ~ # lctl get_param osd-*.*.quota_slave.info
      osd-ldiskfs.nbp8-MDT0000.quota_slave.info=
      target name:    nbp8-MDT0000
      pool ID:        0
      type:           md
      quota enabled:  none
      conn to master: not setup yet
      space acct:     ug
      user uptodate:  glb[0],slv[0],reint[1]
      group uptodate: glb[0],slv[0],reint[1]
      

      OSS

      nbp8-oss2 ~ # lctl get_param osd-*.*.quota_slave.info
      osd-ldiskfs.nbp8-OST0001.quota_slave.info=
      target name:    nbp8-OST0001
      pool ID:        0
      type:           dt
      quota enabled:  none
      conn to master: not setup yet
      space acct:     ug
      user uptodate:  glb[0],slv[0],reint[1]
      group uptodate: glb[0],slv[0],reint[1]
      osd-ldiskfs.nbp8-OST001b.quota_slave.info=
      target name:    nbp8-OST001b
      pool ID:        0
      type:           dt
      quota enabled:  none
      conn to master: not setup yet
      space acct:     ug
      user uptodate:  glb[0],slv[0],reint[1]
      group uptodate: glb[0],slv[0],reint[1]
      osd-ldiskfs.nbp8-OST0035.quota_slave.info=
      target name:    nbp8-OST0035
      pool ID:        0
      type:           dt
      quota enabled:  none
      conn to master: not setup yet
      space acct:     ug
      user uptodate:  glb[0],slv[0],reint[1]
      group uptodate: glb[0],slv[0],reint[1]
      osd-ldiskfs.nbp8-OST004f.quota_slave.info=
      target name:    nbp8-OST004f
      pool ID:        0
      type:           dt
      quota enabled:  none
      conn to master: not setup yet
      space acct:     ug
      user uptodate:  glb[0],slv[0],reint[1]
      group uptodate: glb[0],slv[0],reint[1]
      osd-ldiskfs.nbp8-OST0069.quota_slave.info=
      target name:    nbp8-OST0069
      pool ID:        0
      type:           dt
      quota enabled:  none
      conn to master: not setup yet
      space acct:     ug
      user uptodate:  glb[0],slv[0],reint[1]
      group uptodate: glb[0],slv[0],reint[1]
      osd-ldiskfs.nbp8-OST0083.quota_slave.info=
      target name:    nbp8-OST0083
      pool ID:        0
      type:           dt
      quota enabled:  none
      conn to master: not setup yet
      space acct:     ug
      user uptodate:  glb[0],slv[0],reint[1]
      group uptodate: glb[0],slv[0],reint[1]
      osd-ldiskfs.nbp8-OST009d.quota_slave.info=
      target name:    nbp8-OST009d
      pool ID:        0
      type:           dt
      quota enabled:  none
      conn to master: not setup yet
      space acct:     ug
      user uptodate:  glb[0],slv[0],reint[1]
      group uptodate: glb[0],slv[0],reint[1]
      osd-ldiskfs.nbp8-OST00b7.quota_slave.info=
      target name:    nbp8-OST00b7
      pool ID:        0
      type:           dt
      quota enabled:  none
      conn to master: not setup yet
      space acct:     ug
      user uptodate:  glb[0],slv[0],reint[1]
      group uptodate: glb[0],slv[0],reint[1]
      osd-ldiskfs.nbp8-OST00d1.quota_slave.info=
      target name:    nbp8-OST00d1
      pool ID:        0
      type:           dt
      quota enabled:  none
      conn to master: not setup yet
      space acct:     ug
      user uptodate:  glb[0],slv[0],reint[1]
      group uptodate: glb[0],slv[0],reint[1]
      osd-ldiskfs.nbp8-OST00eb.quota_slave.info=
      target name:    nbp8-OST00eb
      pool ID:        0
      type:           dt
      quota enabled:  none
      conn to master: not setup yet
      space acct:     ug
      user uptodate:  glb[0],slv[0],reint[1]
      group uptodate: glb[0],slv[0],reint[1]
      osd-ldiskfs.nbp8-OST0105.quota_slave.info=
      target name:    nbp8-OST0105
      pool ID:        0
      type:           dt
      quota enabled:  none
      conn to master: not setup yet
      space acct:     ug
      user uptodate:  glb[0],slv[0],reint[1]
      group uptodate: glb[0],slv[0],reint[1]
      osd-ldiskfs.nbp8-OST011f.quota_slave.info=
      target name:    nbp8-OST011f
      pool ID:        0
      type:           dt
      quota enabled:  none
      conn to master: not setup yet
      space acct:     ug
      user uptodate:  glb[0],slv[0],reint[1]
      group uptodate: glb[0],slv[0],reint[1]
      
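      All of the quota slaves above report "quota enabled: none" and "conn to master: not setup yet". A quick way to scan that state across every target on a server (the same lctl get_param call as above, just filtered):

      lctl get_param osd-*.*.quota_slave.info | grep -E 'target name|quota enabled|conn to master'
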
      nbp8-mds1 ~ # lctl dl
        0 UP osd-ldiskfs MGS-osd MGS-osd_UUID 5
        1 UP mgs MGS MGS 23455
        2 UP mgc MGC10.151.27.60@o2ib 3b2ba8a8-1b82-764e-a3ef-c10d5df8bf04 5
        3 UP osd-ldiskfs nbp8-MDT0000-osd nbp8-MDT0000-osd_UUID 319
        4 UP mds MDS MDS_uuid 3
        5 UP lod nbp8-MDT0000-mdtlov nbp8-MDT0000-mdtlov_UUID 4
        6 UP mdt nbp8-MDT0000 nbp8-MDT0000_UUID 23401
        7 UP mdd nbp8-MDD0000 nbp8-MDD0000_UUID 4
        8 UP qmt nbp8-QMT0000 nbp8-QMT0000_UUID 4
        9 UP osp nbp8-OST0063-osc-MDT0000 nbp8-MDT0000-mdtlov_UUID 5
       10 UP osp nbp8-OST003d-osc-MDT0000 nbp8-MDT0000-mdtlov_UUID 5
       11 UP osp nbp8-OST001c-osc-MDT0000 nbp8-MDT0000-mdtlov_UUID 5
       12 UP osp nbp8-OST012c-osc-MDT0000 nbp8-MDT0000-mdtlov_UUID 5
      .
      .
      .
      .
      
      nbp8-mds1 ~ # tune2fs -l /dev/mapper/nbp8--vg-mdt8 
      tune2fs 1.42.7.wc2 (07-Nov-2013)
      Filesystem volume name:   nbp8-MDT0000
      Last mounted on:          /
      Filesystem UUID:          04d0b84c-180c-4230-9fa6-fcbede07f1bc
      Filesystem magic number:  0xEF53
      Filesystem revision #:    1 (dynamic)
      Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery flex_bg dirdata sparse_super large_file huge_file uninit_bg dir_nlink extra_isize quota
      Filesystem flags:         signed_directory_hash 
      Default mount options:    user_xattr acl
      Filesystem state:         clean
      Errors behavior:          Continue
      Filesystem OS type:       Linux
      Inode count:              966380512
      Block count:              483184640
      Reserved block count:     0
      Free blocks:              325181297
      Free inodes:              827897945
      First block:              0
      Block size:               4096
      Fragment size:            4096
      Reserved GDT blocks:      1024
      Blocks per group:         16376
      Fragments per group:      16376
      Inodes per group:         32752
      Inode blocks per group:   4094
      Flex block group size:    16
      Filesystem created:       Wed Jun  5 17:40:07 2013
      Last mount time:          Wed Jun 11 18:15:54 2014
      Last write time:          Wed Jun 11 18:15:54 2014
      Mount count:              99
      Maximum mount count:      -1
      Last checked:             Wed Jun  5 17:40:07 2013
      Check interval:           0 (<none>)
      Lifetime writes:          48 TB
      Reserved blocks uid:      0 (user root)
      Reserved blocks gid:      0 (group root)
      First inode:              11
      Inode size:	          512
      Required extra isize:     28
      Desired extra isize:      28
      Journal UUID:             4c0a58b3-e261-47cc-80dc-6b45346e8db6
      Journal device:	          0xfd01
      Default directory hash:   half_md4
      Directory Hash Seed:      6ee52b70-b975-477f-9136-9b5bd0eb10b4
      Journal backup:           inode blocks
      User quota inode:         3
      Group quota inode:        4
      
      tune2fs 1.42.7.wc2 (07-Nov-2013)
      Filesystem volume name:   nbp8-OST0001
      Last mounted on:          /
      Filesystem UUID:          819a930e-2e30-48c8-b666-4d1db350bcb7
      Filesystem magic number:  0xEF53
      Filesystem revision #:    1 (dynamic)
      Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery extent 64bit flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize quota
      Filesystem flags:         signed_directory_hash 
      Default mount options:    user_xattr acl
      Filesystem state:         clean
      Errors behavior:          Continue
      Filesystem OS type:       Linux
      Inode count:              22888704
      Block count:              5859483648
      Reserved block count:     0
      Free blocks:              1795574486
      Free inodes:              21669767
      First block:              0
      Block size:               4096
      Fragment size:            4096
      Reserved GDT blocks:      1024
      Blocks per group:         32768
      Fragments per group:      32768
      Inodes per group:         128
      Inode blocks per group:   8
      Flex block group size:    256
      Filesystem created:       Wed Jun  5 19:08:44 2013
      Last mount time:          Wed Jun 11 18:16:40 2014
      Last write time:          Wed Jun 11 18:16:40 2014
      Mount count:              25
      Maximum mount count:      -1
      Last checked:             Wed Jun  5 19:08:44 2013
      Check interval:           0 (<none>)
      Lifetime writes:          42 TB
      Reserved blocks uid:      0 (user root)
      Reserved blocks gid:      0 (group root)
      First inode:              11
      Inode size:	          256
      Required extra isize:     28
      Desired extra isize:      28
      Journal UUID:             fe5db948-55c4-4b70-9b01-2eecf994bb91
      Journal device:	          0xfd00
      Default directory hash:   half_md4
      Directory Hash Seed:      78f2ecbc-31f7-4764-9391-12de7c25a94a
      User quota inode:         3
      Group quota inode:        4
      
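      Both targets above carry the ldiskfs "quota" feature and the user/group quota inodes (3 and 4), so backend space accounting is in place. A per-device spot check (a sketch; substitute the actual device path for each target):

      tune2fs -l /dev/mapper/nbp8--vg-mdt8 | grep -E 'Filesystem features|quota inode'
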


          People

            Assignee: Niu Yawei (Inactive)
            Reporter: Mahmoud Hanafi
            Votes: 0
            Watchers: 5
