LU-2341: e2fsprogs: m_quota: enable quota feature on mkfs: failed

Details

    • Type: Bug
    • Resolution: Fixed
    • Priority: Major
    • Fix Version: Lustre 2.4.0

    Description

      While building e2fsprogs against the master branch, the following error occurred:

      m_quota: enable quota feature on mkfs: failed
      --- m_quota/expect.1    2012-11-16 03:35:44.000000000 +0000
      +++ m_quota.1.log       2012-11-16 03:41:38.000000000 +0000
      @@ -24,7 +24,7 @@
       Pass 3: Checking directory connectivity
       Pass 4: Checking reference counts
       Pass 5: Checking group summary information
      -test_filesys: 11/32768 files (18.2% non-contiguous), 5709/131072 blocks
      +test_filesys: 11/32768 files (18.2% non-contiguous), 5703/131072 blocks
       Exit status is 0
      
       Filesystem volume name:   <none>
      @@ -39,7 +39,7 @@
       Inode count:              32768
       Block count:              131072
       Reserved block count:     6553
      -Free blocks:              125363
      +Free blocks:              125369
       Free inodes:              32757
       First block:              1  
       Block size:               1024
      @@ -65,8 +65,8 @@
         Reserved GDT blocks at 3-258
         Block bitmap at 259 (+258), Inode bitmap at 260 (+259)
         Inode table at 261-516 (+260)
      -  7644 free blocks, 2037 free inodes, 2 directories
      -  Free blocks: 549-8192
      +  7650 free blocks, 2037 free inodes, 2 directories
      +  Free blocks: 543-8192
         Free inodes: 12-2048
       Group 1: (Blocks 8193-16384) 
         Backup superblock at 8193, Group descriptors at 8194-8194
      m_raid_opt: raid options: ok  
      m_std: standard filesystem options: ok
      m_uninit: uninitialized group feature: ok
      r_inline_xattr: shrinking filesystem with in-inode extended attributes: ok
      r_move_itable: filesystem resize which requires moving the inode table: ok
      r_resize_inode: filesystem resize with a resize_inode present: ok
      s_basic_scan: e2scan quick test: ok
      t_ext_jnl_rm: remove missing external journal device: ok
      t_mmp_1on: enable MMP using tune2fs: ok
      t_mmp_2off: disable MMP using tune2fs: ok
      t_quota_1on: enable quota using tune2fs: ok
      t_quota_2off: disable quota using tune2fs: ok
      u_mke2fs: e2undo with mke2fs: ok
      u_tune2fs: e2undo with tune2fs: ok
      152 tests succeeded     1 tests failed
      Tests failed: m_quota
      make[2]: *** [test_post] Error 1
      make[2]: Leaving directory `/root/rpmbuild/BUILD/e2fsprogs-1.42.5.wc3/tests'
      make[1]: *** [check-recursive] Error 1
      make[1]: Leaving directory `/root/rpmbuild/BUILD/e2fsprogs-1.42.5.wc3'
      error: Bad exit status from /var/tmp/rpm-tmp.swysI9 (%check)
      
      
      RPM build errors:
          Bad exit status from /var/tmp/rpm-tmp.swysI9 (%check)
      make: *** [rpm] Error 1
      

      Build log is attached.
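
      To reproduce the failing %check stage outside of rpmbuild, the test suite can also be run directly from the source tree. A rough sketch only (the configure options used by the spec file are omitted, and test_one is assumed to be the standard per-test runner generated in tests/):

          ./configure && make              # build e2fsprogs from the unpacked source
          make check                       # run the tests/ suite, including m_quota
          cd tests && ./test_one m_quota   # re-run only the failing test; its output is left in m_quota.1.log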


        Activity


          adilger Andreas Dilger added a comment -

          This problem was not really fixed - the e2fsprogs m_quota test passes on Toro/Rosso, but it fails in my local testing environments. I pushed a debugging patch (http://review.whamcloud.com/6158) to see why these blocks are allocated, and it appears that the size of the quota files differs between Toro/Rosso and my local systems:

          Output from Rosso build test:
          http://build.whamcloud.com/job/e2fsprogs-reviews/160/arch=x86_64,distro=el6/console

          m_quota: enable quota feature on mkfs: ok
          debugfs 1.41.12 (17-May-2010)
          /tmp/e2fsprogs-tmp.2w9mYe: catastrophic mode - not reading inode or group bitmaps
          Inode: 3   Type: regular    Mode:  0600   Flags: 0x10
          Generation: 0    Version: 0x00000000
          User:     0   Group:     0   Size: 9216
          File ACL: 0    Directory ACL: 0
          Links: 1   Blockcount: 18
          Fragment:  Address: 0    Number: 0    Size: 0
          ctime: 0x5182ce50 -- Thu May  2 20:36:32 2013
          atime: 0x5182ce50 -- Thu May  2 20:36:32 2013
          mtime: 0x5182ce50 -- Thu May  2 20:36:32 2013
          BLOCKS:
          (0):531, (1):536, (2-5):532-535, (6-8):537-539
          TOTAL: 9
          
          debugfs 1.41.12 (17-May-2010)
          /tmp/e2fsprogs-tmp.2w9mYe: catastrophic mode - not reading inode or group bitmaps
          Inode: 4   Type: regular    Mode:  0600   Flags: 0x10
          Generation: 0    Version: 0x00000000
          User:     0   Group:     0   Size: 9216
          File ACL: 0    Directory ACL: 0
          Links: 1   Blockcount: 18
          Fragment:  Address: 0    Number: 0    Size: 0
          ctime: 0x5182ce50 -- Thu May  2 20:36:32 2013
          atime: 0x5182ce50 -- Thu May  2 20:36:32 2013
          mtime: 0x5182ce50 -- Thu May  2 20:36:32 2013
          BLOCKS:
          (0):540, (1):545, (2-5):541-544, (6-8):546-548
          TOTAL: 9
          

          Output from local testing:

          m_quota: enable quota feature on mkfs: failed
          debugfs 1.42.7.wc1 (12-Apr-2013)
          /tmp/e2fsprogs-tmp.nIEM6f: catastrophic mode - not reading inode or group bitmaps
          Inode: 3   Type: regular    Mode:  0600   Flags: 0x10
          Generation: 0    Version: 0x00000000
          User:     0   Group:     0   Size: 6144
          File ACL: 0    Directory ACL: 0
          Links: 1   Blockcount: 12
          Fragment:  Address: 0    Number: 0    Size: 0
          ctime: 0x5182ec44 -- Thu May  2 22:44:20 2013
          atime: 0x5182ec44 -- Thu May  2 22:44:20 2013
          mtime: 0x5182ec44 -- Thu May  2 22:44:20 2013
          BLOCKS:
          (0):531, (1):536, (2-5):532-535
          TOTAL: 6
          
          debugfs 1.42.7.wc1 (12-Apr-2013)
          /tmp/e2fsprogs-tmp.nIEM6f: catastrophic mode - not reading inode or group bitmaps
          Inode: 4   Type: regular    Mode:  0600   Flags: 0x10
          Generation: 0    Version: 0x00000000
          User:     0   Group:     0   Size: 6144
          File ACL: 0    Directory ACL: 0
          Links: 1   Blockcount: 12
          Fragment:  Address: 0    Number: 0    Size: 0
          ctime: 0x5182ec44 -- Thu May  2 22:44:20 2013
          atime: 0x5182ec44 -- Thu May  2 22:44:20 2013
          mtime: 0x5182ec44 -- Thu May  2 22:44:20 2013
          BLOCKS:
          (0):537, (1):542, (2-5):538-541
          TOTAL: 6
          

          Note the local quota files only have 6 blocks allocated each, while the Toro/Rosso quota files each have 9 blocks. That 3-block difference per file, across the two quota files, accounts for the 6-block difference in the filesystem allocation (5703 vs. 5709 blocks in use, 125369 vs. 125363 free).
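
          For anyone re-checking this locally, dumps like the ones above can be produced with debugfs against the test image (the image path here is illustrative, not the one used by the test script; inodes 3 and 4 are the two quota inodes):

              debugfs -c -R 'stat <3>' /tmp/e2fsprogs-tmp.img    # -c: catastrophic mode, skip bitmaps
              debugfs -c -R 'stat <4>' /tmp/e2fsprogs-tmp.img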

          Possibly the quota files are sized using the maximum UID/GID found in /etc/passwd and /etc/group on the build host (i.e. are host specific), instead of the UIDs/GIDs actually in use on the filesystem?
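
          A quick way to test that theory would be to compare the largest UID/GID known to the build host against a local machine (an illustrative check only, not part of the test suite):

              # if mke2fs sizes the quota files from the host's largest IDs, these
              # values would differ between the build nodes and local systems
              awk -F: '$3+0 > m { m = $3+0 } END { print "max UID:", m }' /etc/passwd
              awk -F: '$3+0 > m { m = $3+0 } END { print "max GID:", m }' /etc/group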

          yujian Jian Yu added a comment -

          After the patch for LU-1606 landed on the master branch, building e2fsprogs succeeded on Jenkins:

          http://build.whamcloud.com/job/e2fsprogs-reviews/128/arch=x86_64,distro=el6/

          Let's close this ticket now.

          yujian Jian Yu added a comment -

          Hi Niu,

          Thanks for the info.

          FYI, the original issue I reported here occurred during a manual build. Building e2fsprogs with our build system is currently blocked by LU-1606. So, after the patch for LU-1606 lands on the master branch, I'll trigger an e2fsprogs build on Jenkins to see whether it succeeds.


          niu Niu Yawei (Inactive) added a comment -

          I still don't see why the free-block count differs between build environments; see:

          commit a15e46cc3d8d8462a790365e2adb4d1b82014cd4
          Author: Niu Yawei <niu@whamcloud.com>
          Date:   Tue Jun 12 01:37:13 2012 -0700
          
              LU-1502 quota: Add basic tests for quota
          
              Fixed two minor defects in the quota code, added basic tests for
              the quota feature.
          
              Note that the m_quota test is *FAILING* on some systems, but the same
              test is passing on other systems (in particular the build/test nodes):
          
                @@ -24,7 +24,7 @@ Pass 2: Checking directory structure
                 Pass 3: Checking directory connectivity
                 Pass 4: Checking reference counts
                 Pass 5: Checking group summary information
                -test_filesys: 11/32768 files (18.2% non-contiguous), 5703/131072 blocks
                +test_filesys: 11/32768 files (18.2% non-contiguous), 5709/131072 blocks
                 Exit status is 0
          
                 Filesystem volume name:   <none>
                @@ -39,7 +39,7 @@ Filesystem OS type:       Linux
                 Inode count:              32768
                 Block count:              131072
                 Reserved block count:     6553
                -Free blocks:              125369
                +Free blocks:              125363
                 Free inodes:              32757
                 First block:              1
                 Block size:               1024
                @@ -65,8 +65,8 @@ Group 0: (Blocks 1-8192)
                   Reserved GDT blocks at 3-258
                   Block bitmap at 259 (+258), Inode bitmap at 260 (+259)
                   Inode table at 261-516 (+260)
                -  7650 free blocks, 2037 free inodes, 2 directories
                -  Free blocks: 543-8192
                +  7644 free blocks, 2037 free inodes, 2 directories
                +  Free blocks: 549-8192
                   Free inodes: 12-2048
                 Group 1: (Blocks 8193-16384)
                   Backup superblock at 8193, Group descriptors at 8194-8194
          
              I'm leaving this in the "passing-on-build nodes" state, even though it
              is failing on my local system, so that at least we can build and test
              packages.
          
              Signed-off-by: Niu Yawei <niu@whamcloud.com>
              Signed-off-by: Andreas Dilger <adilger@whamcloud.com>
              Change-Id: If3d68075aa89d6abf0cf77be93ee3b7d927ed545
          

          Andreas and I were getting 125369 free blocks in local builds, while the build system was getting 125363 at that time; now it seems the build environment has changed. I think it's OK to change it back to 125369 so the build passes. Andreas, what do you think?
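
          For reference, the expect-file update would just swap the handful of numbers from the diff in the description (a sketch only; it assumes those values appear exactly once in expect.1):

              cd tests/m_quota
              sed -i -e 's|5709/131072|5703/131072|' \
                     -e 's|125363|125369|' \
                     -e 's|7644 free blocks|7650 free blocks|' \
                     -e 's|549-8192|543-8192|' expect.1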

          yujian Jian Yu added a comment -

          After updating m_quota/expect.1 according to the error message, building e2fsprogs succeeded.

          I wonder whether we only need to update m_quota/expect.1 to fix this build issue. If we do this, e2fsprogs will likely no longer build against older versions of Lustre.


          People

            Assignee: yujian Jian Yu
            Reporter: yujian Jian Yu
            Votes: 0
            Watchers: 4
