
LU-136: test e2fsprogs-1.42.wc1 against 32TB+ ldiskfs filesystems

Details

    • Type: Task
    • Resolution: Fixed
    • Priority: Major
    • Affects Version/s: Lustre 2.1.0
    • Fix Version/s: Lustre 2.1.0, Lustre 1.8.6
    • Labels: None
    • 16,038
    • 4966

    Description

      In order for Lustre to use OSTs larger than 16TB, the e2fsprogs "master" branch needs to be tested against such large LUNs. The "master" branch has unreleased modifications that should allow mke2fs, e2fsck, and other tools to use LUNs over 16TB, but it has not been heavily tested at this point.

      Bruce, I believe we previously discussed a test plan for this work, using llverdev and llverfs. Please attach a document or comment here with details. The testing for 16TB LUNs is documented in https://bugzilla.lustre.org/show_bug.cgi?id=16038.

      After the local ldiskfs filesystem testing is complete, then obdfilter-survey and full Lustre client testing is needed.
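
      For reference, a minimal sketch of the verification sequence this implies, based on the llverdev/llverfs plan mentioned above (device and mount paths are placeholders; the real flags and ordering belong to the test plan document):

        # 1. verify the raw LUN end to end before formatting (destructive, long-running)
        llverdev -v -l /dev/<large_lun>
        # 2. format as ldiskfs via mkfs.lustre, then run an initial full read-only check
        mkfs.lustre --reformat --fsname=largefs --ost --mgsnode=<mgs_nid> /dev/<large_lun>
        e2fsck -fn /dev/<large_lun>
        # 3. fill and verify the mounted filesystem, then re-check the backing filesystem
        llverfs -v -l /mnt/<mountpoint>
        e2fsck -fn /dev/<large_lun>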

      Attachments

        Activity

          [LU-136] test e2fsprogs-1.42.wc1 against 32TB+ ldiskfs filesystems

          adilger Andreas Dilger added a comment -

          Yu Jian, I looked through the inodes run, but I did not see it running e2fsck on the large LUN. That should be added as part of the test script if it is not already there. If the LUN with the 135M files still exists, please start an e2fsck on both the MDS and the OST.
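
          For illustration, the requested check might look like the following (the MDT device path is hypothetical; the OST path is the one from the format log further down; -f forces a full pass and -n keeps it read-only):

          # full read-only e2fsck of both backing filesystems
          e2fsck -fn /dev/mds_vg/mdt_lv     # hypothetical MDT device path
          e2fsck -fn /dev/large_vg/ost_lv   # the 128TB OST formatted below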
          yujian Jian Yu added a comment -

          After running for about 53 hours, the test passed at Thu Aug 11 04:41:09 PDT 2011:
          https://maloo.whamcloud.com/test_sets/af225374-c72b-11e0-a7e2-52540025f9af

          The test log does not show up in the above Maloo report. Please find it in the attachment large-LUN-inodes.suite_log.ddn-sfa10000e-stack01.log.

          yujian Jian Yu added a comment - edited

          The "large-LUN-inodes" testing is going to be started on the latest master branch...

          The inode creation testing on a 128TB Lustre filesystem against the master branch on CentOS 5.6/x86_64 (kernel version: 2.6.18-238.19.1.el5_lustre.gd4ea36c) was started at Mon Aug 8 22:51:49 PDT 2011. About 134M inodes would be created.

          The following builds were used:
          Lustre build: http://newbuild.whamcloud.com/job/lustre-master/246/arch=x86_64,build_type=server,distro=el5,ib_stack=ofa/
          e2fsprogs build: http://newbuild.whamcloud.com/job/e2fsprogs-master/42/arch=x86_64,distro=el5/

          After running for about 53 hours, the test passed at Thu Aug 11 04:41:09 PDT 2011:
          https://maloo.whamcloud.com/test_sets/af225374-c72b-11e0-a7e2-52540025f9af

          Here is a short summary of the test result after running mdsrate with "--create" option:

          # /opt/mpich/bin/mpirun  -np 25 -machinefile /tmp/mdsrate-create.machines /usr/lib64/lustre/tests/mdsrate --create --verbose --ndirs 25 --dirfmt '/mnt/lustre/mdsrate/dir%d' --nfiles 5360000 --filefmt 'file%%d'
          
          Rate: 694.17 eff 694.18 aggr 27.77 avg client creates/sec (total: 25 threads 134000000 creates 25 dirs 1 threads/dir 193035.50 secs)
          
          # lfs df -h /mnt/lustre
          UUID                       bytes        Used   Available Use% Mounted on
          largefs-MDT0000_UUID        1.5T       13.6G        1.4T   1% /mnt/lustre[MDT:0]
          largefs-OST0000_UUID      128.0T        3.6G      121.6T   0% /mnt/lustre[OST:0]
          
          filesystem summary:       128.0T        3.6G      121.6T   0% /mnt/lustre
          
          
          # lfs df -i /mnt/lustre
          UUID                      Inodes       IUsed       IFree IUse% Mounted on
          largefs-MDT0000_UUID  1073741824   134000062   939741762  12% /mnt/lustre[MDT:0]
          largefs-OST0000_UUID   134217728   134006837      210891 100% /mnt/lustre[OST:0]
          
          filesystem summary:   1073741824   134000062   939741762  12% /mnt/lustre
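
          As a sanity check on the numbers above: 134,000,000 creates / 193,035.50 s ≈ 694.2 creates/sec, matching the reported aggregate rate (and 25 clients × 27.77 avg creates/sec ≈ 694 as well). The OST reaching 100% IUse% is expected: the -i 1048576 bytes-per-inode ratio used at format time (see the format log below) yields 128TiB / 1MiB = 134,217,728 inodes, so 134M files nearly exhausts them.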
          
          yujian Jian Yu added a comment -

          Now, the read operation is ongoing...

          Done.

          After running for about 21 days in total, the 128TB LUN full testing on CentOS5.6/x86_64 (kernel version: 2.6.18-238.12.1.el5_lustre.g5c1e9f9) passed on Lustre master build v2_0_65_0:
          https://maloo.whamcloud.com/test_sets/69c35618-bdd3-11e0-8bdf-52540025f9af

          The "large-LUN-inodes" testing is going to be started on the latest master branch...

          yujian Jian Yu added a comment -

          After running for about 12385 minutes (roughly 206 hours, or 8.6 days), the 128TB Lustre filesystem was successfully filled up by llverfs:

          # lfs df -h /mnt/lustre
          UUID                       bytes        Used   Available Use% Mounted on
          largefs-MDT0000_UUID        1.5T      499.3M        1.4T   0% /mnt/lustre[MDT:0]
          largefs-OST0000_UUID      128.0T      121.4T      120.0G 100% /mnt/lustre[OST:0]
          
          filesystem summary:       128.0T      121.4T      120.0G 100% /mnt/lustre
          
          # lfs df -i /mnt/lustre
          UUID                      Inodes       IUsed       IFree IUse% Mounted on
          largefs-MDT0000_UUID  1073741824       32099  1073709725   0% /mnt/lustre[MDT:0]
          largefs-OST0000_UUID   134217728       31191   134186537   0% /mnt/lustre[OST:0]
          
          filesystem summary:   1073741824       32099  1073709725   0% /mnt/lustre
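
          As an aside, the gap between the 128.0T device size and the 121.4T used plus 120.0G available is plausibly accounted for by mke2fs defaults: a 5% reserved block count alone would be about 6.4T, plus tens of GB of inode tables and bitmaps. This is an inference from the df output, not something the test log states.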
          

          Now, the read operation is ongoing...
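
          For reference, a minimal sketch of the llverfs write and read passes involved here (flags are assumptions from the tool's usage text; the large-LUN test script may invoke it differently):

          # write pass: fill the mounted filesystem with patterned data (full mode)
          llverfs -w -v -l /mnt/lustre
          # read pass: re-read and verify everything written above
          llverfs -r -v -l /mnt/lustre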

          yujian Jian Yu added a comment -

          After http://review.whamcloud.com/1071 and http://review.whamcloud.com/1073 were merged into the master branch, I proceeded with the 128TB LUN full testing on CentOS5.6/x86_64 (kernel version: 2.6.18-238.12.1.el5_lustre.g5c1e9f9). The testing was started at Sun Jul 10 23:56:02 PDT 2011.

          The following builds were used:
          Lustre build: http://newbuild.whamcloud.com/job/lustre-master/199/arch=x86_64,build_type=server,distro=el5,ib_stack=ofa/
          e2fsprogs build: http://newbuild.whamcloud.com/job/e2fsprogs-master/42/arch=x86_64,distro=el5/

          There were no extra mkfs.lustre options specified when formatting the 128TB OST.

          ===================== format the OST /dev/large_vg/ost_lv =====================
          # time mkfs.lustre --reformat --fsname=largefs --ost --mgsnode=192.168.77.1@o2ib /dev/large_vg/ost_lv
          
             Permanent disk data:
          Target:     largefs-OSTffff
          Index:      unassigned
          Lustre FS:  largefs
          Mount type: ldiskfs
          Flags:      0x72
                        (OST needs_index first_time update )
          Persistent mount opts: errors=remount-ro,extents,mballoc
          Parameters: mgsnode=192.168.77.1@o2ib
          
          device size = 134217728MB
          formatting backing filesystem ldiskfs on /dev/large_vg/ost_lv
                  target name  largefs-OSTffff
                  4k blocks     34359738368
                  options        -J size=400 -I 256 -i 1048576 -q -O extents,uninit_bg,dir_nlink,huge_file,64bit,flex_bg -G 256 -E lazy_journal_init, -F
          mkfs_cmd = mke2fs -j -b 4096 -L largefs-OSTffff  -J size=400 -I 256 -i 1048576 -q -O extents,uninit_bg,dir_nlink,huge_file,64bit,flex_bg -G 256 -E lazy_journal_init, -F /dev/large_vg/ost_lv 34359738368
          Writing CONFIGS/mountdata
          
          real    0m44.489s
          user    0m6.669s
          sys     0m31.087s
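
          Since the over-16TB support rests on the 64bit feature listed in the mke2fs options above, a cheap way to confirm it actually landed on disk is to dump just the superblock (a sketch; dumpe2fs -h reads only the superblock, so it is fast even on a 128TB device):

          # confirm 64bit (and the other requested features) are set in the superblock
          dumpe2fs -h /dev/large_vg/ost_lv | grep -i features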
          

          Integrated in lustre-master » i686,server,el6,inkernel #199
          LU-136 change "force_over_16tb" mount option to "force_over_128tb"

          Oleg Drokin : 79ec0a1df07733183f19d71813f99306b31f3636
          Files :

          • ldiskfs/kernel_patches/patches/ext4-force_over_16tb-rhel5.patch
          • ldiskfs/kernel_patches/patches/ext4-force_over_128tb-rhel5.patch
          • ldiskfs/kernel_patches/patches/ext4-extents-mount-option-rhel6.patch
          • ldiskfs/kernel_patches/series/ldiskfs-2.6-rhel6.series
          • ldiskfs/kernel_patches/patches/ext4-extents-mount-option-rhel5.patch
          • ldiskfs/kernel_patches/patches/ext4-disable-mb-cache-rhel5.patch
          • ldiskfs/kernel_patches/patches/ext4-force_over_16tb-rhel6.patch
          • ldiskfs/kernel_patches/series/ldiskfs-2.6-rhel5-ext4.series
          • ldiskfs/kernel_patches/patches/ext4-disable-mb-cache-rhel6.patch
          • ldiskfs/kernel_patches/patches/ext4-force_over_128tb-rhel6.patch
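
          For context, this commit moves the ldiskfs size-override threshold from 16TB to 128TB: targets up to 128TB now mount without forcing, and anything larger needs the renamed option. A sketch of a manual override mount (paths hypothetical; Lustre normally carries such options in the target's persistent mount opts rather than via a raw ldiskfs mount):

          # mount an ldiskfs target beyond the 128TB limit with the renamed override
          mount -t ldiskfs -o force_over_128tb /dev/large_vg/huge_lv /mnt/ost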

          Integrated in lustre-master » x86_64,server,el6,inkernel #199
          LU-136 change "force_over_16tb" mount option to "force_over_128tb"

          Oleg Drokin : 79ec0a1df07733183f19d71813f99306b31f3636
          Files :

          • ldiskfs/kernel_patches/series/ldiskfs-2.6-rhel6.series
          • ldiskfs/kernel_patches/series/ldiskfs-2.6-rhel5-ext4.series
          • ldiskfs/kernel_patches/patches/ext4-disable-mb-cache-rhel5.patch
          • ldiskfs/kernel_patches/patches/ext4-extents-mount-option-rhel5.patch
          • ldiskfs/kernel_patches/patches/ext4-disable-mb-cache-rhel6.patch
          • ldiskfs/kernel_patches/patches/ext4-force_over_128tb-rhel5.patch
          • ldiskfs/kernel_patches/patches/ext4-extents-mount-option-rhel6.patch
          • ldiskfs/kernel_patches/patches/ext4-force_over_128tb-rhel6.patch
          • ldiskfs/kernel_patches/patches/ext4-force_over_16tb-rhel5.patch
          • ldiskfs/kernel_patches/patches/ext4-force_over_16tb-rhel6.patch

          Integrated in lustre-master » i686,server,el5,ofa #199
          LU-136 change "force_over_16tb" mount option to "force_over_128tb"

          Oleg Drokin : 79ec0a1df07733183f19d71813f99306b31f3636
          Files :

          • ldiskfs/kernel_patches/patches/ext4-disable-mb-cache-rhel6.patch
          • ldiskfs/kernel_patches/patches/ext4-force_over_16tb-rhel5.patch
          • ldiskfs/kernel_patches/patches/ext4-extents-mount-option-rhel6.patch
          • ldiskfs/kernel_patches/patches/ext4-disable-mb-cache-rhel5.patch
          • ldiskfs/kernel_patches/patches/ext4-force_over_16tb-rhel6.patch
          • ldiskfs/kernel_patches/series/ldiskfs-2.6-rhel5-ext4.series
          • ldiskfs/kernel_patches/patches/ext4-extents-mount-option-rhel5.patch
          • ldiskfs/kernel_patches/patches/ext4-force_over_128tb-rhel6.patch
          • ldiskfs/kernel_patches/patches/ext4-force_over_128tb-rhel5.patch
          • ldiskfs/kernel_patches/series/ldiskfs-2.6-rhel6.series

          Integrated in lustre-master » i686,server,el5,inkernel #199
          LU-136 change "force_over_16tb" mount option to "force_over_128tb"

          Oleg Drokin : 79ec0a1df07733183f19d71813f99306b31f3636
          Files :

          • ldiskfs/kernel_patches/series/ldiskfs-2.6-rhel5-ext4.series
          • ldiskfs/kernel_patches/patches/ext4-extents-mount-option-rhel6.patch
          • ldiskfs/kernel_patches/patches/ext4-extents-mount-option-rhel5.patch
          • ldiskfs/kernel_patches/patches/ext4-force_over_16tb-rhel5.patch
          • ldiskfs/kernel_patches/patches/ext4-disable-mb-cache-rhel6.patch
          • ldiskfs/kernel_patches/patches/ext4-force_over_128tb-rhel6.patch
          • ldiskfs/kernel_patches/patches/ext4-force_over_16tb-rhel6.patch
          • ldiskfs/kernel_patches/patches/ext4-force_over_128tb-rhel5.patch
          • ldiskfs/kernel_patches/series/ldiskfs-2.6-rhel6.series
          • ldiskfs/kernel_patches/patches/ext4-disable-mb-cache-rhel5.patch

          Integrated in lustre-master » x86_64,client,ubuntu1004,inkernel #199
          LU-136 change "force_over_16tb" mount option to "force_over_128tb"

          Oleg Drokin : 79ec0a1df07733183f19d71813f99306b31f3636
          Files :

          • ldiskfs/kernel_patches/patches/ext4-extents-mount-option-rhel5.patch
          • ldiskfs/kernel_patches/patches/ext4-extents-mount-option-rhel6.patch
          • ldiskfs/kernel_patches/patches/ext4-force_over_16tb-rhel6.patch
          • ldiskfs/kernel_patches/series/ldiskfs-2.6-rhel5-ext4.series
          • ldiskfs/kernel_patches/patches/ext4-disable-mb-cache-rhel5.patch
          • ldiskfs/kernel_patches/patches/ext4-force_over_16tb-rhel5.patch
          • ldiskfs/kernel_patches/patches/ext4-force_over_128tb-rhel5.patch
          • ldiskfs/kernel_patches/series/ldiskfs-2.6-rhel6.series
          • ldiskfs/kernel_patches/patches/ext4-force_over_128tb-rhel6.patch
          • ldiskfs/kernel_patches/patches/ext4-disable-mb-cache-rhel6.patch

          People

            Assignee: yujian Jian Yu
            Reporter: adilger Andreas Dilger
