Lustre / LU-12204

mke2fs in e2fsprogs-1.44.5.wc1 fails on large devices


Details

    • Type: Bug
    • Resolution: Fixed
    • Priority: Minor
    • Environment: e2fsprogs-1.44.5.wc1, RHEL 7.5, master branch

    Description

      When mke2fs formats a large device (e.g. ~900TB), it fails because it considers the device too big, even though the 64bit feature is requested (see the size arithmetic sketched after the fdisk output below).

      # ls -l /dev/ddn/scratch0_ost0003 
      lrwxrwxrwx 1 root root 6 Apr 19 11:19 /dev/ddn/scratch0_ost0003 -> ../sdd
      # fdisk -l /dev/sdd
      
      Disk /dev/sdd: 952451.9 GB, 952451947560960 bytes, 232532213760 sectors
      Units = sectors of 1 * 4096 = 4096 bytes
      Sector size (logical/physical): 4096 bytes / 4096 bytes
      I/O size (minimum/optimal): 2097152 bytes / 2097152 bytes
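
      For scale (an editorial aside, not from the original report): with 4096-byte blocks, a 32-bit block number limits a filesystem to 2^32 * 4096 bytes = 16 TiB, so a device of this size can only be formatted with the 64bit feature enabled. A minimal C sketch of the arithmetic, using the numbers above:

      #include <stdio.h>
      #include <stdint.h>

      int main(void)
      {
              /* Block count taken from the mke2fs command line below. */
              uint64_t blocks = 232532213760ULL;    /* == 0x3624000000 */
              uint64_t bsize  = 4096;
              uint64_t max32  = (1ULL << 32) - 1;   /* largest 32-bit block number */

              printf("device size : %llu bytes\n",
                     (unsigned long long)(blocks * bsize));
              printf("32-bit limit: %llu bytes (16 TiB)\n",
                     (unsigned long long)((max32 + 1) * bsize));
              printf("needs 64bit : %s\n", blocks > max32 ? "yes" : "no");
              return 0;
      }

      This prints 952451947560960 bytes for the device (matching the fdisk output) against a 17592186044416-byte 32-bit ceiling, so the 64bit feature is mandatory here; the mkfs.lustre run below does pass -O ...,64bit,... accordingly.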
      
      # mke2fs -V
      mke2fs 1.44.5.wc1 (15-Dec-2018)
      	Using EXT2FS Library version 1.44.5.wc1
      # mkfs.lustre --ost --servicenode=192.168.0.2@tcp --fsname=scratch0 --index=3 --mgsnode=192.168.0.1@tcp --mkfsoptions='-E lazy_itable_init=0,lazy_journal_init=0 -m1 -J size=4096 -O meta_bg' --reformat --backfstype=ldiskfs /dev/ddn/scratch0_ost0003
      
         Permanent disk data:
      Target:     scratch0:OST0003
      Index:      3
      Lustre FS:  scratch0
      Mount type: ldiskfs
      Flags:      0x1062
                    (OST first_time update no_primnode )
      Persistent mount opts: ,errors=remount-ro
      Parameters: failover.node=192.168.0.2@tcp mgsnode=192.168.0.1@tcp
      
      device size = 908328960MB
      formatting backing filesystem ldiskfs on /dev/ddn/scratch0_ost0003
      	target name   scratch0:OST0003
      	4k blocks     232532213760
      	options        -m1 -J size=4096  -I 512 -i 1048576 -q -O meta_bg,extents,uninit_bg,mmp,dir_nlink,quota,huge_file,64bit,flex_bg -G 256 -E lazy_itable_init=0,lazy_journal_init=0 -F
      mkfs_cmd = mke2fs -j -b 4096 -L scratch0:OST0003  -m1 -J size=4096  -I 512 -i 1048576 -q -O meta_bg,extents,uninit_bg,mmp,dir_nlink,quota,huge_file,64bit,flex_bg -G 256 -E lazy_itable_init=0,lazy_journal_init=0 -F /dev/ddn/scratch0_ost0003 232532213760
         mke2fs: Size of device (0x3624000000 blocks) /dev/ddn/scratch0_ost0003 too big to create
         	a filesystem using a blocksize of 4096.
      
      mkfs.lustre FATAL: Unable to build fs /dev/ddn/scratch0_ost0003 (256)
      
      mkfs.lustre FATAL: mkfs failed 256
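
      The "too big to create" message comes from a sanity check in mke2fs that refuses block counts above 2^32 - 1 unless the 64bit feature is set. A simplified sketch of that kind of check (a reconstruction for illustration, not the exact 1.44.5.wc1 source):

      #include <stdio.h>
      #include <stdint.h>
      #include <stdlib.h>

      #define MAX_32_NUM ((1ULL << 32) - 1)

      /* Hypothetical stand-in for the relevant fs_param state in mke2fs. */
      struct fs_params {
              uint64_t blocks_count;
              int      has_64bit;     /* is the 64bit feature set yet? */
              int      block_size;
      };

      /* With more than 2^32 - 1 blocks, creation is refused unless the
       * 64bit feature is already set when this check runs. */
      static void check_size(const struct fs_params *p, const char *dev)
      {
              if (p->blocks_count > MAX_32_NUM && !p->has_64bit) {
                      fprintf(stderr, "mke2fs: Size of device (0x%llx blocks) "
                              "%s too big to create\n\ta filesystem using a "
                              "blocksize of %d.\n",
                              (unsigned long long)p->blocks_count, dev,
                              p->block_size);
                      exit(1);
              }
      }

      int main(void)
      {
              /* Reproduces the reported failure if has_64bit is still 0. */
              struct fs_params p = { 232532213760ULL, 0, 4096 };
              check_size(&p, "/dev/ddn/scratch0_ost0003");
              return 0;
      }

      Since the generated mke2fs command line explicitly includes -O ...,64bit,..., the check firing suggests that in 1.44.5.wc1 the 64bit flag is not yet applied to fs_param at the point where this size check runs.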
      

      However, mke2fs from e2fsprogs-1.42.13.wc6 formats the same device without error.

      # mke2fs -V
      mke2fs 1.42.13.wc6 (05-Feb-2017)
      	Using EXT2FS Library version 1.42.13.wc6
      
      # mkfs.lustre --ost --servicenode=192.168.0.2@tcp --fsname=scratch0 --index=3 --mgsnode=192.168.0.1@tcp --mkfsoptions='-E lazy_itable_init=0,lazy_journal_init=0 -m1 -J size=4096 -O meta_bg' --reformat --backfstype=ldiskfs /dev/ddn/scratch0_ost0003
      
         Permanent disk data:
      Target:     scratch0:OST0003
      Index:      3
      Lustre FS:  scratch0
      Mount type: ldiskfs
      Flags:      0x1062
                    (OST first_time update no_primnode )
      Persistent mount opts: ,errors=remount-ro
      Parameters: failover.node=192.168.0.2@tcp mgsnode=192.168.0.1@tcp
      
      device size = 908328960MB
      formatting backing filesystem ldiskfs on /dev/ddn/scratch0_ost0003
      	target name   scratch0:OST0003
      	4k blocks     232532213760
      	options        -m1 -J size=4096  -I 512 -i 1048576 -q -O meta_bg,extents,uninit_bg,mmp,dir_nlink,quota,huge_file,64bit,flex_bg -G 256 -E lazy_itable_init=0,lazy_journal_init=0 -F
      mkfs_cmd = mke2fs -j -b 4096 -L scratch0:OST0003  -m1 -J size=4096  -I 512 -i 1048576 -q -O meta_bg,extents,uninit_bg,mmp,dir_nlink,quota,huge_file,64bit,flex_bg -G 256 -E lazy_itable_init=0,lazy_journal_init=0 -F /dev/ddn/scratch0_ost0003 232532213760
      Writing CONFIGS/mountdata
      


          People

            Assignee: Dongyang Li (dongyang)
            Reporter: Shuichi Ihara (sihara)
