Lustre / LU-7922

ROOT dir created at mkfs time is using a high-numbered inode, >2G

Details

    • Type: Bug
    • Resolution: Fixed
    • Priority: Major
    • Fix Version: Lustre 2.9.0
    • Affects Version: Lustre 2.8.0
    • Severity: 3

    Description

      Steps to reproduce the issue:
      ====================

      export LOAD=yes
      sh llmount.sh
      /mnt/lokesh/seagate/lustre-wc-rel/lustre/tests/../utils/mkfs.lustre --mgs --fsname=lustre --mdt --index=0 --param=sys.timeout=20 --param=lov.stripesize=1048576 --param=lov.stripecount=0 --param=mdt.identity_upcall=/mnt/lokesh/seagate/lustre-wc-rel/lustre/tests/../utils/l_getidentity --backfstype=ldiskfs --device-size=200000 --mkfsoptions="-N 300000  -G 1" --reformat /tmp/lustre-mdt1 > /dev/null
      mkdir -p /mnt/mds1; mount -t lustre -o loop /tmp/lustre-mdt1 /mnt/mds1
      mount -t ldiskfs /dev/loop0 /mnt/test/
      ls -i /mnt/test/
      

      Results

      Disk info after formatting:
      ====================

      [root@server_lokesh tests]# dumpe2fs -h /dev/loop0
      dumpe2fs 1.42.12.wc1 (15-Sep-2014)
      Filesystem volume name:   lustre-MDT0000
      Last mounted on:          /
      Filesystem UUID:          a6926858-ad86-49a6-94ee-225ba0fc57cb
      Filesystem magic number:  0xEF53
      Filesystem revision #:    1 (dynamic)
      Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery flex_bg dirdata sparse_super large_file huge_file uninit_bg dir_nlink quota
      Filesystem flags:         signed_directory_hash
      Default mount options:    user_xattr acl
      Filesystem state:         clean
      Errors behavior:          Continue
      Filesystem OS type:       Linux
      Inode count:              300000
      Block count:              50000
      Reserved block count:     2307
      Free blocks:              7885
      Free inodes:              299987
      First block:              0
      Block size:               4096
      Fragment size:            4096
      Reserved GDT blocks:      78
      Blocks per group:         5120
      Fragments per group:      5120
      Inodes per group:         30000
      Inode blocks per group:   3750
      Filesystem created:       Tue Dec  1 15:11:34 2015
      Last mount time:          Tue Dec  1 15:11:48 2015
      Last write time:          Tue Dec  1 15:11:48 2015
      Mount count:              3
      Maximum mount count:      -1
      Last checked:             Tue Dec  1 15:11:34 2015
      Check interval:           0 (<none>)
      Lifetime writes:          457 kB
      Reserved blocks uid:      0 (user root)
      Reserved blocks gid:      0 (group root)
      First inode:              11
      Inode size:               512
      Required extra isize:     28
      Desired extra isize:      28
      Journal inode:            8
      Default directory hash:   half_md4
      Directory Hash Seed:      9483ebb9-ab24-47eb-b36f-7992baff0cd2
      Journal backup:           inode blocks
      User quota inode:         3
      Group quota inode:        4
      Journal features:         (none)
      Journal size:             16M
      Journal length:           4096
      Journal sequence:         0x00000011
      Journal start:            1
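
      The block-group count implied by the dumpe2fs output above (50000 blocks, 5120 blocks per group) can be sanity-checked with a little shell arithmetic; the variable names are just for illustration:

      ```shell
      # Values taken from the dumpe2fs output above.
      BLOCKS=50000
      BLOCKS_PER_GROUP=5120
      # Ceiling division: the final, partial group still counts as a group.
      echo $(( (BLOCKS + BLOCKS_PER_GROUP - 1) / BLOCKS_PER_GROUP ))   # prints 10
      ```

      So the filesystem has 10 block groups in total.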
      

      Inode allocation results :
      ========================

      [root@server_lokesh tests]# ls -i /mnt/test/
          97 changelog_catalog   30001 O             30 oi.16.17      39 oi.16.26      48 oi.16.35      57 oi.16.44      66 oi.16.53      75 oi.16.62           240005 ROOT
          98 changelog_users        13 oi.16.0       31 oi.16.18      40 oi.16.27      49 oi.16.36      58 oi.16.45      67 oi.16.54      76 oi.16.63               85 seq_ctl
      240001 CONFIGS                14 oi.16.1       32 oi.16.19      41 oi.16.28      50 oi.16.37      59 oi.16.46      68 oi.16.55      20 oi.16.7                86 seq_srv
          84 fld                    23 oi.16.10      15 oi.16.2       42 oi.16.29      51 oi.16.38      60 oi.16.47      69 oi.16.56      21 oi.16.8
          99 hsm_actions            24 oi.16.11      33 oi.16.20      16 oi.16.3       52 oi.16.39      61 oi.16.48      70 oi.16.57      22 oi.16.9
      As per the above results:
      Inode count:        300000
      Free inodes:        299987
      Inodes per group:   30000
      flex_bg group size: 1 (from "-G 1" in the mkfs options)
      ROOT inode:         240005
      

      The ROOT inode (240005) is allocated from the 9th of the 10 block groups, even though the earlier groups have plenty of free inodes.
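
      A quick check of which group the ROOT inode falls in, assuming the usual 1-based ext4 inode numbering and the 30000 inodes-per-group figure reported above:

      ```shell
      # Inode numbers are 1-based; block-group indices are 0-based.
      INODE=240005
      INODES_PER_GROUP=30000
      echo $(( (INODE - 1) / INODES_PER_GROUP ))   # prints 8, i.e. the 9th group
      ```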

          People

            Assignee: wc-triage (WC Triage)
            Reporter: lokesh.jaliminche (Lokesh Nagappa Jaliminche, Inactive)