Lustre / LU-8124

MDT zpool capacity consumed at greater rate than inode allocation


Details

    • Type: Bug
    • Resolution: Fixed
    • Priority: Minor
    • Fix Version/s: Lustre 2.9.0
    • Affects Version/s: Lustre 2.7.0, Lustre 2.8.0, Lustre 2.9.0
    • Environment: CentOS 6.7, Lustre 2.8, ZFS 6.5.3
    • Severity: 3

    Description

      While running mdtest to create zero-byte files on a new Lustre file system to benchmark the MDS, I noticed that creating 600K zero-byte files used only a small percentage of Lustre inodes, yet the MDT zpool capacity was 65% used. The ratio of inodes used to capacity used not only seems way off, the MDT appears to be on track to run out of zpool space before Lustre thinks it is out of inodes.
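
      The run was roughly of the following form; the process count, per-process file count, and mount point shown here are placeholders rather than the exact command line:

      # 16 ranks x 37500 files per rank = 600K zero-byte files, create phase only
      mpirun -np 16 mdtest -F -C -n 37500 -d /mnt/lustre/mdtest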

      I'm on a plane and can give specifics later tonight. I know of one large site that is seeing similar behavior on a production LFS.

      In my case the MDT pool consists of five two-way mirror vdevs, created as follows:
      zpool create -o ashift=12 -O recordsize=4096 mdt.pool mirror A1 A2 mirror A3 A4 mirror A5 A6 mirror A7 A8 mirror A9 A10

      I have also seen the same behavior with the default recordsize.

      Maybe I'm overlooking something, but it seems that capacity consumption is outpacing inode allocation. This could be an artifact of how ZFS reports used capacity when Lustre hooks in below the ZFS POSIX layer, but from the cockpit it looks like my MDT will fill up while Lustre still thinks inodes are available.
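
      The discrepancy shows up when comparing Lustre's inode accounting against ZFS's space accounting, along these lines (the client mount point is a placeholder; mdt.pool is the pool created above):

      # On a Lustre client: inode usage as Lustre reports it
      lfs df -i /mnt/lustre

      # On the MDS: capacity usage as ZFS reports it for the MDT pool
      zpool list mdt.pool
      zfs list -r mdt.pool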

    Attachments

    Issue Links

    Activity

    People

      Assignee: adilger Andreas Dilger
      Reporter: aeonjeffj Jeff Johnson (Inactive)
      Votes: 0
      Watchers: 10

    Dates

      Created:
      Updated:
      Resolved: