LU-15281: inode size disparity on ZFS MDTs


Details

    • Type: Bug
    • Resolution: Unresolved
    • Priority: Minor
    • Environment: CentOS 7.6
    • Severity: 3

    Description

      We have two clusters running, echo and lima. Before I go further, we are comparing apples and oranges a bit here, as:

      The MDT pool on echo is composed of two vdevs that are hardware RAID sets (legacy hardware), so there is no ZFS-level mirroring. The MDT pool on lima is composed of four NVMe cards arranged as two ZFS mirrors.

      The MDT on echo keeps getting very close to full, and we can't work out why.

      Both clusters are used for backups, with heavy use of hard links (via dirvish/rsync).

      I know this is an oversimplification, since inodes are not the only thing consuming space on the MDT, but running df -k and df -i to get kB used and inodes used, then dividing one by the other, yields ~14 kB/inode on echo and ~3 kB/inode on lima.
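
      For reference, this is roughly how we computed those figures. A minimal sketch; /mnt/mdt0 is a placeholder for the actual MDT mount point, and df --output requires GNU coreutils:

        # placeholder mount point for the MDT filesystem
        MDT=/mnt/mdt0
        # kB used (df -k) and inodes used, values only (tail skips the header)
        kb_used=$(df -k --output=used "$MDT" | tail -n 1)
        inodes_used=$(df --output=iused "$MDT" | tail -n 1)
        # average kB consumed per allocated inode (integer division)
        echo "$(( kb_used / inodes_used )) kB/inode"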

      Are there any particular diagnostic tools/commands we could use to find what's using all the space on the ZFS MDT?
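
      For anyone picking this up: on the ZFS side we can run things like the following on either MDS (mdt0pool/mdt0 is an illustrative pool/dataset name, not our actual layout):

        # per-dataset space breakdown (usedbysnapshots, usedbydataset,
        # usedbychildren, usedbyrefreservation)
        zfs list -o space mdt0pool/mdt0

        # properties that affect per-inode footprint on a ZFS MDT
        zfs get xattr,dnodesize,recordsize,compression mdt0pool/mdt0

        # object-level dataset summary; zdb reads on-disk state, so
        # output from a live pool may be inconsistent
        zdb -d mdt0pool/mdt0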

      echo's MDT is currently using 4.8TB for 350M inodes

      lima's MDT is currently using 2.8TB for 946M inodes
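
      As a sanity check on those ratios: 4.8 TB ≈ 4.8e9 kB, and 4.8e9 kB / 350e6 inodes ≈ 13.7 kB/inode on echo, while 2.8e9 kB / 946e6 inodes ≈ 3.0 kB/inode on lima, which matches the ~14 kB vs ~3 kB figures from df.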

      Happy to provide any other info/params that might be useful.

      Attachments

        1. echo-get-all (5 kB)
        2. lima-get-all (5 kB)


          People

            Assignee: Peter Jones (pjones)
            Reporter: Dneg (dneg) (Inactive)
            Votes: 0
            Watchers: 3

            Dates

              Created:
              Updated: