Lustre / LU-11024

Broken inode accounting of MDT on ZFS


    Details

    • Severity:
      3

      Description

      Roughly 6,200 10 MB files were created with a 'dd' loop (interrupted before completion):

      [thcrowe@td-mngt01 thcrowe]$ for i in `seq 1 10000` ; do dd if=/dev/zero of=file-$i bs=1M count=10; done
      ^C
      [thcrowe@td-mngt01 thcrowe]

      All files were written to MDT index 0.

      [thcrowe@td-mngt01 thcrowe]$ lfs getstripe -m file* | sort | uniq
      0
      [thcrowe@td-mngt01 thcrowe]$ ls -1 file* | wc -l
      6208

      So at this point, POSIX says there are 6,208 files named file-*.
      Lustre, however, reports a different number.

      [thcrowe@td-mngt01 thcrowe]$ lfs quota -u 415432 /mnt/slate
      Disk quotas for usr 415432 (uid 415432):
      Filesystem kbytes quota limit grace files quota limit grace
      /mnt/slate 61619203 0 0 - 1473 0 0 -
      [thcrowe@td-mngt01 thcrowe]$
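
      The mismatch can be demonstrated programmatically. A minimal Python sketch, assuming the `lfs quota` output layout shown above (the sample text is copied verbatim from this report):

      ```python
      # Parse the 'lfs quota -u <uid>' output and pull out the reported inode
      # (files) count, so it can be compared against the POSIX count from 'ls'.
      sample = """Disk quotas for usr 415432 (uid 415432):
      Filesystem kbytes quota limit grace files quota limit grace
      /mnt/slate 61619203 0 0 - 1473 0 0 -"""

      def quota_files_count(output: str) -> int:
          for line in output.splitlines():
              fields = line.split()
              # the data row starts with the mount point, e.g. /mnt/slate
              if fields and fields[0].startswith("/"):
                  # columns: fs kbytes quota limit grace files quota limit grace
                  return int(fields[5])
          raise ValueError("no filesystem line found")

      posix_count = 6208                       # from 'ls -1 file* | wc -l'
      reported = quota_files_count(sample)
      print(reported, posix_count - reported)  # 1473 reported, 4735 unaccounted
      ```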

      Here is the same data taken directly from the MDT:

      [root@slate-mds01 ~]# grep -A1 415432 /proc/fs/lustre/osd-zfs/slate-MDT0000/quota_slave/acct_user

      - id: 415432
        usage: { inodes: 1473, kbytes: 53825 }
      [root@slate-mds01 ~]#
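
      The acct_user file is YAML-like and easy to parse. A minimal sketch, assuming the per-id format shown above (the regex is illustrative, not a Lustre API):

      ```python
      import re

      # Sample copied from the acct_user snippet above; a real file may
      # contain many id blocks.
      sample = """- id: 415432
        usage: { inodes: 1473, kbytes: 53825 }"""

      def acct_usage(text: str, uid: int):
          # match '- id: <uid>' followed by its usage line
          pattern = rf"- id:\s*{uid}\s*\n\s*usage:\s*\{{ inodes:\s*(\d+), kbytes:\s*(\d+) \}}"
          m = re.search(pattern, text)
          return {"inodes": int(m.group(1)), "kbytes": int(m.group(2))} if m else None

      print(acct_usage(sample, 415432))  # {'inodes': 1473, 'kbytes': 53825}
      ```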

      The following commands were used to dump MDT index 0's ZFS contents:

      [root@slate-mds01 ~]# zpool list
      NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
      mgs 186G 6.35M 186G - 0% 0% 1.00x ONLINE -
      slate_mdt0000 8.67T 16.1G 8.66T - 0% 0% 1.00x ONLINE -
      [root@slate-mds01 ~]# zpool set cachefile=/tmp/slate_mdt0000.cache slate_mdt0000
      [root@slate-mds01 ~]# cp /tmp/slate_mdt0000.cache /tmp/slate_mdt0000.cache1
      [root@slate-mds01 ~]# zpool set cachefile="" slate_mdt0000
      [root@slate-mds01 ~]# zdb -ddddd -U /tmp/slate_mdt0000.cache1 slate_mdt0000 > /tmp/slate_mdt0000-zdb-ddddd

      Once zdb completed its output, a simple grep shows what uid 415432 owns:

      [root@slate-mds01 ~]# grep uid /tmp/slate_mdt0000-zdb-ddddd | grep -c 415432
      6210

      6210 differs from 6208 because the zdb output also includes 2 directory objects owned by 415432.
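
      The `grep uid ... | grep -c 415432` pipeline can be mirrored in Python when the dump needs further processing. A sketch, assuming zdb prints one `uid` attribute line per object (the toy dump below is hypothetical; a real one would come from the zdb command shown earlier):

      ```python
      # Count the objects owned by a given uid in a zdb -ddddd dump.
      def count_uid_objects(dump_lines, uid):
          needle = str(uid)
          return sum(1 for line in dump_lines if "uid" in line and needle in line)

      dump = [
          "\tuid     415432",
          "\tgid     415432",   # no 'uid' on this line, so it is not counted
          "\tuid     415432",
      ]
      print(count_uid_objects(dump, 415432))  # → 2
      ```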

      Further tests were run to check whether /proc/fs/lustre/osd-zfs/slate-MDT0000/quota_slave/acct_user reports correct information. When 100 files were created, the accounted number increased from 120 to 205; when 1,000 files were created, it increased from 120 to 929.
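
      Those increments can be sanity-checked arithmetically: creating N files should raise the accounted inode count by exactly N, but it does not.

      ```python
      # (files created, accounted count before, accounted count after),
      # numbers taken from this report.
      observed = [
          (100, 120, 205),
          (1000, 120, 929),
      ]
      for created, before, after in observed:
          delta = after - before
          print(f"created {created}, accounted {delta}")  # 85 and 809: both wrong
      ```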

      Space accounting on the MDT, by contrast, works correctly; only inode accounting is broken.

       

       

       

              People

              Assignee:
              yong.fan nasf (Inactive)
              Reporter:
              lixi Li Xi (Inactive)
