
implement index range lookup for osd-zfs.

Details


    Description

      ZFS needs an index range lookup for DNE.

          Activity

            [LU-2240] implement index range lookup for osd-zfs.

            Sigh... Well, it let me remove the files oi.7/0x200000007:0x3:0x0, oi.7/0x200000007:0x4:0x0, and oi.7/0x200000007:0x1:0x0 (inode numbers 414211, 414213, and 414209 respectively), but I'm getting ENOENT when removing the others. Using systemtap, I can see it failing in zfs_zget:

            # grove-mds2 /mnt/grove-mds2/mdt0 > stap /usr/share/doc/systemtap-1.6/examples/general/para-callgraph.stp 'module("zfs").function("*")' -c "rm ./oi.7/0x200000007:0x2:0x0/0x1010000"
            
            ... [snip] ...
            
               677 rm(94074):    ->dmu_buf_get_user db_fake=0xffff880d717f1e40
               679 rm(94074):    <-dmu_buf_get_user return=0xffff880d52c28478
               684 rm(94074):    ->sa_get_userdata hdl=0xffff880d52c28478
               687 rm(94074):    <-sa_get_userdata return=0xffff880e6030ba70
               691 rm(94074):    ->sa_buf_rele db=0xffff880d717f1e40 tag=0x0
               694 rm(94074):     ->dbuf_rele db=0xffff880d717f1e40 tag=0x0
               696 rm(94074):      ->dbuf_rele_and_unlock db=0xffff880d717f1e40 tag=0x0
               698 rm(94074):      <-dbuf_rele_and_unlock 
               699 rm(94074):     <-dbuf_rele 
               701 rm(94074):    <-sa_buf_rele 
               703 rm(94074):   <-zfs_zget return=0x2
               707 rm(94074):   ->zfs_dirent_unlock dl=0xffff880f521949c0
               710 rm(94074):   <-zfs_dirent_unlock 
               712 rm(94074):  <-zfs_dirent_lock return=0x2
               714 rm(94074):  ->rrw_exit rrl=0xffff880d5a100290 tag=0xffffffffa0505727
               716 rm(94074):  <-rrw_exit 
               718 rm(94074): <-zfs_remove return=0x2
               720 rm(94074):<-zpl_unlink return=0xfffffffffffffffe
            

            I tried removing the files in the order that they were listed in the "find" command in my previous comment. So the first "rm" for each distinct inode number succeeded, but the following calls for files referencing the same inode number failed. Perhaps due to incorrect accounting of the number of links for a given inode?

            In case it's useful, the zdb info regarding these objects is below (AFAIK each inode number corresponds to its DMU object number):

            # grove-mds2 /mnt/grove-mds2/mdt0 > zdb grove-mds2/mdt0 414209 414211 414213
            Dataset grove-mds2/mdt0 [ZPL], ID 45, cr_txg 110, 4.05G, 2088710 objects
            
                Object  lvl   iblk   dblk  dsize  lsize   %full  type
                414209    1    16K   128K   128K   128K  100.00  ZFS plain file
                414211    2     4K     4K     4K     8K  100.00  ZFS directory
                414213    2     4K     4K     4K     8K  100.00  ZFS directory
            

            I'm beginning to think a reformat is our best option moving forward...

            prakash Prakash Surya (Inactive) added a comment

            Yes, I'd suggest removing them, and I'd suggest taking a snapshot just before that. Unfortunately, I'm unable to reproduce the case locally:
            I can't generate such an image (I can't even find the code in gerrit using 0x200000007 for quota).

            bzzz Alex Zhuravlev added a comment

            Here's what I see on the MDS:

            # grove-mds2 /tmp/zfs > ls -li oi.3/0x200000003* oi.5/0x200000005* oi.6/0x200000006* oi.7/0x200000007* seq* quota*
               176 -rw-r--r-- 1 root root  8 Dec 31  1969 oi.3/0x200000003:0x1:0x0
               180 -rw-r--r-- 1 root root  0 Dec 31  1969 oi.3/0x200000003:0x3:0x0
            414212 -rw-r--r-- 1 root root  2 Dec 31  1969 oi.5/0x200000005:0x1:0x0
            414214 -rw-r--r-- 1 root root  2 Dec 31  1969 oi.5/0x200000005:0x2:0x0
            417923 -rw-r--r-- 1 root root  2 Dec 31  1969 oi.6/0x200000006:0x10000:0x0
            417924 -rw-r--r-- 1 root root  2 Dec 31  1969 oi.6/0x200000006:0x1010000:0x0
            417927 -rw-r--r-- 1 root root  2 Dec 31  1969 oi.6/0x200000006:0x1020000:0x0
            417926 -rw-r--r-- 1 root root  2 Dec 31  1969 oi.6/0x200000006:0x20000:0x0
            414209 -rw-r--r-- 1 root root  8 Dec 31  1969 oi.7/0x200000007:0x1:0x0
            414211 -rw-r--r-- 1 root root  2 Dec 31  1969 oi.7/0x200000007:0x3:0x0
            414213 -rw-r--r-- 1 root root  2 Dec 31  1969 oi.7/0x200000007:0x4:0x0
            414209 -rw-r--r-- 1 root root  8 Dec 31  1969 seq-200000007-lastid
               173 -rw-rw-rw- 1 root root 24 Dec 31  1969 seq_ctl
               174 -rw-rw-rw- 1 root root 24 Dec 31  1969 seq_srv
            
            oi.3/0x200000003:0x2:0x0:
            total 0
            
            oi.3/0x200000003:0x4:0x0:
            total 9
            417925 drwxr-xr-x 2 root root 2 Dec 31  1969 dt-0x0
            417922 drwxr-xr-x 2 root root 2 Dec 31  1969 md-0x0
            
            oi.3/0x200000003:0x5:0x0:
            total 9
            417923 -rw-r--r-- 1 root root 2 Dec 31  1969 0x10000
            417924 -rw-r--r-- 1 root root 2 Dec 31  1969 0x1010000
            
            oi.3/0x200000003:0x6:0x0:
            total 9
            417927 -rw-r--r-- 1 root root 2 Dec 31  1969 0x1020000
            417926 -rw-r--r-- 1 root root 2 Dec 31  1969 0x20000
            
            oi.7/0x200000007:0x2:0x0:
            total 18
            414211 -rw-r--r-- 1 root root 2 Dec 31  1969 0x10000
            414212 -rw-r--r-- 1 root root 2 Dec 31  1969 0x10000-MDT0000
            414213 -rw-r--r-- 1 root root 2 Dec 31  1969 0x1010000
            414214 -rw-r--r-- 1 root root 2 Dec 31  1969 0x1010000-MDT0000
            
            quota_master:
            total 9
            417925 drwxr-xr-x 2 root root 2 Dec 31  1969 dt-0x0
            417922 drwxr-xr-x 2 root root 2 Dec 31  1969 md-0x0
            
            quota_slave:
            total 18
            414211 -rw-r--r-- 1 root root 2 Dec 31  1969 0x10000
            414212 -rw-r--r-- 1 root root 2 Dec 31  1969 0x10000-MDT0000
            414213 -rw-r--r-- 1 root root 2 Dec 31  1969 0x1010000
            414214 -rw-r--r-- 1 root root 2 Dec 31  1969 0x1010000-MDT0000
            

            I'm somewhat guessing as to what the on disk format is supposed to look like, but it does appear to be using the new quota sequence numbers (0x200000005ULL and 0x200000006ULL).

            So, does this mean I can go ahead and remove these files:

            # grove-mds2 /tmp/zfs > find . -inum 414209 -o -inum 414211 -o -inum 414213
            ./oi.7/0x200000007:0x3:0x0
            ./oi.7/0x200000007:0x4:0x0
            ./oi.7/0x200000007:0x2:0x0/0x1010000
            ./oi.7/0x200000007:0x2:0x0/0x10000
            ./oi.7/0x200000007:0x1:0x0
            ./seq-200000007-lastid
            ./quota_slave/0x1010000
            ./quota_slave/0x10000
            

            ?

            prakash Prakash Surya (Inactive) added a comment

            Could you check whether your filesystem has been using the new quota files now? They're supposed to be in the following sequences:
            FID_SEQ_QUOTA = 0x200000005ULL,
            FID_SEQ_QUOTA_GLB = 0x200000006ULL,

            If so, then it should be OK to just remove the old quota files in the 0x200000007 sequence.

            bzzz Alex Zhuravlev added a comment

            seq-<SEQ>-lastid stores the last used ID in sequence <SEQ>

            bzzz Alex Zhuravlev added a comment

            Sigh, this is the difficulty with following the development branch - you are picking up all of the dirty laundry that is normally put away before the release is made. Typically, we don't want anyone to use development releases for long-lived filesystems for exactly this reason.

            Hopefully Mike or Alex can figure out something to resolve this easily.

            adilger Andreas Dilger added a comment

            Right, I confirmed in a VM that deleting oi.7/0x200000007:0x1:0x0 avoids the crash, and lets it successfully add the new root fid:

            $ ls -lid ./oi.7/0x200000007:0x1:0x0 ROOT
            177 drwxr-xr-x 158 root root 2 Dec  6 12:55 ./oi.7/0x200000007:0x1:0x0/
            177 drwxr-xr-x 158 root root 2 Dec  6 12:55 ROOT/
            

            I'm not sure what the seq-200000007-lastid is for, or if it's safe to remove its OI entry. The MDT for the production filesystem has a similar file, but in a non-colliding sequence:

            # grove-mds1 /mnt/mdtsnap > ls -lid seq-200000003-lastid  oi.3/0x200000003:0x1:0x0
            207172557 -rw-r--r-- 1 root root 8 Dec 31  1969 oi.3/0x200000003:0x1:0x0
            207172557 -rw-r--r-- 1 root root 8 Dec 31  1969 seq-200000003-lastid
            

            So hopefully we won't run into a problem there.

            It would be nice if the conversion code handled these collisions. But since there should be very few affected filesystems in the wild, we could probably live with a manual workaround.

            nedbass Ned Bass (Inactive) added a comment

            I'm curious if this is the commit that is biting us:

            commit 5b64ac7f7cf2767acb75b872eaffcf6d255d0501
            Author: Mikhail Pershin <tappro@whamcloud.com>
            Date:   Thu Oct 4 14:24:43 2012 +0400
            
                LU-1943 class: FID_SEQ_LOCAL_NAME set to the Orion value
                
                Keep the same numbers for Orion and master for compatibility
                
                Signed-off-by: Mikhail Pershin <tappro@whamcloud.com>
                Change-Id: I318eba9860be7849ee4a8d828cf27e5fb91164e9
                Reviewed-on: http://review.whamcloud.com/4179
                Reviewed-by: Andreas Dilger <adilger@whamcloud.com>
                Tested-by: Hudson
                Tested-by: Maloo <whamcloud.maloo@gmail.com>
                Reviewed-by: Alex Zhuravlev <bzzz@whamcloud.com>
            
            diff --git a/lustre/include/lustre/lustre_idl.h b/lustre/include/lustre/lustre_idl.h
            index 4705c1d..bae42d0 100644
            --- a/lustre/include/lustre/lustre_idl.h
            +++ b/lustre/include/lustre/lustre_idl.h
            @@ -421,13 +421,12 @@ enum fid_seq {
                    /* sequence for local pre-defined FIDs listed in local_oid */
                     FID_SEQ_LOCAL_FILE = 0x200000001ULL,
                     FID_SEQ_DOT_LUSTRE = 0x200000002ULL,
            -        /* XXX 0x200000003ULL is reserved for FID_SEQ_LLOG_OBJ */
                    /* sequence is used for local named objects FIDs generated
                     * by local_object_storage library */
            +       FID_SEQ_LOCAL_NAME = 0x200000003ULL,
                     FID_SEQ_SPECIAL    = 0x200000004ULL,
                     FID_SEQ_QUOTA      = 0x200000005ULL,
                     FID_SEQ_QUOTA_GLB  = 0x200000006ULL,
            -       FID_SEQ_LOCAL_NAME = 0x200000007ULL,
                     FID_SEQ_NORMAL     = 0x200000400ULL,
                     FID_SEQ_LOV_DEFAULT= 0xffffffffffffffffULL
             };
            
            prakash Prakash Surya (Inactive) added a comment

            I think we're getting the -EEXIST (i.e. -17) error back from zap_add when we try inserting the new root fid (0x200000007:0x1:0x0) into the OI, since it already exists.

            prakash Prakash Surya (Inactive) added a comment

            Mounting a snapshot of the MDT through the POSIX layer, I found that objects in the quota_slave directory and the file seq-200000007-lastid are using FID_SEQ_ROOT. Note the matching inode numbers:

            $ ls -li oi.7/0x200000007* seq-200000007-lastid quota_slave/
            414209 -rw-r--r-- 1 root root 8 Dec 31  1969 oi.7/0x200000007:0x1:0x0
            414211 -rw-r--r-- 1 root root 2 Dec 31  1969 oi.7/0x200000007:0x3:0x0
            414213 -rw-r--r-- 1 root root 2 Dec 31  1969 oi.7/0x200000007:0x4:0x0
            414209 -rw-r--r-- 1 root root 8 Dec 31  1969 seq-200000007-lastid
            
            oi.7/0x200000007:0x2:0x0:
            total 22K
            414211 -rw-r--r-- 1 root root 2 Dec 31  1969 0x10000
            414212 -rw-r--r-- 1 root root 2 Dec 31  1969 0x10000-MDT0000
            414213 -rw-r--r-- 1 root root 2 Dec 31  1969 0x1010000
            414214 -rw-r--r-- 1 root root 2 Dec 31  1969 0x1010000-MDT0000
            
            quota_slave/:
            total 22K
            414211 -rw-r--r-- 1 root root 2 Dec 31  1969 0x10000
            414212 -rw-r--r-- 1 root root 2 Dec 31  1969 0x10000-MDT0000
            414213 -rw-r--r-- 1 root root 2 Dec 31  1969 0x1010000
            414214 -rw-r--r-- 1 root root 2 Dec 31  1969 0x1010000-MDT0000
            
            nedbass Ned Bass (Inactive) added a comment

            And with D_OTHER enabled, I gathered this message:

            2013-03-20 14:37:14 Lustre: 58602:0:(osd_oi.c:720:osd_convert_root_to_new_seq()) lstest-MDT0000: /ROOT -> [0x200000001:0x6:0x0] -> 177
            
            prakash Prakash Surya (Inactive) added a comment

            People

              bzzz Alex Zhuravlev
              di.wang Di Wang (Inactive)
              Votes: 0
              Watchers: 11
