[LU-9711] Entry '..' not shown in lustre directory Created: 26/Jun/17  Updated: 27/Jun/17  Resolved: 26/Jun/17

Status: Closed
Project: Lustre
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Major
Reporter: nasf (Inactive) Assignee: nasf (Inactive)
Resolution: Not a Bug Votes: 0
Labels: None

Issue Links:
Duplicate
Severity: 3
Rank (Obsolete): 9223372036854775807

 Description   

A customer reported a directory in which the '..' entry is not present:

pwd; stat .; ls -la 
/lustre/home/i200/mjf-i200/work/HYDRA_nodotdot 
File: '.' 
Size: 4096 Blocks: 8 IO Block: 4096 directory 
Device: b40e0b0h/188801200d Inode: 216173336164617001 Links: 2 
Access: (0750/drwxr-x---) Uid: (29114/mjf-i200) Gid: (20002/ i200) 
Access: 2017-04-17 10:50:59.000000000 +0100 
Modify: 2017-04-17 10:49:07.000000000 +0100 
Change: 2017-04-17 10:50:48.000000000 +0100 
Birth: - 
total 4 
drwxr-x--- 2 mjf-i200 i200 4096 Apr 17 10:49 . 
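
The missing '..' above is what ls -la reports via readdir(). As a minimal, hypothetical check (not part of Lustre or of the original report), the following C program confirms the same symptom directly with opendir()/readdir():

/* Sketch: report whether readdir() returns "." and ".." for a path. */
#include <dirent.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
        const char *path = argc > 1 ? argv[1] : ".";
        int seen_dot = 0, seen_dotdot = 0;
        DIR *dir = opendir(path);
        struct dirent *de;

        if (dir == NULL) {
                perror("opendir");
                return 1;
        }

        while ((de = readdir(dir)) != NULL) {
                if (strcmp(de->d_name, ".") == 0)
                        seen_dot = 1;
                else if (strcmp(de->d_name, "..") == 0)
                        seen_dotdot = 1;
        }
        closedir(dir);

        printf("%s: '.' %sreturned, '..' %sreturned\n",
               path, seen_dot ? "" : "NOT ", seen_dotdot ? "" : "NOT ");
        return 0;
}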

Checking on the MDT with debugfs, the '..' entry seems to be there:

[root@cirrus-mds1 ~]# debugfs -cR 'ls -l ROOT/home/i200/mjf-i200/work/HYDRA_nodotdot/' /dev/mapper/vg_mdt0000_indy2lfs-mdt0000
debugfs 1.42.13.wc5.ddn1 (15-Apr-2016)
/dev/mapper/vg_mdt0000_indy2lfs-mdt0000: catastrophic mode - not reading inode or group bitmaps
 244739512   40750 (2)  29114  20002    4096 17-Apr-2017 23:38 .
 244739468   40750 (2)  29114  20002    4096 19-Apr-2017 10:10 ..
[root@cirrus-mds1 ~]# debugfs -cR 'stat ROOT/home/i200/mjf-i200/work/HYDRA_nodotdot/' /dev/mapper/vg_mdt0000_indy2lfs-mdt0000
debugfs 1.42.13.wc5.ddn1 (15-Apr-2016)
/dev/mapper/vg_mdt0000_indy2lfs-mdt0000: catastrophic mode - not reading inode or group bitmaps
Inode: 244739512   Type: directory    Mode:  0750   Flags: 0x0
Generation: 1458826816    Version: 0x00000027:1d3fe408
User: 29114   Group: 20002   Project:     0   Size: 4096
File ACL: 0    Directory ACL: 0
Links: 2   Blockcount: 8
Fragment:  Address: 0    Number: 0    Size: 0
 ctime: 0x58f54401:00000000 -- Mon Apr 17 23:38:57 2017
 atime: 0x58f9d240:00000000 -- Fri Apr 21 10:34:56 2017
 mtime: 0x58f54401:00000000 -- Mon Apr 17 23:38:57 2017
crtime: 0x58d10bbd:6e447984 -- Tue Mar 21 11:17:17 2017
Size of extra inode fields: 32
Extended attributes stored in inode body:
  lma = "00 00 00 00 00 00 00 00 00 81 00 00 03 00 00 00 29 cb 00 00 00 00 00 00 " (24)
  lma: fid=[0x300008100:0xcb29:0x0] compat=0 incompat=0
  link = "df f1 ea 11 01 00 00 00 38 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 20 00 00 00 03 00 00 81 00 00 01 8d 14 00 00 00 00 48 59 44 52 41 5f 6e 6f 64 6f 74 64 6f 74 " (56)
BLOCKS:
(0):122441712
TOTAL: 1
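
The "lma" xattr above is already decoded by debugfs into fid=[0x300008100:0xcb29:0x0]. For reference, the raw 24 bytes map onto the FID as in the following sketch, assuming the usual on-disk layout (u32 compat, u32 incompat, u64 sequence, u32 oid, u32 version, all little-endian) and a little-endian host; the bytes are copied from the debugfs output of HYDRA_nodotdot:

/* Sketch: decode the 24-byte "lma" xattr into a FID. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
        const uint8_t lma[24] = {
                0x00, 0x00, 0x00, 0x00,  0x00, 0x00, 0x00, 0x00,
                0x00, 0x81, 0x00, 0x00,  0x03, 0x00, 0x00, 0x00,
                0x29, 0xcb, 0x00, 0x00,  0x00, 0x00, 0x00, 0x00,
        };
        uint32_t compat, incompat, oid, ver;
        uint64_t seq;

        memcpy(&compat,   lma +  0, 4);
        memcpy(&incompat, lma +  4, 4);
        memcpy(&seq,      lma +  8, 8);
        memcpy(&oid,      lma + 16, 4);
        memcpy(&ver,      lma + 20, 4);

        /* Prints: fid=[0x300008100:0xcb29:0x0] compat=0 incompat=0 */
        printf("fid=[0x%llx:0x%x:0x%x] compat=%u incompat=%u\n",
               (unsigned long long)seq, oid, ver, compat, incompat);
        return 0;
}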

Looking at the parent directory with debugfs:

 [root@cirrus-mds1 ~]# debugfs -cR 'ls -l ROOT/home/i200/mjf-i200/work/' /dev/mapper/vg_mdt0000_indy2lfs-mdt0000
debugfs 1.42.13.wc5.ddn1 (15-Apr-2016)
/dev/mapper/vg_mdt0000_indy2lfs-mdt0000: catastrophic mode - not reading inode or group bitmaps
 244739468   40750 (2)  29114  20002    4096 19-Apr-2017 10:10 .
 236208657   40750 (18)  29114  20002    4096 20-Apr-2017 18:15 ..
 244739512   40750 (18)  29114  20002    4096 17-Apr-2017 23:38 HYDRA_nodotdot
 254185195   40750 (18)  29114  20002    4096 17-Apr-2017 23:37 HYDRA_dotdot


 Comments   
Comment by nasf (Inactive) [ 26/Jun/17 ]

Sorry, the issue only exists on b_ieel3_0. Master has already resolved it.

Comment by nasf (Inactive) [ 27/Jun/17 ]

According to the latest implementation on master, if the system was upgraded from Lustre 1.8, or the 2.x system was restored from an MDT file-level backup, then there may be no FID-in-dirent for the ".." entry, because:

1) There may not be enough space to hold the FID-in-dirent after the ".." entry, and we do not want to re-insert the ".." entry, to avoid moving ".." out of the second slot (see the sketch below).
2) Having only the ".." entry without a FID-in-dirent does not noticeably affect readdir() performance.
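
A back-of-the-envelope sketch of the space constraint in point 1. The constants below (an 8-byte dirent header plus the name, rounded up to 4 bytes, and an appended FID costing one length byte plus a 16-byte FID) are assumptions for illustration, not values copied from the Lustre sources:

/* Sketch: compare the dirent size of a plain ".." entry with one that
 * also carries a FID appended after the name. */
#include <stdio.h>

#define DIR_REC_LEN(name_len)   (((name_len) + 8 + 3) & ~3U)
#define PACKED_FID_LEN          (1 + 16)        /* length byte + FID */

int main(void)
{
        unsigned int plain    = DIR_REC_LEN(2);                    /* ".." alone */
        unsigned int with_fid = DIR_REC_LEN(2 + PACKED_FID_LEN);   /* ".." + FID */

        /* Under these assumptions an old '..' slot of 12 bytes cannot
         * be widened in place to hold the FID; '..' would have to move
         * out of the second slot. */
        printf("'..' without FID needs %u bytes, with FID-in-dirent %u bytes\n",
               plain, with_fid);
        return 0;
}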

As I remember, I once made a patch about updating the ".." entry for the rename case. I suspect that your trouble of ".." being relocated to another slot may have been caused by an old, improperly handled rename.
