[LU-9836] Issues with 2.10 upgrade and files missing LMAC_FID_ON_OST flag Created: 04/Aug/17  Updated: 31/Oct/18  Resolved: 25/Jan/18

Status: Resolved
Project: Lustre
Component/s: None
Affects Version/s: Lustre 2.10.0
Fix Version/s: Lustre 2.11.0, Lustre 2.10.4

Type: Bug Priority: Critical
Reporter: Julien Wallior Assignee: nasf (Inactive)
Resolution: Fixed Votes: 0
Labels: None
Environment:

3.10.0-514.21.1.el7_lustre.x86_64


Attachments: File 800a_mount.log.gz     File 800a_mount_patched.log.0.gz     File debugfs_mount_logs.tar.gz     File l210_loop_4g.tar.xz     File ls_output.gz    
Issue Links:
Related
is related to LU-11584 kernel BUG at ldiskfs.h:1907! Resolved
Severity: 3
Rank (Obsolete): 9223372036854775807

 Description   

Last weekend we upgraded our Lustre filesystem from 2.7 to 2.10. After the upgrade we were missing about 36M objects. After a lot of troubleshooting, we ended up running e2fsck (which recovered the objects into lost+found) and ll_recover_lost_found_objs (which moved them back to their proper place in the ldiskfs filesystem). It's worth noting that lfsck couldn't recover the objects from lost+found because of some kind of incompatibility between the objects' EAs and lfsck; details follow.
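
For anyone who ends up in the same state, the recovery described above was roughly the following (a sketch only; device and mount-point names are placeholders, and ll_recover_lost_found_objs must still be shipped with your Lustre tools):

# run only with the OST unmounted; /dev/<ostdev> is a placeholder
e2fsck -f -y /dev/<ostdev>                             # reattaches orphaned objects under lost+found
mount -t ldiskfs /dev/<ostdev> /mnt/ost
ll_recover_lost_found_objs -v -d /mnt/ost/lost+found   # moves objects back under O/<seq>/d<N>/
umount /mnt/ost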

A couple of remarks:
1. It looks like the functionality from ll_recover_lost_found_objs has been moved into the initial OI scrub process, but it would not work for us. We noticed 2 things:
a. in the lab, we recreated the issue (by manually moving objects to lost+found) and osd_initial_OI_scrub() would recover only the first 255 objects. We couldn't figure out why it stopped at 255, and restarting the OST would not recover any more than the initial 255.
b. in prod, osd_initial_OI_scrub() would run but not fix anything. The trace would come back with osd_ios_lf_fill() returning -EINVAL. After troubleshooting this further, it turns out that none of the objects in lost+found have any compat flags set (in particular no LMAC_FID_ON_OST) in the LMA extended attribute, and eventually we end up with osd_get_idif() returning -EINVAL (because __ost_xattr_get() returned 24). We believe all those files were created with Lustre 2.7.
-> This is how far we got troubleshooting those 2 issues. They sound like bugs; we are happy to give more details and/or file a bug report if that helps.
2. Our Lustre has 96 OSTs (id 0 to 97). All of the bad objects were located on 24 of them (id 48 to 71): about 1.5M bad inodes out of 3M per OST. What's special about id 48 to 71 is that those OSTs were reformatted about 6 months ago (with the same ids, but at creation we forgot to add --replace to mkfs or do a writeconf). At the time, we saw some "precreate FID 0x0:3164581 is over 100000 larger than the LAST_ID 0x0:0, only precreating the last 10000 objects." messages in the logs. This sounds like the potential root cause of our issue last week, but we really can't figure out how it would have caused half of the inodes to not get LMAC_FID_ON_OST and to get lost in ldiskfs.
3. After fixing everything, when we run lfsck -t scrub, all the bad objects are checked and reported as failed in oi_scrub (example below; see the command sketch after the output). After digging, this comes down to the same osd_get_idif() function returning -EINVAL. We can fix this by copying the files.
checked: 3383278
updated: 0
failed: 1469776
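
A sketch of the commands involved, with a placeholder fsname and OST index (the parameter path assumes the osd-ldiskfs backend):

lctl lfsck_start -M <fsname>-OST0030 -t scrub          # start OI scrub on one OST (run on its OSS)
lctl get_param osd-ldiskfs.<fsname>-OST0030.oi_scrub   # prints the checked/updated/failed counters shown above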

Overall, we just wanted to report this on the mailing list in case someone else runs into this issue, and to see whether we should open bugs about 1.a and 1.b. We were also curious whether anybody has an explanation of how we got here and whether 2 could explain it.

This is pretty dense, but overall it reports 3 issues:

  • osd_initial_OI_scrub() seems to recover at most 255 files from lost+found, never more
  • somehow we got some files without compat flags that do not get processed by osd_initial_OI_scrub() and are reported as failed by lfsck -t scrub
  • 24 of our 96 OSTs somehow ended up with about 1.5M unattached inodes each, and we're curious how this could have happened.


 Comments   
Comment by Peter Jones [ 04/Aug/17 ]

Fan Yong

Could you please advise on this issue?

Julien

Can you please confirm the original configuration before the upgrade? This was RHEL 6.x and vanilla community 2.7 (i.e. no patches)? Which version of e2fsprogs are you using?

Thanks

Peter

Comment by Julien Wallior [ 04/Aug/17 ]

Before the upgrade we were on lustre 2.7 with kernel 2.6.32_504.8.1.el6_lustre.x86_64. Vanilla community 2.7.0 + RHEL6.

Currently we have e2fsprogs-1.42.13.wc5-7.el7.x86_64.

Comment by nasf (Inactive) [ 08/Aug/17 ]

a. in the lab, we recreated the issue (by manually moving objects to lost+found) and osd_initial_OI_scrub() would recover only the first 255 objects. We couldn't figure out why it stopped at 255, and restarting the OST would not recover any more than the initial 255.
b. in prod, osd_initial_OI_scrub() would run but not fix anything. The trace would come back with osd_ios_lf_fill() returning -EINVAL. After troubleshooting this further, it turns out that none of the objects in lost+found have any compat flags set (in particular no LMAC_FID_ON_OST) in the LMA extended attribute, and eventually we end up with osd_get_idif() returning -EINVAL (because __ost_xattr_get() returned 24). We believe all those files were created with Lustre 2.7.

If an OST-object is created under Lustre-2.7, it should have the compat flag LMAC_FID_ON_OST; at least that is true in my local test. On the other hand, since some orphans (255) could be recovered from lost+found, those OST-objects must have the LMAC_FID_ON_OST flag. According to your description, it seems that the remaining non-recovered OST-objects have no LMAC_FID_ON_OST flag, right? If so, then it is difficult to explain why some OST-objects have the LMAC_FID_ON_OST flag but others do not, although all of them were created under Lustre-2.7.

Julien,

Would you please show me two OST-objects via debugfs -c -R "stat": one that was recovered by the initial OI scrub, and one that was NOT.
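
For example, OST-objects live under /O/<seq>/d<objid mod 32>/<objid> on the ldiskfs backend, so for a hypothetical object id 5131 in sequence 0 the command would look like (device name is a placeholder):

debugfs -c -R "stat /O/0/d11/5131" /dev/<ostdev>    # 5131 mod 32 = 11

In the stat output, the parsed "lma: ... compat=..." line shows the compat mask; if I recall the 2.10 headers correctly, LMAC_FID_ON_OST is the 0x08 bit, so objects with that bit clear are the ones of interest here.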

Thanks!

Comment by Julien Wallior [ 08/Aug/17 ]

I think we are mixing 2 issues.

On one filesystem (prod), ldiskfs was corrupted somehow and we had to run e2fsck to recover lost inodes into lost+found. None of the inodes in lost+found had LMAC_FID_ON_OST and none were recovered by initial_OI_scrub. They look like this:
debugfs: stat <10238>
Inode: 10238 Type: regular Mode: 0666 Flags: 0x80000
Generation: 2018246403 Version: 0x00000004:0a633f72
User: 0 Group: 0 Size: 0
File ACL: 0 Directory ACL: 0
Links: 1 Blockcount: 0
Fragment: Address: 0 Number: 0 Size: 0
ctime: 0x598065da:274d5320 – Tue Aug 1 07:28:26 2017
atime: 0x598065ac:95ec7cb8 – Tue Aug 1 07:27:40 2017
mtime: 0x595d8f8e:00000000 – Wed Jul 5 21:17:02 2017
crtime: 0x595dc9b9:6dd93e9c – Thu Jul 6 01:25:13 2017
Size of extra inode fields: 32
Extended attributes stored in inode body:
lma = "00 00 00 00 00 00 00 00 00 04 00 80 0e 00 00 00 7e 47 bc 03 00 00 00 00 " (24)
lma: fid=[0xe80000400:0x3bc477e:0x0] compat=0 incompat=0
fid = "01 65 00 40 0e 00 00 00 64 13 01 00 01 00 00 00 " (16)
fid: parent=[0xe40006501:0x11364:0x0] stripe=1
EXTENTS:

The creation time is July 2017 and we were definitely running 2.7 at that time.
We don't know how ldiskfs got corrupted, but it seems weird that the corruption would have both detached the inodes from the directory structure AND removed the LMAC_FID_ON_OST flag.

The second issue was happening on another filesystem (lab). As we figured out what had happened in prod, we tried to reproduce things in the lab to understand them better. One of the experiments we did was: mount the OST as ldiskfs and move ~500 objects from O/<group>/d<mod>/<obj> to lost+found. At that point, we didn't know about the LMAC_FID_ON_OST flag and, in retrospect, all the objects had it. When starting the Lustre filesystem, initial_OI_scrub claimed to have recovered 255 objects and we could confirm there were only ~250 objects left in lost+found. We tried umount/mount and it would not recover any more objects.
The debugfs of those objects looks like this:
debugfs: stat #18991
Inode: 18991 Type: regular Mode: 0666 Flags: 0x80000
Generation: 491746900 Version: 0x00000004:0005dd90
User: 3162 Group: 200 Size: 8388608
File ACL: 0 Directory ACL: 0
Links: 1 Blockcount: 16384
Fragment: Address: 0 Number: 0 Size: 0
ctime: 0x5980f0ac:d732dca0 – Tue Aug 1 17:20:44 2017
atime: 0x00000000:00000000 – Wed Dec 31 19:00:00 1969
mtime: 0x597f23ed:00000000 – Mon Jul 31 08:34:53 2017
crtime: 0x597f23e6:20238468 – Mon Jul 31 08:34:46 2017
Size of extra inode fields: 32
Extended attributes stored in inode body:
lma = "18 00 00 00 00 00 00 00 00 00 04 00 01 00 00 00 a0 49 00 00 00 00 00 00 a7 13 00 40 04 00 00 00 6e 1d 00 00 00 00 01 00 00 00 10 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 " (64)
lma: fid=[0x100040000:0x49a0:0x0] compat=18 incompat=0
EXTENTS:
(0-2047):79423488-79425535

Let me know if that helps.

Comment by nasf (Inactive) [ 08/Aug/17 ]

For the test on the lab system, did you use Lustre-2.7 or Lustre-2.10 when mounting the OST (and then find ~250 objects unrecovered)?

Comment by Julien Wallior [ 08/Aug/17 ]

The lab system was running 2.10 (but I can't say whether the files were created with 2.7 or 2.10).

Comment by nasf (Inactive) [ 08/Aug/17 ]

What is the output on lab OST0004?

debugfs "stat /O/0/d0/18848"
Comment by Julien Wallior [ 08/Aug/17 ]

I couldn't find that object on OST0004, but I found it on OST0006 if that helps.

9:28 wallior@lstosstestbal801 /proc/fs/lustre% cat osd-ldiskfs/dlustre-OST0004/mntdev 
/dev/mapper/801a
9:29 wallior@lstosstestbal801 /proc/fs/lustre% sudo debugfs -c /dev/mapper/801a
debugfs 1.42.13.wc5 (15-Apr-2016)
/dev/mapper/801a: catastrophic mode - not reading inode or group bitmaps
debugfs:  stat /O/0/d0/18848
/O/0/d0/18848: File not found by ext2_lookup 

9:30 wallior@lstosstestbal801 /proc/fs/lustre% sudo debugfs -c /dev/mapper/801c
debugfs 1.42.13.wc5 (15-Apr-2016)
/dev/mapper/801c: catastrophic mode - not reading inode or group bitmaps
debugfs:  stat /O/0/d0/18848
Inode: 19010   Type: regular    Mode:  0666   Flags: 0x80000
Generation: 2666600371    Version: 0x00000004:000074ba
User:  3162   Group:   200   Size: 4194304
File ACL: 0    Directory ACL: 0
Links: 1   Blockcount: 8192
Fragment:  Address: 0    Number: 0    Size: 0
 ctime: 0x597f2343:00000000 -- Mon Jul 31 08:32:03 2017
 atime: 0x00000000:00000000 -- Wed Dec 31 19:00:00 1969
 mtime: 0x597f2343:00000000 -- Mon Jul 31 08:32:03 2017
crtime: 0x597f2340:d8d5faec -- Mon Jul 31 08:32:00 2017
Size of extra inode fields: 32
Extended attributes stored in inode body: 
  lma = "18 00 00 00 00 00 00 00 00 00 06 00 01 00 00 00 a0 49 00 00 00 00 00 00 a6 13 00 40 04 00 00 00 fc e7 01 00 00 00 01 00 00 00 10 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 " (64)
  lma: fid=[0x100060000:0x49a0:0x0] compat=18 incompat=0
EXTENTS:
(0-1023):28288000-28289023
Comment by nasf (Inactive) [ 08/Aug/17 ]

Non-existence is the expected result; it means the OI slot on OST0004 for the object (inode #18991) has not been reused by others.

Comment by Julien Wallior [ 08/Aug/17 ]

Oh, I see. Sure, that is because we had moved that object by hand in an attempt to recreate the state prod was in after running e2fsck.

So initial_OI_scrub() should be moving that file back into place, no?

Comment by nasf (Inactive) [ 08/Aug/17 ]

In theory, it should move the entry from /lost+found back to its original OI slot. I am studying the related logic.

Comment by nasf (Inactive) [ 09/Aug/17 ]

Julien,

I have run some tests locally. First, I created 300 files under Lustre-2.7 + el6.6 with loop devices. Then I remounted the OST as "ldiskfs" and moved all related OST-objects from the "O/0/dN" directories to "lost+found". Then I stopped the Lustre system, copied (scp) the Lustre devices (loop files) to another server, and mounted it as "lustre" under Lustre-2.10 + el7.3. When the OST mounted up, all the OST-objects had been recovered from lost+found to their original OI slots. So I could not reproduce your trouble.

Since you have both the Lustre-2.7 (prod) and Lustre-2.10 (lab) environments, would you please repeat my test and check whether you can reproduce the issues? If you can reproduce them, would you please check whether the OST-objects on the Lustre-2.7 (prod) system have the LMAC_FID_ON_OST flag before upgrading to Lustre-2.10? If you cannot reproduce the issue with my test, would you please show me (your way) how to reproduce it?

Thanks!

Comment by Julien Wallior [ 10/Aug/17 ]

@nasf –
We are running some tests in the lab. We will try to reproduce it and let you know what we get. Is there any documentation about how the extended attribute format has changed over time? Some LMAs are 24 bytes, some 64; some have a FID, some do not.

Comment by nasf (Inactive) [ 15/Aug/17 ]

About the LMA size:
1) For files created before Lustre-2.10, its size is 24 bytes.
2) Since Lustre-2.10, with PFL introduced, more information may be stored in the LMA, so its size becomes 64 bytes.
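
If I read the 2.10 headers correctly, the 24-byte form corresponds to struct lustre_mdt_attrs, and the 64-byte form is the extended variant that appends the parent FID and PFL component information. A quick way to compare the two on a backend mounted as ldiskfs (a sketch; the object path is a placeholder, and the attribute is expected in the trusted namespace):

mount -t ldiskfs /dev/<ostdev> /mnt/ost
getfattr -e hex -n trusted.lma /mnt/ost/O/0/d11/5131   # 24-byte value: pre-2.10 LMA
                                                       # 64-byte value: LMA extended with parent FID / PFL info
umount /mnt/ost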

Comment by Tim McMullan [ 15/Aug/17 ]

Hey @nasf, I've been working with Julien on this in the lab, trying to reproduce it in a simpler environment than what I've been testing with previously. I have been able to reproduce the OI scrub process on mount not recovering all files in a pure 2.10 environment.
The setup I used to reproduce it is 1 client, 1 MDS (both MGS and MDT), and 1 OSS (single OST). All are running CentOS 7.3 with RPMs sourced from https://downloads.hpdd.intel.com/public/lustre/latest-feature-release/el7.3.1611/ and the appropriate e2fsprogs on the servers.

This is the procedure I've been following (executed in order, starting with everything unmounted):
MDS:
mkfs.lustre --reformat --fsname=dlustre --mgs /dev/mapper/lustre_mgs
mkfs.lustre --reformat --fsname=dlustre --mdt --mgsnode=10.11.204.4@o2ib --index=0 /dev/mapper/lustre_mdt0
update ldev.conf:
lstmdstestbal800 - mgs /dev/mapper/lustre_mgs
lstmdstestbal800 - mdt0 /dev/mapper/lustre_mdt0
service lustre start

OSS:
mkfs.lustre --reformat --fsname=dlustre --ost --mgsnode=10.11.204.4@o2ib --index=0 /dev/mapper/800a
update ldev.conf:
lstosstestbal800 - ost0 /dev/mapper/800a
service lustre start

CLIENT:
mount -t lustre 10.11.204.4@o2ib0:/dlustre /mnt/dlustre/
(generate files full of random data + checksums)
umount /mnt/dlustre

OSS:
umount /dev/mapper/800a
mount -t ldiskfs /dev/mapper/800a /mnt/debug/800a
(mv 512+ files from /mnt/debug/800a/O to /mnt/debug/800a/lost+found)
umount /dev/mapper/800a
mount -t lustre /dev/mapper/800a /mnt/dlustre/800a

CLIENT:
mount -t lustre 10.11.204.4@o2ib0:/dlustre /mnt/dlustre/
(verify checksums, many files won't open)
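
The "(generate files ...)" and "(verify checksums ...)" steps above were roughly like the following sketch (file counts, sizes and the dir.4 naming are illustrative, not the exact script used):

mkdir /mnt/dlustre/dir.4 && cd /mnt/dlustre/dir.4
for i in $(seq 1 1024); do dd if=/dev/urandom of=file.$i bs=1M count=4; done
sha1sum file.* > ../dir.4.sha1sum                      # record checksums while everything is healthy
# later, after the OST has been remounted:
cd /mnt/dlustre/dir.4 && sha1sum -c ../dir.4.sha1sum   # damaged objects show up as read errors / FAILED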

Comment by nasf (Inactive) [ 16/Aug/17 ]

mcmult, thanks for the update. Questions:
1) Did you sync on the client before umount?
2) "mv 512+ files from /mnt/debug/800a/O to /mnt/debug/800a/lost+found": did you move some subdirectory under "/O", or only leaf objects?
3) Would you please select a file that cannot be opened, find its OST-object via "lfs getstripe", and then stat that OST-object via debugfs on the OST?

Thanks!

Comment by Tim McMullan [ 17/Aug/17 ]

Sure thing!
1) I've done it both with and without syncing on the client. For this test I did sync before umount.
2) I have not been moving leaves for this.
3)
#lfs getstripe output:
lfs getstripe ./dir.4.sha1sum
./dir.4.sha1sum
lmm_stripe_count: 1
lmm_stripe_size: 1048576
lmm_pattern: 1
lmm_layout_gen: 0
lmm_stripe_offset: 0
obdidx objid objid group
0 5131 0x140b 0

#debugfs stat output, both in its correct location and in the lost+found
debugfs: stat O/0/d11/5131
O/0/d11/5131: File not found by ext2_lookup
debugfs: stat lost+found/#5223
Inode: 5223 Type: regular Mode: 0666 Flags: 0x80000
Generation: 2167043536 Version: 0x00000001:0000c0f3
User: 3162 Group: 200 Size: 101293
File ACL: 0 Directory ACL: 0
Links: 1 Blockcount: 200
Fragment: Address: 0 Number: 0 Size: 0
ctime: 0x59958791:af3313f4 – Thu Aug 17 08:09:53 2017
atime: 0x00000000:00000000 – Wed Dec 31 19:00:00 1969
mtime: 0x599586d6:00000000 – Thu Aug 17 08:06:46 2017
crtime: 0x599586d6:113e5010 – Thu Aug 17 08:06:46 2017
Size of extra inode fields: 32
Extended attributes stored in inode body:
lma = "08 00 00 00 00 00 00 00 00 00 00 00 01 00 00 00 0b 14 00 00 00 00 00 00 " (24)
lma: fid=[0x100000000:0x140b:0x0] compat=8 incompat=0
fid = "01 04 00 00 02 00 00 00 17 14 00 00 00 00 00 00 00 00 10 00 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 " (44)
fid: objid=4296015872 seq=0 parent=[0x200000401:0x1417:0x0] stripe=0
EXTENTS:
(0-24):9822180-9822204

Thanks!

Comment by nasf (Inactive) [ 17/Aug/17 ]

mcmult, would you please umount the OST, enable -1 level Lustre kernel debug logs on the OST, and then mount the OST as "lustre"? Collect the Lustre kernel debug logs just after the mount succeeds and show me the logs. Thanks!
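
A sketch of one way to collect this on the OSS, reusing the device and mount paths from the procedure above (the debug_mb value is an assumption; size the buffer to taste):

umount /mnt/dlustre/800a
lctl set_param debug=-1                 # enable all debug flags
lctl set_param debug_mb=1024            # enlarge the debug buffer so early lines are not lost
lctl clear                              # drop any old log content
mount -t lustre /dev/mapper/800a /mnt/dlustre/800a
lctl dk > /tmp/800a_mount.log           # dump the kernel debug log right after mount returns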

Comment by Tim McMullan [ 17/Aug/17 ]

I've attached the log (800a_mount.log.gz) to the ticket. It should cover from when I issued the mount to about half a second after mount returned. Thanks!

Comment by Tim McMullan [ 25/Aug/17 ]

Hey @nasf, just wondering if this log has been helpful. I can also generate a log on a fresh instance of Lustre and capture it while the recovery processes some files and then stops partway through, if you would like it!

Thanks!

Comment by nasf (Inactive) [ 28/Aug/17 ]
00100000:00000001:7.0:1502990563.221005:0:18533:0:(osd_scrub.c:2331:osd_ios_general_scan()) Process entered
00100000:00000001:7.0:1502990563.221042:0:18533:0:(osd_scrub.c:2105:osd_ios_lf_fill()) Process entered
00100000:00000001:7.0:1502990563.221042:0:18533:0:(osd_scrub.c:2109:osd_ios_lf_fill()) Process leaving (rc=0 : 0 : 0)
00100000:00000001:7.0:1502990563.221043:0:18533:0:(osd_scrub.c:2105:osd_ios_lf_fill()) Process entered
00100000:00000001:7.0:1502990563.221044:0:18533:0:(osd_scrub.c:2109:osd_ios_lf_fill()) Process leaving (rc=0 : 0 : 0)
00100000:00000001:7.0:1502990563.221005:0:18533:0:(osd_scrub.c:2331:osd_ios_general_scan()) Process entered

That means /lost+found only contains the '.' and '..' entries, i.e. it is empty. So if you still have the environment with the partly recovered system, would you please show me the output of:

debugfs -c -R "ls /lost+found/" $device

If the /lost+found is not empty, then please re-collect the Lustre kernel debug logs as you did in the comment https://jira.hpdd.intel.com/browse/LU-9836?focusedCommentId=205627&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-205627

Thanks!

Comment by Tim McMullan [ 28/Aug/17 ]

I've uploaded debugfs_mount_logs.tar.gz, which contains the output of ls through debugfs and the debug log of a mount performed just after running it.

Thanks!

Comment by nasf (Inactive) [ 28/Aug/17 ]

It is strange that debugfs shows that /lost+found is not empty, but the readdir() during mount only found the "." and ".." entries. Currently, I am not sure what caused such strange behavior, but since debugfs parses the directory with its own logic, not the general readdir(), I would suggest mounting the device as "ldiskfs" and double-checking the /lost+found directory. Thanks!

Comment by Tim McMullan [ 28/Aug/17 ]

I just mounted it as ldiskfs and ran ls, and, to make things stranger, I am seeing the same set of objects (ls_output.gz). I see that 8314 is different though: it is a 0-length file. I can try manually restoring it and see if it continues as expected?

Thanks!

Comment by Gerrit Updater [ 28/Aug/17 ]

Fan Yong (fan.yong@intel.com) uploaded a new patch: https://review.whamcloud.com/28757
Subject: LU-9836 scrub: 32bit hash for scanning lost+found
Project: fs/lustre-release
Branch: master
Current Patch Set: 1
Commit: 762d699d30f6023e7b239ed7fd138ea849c766bd

Comment by nasf (Inactive) [ 28/Aug/17 ]

mcmult, honestly, I am not sure why readdir() cannot return name entries from the non-empty lost+found directory. I made a debug patch, 28757. Would you please try it on your current OST image to see whether the items under lost+found can be recovered? Please re-collect the -1 level debug logs on the OST with the patch applied. Thanks!

Comment by Tim McMullan [ 29/Aug/17 ]

I applied the patch and mounted it up; here is the log from the patched mount: 800a_mount_patched.log.0.gz

Thank you!

Comment by nasf (Inactive) [ 30/Aug/17 ]

mcmult, are you using a loop device or a real block device for the test? If it is a loop device, how large is it? And is it possible to upload the 'bad' image?

Comment by Tim McMullan [ 30/Aug/17 ]

This is on a real block device and it is too large to upload reasonably. I will try to recreate it on a loopback device that is small enough to upload here.

Comment by Tim McMullan [ 05/Sep/17 ]

I got it to reproduce on a loopback device. I've attached l210_loop_4g.tar.xz, containing 3 copies of the 4 GB OST image file in various states (freshly written to, after I moved objects to lost+found, and after mounting it for the first time), plus the log from when it mounted.

As an interesting note, I had tried to do this with a much smaller disk and fewer objects and the recovery process worked correctly.  We have been seeing it stop around 250-260.  I tried just moving 300 objects into lost+found and all of them were recovered successfully.

Comment by nasf (Inactive) [ 11/Sep/17 ]

mcmult, where have you uploaded the image l210_loop_4g.tar.xz to? What is the smallest image size (and the smallest number of files) with which you can reproduce the issue?

Comment by Tim McMullan [ 11/Sep/17 ]

Please check again; I had thought the upload had finished, but it hadn't. The file should be here now. Sorry about that!

In testing I had skipped straight from 512MB and 300 files to 4GB and 1024 files since I knew it would show the issue.  Some of our initial tests were done with 512 files and that also showed the issue.

Comment by Gerrit Updater [ 08/Jan/18 ]

Fan Yong (fan.yong@intel.com) uploaded a new patch: https://review.whamcloud.com/30770
Subject: LU-9836 osd-ldiskfs: read directory completely
Project: fs/lustre-release
Branch: master
Current Patch Set: 1
Commit: 9605f06ee38fc8d8b8d7d263e9dc3bb88f2d828d

Comment by nasf (Inactive) [ 08/Jan/18 ]

mcmult, Thanks for your help.

We found the reason why some orphans could not be recovered. The patch https://review.whamcloud.com/30770 is based on master, but it is also applicable to b2_10. You can verify it when you have time.
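
For anyone who wants to test the change on top of b2_10 before it lands in a release, a Gerrit change can typically be fetched like this (a sketch; the patch set number and build steps are assumptions):

git clone git://git.whamcloud.com/fs/lustre-release.git && cd lustre-release
git checkout b2_10
git fetch https://review.whamcloud.com/fs/lustre-release refs/changes/70/30770/1
git cherry-pick FETCH_HEAD              # resolve any conflicts, then rebuild/install the server packages as usual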

Thanks!

Comment by Tim McMullan [ 19/Jan/18 ]

Thank you for the patch! We were able to test it and it resolved the issue for us.

Thanks again!

Comment by Gerrit Updater [ 25/Jan/18 ]

Oleg Drokin (oleg.drokin@intel.com) merged in patch https://review.whamcloud.com/30770/
Subject: LU-9836 osd-ldiskfs: read directory completely
Project: fs/lustre-release
Branch: master
Current Patch Set:
Commit: 1e2cd1c7025080879a27f9ad9a3896fd3e0e8753

Comment by Gerrit Updater [ 25/Jan/18 ]

Minh Diep (minh.diep@intel.com) uploaded a new patch: https://review.whamcloud.com/31019
Subject: LU-9836 osd-ldiskfs: read directory completely
Project: fs/lustre-release
Branch: b2_10
Current Patch Set: 1
Commit: 53540def8b7ad887b49921028d8378accae22698

Comment by Gerrit Updater [ 09/Feb/18 ]

John L. Hammond (john.hammond@intel.com) merged in patch https://review.whamcloud.com/31019/
Subject: LU-9836 osd-ldiskfs: read directory completely
Project: fs/lustre-release
Branch: b2_10
Current Patch Set:
Commit: fc862c92d8cad04bceea68823d136d9d8678cc98
