[LU-1512] OI leaks Created: 12/Jun/12  Updated: 04/Mar/16  Resolved: 04/Mar/16

Status: Closed
Project: Lustre
Component/s: None
Affects Version/s: Lustre 2.3.0, Lustre 2.1.3, Lustre 2.1.6
Fix Version/s: Lustre 2.3.0, Lustre 2.4.0

Type: Bug Priority: Major
Reporter: Brian Murrell (Inactive) Assignee: nasf (Inactive)
Resolution: Fixed Votes: 0
Labels: mq213, mq313
Environment:

b2_1 g636ddbf


Severity: 3
Rank (Obsolete): 4236

 Description   

I have a smallish filesystem to which I only allocated a 5GB MDT since the overall dataset was always intended to be very small. This filesystem is simply being used to add and remove files in a loop with something along the lines of:

while true; do
    cp -a /lib /mnt/lustre/foo
    rm -rf /mnt/lustre/foo
done

It seems in doing this I have filled up my MDT with an "oi.16" file that is now 94% of the space of the MDT:

# stat /mnt/lustre/mdt/oi.16 
  File: `/mnt/lustre/mdt/oi.16'
  Size: 4733702144	Blocks: 9254568    IO Block: 4096   regular file
Device: fd05h/64773d	Inode: 13          Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2012-05-27 11:55:00.175323551 +0000
Modify: 2012-05-27 11:55:00.175323551 +0000
Change: 2012-05-27 11:55:00.175323551 +0000

# df -k /mnt/lustre/mdt/
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/LustreVG-mdt0
                       5240128   5240128         0 100% /mnt/lustre/mdt

# ls -ls /mnt/lustre/mdt/oi.16 
4627284 -rw-r--r-- 1 root root 4733702144 May 27 11:55 /mnt/lustre/mdt/oi.16

It seems the OI is leaking and not being reaped when files are removed.



 Comments   
Comment by Andreas Dilger [ 12/Jun/12 ]

The OI file does not actually free space that it has allocated. That said, it shouldn't continually allocate space if the inodes are being deleted (and presumably the entries are being removed from the OI file itself). The FIDs being hashed into the OI file should be relatively uniformly distributed, and if the same numbers of FIDs are being added and removed then the OI should remain at about the same peak size.

Unfortunately, I don't know if we have any tools to debug the OI file itself. It would be useful to be able to check if there are orphan entries in the OI, or if the hash is somehow very imbalanced and the new entries are not going into the same buckets as the previous ones.

This probably needs some kind of debugging code to be written to dump some stats about the OI file - number of entries in use, number of levels in the tree, number of leaves, fullness of each leaf, etc.
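
A minimal sketch of the kind of per-OI statistics such debugging code might report (all of these names are hypothetical; none exist in Lustre):

#include <stdint.h>

/* Hypothetical per-OI-file statistics a debug dump could print. */
struct oi_stats {
        uint64_t os_entries;        /* OI mappings currently in use */
        uint32_t os_levels;         /* depth of the IAM H-tree */
        uint64_t os_leaves;         /* allocated leaf blocks */
        uint64_t os_empty_leaves;   /* leaves holding no live entries */
        uint32_t os_avg_fullness;   /* mean leaf utilization, in percent */
};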

Comment by Liang Zhen (Inactive) [ 13/Jun/12 ]

> The FIDs being hashed into the OI file should be relatively uniformly distributed, and if the same numbers of FIDs are being added and removed then the OI should remain at about the same peak size

I agree that is the case for the htree in ext2/3/4, but I suspect it's not true for IAM. I recall that IAM does not hash the FID and just uses part of the FID as the index (the SEQ? I don't remember...), and it tends to put all FIDs with the same SEQ in the same block. That means the newest FIDs all land in the right-most leaf block of the htree, and the htree grows whenever that block fills up. Even if we have already removed all other FIDs from the IAM htree, those old blocks won't be re-used, so the htree will keep growing forever.

Again, I'm not 100% sure about this, please verify this with Wang Di.

Comment by Robert Read (Inactive) [ 13/Jun/12 ]

I also see this on 2.1.1.

Comment by Peter Jones [ 13/Jun/12 ]

Fanyong

Could you please look into this one and comment?

Thanks

Peter

Comment by nasf (Inactive) [ 17/Jun/12 ]

This is an IAM design issue.

1) No shrink interface.
Each OI file is an IAM container, implemented as a five-level H-tree. Such an H-tree does not support shrinking: once a block is allocated to hold OI mappings, it will not be freed until the H-tree is destroyed, even if all the records it contained have been removed. So removing OI mappings can neither decrease the OI file size nor release OI file blocks.

2) No reuse of idle OI mapping slots.
For each OI mapping, its hash in the OI file is just the FID without any conversion. The advantages are:

2.1) No hash collisions, which simplifies the implementation.

2.2) The FID allocation algorithm only ever grows FIDs. On create, most OI mapping insertions are therefore just append operations in the OI block, which avoids moving data within the OI block and causes fewer split operations.

The shortcomings are:
2.3) Because a FID is never reused even if the related file is removed, the OI mapping for a newly created file with a larger FID will keep being appended to the OI block holding the largest FID (hash) values, rather than reusing an idle slot in some earlier OI block. That means that as new files are created the OI file grows larger and larger, even if old files are removed (a small sketch of this follows below).
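
A small sketch of the key ordering behind 2.2) and 2.3) (illustrative only, not the IAM source): the key is the raw FID with no hashing, and since FIDs only grow, every new key compares greater than all existing keys and lands in the right-most leaf block.

#include <stdint.h>

struct lu_fid {
        uint64_t f_seq;   /* sequence number, allocated in increasing order */
        uint32_t f_oid;   /* object id within the sequence, also increasing */
        uint32_t f_ver;   /* version */
};

/* OI key comparison: the FID itself is the key, compared field by field. */
static int oi_key_cmp(const struct lu_fid *a, const struct lu_fid *b)
{
        if (a->f_seq != b->f_seq)
                return a->f_seq < b->f_seq ? -1 : 1;
        if (a->f_oid != b->f_oid)
                return a->f_oid < b->f_oid ? -1 : 1;
        return 0;
}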

There are some possible solutions:
I) Introduce a shrink mechanism for the H-tree.
I do not know why shrinking is not supported for the H-tree based IAM. Supporting it is not impossible, but it would not be simple work; at the least, ext3/4 does not support it yet.

II) Adjust the OI mapping hash policy.
A new hash method could make the FID hash value non-increasing as FIDs increase. Then it becomes possible to reuse idle OI mapping slots for newly created files with larger FIDs. But we still need to avoid hash collisions; otherwise, for a 128-bit FID space, a hash collision may span multiple blocks, which would be trouble. On the other hand, it may break the advantage of 2.2), especially when creating from multiple clients in parallel, because OI mapping insertion may become slower due to data movement within the OI block or block splits. So create performance may suffer.

III) Rebuild OI files via OI scrub.
In Lustre-2.3 we will support rebuilding OI files through OI scrub. So if an OI file occupies an unreasonable amount of space, we can remove it by force (manually, or detected automatically), and OI scrub can rebuild it. After rebuilding, the newly created OI file will not contain idle blocks.

IV) Client-level filesystem backup/restore. We can back up the Lustre filesystem from a client and restore it to another, newly formatted Lustre filesystem. This is the worst solution, to be used only when no other solution is available.

Andreas, what's your suggestion?

Comment by Liang Zhen (Inactive) [ 17/Jun/12 ]

I think you need to add Andreas as a "watcher" first.

Comment by Andreas Dilger [ 18/Jun/12 ]

So, I hit a similar problem on my test system just now, but it appears something strange is happening. The oi.16.16 file is large, along with a few other OIs, and the rest are tiny:

total 10188
   4 capa_keys            8 oi.16.19     8 oi.16.37     8 oi.16.55
   4 CATALOGS*            8 oi.16.2      8 oi.16.38     8 oi.16.56
   4 CONFIGS/             8 oi.16.20     8 oi.16.39     8 oi.16.57
   8 fld                  8 oi.16.21     8 oi.16.4      8 oi.16.58
   8 last_rcvd            8 oi.16.22     8 oi.16.40     8 oi.16.59
  16 lost+found/          8 oi.16.23     8 oi.16.41     8 oi.16.6
   4 lov_objid            8 oi.16.24     8 oi.16.42     8 oi.16.60
   4 NIDTBL_VERSIONS/     8 oi.16.25     8 oi.16.43     8 oi.16.61
   4 OBJECTS/             8 oi.16.26     8 oi.16.44     8 oi.16.62
 108 oi.16.0              8 oi.16.27     8 oi.16.45     8 oi.16.63
 392 oi.16.1              8 oi.16.28     8 oi.16.46     8 oi.16.7
   8 oi.16.10             8 oi.16.29     8 oi.16.47     8 oi.16.8
   8 oi.16.11             8 oi.16.3      8 oi.16.48     8 oi.16.9
   8 oi.16.12             8 oi.16.30     8 oi.16.49     4 OI_scrub
   8 oi.16.13             8 oi.16.31     8 oi.16.5      4 PENDING/
   8 oi.16.14          6224 oi.16.32     8 oi.16.50    16 ROOT/
   8 oi.16.15          1844 oi.16.33     8 oi.16.51     4 seq_ctl
1060 oi.16.16             8 oi.16.34     8 oi.16.52     4 seq_srv
   8 oi.16.17             8 oi.16.35     8 oi.16.53
   8 oi.16.18             8 oi.16.36     8 oi.16.54

So oi.16.0, oi.16.1, oi.16.16, oi.16.32, and oi.16.33 are the only ones that appear to be in use.

This is running with a 200MB MDT for "SLOW=no sh acceptance-small.sh" and an additional change to runtests to create 10000 files. It also appears that sanity.sh test_51b is trying to create 70000 subdirectories, but there aren't very many files in the filesystem:

# ../utils/lfs df -i
UUID                      Inodes       IUsed       IFree IUse% Mounted on
testfs-MDT0000_UUID       114688       34398       80290  30% /mnt/lustre[MDT:0]
testfs-OST0000_UUID        57344         143       57201   0% /mnt/lustre[OST:0]
testfs-OST0001_UUID        57344         296       57048   1% /mnt/lustre[OST:1]
testfs-OST0002_UUID        57344         138       57206   0% /mnt/lustre[OST:2]

filesystem summary:       114688       34398       80290  30% /mnt/lustre

It would seem to me that the OI selection function is imbalanced. The osd_fid2oi() code appears to be selecting the OI index based on (seq % oi_count), which should be OK. The seq should be updated every LUSTRE_SEQ_MAX_WIDTH (0x20000 = 131072 objects), so the inter-OI distributions should be relatively well balanced on even a slightly larger filesystem.
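
For reference, a minimal sketch of that selection policy (this mirrors the description above, not the verbatim osd_fid2oi() source; OI_COUNT is an assumption matching the 64 OI files listed earlier):

#include <stdint.h>

#define OI_COUNT 64  /* assumption: 64 OI files, as on this MDT */

/* Pick which oi.16.N file stores a FID: sequence number modulo the OI
 * count, so each new sequence (one per 0x20000 objects) rotates to the
 * next OI file. */
static unsigned int oi_index(uint64_t f_seq)
{
        return (unsigned int)(f_seq % OI_COUNT);
}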

I don't think there is a huge problem with the OI itself not releasing space, so long as the space that is allocated is re-used. That means the internal hashing function needs to re-use buckets after some time, rather than always allocating blocks for new buckets.

It seems another related problem with having many OI files in a small filesystem is that the space allocated to each OI is not reused, but rather new space is allocated for each new OI. A workaround for test filesystems is to create fewer OI files for smaller MDT sizes, and only allocate all 64 OIs for large MDTs. This is not the original problem seen here, since multi-OI support is only in 2.2, but it can be a major contributor, since the total space used by the OIs would increase by 64x compared to the single-OI case.

Fan Yong, I can't believe that there is NO LIMIT on the size of the OI file? Surely there must be some upper bound on the use of the OID as the hash index before it begins to wrap? It is impossible for a 128-bit value to fit into a smaller hash space without any risk of collision, and it is impossible to store a linear index with even a reasonable number of files created in the filesystem over time, so there HAS to be some code to take this into account? Was the OI/IAM code implemented with so little foresight that it will just grow without limit to fill the MDT as new entries are inserted?

I would expect at least some simple modulus would provide an upper limit to the OI size, at which point we need to size the MDT taking this into account, and limit the OI count to ensure that these files do not fill the MDT.

Comment by nasf (Inactive) [ 20/Jun/12 ]

I do not think there was an OI file size limitation before, because if we never delete files from Lustre, but only create them, the OI file size will only ever increase and should not hit any upper bound.

Anyway, I agree that we should introduce some FID hash function that makes the OI mapping hash value wrap back at some point. Then it becomes possible for new OI mappings to reuse formerly idle OI mapping slots.

My current idea is to wrap the FID hash back every 1K sequences. For example, [1 - 1000] is the first sequence range. Sequence 1001 will then be hashed to a value between seq[1]'s and seq[2]'s, sequence 1002 to a value between seq[2]'s and seq[3]'s, and so on. If some files belonging to seq[1] are removed before new files belonging to seq[1001] are created, the new files' OI mappings can reuse the idle OI mapping slots which were occupied by seq[1]'s old files. Sequence 2001 will be hashed to a value between seq[1001]'s and seq[2]'s; in general, sequence (1000 * N + M) will be hashed to a value between seq[1000 * (N - 1) + M]'s and seq[M + 1]'s.
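
A hedged sketch of one hash that produces this ordering (my reading of the proposal, not the actual patch): sort sequences by their position inside the 1K window first and by the wrap-around pass second.

#include <stdint.h>

/* Wrap the FID hash every 1000 sequences: seq 1, 1001, 2001, ... sort
 * adjacently, all ahead of seq 2, 1002, 2002, ... so mappings for a
 * recycled window position can reuse freed slots. */
static uint64_t oi_hash_wrapped(uint64_t seq)
{
        uint64_t slot = (seq - 1) % 1000;   /* position inside the 1K window */
        uint64_t pass = (seq - 1) / 1000;   /* which wrap-around pass */

        return slot << 32 | pass;           /* sketch only: assumes pass < 2^32 */
}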

Andreas, any suggestion?

Comment by nasf (Inactive) [ 20/Jun/12 ]

This is the patch:
http://review.whamcloud.com/#change,3153

Comment by Chris Gearing (Inactive) [ 10/Jul/12 ]

Are we going to update the test scripts to include a set of tests that would find this and other similar issues in future?

Comment by Liang Zhen (Inactive) [ 20/Jul/12 ]

I'm thinking we should have this fix in 2.3. It's really important because users have already started to complain about this; please see LU-1648.

Comment by Andreas Dilger [ 27/Jul/12 ]

We need to consider this patch for 2.1.3.

While it is an incompatible change to the OI format, it should only affect newly formatted filesystems, and is backward compatible with existing 2.1 filesystems. Since 2.1 does not have OI scrub, there would be no way to handle any problems hit with the OI growing too large.

Comment by nasf (Inactive) [ 31/Jul/12 ]

After some testing, I found that wrapping the FID hash to reuse idle OI slots may not be an efficient solution for the OI file size issue, because the positions of the idle OI slots are random, depending on which files are removed. It is almost impossible to find a suitable hash function that hashes new OI mappings evenly onto those random idle OI slots.

On the other hand, wrapping the FID hash is inefficient for OI slot insertion because it causes more memmove() in the affected OI blocks, whereas with the original flat hash most OI slot insertions are appends at the end of the block. So create performance could get worse.

In fact, the most serious cause of OI file size growth is empty but non-released OI blocks. As long as we can reuse those blocks, we can greatly slow down the growth of the OI file.

My current idea is to introduce inode::i_idle_blocks to record these non-released OI blocks when they become empty, and to adjust the OI block allocation strategy: preferentially reuse an empty block from inode::i_idle_blocks, and allocate a new block from the volume only when no idle OI block can be reused.
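
A minimal sketch of that allocation strategy (the list type and helper are hypothetical; only the inode::i_idle_blocks idea comes from the paragraph above):

struct oi_idle_blocks {
        unsigned long *ib_blocks;   /* block numbers of emptied OI leaves */
        unsigned int   ib_count;    /* how many are currently recorded */
};

/* Choose a block for a new OI leaf: prefer a recorded idle block, and
 * return 0 to tell the caller to allocate fresh space from the volume
 * only when the idle list is empty. */
static unsigned long oi_block_alloc(struct oi_idle_blocks *idle)
{
        if (idle->ib_count > 0)
                return idle->ib_blocks[--idle->ib_count];
        return 0;
}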

Another advantage is that this change does not introduce OI compatibility issues: a new OI file can be accessed by an old MDT, and a new MDT can access an old OI file.

Comment by nasf (Inactive) [ 31/Jul/12 ]

Patch for reusing empty OI blocks:

http://review.whamcloud.com/#change,3153,set4

For old Lustre-2.x releases, this patch only affects creates/unlinks performed after the patch is applied; it will not reclaim already-existing empty OI blocks.

Andreas, is it necessary to introduce some tool to find all the empty OI blocks in existing OI files so they can be reused? Or should we leave that to be handled by OI scrub rebuilding in Lustre-2.3?

Comment by nasf (Inactive) [ 31/Jul/12 ]

The patch contains a sanity update, test_228, which verifies whether the OI file size increases when new files are created while some empty OI blocks exist.

Comment by nasf (Inactive) [ 02/Aug/12 ]

This is comment from Andreas:

This will help in our limited test case of creating and deleting files in a loop. The real question is whether there will be so many empty OI blocks in real life, when files are not deleted in strict sequence.

I like the idea that this can be applied to fix the problem even on 2.1 releases that have already seen the problem, but it is important to know whether it will really help. This is especially true if this adds complexity to the code and doesn't actually help much in the end.

One path forward is to create a debug patch that can be included into 2.1.3 that will print out (at mount time or via /proc?) how many empty blocks there really are in the OIs. The one drawback is that this may cause a LOT of seeking to read large OI files at mount, which may be unacceptable in production. This could be used by CEA and/or LLNL on their production systems to report the state of the OI file(s).

Cheers, Andreas

Comment by nasf (Inactive) [ 02/Aug/12 ]

Your worry is not unfounded, because in real use cases file deletion is random; nobody can guarantee that the delete operations will leave the related OI blocks empty.

But on the other hand, if there are no empty OI blocks in the OI files, that somehow means the OI space utilization in such a system is not so bad. The starting point for the OI files is performance: a few OI files need to support all the OI operations on the server, so the original OI design policy was to trade space for performance. In the real world the MDT device is often TB-sized; nobody will mind the OI files using GB of space.

My current patch can reuse newly emptied OI blocks (on any Lustre-2.x release); the pre-existing empty OI blocks will remain unused. We could implement a new tool to find all the existing empty OI blocks by traversing the OI file, but I wonder whether it is worth doing, since we will have OI scrub in Lustre-2.3. We could back-port OI scrub to Lustre-2.1, which may be easier than implementing a new tool, and rebuilding the OI files reclaims more space than merely reusing empty OI blocks.

What do you think?

Comment by nasf (Inactive) [ 02/Aug/12 ]

Comment from Andreas:

> Your worry is not unfounded, because in real use cases file deletion is random; nobody can guarantee that the delete operations will leave the related OI blocks empty.

Exactly. It may be that there are only a few entries in each block (e.g. an output file saved after some thousands of temporary files are written and then deleted), and there are few or no empty blocks.

> But on the other hand, if there are no empty OI blocks in the OI files, that somehow means the OI space utilization in such a system is not so bad.

That is not always clear. If the blocks are sparsely used, then the hash wrapping scheme would definitely help.

> The starting point for the OI files is performance: a few OI files need to support all the OI operations on the server, so the original OI design policy was to trade space for performance. In the real world the MDT device is often TB-sized; nobody will mind the OI files using GB of space.

True, but we've increased the number of inodes for 2.x releases to use up more of that excess space than in the past. I agree that for large filesystems it should be less of a risk, and for small test filesystems we can hope that it helps enough under test loads to avoid problems.

I wouldn't object to combining both solutions for 2.3 so that we can be sure this problem does not hit us again in the future. I also like your idea that OI scrub could fix this problem, but would it require significant effort to back-port this code to 2.1? It is definitely more of a feature than I would like to include in 2.1, but it is also one of the major holes in the ability to support 2.1 for the long term if any OI problem results in an unusable filesystem.

> My current patch can reuse newly emptied OI blocks (on any Lustre-2.x release); the pre-existing empty OI blocks will remain unused.

I haven't looked at your patch yet, but need to know more about how the solution works. Does it keep a persistent list of empty blocks on disk, or only in memory, or does it just delete the free blocks from the file?

Does the file size/offset of the OI file continue to grow during its lifetime? If it does, will it hit the 16TB size limit in heavy usage within, say, 5 years?

> We could implement a new tool to find all the existing empty OI blocks by traversing the OI file, but I wonder whether it is worth doing, since we will have OI scrub in Lustre-2.3. We could back-port OI scrub to Lustre-2.1, which may be easier than implementing a new tool, and rebuilding the OI files reclaims more space than merely reusing empty OI blocks.

It would be better to re-use the OI scrub code than to spend time developing a new tool for this. The OI scrub has more uses, and could be done online.

What might be needed at some point in the future is to allow a "mirrored OI" mode where the new OI file can be built while the old one is used for reference. That would avoid any threads hanging while the FID is not in the new OI file.

Comment by nasf (Inactive) [ 02/Aug/12 ]

I think that porting the OI scrub code to Lustre-2.1 is simpler than implementing a new tool to find existing empty OI blocks. OI scrub is a complete solution for the OI file size issue, because it can shrink the OI file and take back the unused space. Reusing empty OI blocks and wrapping the FID hash can only slow down the rate of OI file growth; they cannot shrink the OI file. So in any case we need OI scrub to resolve the OI file size issue.

As for the wrapped FID hash method: it can reuse some idle OI mapping slots, but it depends on the hash function mapping new FIDs onto idle slots properly. A good hash function also means more OI mapping inserts instead of appends, which will hurt create performance. On top of that, it introduces a compatibility issue with the old OI file format, so it cannot be used to resolve the OI file size issue on Lustre-2.1.

As for the patch for reusing empty OI blocks: the empty OI blocks are recorded in a special on-disk block list; they are not actually released.

Comment by Andreas Dilger [ 02/Aug/12 ]

My preferred path would be for OI scrub to be backported to 2.1. This would allow fixing this issue (though not in an ideal manner, currently), and also improve maintenance/support for 2.1 itself (allowing recovery from all sorts of OI corruption, backup/restore, etc.).

First, however, please ask Oleg if he would also be in favour of landing this code onto b2_1 as well. It is rather large for a maintenance release, though it could be argued for the above reasons that this is really necessary for making 2.1 more supportable in the future.

The one reason I say that this doesn't really resolve the OI size problem very well is that it requires manually deleting the OI file(s), then running OI scrub in urgent mode, which will block threads if they cannot find the FID they are looking for, and cause high load on the MDS.

It would be better to have some kind of "backup OI" mode where the new OI file is created while the old one is used to find any missing FID. If the old OI file were kept around, this would also help during OI scrub in case the primary were lost or corrupted. Only in the case of backup/restore, where the old file is useless, would it make sense to delete it right away.

Comment by nasf (Inactive) [ 02/Aug/12 ]

The "backup OI" mode for OI scrub to rebuild OI file will introduce more complexity, because there may be concurrent create/unlink during the OI scrub, it need to process both the old OI file and new OI file, and should kept them in consistent on somehow, which will cause normal logic changed for lookup/create/unlink. Such changes may introduce some race bugs.

In fact, we do not care about a system crash during OI scrub, because we support resuming OI scrub from the point of interruption. We can guarantee that the OI file is eventually rebuilt correctly, even if the system crashes many times.

Oleg, what is your suggestion for back-porting OI scrub to Lustre-2.1.x?

Comment by Andreas Dilger [ 02/Aug/12 ]

The issue isn't about recovering from a crash. Rather, if this is a "garbage collection" action that needs to be done on a regular basis, but the only way to do it is by deleting the OI file(s) and running an urgent scan, then it will have a serious performance impact and block threads that are doing by-FID lookups.

My goal is to allow this "maintenance" action to be done without significant performance impact or delay. I agree that this would be more complex, but I don't know how much more. If we always create a new OI file when running LFSCK, it will also solve the problem of stale FID entries in the OI file. But to do this, it is better to do it at the "background scrub" speed, and allow the cases of lookup-by-FID not being found in the new OI file to be handled from the backup OI file.

For unlinked files, there is only a need to delete the FID from the new OI file (if it is there yet). The old FID should no longer be referenced by any files, so there is no harm in leaving it in the old OI file, I think?

Comment by nasf (Inactive) [ 02/Aug/12 ]

> For unlinked files, there is only a need to delete the FID from the new OI file (if it is there yet). The old FID should no longer be referenced by any files, so there is no harm to leave it in the old OI file I think?

It is not so simple. If we only delete the OI mapping in the new OI file and leave it in the old OI file, what happens if someone does a lookup-by-FID after the unlink operation? He/she will find the stale OI mapping in the old OI file, but the related object does not exist. In that case it is not easy to distinguish whether this is the normal case, or the abnormal case of an object lost due to disk issues or system errors.

Comment by Andreas Dilger [ 03/Aug/12 ]

I see two options in that case. It would be possible to also delete FID entries from the backup OI, but this would hurt performance during OI scrub. It would instead be possible to detect this (hopefully rare) error during lookup, where a FID entry exists in the backup OI, but the inode is deleted or does not have a matching LMA FID, and return ENOENT or ESTALE as it would if no such entry existed in the first place.

Since the FID entry would have been lost anyway during OI rebuild, this by-FID lookup is just a rare race condition that only happens during scrub.

Comment by nasf (Inactive) [ 04/Aug/12 ]

Then the current idea for "backup mode" OI scrub is as follows:

For create: insert the OI mapping into the old OI file first. If the target ino is ahead of the OI scrub's current position, OI scrub will add the mapping to the new OI file when it reaches it; otherwise the OI mapping should be inserted into the new OI file by the creator.

For unlink: delete the OI mapping from the new OI file first (if it is there).

For lookup: check the old OI file only. If there is no related OI mapping, return -ENOENT; if a related OI mapping is found but the related inode fails to load, return -EIO; if a related OI mapping is found but the loaded inode is not the expected one, return -ENOENT.

When should we do that? Now or LFSCK phase IV?

Comment by Andreas Dilger [ 04/Aug/12 ]

For backup OI, I think it makes more sense to do the opposite - update only the new OI, and leave the old OI as only a backup. For creates, only add newly created FIDs into the new OI. For normal lookup by name, the existing OI rebuild will add the FID into the new OI already. Only in the case of a by-FID lookup that misses in the new OI do we need to do a lookup in the backup OI. For unlinked files, only delete the FID from the new OI.

If there is an old (invalid) lookup by FID for a deleted file that is missed in the new OI, but found in the old OI, there will still be a chance to return an error if the inode is not found.

I think this will reduce the amount of updates to disk, since changes are made only to the new OI file and the old OI file is never modified.

As for when to do this, I think the OI rebuild should be ported to b2_1 first (subject to pre-approval from Oleg), and the "backup OI" handling can be done in Phase IV, since this is largely only a performance/usability improvement after the base OI scrub is available.

Comment by nasf (Inactive) [ 05/Aug/12 ]

OK, that means lookup-by-FID will check the new OI file first and, if it misses, then check the old OI file. It improves update performance at the cost of some lookup performance.
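
A hedged sketch of that lookup order (all helper names are hypothetical stand-ins for the real OI and inode operations):

#include <errno.h>
#include <stdbool.h>

/* Hypothetical stubs for OI lookup and inode verification. */
bool oi_lookup(const void *oi, const void *fid, unsigned long *ino);
bool inode_matches_fid(unsigned long ino, const void *fid);

/* Lookup-by-FID during backup-mode OI scrub: check the new OI first;
 * on a miss, fall back to the old (backup) OI, and treat a stale
 * backup entry whose inode is gone or mismatched as -ENOENT. */
static int oi_lookup_backup(const void *new_oi, const void *old_oi,
                            const void *fid, unsigned long *ino)
{
        if (oi_lookup(new_oi, fid, ino))
                return 0;
        if (!oi_lookup(old_oi, fid, ino))
                return -ENOENT;
        if (!inode_matches_fid(*ino, fid))
                return -ENOENT;   /* stale mapping left behind by unlink */
        return 0;
}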

Oleg, what's your suggestion? If you do not oppose, I will start the back porting.

Comment by Andreas Dilger [ 05/Aug/12 ]

I think the important thing is that it improves the update performance, and only hurts lookup performance for lookup by FID for objects that are not in cache. This should be only a very small fraction of operations.

Comment by Peter Jones [ 31/Aug/12 ]

Landed for 2.3 and 2.4

Comment by Brian Murrell (Inactive) [ 01/Sep/12 ]

I notice this is fixed for 2.3 and 2.4. Will anything be done for 2.1.x?

Comment by Peter Jones [ 01/Sep/12 ]

Yes, we will fix this for 2.1.4

Comment by Bob Glossman (Inactive) [ 12/Nov/12 ]

back port to b2_1: http://review.whamcloud.com/#change,4516

Comment by Brian Murrell (Inactive) [ 08/Apr/13 ]

back port to b2_1: http://review.whamcloud.com/#change,4516

This seems to have stalled back in Dec., strangely enough, with +3. Any reason it did not progress to landing?

Comment by Bob Glossman (Inactive) [ 08/Apr/13 ]

Don't know why it stalled. Suspect it may have been kept out near the 2.1.4 release as not being important enough for the risk. Not sure why it didn't go in later.

Comment by Andreas Dilger [ 07/Jun/13 ]

I'm not sure why this bug was closed. The patch for b2_1 was still not landed, and the work to rebuild the OI files in LFSCK Phase 4 is not completed.

Comment by nasf (Inactive) [ 04/Mar/16 ]

Since we have no plan to back-port more patches to the b2_1-based branch, I am closing this.
