HSM _not only_ small fixes and to do list goes here (LU-3647)

[LU-3834] hsm_cdt_request_completed() may clear HS_RELEASED on failed restore Created: 26/Aug/13  Updated: 24/Feb/16  Resolved: 20/Jan/14

Status: Resolved
Project: Lustre
Component/s: None
Affects Version/s: Lustre 2.5.0
Fix Version/s: Lustre 2.6.0, Lustre 2.5.1

Type: Technical task Priority: Critical
Reporter: John Hammond Assignee: Bruno Faccini (Inactive)
Resolution: Fixed Votes: 0
Labels: HSM

Rank (Obsolete): 9919

 Description   

In the restore case of hsm_cdt_request_completed(), if the copytool returned success but the layout swap fails, we end up with an unreadable file: HS_RELEASED is clear but LOV_PATTERN_F_RELEASED is still set.

Perhaps the new HSM attributes should be applied to the volatile object before layout swap, and hsm_swap_layouts() should call mo_swap_layouts() with SWAP_LAYOUTS_MDS_HSM set.
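
To make the failure mode concrete, here is a small standalone C model of the sequence described above (this is not Lustre code; model_file, restore_complete_buggy() and the flag values are hypothetical stand-ins for the real xattrs and helpers). The HSM state is updated first, the layout swap then fails, and nothing rolls the HSM state back, leaving the two "released" indicators inconsistent.

    /* Standalone model of the ordering problem described above.  This is
     * NOT Lustre code: the names and flag values are hypothetical. */
    #include <stdio.h>

    #define HS_RELEASED            0x01  /* stands in for the HSM xattr flag  */
    #define LOV_PATTERN_F_RELEASED 0x01  /* stands in for the LOV layout flag */

    struct model_file {
            unsigned int hsm_flags;    /* models the HSM xattr  */
            unsigned int lov_pattern;  /* models the LOV layout */
    };

    /* Models a layout swap that fails after the copytool already succeeded. */
    static int swap_layouts(struct model_file *f)
    {
            (void)f;
            return -1;
    }

    /* Models the restore completion path: the HSM flag is cleared first
     * (the mdt_hsm_attr_set() step), then the swap fails (the
     * hsm_swap_layouts() step) and the HSM state is never rolled back. */
    static void restore_complete_buggy(struct model_file *f)
    {
            f->hsm_flags &= ~HS_RELEASED;
            if (swap_layouts(f) < 0)
                    return;                 /* no rollback of hsm_flags */
            f->lov_pattern &= ~LOV_PATTERN_F_RELEASED;
    }

    int main(void)
    {
            struct model_file f = { HS_RELEASED, LOV_PATTERN_F_RELEASED };

            restore_complete_buggy(&f);
            printf("HS_RELEASED=%u LOV_PATTERN_F_RELEASED=%u (%s)\n",
                   f.hsm_flags & HS_RELEASED,
                   f.lov_pattern & LOV_PATTERN_F_RELEASED,
                   (f.hsm_flags & HS_RELEASED) ==
                   (f.lov_pattern & LOV_PATTERN_F_RELEASED) ?
                   "consistent" : "inconsistent, i.e. unreadable file");
            return 0;
    }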



 Comments   
Comment by Bruno Faccini (Inactive) [ 05/Sep/13 ]

John,
Sorry, but the HSM code is still new to me, so I need some clarifications here ...

In this particular error-handling case, there is no option other than returning the file to its released state, is there?
So is that what you mean by having "hsm_swap_layouts() call mo_swap_layouts() with SWAP_LAYOUTS_MDS_HSM set", as is done during release?

Comment by Bruno Faccini (Inactive) [ 05/Sep/13 ]

John,
Can you also detail the scenario/conditions under which you have already encountered this problem? As we discussed, I may then be able to reproduce it or do some kind of error injection.

Comment by John Hammond [ 05/Sep/13 ]

Bruno,

(I am just restating what I said on the call today.)

In mdt_hsm_release() we call mdd_swap_layouts() with the SWAP_LAYOUTS_MDS_HSM flag, which causes the HSM xattrs to be handled along with the LOV xattrs, and rolls back both xattrs on failure.

Contrast this with hsm_cdt_request_completed(), which calls mdt_hsm_attr_set() and hsm_swap_layouts(); hsm_swap_layouts() then calls mdd_swap_layouts() with no flags. In this case, if the layout swap fails, we do not restore the HSM xattr to its previous state (with released set).

I have not checked, but I suspect it may be possible to remodel the handling of restore after that of release and thereby avoid this inconsistency when the layout swap fails.
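
As a rough illustration of the release-style handling described above, here is a standalone C model (again not Lustre code; the real rollback happens inside mdd_swap_layouts() and its transaction handling, and all names below are hypothetical). The point is simply that when the lower layer is asked to handle the HSM xattr together with the layout, a failed swap leaves the HSM state untouched.

    /* Standalone model of the release-style handling.  NOT Lustre code:
     * everything below is a hypothetical stand-in for calling
     * mo_swap_layouts() with SWAP_LAYOUTS_MDS_HSM. */
    #include <stdio.h>

    struct model_xattrs {
            unsigned int hsm_flags;   /* models the HSM xattr */
    };

    /* Models the layout swap itself failing. */
    static int do_layout_swap(struct model_xattrs *file, struct model_xattrs *vol)
    {
            (void)file;
            (void)vol;
            return -1;
    }

    /* Models the flag-assisted swap: the HSM change and the layout swap
     * succeed or fail together. */
    static int swap_layouts_mds_hsm(struct model_xattrs *file,
                                    struct model_xattrs *vol,
                                    unsigned int new_hsm_flags)
    {
            unsigned int saved = file->hsm_flags;
            int rc;

            file->hsm_flags = new_hsm_flags;   /* tentative HSM update   */
            rc = do_layout_swap(file, vol);
            if (rc < 0)
                    file->hsm_flags = saved;   /* rolled back on failure */
            return rc;
    }

    int main(void)
    {
            struct model_xattrs file = { .hsm_flags = 0x01 /* "released" */ };
            struct model_xattrs vol = { 0 };

            if (swap_layouts_mds_hsm(&file, &vol, 0) < 0)
                    printf("swap failed, hsm_flags rolled back to 0x%x\n",
                           file.hsm_flags);
            return 0;
    }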

Comment by Jinshan Xiong (Inactive) [ 06/Sep/13 ]

If this is a simple fix, then we can work out a patch for it. Otherwise I'd like to put the resources on something else, because it's unlikely for swap_layout to fail anyway.

Comment by Bruno Faccini (Inactive) [ 11/Sep/13 ]

OK, I see where this can be fixed now, thanks John. But now, to save time before submitting a patch, which is the preferred way to do it:

_ always call mo_swap_layouts() with the SWAP_LAYOUTS_MDS_HSM flag from hsm_swap_layouts(), since (currently, at least) hsm_swap_layouts() is only called from mdt_hsm_update_request_state() for a RESTORE op.

_ add a new flags argument to hsm_swap_layouts() to control whether SWAP_LAYOUTS_MDS_HSM is used in the call to mo_swap_layouts().

Comment by John Hammond [ 11/Sep/13 ]

> _ always call mo_swap_layouts() with the SWAP_LAYOUTS_MDS_HSM flag from hsm_swap_layouts(), since (currently, at least) hsm_swap_layouts() is only called from mdt_hsm_update_request_state() for a RESTORE op.

This seems better to me.
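
For illustration, a minimal sketch of the shape of the two options above, using hypothetical prototypes rather than the real Lustre signatures (the flag value is also illustrative only). Option (1) hard-codes the flag inside hsm_swap_layouts(); option (2) threads it through as an argument.

    /* Hypothetical prototypes only; NOT the real Lustre signatures. */
    #define SWAP_LAYOUTS_MDS_HSM 0x01   /* illustrative value only */

    /* Stub standing in for the mo_swap_layouts() call. */
    static int mo_swap_layouts_stub(unsigned int flags)
    {
            (void)flags;
            return 0;
    }

    /* Option (1): hsm_swap_layouts() hard-codes the flag, since its only
     * caller today is the RESTORE completion path. */
    static int hsm_swap_layouts_opt1(void)
    {
            return mo_swap_layouts_stub(SWAP_LAYOUTS_MDS_HSM);
    }

    /* Option (2): a new flags argument lets each caller decide. */
    static int hsm_swap_layouts_opt2(unsigned int flags)
    {
            return mo_swap_layouts_stub(flags);
    }

    int main(void)
    {
            return hsm_swap_layouts_opt1() +
                   hsm_swap_layouts_opt2(SWAP_LAYOUTS_MDS_HSM);
    }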

Comment by Bruno Faccini (Inactive) [ 25/Sep/13 ]

The first patch attempt is at http://review.whamcloud.com/7631.
Unfortunately, it failed in multiple auto-tests and needs at least a rebase.
It also failed in several sub-tests of sanity-hsm, which is highly suspect ...
Currently under investigation.

Comment by Bruno Faccini (Inactive) [ 18/Nov/13 ]

The latest patch-set #6 auto-test failures appear to be unrelated to this patch and instead due to multiple issues already addressed in other tickets (LU-4093, LU-4086, …). Some of their patches have landed by now, so I will rebase again …

Comment by Bruno Faccini (Inactive) [ 26/Nov/13 ]

I found a possible bug in my original patch version causing the layout lock not to be released when a restore is canceled … I just submitted patch-set #8 to fix this; we will see if it passes auto-tests (particularly sanity-hsm/test_33, which was timing out due to the md5sum process never ending!! …).

Comment by Bruno Faccini (Inactive) [ 04/Dec/13 ]

I am wondering if I should also add some error injection to simulate a SWAP_LAYOUT failure during restore?

I will also push a new patch-set #8 to address John's last comment and convert to the usual error-handling style.

Comment by Bruno Faccini (Inactive) [ 16/Dec/13 ]

Some update: I added fault injection (forcing -ENOENT in the middle of mdd_swap_layouts() to cause the layout swap to be rolled back) and the associated sub-test test_12o within patch-set #8.

test_12o fails because the "diff" command, which triggers the implicit restore, succeeds when it is expected to fail due to the fault injection. What is strange is that the restore operation has been marked as failed, the copytool received the error, and the file still has the "released" flag set!!

I wonder if there could be some issue in mdd_swap_layouts() causing this unexpected behavior?

Comment by Bruno Faccini (Inactive) [ 04/Jan/14 ]

Hehe, I finally found that my fault-injection code itself introduced a problem: it was added after the volatile/2nd file layout change and did not revert it to mimic the error!! This caused the restored data to be available as if the restore had succeeded …

I changed this in patch-set #13, and now the new sub-test test_12o runs fine: with the layout-swap fault injection it returns errors on both the copytool (ENOTSUPP, injected!) and client (ENODATA) sides, and the next restore attempt without fault injection succeeds.
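
To make the placement issue concrete, here is a standalone C model (not the actual patch; the real injection uses Lustre's fail_loc mechanism inside mdd_swap_layouts(), and all names and values below are hypothetical) contrasting injecting the error after the layout change, as in the earlier patch-sets, with injecting it before any visible change, as in patch-set #13.

    /* Standalone model of the fault-injection placement issue.
     * NOT the actual patch; all names and values are hypothetical. */
    #include <errno.h>
    #include <stdbool.h>
    #include <stdio.h>

    struct layout { unsigned int gen; };

    /* Wrong placement (original test code): the swap already happened, so
     * even though -ENOENT is returned, the client sees the restored data. */
    static int swap_inject_after(struct layout *file, struct layout *vol,
                                 bool inject)
    {
            unsigned int tmp = file->gen;

            file->gen = vol->gen;
            vol->gen = tmp;
            if (inject)
                    return -ENOENT;   /* error reported, change not reverted */
            return 0;
    }

    /* Patch-set #13 behavior (modeled): inject before any visible change,
     * so a failed restore leaves the file released and a later attempt
     * without injection succeeds. */
    static int swap_inject_before(struct layout *file, struct layout *vol,
                                  bool inject)
    {
            unsigned int tmp;

            if (inject)
                    return -ENOENT;
            tmp = file->gen;
            file->gen = vol->gen;
            vol->gen = tmp;
            return 0;
    }

    int main(void)
    {
            struct layout f1 = { 1 }, v1 = { 2 }, f2 = { 1 }, v2 = { 2 };

            printf("inject after:  rc=%d file.gen=%u (looks restored!)\n",
                   swap_inject_after(&f1, &v1, true), f1.gen);
            printf("inject before: rc=%d file.gen=%u (still released)\n",
                   swap_inject_before(&f2, &v2, true), f2.gen);
            return 0;
    }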

I will run a build with the patch locally and see if I can still reproduce the volatile object leak on the MDT, seen as part of this ticket and LU-4293.

Comment by Bruno Faccini (Inactive) [ 20/Jan/14 ]

patch http://review.whamcloud.com/7631 has landed. Closing.

Comment by Andreas Dilger [ 10/Feb/14 ]

The patch was only landed on master and not on b2_5. In the future, this type of patch should be cherry-picked to b2_5 so that the fix is included in the maintenance release.

Comment by Bruno Faccini (Inactive) [ 11/Feb/14 ]

Hello Andreas,
I am sorry if I missed doing something here; to be honest, I mainly focused on getting the patch done for the branch where the problem was reported. But should I then create a new patch version for each affected version listed?

Comment by Andreas Dilger [ 11/Feb/14 ]

Bruno, the patch was marked as affecting the 2.5.0 release. I'm just going through patches that have landed on master and trying to see which ones still need to be landed for 2.5.1, since that is the long-term maintenance release. If you are closing a bug, you should consider whether it fixes a serious problem that may affect earlier versions of Lustre and should land on the maintenance release. In many cases, Oleg can cherry-pick the patch directly to b2_5 without putting it through Gerrit/Jenkins/autotest again, but he needs to know to do this.

Comment by Bruno Faccini (Inactive) [ 11/Feb/14 ]

OK, thanks Andreas. I understand now that it is also my responsibility, if a patch is required for earlier versions, to either create and push a new patch for each of those versions or ask Oleg to cherry-pick the original patch to them.

I don't know why, but I thought that the patch integration/release decision was made by other people (you, Oleg, Peter, …); it may simply be that you do this verification work very frequently and do the job for lazy guys like me!!

Comment by Bruno Faccini (Inactive) [ 12/Feb/14 ]

Andreas, your b2_5 patch for this ticket at http://review.whamcloud.com/9212 has found a flaw in sanity-hsm/test_12o (from the original patch http://review.whamcloud.com/7631 of this ticket too!!) during its auto-test session.

This new problem is tracked in LU-4613, where I have already pushed a patch for master (http://review.whamcloud.com/9235) since #7631 had already landed there, but what should we do for the b2_5 version you just pushed?
