[LU-2445] add "lfs migrate" support Created: 07/Dec/12 Updated: 21/Feb/15 Resolved: 21/Feb/15 |
|
| Status: | Resolved |
| Project: | Lustre |
| Component/s: | None |
| Affects Version/s: | Lustre 2.4.0 |
| Fix Version/s: | Lustre 2.4.0 |
| Type: | Bug | Priority: | Minor |
| Reporter: | Andreas Dilger | Assignee: | Jinshan Xiong (Inactive) |
| Resolution: | Fixed | Votes: | 0 |
| Labels: | None | ||
| Attachments: |
|
| Issue Links: |
|
| Sub-Tasks: |
|
| Story Points: | 3 |
| Rank (Obsolete): | 5779 |
| Description |
|
Once the "layout swap" support in In order for this functionality to be useful for users, the llapi_layout_swap() function needs to be wrapped in some additional code compared to lfs swap_layouts to ensure that the file isn't changing during the copy, and to do the actual copy. I propose an lfs migrate command that takes the same arguments as lfs setstripe (calling into lfs_setstripe() to create the target file, for simplicity and compatibility, though with some option to make it an open-unlinked file via If the MDS does not support MDS_SWAP_LAYOUTS then lfs migrate should return an -EOPNOTSUPP error, so that users are aware that atomic layout swap is not available. The lfs_migrate script should call the lfs migrate command to do the migrate/copy (instead of rsync + mv). but lfs_migrate probably needs to fall back to rsync+mv again. The lfs_migrate script has not previously guaranteed atomic migration, so it should continue to work using rsync+mv as it has in the past if "lfs migrate" returns EOPNOTSUPP, with a comment to the effect that this functionality should be removed after Lustre 2.10 or so. |
| Comments |
| Comment by Johann Lombardi (Inactive) [ 07/Dec/12 ] |
|
Andreas, I am afraid that the current layout lock implementation is only suitable for HSM, and more work is required to support layout revocation of open files (for which there can be read/write requests in flight). |
| Comment by Andreas Dilger [ 07/Dec/12 ] |
|
Johann, can you please file a bug with the details of what still needs to be implemented for full layout lock support, or if one already exists, please link it to this bug. |
| Comment by Robert Read (Inactive) [ 07/Dec/12 ] |
|
If the new objects have earlier version numbers than the old objects (perhaps because they are on new OSTs), will this have an effect on HSM? Will HSM still archive the data if the version looks older than what is currently in the archive? |
| Comment by jacques-charles lafoucriere [ 11/Feb/13 ] |
|
Please assign this LU to me (or to jody); I will work on a patch |
| Comment by jacques-charles lafoucriere [ 05/Mar/13 ] |
|
I will initially implement it in lfs_setstripe() directly, so any re-stripe call will do the migration. I think I have to take a group lock to force a flush from other clients and get exclusive access during the copy. |
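A minimal sketch of that group-lock approach, assuming the LL_IOC_GROUP_LOCK/LL_IOC_GROUP_UNLOCK ioctls from <lustre/lustre_user.h>; the helper name and gid handling are illustrative only:

```c
#include <sys/ioctl.h>
#include <errno.h>
#include <lustre/lustre_user.h>

static int copy_under_group_lock(int fd, int gid)
{
	int rc;

	/* Taking the group lock flushes dirty data from other clients
	 * and excludes their IO until the lock is released. */
	rc = ioctl(fd, LL_IOC_GROUP_LOCK, gid);
	if (rc < 0)
		return -errno;

	/* ... copy the file data here ... */

	if (ioctl(fd, LL_IOC_GROUP_UNLOCK, gid) < 0)
		rc = -errno;
	return rc;
}
```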
| Comment by Jinshan Xiong (Inactive) [ 06/Mar/13 ] |
|
Here is a flow chart for migration I proposed in Beijing. Please take a look. |
| Comment by jacques-charles lafoucriere [ 06/Mar/13 ] |
|
OpenEx + dataversion is perfect. I will implement this soon |
| Comment by Andreas Dilger [ 06/Mar/13 ] |
|
JC, since the current behaviour of setstripe on an existing file is to return an error, it isn't clear to me whether changing this to do internal migration and data copying would be "obvious" to users or not. It does have a certain appeal, however, so I think it should be discussed more. Could you please send an email (hpdd-discuss and lustre-discuss) to ask for some input on the user interface. I was thinking it makes more sense to require an explicit call to migrate the file data, since this may take a long time. If the setstripe approach is taken, it should definitely only migrate the file if a different layout is explicitly given. |
| Comment by jacques-charles lafoucriere [ 06/Mar/13 ] |
|
Let's start with lfs migrate, get it working, and we will ask the list later |
| Comment by jacques-charles lafoucriere [ 06/Mar/13 ] |
|
Patch at http://review.whamcloud.com/5620 |
| Comment by Andreas Dilger [ 07/Mar/13 ] |
|
For use by the "lfs_migrate" script, this is sufficient for use today, since that script is already not safe for files being modified. It is better than the simple cp + checksum method, since it preserves the inode numbers and would also keep open file handles for read or write, so long as they are not actively writing during migration. "lfs migrate" probably needs a comment in the usage message to indicate it is not totally safe for files that are actively undergoing IO. It might also make sense to have an upper limit on the number of times it will try to migrate the file in the loop when the data version has changed, and continue to try any other files. It should save an error if this happens (maybe EBUSY) and return it at the end, so that it is clear to the user that the migrate was not totally successful. |
| Comment by Jinshan Xiong (Inactive) [ 07/Mar/13 ] |
> This is a first tentative because it does not add the support of O_EXCL as requested by Jinshan

Exclusive open is Lustre specific, and is not simply O_EXCL. We should make that clear from the start, otherwise people will be confused. Andreas, would you suggest a name please? |
| Comment by Andreas Dilger [ 11/Mar/13 ] |
|
I think it would make sense to ensure that exclusive open has the same semantics as leases, so that we do not need to implement something similar but slightly different in the future. I believe they have similar semantics of notifying the client, but do not necessarily revoke the client lease immediately. To be honest, I think we could get fairly similar behavior with regular MDS DLM locks, if they could be dropped asynchronously when there is a failure on one of the OSTs. The client would get a DLM layout lock, register a callback to userspace for any blocking callback event, and if the userspace thread isn't responsive in time then the lock is cancelled anyway (as it is today) and e.g. the open file handle is invalidated. That gives userspace some limited time to finish or abort the migration, and in almost all cases this will be sufficient, just like for regular clients. This was actually in the original migration design at https://bugzilla.lustre.org/show_bug.cgi?id=13182 and related bugs. Thoughts? |
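For illustration only, a sketch of how a lease-style exclusive open could gate the swap; the llapi_lease_get()/llapi_lease_check()/llapi_lease_put() helpers and LL_LEASE_WRLCK constant are assumed from later lustreapi releases, and their exact return conventions are an assumption, not the implementation discussed here:

```c
#include <errno.h>
#include <lustre/lustreapi.h>

static int migrate_under_lease(int fd)
{
	int rc;

	/* Take a write lease; a conflicting open from another client
	 * breaks the lease and notifies us, rather than blocking that
	 * client indefinitely. */
	rc = llapi_lease_get(fd, LL_LEASE_WRLCK);
	if (rc < 0)
		return rc;

	/* ... copy the file data here ... */

	/* If the lease was broken during the copy, the atomic swap must
	 * not proceed; report the file as busy instead. */
	if (llapi_lease_check(fd) != LL_LEASE_WRLCK)
		rc = -EBUSY;

	llapi_lease_put(fd);
	return rc;
}
```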
| Comment by Peter Jones [ 26/Mar/13 ] |
|
Dropped priority because patch landed for 2.4 |
| Comment by Andreas Dilger [ 12/Apr/13 ] |
|
With the landing of the latest patch, is it now safe to do "lfs migrate [--block]" on a file that is in use and actively undergoing IO? If yes, this would be a great feature coming out of the HSM work to announce for 2.4, even though the full HSM functionality is not yet available. |
| Comment by Jinshan Xiong (Inactive) [ 12/Apr/13 ] |
Yes. |
| Comment by Andreas Dilger [ 10/Jul/13 ] |
|
Before this bug can be closed, we need to update the user manual to describe this feature. |
| Comment by Keith Mannthey (Inactive) [ 07/Oct/13 ] |
|
+1 on this being a very cool thing. "lfs migrate [--block]" allows in-filesystem data migration. Is there a Lustre test for this? |
| Comment by Andreas Dilger [ 08/Oct/13 ] |
|
There is sanity.sh test_56w(), though it doesn't appear this verifies the data... |
| Comment by Keith Mannthey (Inactive) [ 08/Oct/13 ] |
|
Thanks, I will look at expanding the testing a bit. |
| Comment by Frank Zago (Inactive) [ 07/Nov/14 ] |
|
Minor fix: http://review.whamcloud.com/12627 |
| Comment by Gerrit Updater [ 03/Feb/15 ] |
|
Oleg Drokin (oleg.drokin@intel.com) merged in patch http://review.whamcloud.com/12627/ |
| Comment by Andreas Dilger [ 21/Feb/15 ] |
|
This was primarily landed in Lustre 2.4.0, but a number of related fixes have been landed since then. |