[LU-8324] HSM: prioritize HSM requests Created: 24/Jun/16  Updated: 25/Jan/21

Status: Open
Project: Lustre
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Improvement Priority: Minor
Reporter: CEA Assignee: Quentin Bouget
Resolution: Unresolved Votes: 2
Labels: None

Attachments: File analyzer-lu-8324.sh    
Issue Links:
Duplicate
is duplicated by LU-14363 Prioritize HSM cancel request Open
Related
is related to LU-10968 add coordinator bypass upcalls for HS... Reopened
is related to LU-8382 HSM: reorder coordinator's cleanup fu... Resolved
Severity: 3
Rank (Obsolete): 9223372036854775807

 Description   

Most of the time (unless the filesystem is full), RESTORE and REMOVE requests should be processed first, as they have the highest priority from a user's point of view; ARCHIVE requests should have a lower priority.



 Comments   
Comment by Quentin Bouget [ 24/Jun/16 ]

I am working on adding a dynamic policy to the coordinator that would define which requests are to be sent first to copytools.

The policy I am currently implementing defines two levels of priority and allows administrators to set which kind of request gets which priority (the default will be low_priority = [ ARCHIVE ], high_priority = [ RESTORE, CANCEL, REMOVE, ... ]). One could also set a ratio that the coordinator tries to follow when batching requests to send to copytools (X% high_priority and (100 - X)% low_priority); this prevents starvation. The ratio is a soft limit (if there are too few requests of one priority level to fill the buffers, the ratio is not enforced); this avoids wasting time.
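The batching policy described above can be sketched roughly as follows. This is an illustrative Python sketch, not the actual kernel patch: the function name, the request-kind strings, and the 80/20 default ratio are all assumptions for illustration.

```python
# Two-level priority batching with a soft high/low ratio, as the
# comment above describes. Not Lustre code; names are illustrative.
HIGH_PRIORITY = {"RESTORE", "CANCEL", "REMOVE"}

def pick_batch(pending, batch_size, high_ratio=0.8):
    """pending: ordered list of request kinds, e.g. ["ARCHIVE", "RESTORE"].
    Returns up to batch_size requests, aiming for high_ratio of
    high-priority ones."""
    high = [k for k in pending if k in HIGH_PRIORITY]
    low = [k for k in pending if k not in HIGH_PRIORITY]
    n_high = min(len(high), round(batch_size * high_ratio))
    n_low = min(len(low), batch_size - n_high)
    # Soft limit: if one class runs short, the other fills the
    # remaining slots rather than leaving copytool slots idle.
    n_high = min(len(high), batch_size - n_low)
    return high[:n_high] + low[:n_low]
```

With the assumed 0.8 ratio and a batch of 10, a backlog containing plenty of both kinds yields 8 high-priority and 2 low-priority requests; if either class is short, the batch is still filled from the other, matching the "soft limit" behaviour described above.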

Comment by Peter Jones [ 24/Jun/16 ]

ok Quentin. Let us know how you progress

Comment by Robert Read (Inactive) [ 21/Jul/16 ]

I agree we need to prioritize these operations; however, I don’t believe adding prioritization to the coordinator is the right answer here. We are currently on a path that will turn the coordinator into a general-purpose request queue, and this is not something that belongs in Lustre code, and certainly not in the kernel.

Instead, we should move HSM request processing out of the kernel and into user space. Although Lustre will still need to keep track of the implicit restore requests triggered by file access, all other operations could be done without using a coordinator. Lustre should provide the mechanisms needed for a correct HSM system, and allow user-space tools to manage all of the policies around what is copied, when, and with what priority.

I’m still thinking about exactly what this should look like, but at a minimum an Archive operation begins by setting the EXISTS flag and completes by setting the ARCHIVE flag. If the file is modified after EXISTS is set, then the MDT will set the DIRTY flag and reject the ARCHIVE flag when the mover attempts to set it later.

A Restore operation is primarily a layout swap, though it may need to be a special case to ensure the RELEASED flag is cleared atomically with the swap.

A Remove operation is done by clearing the EXISTS and ARCHIVE flags.
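The flag transitions described in the three paragraphs above can be sketched as a small state machine. This is a hypothetical illustration, not Lustre code: the class and method names are invented, and the flag names simply follow the comment's wording.

```python
# Illustrative sketch of the EXISTS/ARCHIVE/DIRTY/RELEASED transitions
# described above. Not Lustre code; names follow the comment's wording.
EXISTS, ARCHIVED, DIRTY, RELEASED = "exists", "archived", "dirty", "released"

class HsmFile:
    def __init__(self):
        self.flags = set()

    def archive_begin(self):
        # Archive starts by marking that a copy is being made.
        self.flags.add(EXISTS)

    def modify(self):
        # MDT marks the in-flight copy stale if the file changes.
        if EXISTS in self.flags:
            self.flags.add(DIRTY)

    def archive_complete(self):
        # The mover's attempt to set ARCHIVED is rejected if DIRTY.
        if DIRTY in self.flags:
            raise RuntimeError("ARCHIVE rejected: file modified since EXISTS")
        self.flags.add(ARCHIVED)

    def restore(self):
        # Primarily a layout swap; RELEASED must clear atomically with it.
        self.flags.discard(RELEASED)

    def remove(self):
        # Remove is just clearing the EXISTS and ARCHIVE flags.
        self.flags -= {EXISTS, ARCHIVED}
```

In this sketch, the dirty-check in `archive_complete` captures the race Robert describes: a modification between the start and end of an archive invalidates the copy without any coordinator involvement.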

The existing coordinator should remain in place for some time to continue to support the current set of tools, but I would like to discourage adding further complexity, and to solve issues like this in a different way.

Comment by Henri Doreau (Inactive) [ 25/Jul/16 ]

We would happily consider a more resilient and distributed mechanism for the coordinator. Nevertheless, I see it as a non-trivial project that should not block improvements to HSM if it only targets the mid-term future (I have neither seen a design document nor heard any discussion about it).

The patch has not been pushed yet, but the solution Quentin proposes is lightweight and elegant, and I believe it significantly improves the experience of using HSM in production.
More subjectively, I also find that it improves code quality and makes the logic of the CDT easier to reason about, which would be helpful for future replacement work.

Comment by Gerrit Updater [ 25/Jul/16 ]

Quentin Bouget (quentin.bouget.ocre@cea.fr) uploaded a new patch: http://review.whamcloud.com/21494
Subject: LU-8324 hsm: prioritize HSM requests
Project: fs/lustre-release
Branch: master
Current Patch Set: 1
Commit: 0655c8faf7cb7dcdb3b19dd761aad6c06fcda159

Comment by Matt Rásó-Barnett (Inactive) [ 09/Jan/17 ]

Hello,
I would just like to say that we would be very keen on this kind of feature at Cambridge - I've just run into this issue today where a single file restore operation is at the back of the queue behind ~10TB of archive jobs.

I'd be interested in testing this patch against one of our test filesystems, but I just wanted to add that we would really appreciate having more ability to control the coordinator queue, whether in its current state or in some future tool as Robert suggests.

Kind regards,
Matt Raso-Barnett
University of Cambridge

Comment by Quentin Bouget [ 20/Feb/17 ]

Hello Matt,

I think the patch is mature enough for you to test it if you are still interested in it.

Comment by Henri Doreau (Inactive) [ 12/Apr/17 ]

Any chance for this patch to make it into 2.10? It is a very useful feature for HSM users and we believe that the patch is mature.

Comment by Cory Spitz [ 18/May/17 ]

If not 2.10, it seems that 2.10.1 would be possible.

Comment by Peter Jones [ 23/May/17 ]

I think that 2.10.1 is the more likely option at this stage. It seems like there will be some discussions about this area at LUG next week.

Comment by Gerrit Updater [ 02/Jun/17 ]

Quentin Bouget (quentin.bouget@cea.fr) uploaded a new patch: https://review.whamcloud.com/27394
Subject: LU-8324 hsm: prioritize HSM requests
Project: fs/lustre-release
Branch: master
Current Patch Set: 1
Commit: f8d7f289866c2219d19a51693ae44cc6c3fdf867

Comment by Gerrit Updater [ 23/Jun/17 ]

Quentin Bouget (quentin.bouget@cea.fr) uploaded a new patch: https://review.whamcloud.com/27800
Subject: LU-8324 hsm: ease the development of a different coordinator
Project: fs/lustre-release
Branch: master
Current Patch Set: 1
Commit: 0ee6e5d71c6f549b48be59607d5f55f28e950f47

Comment by Nathan Rutman [ 23/Feb/18 ]

This thread looks kind of dead, but we have a desire to see some prioritization mechanism as well.
Some options:
1. FIFO (today)
2. Restore-first. All restore requests are prioritized over archive requests. (Except in-progress archives.)
3. Archive-first. All archives are prioritized.
4. Interleaved. Archive and Restore requests are alternated, as long as some of each are waiting.
5. Tunable. Adjustable ratio of archive:restore processing. Maybe this covers the above 2-4 as well.
6. Batched. Archives and Restores are grouped into separate batches, potentially resulting in fewer tape swaps.
7. Time-boxed. A variant of batched; batch ends after a fixed time period.
Many other options I'm sure...

Ultimately I'm in agreement with Robert Read's comment above that the prioritization should really be done outside of Lustre, but if the patch here implements #5, that might cover enough of the use cases to make most people happy...
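Of the options listed above, the interleaved policy (#4) is perhaps the simplest to express. The sketch below is purely illustrative (not a proposed implementation, and the function name is invented): it alternates the two request streams while both have entries, then drains the remainder in order.

```python
from itertools import zip_longest

def interleave(restores, archives):
    """Option 4 sketch: alternate RESTORE and ARCHIVE requests as long
    as some of each are waiting; leftovers are sent in order."""
    return [req
            for pair in zip_longest(restores, archives)
            for req in pair
            if req is not None]
```

For example, two restores and three archives come out as restore, archive, restore, archive, archive; the tunable-ratio policy (#5) generalizes this by weighting how many of each class are taken per round.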

Comment by Gerrit Updater [ 22/Mar/18 ]

Quentin Bouget (quentin.bouget@cea.fr) uploaded a new patch: https://review.whamcloud.com/31723
Subject: LU-8324 hsm: prioritize one RESTORE once in a while
Project: fs/lustre-release
Branch: master
Current Patch Set: 1
Commit: b30b036607a2bc4928e13e06462701bf5ba62d3d

Comment by Quentin Bouget [ 22/Mar/18 ]

The patch above is the shortest/simplest hack I could come up with to mitigate LU-8324 until a more definitive fix is developed (it is more of a band-aid than anything else).

The idea is to use the times when the coordinator traverses its whole llog to "force-schedule" at least one RESTORE request. In practice, this means that you should see at least one RESTORE request scheduled every "loop_period" (the value in /proc/<fsname>/mdt/<mdt-name>/hsm/loop_period) seconds.
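The band-aid's effect on one llog traversal can be sketched as follows. This is an illustrative Python model, not the actual patch: the function name, the string request kinds, and the `budget` parameter are assumptions standing in for the coordinator's real send limits.

```python
def scan_llog(requests, budget):
    """Model one full llog traversal: send up to `budget` requests in
    order, but guarantee at least one RESTORE is force-scheduled per
    pass (i.e. per loop_period) even when the budget is exhausted.
    Illustrative only; not the actual coordinator code."""
    scheduled = []
    for kind in requests:
        if len(scheduled) < budget:
            scheduled.append(kind)
        elif kind == "RESTORE" and "RESTORE" not in scheduled:
            # The band-aid: one RESTORE jumps the queue each traversal.
            scheduled.append(kind)
    return scheduled
```

With a queue of five ARCHIVEs followed by one RESTORE and a budget of three, a plain FIFO would leave the RESTORE waiting behind all the archives; in this model the RESTORE is still scheduled in the same pass.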

Comment by Gerrit Updater [ 26/Sep/18 ]

Quentin Bouget (quentin.bouget@cea.fr) uploaded a new patch: https://review.whamcloud.com/33239
Subject: LU-8324 hsm: prioritize one RESTORE once in a while
Project: fs/lustre-release
Branch: b2_10
Current Patch Set: 1
Commit: 93364e9f3b0c9694904d2c1e2a687af61a980c1f

Comment by Gerrit Updater [ 12/Oct/18 ]

Oleg Drokin (green@whamcloud.com) merged in patch https://review.whamcloud.com/31723/
Subject: LU-8324 hsm: prioritize one RESTORE once in a while
Project: fs/lustre-release
Branch: master
Current Patch Set:
Commit: 0dce1ddefc673a3f39b4964d6b669e2a11aaf903

Comment by Gerrit Updater [ 24/Apr/19 ]

Quentin Bouget (quentin.bouget@cea.fr) uploaded a new patch: https://review.whamcloud.com/34749
Subject: LU-8324 hsm: prioritize one RESTORE once in a while
Project: fs/lustre-release
Branch: b2_10
Current Patch Set: 1
Commit: 3fa2b1682755eeb988d10af53797a8d5e1a3679d

Generated at Sat Feb 10 02:16:33 UTC 2024 using Jira 9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c.