[LU-8324] HSM: prioritize HSM requests Created: 24/Jun/16 Updated: 25/Jan/21 |
|
| Status: | Open |
| Project: | Lustre |
| Component/s: | None |
| Affects Version/s: | None |
| Fix Version/s: | None |
| Type: | Improvement | Priority: | Minor |
| Reporter: | CEA | Assignee: | Quentin Bouget |
| Resolution: | Unresolved | Votes: | 2 |
| Labels: | None | ||
| Attachments: |
|
| Issue Links: |
|
| Severity: | 3 |
| Rank (Obsolete): | 9223372036854775807 |
| Description |
|
Most of the time (unless the filesystem is full), RESTORE and REMOVE requests should be processed first, as they have the highest priority from a user's point of view; ARCHIVE requests should have a lower priority. |
| Comments |
| Comment by Quentin Bouget [ 24/Jun/16 ] |
|
I am working on adding a dynamic policy to the coordinator that would define which requests are sent first to copytools. The policy I am currently implementing defines two levels of priority and lets administrators choose which kind of request gets which priority (the default will be low_priority = [ ARCHIVE ], high_priority = [ RESTORE, CANCEL, REMOVE, ... ]). One could also set a ratio that the coordinator tries to follow when batching requests for copytools (X% high_priority and (100 - X)% low_priority); this prevents starvation. The ratio is a soft limit: if there are too few requests of one priority level to fill the buffers, the ratio is not enforced, which avoids wasting time. A sketch of the idea follows below. |
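A minimal, self-contained C sketch of the two-level idea described above. All names (`hsm_req`, `hsm_queue`, `pick_next_batch`, ...) are hypothetical and do not come from the actual coordinator patch; this only illustrates the ratio-as-soft-limit behaviour.

```c
/*
 * Sketch of two-level priority batching: drain two FIFOs, aiming for a
 * configured percentage of high-priority requests per batch, but fall
 * back to the other queue when one runs dry (soft limit).
 */
#include <stddef.h>

enum hsm_prio { HSM_PRIO_LOW, HSM_PRIO_HIGH };

struct hsm_req {
	int		 action;	/* ARCHIVE, RESTORE, REMOVE, CANCEL, ... */
	enum hsm_prio	 prio;		/* from the admin-defined mapping */
	struct hsm_req	*next;
};

/* One FIFO per priority level. */
struct hsm_queue {
	struct hsm_req	*head;
};

static struct hsm_req *dequeue(struct hsm_queue *q)
{
	struct hsm_req *req = q->head;

	if (req != NULL)
		q->head = req->next;
	return req;
}

/*
 * Fill a batch of at most 'max' requests, aiming for 'ratio' percent of
 * high-priority requests.  The ratio is a soft limit: if one queue runs
 * dry, the remaining slots are filled from the other queue so that no
 * copytool slot is wasted.
 */
static size_t pick_next_batch(struct hsm_queue *high, struct hsm_queue *low,
			      unsigned int ratio, struct hsm_req **batch,
			      size_t max)
{
	size_t want_high = max * ratio / 100;
	size_t n = 0;

	while (n < max) {
		struct hsm_req *req;

		/* Prefer the high-priority queue until its share is used. */
		req = (n < want_high) ? dequeue(high) : dequeue(low);
		/* Soft limit: fall back to the other queue if this one is empty. */
		if (req == NULL)
			req = (n < want_high) ? dequeue(low) : dequeue(high);
		if (req == NULL)
			break;		/* both queues are empty */
		batch[n++] = req;
	}
	return n;
}
```

Treating the ratio as a soft limit keeps copytools busy even when the workload is heavily skewed toward one request type, which matches the "prevents wasting time" goal described in the comment above.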
| Comment by Peter Jones [ 24/Jun/16 ] |
|
ok Quentin. Let us know how you progress |
| Comment by Robert Read (Inactive) [ 21/Jul/16 ] |
|
I agree we need to prioritize these operations; however, I don't believe adding prioritization to the coordinator is the right answer here. We are currently on a path that will turn the coordinator into a general-purpose request queue, and this is not something that belongs in Lustre code, and certainly not in the kernel. Instead, we should move the HSM request processing out of the kernel and into user space. Although Lustre will still need to keep track of the implicit restore requests triggered by file access, all other operations could be done without using a coordinator. Lustre should provide the mechanisms needed for a correct HSM system and allow the user-space tools to manage all of the policies around what is copied, when, and with which priority. A restore operation is primarily a layout swap, though it may need to be special-cased to ensure the RELEASED flag is cleared atomically with the swap. A remove operation is done by clearing the EXISTS and ARCHIVE flags. The existing coordinator should remain in place for some time to continue to support the current set of tools, but I would like to discourage adding further complexity and to solve issues like this in a different way. |
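To make the "remove is done by clearing the EXISTS and ARCHIVE flags" point concrete, here is a hedged user-space sketch based on the public lustreapi HSM state call; the flag names (HS_EXISTS, HS_ARCHIVED) and the llapi_hsm_state_set() signature reflect my reading of the lustreapi headers, and this is not code from any patch on this ticket.

```c
/*
 * Hedged sketch: clear the HSM "exists" and "archived" flags on a file
 * from user space, as a tool implementing the proposed user-space
 * remove path might do.  Error handling is minimal on purpose.
 */
#include <stdio.h>
#include <lustre/lustreapi.h>

int hsm_mark_removed(const char *path)
{
	/* Set no flags, clear HS_EXISTS and HS_ARCHIVED, archive id 0. */
	int rc = llapi_hsm_state_set(path, 0, HS_EXISTS | HS_ARCHIVED, 0);

	if (rc < 0)
		fprintf(stderr, "clearing HSM flags on %s failed: %d\n",
			path, rc);
	return rc;
}
```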
| Comment by Henri Doreau (Inactive) [ 25/Jul/16 ] |
|
We would happily consider a more resilient and distributed mechanism for the coordinator. Nevertheless, I see it as a non-trivial project that should not block improvements to HSM if it only targets the mid-term future (I have neither seen a design document nor heard any discussion about it). The patch has not been pushed yet, but the solution Quentin proposes is lightweight and elegant, and I believe it significantly improves the experience of using HSM in production. |
| Comment by Gerrit Updater [ 25/Jul/16 ] |
|
Quentin Bouget (quentin.bouget.ocre@cea.fr) uploaded a new patch: http://review.whamcloud.com/21494 |
| Comment by Matt Rásó-Barnett (Inactive) [ 09/Jan/17 ] |
|
Hello, I'd be interested in testing this patch against one of our test filesystems, but I just wanted to add a comment that we would really appreciate having more ability to control the coordinator queue, whether in its current state or via some future tool as Robert suggests. Kind regards, |
| Comment by Quentin Bouget [ 20/Feb/17 ] |
|
Hello Matt, I think the patch is mature enough for you to test it if you are still interested in it. |
| Comment by Henri Doreau (Inactive) [ 12/Apr/17 ] |
|
Any chance for this patch to make it into 2.10? It is a very useful feature for HSM users and we believe that the patch is mature. |
| Comment by Cory Spitz [ 18/May/17 ] |
|
If not 2.10, it seems that 2.10.1 would be possible. |
| Comment by Peter Jones [ 23/May/17 ] |
|
I think that 2.10.1 is a more likely option at this stage. It seems like there will be some discussions about this area at LUG next week. |
| Comment by Gerrit Updater [ 02/Jun/17 ] |
|
Quentin Bouget (quentin.bouget@cea.fr) uploaded a new patch: https://review.whamcloud.com/27394 |
| Comment by Gerrit Updater [ 23/Jun/17 ] |
|
Quentin Bouget (quentin.bouget@cea.fr) uploaded a new patch: https://review.whamcloud.com/27800 |
| Comment by Nathan Rutman [ 23/Feb/18 ] |
|
This thread looks kind of dead, but we have a desire to see some prioritization mechanism as well. Ultimately I'm in agreement with Robert Read's comment above that the prioritization should really be done outside of Lustre, but if the patch here implements #5 that might cover enough of the use cases to make most people happy... |
| Comment by Gerrit Updater [ 22/Mar/18 ] |
|
Quentin Bouget (quentin.bouget@cea.fr) uploaded a new patch: https://review.whamcloud.com/31723 |
| Comment by Quentin Bouget [ 22/Mar/18 ] |
|
The patch above is the shortest/simplest hack I could come up with to work around LU-8324 until a more definitive fix is developed (it is more of a band-aid than anything else). The idea is to use the times when the coordinator traverses its whole llog to "force-schedule" at least one RESTORE request. In practice, this means that you should see at least one RESTORE request scheduled every "loop_period" seconds (the value in /proc/<fsname>/mdt/<mdt-name>/hsm/loop_period). A rough sketch of the idea follows below. |
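A rough, self-contained C sketch of the force-scheduling idea described above (illustrative names only, not the merged patch): once per full llog scan, if the normal batching did not send any RESTORE, one waiting RESTORE is pushed out regardless of the scheduling budget.

```c
/*
 * Illustrative sketch of the band-aid: during each full llog scan (once
 * per loop_period), ensure at least one waiting RESTORE is sent even if
 * the scheduling budget was already consumed by ARCHIVE requests.
 */
#include <stdbool.h>
#include <stddef.h>

enum hsm_action { HSMA_ARCHIVE, HSMA_RESTORE, HSMA_REMOVE, HSMA_CANCEL };

struct waiting_req {
	enum hsm_action		 action;
	struct waiting_req	*next;
};

/* Hypothetical hook called once per full llog scan. */
static size_t schedule_scan(struct waiting_req *wait_list,
			    int (*send)(struct waiting_req *), size_t budget)
{
	struct waiting_req *req;
	bool restore_sent = false;
	size_t sent = 0;

	/* Normal pass: schedule waiting requests up to the budget. */
	for (req = wait_list; req != NULL && sent < budget; req = req->next) {
		if (send(req) == 0) {
			sent++;
			if (req->action == HSMA_RESTORE)
				restore_sent = true;
		}
	}

	/* Force-schedule one RESTORE per scan so restores cannot starve. */
	if (!restore_sent) {
		for (req = wait_list; req != NULL; req = req->next) {
			if (req->action == HSMA_RESTORE && send(req) == 0) {
				sent++;
				break;
			}
		}
	}
	return sent;
}
```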
| Comment by Gerrit Updater [ 26/Sep/18 ] |
|
Quentin Bouget (quentin.bouget@cea.fr) uploaded a new patch: https://review.whamcloud.com/33239 |
| Comment by Gerrit Updater [ 12/Oct/18 ] |
|
Oleg Drokin (green@whamcloud.com) merged in patch https://review.whamcloud.com/31723/ |
| Comment by Gerrit Updater [ 24/Apr/19 ] |
|
Quentin Bouget (quentin.bouget@cea.fr) uploaded a new patch: https://review.whamcloud.com/34749 |