Details
- Type: Bug
- Resolution: Fixed
- Priority: Minor
- Fix Version: Lustre 2.8.0
- Severity: 3
Description
I've always wondered why sometimes max_requests has an effect and other times it appears to be completely ignored.
The HSM coordinator receives new actions from user space in action lists, and a list can contain anywhere from one to ~50 actions. As the coordinator sends the actions (aka "requests") to agents, it tracks how many are being processed in cdt->cdt_request_count. There is also a tunable parameter, cdt->cdt_max_requests, that is presumably intended to limit the number of requests sent to the agents. The count is compared against max_requests in the main cdt loop prior to processing each action list:
/* still room for work ? */
if (atomic_read(&cdt->cdt_request_count) == cdt->cdt_max_requests)
	break;
Note it is checking for equality there.
Since this check occurs prior to processing an action list, and a single list can carry many requests, request_count can easily end up well above max_requests. Once the count has jumped past max_requests, the equality test can never match again, so the coordinator keeps sending all available requests to the agents, which might actually be the right thing to do anyway.
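To make the overshoot concrete, here is a toy model in plain C (not Lustre code; the names and list sizes are invented for illustration) of the pattern described above: the limit is checked only once per action list, and each list can carry many actions, so the count sails past max_requests and the equality test never matches again.

#include <stdio.h>

int main(void)
{
	int max_requests = 10;            /* stand-in for cdt_max_requests */
	int request_count = 0;            /* stand-in for cdt_request_count */
	int list_sizes[] = { 8, 50, 50 }; /* actions per incoming action list */
	int nlists = sizeof(list_sizes) / sizeof(list_sizes[0]);
	int i, j;

	for (i = 0; i < nlists; i++) {
		/* the coordinator's check: equality only, once per list */
		if (request_count == max_requests)
			break;
		/* every action in the list is sent before re-checking */
		for (j = 0; j < list_sizes[i]; j++)
			request_count++;
		printf("after list %d: count = %d (max = %d)\n",
		       i, request_count, max_requests);
	}
	return 0;
}

Running this prints counts of 8, 58, and 108: the count steps from 8 straight to 58 without ever equaling 10, so the break never fires.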
If we really want a limit, a simple workaround is to change "==" to ">=" so the check at least acts as an upper bound (see the sketch below), but it really seems wrong to have a single global limit regardless of the number of agents and archives. Ideally the agents should be able to maximize the throughput to each of the archives they are handling. I believe there have been some discussions on how best to do this, but I don't think we've reached a consensus yet.
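A minimal sketch of that workaround, keeping the check exactly where it is today:

/* still room for work ? */
if (atomic_read(&cdt->cdt_request_count) >= cdt->cdt_max_requests)
	break;

Note this bounds the overshoot rather than eliminating it: because the check still runs once per action list, the count can exceed max_requests by up to one list's worth of requests.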
Issue Links
- is duplicated by LU-7995: Decreasing max request count in HSM can cause count to grow unbounded (Closed)
Comments

I agree we need to limit the number of operations the movers can do, but this isn't the right place to do it. We need to separate the number of actions that have been submitted to the mover from the number of parallel operations allowed per client. The parallel IO limit should be implemented by the mover, and the coordinator should simply send as many requests to the mover as the movers ask for.
Consider the use case of restoring data from a cold storage archive such as AWS Glacier. Typically it will take around 4 hours for an object to be retrieved from the archive and made available to download. If a user needs to restore many thousands (or millions?) of files, the mover should be able to submit as many as possible for retrieval all at once, and not be limited by the number of parallel operations the filesystem supports. Once the files are available to download, then the mover will limit how many are copied in parallel.
Likewise, tape archive solutions such as DMF are able to manage the tape IO more efficiently if all requests are sent directly to DMF as quickly as possible.
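As a rough illustration of the split proposed above (plain C with POSIX semaphores; this is not actual mover code, and every name here is hypothetical), the coordinator would hand over actions without any cap, while the mover bounds its own parallel transfers:

#include <semaphore.h>
#include <stdio.h>

/* Hypothetical mover-side limit on parallel data transfers; the
 * coordinator imposes no cap of its own in this scheme. */
#define MOVER_MAX_PARALLEL 8

/* Hypothetical descriptor standing in for a real HSM action. */
struct hsm_action {
	int id;
};

static sem_t copy_slots;

static void do_copy(struct hsm_action *act)
{
	/* stand-in for the actual archive/restore data transfer */
	printf("copying action %d\n", act->id);
}

/* Each mover worker blocks here until a copy slot frees up, so at most
 * MOVER_MAX_PARALLEL transfers run concurrently no matter how many
 * actions the coordinator has queued. */
static void mover_process_action(struct hsm_action *act)
{
	sem_wait(&copy_slots);
	do_copy(act);
	sem_post(&copy_slots);
}

int main(void)
{
	struct hsm_action a = { .id = 1 };

	sem_init(&copy_slots, 0, MOVER_MAX_PARALLEL);
	mover_process_action(&a);
	sem_destroy(&copy_slots);
	return 0;
}

With this shape, a Glacier mover could accept and submit every retrieval request immediately, and the semaphore would only throttle the actual downloads once objects become available.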