[LU-7920] hsm coordinator request_count and max_requests not used consistently Created: 25/Mar/16  Updated: 24/Mar/17  Resolved: 14/Jun/16

Status: Resolved
Project: Lustre
Component/s: None
Affects Version/s: Lustre 2.8.0
Fix Version/s: Lustre 2.9.0

Type: Bug Priority: Minor
Reporter: Robert Read (Inactive) Assignee: Nathaniel Clark
Resolution: Fixed Votes: 1
Labels: cea

Issue Links:
Duplicate
is duplicated by LU-7995 Decreasing max request count in HSM c... Closed
Related
Severity: 3
Rank (Obsolete): 9223372036854775807

 Description   

I've always wondered why sometimes max_requests has an effect and other times it appears to be completely ignored.

The HSM coordinator receives new actions from user space in action lists, and a single list can contain anywhere from one to ~50 actions. As the coordinator sends the actions (aka "requests") to agents, it tracks how many are being processed in cdt->cdt_request_count. There is also a tunable parameter, cdt->cdt_max_requests, that is presumably intended to limit the number of requests sent to the agents. The count is compared against max_requests in the main cdt loop prior to processing each action list:

	/* still room for work ? */
	if (atomic_read(&cdt->cdt_request_count) ==
	    cdt->cdt_max_requests)
		break;

Note it is checking for equality there.

Since this check occurs before an action list is processed, and there can be multiple requests per list, request_count can easily end up greatly exceeding max_requests. When that happens, the coordinator continues to send all available requests to the agents, which might actually be the right thing to do anyway.

If we really want a limit, then a simple workaround here is to change "==" to ">=" to provide some kind of limit, but it really seems wrong to have a single global limit regardless of the number of agents and archives. Ideally the agents should be able to maximize the throughput to each of the archives they are handling. I believe there have been some discussions on how best to do this, but I don't think we've reached a consensus yet.
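For illustration, a minimal sketch of that simple workaround against the snippet above (the actual change merged for this ticket may differ in detail):

	/* sketch of the workaround: stop handing out new action lists once
	 * the in-flight count has reached (or passed) the tunable limit,
	 * instead of only on exact equality */
	if (atomic_read(&cdt->cdt_request_count) >=
	    cdt->cdt_max_requests)
		break;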



 Comments   
Comment by jacques-charles lafoucriere [ 30/Mar/16 ]

We need to keep a global max request limit to avoid a storm of requests being sent to the agents (like find . -exec hsm_archive ...). Today this is broken. You are right that a better limit would be a max_request per archive backend; then we assume any agent can serve the same number of requests. If we want a limit for each agent, it has to be defined by the agent at registration.
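For illustration only, a per-archive check could look something like the sketch below; the cdt_archive_request_count and cdt_archive_max_requests fields are hypothetical and do not exist in the current coordinator:

	/* hypothetical per-archive-backend accounting, purely illustrative:
	 * one in-flight counter per archive instead of a single global one */
	if (atomic_read(&cdt->cdt_archive_request_count[archive_id]) >=
	    cdt->cdt_archive_max_requests)
		break;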

Comment by Robert Read (Inactive) [ 30/Mar/16 ]

That is an interesting idea for the agent to set this at registration, as I would like to see the agent more involved. My preference would be to add flow control between the agent and the CDT so the rate can adapt to different situations. Perhaps it would be sufficient if the agent could adjust its per-archive limits at any time, not just at registration. This could also be used to shut down the agent cleanly by stopping new requests and allowing existing ones to drain.
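As a purely hypothetical sketch of such a CDT/agent flow-control exchange (no such message exists in the current protocol), the agent could push a limit update like this at registration or at any later time:

	/* hypothetical wire structure, illustrative only */
	struct hsm_agent_limit_update {
		__u32	halu_archive_id;	/* archive this limit applies to */
		__u32	halu_max_requests;	/* new per-archive limit; 0 stops new
						 * requests so existing ones can drain */
	};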

Comment by John Hammond [ 07/Apr/16 ]

Frédérick,

I believe that this is the issue that you described in your LUG talk. A patch to change == to >= is forthcoming.

Comment by Gerrit Updater [ 07/Apr/16 ]

Nathaniel Clark (nathaniel.l.clark@intel.com) uploaded a new patch: http://review.whamcloud.com/19382
Subject: LU-7920 hsm: Account for decreasing max request count
Project: fs/lustre-release
Branch: master
Current Patch Set: 1
Commit: d6f83d318368a197dc68c0dad68add4e6857b712

Comment by Frédérick Lefebvre [ 07/Apr/16 ]

John,

It sure does look like the same issue. I also agree with Jacques-Charles that we need to keep a global limit. It makes sense to be able to limit the number of parallel HSM operations a specific filesystem can safely handle. The HSM agent / copytool should handle the logic of figuring out how many parallel requests each client can handle.

Comment by Robert Read (Inactive) [ 08/Apr/16 ]

I agree we need to limit the number of operations the movers can do, but this isn't the right place to do it. We need to separate the number of actions that have been submitted to the mover from the number of parallel operations allowed per client. The parallel IO limit should be implemented by the mover, and the coordinator should simply send as many requests to the mover as the movers ask for.

Consider the use case of restoring data from a cold storage archive such as AWS Glacier. Typically it will take around 4 hours for an object to be retrieved from the archive and made available to download. If a user needs to restore many thousands (or millions?) of files, the mover should be able to submit as many as possible for retrieval all at once, and not be limited by the number of parallel operations the filesystem supports. Once the files are available to download, then the mover will limit how many are copied in parallel.

Likewise, tape archive solutions such as DMF are able to manage the tape IO more efficiently if all requests are sent directly to DMF as quickly as possible.
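A minimal user-space sketch of the mover-side limit described above, assuming a hypothetical do_copy() data-movement helper: the mover accepts every request from the coordinator but caps how many copies run in parallel.

	#include <semaphore.h>

	static sem_t io_slots;	/* initialised elsewhere with
				 * sem_init(&io_slots, 0, parallel_io_limit) */

	static void *process_action(void *arg)
	{
		/* queueing is unbounded; only the copy itself waits for a slot */
		sem_wait(&io_slots);
		do_copy(arg);		/* hypothetical data-movement helper */
		sem_post(&io_slots);
		return NULL;
	}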

Comment by Stephane Thiell [ 24/Apr/16 ]

Indeed, max_requests only works as expected when you start from scratch (no active requests, and all started copytool agents able to handle max_requests). Also, max_requests is broken when you try to decrease its value while active requests are running. In that particular case I’ve seen the same behavior Frédérick reported at LUG. During my initial tests, I often played with active_requests_timeout to clean things up as a workaround. As it is clearly a defect, it would be nice to fix this in the Intel Lustre 2.5 version too. Please keep a global max_requests to limit system resources.

I’m using Google as the Lustre/HSM backend for one of our filesystems, and like other cloud storage, I am constrained by service quotas such as a request limit per period of time… During the initial archival process, my main issue was that the global to-the-cloud request rate, implied by all running copytools, depends on the size of the files being archived. Ideally I would like to be able to set a max_requests per period of time (and per archive_id) in a flexible way, like Robert described, handled in user space by the copytool agent itself, with a more advanced CDT/agent protocol to control the global requests/x_secs of all running copytools.

Google recommended implementing an exponential backoff error handling strategy in the copytool (https://developers.google.com/drive/v2/web/handle-errors#exponential-backoff), and this is what I did. While I am still seeing wasteful network requests, it’s working just fine, so maybe it’s not worth the hassle after all.
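For reference, a minimal sketch of that exponential backoff strategy in a copytool (send_request() and struct request are hypothetical placeholders for the actual cloud call):

	#include <stdbool.h>
	#include <stdlib.h>
	#include <unistd.h>

	/* retry a cloud request with exponentially growing delays plus a
	 * little jitter; gives up after roughly four minutes of waiting */
	static bool send_with_backoff(struct request *req)
	{
		unsigned int delay = 1;		/* seconds */
		int retry;

		for (retry = 0; retry < 8; retry++) {
			if (send_request(req) == 0)
				return true;
			sleep(delay + rand() % 2);	/* delay + 0-1s jitter */
			delay *= 2;
		}
		return false;
	}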

Comment by Gerrit Updater [ 14/Jun/16 ]

Oleg Drokin (oleg.drokin@intel.com) merged in patch http://review.whamcloud.com/19382/
Subject: LU-7920 hsm: Account for decreasing max request count
Project: fs/lustre-release
Branch: master
Current Patch Set:
Commit: 5bfc22a47debfd5a6103862424546c100b3ad94e
