[LU-7920] hsm coordinator request_count and max_requests not used consistently

Details

    • Type: Bug
    • Resolution: Fixed
    • Priority: Minor
    • Fix Version: Lustre 2.9.0
    • Affects Version: Lustre 2.8.0
    • Severity: 3

    Description

      I've always wondered why sometimes max_requests has an effect and other times it appears to be completely ignored.

      The hsm coordinator receives new actions from user space in action lists, and there can be up to ~50 actions in a list. As the coordinator sends the actions (aka "requests") to agents, it tracks how many are being processed in cdt->cdt_request_count. There is also a tunable parameter, cdt->cdt_max_requests, that is presumably intended to limit the number of requests sent to the agents. The count is compared against max_requests in the main cdt loop prior to processing each action list:

      	/* still room for work ? */
      	if (atomic_read(&cdt->cdt_request_count) ==
      	    cdt->cdt_max_requests)
      		break;
      

      Note it is checking for equality there.

      Since this check occurs prior to processing an action list, and there can be multiple requests per list, request_count can easily end up greatly exceeding max_requests. When that happens the coordinator continues to send all available requests to the agents, which might actually be the right thing to do anyway.

      If we really want a limit, a simple workaround here is to change "==" to ">=", but it seems wrong to have a single global limit regardless of the number of agents and archives. Ideally the agents should be able to maximize the throughput to each of the archives they are handling. I believe there have been some discussions on how best to do this, but I don't think we've reached a consensus yet.
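
      For illustration, the "==" to ">=" workaround mentioned above would change only the comparison in the snippet quoted earlier:

      	/* still room for work ? */
      	if (atomic_read(&cdt->cdt_request_count) >=
      	    cdt->cdt_max_requests)
      		break;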

          Activity

            gerrit Gerrit Updater added a comment -

            Oleg Drokin (oleg.drokin@intel.com) merged in patch http://review.whamcloud.com/19382/
            Subject: LU-7920 hsm: Account for decreasing max request count
            Project: fs/lustre-release
            Branch: master
            Current Patch Set:
            Commit: 5bfc22a47debfd5a6103862424546c100b3ad94e

            sthiell Stephane Thiell added a comment -

            Indeed, max_requests only works as expected when you start from scratch (no active requests, and all started copytool agents must be able to handle max_requests). max_requests is also broken when you try to decrease its value while active requests are running; in that particular case I've seen the same behavior Frédérick reported at LUG. During my initial tests I often played with active_requests_timeout to clean things up as a workaround. As this is clearly a defect, it would be nice to fix it in the Intel Lustre 2.5 version too. Please keep a global max_requests to limit system resources.

            I'm using Google as the Lustre/HSM backend for one of our filesystems, and like other cloud storage it is constrained by service quotas such as a request limit per period of time… During the initial archival process, my main issue was that the global to-the-cloud request rate implied by all running copytools depends on the size of the files being archived. Ideally I would like to be able to set a max_requests per period of time (and per archive_id) in a flexible way, like Robert described: done in user space by the copytool agent itself, with a more advanced CDT/agent protocol to control the global requests/x_secs of all running copytools. Google recommends implementing an exponential backoff error-handling strategy in the copytool (https://developers.google.com/drive/v2/web/handle-errors#exponential-backoff), and this is what I did. While I am still seeing wasteful network requests, it's working just fine, so maybe it's not worth the hassle after all.
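
            (A minimal user-space sketch, in C, of the backoff loop described above; archive_one_file() and retryable_error() are hypothetical stand-ins for whatever calls the copytool actually makes against the cloud backend.)

            	#include <stdlib.h>
            	#include <time.h>

            	/* hypothetical stand-ins provided elsewhere by the copytool */
            	int archive_one_file(const char *path);
            	int retryable_error(int rc);

            	#define MAX_RETRIES	8

            	static int archive_with_backoff(const char *path)
            	{
            		unsigned int delay_ms = 500;	/* initial backoff */
            		int rc = -1;
            		int i;

            		for (i = 0; i < MAX_RETRIES; i++) {
            			rc = archive_one_file(path);
            			if (rc == 0 || !retryable_error(rc))
            				break;

            			/* wait for the current delay plus random jitter,
            			 * then double the delay for the next attempt */
            			unsigned int wait_ms = delay_ms + rand() % 250;
            			struct timespec ts = { .tv_sec = wait_ms / 1000,
            					       .tv_nsec = (wait_ms % 1000) * 1000000L };
            			nanosleep(&ts, NULL);
            			delay_ms *= 2;
            		}
            		return rc;
            	}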

            rread Robert Read added a comment -

            I agree we need to limit the number of operations the movers can do, but this isn't the right place to do it. We need to separate the number of actions that have been submitted to the mover from the number of parallel operations allowed per client. The parallel IO limit should be implemented by the mover, and the coordinator should simply send as many requests as the movers ask for.

            Consider the use case of restoring data from a cold storage archive such as AWS Glacier. Typically it will take around 4 hours for an object to be retrieved from the archive and made available to download. If a user needs to restore many thousands (or millions?) of files, the mover should be able to submit as many as possible for retrieval all at once, and not be limited by the number of parallel operations the filesystem supports. Once the files are available to download, then the mover will limit how many are copied in parallel.

            Likewise, tape archive solutions such as DMF are able to manage the tape IO more efficiently if all requests are sent directly to DMF as quickly as possible.
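
            (A minimal sketch, in C, of the split Robert describes, assuming a thread-per-action copytool: the mover accepts every action the coordinator hands it, and a counting semaphore, not the coordinator's max_requests, bounds the parallel data movement. process_action() is a hypothetical per-request worker.)

            	#include <pthread.h>
            	#include <semaphore.h>

            	/* hypothetical worker that performs the actual data copy */
            	void process_action(void *hai);

            	#define MAX_PARALLEL_COPIES	16	/* per-mover parallel IO limit */

            	static sem_t copy_slots;

            	/* called once at mover startup */
            	static void mover_init(void)
            	{
            		sem_init(&copy_slots, 0, MAX_PARALLEL_COPIES);
            	}

            	/* one thread per action received from the coordinator; only
            	 * MAX_PARALLEL_COPIES of them move data at any one time */
            	static void *copy_thread(void *hai)
            	{
            		sem_wait(&copy_slots);	/* block until an IO slot is free */
            		process_action(hai);
            		sem_post(&copy_slots);
            		return NULL;
            	}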

            fredlefebvre Frédérick Lefebvre added a comment -

            John,

            It sure does look like the same issue. I also agree with Jacques-Charles that we need to keep a global limit. It makes sense to be able to limit the number of parallel HSM operations a specific filesystem can safely handle. The HSM agent / copytool should handle the logic of figuring out how many parallel requests each client can handle.

            gerrit Gerrit Updater added a comment -

            Nathaniel Clark (nathaniel.l.clark@intel.com) uploaded a new patch: http://review.whamcloud.com/19382
            Subject: LU-7920 hsm: Account for decreasing max request count
            Project: fs/lustre-release
            Branch: master
            Current Patch Set: 1
            Commit: d6f83d318368a197dc68c0dad68add4e6857b712

            jhammond John Hammond added a comment - - edited

            Frédérick,

            I believe that this is the issue that you described in your LUG talk. A patch to change == to >= is forthcoming.

            rread Robert Read added a comment -

            That is an interesting idea for the agent to set this on registration, as I would like to see the agent more involved. My preference would be to add flow control between the agent and the CDT so the rate can adapt to different situations. Perhaps it would be sufficient if the agent could adjust its per-archive limits at any time, not just at registration. This could also be used to shut down the agent cleanly by stopping new requests and allowing existing ones to drain.

            jcl jacques-charles lafoucriere added a comment -

            We need to keep a global max request count to avoid a storm of requests being sent to an agent (like find . -exec hsm_archive ...). Today this is broken. You are right that a better limit would be a max_request per archive backend, and then we assume every agent can serve the same number of requests. If we want a limit for each agent, it has to be defined by the agent at registration.

            People

              Assignee: utopiabound Nathaniel Clark
              Reporter: rread Robert Read
              Votes: 1
              Watchers: 11

              Dates

                Created:
                Updated:
                Resolved: