
LU-18072: Lock cancel resending overwhelms ldlm canceld thread

Details

    • Type: Bug
    • Resolution: Unresolved
    • Priority: Critical
    • Affects Version/s: Lustre 2.16.0
    • Severity: 3

    Description

      Ever since LU-1565 landed, it looks like another subtle problem was introduced.

      Since LDLM_CANCEL requests don't have any throttling, they can be sent in huge numbers. This is normally not a big problem, as processing is relatively lightweight.

      But if such a cancel is resent, we suddenly take this path in ptlrpc_server_check_resend_in_progress(), added in LU-793:

      static struct ptlrpc_request *
      ptlrpc_server_check_resend_in_progress(struct ptlrpc_request *req)
      {
              struct ptlrpc_request *tmp = NULL;

              if (!(lustre_msg_get_flags(req->rq_reqmsg) & MSG_RESENT))
                      return NULL;

              /*
               * This list should not be longer than max_requests in
               * flights on the client, so it is not all that long.
               * Also we only hit this codepath in case of a resent
               * request which makes it even more rarely hit
               */
              list_for_each_entry(tmp, &req->rq_export->exp_reg_rpcs,
                                  rq_exp_list) {
                      /* Found duplicate one */
                      if (tmp->rq_xid == req->rq_xid)
                              goto found;
              }
              list_for_each_entry(tmp, &req->rq_export->exp_hp_rpcs,
                                  rq_exp_list) {
                      /* Found duplicate one */
                      if (tmp->rq_xid == req->rq_xid)
                              goto found;
              }
              ...

      A case was observed at a customer site where this path was entered and the exp_hp_rpcs lists for multiple client exports had grown to tens of thousands of entries as mass cancellations (from the LRU timing out?) commenced. These list iterations under exp_rpc_lock took a really long time and clogged the ldlm_cn threads, so no processing was really possible. As a result, request delays from network acceptance to the req_in handler could reach tens of seconds for all clients (same thread pool), with the expected disastrous results.

       

      I imagine we need to address this from two directions:

      1. On the server side, we really need to avoid spending a long time on the duplicate-request search (see the sketch below).
      2. On the client side, we already drop unused and actively cancelled ldlm locks; we also need to somehow drop the cancel resends on replay.
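
      A minimal sketch of direction 1, under the assumption that LDLM_CANCEL requests are idempotent and cheap to re-handle, so the duplicate search can simply be skipped for them. This only illustrates the idea and is not the actual patch; lustre_msg_get_opc() and LDLM_CANCEL are existing identifiers, the placement is hypothetical:

      /*
       * Sketch only: bail out of the duplicate-xid search early for lock
       * cancel requests, since the cancel portal has no RIF limit and its
       * exp_hp_rpcs list can grow to tens of thousands of entries.
       */
      static struct ptlrpc_request *
      ptlrpc_server_check_resend_in_progress(struct ptlrpc_request *req)
      {
              if (!(lustre_msg_get_flags(req->rq_reqmsg) & MSG_RESENT))
                      return NULL;

              /* Assumption: re-executing a cancel is harmless, so skip the
               * O(n) list walks under exp_rpc_lock entirely. */
              if (lustre_msg_get_opc(req->rq_reqmsg) == LDLM_CANCEL)
                      return NULL;

              /* ... existing exp_reg_rpcs/exp_hp_rpcs walks follow ... */
              return NULL;
      }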

      Attachments

        Issue Links

          Activity


            "Oleg Drokin <green@whamcloud.com>" uploaded a new patch: https://review.whamcloud.com/c/fs/lustre-release/+/56843
            Subject: LU-18072 ptlrpc: do not search for duplicate cancel requests
            Project: fs/lustre-release
            Branch: master
            Current Patch Set: 1
            Commit: 5d88c8d89c940366aebfa96b0a47c5677a24e071

            gerrit Gerrit Updater added a comment - "Oleg Drokin <green@whamcloud.com>" uploaded a new patch: https://review.whamcloud.com/c/fs/lustre-release/+/56843 Subject: LU-18072 ptlrpc: do not search for duplicate cancel requests Project: fs/lustre-release Branch: master Current Patch Set: 1 Commit: 5d88c8d89c940366aebfa96b0a47c5677a24e071

            adilger Andreas Dilger added a comment -

            Does Andriy's patch https://review.whamcloud.com/55946 "LU-18111 ptlrpc: don't drop expired cancel request" help this issue at all?

            vitaly_fertman Vitaly Fertman added a comment -

            "I think the whole duplicate xid searching is a waste for the case of RPCs like cancel"

            I tend to agree with Oleg here, but would look at it more widely: why, in this case, search for duplicates for RPCs without a tag at all? If we take that approach, it is not a matter of portals; moreover, the list then becomes really small, as it was originally supposed to be, so the benefit of dropping the incoming RPC queue (Alex's proposal) is very limited and probably not worth bothering with.

            As I understand it, the main idea of this code was to search for duplicates so that a modification RPC is not executed twice, which could lead to corruption; that protection would still be in place. The second benefit was (theoretical, as I understand it, not confirmed by tests) to avoid processing heavyweight non-modification RPCs twice. However, in that second case, if we have already sent a reply we cannot detect that the request is a duplicate and will handle it twice anyway (relying only on other dropping mechanisms, if applicable); if we have not sent a reply yet, all the file data are (almost always) still in memory, and the second handling should be relatively fast.

            Whereas the original idea (detecting such duplicates for all RPCs) looked reasonable, it resulted in very long lists and completely blocked MDSes, and a relatively fast second handling looks much better here than searching for duplicates among (if not all RPCs, then still) many other non-modification RPCs.
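
            A minimal sketch of that suggestion, under the assumption (my reading, not anything from a patch) that requests carrying no multi-slot tag do not need duplicate detection. lustre_msg_get_tag() is an existing accessor; placing this check inside the resend lookup is hypothetical:

            /* Sketch only: an untagged request (tag == 0) occupies no reply
             * slot that double execution could corrupt, so skip the
             * duplicate-xid walk and simply let it be handled again. */
            if (lustre_msg_get_tag(req->rq_reqmsg) == 0)
                    return NULL;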
            green Oleg Drokin added a comment -

            I branched out LU-18077 for the work on the client-side handling of cancel resends, so we can concentrate on the server side here.

            green Oleg Drokin added a comment -

            I think the whole duplicate xid searching is a waste for the case of RPCs like cancel, and an rhashtable is an unnecessary complication too.

            Basically we have two kinds of portals:

            • "normal" - these are guarded by max RIF (well, I guess evil clients might exist), the number of requests in the queue there is small, and some requests might take a long time to process, so the duplicate search on resend/reconnect makes sense (LU-793)
            • "special" - like the cancel portal: unlimited requests, fast processing, no need for a duplicate xid search, so we should skip that code path altogether

            Moreover, just like Alex proposed, every time we receive a reconnect from the client we should throw away the entire incoming queue at once, for two reasons (plus one wrinkle):

            1. we won't service those requests anyway, so why let them hog memory and slow down processing?
            2. it speeds up the duplicate-xid search for whatever remains (well, of limited value I guess)
            3. the wrinkle is that we still need to keep requests that are being processed when the reconnect came.

            This does not protect us against evil clients, so we still need some sort of counter of requests received and waiting for processing, and when that gets too big we'd return some error, I guess?
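
            A minimal sketch of that counter idea, using a hypothetical per-export field (exp_queued_rpcs and EXP_MAX_QUEUED are made-up names, not existing Lustre identifiers); the choice of error to return is discussed in the next comment:

            /* Sketch only: bump a hypothetical per-export counter when a
             * request is queued for this export, drop it when the handler
             * finishes, and refuse new work past an arbitrary threshold. */
            if (atomic_inc_return(&exp->exp_queued_rpcs) > EXP_MAX_QUEUED) {
                    atomic_dec(&exp->exp_queued_rpcs);
                    rc = -EINPROGRESS;      /* ask the client to resend later */
            }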


            adilger Andreas Dilger added a comment -

            Returning -EINPROGRESS to the client if the server is OOM or the incoming queue is too large will at least allow the client to resend this RPC without returning an error to the application. However, we should also print some kind of notice to the MDS console in this case (e.g. "server request queue overloaded by N RPCs, denying new requests" or similar), otherwise we may have an unresponsive server and clients "hanging" without any visible signs in the logs.

            Using an rhashtable for the client RPC XID (per export) would be reasonable. While struct rhashtable is not tiny (176 bytes), it is still relatively small compared to struct obd_export at 1024 bytes:

            struct rhashtable {
                    /* size: 176, cachelines: 3, members: 9 */
                    /* sum members: 169, holes: 1, sum holes: 7 */
                    /* last cacheline: 48 bytes */
            };
            
            struct obd_export {
                    /* size: 1024, cachelines: 16, members: 59 */
                    /* sum members: 1000, holes: 5, sum holes: 20 */
                    /* sum bitfield members: 16 bits, bit holes: 1, sum bit holes: 16 bits */
                    /* paddings: 2, sum paddings: 7 */
                    /* forced alignments: 1 */
            

            Ideally, client lock cancellation should always be fast, but if there is something slowing it down (for whatever reason) we don't want it to become exponentially worse as the queue size grows...

            It looks like we might also use this RPC XID hash functionality in a few other places, like mdt_steal_ack_locks() which is walking the exp_outstanding_replies list looking for XIDs.

            The issue that Oleg pointed out about dropping duplicate RPC XIDs, but then skipping the RPC because it is old should be fixed separately. I think in that case it makes sense to update the RPC time to ensure it does not get dropped. However, we also want to ensure that the logs/stats show the original time that the RPC arrived and how long it was waiting, and not the most recent time that the RPC was resent...
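
            A minimal sketch of the per-export XID rhashtable idea above, assuming a hypothetical rq_xid_hash member added to struct ptlrpc_request and a hypothetical exp_xid_hash table in struct obd_export (rq_xid is the existing 64-bit XID); the rhashtable calls themselves are the standard kernel API, and the helper names here are made up for illustration:

            /* Sketch only: hypothetical fields rq_xid_hash and exp_xid_hash. */
            static const struct rhashtable_params xid_hash_params = {
                    .key_len     = sizeof(__u64),
                    .key_offset  = offsetof(struct ptlrpc_request, rq_xid),
                    .head_offset = offsetof(struct ptlrpc_request, rq_xid_hash),
            };

            /* O(1) duplicate lookup instead of walking exp_reg_rpcs/exp_hp_rpcs */
            static struct ptlrpc_request *
            ptlrpc_export_find_xid(struct obd_export *exp, __u64 xid)
            {
                    return rhashtable_lookup_fast(&exp->exp_xid_hash, &xid,
                                                  xid_hash_params);
            }

            /* Insert alongside the current list_add() to exp_reg_rpcs/exp_hp_rpcs */
            static int ptlrpc_export_hash_req(struct ptlrpc_request *req)
            {
                    return rhashtable_insert_fast(&req->rq_export->exp_xid_hash,
                                                  &req->rq_xid_hash,
                                                  xid_hash_params);
            }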


            People

              Assignee: green Oleg Drokin
              Reporter: green Oleg Drokin
              Votes: 0
              Watchers: 23

              Dates

                Created:
                Updated: