[LU-9964] >1 group lock on same file (group lock lifecycle/cbpending problem)


    • Type: Bug
    • Status: Resolved
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: Lustre 2.13.0


      Sometimes when using group locks from many threads writing to one file, one of several assertions is encountered. Note that all of this assumes the lock requests are cooperating and using the same GID.

      From osc_cache_writeback_range:
      LASSERT(hp == 0 && discard == 0);
      EASSERT(!ext->oe_hp, ext);

      And osc_extent_merge:
      LASSERT(cur->oe_dlmlock == victim->oe_dlmlock);

      Investigation of dumps shows that in all of these cases, multiple group locks are granted on the same resource at the same time, and one of these locks has cbpending set. This is broadly similar to LU-6368 and LU-6679.

      I believe there are actually two problems here, one in the request phase and one in the destruction phase.

      It is possible for two threads (on the same client) to request a group lock from the server at the same time. If this happens, both group locks will be granted, because they are compatible with one another. This results in two group locks being granted at the same time on the same file. When one of them is eventually released, this can cause the crashes noted above, because the two locks cover the same dirty pages.

      Additionally, almost exactly the problem described in LU-6368 is still present. When a group lock gets cbpending set, future group lock requests will fail to match it, which can result in the server granting a group lock that conflicts with an existing one. While cbpending is no longer set in order to destroy a group lock, it is still eventually set while destroying a group lock (ldlm_cli_cancel_local does this).

      After this point, new requests on the client will not match this lock any more. That can result in new group lock requests to the server, again creating the overlapping lock problem. This also results in the same crashes.

      The solution comes in two parts:
      1. Wait (in osc_lock_enqueue_wait) for compatible group lock requests to be granted before attempting the ldlm phase of the lock request.
      2. Change the matching logic in ldlm_lock_match and lock_matches so that if we find a group lock being destroyed, we wait until it is fully destroyed before making a new lock request.


        1. LU-9964.c (1 kB)
        2. test-9964.sh (0.7 kB)



            • Assignee: Patrick Farrell (Inactive)
            • Votes: 0
            • Watchers: 3


              • Created: