[LU-9964] > 1 group lock on same file (group lock lifecycle/cbpending problem) Created: 08/Sep/17  Updated: 03/Mar/23  Resolved: 25/Nov/19

Status: Resolved
Project: Lustre
Component/s: None
Affects Version/s: None
Fix Version/s: Lustre 2.13.0

Type: Bug Priority: Minor
Reporter: Patrick Farrell (Inactive) Assignee: Patrick Farrell (Inactive)
Resolution: Fixed Votes: 0
Labels: None

Attachments: File LU-9964.c     File test-9964.sh    
Issue Links:
Related
is related to LU-16046 Shared-file I/O performance is poor u... Resolved
Severity: 3
Rank (Obsolete): 9223372036854775807

 Description   

Sometimes when using group locks from many threads writing to one file, one of several assertions is encountered. Note that all of this assumes the lock requests are cooperating and using the same GID.

From osc_cache_writeback_range:
LASSERT(hp == 0 && discard == 0);
EASSERT(!ext->oe_hp, ext);

And osc_extent_merge:
LASSERT(cur->oe_dlmlock == victim->oe_dlmlock);

Investigation of dumps shows that in all of these cases, multiple group locks are granted on the same resource at the same time, and one of these locks has cbpending set. This is broadly similar to LU-6368 and LU-6679.

I believe there are actually two problems here, one in the request phase and one in the destruction phase.

It is possible for two threads (on the same client) to request a group lock from the server at the same time. If this happens, both group locks will be granted, because they are compatible with one another, leaving two group locks granted at the same time on the same file. When one of them is eventually released, this can cause the crashes noted above, because both locks cover the same dirty pages.

Additionally, almost exactly the problem described in LU-6368 is still present. When a group lock gets cbpending set, future group lock requests will fail to match it, which can result in the server granting a group lock that conflicts with an existing one. While cbpending is no longer set in order to destroy a group lock, it is still eventually set while destroying one (ldlm_cli_cancel_local does it).

After this point, new requests on the client will no longer match this lock. That can result in new group lock requests being sent to the server, again creating the overlapping-lock problem and the same crashes.

The solution comes in two parts:
1. Wait (in osc_lock_enqueue_wait) for compatible group lock requests to be granted before attempting the ldlm phase of the lock request
2. Change the matching logic in ldlm_lock_match and lock_matches so that if we find a group lock being destroyed, we wait until it is fully destroyed before making a new lock request.



 Comments   
Comment by Patrick Farrell (Inactive) [ 08/Sep/17 ]

The attached files together comprise a test for the "two group locks granted on same resource" case. They will NOT crash (because they do not write to the file); they simply exit and dump debug logs when the case is identified.

Compile the .c file (in a directory by itself) to a binary named a.out
Run test-9964.sh

On a 4 CPU VM without my patch, I hit the problem in < 10 minutes. On a real system with 32 CPUs, I hit the problem in < 1 minute.

Comment by Patrick Farrell (Inactive) [ 08/Sep/17 ]

Note that these problems exist for PW locks as well, but they are solved on the server, because the server will not grant a new PW lock until all conflicting locks have been cancelled. Group locks differ in that they are compatible with one another; the requirement is simply that we must not grant two group locks on the same resource to the same client.

We could achieve this by checking the exports before granting new group locks, but since locks are not sorted by export, this would require walking the list of granted and waiting locks on the server. If many clients request group locks, this would be unacceptable.

Comment by Gerrit Updater [ 08/Sep/17 ]

Patrick Farrell (paf@cray.com) uploaded a new patch: https://review.whamcloud.com/28916
Subject: LU-9964 ldlm: Prevent multiple group locks
Project: fs/lustre-release
Branch: master
Current Patch Set: 1
Commit: 20bdf0a8071f2b4ed038bae76ac89797bb78c137

Comment by Gerrit Updater [ 14/Aug/19 ]

Alexandr Boyko (c17825@cray.com) uploaded a new patch: https://review.whamcloud.com/35791
Subject: LU-9964 llite: prevent mulitple group locks
Project: fs/lustre-release
Branch: master
Current Patch Set: 1
Commit: 44bedeca8dce4db31b3f84480936e3b5a2a4ecc4

Comment by Gerrit Updater [ 07/Sep/19 ]

Oleg Drokin (green@whamcloud.com) merged in patch https://review.whamcloud.com/35791/
Subject: LU-9964 llite: prevent mulitple group locks
Project: fs/lustre-release
Branch: master
Current Patch Set:
Commit: aba68250a67a10104c534bd726f67b31a7f35692

Comment by Joseph Gmitter (Inactive) [ 25/Nov/19 ]

Patch landed to 2.13.0

Comment by Gerrit Updater [ 27/Jan/21 ]

Olaf Faaland-LLNL (faaland1@llnl.gov) uploaded a new patch: https://review.whamcloud.com/41332
Subject: LU-9964 llite: prevent mulitple group locks
Project: fs/lustre-release
Branch: b2_12
Current Patch Set: 1
Commit: 5c84a2b80dbf7b993dbc33e32c539623c926e100

Comment by Gerrit Updater [ 03/Mar/23 ]

"Etienne AUJAMES <eaujames@ddn.com>" uploaded a new patch: https://review.whamcloud.com/c/fs/lustre-release/+/50198
Subject: LU-9964 llite: prevent mulitple group locks
Project: fs/lustre-release
Branch: b2_12
Current Patch Set: 1
Commit: 97945b29aabc8104f945e7769420789c2d40a70f

Generated at Sat Feb 10 02:30:51 UTC 2024 using Jira 9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c.