<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 01:45:58 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
<language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-4801] spin lock contention in lock_res_and_lock</title>
                <link>https://jira.whamcloud.com/browse/LU-4801</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;Our MDS experienced severe lock contention in &lt;tt&gt;lock_res_and_lock()&lt;/tt&gt;.  This had a large impact on client responsiveness because service threads were starved for CPU time.  We have not yet identified the client workload that caused this problem. All active tasks had stack traces like this, but would eventually get scheduled out.&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt; ...
__spin_lock
lock_res_and_lock
ldlm_handle_enqueue0
mdt_handle_common
mds_regular_handle
ptlrpc_server_handle_request
...
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;This raises the question of why the ldlm resource lock needs to be a spinlock. Couldn&apos;t we avoid this issue by converting it to a mutex?  This question was raised in &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-3504&quot; title=&quot;MDS: All cores spinning on ldlm lock in lock_res_and_lock&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-3504&quot;&gt;&lt;del&gt;LU-3504&lt;/del&gt;&lt;/a&gt;.&lt;/p&gt;</description>
                <environment>lustre-2.4.0-26chaos</environment>
        <key id="23830">LU-4801</key>
            <summary>spin lock contention in lock_res_and_lock</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="3" iconUrl="https://jira.whamcloud.com/images/icons/priorities/major.svg">Major</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="green">Oleg Drokin</assignee>
                                    <reporter username="nedbass">Ned Bass</reporter>
                        <labels>
                            <label>llnl</label>
                    </labels>
                <created>Fri, 21 Mar 2014 19:51:15 +0000</created>
                <updated>Thu, 22 Apr 2021 16:17:24 +0000</updated>
                            <resolved>Thu, 22 Apr 2021 16:17:24 +0000</resolved>
                                    <version>Lustre 2.4.1</version>
                                                        <due></due>
                            <votes>0</votes>
                                    <watches>10</watches>
                                                                            <comments>
                            <comment id="80053" author="pjones" created="Sat, 22 Mar 2014 13:16:28 +0000"  >&lt;p&gt;Oleg&lt;/p&gt;

&lt;p&gt;Could you please comment?&lt;/p&gt;

&lt;p&gt;Thanks&lt;/p&gt;

&lt;p&gt;Peter&lt;/p&gt;</comment>
                            <comment id="80055" author="green" created="Sun, 23 Mar 2014 02:03:12 +0000"  >&lt;p&gt;Hm.&lt;br/&gt;
I guess this could happen when there&apos;s a long waiting list in the lock resource; an attempt to add to it would then take a long time iterating while holding the lock.&lt;br/&gt;
Typically this would happen when you have jobs that try to e.g. create the same file from many nodes (hello fortran open, I guess). Other scenarios are also possible, I imagine.&lt;br/&gt;
I imagine not all threads could have stack traces like that; at least one must have something else within a protected region.&lt;/p&gt;

&lt;p&gt;As for the necessity of res lock to be a spinlock, it was supposed to only cover very fast code paths and spinlock was cheaper at the time. This might have changed and we need to have some internal discussions to see if it still makes sense or if it should really be converted to e.g. a mutex.&lt;/p&gt;
</comment>
                            <comment id="80433" author="bzzz" created="Fri, 28 Mar 2014 05:10:48 +0000"  >&lt;p&gt;If a spinlock consumes a lot of CPU, this means high contention. In such a condition a mutex would cause schedule() very often, which isn&apos;t good either. I&apos;d think the root cause is that contention, which needs to be understood and addressed.&lt;/p&gt;</comment>
                            <comment id="81010" author="morrone" created="Thu, 3 Apr 2014 23:58:02 +0000"  >&lt;p&gt;Calling schedule() when there is a high level of contention &lt;em&gt;is&lt;/em&gt; good.  It allows the hundreds of threads that are &lt;em&gt;not&lt;/em&gt; holding the lock to go to sleep, which allows the one guy who actually can make progress (the one holding the lock) to get the CPU and move forward with real work.&lt;/p&gt;

&lt;p&gt;A spin lock is good when contention is expected to be, relatively speaking, low and for short periods of time.&lt;/p&gt;</comment>
                            <comment id="81030" author="green" created="Fri, 4 Apr 2014 04:21:06 +0000"  >&lt;p&gt;Well, it really depends on how small the locked region is. Since it was believed that the lock region in this case is quite short and small, it made no sense to use anything but spinlocks, as schedule overhead would have been worse.&lt;br/&gt;
Of course we know that with long lock queues it takes longer to iterate them.&lt;/p&gt;

&lt;p&gt;There is another possible problem, btw. My understanding is that mutex can opportunistically spin in some cases when it encounters a busy lock. So it might be trading same for same.&lt;/p&gt;</comment>
                            <comment id="81751" author="morrone" created="Wed, 16 Apr 2014 17:39:06 +0000"  >&lt;p&gt;We need to develop a plan to make progress on this issue.&lt;/p&gt;</comment>
                            <comment id="82009" author="liang" created="Sat, 19 Apr 2014 02:56:34 +0000"  >&lt;p&gt;I think there are two things that could be improved even w/o changing the lock type here, although I don&apos;t know how much they can really help:&lt;/p&gt;
&lt;ul&gt;
	&lt;li&gt;it is unnecessary to always create an ldlm_lock in ldlm_handle_enqueue0() for the MDT stack; the MDT intent policy almost always creates its own lock. Although this redundant lock will never be granted, destroying it will increase the chance of spinning on res_lock (see ldlm_lock_destroy)&lt;/li&gt;
	&lt;li&gt;ldlm_handle_enqueue0() will always call ldlm_reprocess_all() if ELDLM_LOCK_REPLACED is returned from ns_policy. I suspect that&apos;s unnecessary as well; I actually think the ldlm_reprocess_all() in ldlm_handle_enqueue0() is just legacy code (for ELDLM_LOCK_CHANGED, which is not used by anyone now?) and can be removed from the common code path.&lt;/li&gt;
&lt;/ul&gt;


&lt;p&gt;I had a patch to improve this (&lt;a href=&quot;https://jira.whamcloud.com/browse/LU-30&quot; title=&quot;improve &amp;amp; cleanup ldlm_handle_enqueue&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-30&quot;&gt;LU-30&lt;/a&gt;), for unknown reason this patch has been deleted from gerrit, but I still have it in my local repository which is based on 2.1, I will try to port it to 2.4. &lt;/p&gt;</comment>
                            <comment id="82079" author="green" created="Mon, 21 Apr 2014 18:12:15 +0000"  >&lt;p&gt;I have two patches here.&lt;br/&gt;
One is purely experimental, to replace the server side ldlm resource lock with a mutex: &lt;a href=&quot;http://review.whamcloud.com/10038&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/10038&lt;/a&gt;&lt;br/&gt;
Another is to address item #2 from Liang&apos;s post; I think it indeed does not make sense to always reprocess at the end of a server side enqueue. Patch is at &lt;a href=&quot;http://review.whamcloud.com/10039&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/10039&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At this point the patches underwent about an hour of my horrific testing and seem to be doing fine.&lt;/p&gt;

&lt;p&gt;In addition to that - we still need to better understand what workload causes this issue for you, if you can share that. If you can catch the problem in progress and crashdump the node, so that it&apos;s possible to examine resources on the MDS server and see how long the lock lists are, what sort of locks are there and so on - that would also be great.&lt;/p&gt;</comment>
                            <comment id="82122" author="liang" created="Tue, 22 Apr 2014 06:04:14 +0000"  >&lt;p&gt;I also have ported my patch to 2.4 (&lt;a href=&quot;http://review.whamcloud.com/#/c/10031/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/#/c/10031/&lt;/a&gt;), it includes a lot more changes and needs more efforts to review, so I think Oleg&apos;s patch is the better choice for trying. &lt;/p&gt;</comment>
                            <comment id="82551" author="nedbass" created="Fri, 25 Apr 2014 22:03:12 +0000"  >&lt;p&gt;I can sometimes reproduce this problem in our test environment.  My test case is based on the observed production workload: two thousand tasks reading the same file in 256k chunks and taking a read flock on each chunk.  I believe the locking may be done in the ROMIO I/O library.  The production MDS is under heavy memory pressure, and when this workload runs we see many MDS service threads sleeping in cfs_alloc().&lt;/p&gt;

&lt;p&gt;I used a modified version of IOR to simulate the workload:  &lt;a href=&quot;https://github.com/nedbass/ior/tree/flock&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://github.com/nedbass/ior/tree/flock&lt;/a&gt;.  An option is added to do reads and/or writes under an flock.  Then I create memory pressure on the MDS by allocating and writing lots of memory in user space.  This doesn&apos;t work 100% of the time. I think the key is when service threads start using &lt;tt&gt;kmalloc()&lt;/tt&gt; to satisfy ptlrpc send buffer allocations (as opposed to reusing buffers from the cache).  Then they are forced to schedule and the load average quickly climbs to 300+.  This is when I start seeing all CPUs spinning in &lt;tt&gt;lock_res_and_lock()&lt;/tt&gt;.&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;# Precreate a 1.6TB file.
srun -p pbatch -N 2 ./src/ior -b 800g -e -k -l -N 2 -o /p/lcraterz/bass6/iorfile -w -t 1m

# Run a 6400 task IOR to read the file using the flock option.
srun -p pbatch -N 100 -n 6400 ./src/ior -b 256m -e -k -l -N 6400 -o /p/lcraterz/bass6/iorfile2 -r -t 256k -L r

# Eat up memory on MDS. Usage: memhog [pages to alloc] [seconds to sleep]
./memhog $(( 18 * 1024 * 1024 * 1024 / 4096 )) 86400
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
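The per-chunk read-flock pattern Ned describes (256k reads, each taken under a shared file-range lock, roughly what the ROMIO library does) can be sketched in userspace Python. This is an illustrative stand-in only: the chunk size and function name are invented, not taken from the attached tools.

```python
import fcntl
import os

CHUNK = 256 * 1024  # 256k transfer size, as in the IOR run above

def read_chunks_with_flock(path):
    """Read a file in CHUNK-sized pieces, taking a shared (read) range
    lock on each chunk before reading it and releasing it afterwards."""
    sizes = []
    with open(path, "rb") as f:
        size = os.fstat(f.fileno()).st_size
        for off in range(0, size, CHUNK):
            n = min(CHUNK, size - off)
            # Shared lock on just this byte range; blocks on conflict.
            fcntl.lockf(f, fcntl.LOCK_SH, n, off, os.SEEK_SET)
            f.seek(off)
            sizes.append(len(f.read(n)))
            # Release the range lock once the chunk is read.
            fcntl.lockf(f, fcntl.LOCK_UN, n, off, os.SEEK_SET)
    return sizes
```

Run across thousands of tasks against one shared Lustre file, every one of these lock/unlock pairs becomes an ldlm flock enqueue/cancel on the MDS, which is where the res_lock contention shows up.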
                            <comment id="82829" author="green" created="Wed, 30 Apr 2014 04:58:50 +0000"  >&lt;p&gt;Read flock! wow, that&apos;s really-really heavy stuff that&apos;s all computed under the res lock, including deadlock detection too.&lt;br/&gt;
Naturally, sleeping is not done under a spinlock, so the sleeping threads should not be adding to the particular spinlock contention woes.&lt;/p&gt;

&lt;p&gt;I imagine you should see quite a bit of spinning with flocks on the same resource even with no memory pressure at all.&lt;/p&gt;

&lt;p&gt;work in &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-1157&quot; title=&quot;improve flock deadlock detection: hash of waiting flocks instead of list&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-1157&quot;&gt;&lt;del&gt;LU-1157&lt;/del&gt;&lt;/a&gt; has reduced the flock overhead to some degree, but I imagine not too much esp if it&apos;s all on the same file.&lt;/p&gt;

&lt;p&gt;Significantly redoing the flock code fast is kind of hard, so I wonder if the replacement of the spinlock with a mutex will really help you here.&lt;br/&gt;
Additionally, is the flock really needed by the app logic, or is it there in the bad assumption that it&apos;s operating on top of NFS? Could you mount with -o localflock on your clients to fake out consistent cluster-wide flocks? Benefits would include faster IO, since flock calls would be totally local then - not loading the MDS and avoiding extra RPC roundtrips to ask for and then cancel the locks.&lt;/p&gt;</comment>
                            <comment id="89703" author="green" created="Tue, 22 Jul 2014 02:41:55 +0000"  >&lt;p&gt;Ok, here&apos;s a &quot;flock to only use mutexes&quot; patch: &lt;a href=&quot;http://review.whamcloud.com/11171&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/11171&lt;/a&gt; - This one passes my testing.&lt;br/&gt;
I guess it should help in your case too.&lt;br/&gt;
I think Cliff will give it a shot sometime soon with ior in flock mode to see if there&apos;s any effect.&lt;/p&gt;

&lt;p&gt;Additionally, I have refreshed &lt;a href=&quot;http://review.whamcloud.com/10039&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/10039&lt;/a&gt; - this is a patch useful in itself to reduce the workload on the server during lock granting,&lt;br/&gt;
and &lt;a href=&quot;http://review.whamcloud.com/10038&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/10038&lt;/a&gt; - this one would be a replacement for the flock-only patch and makes the entire server-side operation rely on a mutex, not on the spinlock in the resource lock. I guess it makes sense for all kinds of locks after all, so we should give this another try just to make sure. This one also passes my testing, which is a good sign I guess.&lt;/p&gt;

&lt;p&gt;Additionally, rereading Ned&apos;s comment on the reproducer - this entire &quot;kmalloc kicks out the problem&quot; thing is really strange. Even when threads block in allocation, they could not be doing it with a spinlock held - as this would cause all sorts of deadlocks. If you ever have time to play with this again, could you take a crashdump and check whether any threads are diving into an allocation with a spinlock held?&lt;br/&gt;
Also, if low memory on the MDS is absolutely a requirement, could it be that since we landed the lu_cache limiting code, it&apos;s no longer a problem for you?&lt;/p&gt;</comment>
                            <comment id="89705" author="green" created="Tue, 22 Jul 2014 02:47:38 +0000"  >&lt;p&gt;I guess an idea for a reproducer, if it&apos;s just a long flock reprocessing issue, is to have a multithreaded app run from a whole bunch of nodes where every thread tries to lock the same range (in blocking mode) in the same file, to accumulate a multi-thousand list of blocked locks.&lt;/p&gt;

&lt;p&gt;Make whoever has the lock first wait a minute, or whatever amount of time is needed to ensure that all threads on all nodes have sent their flock requests and are all blocked.&lt;br/&gt;
Then release the original lock, and after that every thread that receives a lock should release it immediately as well.&lt;/p&gt;</comment>
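The recipe above can be sketched in userspace Python - a hedged illustration of the idea only; the actual reproducer is the attached parallel_flock.c. The process count, hold time, and helper names here are invented, and since fcntl range locks are per-process, the sketch uses multiprocessing rather than threads.

```python
import fcntl
import multiprocessing
import os
import time

LOCK_RANGE = 1  # every worker contends on the same single-byte range

def worker(path, hold):
    """Take a blocking exclusive lock on the shared range, optionally
    hold it for `hold` seconds, then release - producing the cascade
    of releases described in the comment above."""
    with open(path, "rb+") as f:
        fcntl.lockf(f, fcntl.LOCK_EX, LOCK_RANGE, 0, os.SEEK_SET)
        if hold:
            time.sleep(hold)
        fcntl.lockf(f, fcntl.LOCK_UN, LOCK_RANGE, 0, os.SEEK_SET)

def run(path, nproc=8, hold=2.0):
    """Start one long holder plus nproc-1 blocked waiters; return True
    if every process finishes cleanly."""
    first = multiprocessing.Process(target=worker, args=(path, hold))
    first.start()
    time.sleep(0.2)  # give the first process a head start on the lock
    rest = [multiprocessing.Process(target=worker, args=(path, 0))
            for _ in range(nproc - 1)]
    for p in rest:
        p.start()
    for p in [first] + rest:
        p.join()
    return all(p.exitcode == 0 for p in [first] + rest)
```

On a real cluster the same pattern would be run from many client nodes against one shared Lustre file, so the MDS accumulates a multi-thousand-entry list of blocked flock requests on a single resource.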
                            <comment id="89706" author="green" created="Tue, 22 Jul 2014 03:48:19 +0000"  >&lt;p&gt;The attached parallel_flock.c is my idea for a good reproducer of the flock issue.&lt;/p&gt;

&lt;p&gt;It takes three arguments:&lt;br/&gt;
-f filename - filename on a lustre fs to work on&lt;br/&gt;
-n number of iterations&lt;br/&gt;
-s how long the first lock is to be kept&lt;/p&gt;

&lt;p&gt;To reproduce - run on a machine with a lot of clients, using all available cores too.&lt;br/&gt;
The default sleep time is just 6 seconds, so possibly you want more than that.&lt;br/&gt;
Several iterations (default - 10).&lt;/p&gt;

&lt;p&gt;While running, I imagine MDS should grind to a total halt so that even userspace barely responds if at all.&lt;/p&gt;

&lt;p&gt;This is untested code, I just made sure it compiles.&lt;/p&gt;</comment>
                            <comment id="89815" author="green" created="Wed, 23 Jul 2014 04:09:00 +0000"  >&lt;p&gt;Attaching parallel_flock_v2.c - this is the same as before, only this version actually works as expected.&lt;/p&gt;</comment>
                            <comment id="89816" author="nedbass" created="Wed, 23 Jul 2014 04:46:02 +0000"  >&lt;p&gt;Oleg, to clarify my comment regarding threads blocking in kmalloc(), I don&apos;t mean that they are doing so while holding a spinlock.  My theory is that they spent so much time under the spin lock that they become eligible to reschedule.  When they later call kmalloc() outside the spinlock it internally calls might_sleep() and reschedules the thread.  Because there are other threads actively contending in lock_res_and_lock(), those blocked threads get starved for CPU time.&lt;/p&gt;</comment>
                            <comment id="89859" author="green" created="Wed, 23 Jul 2014 17:07:50 +0000"  >&lt;p&gt;Ned, I see.&lt;br/&gt;
Well, I wonder if you could try my patch on your testbed to see if it hits similar overload issues right away, by any chance?&lt;br/&gt;
Cliff tried it with the patch and the MDS was basically under no load during the test run, but he was then preempted by other important testing, so there was no test without the patch to actually ensure that the reproducer reproduces anything.&lt;br/&gt;
So if you have time for that, it might be an interesting exercise.&lt;/p&gt;</comment>
                            <comment id="89925" author="nedbass" created="Thu, 24 Jul 2014 05:25:58 +0000"  >&lt;p&gt;Oleg, it may not be right away, but we&apos;ll get this scheduled for testing.&lt;/p&gt;</comment>
                            <comment id="251249" author="simmonsja" created="Fri, 12 Jul 2019 15:05:41 +0000"  >&lt;p&gt;Neil pushed some patches to address this. I will push&lt;/p&gt;</comment>
                            <comment id="251250" author="gerrit" created="Fri, 12 Jul 2019 15:08:10 +0000"  >&lt;p&gt;James Simmons (jsimmons@infradead.org) uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/35483&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/35483&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-4801&quot; title=&quot;spin lock contention in lock_res_and_lock&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-4801&quot;&gt;&lt;del&gt;LU-4801&lt;/del&gt;&lt;/a&gt; ldlm: discard l_lock from struct ldlm_lock.&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: c7e16e62306abdb62f1039957573400c9114ea3f&lt;/p&gt;</comment>
                            <comment id="251251" author="gerrit" created="Fri, 12 Jul 2019 15:11:19 +0000"  >&lt;p&gt;James Simmons (jsimmons@infradead.org) uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/35484&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/35484&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-4801&quot; title=&quot;spin lock contention in lock_res_and_lock&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-4801&quot;&gt;&lt;del&gt;LU-4801&lt;/del&gt;&lt;/a&gt; ldlm: don&apos;t access l_resource when not locked.&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 5ea026bb3eb9020233569659189850519bc99a17&lt;/p&gt;</comment>
                            <comment id="267537" author="gerrit" created="Tue, 14 Apr 2020 08:11:28 +0000"  >&lt;p&gt;Oleg Drokin (green@whamcloud.com) merged in patch &lt;a href=&quot;https://review.whamcloud.com/35483/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/35483/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-4801&quot; title=&quot;spin lock contention in lock_res_and_lock&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-4801&quot;&gt;&lt;del&gt;LU-4801&lt;/del&gt;&lt;/a&gt; ldlm: discard l_lock from struct ldlm_lock.&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: 0584eb73dbb5b4c710a8c7eb1553ed5dad0c18d8&lt;/p&gt;</comment>
                            <comment id="267754" author="gerrit" created="Wed, 15 Apr 2020 21:40:17 +0000"  >&lt;p&gt;Oleg Drokin (green@whamcloud.com) uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/38238&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/38238&lt;/a&gt;&lt;br/&gt;
Subject: Revert &quot;&lt;a href=&quot;https://jira.whamcloud.com/browse/LU-4801&quot; title=&quot;spin lock contention in lock_res_and_lock&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-4801&quot;&gt;&lt;del&gt;LU-4801&lt;/del&gt;&lt;/a&gt; ldlm: discard l_lock from struct ldlm_lock.&quot;&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 32095b5717954fa7260c9d6e369a208395bc39da&lt;/p&gt;</comment>
                            <comment id="267755" author="gerrit" created="Wed, 15 Apr 2020 21:40:33 +0000"  >&lt;p&gt;Oleg Drokin (green@whamcloud.com) merged in patch &lt;a href=&quot;https://review.whamcloud.com/38238/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/38238/&lt;/a&gt;&lt;br/&gt;
Subject: Revert &quot;&lt;a href=&quot;https://jira.whamcloud.com/browse/LU-4801&quot; title=&quot;spin lock contention in lock_res_and_lock&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-4801&quot;&gt;&lt;del&gt;LU-4801&lt;/del&gt;&lt;/a&gt; ldlm: discard l_lock from struct ldlm_lock.&quot;&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: 9051844cec34ab4f3427adc28bc1948706f5ffc2&lt;/p&gt;</comment>
                            <comment id="278707" author="gerrit" created="Thu, 3 Sep 2020 05:02:37 +0000"  >&lt;p&gt;Neil Brown (neilb@suse.de) uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/39811&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/39811&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-4801&quot; title=&quot;spin lock contention in lock_res_and_lock&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-4801&quot;&gt;&lt;del&gt;LU-4801&lt;/del&gt;&lt;/a&gt; ldlm: discard l_lock from struct ldlm_lock.&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: c634dbeb9e576574a903f83dfb9cf39cf8d5468b&lt;/p&gt;</comment>
                            <comment id="299349" author="gerrit" created="Wed, 21 Apr 2021 03:14:52 +0000"  >&lt;p&gt;Oleg Drokin (green@whamcloud.com) merged in patch &lt;a href=&quot;https://review.whamcloud.com/39811/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/39811/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-4801&quot; title=&quot;spin lock contention in lock_res_and_lock&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-4801&quot;&gt;&lt;del&gt;LU-4801&lt;/del&gt;&lt;/a&gt; ldlm: discard l_lock from struct ldlm_lock.&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: bb6edb7b8eeec65f46f8eaeb135e5dde13bf7ad8&lt;/p&gt;</comment>
                            <comment id="299506" author="simmonsja" created="Thu, 22 Apr 2021 16:17:24 +0000"  >&lt;p&gt;Patch has landed. We will look to deploy this on our production systems soon. Any further work needed we can reopen this ticket.&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                            <outwardlinks description="is related to ">
                                        <issuelink>
            <issuekey id="19552">LU-3504</issuekey>
        </issuelink>
                            </outwardlinks>
                                                                <inwardlinks description="is related to">
                                        <issuelink>
            <issuekey id="56398">LU-12542</issuekey>
        </issuelink>
                            </inwardlinks>
                                    </issuelinktype>
                    </issuelinks>
                <attachments>
                            <attachment id="15400" name="parallel_flock.c" size="3375" author="green" created="Tue, 22 Jul 2014 03:48:19 +0000"/>
                            <attachment id="15406" name="parallel_flock_v2.c" size="3379" author="green" created="Wed, 23 Jul 2014 04:09:00 +0000"/>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                    <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10490" key="com.atlassian.jira.plugin.system.customfieldtypes:datepicker">
                        <customfieldname>End date</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>Mon, 10 Nov 2014 19:51:15 +0000</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzwi4n:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>13209</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10493" key="com.atlassian.jira.plugin.system.customfieldtypes:datepicker">
                        <customfieldname>Start date</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>Fri, 21 Mar 2014 19:51:15 +0000</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                </customfields>
    </item>
</channel>
</rss>