<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 02:50:48 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary, append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92">
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-12233] Deadlock on LNet shutdown</title>
                <link>https://jira.whamcloud.com/browse/LU-12233</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;I reproduced this issue with master and Cray&apos;s 2.12 branch. For completeness I&apos;ll note that my master was slightly modified so that I can configure LNet on Cray&apos;s hardware, and I also applied the fix from &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-11756&quot; title=&quot;kib_conn leak&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-11756&quot;&gt;&lt;del&gt;LU-11756&lt;/del&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Here&apos;s the relevant git log. Commit &apos;8cb7ccf54e&apos; is on master.&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;86ef522cac LU-11756 o2iblnd: kib_conn leak
888adb9340 MRP-342 lnet: add config file support
d661c584c6 Revert &quot;LU-11838 lnet: change lnet_ipaddr_enumerate() to use for_each_netdev()&quot;
4c681cf4ee Revert &quot;LU-11838 o2iblnd: get IP address more directly.&quot;
f4fe014620 Revert &quot;LU-6399 lnet: socket cleanup&quot;
8cb7ccf54e LU-11986 lnet: properly cleanup lnet debugfs files
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;LNetNIFini() takes the ln_api_mutex and then shuts down LNet. It doesn&apos;t release the mutex until all teardown functions have returned.&lt;/p&gt;

&lt;p&gt;The message receive path also takes the ln_api_mutex in lnet_nid2peerni_locked().&lt;br/&gt;
kgnilnd_check_fma_rx-&amp;gt;lnet_parse-&amp;gt;lnet_nid2peerni_locked&lt;br/&gt;
kiblnd_handle_rx-&amp;gt;lnet_parse-&amp;gt;lnet_nid2peerni_locked&lt;br/&gt;
ksocknal_process_receive-&amp;gt;lnet_parse-&amp;gt;lnet_nid2peerni_locked&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;/*
 * Get a peer_ni for the given nid, create it if necessary. Takes a
 * hold on the peer_ni.
 */
struct lnet_peer_ni *
lnet_nid2peerni_locked(lnet_nid_t nid, lnet_nid_t pref, int cpt)
{
        struct lnet_peer_ni *lpni = NULL;
        int rc;

        if (the_lnet.ln_state != LNET_STATE_RUNNING)
                return ERR_PTR(-ESHUTDOWN);

        /*
         * find if a peer_ni already exists.
         * If so then just return that.
         */
        lpni = lnet_find_peer_ni_locked(nid);
        if (lpni)
                return lpni;

        /*
         * Slow path:
         * use the lnet_api_mutex to serialize the creation of the peer_ni
         * and the creation/deletion of the local ni/net. When a local ni is
         * created, if there exists a set of peer_nis on that network,
         * they need to be traversed and updated. When a local NI is
         * deleted, which could result in a network being deleted, then
         * all peer nis on that network need to be removed as well.
         *
         * Creation through traffic should also be serialized with
         * creation through DLC.
         */
        lnet_net_unlock(cpt);
        mutex_lock(&amp;amp;the_lnet.ln_api_mutex);
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;int
LNetNIFini()
{
        mutex_lock(&amp;amp;the_lnet.ln_api_mutex);

        LASSERT(the_lnet.ln_refcount &amp;gt; 0);

        if (the_lnet.ln_refcount != 1) {
                the_lnet.ln_refcount--;
        } else {
                LASSERT(!the_lnet.ln_niinit_self);

                lnet_fault_fini();

                lnet_router_debugfs_init();
                lnet_peer_discovery_stop();
                lnet_push_target_fini();
                lnet_monitor_thr_stop();
                lnet_ping_target_fini();

                /* Teardown fns that use my own API functions BEFORE here */
                the_lnet.ln_refcount = 0;

                lnet_acceptor_stop();
                lnet_destroy_routes();
                lnet_shutdown_lndnets(); &amp;lt;&amp;lt;&amp;lt;  the_lnet.ln_state = LNET_STATE_STOPPING; happens here
                lnet_unprepare();
        }

        mutex_unlock(&amp;amp;the_lnet.ln_api_mutex);
        return 0;
}
EXPORT_SYMBOL(LNetNIFini);&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;We can see there is a decent-sized window during which the deadlock can be hit.&lt;/p&gt;

&lt;p&gt;It is easy for me to reproduce.&lt;/p&gt;
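
&lt;p&gt;To make the window concrete, here is an illustrative interleaving of the two threads (a sketch based on the code above, not an actual trace):&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;shutdown thread                       rx thread (e.g. kiblnd scheduler)
---------------                       ---------------------------------
LNetNIFini()
  mutex_lock(ln_api_mutex)
                                      lnet_parse()
                                        lnet_nid2peerni_locked()
                                          /* ln_state still RUNNING, so
                                           * no early -ESHUTDOWN return */
                                          mutex_lock(ln_api_mutex) &amp;lt;-- blocks
  lnet_shutdown_lndnets()
    /* ln_state = STOPPING only set here */
    kiblnd_shutdown()
      waits for ibn_npeers == 0 &amp;lt;-- never happens: peer teardown
                                    needs the rx thread, which is
                                    blocked on ln_api_mutex
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;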

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[root@snx11922n000 ~]# pdsh -g lustre modprobe lnet; lctl net up ; lctl list_nids ; lctl ping 10.12.0.50@o2ib40 ; lctl net down ; lustre_rmmod&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Sometimes the command needs to be repeated a couple of times.&lt;/p&gt;

&lt;p&gt;I believe this regression was introduced by:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;commit fa8b4e6357c53ea457ef6624b0b19bece0b0fdde
Author: Amir Shehata &amp;lt;amir.shehata@intel.com&amp;gt;
Date:   Thu May 26 15:42:39 2016 -0700

    LU-7734 lnet: peer/peer_ni handling adjustments
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</description>
                <environment></environment>
        <key id="55514">LU-12233</key>
            <summary>Deadlock on LNet shutdown</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="3" iconUrl="https://jira.whamcloud.com/images/icons/priorities/major.svg">Major</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="ssmirnov">Serguei Smirnov</assignee>
                                    <reporter username="hornc">Chris Horn</reporter>
                        <labels>
                    </labels>
                <created>Fri, 26 Apr 2019 20:58:51 +0000</created>
                <updated>Thu, 29 Oct 2020 12:28:24 +0000</updated>
                            <resolved>Fri, 25 Sep 2020 04:01:45 +0000</resolved>
                                    <version>Lustre 2.13.0</version>
                    <version>Lustre 2.12.1</version>
                    <version>Lustre 2.12.3</version>
                                    <fixVersion>Lustre 2.14.0</fixVersion>
                    <fixVersion>Lustre 2.12.6</fixVersion>
                                        <due></due>
                            <votes>0</votes>
                                    <watches>5</watches>
                                                                            <comments>
                            <comment id="246419" author="hornc" created="Fri, 26 Apr 2019 21:03:11 +0000"  >&lt;p&gt;I reported this issue internally at Cray, but I don&apos;t have the cycles to work on it right now. If someone else is able to work out a patch that&apos;d be great, and I&apos;ll be sure to update this ticket if Cray begins work on this issue.&lt;/p&gt;</comment>
                            <comment id="246461" author="hornc" created="Mon, 29 Apr 2019 17:00:06 +0000"  >&lt;p&gt;The specific instance I found was with iblnd. Maybe it&apos;s specific to that.&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;LNetNIFini()-&amp;gt;lnet_shutdown_lndnets()-&amp;gt;lnet_shutdown_lndnet()-&amp;gt;lnet_shutdown_lndni()-&amp;gt;lnet_clear_zombies_nis_locked()-&amp;gt;kiblnd_shutdown() &amp;lt;&amp;lt;&amp;lt;&amp;lt; We&apos;re holding ln_api_mutex
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;In kiblnd_shutdown() we&apos;re waiting for all peers to disconnect:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;                while (atomic_read(&amp;amp;net-&amp;gt;ibn_npeers) != 0) {
                        i++;
                        /* power of 2? */
                        CDEBUG(((i &amp;amp; (-i)) == i) ? D_WARNING : D_NET,
                               &quot;%s: waiting for %d peers to disconnect\n&quot;,
                               libcfs_nid2str(ni-&amp;gt;ni_nid),
                               atomic_read(&amp;amp;net-&amp;gt;ibn_npeers));
                        set_current_state(TASK_UNINTERRUPTIBLE);
                        schedule_timeout(cfs_time_seconds(1));
                }
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;But the ib_cm thread is stuck trying to acquire the ln_api_mutex:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;
Apr 25 19:48:15 snx11922n002 kernel: INFO: task kworker/0:0:16461 blocked for more than 120 seconds.
Apr 25 19:48:15 snx11922n002 kernel: &quot;echo 0 &amp;gt; /proc/sys/kernel/hung_task_timeout_secs&quot; disables this message.
Apr 25 19:48:15 snx11922n002 kernel: kworker/0:0     D ffff880172949fa0     0 16461      2 0x00000000
Apr 25 19:48:15 snx11922n002 kernel: Workqueue: ib_cm cm_work_handler [ib_cm]
Apr 25 19:48:15 snx11922n002 kernel: Call Trace:
Apr 25 19:48:15 snx11922n002 kernel:  [&amp;lt;ffffffff816b5329&amp;gt;] schedule_preempt_disabled+0x29/0x70
Apr 25 19:48:15 snx11922n002 kernel:  [&amp;lt;ffffffff816b30d7&amp;gt;] __mutex_lock_slowpath+0xc7/0x1d0
Apr 25 19:48:15 snx11922n002 kernel:  [&amp;lt;ffffffff816b24bf&amp;gt;] mutex_lock+0x1f/0x2f
Apr 25 19:48:15 snx11922n002 kernel:  [&amp;lt;ffffffffc0b2ed91&amp;gt;] lnet_nid2peerni_locked+0x71/0x150 [lnet]
Apr 25 19:48:15 snx11922n002 kernel:  [&amp;lt;ffffffffc0b1ee34&amp;gt;] lnet_parse+0x794/0x1260 [lnet]
Apr 25 19:48:15 snx11922n002 kernel:  [&amp;lt;ffffffffc0a42163&amp;gt;] kiblnd_handle_rx+0x213/0x6b0 [ko2iblnd]
Apr 25 19:48:15 snx11922n002 kernel:  [&amp;lt;ffffffffc0a4267f&amp;gt;] kiblnd_handle_early_rxs+0x7f/0x120 [ko2iblnd]
Apr 25 19:48:15 snx11922n002 kernel:  [&amp;lt;ffffffffc0a43576&amp;gt;] kiblnd_connreq_done+0x286/0x6c0 [ko2iblnd]
Apr 25 19:48:15 snx11922n002 kernel:  [&amp;lt;ffffffffc0a46327&amp;gt;] kiblnd_cm_callback+0x11e7/0x2390 [ko2iblnd]
Apr 25 19:48:15 snx11922n002 kernel:  [&amp;lt;ffffffff810aa05a&amp;gt;] ? __queue_delayed_work+0xaa/0x1a0
Apr 25 19:48:15 snx11922n002 kernel:  [&amp;lt;ffffffff810aa3c1&amp;gt;] ? try_to_grab_pending+0xb1/0x160
Apr 25 19:48:15 snx11922n002 kernel:  [&amp;lt;ffffffffc05013e0&amp;gt;] cma_ib_handler+0xc0/0x290 [rdma_cm]
Apr 25 19:48:15 snx11922n002 kernel:  [&amp;lt;ffffffffc04ed5ab&amp;gt;] cm_process_work+0x2b/0x130 [ib_cm]
Apr 25 19:48:15 snx11922n002 kernel:  [&amp;lt;ffffffffc04ef943&amp;gt;] cm_work_handler+0xaa3/0x12db [ib_cm]
Apr 25 19:48:15 snx11922n002 kernel:  [&amp;lt;ffffffff810abe2f&amp;gt;] process_one_work+0x17f/0x440
Apr 25 19:48:15 snx11922n002 kernel:  [&amp;lt;ffffffff810acaf6&amp;gt;] worker_thread+0x126/0x3c0
Apr 25 19:48:15 snx11922n002 kernel:  [&amp;lt;ffffffff810ac9d0&amp;gt;] ? manage_workers.isra.24+0x2a0/0x2a0
Apr 25 19:48:15 snx11922n002 kernel:  [&amp;lt;ffffffff810b4031&amp;gt;] kthread+0xd1/0xe0
Apr 25 19:48:15 snx11922n002 kernel:  [&amp;lt;ffffffff810b3f60&amp;gt;] ? insert_kthread_work+0x40/0x40
Apr 25 19:48:15 snx11922n002 kernel:  [&amp;lt;ffffffff816c155d&amp;gt;] ret_from_fork+0x5d/0xb0
Apr 25 19:48:15 snx11922n002 kernel:  [&amp;lt;ffffffff810b3f60&amp;gt;] ? insert_kthread_work+0x40/0x40
Apr 25 19:49:10 snx11922n002 kernel: LNet: 15887:0:(o2iblnd.c:3032:kiblnd_shutdown()) 10.12.0.50@o2ib40: waiting for 1 peers to disconnect
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Another time I saw kiblnd_sd threads stuck:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;Apr 26 00:00:06 snx11922n002 kernel: INFO: task kiblnd_sd_02_00:32191 blocked for more than 120 seconds.
Apr 26 00:00:06 snx11922n002 kernel: &quot;echo 0 &amp;gt; /proc/sys/kernel/hung_task_timeout_secs&quot; disables this message.
Apr 26 00:00:06 snx11922n002 kernel: kiblnd_sd_02_00 D ffff881fce3adee0     0 32191      2 0x00000000
Apr 26 00:00:06 snx11922n002 kernel: Call Trace:
Apr 26 00:00:06 snx11922n002 kernel:  [&amp;lt;ffffffff810cc938&amp;gt;] ? __enqueue_entity+0x78/0x80
Apr 26 00:00:06 snx11922n002 kernel:  [&amp;lt;ffffffff816b5329&amp;gt;] schedule_preempt_disabled+0x29/0x70
Apr 26 00:00:06 snx11922n002 kernel:  [&amp;lt;ffffffff816b30d7&amp;gt;] __mutex_lock_slowpath+0xc7/0x1d0
Apr 26 00:00:06 snx11922n002 kernel:  [&amp;lt;ffffffff816b24bf&amp;gt;] mutex_lock+0x1f/0x2f
Apr 26 00:00:06 snx11922n002 kernel:  [&amp;lt;ffffffffc0b799d1&amp;gt;] lnet_nid2peerni_locked+0x71/0x150 [lnet]
Apr 26 00:00:06 snx11922n002 kernel:  [&amp;lt;ffffffffc0b66ba1&amp;gt;] lnet_parse+0x791/0x11e0 [lnet]
Apr 26 00:00:06 snx11922n002 kernel:  [&amp;lt;ffffffffc090b153&amp;gt;] kiblnd_handle_rx+0x213/0x6b0 [ko2iblnd]
Apr 26 00:00:06 snx11922n002 kernel:  [&amp;lt;ffffffffc091228c&amp;gt;] kiblnd_scheduler+0xf3c/0x1180 [ko2iblnd]
Apr 26 00:00:06 snx11922n002 kernel:  [&amp;lt;ffffffff810cb0c5&amp;gt;] ? sched_clock_cpu+0x85/0xc0
Apr 26 00:00:06 snx11922n002 kernel:  [&amp;lt;ffffffff8102954d&amp;gt;] ? __switch_to+0xcd/0x500
Apr 26 00:00:06 snx11922n002 kernel:  [&amp;lt;ffffffff810c7c80&amp;gt;] ? wake_up_state+0x20/0x20
Apr 26 00:00:06 snx11922n002 kernel:  [&amp;lt;ffffffffc0911350&amp;gt;] ? kiblnd_cq_event+0x90/0x90 [ko2iblnd]
Apr 26 00:00:06 snx11922n002 kernel:  [&amp;lt;ffffffff810b4031&amp;gt;] kthread+0xd1/0xe0
Apr 26 00:00:06 snx11922n002 kernel:  [&amp;lt;ffffffff810b3f60&amp;gt;] ? insert_kthread_work+0x40/0x40
Apr 26 00:00:06 snx11922n002 kernel:  [&amp;lt;ffffffff816c155d&amp;gt;] ret_from_fork+0x5d/0xb0
Apr 26 00:00:06 snx11922n002 kernel:  [&amp;lt;ffffffff810b3f60&amp;gt;] ? insert_kthread_work+0x40/0x40
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
                            <comment id="252202" author="hornc" created="Mon, 29 Jul 2019 20:50:43 +0000"  >&lt;p&gt;Started hitting this internally a bit more often. Bumping priority.&lt;/p&gt;</comment>
                            <comment id="254637" author="hornc" created="Thu, 12 Sep 2019 20:40:21 +0000"  >&lt;p&gt;Maybe this would work?&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;diff --git a/lnet/lnet/api-ni.c b/lnet/lnet/api-ni.c
index d50c9395e2..f4d224d39f 100644
--- a/lnet/lnet/api-ni.c
+++ b/lnet/lnet/api-ni.c
@@ -2155,11 +2155,10 @@ lnet_shutdown_lndnets(void)
 	/* NB called holding the global mutex */

 	/* All quiet on the API front */
-	LASSERT(the_lnet.ln_state == LNET_STATE_RUNNING);
+	LASSERT(the_lnet.ln_state == LNET_STATE_STOPPING);
 	LASSERT(the_lnet.ln_refcount == 0);

 	lnet_net_lock(LNET_LOCK_EX);
-	the_lnet.ln_state = LNET_STATE_STOPPING;

 	while (!list_empty(&amp;amp;the_lnet.ln_nets)) {
 		/*
@@ -2746,6 +2745,10 @@ EXPORT_SYMBOL(LNetNIInit);
 int
 LNetNIFini()
 {
+	lnet_net_lock(LNET_LOCK_EX);
+	the_lnet.ln_state = LNET_STATE_STOPPING;
+	lnet_net_unlock(LNET_LOCK_EX);
+
 	mutex_lock(&amp;amp;the_lnet.ln_api_mutex);

 	LASSERT(the_lnet.ln_refcount &amp;gt; 0);
&lt;/pre&gt;
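&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The idea here is to publish the STOPPING state before taking the ln_api_mutex, so the receive path bails out with -ESHUTDOWN instead of queuing up behind the shutdown thread. A rough userspace analogue of that ordering (a pthread sketch with hypothetical names, not the actual LNet code):&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;/* Sketch only. The state flip is published (under net_lock) before the
 * shutdown thread takes api_mutex, so any rx thread that passed the state
 * check earlier is already queued on api_mutex and will finish before
 * shutdown acquires it; any rx thread arriving later bails out early. */
enum state { RUNNING, STOPPING };
enum state ln_state = RUNNING;            /* protected by net_lock */

void shutdown_path(void)
{
        pthread_mutex_lock(&amp;amp;net_lock);
        ln_state = STOPPING;              /* publish state first */
        pthread_mutex_unlock(&amp;amp;net_lock);

        pthread_mutex_lock(&amp;amp;api_mutex);
        /* ... wait for peers to disconnect ... */
        pthread_mutex_unlock(&amp;amp;api_mutex);
}

int rx_path(void)
{
        pthread_mutex_lock(&amp;amp;net_lock);
        if (ln_state != RUNNING) {
                pthread_mutex_unlock(&amp;amp;net_lock);
                return -ESHUTDOWN;        /* bail before api_mutex */
        }
        pthread_mutex_unlock(&amp;amp;net_lock);

        pthread_mutex_lock(&amp;amp;api_mutex);
        /* ... create peer_ni ... */
        pthread_mutex_unlock(&amp;amp;api_mutex);
        return 0;
}
&lt;/pre&gt;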
&lt;/div&gt;&lt;/div&gt;</comment>
                            <comment id="276865" author="hornc" created="Thu, 6 Aug 2020 21:11:15 +0000"  >&lt;p&gt;There&apos;s another potential deadlock with shutdown and the discovery thread.&lt;/p&gt;

&lt;p&gt;lnet_peer_data_present() tries to take the ln_api_mutex and then checks ln_state, but the shutdown thread could already be holding the ln_api_mutex.&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;static int lnet_peer_data_present(struct lnet_peer *lp)
...
        mutex_lock(&amp;amp;the_lnet.ln_api_mutex);
        if (the_lnet.ln_state != LNET_STATE_RUNNING) {
                rc = -ESHUTDOWN;
                goto out;
        }
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
                            <comment id="279733" author="gerrit" created="Wed, 16 Sep 2020 17:13:10 +0000"  >&lt;p&gt;Serguei Smirnov (ssmirnov@whamcloud.com) uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/39933&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/39933&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-12233&quot; title=&quot;Deadlock on LNet shutdown&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-12233&quot;&gt;&lt;del&gt;LU-12233&lt;/del&gt;&lt;/a&gt; lnet: deadlock on LNet shutdown&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: a5d809cbf8c42b25c306b3ec38122025e799e406&lt;/p&gt;</comment>
                            <comment id="280583" author="gerrit" created="Fri, 25 Sep 2020 03:13:06 +0000"  >&lt;p&gt;Oleg Drokin (green@whamcloud.com) merged in patch &lt;a href=&quot;https://review.whamcloud.com/39933/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/39933/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-12233&quot; title=&quot;Deadlock on LNet shutdown&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-12233&quot;&gt;&lt;del&gt;LU-12233&lt;/del&gt;&lt;/a&gt; lnet: deadlock on LNet shutdown&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: e0c445648a38fb72cc426ac0c16c33f5183cda08&lt;/p&gt;</comment>
                            <comment id="280598" author="pjones" created="Fri, 25 Sep 2020 04:01:45 +0000"  >&lt;p&gt;Landed for 2.14&lt;/p&gt;</comment>
                            <comment id="281705" author="gerrit" created="Wed, 7 Oct 2020 22:32:32 +0000"  >&lt;p&gt;Serguei Smirnov (ssmirnov@whamcloud.com) uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/40171&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/40171&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-12233&quot; title=&quot;Deadlock on LNet shutdown&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-12233&quot;&gt;&lt;del&gt;LU-12233&lt;/del&gt;&lt;/a&gt; lnet: deadlock on LNet shutdown&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: b2_12&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: c60b67b2bce3067c9a1d1a1c96ec3a86e931ca79&lt;/p&gt;</comment>
                            <comment id="283580" author="gerrit" created="Thu, 29 Oct 2020 07:49:46 +0000"  >&lt;p&gt;Oleg Drokin (green@whamcloud.com) merged in patch &lt;a href=&quot;https://review.whamcloud.com/40171/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/40171/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-12233&quot; title=&quot;Deadlock on LNet shutdown&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-12233&quot;&gt;&lt;del&gt;LU-12233&lt;/del&gt;&lt;/a&gt; lnet: deadlock on LNet shutdown&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: b2_12&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: 6d92d5d0e710e60a8ede7da19e6a577696697a91&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                            <outwardlinks description="is related to ">
                                        <issuelink>
            <issuekey id="60041">LU-13807</issuekey>
        </issuelink>
                            </outwardlinks>
                                                        </issuelinktype>
                    </issuelinks>
                <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|i00fif:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>