<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 02:17:08 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary, append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
<language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-8392] sanity test_27z: soft lockup - CPU#0 stuck for 22s! [ptlrpcd_rcv:6145]</title>
                <link>https://jira.whamcloud.com/browse/LU-8392</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;This issue was created by maloo for John Hammond &amp;lt;john.hammond@intel.com&amp;gt;&lt;/p&gt;

&lt;p&gt;This issue relates to the following test suite run: &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/2fcd5716-48f6-11e6-bf87-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/2fcd5716-48f6-11e6-bf87-5254006e85c2&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The sub-test test_27z failed with the following error:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;test failed to respond and timed out
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;11:07:16:[ 1632.062003] BUG: soft lockup - CPU#0 stuck for 22s! [ptlrpcd_rcv:6145]
11:07:16:[ 1632.062003] Modules linked in: lustre(OE) obdecho(OE) mgc(OE) lov(OE) osc(OE) mdc(OE) lmv(OE) fid(OE) fld(OE) ptlrpc_gss(OE) ptlrpc(OE) obdclass(OE) ksocklnd(OE) lnet(OE) sha512_generic crypto_null libcfs(OE) rpcsec_gss_krb5 nfsv4 dns_resolver nfs fscache xprtrdma ib_isert iscsi_target_mod ib_iser libiscsi scsi_transport_iscsi ib_srpt target_core_mod crc_t10dif crct10dif_generic crct10dif_common ib_srp scsi_transport_srp scsi_tgt ib_ipoib rdma_ucm ib_ucm ib_uverbs ib_umad rdma_cm ib_cm iw_cm ib_sa ib_mad ib_core ib_addr ppdev pcspkr virtio_balloon parport_pc parport i2c_piix4 nfsd nfs_acl lockd auth_rpcgss grace sunrpc ip_tables ext4 mbcache jbd2 ata_generic pata_acpi virtio_blk cirrus syscopyarea sysfillrect sysimgblt drm_kms_helper 8139too ttm ata_piix serio_raw virtio_pci virtio_ring virtio libata 8139cp mii drm i2c_core floppy
11:07:16:[ 1632.062003] CPU: 0 PID: 6145 Comm: ptlrpcd_rcv Tainted: G           OE  ------------   3.10.0-327.22.2.el7.x86_64 #1
11:07:16:[ 1632.062003] Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2007
11:07:16:[ 1632.062003] task: ffff880078184500 ti: ffff880078e58000 task.ti: ffff880078e58000
11:07:16:[ 1632.062003] RIP: 0010:[&amp;lt;ffffffff8163dab2&amp;gt;]  [&amp;lt;ffffffff8163dab2&amp;gt;] _raw_spin_lock+0x32/0x50
11:07:16:[ 1632.062003] RSP: 0018:ffff880078e5b9f0  EFLAGS: 00000202
11:07:16:[ 1632.062003] RAX: 0000000000002fb8 RBX: 0000000000000000 RCX: 000000000000701c
11:07:16:[ 1632.062003] RDX: 000000000000701e RSI: 000000000000701e RDI: ffff880079cb4d00
11:07:16:[ 1632.062003] RBP: ffff880078e5b9f0 R08: 0000000000000000 R09: 0000000000000208
11:07:16:[ 1632.062003] R10: 0000000000000009 R11: ffff880078e5b85e R12: ffff880078e5bfd8
11:07:16:[ 1632.062003] R13: ffffffff812fd8e3 R14: ffff880078e5b9f0 R15: ffffffffa05cf498
11:07:16:[ 1632.062003] FS:  0000000000000000(0000) GS:ffff88007fc00000(0000) knlGS:0000000000000000
11:07:16:[ 1632.062003] CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
11:07:16:[ 1632.062003] CR2: 00007f5378b2fed0 CR3: 000000007856d000 CR4: 00000000000006f0
11:07:16:[ 1632.062003] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
11:07:16:[ 1632.062003] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
11:07:16:[ 1632.062003] Stack:
11:07:16:[ 1632.062003]  ffff880078e5ba18 ffffffffa05db498 0000020000000100 0000000000000000
11:07:16:[ 1632.062003]  ffffffffffffffff ffff880078e5ba90 ffffffffa065a296 000200000a090430
11:07:16:[ 1632.062003]  000200000a09042e ffff880044ae7c00 ffffffffa06916e0 0000000000020000
11:07:16:[ 1632.062003] Call Trace:
11:07:16:[ 1632.062003]  [&amp;lt;ffffffffa05db498&amp;gt;] cfs_percpt_lock+0x58/0x110 [libcfs]
11:07:16:[ 1632.062003]  [&amp;lt;ffffffffa065a296&amp;gt;] lnet_send+0xb6/0xc90 [lnet]
11:07:16:[ 1632.062003]  [&amp;lt;ffffffff811c178e&amp;gt;] ? kmem_cache_alloc_trace+0x1ce/0x1f0
11:07:16:[ 1632.062003]  [&amp;lt;ffffffffa065b0b5&amp;gt;] LNetPut+0x245/0x7a0 [lnet]
11:07:16:[ 1632.062003]  [&amp;lt;ffffffffa0919aa3&amp;gt;] ptl_send_buf+0x183/0x500 [ptlrpc]
11:07:16:[ 1632.062003]  [&amp;lt;ffffffffa091b5b1&amp;gt;] ptl_send_rpc+0x611/0xda0 [ptlrpc]
11:07:16:[ 1632.062003]  [&amp;lt;ffffffffa0910ff0&amp;gt;] ptlrpc_send_new_req+0x460/0xa60 [ptlrpc]
11:07:16:[ 1632.062003]  [&amp;lt;ffffffffa0914358&amp;gt;] ptlrpc_check_set.part.23+0x9a8/0x1dd0 [ptlrpc]
11:07:16:[ 1632.062003]  [&amp;lt;ffffffffa09157db&amp;gt;] ptlrpc_check_set+0x5b/0xe0 [ptlrpc]
11:07:16:[ 1632.062003]  [&amp;lt;ffffffffa09403bb&amp;gt;] ptlrpcd_check+0x4eb/0x5e0 [ptlrpc]
11:07:16:[ 1632.062003]  [&amp;lt;ffffffffa094076b&amp;gt;] ptlrpcd+0x2bb/0x560 [ptlrpc]
11:07:16:[ 1632.062003]  [&amp;lt;ffffffff810b88d0&amp;gt;] ? wake_up_state+0x20/0x20
11:07:16:[ 1632.062003]  [&amp;lt;ffffffffa09404b0&amp;gt;] ? ptlrpcd_check+0x5e0/0x5e0 [ptlrpc]
11:07:16:[ 1632.062003]  [&amp;lt;ffffffff810a5aef&amp;gt;] kthread+0xcf/0xe0
11:07:16:[ 1632.062003]  [&amp;lt;ffffffff810a5a20&amp;gt;] ? kthread_create_on_node+0x140/0x140
11:07:16:[ 1632.062003]  [&amp;lt;ffffffff816467d8&amp;gt;] ret_from_fork+0x58/0x90
11:07:16:[ 1632.062003]  [&amp;lt;ffffffff810a5a20&amp;gt;] ? kthread_create_on_node+0x140/0x140
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Info required for matching: sanity 27z&lt;/p&gt;</description>
                <environment></environment>
        <key id="38138">LU-8392</key>
            <summary>sanity test_27z: soft lockup - CPU#0 stuck for 22s! [ptlrpcd_rcv:6145]</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="4" iconUrl="https://jira.whamcloud.com/images/icons/priorities/minor.svg">Minor</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="3">Duplicate</resolution>
                                        <assignee username="sbuisson">Sebastien Buisson</assignee>
                                    <reporter username="maloo">Maloo</reporter>
                        <labels>
                    </labels>
                <created>Wed, 13 Jul 2016 16:16:02 +0000</created>
                <updated>Fri, 12 Aug 2016 12:37:17 +0000</updated>
                            <resolved>Fri, 12 Aug 2016 12:37:17 +0000</resolved>
                                    <version>Lustre 2.9.0</version>
                                                        <due></due>
                            <votes>0</votes>
                                    <watches>10</watches>
                                                                            <comments>
                            <comment id="158629" author="jhammond" created="Wed, 13 Jul 2016 16:16:21 +0000"  >&lt;p&gt;Another failure: &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/0cc7cb54-4872-11e6-8968-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/0cc7cb54-4872-11e6-8968-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="158643" author="jhammond" created="Wed, 13 Jul 2016 17:09:15 +0000"  >&lt;p&gt;Seems to be introduced by &lt;a href=&quot;http://review.whamcloud.com/#/c/18782/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/#/c/18782/&lt;/a&gt; which calls &lt;tt&gt;lnet_ipif_query()&lt;/tt&gt; while holding the &lt;tt&gt;lnet_net_lock&lt;/tt&gt;:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;PID: 9297   TASK: ffff88007a2f3980  CPU: 0   COMMAND: &quot;ll_cfg_requeue&quot;
 #0 [ffff8800796c73c8] __schedule at ffffffff8163b16d
 #1 [ffff8800796c7430] __cond_resched at ffffffff810b5ed6
 #2 [ffff8800796c7448] _cond_resched at ffffffff8163baaa
 #3 [ffff8800796c7458] kmem_cache_alloc at ffffffff811c1425
 #4 [ffff8800796c7498] sock_alloc_inode at ffffffff8151025d
 #5 [ffff8800796c74b8] alloc_inode at ffffffff811f986d
 #6 [ffff8800796c74d8] new_inode_pseudo at ffffffff811fb891
 #7 [ffff8800796c74f8] sock_alloc at ffffffff8150fdba
 #8 [ffff8800796c7510] __sock_create at ffffffff81510b05
 #9 [ffff8800796c7560] sock_create at ffffffff81510d10
#10 [ffff8800796c7570] lnet_sock_ioctl at ffffffffa0654070 [lnet]
#11 [ffff8800796c75b8] lnet_ipif_query at ffffffffa0654ca8 [lnet]
#12 [ffff8800796c7620] LNetDist at ffffffffa06568b4 [lnet]
#13 [ffff8800796c7688] ptlrpc_uuid_to_peer at ffffffffa09262d4 [ptlrpc]
#14 [ffff8800796c76f0] ptlrpc_uuid_to_connection at ffffffffa090d26f [ptlrpc]
#15 [ffff8800796c7730] import_set_conn at ffffffffa08f1600 [ptlrpc]
#16 [ffff8800796c7788] import_set_conn_priority at ffffffffa08f3555 [ptlrpc]
#17 [ffff8800796c7798] ptlrpc_recover_import at ffffffffa091816b [ptlrpc]
#18 [ffff8800796c7848] lprocfs_import_seq_write at ffffffffa0944600 [ptlrpc]
#19 [ffff8800796c78a8] osc_import_seq_write at ffffffffa0adb539 [osc]
#20 [ffff8800796c78b8] class_process_proc_param at ffffffffa0731ea4 [obdclass]
#21 [ffff8800796c7ac8] osc_process_config_base at ffffffffa0adab82 [osc]
#22 [ffff8800796c7ad8] osc_cl_process_config at ffffffffa0adcebc [osc]
#23 [ffff8800796c7af8] class_process_config at ffffffffa0737d9e [obdclass]
#24 [ffff8800796c7bb0] mgc_apply_recover_logs at ffffffffa0b91bbc [mgc]
#25 [ffff8800796c7cd8] mgc_process_recover_nodemap_log at ffffffffa0b93d48 [mgc]
#26 [ffff8800796c7d68] mgc_process_log at ffffffffa0b96614 [mgc]
#27 [ffff8800796c7e28] mgc_requeue_thread at ffffffffa0b98538 [mgc]
#28 [ffff8800796c7ec8] kthread at ffffffff810a5aef
#29 [ffff8800796c7f50] ret_from_fork at ffffffff816467d8
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
</comment>
                            <comment id="158926" author="yong.fan" created="Fri, 15 Jul 2016 06:36:15 +0000"  >&lt;p&gt;I hit similar trouble in replay-single test_102c on master:&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/335fffb2-4a47-11e6-a80f-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/335fffb2-4a47-11e6-a80f-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="159180" author="yujian" created="Tue, 19 Jul 2016 06:38:08 +0000"  >&lt;p&gt;The same failure occurred on replay-single test 38 on master branch:&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/64b16e48-49c5-11e6-bf87-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/64b16e48-49c5-11e6-bf87-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="159247" author="pjones" created="Tue, 19 Jul 2016 18:05:41 +0000"  >&lt;p&gt;Sebastien&lt;/p&gt;

&lt;p&gt;Could you please look into this one?&lt;/p&gt;

&lt;p&gt;Thanks&lt;/p&gt;

&lt;p&gt;Peter&lt;/p&gt;</comment>
                            <comment id="159297" author="gerrit" created="Wed, 20 Jul 2016 11:53:28 +0000"  >&lt;p&gt;Sebastien Buisson (sbuisson@ddn.com) uploaded a new patch: &lt;a href=&quot;http://review.whamcloud.com/21437&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/21437&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-8392&quot; title=&quot;sanity test_27z: soft lockup - CPU#0 stuck for 22s! [ptlrpcd_rcv:6145]&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-8392&quot;&gt;&lt;del&gt;LU-8392&lt;/del&gt;&lt;/a&gt; lnet: no lnet_net_lock for address visibility check&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 29538f69e00d8677789e2c18387e4a8e150ca241&lt;/p&gt;</comment>
                            <comment id="159563" author="yujian" created="Fri, 22 Jul 2016 05:30:53 +0000"  >&lt;p&gt;More failure instance: &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/32f28fce-4f90-11e6-9f8e-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/32f28fce-4f90-11e6-9f8e-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="159566" author="sbuisson" created="Fri, 22 Jul 2016 06:22:52 +0000"  >&lt;p&gt;Hum, this soft lockup bug was hit during testing of patch &lt;a href=&quot;http://review.whamcloud.com/21437&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/21437&lt;/a&gt;:&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/3a5366ea-4fd0-11e6-bf87-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/3a5366ea-4fd0-11e6-bf87-5254006e85c2&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So it seems that calling lnet_ipif_query() while holding the lnet_net_lock (patch landed at &lt;a href=&quot;http://review.whamcloud.com/18782&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/18782&lt;/a&gt;), while probably not recommended, is not the primary reason for this soft lockup.&lt;/p&gt;</comment>
                            <comment id="159874" author="yong.fan" created="Tue, 26 Jul 2016 14:08:52 +0000"  >&lt;p&gt;Another failure instance on master:&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/d79fa8e8-532f-11e6-bf87-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/d79fa8e8-532f-11e6-bf87-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="160230" author="yujian" created="Thu, 28 Jul 2016 17:53:11 +0000"  >&lt;p&gt;sanity-quota test 18 also hit the same issue:&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/f2f11e80-54a4-11e6-905c-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/f2f11e80-54a4-11e6-905c-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="160308" author="yong.fan" created="Fri, 29 Jul 2016 14:49:07 +0000"  >&lt;p&gt;Another failure on master:&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/b99f71fc-5573-11e6-b5b1-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/b99f71fc-5573-11e6-b5b1-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="160382" author="yong.fan" created="Sun, 31 Jul 2016 02:21:28 +0000"  >&lt;p&gt;More failures on master:&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/7ce11cfc-56a4-11e6-906c-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/7ce11cfc-56a4-11e6-906c-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="160431" author="jamesanunez" created="Mon, 1 Aug 2016 15:28:03 +0000"  >&lt;p&gt;Looks like we started hitting this soft lockup in recovery-small test_61 in the past two weeks:&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/af0b71a4-4e8f-11e6-9f8e-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/af0b71a4-4e8f-11e6-9f8e-5254006e85c2&lt;/a&gt;&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/6e43ceea-528c-11e6-bf87-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/6e43ceea-528c-11e6-bf87-5254006e85c2&lt;/a&gt;&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/ec71f75a-5382-11e6-87c4-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/ec71f75a-5382-11e6-87c4-5254006e85c2&lt;/a&gt;&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/2e6697d8-55e5-11e6-aa74-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/2e6697d8-55e5-11e6-aa74-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="160537" author="sbuisson" created="Tue, 2 Aug 2016 15:53:44 +0000"  >&lt;p&gt;Oleg,&lt;/p&gt;

&lt;p&gt;I guess you reverted patch &lt;a href=&quot;http://review.whamcloud.com/18782&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/18782&lt;/a&gt; to solve the issue described here. But as the issue was still hit with patch &lt;a href=&quot;http://review.whamcloud.com/21437&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/21437&lt;/a&gt; (the patch that avoids calling lnet_ipif_query() while holding the lnet_net_lock), there is no evidence that this patch is responsible for this soft lockup.&lt;/p&gt;

&lt;p&gt;If you nevertheless think that patch &lt;a href=&quot;http://review.whamcloud.com/21437&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/21437&lt;/a&gt; is necessary, let&apos;s work to have it merged.&lt;/p&gt;

&lt;p&gt;Sebastien.&lt;/p&gt;</comment>
                            <comment id="160642" author="sbuisson" created="Wed, 3 Aug 2016 08:15:32 +0000"  >&lt;p&gt;I pushed a new version of the patch at &lt;a href=&quot;http://review.whamcloud.com/21437&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/21437&lt;/a&gt;, that avoids calling LIBCFS_ALLOC() while holding the lock.&lt;/p&gt;</comment>
                            <comment id="160758" author="sbuisson" created="Thu, 4 Aug 2016 06:58:14 +0000"  >&lt;p&gt;FYI, patch &lt;a href=&quot;http://review.whamcloud.com/21437&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/21437&lt;/a&gt; successfully passed Maloo tests.&lt;br/&gt;
I have just retriggered them with:&lt;br/&gt;
Test-Parameters: fortestonly testlist=sanity,sanity,sanity,sanity&lt;br/&gt;
to see if it passes sanity consistently.&lt;/p&gt;</comment>
                            <comment id="160878" author="sbuisson" created="Fri, 5 Aug 2016 06:06:04 +0000"  >&lt;p&gt;Hi,&lt;/p&gt;

&lt;p&gt;Confirmed that &lt;a href=&quot;http://review.whamcloud.com/21437&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/21437&lt;/a&gt; passes sanity tests consistently.&lt;/p&gt;</comment>
                            <comment id="161658" author="pjones" created="Thu, 11 Aug 2016 18:03:22 +0000"  >&lt;p&gt;If I understand correctly, the revert of the &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-7845&quot; title=&quot;Support namespace in credentials retrieval&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-7845&quot;&gt;&lt;del&gt;LU-7845&lt;/del&gt;&lt;/a&gt; patch from master has meant that this is no longer occurring regularly and so is not a blocker to 2.9. As Sebastien reports it can still happen occasionally, I have moved it to 2.10 with a lower priority.&lt;/p&gt;</comment>
                            <comment id="161717" author="sbuisson" created="Fri, 12 Aug 2016 06:26:44 +0000"  >&lt;p&gt;Peter,&lt;/p&gt;

&lt;p&gt;It appears this soft lockup was due to &lt;a href=&quot;http://review.whamcloud.com/18782&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/18782&lt;/a&gt;. But now that this patch is reverted, the bug should not appear any more. Maybe we can wait a bit before closing this bug, just to make sure it does not occur again.&lt;/p&gt;

&lt;p&gt;Sebastien.&lt;/p&gt;</comment>
                            <comment id="161725" author="pjones" created="Fri, 12 Aug 2016 12:37:17 +0000"  >&lt;p&gt;In that case let&apos;s close this as a duplicate of &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-7845&quot; title=&quot;Support namespace in credentials retrieval&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-7845&quot;&gt;&lt;del&gt;LU-7845&lt;/del&gt;&lt;/a&gt; and open a new ticket if there is still a residual issue&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                            <outwardlinks description="is related to ">
                                        <issuelink>
            <issuekey id="35157">LU-7845</issuekey>
        </issuelink>
                            </outwardlinks>
                                                        </issuelinktype>
                    </issuelinks>
                <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzyhdr:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>