<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 02:16:50 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92">
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-8357] sanity-sec LBUG on MDS umount with ASSERTION( exp-&gt;u.eu_target_data.ted_nodemap == nodemap )</title>
                <link>https://jira.whamcloud.com/browse/LU-8357</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;sanity-sec times out on MDS umount. From the suite_stdout log, after all sanity-sec tests were run, we see:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;13:18:19:CMD: onyx-36vm7 grep -c /mnt/lustre-mds1&apos; &apos; /proc/mounts
13:18:19:Stopping /mnt/lustre-mds1 (opts:-f) on onyx-36vm7
13:18:19:CMD: onyx-36vm7 umount -d -f /mnt/lustre-mds1
14:17:19:********** Timeout by autotest system **********
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The test_complete log for MDS1, vm7, doesn&#8217;t really show anything interesting. However, the console log for MDS1 for test_27 shows the LBUG:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;13:20:13:[30936.817772] Lustre: DEBUG MARKER: umount -d -f /mnt/lustre-mds1
13:20:13:[30942.959187] Lustre: 3708:0:(client.c:2113:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1467231499/real 1467231499]  req@ffff88007b2bdb00 x1538472463395376/t0(0) o39-&amp;gt;lustre-MDT0000-lwp-MDT0000@0@lo:12/10 lens 224/224 e 0 to 1 dl 1467231505 ref 2 fl Rpc:XN/0/ffffffff rc 0/-1
13:20:13:[30942.968919] Lustre: 3708:0:(client.c:2113:ptlrpc_expire_one_request()) Skipped 39 previous similar messages
13:20:13:[31024.097193] Lustre: 11462:0:(client.c:2113:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1467231575/real 1467231575]  req@ffff88007b2bfc00 x1538472463395360/t0(0) o104-&amp;gt;lustre-MDT0000@10.2.4.148@tcp:15/16 lens 296/224 e 0 to 1 dl 1467231586 ref 1 fl Rpc:X/2/ffffffff rc 0/-1
13:20:13:[31024.108553] Lustre: 11462:0:(client.c:2113:ptlrpc_expire_one_request()) Skipped 7 previous similar messages
13:20:13:[31046.110142] LustreError: 11462:0:(ldlm_lockd.c:691:ldlm_handle_ast_error()) ### client (nid 10.2.4.148@tcp) failed to reply to blocking AST (req@ffff88007b2bfc00 x1538472463395360 status 0 rc -110), evict it ns: mdt-lustre-MDT0000_UUID lock: ffff88007a8ea200/0x40220881aef8307 lrc: 3/0,0 mode: PR/PR res: [0x200000007:0x1:0x0].0x0 bits 0x13 rrc: 1 type: IBT flags: 0x60200400000020 nid: 10.2.4.148@tcp remote: 0x3f8b8faf93471726 expref: 5 pid: 11463 timeout: 4325802406 lvb_type: 0
13:20:13:[31046.122612] LustreError: 138-a: lustre-MDT0000: A client on nid 10.2.4.148@tcp was evicted due to a lock blocking callback time out: rc -110
13:20:13:[31046.135477] LustreError: 13044:0:(nodemap_member.c:50:nm_member_del()) ASSERTION( exp-&amp;gt;u.eu_target_data.ted_nodemap == nodemap ) failed: 
13:20:13:[31046.140099] LustreError: 13044:0:(nodemap_member.c:50:nm_member_del()) LBUG
13:20:13:[31046.142126] Pid: 13044, comm: mdt00_003
13:20:13:[31046.143802] 
13:20:13:[31046.143802] Call Trace:
13:20:13:[31046.146737]  [&amp;lt;ffffffffa06a87d3&amp;gt;] libcfs_debug_dumpstack+0x53/0x80 [libcfs]
13:20:13:[31046.148763]  [&amp;lt;ffffffffa06a8d75&amp;gt;] lbug_with_loc+0x45/0xc0 [libcfs]
13:20:13:[31046.150703]  [&amp;lt;ffffffffa0a8c79d&amp;gt;] nm_member_del+0x18d/0x190 [ptlrpc]
13:20:13:[31046.152683]  [&amp;lt;ffffffffa0a8743f&amp;gt;] nodemap_del_member+0x5f/0x170 [ptlrpc]
13:20:13:[31046.154752]  [&amp;lt;ffffffffa0db95a5&amp;gt;] mdt_obd_disconnect+0x155/0x640 [mdt]
13:20:13:[31046.156740]  [&amp;lt;ffffffffa09cd15b&amp;gt;] target_handle_disconnect+0x10b/0x4a0 [ptlrpc]
13:20:13:[31046.158828]  [&amp;lt;ffffffffa0a64e37&amp;gt;] tgt_disconnect+0x37/0x140 [ptlrpc]
13:20:13:[31046.160881]  [&amp;lt;ffffffffa0a69595&amp;gt;] tgt_request_handle+0x915/0x1320 [ptlrpc]
13:20:13:[31046.162920]  [&amp;lt;ffffffffa0a15b1b&amp;gt;] ptlrpc_server_handle_request+0x21b/0xa90 [ptlrpc]
13:20:13:[31046.165045]  [&amp;lt;ffffffffa0a136d8&amp;gt;] ? ptlrpc_wait_event+0x98/0x340 [ptlrpc]
13:20:13:[31046.166963]  [&amp;lt;ffffffff810b88b2&amp;gt;] ? default_wake_function+0x12/0x20
13:20:13:[31046.168920]  [&amp;lt;ffffffff810af018&amp;gt;] ? __wake_up_common+0x58/0x90
13:20:13:[31046.170841]  [&amp;lt;ffffffffa0a19bd0&amp;gt;] ptlrpc_main+0xaa0/0x1de0 [ptlrpc]
13:20:13:[31046.172759]  [&amp;lt;ffffffffa0a19130&amp;gt;] ? ptlrpc_main+0x0/0x1de0 [ptlrpc]
13:20:13:[31046.174784]  [&amp;lt;ffffffff810a5acf&amp;gt;] kthread+0xcf/0xe0
13:20:13:[31046.176658]  [&amp;lt;ffffffff810a5a00&amp;gt;] ? kthread+0x0/0xe0
13:20:13:[31046.178486]  [&amp;lt;ffffffff81646318&amp;gt;] ret_from_fork+0x58/0x90
13:20:13:[31046.180329]  [&amp;lt;ffffffff810a5a00&amp;gt;] ? kthread+0x0/0xe0
13:20:13:[31046.182177] 
13:20:13:[31046.183948] Kernel panic - not syncing: LBUG
13:20:13:[31046.184689] CPU: 0 PID: 13044 Comm: mdt00_003 Tainted: G           OE  ------------   3.10.0-327.18.2.el7_lustre.x86_64 #1
13:20:13:[31046.184689] Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2007
13:20:13:[31046.184689]  ffffffffa06c5def 000000006a616767 ffff88004311bb00 ffffffff81635c14
13:20:13:[31046.184689]  ffff88004311bb80 ffffffff8162f48a ffffffff00000008 ffff88004311bb90
13:20:13:[31046.184689]  ffff88004311bb30 000000006a616767 ffffffffa0aa54eb 0000000000000246
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;


&lt;p&gt;There are several instances of this failure. Logs are at&lt;br/&gt;
2016-06-29 - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/2cbf7ebc-3e44-11e6-80b9-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/2cbf7ebc-3e44-11e6-80b9-5254006e85c2&lt;/a&gt;&lt;br/&gt;
2016-06-23  - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/a6a39816-39c2-11e6-acf3-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/a6a39816-39c2-11e6-acf3-5254006e85c2&lt;/a&gt;&lt;/p&gt;</description>
                <environment>autotest</environment>
        <key id="37924">LU-8357</key>
            <summary>sanity-sec LBUG on MDS umount with ASSERTION( exp-&gt;u.eu_target_data.ted_nodemap == nodemap )</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="4" iconUrl="https://jira.whamcloud.com/images/icons/priorities/minor.svg">Minor</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="kit.westneat">Kit Westneat</assignee>
                                    <reporter username="jamesanunez">James Nunez</reporter>
                        <labels>
                    </labels>
                <created>Thu, 30 Jun 2016 15:52:00 +0000</created>
                <updated>Tue, 2 Aug 2016 19:12:46 +0000</updated>
                            <resolved>Tue, 26 Jul 2016 19:47:38 +0000</resolved>
                                    <version>Lustre 2.9.0</version>
                                    <fixVersion>Lustre 2.9.0</fixVersion>
                                        <due></due>
                            <votes>0</votes>
                                    <watches>5</watches>
                <comments>
                            <comment id="157550" author="jgmitter" created="Fri, 1 Jul 2016 17:21:32 +0000"  >&lt;p&gt;Hi Kit,&lt;/p&gt;

&lt;p&gt;Could you please have a look at this nodemap issue?&lt;/p&gt;

&lt;p&gt;Thanks.&lt;br/&gt;
Joe&lt;/p&gt;</comment>
                            <comment id="157670" author="kit.westneat" created="Tue, 5 Jul 2016 16:14:12 +0000"  >&lt;p&gt;I found a couple places where nodemaps were being reclassified outside of the config lock, which could cause this LBUG.&lt;/p&gt;

&lt;p&gt;I&apos;ll upload a new patch.&lt;/p&gt;</comment>
                            <comment id="157685" author="gerrit" created="Tue, 5 Jul 2016 17:24:41 +0000"  >&lt;p&gt;Kit Westneat (kit.westneat@gmail.com) uploaded a new patch: &lt;a href=&quot;http://review.whamcloud.com/21159&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/21159&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-8357&quot; title=&quot;sanity-sec LBUG on MDS umount with ASSERTION( exp-&amp;gt;u.eu_target_data.ted_nodemap == nodemap )&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-8357&quot;&gt;&lt;del&gt;LU-8357&lt;/del&gt;&lt;/a&gt; nodemap: reclassify nodemap requires active conf lock&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 318b220dfd35cffe54e3692eccd7d055f34c710f&lt;/p&gt;</comment>
                            <comment id="159062" author="yong.fan" created="Mon, 18 Jul 2016 03:18:51 +0000"  >&lt;p&gt;Another failure instance on master:&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/802afcac-4aec-11e6-8968-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/802afcac-4aec-11e6-8968-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="159354" author="gerrit" created="Wed, 20 Jul 2016 17:43:32 +0000"  >&lt;p&gt;Oleg Drokin (oleg.drokin@intel.com) merged in patch &lt;a href=&quot;http://review.whamcloud.com/21159/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/21159/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-8357&quot; title=&quot;sanity-sec LBUG on MDS umount with ASSERTION( exp-&amp;gt;u.eu_target_data.ted_nodemap == nodemap )&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-8357&quot;&gt;&lt;del&gt;LU-8357&lt;/del&gt;&lt;/a&gt; nodemap: reclassify nodemap requires active conf lock&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: 1ce1032a03dd26345e662164b7877079c54468f5&lt;/p&gt;</comment>
                            <comment id="159963" author="jgmitter" created="Tue, 26 Jul 2016 19:47:38 +0000"  >&lt;p&gt;Landed to master for 2.9.0&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                            <outwardlinks description="is related to ">
                                        <issuelink>
            <issuekey id="18740">LU-3291</issuekey>
        </issuelink>
                            </outwardlinks>
                                                        </issuelinktype>
                    </issuelinks>
                <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                    <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzygbz:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                </customfields>
    </item>
</channel>
</rss>