<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 02:38:32 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary, append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
<language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-10827] conf-sanity test 0 fails with &#8216;rmmod: ERROR: Module lustre is in use&#8217;</title>
                <link>https://jira.whamcloud.com/browse/LU-10827</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;conf-sanity tests 0, 1, 2, 3, 4, 5a/b/c/d, and many others fail with the following error when trying to shut down the file system:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;stop mds service on onyx-50vm9
CMD: onyx-50vm9 grep -c /mnt/lustre-mds1&apos; &apos; /proc/mounts || true
Stopping /mnt/lustre-mds1 (opts:-f) on onyx-50vm9
CMD: onyx-50vm9 umount -d -f /mnt/lustre-mds1
CMD: onyx-50vm9 lsmod | grep lnet &amp;gt; /dev/null &amp;amp;&amp;amp;
lctl dl | grep &apos; ST &apos; || true
CMD: onyx-50vm6.onyx.hpdd.intel.com lsmod | grep lnet &amp;gt; /dev/null &amp;amp;&amp;amp;
lctl dl | grep &apos; ST &apos; || true
rmmod: ERROR: Module lustre is in use
conf-sanity test_0: @@@@@@ FAIL: cleanup failed with 203
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;Unmounting the OSTs and MDT seems to work, but running rmmod on the client fails; see the suite_log at &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/5d846520-287c-11e8-9e0e-52540065bddc&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/5d846520-287c-11e8-9e0e-52540065bddc&lt;/a&gt;.&lt;/p&gt;


&lt;p&gt;Looking at the client console (vm6), we see:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;Ubuntu 16.04.2 LTS trevis-4vm3.trevis.hpdd.intel.com ttyS0

trevis-4vm3 login: [&#160;&#160;&#160; 8.165539] audit: type=1400 audit(1521039871.976:11): apparmor=&quot;ALLOWED&quot; operation=&quot;open&quot; profile=&quot;/usr/sbin/sssd&quot; name=&quot;/etc/gss/mech.d/&quot; pid=547 comm=&quot;sssd_be&quot; requested_mask=&quot;r&quot; denied_mask=&quot;r&quot; fsuid=0 ouid=0
[&#160;&#160; 81.009062] random: nonblocking pool is initialized
[&#160; 138.440162] libcfs: module verification failed: signature and/or required key missing - tainting kernel
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;We don&#8217;t see this during RHEL 7 testing.&lt;/p&gt;


&lt;p&gt;In the client dmesg log, we see an error:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[24874.129276] Lustre: DEBUG MARKER: grep -c /mnt/lustre&apos; &apos; /proc/mounts
[24874.137005] Lustre: DEBUG MARKER: lsof -t /mnt/lustre
[24880.900494] LustreError: 167-0: lustre-MDT0000-mdc-ffff880061827800: This client was evicted by lustre-MDT0000; in progress operations using this service will fail.
[24880.902374] LustreError: 8353:0:(file.c:4213:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000007:0x1:0x0] error: rc = -5
[24880.905171] Lustre: lustre-MDT0000-mdc-ffff880061827800: Connection restored to 10.2.9.244@tcp (at 10.2.9.244@tcp)
[24881.073111] Lustre: DEBUG MARKER: umount /mnt/lustre 2&amp;gt;&amp;amp;1
[24881.110161] Lustre: Unmounted lustre-client
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;So far, this issue is seen only when testing Ubuntu clients; it first appeared on 2018-02-27 22:03:52 UTC.&lt;/p&gt;


&lt;p&gt;Logs for the failures are at&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/f75808be-1cb5-11e8-a7cd-52540065bddc&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/f75808be-1cb5-11e8-a7cd-52540065bddc&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/4aeef8ce-1de8-11e8-bd91-52540065bddc&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/4aeef8ce-1de8-11e8-bd91-52540065bddc&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/cf7f2d1a-1f29-11e8-b046-52540065bddc&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/cf7f2d1a-1f29-11e8-b046-52540065bddc&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/a268caba-2894-11e8-b3c6-52540065bddc&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/a268caba-2894-11e8-b3c6-52540065bddc&lt;/a&gt;&lt;/p&gt;</description>
                <environment>Ubuntu clients</environment>
        <key id="51432">LU-10827</key>
            <summary>conf-sanity test 0 fails with &#8216;rmmod: ERROR: Module lustre is in use&#8217;</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="4" iconUrl="https://jira.whamcloud.com/images/icons/priorities/minor.svg">Minor</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="3">Duplicate</resolution>
                                        <assignee username="ys">Yang Sheng</assignee>
                                    <reporter username="jamesanunez">James Nunez</reporter>
                        <labels>
                            <label>ubuntu</label>
                    </labels>
                <created>Mon, 19 Mar 2018 23:17:15 +0000</created>
                <updated>Thu, 29 Mar 2018 22:52:57 +0000</updated>
                            <resolved>Thu, 29 Mar 2018 22:52:57 +0000</resolved>
                                    <version>Lustre 2.11.0</version>
                                                        <due></due>
                            <votes>0</votes>
                                    <watches>5</watches>
                                                                            <comments>
                            <comment id="224047" author="pjones" created="Tue, 20 Mar 2018 17:09:09 +0000"  >&lt;p&gt;Yang Sheng&lt;/p&gt;

&lt;p&gt;Could you please investigate?&lt;/p&gt;

&lt;p&gt;Thanks&lt;/p&gt;

&lt;p&gt;Peter&lt;/p&gt;</comment>
                            <comment id="224610" author="ys" created="Tue, 27 Mar 2018 15:29:14 +0000"  >&lt;p&gt;From the test script, the module unload should not be called, since this is a combined MGS/MDS environment. It looks like this part should be run on the MDS instead of the client. I will push a patch to fix it.&lt;/p&gt;


&lt;p&gt;Thanks,&lt;/p&gt;

&lt;p&gt;Yangsheng&lt;/p&gt;</comment>
                            <comment id="224611" author="gerrit" created="Tue, 27 Mar 2018 15:47:13 +0000"  >&lt;p&gt;Yang Sheng (yang.sheng@intel.com) uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/31793&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/31793&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-10827&quot; title=&quot;conf-sanity test 0 fails with &#8216;rmmod: ERROR: Module lustre is in use&#8217;&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-10827&quot;&gt;&lt;del&gt;LU-10827&lt;/del&gt;&lt;/a&gt; tests: unload_modules_conf should run on mds&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 4cb7aabdf935339d30dfb89ac6755116aa20b944&lt;/p&gt;</comment>
                            <comment id="224681" author="jamesanunez" created="Wed, 28 Mar 2018 02:26:48 +0000"  >&lt;p&gt;We reverted the patch for &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-6867&quot; title=&quot;change test-framework to detect active facet based on current Lustre state&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-6867&quot;&gt;LU-6867&lt;/a&gt; (&lt;a href=&quot;https://review.whamcloud.com/#/c/15638/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/#/c/15638/&lt;/a&gt;), and it looks like conf-sanity running with Ubuntu clients now passes all testing.&lt;/p&gt;


&lt;p&gt;The revert patch is at &lt;a href=&quot;https://review.whamcloud.com/#/c/31798/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/#/c/31798/&lt;/a&gt; and the conf-sanity results are at &lt;a href=&quot;https://testing.hpdd.intel.com/test_sessions/19e2923f-bfde-498a-a827-583c610cd040&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sessions/19e2923f-bfde-498a-a827-583c610cd040&lt;/a&gt;.&lt;/p&gt;</comment>
                            <comment id="224846" author="jamesanunez" created="Thu, 29 Mar 2018 22:52:57 +0000"  >&lt;p&gt;After we reverted &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-6867&quot; title=&quot;change test-framework to detect active facet based on current Lustre state&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-6867&quot;&gt;LU-6867&lt;/a&gt;, this issue went away. Closing as a duplicate of &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-6867&quot; title=&quot;change test-framework to detect active facet based on current Lustre state&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-6867&quot;&gt;LU-6867&lt;/a&gt;.&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                            <outwardlinks description="is related to ">
                                        <issuelink>
            <issuekey id="31127">LU-6867</issuekey>
        </issuelink>
                            </outwardlinks>
                                                        </issuelinktype>
                    </issuelinks>
                <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                    <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzzukf:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                </customfields>
    </item>
</channel>
</rss>