<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 01:30:00 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-2988] conf-sanity 66: Modules still loaded</title>
                <link>https://jira.whamcloud.com/browse/LU-2988</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;It is easy to reproduce this on a single VM by running only conf-sanity test 66:&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;== conf-sanity test 66: replace nids == 15:30:00 (1363678200)
Loading modules from /root/lustre-master/lustre/tests/..
detected 1 online CPUs by sysfs
libcfs will create CPU partition based on online CPUs
debug=-1
subsystem_debug=all -lnet -lnd -pinger
gss/krb5 is not supported
quota/lquota options: &apos;hash_lqs_cur_bits=3&apos;
start mds service on linux
Starting mds1:   -o loop /tmp/lustre-mdt1 /mnt/mds1
Started lustre-MDT0000
start ost1 service on linux
Starting ost1:   -o loop /tmp/lustre-ost1 /mnt/ost1
Started lustre-OST0000
mount lustre on /mnt/lustre.....
Starting client: linux: -o user_xattr,flock linux@tcp:/lustre /mnt/lustre
replace_nids should fail if MDS, OSTs and clients are UP
error: replace_nids: Operation now in progress
umount lustre on /mnt/lustre.....
Stopping client linux /mnt/lustre (opts:)
sh: lsof: command not found
replace_nids should fail if MDS and OSTs are UP
error: replace_nids: Operation now in progress
stop ost1 service on linux
Stopping /mnt/ost1 (opts:-f) on linux
replace_nids should fail if MDS is UP
error: replace_nids: Operation now in progress
stop mds service on linux
Stopping /mnt/mds1 (opts:-f) on linux
start mds service on linux
Starting mds1: -o nosvc,loop  /tmp/lustre-mdt1 /mnt/mds1
Started lustre-MDT0000
command should accept two parameters
replace primary NIDs for a device
usage: replace_nids &amp;lt;device&amp;gt; &amp;lt;nid1&amp;gt;[,nid2,nid3]
correct device name should be passed
error: replace_nids: Invalid argument
wrong nids list should not destroy the system
replace primary NIDs for a device
usage: replace_nids &amp;lt;device&amp;gt; &amp;lt;nid1&amp;gt;[,nid2,nid3]
replace OST nid
command should accept two parameters
replace primary NIDs for a device
usage: replace_nids &amp;lt;device&amp;gt; &amp;lt;nid1&amp;gt;[,nid2,nid3]
wrong nids list should not destroy the system
replace primary NIDs for a device
usage: replace_nids &amp;lt;device&amp;gt; &amp;lt;nid1&amp;gt;[,nid2,nid3]
replace MDS nid
stop mds service on linux
Stopping /mnt/mds1 (opts:-f) on linux
start mds service on linux
Starting mds1:   -o loop /tmp/lustre-mdt1 /mnt/mds1
Started lustre-MDT0000
start ost1 service on linux
Starting ost1:   -o loop /tmp/lustre-ost1 /mnt/ost1
Started lustre-OST0000
mount lustre on /mnt/lustre.....
Starting client: linux: -o user_xattr,flock linux@tcp:/lustre /mnt/lustre
setup single mount lustre success
umount lustre on /mnt/lustre.....
Stopping client linux /mnt/lustre (opts:)
sh: lsof: command not found
stop ost1 service on linux
Stopping /mnt/ost1 (opts:-f) on linux
stop mds service on linux
Stopping /mnt/mds1 (opts:-f) on linux
Modules still loaded: 
ldiskfs/ldiskfs/ldiskfs.o lustre/mdd/mdd.o lustre/mgs/mgs.o lustre/quota/lquota.o lustre/mgc/mgc.o lustre/fid/fid.o lustre/fld/fld.o lustre/ptlrpc/ptlrpc.o lustre/obdclass/obdclass.o lustre/lvfs/lvfs.o lnet/klnds/socklnd/ksocklnd.o lnet/lnet/lnet.o libcfs/libcfs/libcfs.o
Stopping clients: linux /mnt/lustre (opts:)
Stopping clients: linux /mnt/lustre2 (opts:)
Loading modules from /root/lustre-master/lustre/tests/..
detected 1 online CPUs by sysfs
libcfs will create CPU partition based on online CPUs
debug=-1
subsystem_debug=all -lnet -lnd -pinger
gss/krb5 is not supported
Formatting mgs, mds, osts
Format mds1: /tmp/lustre-mdt1
Format ost1: /tmp/lustre-ost1
Format ost2: /tmp/lustre-ost2
Resetting fail_loc on all nodes...done.
PASS 66 (69s)
............== conf-sanity test complete, duration 113 sec == 15:31:10 (1363678270)
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;This prevents some of my new tests, which run after test 66, from unloading and reloading the Lustre kernel modules. The root cause is that the &quot;lctl replace_nids&quot; implementation may leak lu_envs on certain error paths.&lt;/p&gt;
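
&lt;p&gt;Purely as an illustration of the pattern (not the actual MGS code), the sketch below shows how an early return on an error path can skip the teardown that pairs with the setup done at entry, plus the usual goto-style fix. The type and function names are hypothetical stand-ins; the real code pairs lu_env_init() with lu_env_fini() for each struct lu_env it sets up.&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;#include &amp;lt;stdlib.h&amp;gt;

/* Hypothetical stand-in for struct lu_env. */
struct env_stub {
        int in_use;
};

static struct env_stub *env_stub_alloc(void)
{
        return calloc(1, sizeof(struct env_stub));
}

static void env_stub_free(struct env_stub *env)
{
        free(env);
}

/* Leak pattern: the early return on the error path skips the cleanup,
 * so every such failure leaks one environment. */
static int replace_nids_leaky(int check_fails)
{
        struct env_stub *env = env_stub_alloc();

        if (env == NULL)
                return -1;
        if (check_fails)
                return -2;      /* BUG: env_stub_free() is never reached */
        env_stub_free(env);
        return 0;
}

/* Fixed shape: every exit taken after the allocation funnels through
 * a single cleanup label. */
static int replace_nids_fixed(int check_fails)
{
        struct env_stub *env = env_stub_alloc();
        int rc = 0;

        if (env == NULL)
                return -1;
        if (check_fails) {
                rc = -2;
                goto out_free;
        }

        /* ... the actual work would happen here ... */
out_free:
        env_stub_free(env);
        return rc;
}

int main(void)
{
        /* Both calls hit the failing branch; only the fixed variant
         * frees the environment before returning. */
        replace_nids_leaky(1);
        return replace_nids_fixed(1) == -2 ? 0 : 1;
}
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;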

&lt;p&gt;I&apos;ll post a patch shortly.&lt;/p&gt;</description>
                <environment></environment>
        <key id="18004">LU-2988</key>
            <summary>conf-sanity 66: Modules still loaded</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="4" iconUrl="https://jira.whamcloud.com/images/icons/priorities/minor.svg">Minor</priority>
                        <status id="6" iconUrl="https://jira.whamcloud.com/images/icons/statuses/closed.png" description="The issue is considered finished, the resolution is correct. Issues which are closed can be reopened.">Closed</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="liwei">Li Wei</assignee>
                                    <reporter username="liwei">Li Wei</reporter>
                        <labels>
                    </labels>
                <created>Tue, 19 Mar 2013 10:23:03 +0000</created>
                <updated>Tue, 2 Apr 2013 00:54:12 +0000</updated>
                            <resolved>Tue, 2 Apr 2013 00:54:00 +0000</resolved>
                                    <version>Lustre 2.4.0</version>
                                    <fixVersion>Lustre 2.4.0</fixVersion>
                                        <due></due>
                            <votes>0</votes>
                                    <watches>2</watches>
                    <comments>
                            <comment id="54366" author="liwei" created="Tue, 19 Mar 2013 13:26:25 +0000"  >&lt;p&gt;&lt;a href=&quot;http://review.whamcloud.com/5765&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/5765&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="55245" author="liwei" created="Tue, 2 Apr 2013 00:54:00 +0000"  >&lt;p&gt;The patch has landed to master.&lt;/p&gt;</comment>
                    </comments>
                    <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                    <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzvlof:</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>7281</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                </customfields>
    </item>
</channel>
</rss>