<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 02:01:47 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
<language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-6619] lustre/obdclass does not get cleared while stopping/cleaning up Lustre if the setup had an additional MDT present</title>
                <link>https://jira.whamcloud.com/browse/LU-6619</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;Hi,&lt;/p&gt;

&lt;p&gt;Pre-requisite to reproduce this bug :&lt;/p&gt;

&lt;p&gt;1. A single Scientific Linux (release 6.6) VM with at least 1 GB of memory and&lt;br/&gt;
    50 GB of disk space.&lt;br/&gt;
2. A Lustre 2.7.51 setup up and running on the above VM; in my case all&lt;br/&gt;
    Lustre components are configured on the same VM.&lt;br/&gt;
3. I added one extra MDT of 15 GB to the Lustre setup; the MDT was&lt;br/&gt;
    created on a loop device.&lt;br/&gt;
===================================&lt;/p&gt;
&lt;ol&gt;
	&lt;li&gt;Steps to reproduce the issue:&lt;br/&gt;
===================================&lt;br/&gt;
1. Run a dd command to generate some I/O on the Lustre filesystem&lt;br/&gt;
     (dd if=/dev/zero of=/mnt/lustre/test bs=512M count=10).&lt;br/&gt;
2. Once the I/O completes, stop the Lustre filesystem. I executed the&lt;br/&gt;
    cleanup script (../lustre-release/lustre/tests/llmountcleanup.sh)&lt;br/&gt;
    to unmount/stop Lustre.&lt;br/&gt;
3. After the unmount completes, Lustre prints an error message on the&lt;br/&gt;
    terminal: module lustre/obdclass still loaded.&lt;br/&gt;
=======================================&lt;br/&gt;
Command prompt trace:&lt;br/&gt;
=============================&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;root@localhost ~&amp;#93;&lt;/span&gt;# sh /var/lib/jenkins/jobs/Lustre-New-Test/workspace/default/lustre-release/lustre/tests/llmountcleanup.sh&lt;br/&gt;
Stopping clients: localhost /mnt/lustre (opts:-f)&lt;br/&gt;
Stopping client localhost /mnt/lustre opts:-f&lt;br/&gt;
Stopping clients: localhost /mnt/lustre2 (opts:-f)&lt;br/&gt;
Stopping /mnt/mds1 (opts:-f) on localhost&lt;br/&gt;
Stopping /mnt/ost1 (opts:-f) on localhost&lt;br/&gt;
Stopping /mnt/ost2 (opts:-f) on localhost&lt;br/&gt;
  2 UP mgc MGC192.168.102.13@tcp 9918b9be-ec01-ce40-5dc1-d4ebb297e839 5&lt;br/&gt;
  3 UP mds MDS MDS_uuid 3&lt;br/&gt;
 23 UP osd-ldiskfs lustre-MDT0001-osd lustre-MDT0001-osd_UUID 9&lt;br/&gt;
 24 UP lod lustre-MDT0001-mdtlov lustre-MDT0001-mdtlov_UUID 4&lt;br/&gt;
 25 UP mdt lustre-MDT0001 lustre-MDT0001_UUID 5&lt;br/&gt;
 26 UP mdd lustre-MDD0001 lustre-MDD0001_UUID 4&lt;br/&gt;
 27 UP osp lustre-MDT0000-osp-MDT0001 lustre-MDT0001-mdtlov_UUID 5&lt;br/&gt;
 28 UP osp lustre-OST0000-osc-MDT0001 lustre-MDT0001-mdtlov_UUID 5&lt;br/&gt;
 29 UP osp lustre-OST0001-osc-MDT0001 lustre-MDT0001-mdtlov_UUID 5&lt;br/&gt;
 30 UP lwp lustre-MDT0000-lwp-MDT0001 lustre-MDT0000-lwp-MDT0001_UUID 5&lt;br/&gt;
Modules still loaded: ************&lt;br/&gt;
lustre/osp/osp.o lustre/lod/lod.o lustre/mdt/mdt.o lustre/mdd/mdd.o ldiskfs/ldiskfs.o lustre/quota/lquota.o lustre/lfsck/lfsck.o lustre/mgc/mgc.o lustre/fid/fid.o lustre/fld/fld.o lustre/ptlrpc/ptlrpc.o lustre/obdclass/obdclass.o lnet/klnds/socklnd/ksocklnd.o lnet/lnet/lnet.o libcfs/libcfs/libcfs.o&lt;br/&gt;
==================================================&lt;br/&gt;
4. After this message, I manually unmounted the additional MDT, which resulted in call traces.&lt;br/&gt;
5. /var/log/messages shows --&amp;gt; proc_dir_entry &apos;lustre/lov&apos; already registered, proc_dir_entry &apos;lustre/osc&apos; already registered, followed by a call trace.&lt;br/&gt;
6. After this, if a Lustre mount/start is attempted, the process sometimes&lt;br/&gt;
    hangs, again resulting in call traces at the backend.&lt;br/&gt;
7. A reboot is the only way to clean everything up and start afresh.&lt;br/&gt;
====================================================&lt;br/&gt;
Attaching /var/log/messages and dmesg from the Lustre setup.&lt;br/&gt;
====================================================&lt;br/&gt;
Thanks,&lt;br/&gt;
Paramita Varma&lt;/li&gt;
&lt;/ol&gt;
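The reproduction steps above can be sketched as a small shell script. The lustre-release checkout path and mount point are assumptions for illustration; by default the script only prints the commands it would run (set DRYRUN=0 to actually execute them on a real Lustre test setup):

```shell
#!/bin/sh
# Sketch of the reproduction steps above. LUSTRE_TESTS and MNT are
# assumed paths; adjust them for your setup. DRYRUN defaults to 1 so
# the script prints commands instead of running them.
LUSTRE_TESTS=${LUSTRE_TESTS:-$HOME/lustre-release/lustre/tests}
MNT=${MNT:-/mnt/lustre}

run() {
    if [ "${DRYRUN:-1}" = "1" ]; then
        echo "+ $*"
    else
        "$@"
    fi
}

# 1. Generate some I/O on the Lustre filesystem.
run dd if=/dev/zero of="$MNT/test" bs=512M count=10

# 2. Unmount/stop Lustre with the cleanup script.
run sh "$LUSTRE_TESTS/llmountcleanup.sh"

# 3. Check whether obdclass (and the other modules) are still loaded.
run lsmod
```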


</description>
                <environment>Scientific Linux release 6.6 (Carbon)&lt;br/&gt;
</environment>
        <key id="30230">LU-6619</key>
            <summary>lustre/obdclass does not get cleared while stopping/cleaning up Lustre if the setup had an additional MDT present</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="3" iconUrl="https://jira.whamcloud.com/images/icons/priorities/major.svg">Major</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="5">Cannot Reproduce</resolution>
                                        <assignee username="wc-triage">WC Triage</assignee>
                                    <reporter username="paramitavarma">Paramita varma</reporter>
                        <labels>
                            <label>mdt</label>
                    </labels>
                <created>Tue, 19 May 2015 06:43:49 +0000</created>
                <updated>Fri, 28 Feb 2020 00:04:00 +0000</updated>
                            <resolved>Fri, 28 Feb 2020 00:04:00 +0000</resolved>
                                    <version>Lustre 2.7.0</version>
                                                        <due></due>
                            <votes>0</votes>
                                    <watches>1</watches>
                                                                            <comments>
                            <comment id="116312" author="paramitavarma" created="Mon, 25 May 2015 08:59:13 +0000"  >&lt;p&gt;Hi,&lt;br/&gt;
This issue is also reproducible with the additional MDT created on a disk device.&lt;br/&gt;
I retested the scenario with a disk device and hit the issue again.&lt;/p&gt;

&lt;p&gt;Thanks &amp;amp; Regards,&lt;/p&gt;

&lt;p&gt;Paramita Varma&lt;/p&gt;</comment>
                            <comment id="264206" author="adilger" created="Fri, 28 Feb 2020 00:04:00 +0000"  >&lt;p&gt;Close old bug that hasn&apos;t been seen in a long time.&lt;/p&gt;</comment>
                    </comments>
                    <attachments>
                            <attachment id="17817" name="dmesg-proc-cleaning.txt" size="94137" author="paramitavarma" created="Tue, 19 May 2015 06:43:49 +0000"/>
                            <attachment id="17951" name="log-message-with-disk-device-MDT.txt" size="16580" author="paramitavarma" created="Mon, 25 May 2015 09:08:59 +0000"/>
                            <attachment id="17950" name="log-message-with-disk-device-MDT.txt" size="16580" author="paramitavarma" created="Mon, 25 May 2015 09:01:29 +0000"/>
                            <attachment id="17818" name="proc-cleaning-log-messages.txt" size="39562" author="paramitavarma" created="Tue, 19 May 2015 06:43:49 +0000"/>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10040" key="com.atlassian.jira.plugin.system.customfieldtypes:labels">
                        <customfieldname>Epic</customfieldname>
                        <customfieldvalues>
                                        <label>client</label>
            <label>metadata</label>
            <label>mount</label>
            <label>server</label>
            <label>test</label>
    
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10030" key="com.atlassian.jira.plugin.system.customfieldtypes:labels">
                        <customfieldname>Epic/Theme</customfieldname>
                        <customfieldvalues>
                                        <label>Lustre-2.5.2</label>
            <label>test</label>
    
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                            <customfield id="customfield_10070" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Project</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10032"><![CDATA[Test Infrastructure]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzxdnr:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>