<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 02:14:08 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-8044] class_process_config() no device for: lustre-MDT0021-mdtlov</title>
                <link>https://jira.whamcloud.com/browse/LU-8044</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;On startup for the first time after formatting, the MDT fails to process the config provided by the MGS.  The MDT then fails to start.&lt;br/&gt;
The config log on the MGS appears to be invalid, with more than one setup and modify_mdc_tgts record for one of the other MDTs.&lt;/p&gt;

&lt;p&gt;The MDT which fails to start reports:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;Lustre: Lustre: Build Version: 2.8.0
LustreError: 11797:0:(obd_config.c:1262:class_process_config()) no device for: lustre-MDT0021-mdtlov
LustreError: 11797:0:(obd_config.c:1666:class_config_llog_handler()) MGC192.168.112.240@o2ib15: cfg command failed: rc = -22
Lustre:    cmd=cf014 0:lustre-MDT0021-mdtlov  1:lustre-MDT0014_UUID  2:20  3:1

LustreError: 15b-f: MGC192.168.112.240@o2ib15: The configuration from log &apos;lustre-MDT0021&apos;failed from the MGS (-22).  Make sure this client and the MGS are running compatible versions of Lustre.
LustreError: 11667:0:(obd_mount_server.c:1309:server_start_targets()) failed to start server lustre-MDT0021: -22
LustreError: 11667:0:(obd_mount_server.c:1798:server_fill_super()) Unable to start targets: -22
LustreError: 11667:0:(obd_mount_server.c:1512:server_put_super()) no obd lustre-MDT0021
Lustre: server umount lustre-MDT0021 complete
LustreError: 11667:0:(obd_mount.c:1426:lustre_fill_super()) Unable to mount  (-22)
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The config logs CONFIGS/lustre-MDT* do not all have the same number of records.  lustre-MDT0021 has 2 more records than the other 29 MDTs.&lt;/p&gt;

&lt;p&gt;The suspicious llog records are:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;#04 (152)setup     0:lustre-MDT0014-osp-MDT0021  1:lustre-MDT0014_UUID  2:192.168.113.6@o2ib15
#05 (136)modify_mdc_tgts add 0:lustre-MDT0021-mdtlov  1:lustre-MDT0014_UUID  2:20  3:1
#179 (152)setup     0:lustre-MDT0014-osp-MDT0021  1:lustre-MDT0014_UUID  2:192.168.113.6@o2ib15
#180 (136)modify_mdc_tgts add 0:lustre-MDT0021-mdtlov  1:lustre-MDT0014_UUID  2:20  3:1
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</description>
                <environment>TOSS 2 (RHEL 6.7 based)&lt;br/&gt;
kernel 2.6.32-573.22.1.1chaos.ch5.4.x86_64&lt;br/&gt;
Lustre 2.8.0+patches 2.8-llnl-preview1&lt;br/&gt;
zfs-0.6.5.4-1.ch5.4.x86_64&lt;br/&gt;
1 MGS - separate server&lt;br/&gt;
40 MDTs - each on separate server&lt;br/&gt;
10 OSTs - each on separate server&lt;br/&gt;
Filesystem name is &amp;quot;lustre&amp;quot;</environment>
        <key id="36257">LU-8044</key>
            <summary>class_process_config() no device for: lustre-MDT0021-mdtlov</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="3" iconUrl="https://jira.whamcloud.com/images/icons/priorities/major.svg">Major</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="di.wang">Di Wang</assignee>
                                    <reporter username="ofaaland">Olaf Faaland</reporter>
                        <labels>
                            <label>llnl</label>
                    </labels>
                <created>Tue, 19 Apr 2016 17:59:34 +0000</created>
                <updated>Thu, 14 Jun 2018 21:41:18 +0000</updated>
                            <resolved>Wed, 15 Jun 2016 13:15:25 +0000</resolved>
                                    <version>Lustre 2.8.0</version>
                                    <fixVersion>Lustre 2.9.0</fixVersion>
                                        <due></due>
                            <votes>0</votes>
                                    <watches>7</watches>
                                                                            <comments>
                            <comment id="149439" author="ofaaland" created="Tue, 19 Apr 2016 18:01:26 +0000"  >&lt;p&gt;My description made it sound like this happens every time.  That&apos;s not the case; it happens intermittently.&lt;/p&gt;</comment>
                            <comment id="149441" author="ofaaland" created="Tue, 19 Apr 2016 18:08:58 +0000"  >&lt;p&gt;Attached:&lt;br/&gt;
 config log for MDT0021 from CONFIGS/ on MGS&lt;br/&gt;
 console log (dmesg) for MDS (catalyst240)&lt;br/&gt;
 ldev.conf showing what nodes play what roles&lt;br/&gt;
 lctl dk output from MGS, reflects default debug and subsystem_debug settings&lt;/p&gt;</comment>
                            <comment id="149442" author="di.wang" created="Tue, 19 Apr 2016 18:15:59 +0000"  >&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;#04 (152)setup     0:lustre-MDT0014-osp-MDT0021  1:lustre-MDT0014_UUID  2:192.168.113.6@o2ib15
#05 (136)modify_mdc_tgts add 0:lustre-MDT0021-mdtlov  1:lustre-MDT0014_UUID  2:20  3:1
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;The index for the OSP setup record seems too early, which does not look right.&lt;/p&gt;

&lt;p&gt;Could you please post CONFIGS/lustre-MDT0021 and CONFIGS/lustre-MDT0000 here? Thanks.&lt;/p&gt;</comment>
                            <comment id="149457" author="ofaaland" created="Tue, 19 Apr 2016 18:41:47 +0000"  >&lt;p&gt;Config log for MDT0000.&lt;br/&gt;
The log for MDT0021 is already attached.&lt;/p&gt;</comment>
                            <comment id="149460" author="ofaaland" created="Tue, 19 Apr 2016 18:47:15 +0000"  >&lt;p&gt;Di,&lt;/p&gt;

&lt;p&gt;For the next few hours, I can either gather more information for you, or experiment.  About 4 hours from now, I&apos;ll have to put the nodes back to their production use and the filesystem will be destroyed.&lt;/p&gt;

&lt;p&gt;thanks,&lt;br/&gt;
Olaf&lt;/p&gt;</comment>
                            <comment id="149474" author="di.wang" created="Tue, 19 Apr 2016 19:46:19 +0000"  >&lt;p&gt;According to the config log, it looks like the OSP (lustre-MDT0014-osp-MDT0021) setup record is added before the &quot;lov setup&quot; record, which is clearly wrong.&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;#01 (224)marker 865 (flags=0x01, v2.8.0.0) lustre-MDT0014  &apos;add osp&apos; Tue Apr 19 08:55:48 2016-
#02 (088)add_uuid  nid=192.168.113.6@o2ib15(0x5000fc0a87106)  0:  1:192.168.113.6@o2ib15
#03 (144)attach    0:lustre-MDT0014-osp-MDT0021  1:osp  2:lustre-MDT0021-mdtlov_UUID
#04 (152)setup     0:lustre-MDT0014-osp-MDT0021  1:lustre-MDT0014_UUID  2:192.168.113.6@o2ib15
#05 (136)modify_mdc_tgts add 0:lustre-MDT0021-mdtlov  1:lustre-MDT0014_UUID  2:20  3:1
#06 (224)END   marker 865 (flags=0x02, v2.8.0.0) lustre-MDT0014  &apos;add osp&apos; Tue Apr 19 08:55:48 2016-
#07 (224)marker 873 (flags=0x01, v2.8.0.0) lustre-MDT0021  &apos;add mdt&apos; Tue Apr 19 08:55:48 2016-
#08 (120)attach    0:lustre-MDT0021  1:mdt  2:lustre-MDT0021_UUID
#09 (112)mount_option 0:  1:lustre-MDT0021  2:lustre-MDT0021-mdtlov
#10 (160)setup     0:lustre-MDT0021  1:lustre-MDT0021_UUID  2:33  3:lustre-MDT0021-mdtlov  4:f
#11 (224)END   marker 873 (flags=0x02, v2.8.0.0) lustre-MDT0021  &apos;add mdt&apos; Tue Apr 19 08:55:48 2016-
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;I am checking the debug log on the MGS to see why this happens.&lt;/p&gt;

&lt;p&gt;Olaf, could you please try to reproduce the issue with debug level = -1 on the MGS? It will help me figure out what happens there. Thanks.&lt;/p&gt;</comment>
                            <comment id="149479" author="di.wang" created="Tue, 19 Apr 2016 20:41:54 +0000"  >&lt;p&gt;Ah, it looks like a race when the MGS registers two MDTs at the same time; I will cook up a patch.&lt;/p&gt;</comment>
                            <comment id="149486" author="ofaaland" created="Tue, 19 Apr 2016 21:39:54 +0000"  >&lt;p&gt;Attached debug log from the MGS with debug = -1, captured while the MDTs were coming up for the first time.&lt;/p&gt;

&lt;p&gt;In this log, MDT0002 (on catalyst243, NID 192.168.112.243@o2ib15) encountered the error.&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;Lustre: Lustre: Build Version: 2.8.0
LustreError: 11826:0:(obd_config.c:1262:class_process_config()) no device for: lustre-MDT0002-mdtlov
LustreError: 11826:0:(obd_config.c:1666:class_config_llog_handler()) MGC192.168.112.240@o2ib15: cfg command failed: rc = -22
Lustre:    cmd=cf014 0:lustre-MDT0002-mdtlov  1:lustre-MDT0023_UUID  2:35  3:1

LustreError: 15b-f: MGC192.168.112.240@o2ib15: The configuration from log &apos;lustre-MDT0002&apos;failed from the MGS (-22).  Make sure this client and the MGS are running compatible versions of Lustre.
LustreError: 11696:0:(obd_mount_server.c:1309:server_start_targets()) failed to start server lustre-MDT0002: -22
LustreError: 11696:0:(obd_mount_server.c:1798:server_fill_super()) Unable to start targets: -22
LustreError: 11696:0:(obd_mount_server.c:1512:server_put_super()) no obd lustre-MDT0002
Lustre: server umount lustre-MDT0002 complete
LustreError: 11696:0:(obd_mount.c:1426:lustre_fill_super()) Unable to mount  (-22)
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
                            <comment id="149489" author="gerrit" created="Tue, 19 Apr 2016 21:49:47 +0000"  >&lt;p&gt;wangdi (di.wang@intel.com) uploaded a new patch: &lt;a href=&quot;http://review.whamcloud.com/19658&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/19658&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-8044&quot; title=&quot;class_process_config() no device for: lustre-MDT0021-mdtlov&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-8044&quot;&gt;&lt;del&gt;LU-8044&lt;/del&gt;&lt;/a&gt; mgs: Only add OSP for registered MDT&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 3ccd18da205192ec0ad527ec88b69793aa5e6670&lt;/p&gt;</comment>
                            <comment id="149492" author="di.wang" created="Tue, 19 Apr 2016 22:03:19 +0000"  >&lt;p&gt;Olaf: the new debug log does not seem to have caught the failure; it was probably gathered too late, or debug = -1 made the dk log too big to hold all of the information. In any case, patch 19658 should help here. Please try it when you have another chance. Thanks.&lt;/p&gt;</comment>
                            <comment id="149583" author="jgmitter" created="Wed, 20 Apr 2016 20:25:37 +0000"  >&lt;p&gt;Hi Di,&lt;br/&gt;
Assigning to you as I see you have already commented and provided a fix in a new patch.&lt;br/&gt;
Thanks.&lt;br/&gt;
Joe&lt;/p&gt;</comment>
                            <comment id="155589" author="gerrit" created="Tue, 14 Jun 2016 03:46:41 +0000"  >&lt;p&gt;Oleg Drokin (oleg.drokin@intel.com) merged in patch &lt;a href=&quot;http://review.whamcloud.com/19658/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/19658/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-8044&quot; title=&quot;class_process_config() no device for: lustre-MDT0021-mdtlov&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-8044&quot;&gt;&lt;del&gt;LU-8044&lt;/del&gt;&lt;/a&gt; mgs: Only add OSP for registered MDT&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: c67a74b55c126ec1be6c195cb2e8cb8c2e6cf868&lt;/p&gt;</comment>
                            <comment id="155774" author="jgmitter" created="Wed, 15 Jun 2016 13:15:25 +0000"  >&lt;p&gt;The patch has landed to master for 2.9.0.&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                            <outwardlinks description="is related to ">
                                                        </outwardlinks>
                                                        </issuelinktype>
                    </issuelinks>
                <attachments>
                            <attachment id="21210" name="dk.catalyst240" size="3408656" author="ofaaland" created="Tue, 19 Apr 2016 18:08:58 +0000"/>
                            <attachment id="21208" name="dmesg.catalyst240" size="866" author="ofaaland" created="Tue, 19 Apr 2016 18:08:58 +0000"/>
                            <attachment id="21209" name="ldev.conf" size="2907" author="ofaaland" created="Tue, 19 Apr 2016 18:08:58 +0000"/>
                            <attachment id="21211" name="llog.MDT0000.onMGS" size="42104" author="ofaaland" created="Tue, 19 Apr 2016 18:41:47 +0000"/>
                            <attachment id="21207" name="llog.MDT0021.onMGS" size="42296" author="ofaaland" created="Tue, 19 Apr 2016 18:08:58 +0000"/>
                            <attachment id="21215" name="mgs.register_mdts.dk.gz" size="243" author="ofaaland" created="Tue, 19 Apr 2016 21:39:53 +0000"/>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                    <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10490" key="com.atlassian.jira.plugin.system.customfieldtypes:datepicker">
                        <customfieldname>End date</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>Fri, 3 Jun 2016 17:59:34 +0000</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzy8mn:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10493" key="com.atlassian.jira.plugin.system.customfieldtypes:datepicker">
                        <customfieldname>Start date</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>Tue, 19 Apr 2016 17:59:34 +0000</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                </customfields>
    </item>
</channel>
</rss>