<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 02:30:57 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary, append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92">
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
<language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-9976] mount.lustre: mount /dev/sdd at /mnt/test-fs-MDT0000 failed: No such file or directory</title>
                <link>https://jira.whamcloud.com/browse/LU-9976</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;While trying to mount a Lustre target for initial registration I got:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;# mount --verbose -t lustre /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_disk3 /mnt/test-fs-MDT0000
arg[0] = /sbin/mount.lustre
arg[1] = -v
arg[2] = -o
arg[3] = rw
arg[4] = /dev/sdd
arg[5] = /mnt/test-fs-MDT0000
source = /dev/sdd (/dev/sdd), target = /mnt/test-fs-MDT0000
options = rw
checking for existing Lustre data: found
Reading CONFIGS/mountdata
Writing CONFIGS/mountdata
mounting device /dev/sdd at /mnt/test-fs-MDT0000, flags=0x1000000 options=user_xattr,errors=remount-ro,,osd=osd-ldiskfs,mgsnode=10.14.81.0@tcp:10.14.81.1@tcp,virgin,update,param=mgsnode=10.14.81.0@tcp:10.14.81.1@tcp,param=failover.node=10.14.81.0@tcp,svname=test-fs-MDT0000,device=/dev/sdd

mount.lustre: increased /sys/block/sdd/queue/max_sectors_kb from 512 to 16384
mount.lustre: mount /dev/sdd at /mnt/test-fs-MDT0000 failed: No such file or directory retries left: 0
mount.lustre: mount /dev/sdd at /mnt/test-fs-MDT0000 failed: No such file or directory
Is the MGS specification correct?
Is the filesystem name correct?
If upgrading, is the copied client log valid? (see upgrade docs)
# echo $?
2

&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;Error messages on the MDS trying to register the target:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;Sep 11 19:58:52 lotus-10vm6.lotus.hpdd.lab.intel.com kernel: LDISKFS-fs (sdd): mounted filesystem with ordered data mode. Opts: errors=remount-ro
Sep 11 19:58:52 lotus-10vm6.lotus.hpdd.lab.intel.com kernel: LDISKFS-fs (sdd): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
Sep 11 19:58:59 lotus-10vm6.lotus.hpdd.lab.intel.com kernel: Lustre: 1771:0:(client.c:2114:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1505185132/real 1505185132]  req@ff
Sep 11 19:58:59 lotus-10vm6.lotus.hpdd.lab.intel.com kernel: LustreError: 166-1: MGC10.14.81.0@tcp: Connection to MGS (at 10.14.81.0@tcp) was lost; in progress operations using this service will fail
Sep 11 19:58:59 lotus-10vm6.lotus.hpdd.lab.intel.com kernel: LustreError: 13a-8: Failed to get MGS log test-fs-MDT0000 and no local copy.
Sep 11 19:58:59 lotus-10vm6.lotus.hpdd.lab.intel.com kernel: LustreError: 15c-8: MGC10.14.81.0@tcp: The configuration from log &apos;test-fs-MDT0000&apos; failed (-2). This may be the result of communication errors bet
Sep 11 19:58:59 lotus-10vm6.lotus.hpdd.lab.intel.com kernel: LustreError: 1771:0:(obd_mount_server.c:1373:server_start_targets()) failed to start server test-fs-MDT0000: -2
Sep 11 19:58:59 lotus-10vm6.lotus.hpdd.lab.intel.com kernel: LustreError: 1771:0:(obd_mount_server.c:1866:server_fill_super()) Unable to start targets: -2
Sep 11 19:58:59 lotus-10vm6.lotus.hpdd.lab.intel.com kernel: LustreError: 1771:0:(obd_mount_server.c:1576:server_put_super()) no obd test-fs-MDT0000
Sep 11 19:59:03 lotus-10vm6.lotus.hpdd.lab.intel.com kernel: Lustre: server umount test-fs-MDT0000 complete
Sep 11 19:59:03 lotus-10vm6.lotus.hpdd.lab.intel.com kernel: Lustre: Skipped 1 previous similar message
Sep 11 19:59:03 lotus-10vm6.lotus.hpdd.lab.intel.com kernel: LustreError: 1771:0:(obd_mount.c:1505:lustre_fill_super()) Unable to mount  (-2)

&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;Error messages on the MGS:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;Sep 11 19:58:44 lotus-10vm5.lotus.hpdd.lab.intel.com kernel: LDISKFS-fs (sdc): mounted filesystem with ordered data mode. Opts: errors=remount-ro
Sep 11 19:58:44 lotus-10vm5.lotus.hpdd.lab.intel.com kernel: LDISKFS-fs (sdc): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
Sep 11 19:58:44 lotus-10vm5.lotus.hpdd.lab.intel.com kernel: Lustre: 30767:0:(osd_handler.c:7007:osd_mount()) MGS-osd: device /dev/sdc was upgraded from Lustre-1.x without enabling the dirdata feature. If you
 do not want to downgrade to Lustre-1.x again, you can enable it via &apos;tune2fs -O dirdata device&apos;
Sep 11 19:58:44 lotus-10vm5.lotus.hpdd.lab.intel.com kernel: Lustre: MGS: Connection restored to MGC10.14.81.0@tcp_0 (at 0@lo)
Sep 11 19:59:03 lotus-10vm5.lotus.hpdd.lab.intel.com kernel: Lustre: MGS: Received new LWP connection from 10.14.81.1@tcp, removing former export from same NID
Sep 11 19:59:03 lotus-10vm5.lotus.hpdd.lab.intel.com kernel: Lustre: MGS: Connection restored to 0f2304fc-a4f2-fb0d-fe61-0eb9e38e1b0a (at 10.14.81.1@tcp)
Sep 11 19:59:03 lotus-10vm5.lotus.hpdd.lab.intel.com kernel: Lustre: Skipped 1 previous similar message

&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;A subsequent attempt to mount the target was successful.&lt;/p&gt;

&lt;p&gt;I have attached the Lustre debug logs for the MDS and MGS where this occurred.&lt;/p&gt;</description>
                <environment>Lustre: Build Version: 2.10.0_71_g6d59523</environment>
        <key id="48277">LU-9976</key>
            <summary>mount.lustre: mount /dev/sdd at /mnt/test-fs-MDT0000 failed: No such file or directory</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="2" iconUrl="https://jira.whamcloud.com/images/icons/priorities/critical.svg">Critical</priority>
                        <status id="1" iconUrl="https://jira.whamcloud.com/images/icons/statuses/open.png" description="The issue is open and ready for the assignee to start work on it.">Open</status>
                    <statusCategory id="2" key="new" colorName="default"/>
                                    <resolution id="-1">Unresolved</resolution>
                                        <assignee username="wc-triage">WC Triage</assignee>
                                    <reporter username="brian">Brian Murrell</reporter>
                        <labels>
                    </labels>
                <created>Tue, 12 Sep 2017 14:03:31 +0000</created>
                <updated>Tue, 3 Oct 2017 15:35:54 +0000</updated>
                                            <version>Lustre 2.10.0</version>
                                                        <due></due>
                            <votes>0</votes>
                                    <watches>6</watches>
                                                                            <comments>
                            <comment id="208253" author="adilger" created="Wed, 13 Sep 2017 15:35:45 +0000"  >&lt;p&gt;At a minimum, the message about the missing &lt;tt&gt;dirdata&lt;/tt&gt; feature should be removed for the MGS. That is only relevant for the MDT.&lt;/p&gt;</comment>
                            <comment id="209571" author="brian" created="Tue, 26 Sep 2017 14:06:48 +0000"  >&lt;p&gt;Do we know what the cause of this is and what remediation should be taken when this occurs?&lt;/p&gt;

&lt;p&gt;Until we know these, we can&apos;t know what IML should do about it, and remediating this is blocking the 4.0 RC.&lt;/p&gt;</comment>
                            <comment id="210195" author="jhammond" created="Tue, 3 Oct 2017 15:18:26 +0000"  >&lt;p&gt;In a production setting, IML should at least retry the mount.&lt;/p&gt;</comment>
                    </comments>
                    <attachments>
                            <attachment id="28269" name="lctl_debug-mds.bz2" size="7181938" author="brian" created="Tue, 12 Sep 2017 13:58:20 +0000"/>
                            <attachment id="28268" name="lctl_debug-mgs.bz2" size="7637650" author="brian" created="Tue, 12 Sep 2017 13:58:21 +0000"/>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzzjzz:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>