<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 02:06:36 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
<language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-7171] Hard Failover recovery-small test_65: Inappropriate ioctl for device</title>
                <link>https://jira.whamcloud.com/browse/LU-7171</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;This issue was created by maloo for sarah_lw &amp;lt;wei3.liu@intel.com&amp;gt;&lt;/p&gt;

&lt;p&gt;This issue relates to the following test suite run: &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/2a433f74-55bb-11e5-8784-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/2a433f74-55bb-11e5-8784-5254006e85c2&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The sub-test test_65 failed with the following error:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;test_65 failed with 1
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;test log&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;== recovery-small test 65: lock enqueue for destroyed export ========================================= 09:27:59 (1441618079)
Starting client: shadow-49vm1.shadow.whamcloud.com:  -o user_xattr,flock shadow-49vm3:shadow-49vm7:/lustre /mnt/lustre2
CMD: shadow-49vm1.shadow.whamcloud.com mkdir -p /mnt/lustre2
CMD: shadow-49vm1.shadow.whamcloud.com mount -t lustre -o user_xattr,flock shadow-49vm3:shadow-49vm7:/lustre /mnt/lustre2
mount.lustre: mount shadow-49vm3:shadow-49vm7:/lustre at /mnt/lustre2 failed: Input/output error
Is the MGS running?
error on ioctl 0x4008669a for &apos;/mnt/lustre2/f65.recovery-small&apos; (3): Inappropriate ioctl for device
error: setstripe: create file &apos;/mnt/lustre2/f65.recovery-small&apos; failed
 recovery-small test_65: @@@@@@ FAIL: test_65 failed with 1 
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;client dmesg&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[ 6969.281626] Lustre: DEBUG MARKER: /usr/sbin/lctl mark == recovery-small test 65: lock enqueue for destroyed export ========================================= 09:27:59 \(1441618079\)
[ 6969.448135] Lustre: DEBUG MARKER: == recovery-small test 65: lock enqueue for destroyed export ========================================= 09:27:59 (1441618079)
[ 6969.499596] Lustre: DEBUG MARKER: mkdir -p /mnt/lustre2
[ 6969.510144] Lustre: DEBUG MARKER: mount -t lustre -o user_xattr,flock shadow-49vm3:shadow-49vm7:/lustre /mnt/lustre2
[ 6972.666189] LustreError: 166-1: MGC10.1.6.57@tcp: Connection to MGS (at 10.1.6.61@tcp) was lost; in progress operations using this service will fail
[ 6972.669074] LustreError: Skipped 2 previous similar messages
[ 6972.671309] LustreError: 15c-8: MGC10.1.6.57@tcp: The configuration from log &apos;lustre-client&apos; failed (-5). This may be the result of communication errors between this node and the MGS, a bad configuration, or other errors. See the syslog for more information.
[ 6972.677497] Lustre: Unmounted lustre-client
[ 6972.679317] LustreError: 13842:0:(obd_mount.c:1342:lustre_fill_super()) Unable to mount  (-5)
[ 6972.916786] Lustre: DEBUG MARKER: /usr/sbin/lctl mark  recovery-small test_65: @@@@@@ FAIL: test_65 failed with 1 
[ 6973.085666] Lustre: DEBUG MARKER: recovery-small test_65: @@@@@@ FAIL: test_65 failed with 1
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</description>
                <environment>client and server: lustre-master build #3175 RHEL7 zfs</environment>
        <key id="32156">LU-7171</key>
            <summary>Hard Failover recovery-small test_65: Inappropriate ioctl for device</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="4" iconUrl="https://jira.whamcloud.com/images/icons/priorities/minor.svg">Minor</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="5">Cannot Reproduce</resolution>
                                        <assignee username="hongchao.zhang">Hongchao Zhang</assignee>
                                    <reporter username="maloo">Maloo</reporter>
                        <labels>
                    </labels>
                <created>Wed, 16 Sep 2015 06:51:46 +0000</created>
                <updated>Fri, 28 Apr 2017 14:43:08 +0000</updated>
                            <resolved>Fri, 28 Apr 2017 14:43:08 +0000</resolved>
                                    <version>Lustre 2.8.0</version>
                    <version>Lustre 2.10.0</version>
                                                        <due></due>
                            <votes>0</votes>
                                    <watches>6</watches>
                                                                            <comments>
                            <comment id="130403" author="standan" created="Wed, 14 Oct 2015 17:50:17 +0000"  >&lt;p&gt;Another instance for EL6.7 Server/Client - ZFS in 2.7.61 tag:&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/4964bc54-6d42-11e5-bf10-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/4964bc54-6d42-11e5-bf10-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="138939" author="standan" created="Thu, 14 Jan 2016 17:36:36 +0000"  >&lt;p&gt;Also seen on master for tag 2.7.65 with an SELinux-enabled client.&lt;br/&gt;
1 Client, 1 OSS - 2 OSTs, 1 MDS - 1 MDT&lt;br/&gt;
Build# 3301&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;== recovery-small test 65: lock enqueue for destroyed export == 00:09:54 (1452730194)
Starting client: eagle-52vm5.eagle.hpdd.intel.com:  -o user_xattr,flock eagle-52vm2@tcp:/lustre /mnt/lustre2
mount.lustre: mount eagle-52vm2@tcp:/lustre at /mnt/lustre2 failed: Input/output error
Is the MGS running?
error on ioctl 0x4008669a for &apos;/mnt/lustre2/f65.recovery-small&apos; (3): Inappropriate ioctl for device
error: setstripe: create file &apos;/mnt/lustre2/f65.recovery-small&apos; failed
 recovery-small test_65: @@@@@@ FAIL: test_65 failed with 1 
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
                            <comment id="139399" author="standan" created="Wed, 20 Jan 2016 02:15:38 +0000"  >&lt;p&gt;Another instance found for hardfailover: EL7 Server/Client - ZFS&lt;br/&gt;
build# 3305&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/fbbee064-bbc6-11e5-8506-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/fbbee064-bbc6-11e5-8506-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="139775" author="pjones" created="Fri, 22 Jan 2016 18:48:58 +0000"  >&lt;p&gt;Hongchao&lt;/p&gt;

&lt;p&gt;Could you please look into this one?&lt;/p&gt;

&lt;p&gt;Peter&lt;/p&gt;</comment>
                            <comment id="140007" author="hongchao.zhang" created="Tue, 26 Jan 2016 09:23:59 +0000"  >&lt;p&gt;I have analyzed these failed cases: they failed due to the loss of the connection between the client and the MGS, and the Lustre mount&lt;br/&gt;
then failed as a result. No logs were found to indicate what triggered the disconnection, and the MGS simply did NOT&lt;br/&gt;
receive the (PING) request from the client. I&apos;m afraid it could be related to the network itself.&lt;/p&gt;</comment>
                            <comment id="141404" author="standan" created="Fri, 5 Feb 2016 18:29:53 +0000"  >&lt;p&gt;Another instance for master, build# 3316&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/fe17414c-cc2b-11e5-b519-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/fe17414c-cc2b-11e5-b519-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="141704" author="standan" created="Tue, 9 Feb 2016 23:56:17 +0000"  >&lt;p&gt;Another instance found for hardfailover: EL7 Server/Client - ZFS, tag 2.7.66, master build 3314&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sessions/f0dd9616-ca6e-11e5-9609-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sessions/f0dd9616-ca6e-11e5-9609-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="143563" author="standan" created="Wed, 24 Feb 2016 16:52:46 +0000"  >&lt;p&gt;Another instance found on b2_8 for failover testing, build# 6.&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sessions/54ec62da-d99d-11e5-9ebe-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sessions/54ec62da-d99d-11e5-9ebe-5254006e85c2&lt;/a&gt;&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sessions/c5a8e44c-d9c7-11e5-85dd-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sessions/c5a8e44c-d9c7-11e5-85dd-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="193892" author="hongchao.zhang" created="Fri, 28 Apr 2017 14:40:48 +0000"  >&lt;p&gt;this problem has not occurred since Jun 11, 2016&lt;/p&gt;</comment>
                            <comment id="193894" author="pjones" created="Fri, 28 Apr 2017 14:43:08 +0000"  >&lt;p&gt;ok then let&apos;s close the ticket&lt;/p&gt;</comment>
                    </comments>
                    <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                    <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzxnrz:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                </customfields>
    </item>
</channel>
</rss>