<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 02:55:20 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92">
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-12753] sanity test_300a: FAIL: 1:stripe_count is 1, expect 2 </title>
                <link>https://jira.whamcloud.com/browse/LU-12753</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;Seeing regular failures from Oleg&apos;s test environment:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;Initial testing failed:

    sanity2@ldiskfs+DNE Failure(6653s)
    - 300a(1:stripe_count is 1, expect 2)
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;It looks like the MDS is still recovering from the shutdown in &lt;tt&gt;test_278&lt;/tt&gt; when &lt;tt&gt;test_300a&lt;/tt&gt; is started, so it isn&apos;t creating a directory stripe on the missing MDT:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[ 5346.863984] Lustre: DEBUG MARKER: == sanity test 278: Race starting MDS between MDTs stop/start ======================================== 13:15:41 (1568222141)
[ 5347.936754] Lustre: Failing over lustre-MDT0001
[ 5347.942260] Lustre: Skipped 11 previous similar messages
[ 5347.946645] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_disconnect to node 192.168.203.134@tcp failed: rc = -19
[ 5347.949716] LustreError: 16840:0:(osp_dev.c:485:osp_disconnect()) lustre-MDT0000-osp-MDT0001: can&apos;t disconnect: rc = -19
[ 5347.954701] LustreError: 16840:0:(lod_dev.c:267:lod_sub_process_config()) lustre-MDT0001-mdtlov: error cleaning up LOD index 0: cmd 0xcf031 : rc = -19
[ 5348.090212] LustreError: 16840:0:(libcfs_fail.h:169:cfs_race()) cfs_race id 60c sleeping
[ 5348.901928] Lustre: server umount lustre-MDT0000 complete
[ 5348.905261] Lustre: Skipped 10 previous similar messages
[ 5351.974023] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 5352.048229] LustreError: 17417:0:(libcfs_fail.h:174:cfs_race()) cfs_fail_race id 60c waking
[ 5352.049878] LustreError: 16840:0:(libcfs_fail.h:172:cfs_race()) cfs_fail_race id 60c awake: rc=0
[ 5352.050393] Lustre: lustre-MDT0000-lwp-OST0001: Connection to lustre-MDT0000 (at 192.168.203.134@tcp) was lost; in progress operations using this service will wait for recovery to complete
[ 5352.050394] Lustre: Skipped 1 previous similar message
[ 5352.051035] LustreError: 166-1: MGC192.168.203.134@tcp: Connection to MGS (at 192.168.203.134@tcp) was lost; in progress operations using this service will fail
[ 5352.052430] Lustre: Evicted from MGS (at 192.168.203.134@tcp) after server handle changed from 0xb4695c9561748b31 to 0xb4695c9561c72f3e
[ 5354.166243] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180
[ 5355.558035] Lustre: DEBUG MARKER: oleg334-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8
[ 5359.121998] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 5359.290980] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_connect to node 192.168.203.134@tcp failed: rc = -114
[ 5359.294022] LustreError: Skipped 1 previous similar message
[ 5360.745846] Lustre: DEBUG MARKER: oleg334-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8
[ 5363.943859] Lustre: DEBUG MARKER: == sanity test 300a: basic striped dir sanity test =================================================== 13:15:58 (1568222158)
[ 5364.351961] Lustre: 18717:0:(ldlm_lib.c:1855:extend_recovery_timer()) lustre-MDT0001: extended recovery timer reached hard limit: 180, extend: 1
[ 5364.355319] Lustre: 18717:0:(ldlm_lib.c:1855:extend_recovery_timer()) Skipped 2 previous similar messages
[ 5365.197447] Lustre: lustre-MDT0000: Recovery over after 0:08, of 2 clients 2 recovered and 0 were evicted.
[ 5365.202538] Lustre: Skipped 2 previous similar messages
[ 5365.217245] Lustre: lustre-OST0001: deleting orphan objects from 0x0:53071 to 0x0:53089
[ 5365.217254] Lustre: lustre-OST0000: deleting orphan objects from 0x0:53209 to 0x0:53281
[ 5365.246825] Lustre: 18717:0:(ldlm_lib.c:1855:extend_recovery_timer()) lustre-MDT0001: extended recovery timer reached hard limit: 180, extend: 1
[ 5365.268393] Lustre: lustre-OST0000: deleting orphan objects from 0x280000400:8777 to 0x280000400:8897
[ 5365.268407] Lustre: lustre-OST0001: deleting orphan objects from 0x2c0000400:8931 to 0x2c0000400:8993
[ 5365.795157] Lustre: DEBUG MARKER: sanity test_300a: @@@@@@ FAIL: 1:stripe_count is 1, expect 2
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;It seems likely that we need something like &quot;&lt;tt&gt;wait_recovery_complete mds2&lt;/tt&gt;&quot; at the end of &lt;tt&gt;test_278()&lt;/tt&gt; instead of letting it bleed into the next test.&lt;/p&gt;</description>
                <environment>olegtest</environment>
        <key id="56895">LU-12753</key>
            <summary>sanity test_300a: FAIL: 1:stripe_count is 1, expect 2 </summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="4" iconUrl="https://jira.whamcloud.com/images/icons/priorities/minor.svg">Minor</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="adilger">Andreas Dilger</assignee>
                                    <reporter username="adilger">Andreas Dilger</reporter>
                        <labels>
                    </labels>
                <created>Wed, 11 Sep 2019 21:44:16 +0000</created>
                <updated>Fri, 20 Sep 2019 14:48:15 +0000</updated>
                            <resolved>Fri, 20 Sep 2019 14:48:15 +0000</resolved>
                                                    <fixVersion>Lustre 2.13.0</fixVersion>
                                        <due></due>
                            <votes>0</votes>
                                    <watches>2</watches>
                                                                            <comments>
                            <comment id="254556" author="gerrit" created="Wed, 11 Sep 2019 22:12:18 +0000"  >&lt;p&gt;Andreas Dilger (adilger@whamcloud.com) uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/36167&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/36167&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-12753&quot; title=&quot;sanity test_300a: FAIL: 1:stripe_count is 1, expect 2 &quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-12753&quot;&gt;&lt;del&gt;LU-12753&lt;/del&gt;&lt;/a&gt; tests: wait for mds2 recovery in sanity 278&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 31946d9c959db6ac389a164480058ff54f0fe45d&lt;/p&gt;</comment>
                            <comment id="255105" author="gerrit" created="Fri, 20 Sep 2019 07:55:15 +0000"  >&lt;p&gt;Oleg Drokin (green@whamcloud.com) merged in patch &lt;a href=&quot;https://review.whamcloud.com/36167/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/36167/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-12753&quot; title=&quot;sanity test_300a: FAIL: 1:stripe_count is 1, expect 2 &quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-12753&quot;&gt;&lt;del&gt;LU-12753&lt;/del&gt;&lt;/a&gt; tests: wait for mds2 recovery in sanity 278&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: ee809615178d2fdbf6f2004ec871d04c2cfbca7e&lt;/p&gt;</comment>
                            <comment id="255150" author="pjones" created="Fri, 20 Sep 2019 14:48:15 +0000"  >&lt;p&gt;Landed for 2.13&lt;/p&gt;</comment>
                    </comments>
                    <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                    <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|i00mnz:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                </customfields>
    </item>
</channel>
</rss>