<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 02:10:56 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92">
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-7673] conf-sanity test failure causes multiple tests to fail</title>
                <link>https://jira.whamcloud.com/browse/LU-7673</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;There are several examples in conf-sanity and other test suites where one test fails and that failure causes several of the tests that follow to fail. This ticket is to harden tests so they can recover when a previous test fails.&lt;/p&gt;

&lt;p&gt;For example, when conf-sanity test 52 fails, test 53a also fails because the MDS is already mounted. Test 52 calls cleanup() at the end of the test, and cleanup() calls stop_ost and stop_mds and unloads the modules. Test 53a runs right after test 52 and calls setup(), which calls start_mds, start_ost, etc. and returns an error if any of these fail. Thus, when test 52 fails, it never calls cleanup(), so all servers remain mounted when test 53a starts. Test 53a then calls setup() and returns an error. Test 53b starts, calls setup(), and fails. Test 54a fails because the OST is still mounted, and test 54b fails because the MDT is still mounted.&lt;/p&gt;

&lt;p&gt;One example of this cascade of errors is at&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/3d0a4d2c-ba9d-11e5-87b4-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/3d0a4d2c-ba9d-11e5-87b4-5254006e85c2&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Another example of one test failure leading to several others is at &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/df65fd10-bad8-11e5-b3d5-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/df65fd10-bad8-11e5-b3d5-5254006e85c2&lt;/a&gt;. In this case, test 44 failed, which caused test 45 to fail for the same reason as above: cleanup() was not called because of the test failure, and the next test then called setup().&lt;/p&gt;

&lt;p&gt;There are a few ways to stop these failures. Two possible solutions are:&lt;br/&gt;
1. Install a shell trap that unmounts all servers at the end of these tests, even when the test exits early on failure.&lt;br/&gt;
2. Modify setup() to return a value when start_mds, start_ost, mount_client, or client_up fails. Each test that calls setup() can then decide whether, for example, an already mounted MDS is a reason to fail the test or to continue.&lt;/p&gt;</description>
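The first proposed solution can be sketched in shell. This is a minimal, self-contained illustration only: stop_ost and stop_mds below are stubs standing in for the real conf-sanity helpers, and the subshell simulates one test aborting half-way through so the trap's teardown can be observed.

```shell
#!/bin/bash
# Sketch of option 1: arm an EXIT trap so a test that dies mid-way still
# tears down its servers, letting the next test's setup() start clean.
# stop_ost/stop_mds are stand-ins for the real test-framework helpers.

stop_ost() { echo "stopping OST"; }
stop_mds() { echo "stopping MDS"; }

cleanup_on_exit() {
    # Best-effort teardown: ignore errors so cleanup itself cannot fail.
    stop_ost || true
    stop_mds || true
}

# Simulate a test that fails part-way through, inside a subshell so the
# surrounding script keeps running afterwards.
out=$(
    set -e
    trap cleanup_on_exit EXIT
    false                       # simulated mid-test failure
    echo "test body finished"   # never reached
)
echo "$out"
```

Option 2 would instead have setup() propagate a nonzero status so each caller can run `setup || ...` and decide for itself whether an already mounted server is fatal.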
                <environment></environment>
        <key id="34132">LU-7673</key>
        <summary>conf-sanity test failure causes multiple tests to fail</summary>
        <type id="4" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11310&amp;avatarType=issuetype">Improvement</type>
        <priority id="4" iconUrl="https://jira.whamcloud.com/images/icons/priorities/minor.svg">Minor</priority>
        <status id="1" iconUrl="https://jira.whamcloud.com/images/icons/statuses/open.png" description="The issue is open and ready for the assignee to start work on it.">Open</status>
        <statusCategory id="2" key="new" colorName="default"/>
        <resolution id="-1">Unresolved</resolution>
        <assignee username="wc-triage">WC Triage</assignee>
        <reporter username="jamesanunez">James Nunez</reporter>
        <labels>
            <label>tests</label>
        </labels>
        <created>Fri, 15 Jan 2016 02:06:56 +0000</created>
        <updated>Tue, 6 Oct 2020 03:23:55 +0000</updated>
        <version>Lustre 2.8.0</version>
        <due></due>
        <votes>0</votes>
        <watches>2</watches>
        <comments>
                            <comment id="140206" author="jamesanunez" created="Wed, 27 Jan 2016 15:27:39 +0000"  >&lt;p&gt;This looks like the same issue: when conf-sanity test 22 fails, test 23a fails because the MDT is already mounted. Logs at&lt;br/&gt;
2016-01-26 16:38:56  - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/0285473a-c47d-11e5-8866-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/0285473a-c47d-11e5-8866-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="141844" author="standan" created="Wed, 10 Feb 2016 21:48:19 +0000"  >&lt;p&gt;Another instance found for interop tag 2.7.66 - EL7 Server/2.7.1 Client, build# 3316&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/c3a8632c-cc91-11e5-b80c-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/c3a8632c-cc91-11e5-b80c-5254006e85c2&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Another instance found for interop tag 2.7.66 - EL6.7 Server/2.7.1 Client, build# 3316&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/55d7aa40-cc98-11e5-b80c-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/55d7aa40-cc98-11e5-b80c-5254006e85c2&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Another instance found for interop tag 2.7.66 - EL6.7 Server/2.5.5 Client, build# 3316&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/bdea5946-cc9f-11e5-963e-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/bdea5946-cc9f-11e5-963e-5254006e85c2&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Another instance found for interop tag 2.7.66 - EL7 Server/2.5.5 Client, build# 3316&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/79a03aac-cc46-11e5-901d-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/79a03aac-cc46-11e5-901d-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="143575" author="standan" created="Wed, 24 Feb 2016 17:05:55 +0000"  >&lt;p&gt;Another instance found for interop - EL7 Server/2.7.1 Client, tag 2.7.90. &lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sessions/495aabae-d306-11e5-be5c-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sessions/495aabae-d306-11e5-be5c-5254006e85c2&lt;/a&gt;&lt;br/&gt;
Another instance found for interop - EL6.7 Server/2.7.1 Client, tag 2.7.90. &lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sessions/42ace612-d560-11e5-9cc2-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sessions/42ace612-d560-11e5-9cc2-5254006e85c2&lt;/a&gt;&lt;br/&gt;
Another instance found for interop - EL6.7 Server/2.5.5 Client, tag 2.7.90. &lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sessions/f99a2d60-d567-11e5-bc47-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sessions/f99a2d60-d567-11e5-bc47-5254006e85c2&lt;/a&gt;&lt;br/&gt;
Another instance found for interop - EL7 Server/2.5.5 Client, tag 2.7.90. &lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sessions/93baffee-d2ae-11e5-8697-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sessions/93baffee-d2ae-11e5-8697-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="166037" author="jamesanunez" created="Wed, 14 Sep 2016 16:50:16 +0000"  >&lt;p&gt;When conf-sanity test 20 fails, tests 21*, 22 and 23a typically also fail. These tests need to be more resilient to previous failures.&lt;/p&gt;

&lt;p&gt;See &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/afcbd7ba-79d6-11e6-b058-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/afcbd7ba-79d6-11e6-b058-5254006e85c2&lt;/a&gt; for one example of this cascade of failures. &lt;/p&gt;</comment>
        </comments>
        <attachments>
        </attachments>
        <subtasks>
        </subtasks>
        <customfields>
            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                <customfieldname>Development</customfieldname>
                <customfieldvalues>
                </customfieldvalues>
            </customfield>
            <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                <customfieldname>Rank</customfieldname>
                <customfieldvalues>
                    <customfieldvalue>1|hzxybb:</customfieldvalue>
                </customfieldvalues>
            </customfield>
            <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                <customfieldname>Rank (Obsolete)</customfieldname>
                <customfieldvalues>
                    <customfieldvalue>9223372036854775807</customfieldvalue>
                </customfieldvalues>
            </customfield>
        </customfields>
    </item>
</channel>
</rss>