<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 01:48:19 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-5076] Test failure on test suite conf-sanity, subtest test_46a test failed to respond and timed out</title>
                <link>https://jira.whamcloud.com/browse/LU-5076</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;This issue was created by maloo for wangdi &amp;lt;di.wang@intel.com&amp;gt;&lt;/p&gt;

&lt;p&gt;This issue relates to the following test suite run: &lt;a href=&quot;http://maloo.whamcloud.com/test_sets/0638f47c-dd56-11e3-8e9b-52540035b04c&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://maloo.whamcloud.com/test_sets/0638f47c-dd56-11e3-8e9b-52540035b04c&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The sub-test test_46a failed with the following error:&lt;/p&gt;
&lt;blockquote&gt;

&lt;p&gt;Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000240000400-0x0000000280000400):1:ost&lt;br/&gt;
Lustre: lustre-MDT0000: Client lustre-MDT0000-lwp-OST0006_UUID seen on new nid 10.10.4.199@tcp when existing nid 10.10.4.203@tcp is already connected&lt;br/&gt;
Lustre: Skipped 3 previous similar messages&lt;br/&gt;
Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000300000400-0x0000000340000400):4:ost&lt;br/&gt;
Lustre: Skipped 2 previous similar messages&lt;br/&gt;
Lustre: lustre-MDT0000: Client lustre-MDT0000-lwp-OST0006_UUID seen on new nid 10.10.4.199@tcp when existing nid 10.10.4.203@tcp is already connected&lt;br/&gt;
Lustre: Skipped 6 previous similar messages&lt;br/&gt;
Lustre: lustre-MDT0000: already connected client lustre-MDT0000-lwp-OST0000_UUID (at 10.10.4.199@tcp) with handle 0x2fb538c53b7cc26b. Rejecting client with the same UUID trying to reconnect with handle 0x4f578b0725086d9c&lt;br/&gt;
Lustre: Skipped 62 previous similar messages&lt;br/&gt;
Lustre: lustre-MDT0000: Client lustre-MDT0000-lwp-OST0006_UUID seen on new nid 10.10.4.199@tcp when existing nid 10.10.4.203@tcp is already connected&lt;br/&gt;
Lustre: Skipped 12 previous similar messages&lt;br/&gt;
Lustre: lustre-MDT0000: Client lustre-MDT0000-lwp-OST0006_UUID seen on new nid 10.10.4.199@tcp when existing nid 10.10.4.203@tcp is already connected&lt;br/&gt;
Lustre: Skipped 24 previous similar messages&lt;br/&gt;
LustreError: 11-0: lustre-OST0006-osc-MDT0000: Communicating with 10.10.4.199@tcp, operation ost_connect failed with -11.&lt;br/&gt;
LustreError: Skipped 94 previous similar messages&lt;br/&gt;
Lustre: lustre-MDT0000: Client lustre-MDT0000-lwp-OST0006_UUID seen on new nid 10.10.4.199@tcp when existing nid 10.10.4.203@tcp is already connected&lt;br/&gt;
Lustre: Skipped 50 previous similar messages&lt;br/&gt;
Lustre: lustre-MDT0000: already connected client lustre-MDT0000-lwp-OST0001_UUID (at 10.10.4.199@tcp) with handle 0x2fb538c53b7cc33d. Rejecting client with the same UUID trying to reconnect with handle 0x4f578b0725086f01&lt;br/&gt;
Lustre: Skipped 306 previous similar messages&lt;br/&gt;
Lustre: lustre-MDT0000: Client lustre-MDT0000-lwp-OST0006_UUID seen on new nid 10.10.4.199@tcp when existing nid 10.10.4.203@tcp is already connected&lt;br/&gt;
Lustre: Skipped 102 previous similar messages&lt;br/&gt;
LustreError: 11-0: lustre-OST0006-osc-MDT0000: Communicating with 10.10.4.199@tcp, operation ost_connect failed with -11.&lt;br/&gt;
LustreError: Skipped 120 previous similar messages&lt;br/&gt;
Lustre: lustre-MDT0000: already connected client lustre-MDT0000-lwp-OST0000_UUID (at 10.10.4.199@tcp) with handle 0x2fb538c53b7cc26b. Rejecting client with the same UUID trying to reconnect with handle 0x4f578b0725086d9c&lt;br/&gt;
Lustre: Skipped 364 previous similar messages&lt;br/&gt;
Lustre: lustre-MDT0000: Client lustre-MDT0000-lwp-OST0006_UUID seen on new nid 10.10.4.199@tcp when existing nid 10.10.4.203@tcp is already connected&lt;br/&gt;
Lustre: Skipped 120 previous similar messages&lt;br/&gt;
LustreError: 11-0: lustre-OST0006-osc-MDT0000: Communicating with 10.10.4.199@tcp, operation ost_connect failed with -11.&lt;br/&gt;
test failed to respond and timed out&lt;/p&gt;&lt;/blockquote&gt;

&lt;p&gt;This failure is a bit strange. According to the syslog on MDS0:&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;Lustre: lustre-MDT0000: Client lustre-MDT0000-lwp-OST0006_UUID seen on new nid 10.10.4.199@tcp when existing nid 10.10.4.203@tcp is already connected
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;But the IP of the OSS should be 10.10.4.199; I do not know where this 10.10.4.203 comes from, so I am not sure this is a TEI ticket. If someone confirms this is a TEI ticket, please close this one. Thanks.&lt;/p&gt;

&lt;p&gt;Info required for matching: conf-sanity 46a&lt;/p&gt;</description>
                <environment></environment>
        <key id="24747">LU-5076</key>
            <summary>Test failure on test suite conf-sanity, subtest test_46a test failed to respond and timed out</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="4" iconUrl="https://jira.whamcloud.com/images/icons/priorities/minor.svg">Minor</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="3">Duplicate</resolution>
                                        <assignee username="mdiep">Minh Diep</assignee>
                                    <reporter username="maloo">Maloo</reporter>
                        <labels>
                    </labels>
                <created>Sat, 17 May 2014 01:20:59 +0000</created>
                <updated>Tue, 10 Jun 2014 17:07:21 +0000</updated>
                            <resolved>Tue, 10 Jun 2014 17:07:21 +0000</resolved>
                                                                        <due></due>
                            <votes>0</votes>
                                    <watches>4</watches>
                                                                            <comments>
<comment id="84457" author="adilger" created="Tue, 20 May 2014 15:37:28 +0000"  >&lt;p&gt;It would be worthwhile to track down which VM cluster this other IP address belongs to, and why it thinks it should be connecting to this MDS.&lt;/p&gt;

&lt;p&gt;Separately, one option to avoid such problems is to use a more unique $NAME variable for each test cluster (e.g. the hostname of the master test node instead of ALWAYS &quot;lustre&quot;), so that clients and servers are not able to connect to the wrong system under test.&lt;/p&gt;</comment>
                            <comment id="86235" author="adilger" created="Tue, 10 Jun 2014 17:07:21 +0000"  >&lt;p&gt;Closing this as a duplicate of TEI-1993. There are two possible fixes in the test infrastructure:&lt;/p&gt;
&lt;ul class=&quot;alternate&quot; type=&quot;square&quot;&gt;
	&lt;li&gt;fix the test system so it does not leave test nodes running after changing the cluster config&lt;/li&gt;
	&lt;li&gt;fix the test system to assign more unique filesystem names for tests, so that old servers do not think they should be connecting to new servers&lt;/li&gt;
&lt;/ul&gt;
</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10010">
                    <name>Duplicate</name>
                                            <outwardlinks description="duplicates">
                                                        </outwardlinks>
                                                        </issuelinktype>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                            <outwardlinks description="is related to ">
                                        <issuelink>
            <issuekey id="24725">LU-5064</issuekey>
        </issuelink>
                            </outwardlinks>
                                                                <inwardlinks description="is related to">
                                                        </inwardlinks>
                                    </issuelinktype>
                    </issuelinks>
                <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzwmon:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>14010</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>