<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 01:51:13 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
<language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-5407] Failover failure on test suite replay-single test_58c: test_58c failed with 2</title>
                <link>https://jira.whamcloud.com/browse/LU-5407</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;This issue was created by maloo for sarah &amp;lt;sarah@whamcloud.com&amp;gt;&lt;/p&gt;

&lt;p&gt;This issue relates to the following test suite run: &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/79386184-0f6e-11e4-aee3-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/79386184-0f6e-11e4-aee3-5254006e85c2&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The sub-test test_58c failed with the following error:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;test_58c failed with 2&lt;/p&gt;&lt;/blockquote&gt;

&lt;p&gt;May be related to &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-3625&quot; title=&quot;Test failure on test suite replay-single, subtest test_58c&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-3625&quot;&gt;&lt;del&gt;LU-3625&lt;/del&gt;&lt;/a&gt;&lt;br/&gt;
Test log:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;== replay-single test 58c: resend/reconstruct setxattr op ============================================ 09:15:20 (1405786520)
CMD: onyx-63vm3 dumpe2fs -h /dev/lvm-Role_MDS/P1 2&amp;gt;&amp;amp;1 |
		grep -E -q &apos;(ea_inode|large_xattr)&apos;
Starting client: onyx-63vm1: -o user_xattr,flock onyx-63vm3:onyx-63vm7:/lustre /mnt/lustre2
CMD: onyx-63vm1 mkdir -p /mnt/lustre2
CMD: onyx-63vm1 mount -t lustre -o user_xattr,flock onyx-63vm3:onyx-63vm7:/lustre /mnt/lustre2
mount.lustre: mount onyx-63vm3:onyx-63vm7:/lustre at /mnt/lustre2 failed: Input/output error
Is the MGS running?
CMD: onyx-63vm3 lctl set_param fail_loc=0x123
fail_loc=0x123
CMD: onyx-63vm1 setfattr -n trusted.foo -v bar /mnt/lustre/d58c.replay-single/f58c.replay-single
CMD: onyx-63vm3 lctl set_param fail_loc=0
fail_loc=0
getfattr: /mnt/lustre2/d58c.replay-single/f58c.replay-single: No such file or directory
 replay-single test_58c: @@@@@@ FAIL: test_58c failed with 2 
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;client dmesg&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[129081.475975] Lustre: DEBUG MARKER: == replay-single test 58c: resend/reconstruct setxattr op ============================================ 09:15:20 (1405786520)
[129081.758649] Lustre: DEBUG MARKER: mkdir -p /mnt/lustre2
[129081.771133] Lustre: DEBUG MARKER: mount -t lustre -o user_xattr,flock onyx-63vm3:onyx-63vm7:/lustre /mnt/lustre2
[129081.784961] LustreError: 15c-8: MGC10.2.5.138@tcp: The configuration from log &apos;lustre-client&apos; failed (-5). This may be the result of communication errors between this node and the MGS, a bad configuration, or other errors. See the syslog for more information.
[129081.785213] Lustre: Unmounted lustre-client
[129081.785422] LustreError: 22063:0:(obd_mount.c:1342:lustre_fill_super()) Unable to mount  (-5)
[129082.023875] Lustre: DEBUG MARKER: setfattr -n trusted.foo -v bar /mnt/lustre/d58c.replay-single/f58c.replay-single
[129134.337350] Lustre: DEBUG MARKER: /usr/sbin/lctl mark  replay-single test_58c: @@@@@@ FAIL: test_58c failed with 2 
[129134.445678] Lustre: DEBUG MARKER: replay-single test_58c: @@@@@@ FAIL: test_58c failed with 2
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</description>
                <environment>lustre-b2_6-rc2   client is SLES11 SP3</environment>
        <key id="25727">LU-5407</key>
            <summary>Failover failure on test suite replay-single test_58c: test_58c failed with 2</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="1" iconUrl="https://jira.whamcloud.com/images/icons/priorities/blocker.svg">Blocker</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="hongchao.zhang">Hongchao Zhang</assignee>
                                    <reporter username="maloo">Maloo</reporter>
                        <labels>
                            <label>zfs</label>
                    </labels>
                <created>Thu, 24 Jul 2014 16:28:51 +0000</created>
                <updated>Tue, 1 Nov 2016 16:16:31 +0000</updated>
                            <resolved>Tue, 26 May 2015 17:39:58 +0000</resolved>
                                    <version>Lustre 2.6.0</version>
                    <version>Lustre 2.7.0</version>
                    <version>Lustre 2.5.3</version>
                    <version>Lustre 2.8.0</version>
                                    <fixVersion>Lustre 2.8.0</fixVersion>
                                        <due></due>
                            <votes>0</votes>
                                    <watches>13</watches>
                                                                            <comments>
                            <comment id="92091" author="yujian" created="Wed, 20 Aug 2014 23:47:12 +0000"  >&lt;p&gt;Lustre Build: &lt;a href=&quot;https://build.hpdd.intel.com/job/lustre-b2_5/80/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://build.hpdd.intel.com/job/lustre-b2_5/80/&lt;/a&gt;&lt;br/&gt;
Distro/Arch: RHEL6.5/x86_64&lt;br/&gt;
Test Group: failover&lt;br/&gt;
FSTYPE=zfs&lt;/p&gt;

&lt;p&gt;The same failure occurred:&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/21ecf6d8-26a7-11e4-84f2-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/21ecf6d8-26a7-11e4-84f2-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="92173" author="yujian" created="Thu, 21 Aug 2014 20:40:01 +0000"  >&lt;p&gt;One more instance on Lustre b2_5 branch:&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/c237917a-2904-11e4-9362-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/c237917a-2904-11e4-9362-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="92905" author="yujian" created="Sun, 31 Aug 2014 21:31:02 +0000"  >&lt;p&gt;Lustre Build: &lt;a href=&quot;https://build.hpdd.intel.com/job/lustre-b2_5/86/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://build.hpdd.intel.com/job/lustre-b2_5/86/&lt;/a&gt; (2.5.3 RC1)&lt;br/&gt;
Distro/Arch: RHEL6.5/x86_64&lt;br/&gt;
Test Group: failover&lt;br/&gt;
FSTYPE=zfs&lt;/p&gt;

&lt;p&gt;The same failure occurred: &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/d3d551f2-3123-11e4-b503-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/d3d551f2-3123-11e4-b503-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="106547" author="jamesanunez" created="Tue, 10 Feb 2015 22:36:16 +0000"  >&lt;p&gt;replay-single test 58b fails with the same (similar?) getfattr error: &lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;== replay-single test 58b: test replay of setxattr op == 01:19:17 (1423041557)
pdsh@c13: mds01: ssh exited with exit code 1
Starting client: c13:  -o user_xattr,flock mds01@o2ib:/scratch /lustre/scratch2
mount.lustre: mount mds01@o2ib:/scratch at /lustre/scratch2 failed: Input/output error
Is the MGS running?
Filesystem           1K-blocks    Used  Available Use% Mounted on
mds01@o2ib:/scratch 1253608724 2831236 1185541688   1% /lustre/scratch
Failing mds1 on mds01
Stopping /lustre/scratch/mdt0 (opts:) on mds01
pdsh@c13: mds01: ssh exited with exit code 1
reboot facets: mds1
Failover mds1 to mds01
01:19:37 (1423041577) waiting for mds01 network 900 secs ...
01:19:37 (1423041577) network interface is UP
mount facets: mds1
Starting mds1:   /dev/lvm-sdc/MDT0 /lustre/scratch/mdt0
Started scratch-MDT0000
c13: mdc.scratch-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
getfattr: /lustre/scratch2/d58b.replay-single/f58b.replay-single: No such file or directory
 replay-single test_58b: @@@@@@ FAIL: test_58b failed with 1 
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Results for lustre-master 2.6.93 tag at &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/c7685e64-ade2-11e4-a0b6-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/c7685e64-ade2-11e4-a0b6-5254006e85c2&lt;/a&gt; &lt;/p&gt;</comment>
                            <comment id="109183" author="yong.fan" created="Mon, 9 Mar 2015 00:50:04 +0000"  >&lt;p&gt;Hit it on master:&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/8c1f6af0-c5a2-11e4-9ec4-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/8c1f6af0-c5a2-11e4-9ec4-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="109226" author="bfaccini" created="Mon, 9 Mar 2015 16:44:17 +0000"  >&lt;p&gt;+1 at &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/495484d0-c3cc-11e4-869d-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/495484d0-c3cc-11e4-869d-5254006e85c2&lt;/a&gt;.&lt;/p&gt;</comment>
                            <comment id="111735" author="jhammond" created="Wed, 8 Apr 2015 14:08:32 +0000"  >&lt;p&gt;I think this is likely the same issue as &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-5420&quot; title=&quot;Failure on test suite sanity test_17m: mount MDS failed, Input/output error&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-5420&quot;&gt;&lt;del&gt;LU-5420&lt;/del&gt;&lt;/a&gt;.&lt;/p&gt;</comment>
                            <comment id="114755" author="jamesanunez" created="Fri, 8 May 2015 19:12:13 +0000"  >&lt;p&gt;Replay-single test 58c used to fail with error &lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;test_58c failed with 2
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;After the patch for &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-2524&quot; title=&quot;Tests regressions: tests interrelation introduced.&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-2524&quot;&gt;&lt;del&gt;LU-2524&lt;/del&gt;&lt;/a&gt; with change ID Ib09102e50f855550db801180be3f7fc42911191a the failure message was changed to the unfortunately long &lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;xattr set (yyyyyyyyyyyyyyyyyyyyyyyyy &#8230; yyy) is not what was returned ()
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;or &lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;xattr set (bar) is not what was returned ()
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Here are all the test 58c failures since 01 April 2015. Comparing the number of failures in April with those so far in May, this test appears to be failing more often in the past week. It also looks to be impacting ZFS more than ldiskfs servers (ldiskfs runs are marked &#8220;ldiskfs&#8221;) lately.&lt;/p&gt;

&lt;p&gt;ldiskfs - 2015-04-06 08:08:06 - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/32c396f8-dc6c-11e4-a69a-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/32c396f8-dc6c-11e4-a69a-5254006e85c2&lt;/a&gt; &lt;br/&gt;
ldiskfs - 2015-04-06 10:21:25 - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/5525c23c-dcd2-11e4-9d45-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/5525c23c-dcd2-11e4-9d45-5254006e85c2&lt;/a&gt; &lt;br/&gt;
ldiskfs - 2015-04-06 12:29:29 - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/56cae55c-dc8e-11e4-a69a-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/56cae55c-dc8e-11e4-a69a-5254006e85c2&lt;/a&gt; &lt;br/&gt;
ldiskfs - 2015-04-06 14:48:52 - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/d090287c-dceb-11e4-babc-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/d090287c-dceb-11e4-babc-5254006e85c2&lt;/a&gt; &lt;br/&gt;
ldiskfs - 2015-04-07 11:42:00 - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/1d3d0418-dd57-11e4-babc-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/1d3d0418-dd57-11e4-babc-5254006e85c2&lt;/a&gt; &lt;br/&gt;
ldiskfs - 2015-04-07 18:42:37 - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/5b4c0db4-dd87-11e4-a807-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/5b4c0db4-dd87-11e4-a807-5254006e85c2&lt;/a&gt; &lt;br/&gt;
2015-04-08 12:53:15 - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/04435bac-de37-11e4-90b9-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/04435bac-de37-11e4-90b9-5254006e85c2&lt;/a&gt; &lt;br/&gt;
ldiskfs - 2015-04-08 18:56:27 - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/9bebb710-de64-11e4-90b9-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/9bebb710-de64-11e4-90b9-5254006e85c2&lt;/a&gt; &lt;br/&gt;
ldiskfs - 2015-04-09 23:09:15 - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/74066ef2-df9a-11e4-b5b0-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/74066ef2-df9a-11e4-b5b0-5254006e85c2&lt;/a&gt; &lt;br/&gt;
ldiskfs - 2015-04-11 03:57:25 - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/ccd71cf2-e08f-11e4-93dc-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/ccd71cf2-e08f-11e4-93dc-5254006e85c2&lt;/a&gt; &lt;br/&gt;
ldiskfs - 2015-04-20 14:21:18 - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/de50f082-e7a3-11e4-bdb1-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/de50f082-e7a3-11e4-bdb1-5254006e85c2&lt;/a&gt; &lt;br/&gt;
2015-04-26 11:36:43 - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/6c1a6f4c-ec31-11e4-8eb7-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/6c1a6f4c-ec31-11e4-8eb7-5254006e85c2&lt;/a&gt; &lt;br/&gt;
2015-05-01 19:16:59 - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/722e18e4-f04a-11e4-9bb2-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/722e18e4-f04a-11e4-9bb2-5254006e85c2&lt;/a&gt; &lt;br/&gt;
2015-05-02 08:13:07 - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/03f97924-f11a-11e4-9e14-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/03f97924-f11a-11e4-9e14-5254006e85c2&lt;/a&gt; &lt;br/&gt;
2015-05-02 13:40:16 - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/75a610ae-f143-11e4-bb65-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/75a610ae-f143-11e4-bb65-5254006e85c2&lt;/a&gt; &lt;br/&gt;
2015-05-03 23:34:14 - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/2244f05c-f200-11e4-aa2e-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/2244f05c-f200-11e4-aa2e-5254006e85c2&lt;/a&gt; &lt;br/&gt;
2015-05-04 13:41:15 - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/0d145610-f2d1-11e4-aad2-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/0d145610-f2d1-11e4-aad2-5254006e85c2&lt;/a&gt; &lt;br/&gt;
2015-05-04 20:22:35 - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/4787eeca-f307-11e4-9186-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/4787eeca-f307-11e4-9186-5254006e85c2&lt;/a&gt; &lt;br/&gt;
2015-05-05 08:11:46 - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/7a76c3c6-f382-11e4-a51d-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/7a76c3c6-f382-11e4-a51d-5254006e85c2&lt;/a&gt; &lt;br/&gt;
2015-05-05 18:14:34 - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/29879778-f3b0-11e4-8b3b-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/29879778-f3b0-11e4-8b3b-5254006e85c2&lt;/a&gt; &lt;br/&gt;
2015-05-05 23:56:04 - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/9febf6d6-f3e0-11e4-b108-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/9febf6d6-f3e0-11e4-b108-5254006e85c2&lt;/a&gt; &lt;br/&gt;
2015-05-06 04:48:54 - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/a6e591e2-f401-11e4-a594-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/a6e591e2-f401-11e4-a594-5254006e85c2&lt;/a&gt;&lt;br/&gt;
2015-05-06 09:08:40 - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/08da0d34-f442-11e4-a594-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/08da0d34-f442-11e4-a594-5254006e85c2&lt;/a&gt;&lt;br/&gt;
2015-05-06 16:55:59 - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/ff57f6e2-f476-11e4-ac9e-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/ff57f6e2-f476-11e4-ac9e-5254006e85c2&lt;/a&gt;&lt;br/&gt;
ldiskfs - 2015-05-06 17:03:23 - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/9825fe38-f4a3-11e4-b783-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/9825fe38-f4a3-11e4-b783-5254006e85c2&lt;/a&gt;&lt;br/&gt;
ldiskfs - 2015-05-07 06:25:06 - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/62441088-f535-11e4-ac9e-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/62441088-f535-11e4-ac9e-5254006e85c2&lt;/a&gt;&lt;br/&gt;
2015-05-07 20:17:28 - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/b4ccb126-f552-11e4-91fd-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/b4ccb126-f552-11e4-91fd-5254006e85c2&lt;/a&gt;&lt;br/&gt;
2015-05-07 20:58:18 - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/b89ca102-f54e-11e4-91fd-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/b89ca102-f54e-11e4-91fd-5254006e85c2&lt;/a&gt;&lt;br/&gt;
2015-05-07 22:18:20 - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/04cf6b6a-f566-11e4-90f4-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/04cf6b6a-f566-11e4-90f4-5254006e85c2&lt;/a&gt;&lt;br/&gt;
2015-05-07 23:29:52 - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/23893fd8-f573-11e4-90f4-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/23893fd8-f573-11e4-90f4-5254006e85c2&lt;/a&gt;&lt;br/&gt;
2015-05-08 00:38:48 - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/375e2178-f586-11e4-af5c-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/375e2178-f586-11e4-af5c-5254006e85c2&lt;/a&gt;&lt;br/&gt;
2015-05-08 02:55:30 - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/8647e13c-f588-11e4-90f4-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/8647e13c-f588-11e4-90f4-5254006e85c2&lt;/a&gt;&lt;br/&gt;
2015-05-08 04:02:37 - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/80073dac-f5ad-11e4-91fd-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/80073dac-f5ad-11e4-91fd-5254006e85c2&lt;/a&gt; &lt;/p&gt;</comment>
                            <comment id="114778" author="adilger" created="Fri, 8 May 2015 22:44:00 +0000"  >&lt;p&gt;I think recent cases marked &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-5407&quot; title=&quot;Failover failure on test suite replay-single test_58c: test_58c failed with 2&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-5407&quot;&gt;&lt;del&gt;LU-5407&lt;/del&gt;&lt;/a&gt; are actually &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-6573&quot; title=&quot;multiple tests: client evicted, Input/output error&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-6573&quot;&gt;&lt;del&gt;LU-6573&lt;/del&gt;&lt;/a&gt;.&lt;/p&gt;</comment>
                            <comment id="114797" author="adilger" created="Sat, 9 May 2015 03:02:39 +0000"  >&lt;p&gt;It looks to me that this failure has become much more common since May 2.  LIkely candidates are two patches landed on May 1:&lt;br/&gt;
&lt;a href=&quot;http://review.whamcloud.com/9286&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/9286&lt;/a&gt; &quot;&lt;a href=&quot;https://jira.whamcloud.com/browse/LU-3266&quot; title=&quot;Regression tests for NRS policies&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-3266&quot;&gt;&lt;del&gt;LU-3266&lt;/del&gt;&lt;/a&gt; test: regression tests for nrs policies&quot;&lt;br/&gt;
&lt;a href=&quot;http://review.whamcloud.com/14505&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/14505&lt;/a&gt; &quot;&lt;a href=&quot;https://jira.whamcloud.com/browse/LU-6477&quot; title=&quot;Update ZFS/SPL version to 0.6.4.1&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-6477&quot;&gt;&lt;del&gt;LU-6477&lt;/del&gt;&lt;/a&gt; build: Update SPL/ZFS to 0.6.4.1&quot;&lt;/p&gt;</comment>
                            <comment id="114800" author="adilger" created="Sat, 9 May 2015 03:23:52 +0000"  >&lt;p&gt;James, would it be possible for you to fix test_58c so that the error message isn&apos;t so long?&lt;/p&gt;</comment>
                            <comment id="114817" author="jhammond" created="Sun, 10 May 2015 14:19:19 +0000"  >&lt;p&gt;&amp;gt; James, would it be possible for you to fix test_58c so that the error message isn&apos;t so long?&lt;/p&gt;

&lt;p&gt;And to check for errors from mount?&lt;/p&gt;</comment>
                            <comment id="114889" author="pjones" created="Mon, 11 May 2015 17:23:28 +0000"  >&lt;p&gt;Hongchao&lt;/p&gt;

&lt;p&gt;Could you please look into this issue?&lt;/p&gt;

&lt;p&gt;Thanks&lt;/p&gt;

&lt;p&gt;Peter&lt;/p&gt;</comment>
                            <comment id="114939" author="gerrit" created="Mon, 11 May 2015 21:18:40 +0000"  >&lt;p&gt;James Nunez (james.a.nunez@intel.com) uploaded a new patch: &lt;a href=&quot;http://review.whamcloud.com/14766&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/14766&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-5407&quot; title=&quot;Failover failure on test suite replay-single test_58c: test_58c failed with 2&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-5407&quot;&gt;&lt;del&gt;LU-5407&lt;/del&gt;&lt;/a&gt; tests: Error message for replay-single 58b and 58c&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: da8b8fe156ec0fcb06a096c1f6cddb85beec3fe9&lt;/p&gt;</comment>
                            <comment id="114941" author="jamesanunez" created="Mon, 11 May 2015 21:20:14 +0000"  >&lt;p&gt;Please note that patch &lt;a href=&quot;http://review.whamcloud.com/14766&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/14766&lt;/a&gt; only modifies the error message and checks if the client mount succeeds. &lt;/p&gt;

&lt;p&gt;This patch does not fix the test failure.&lt;/p&gt;</comment>
                            <comment id="115038" author="adilger" created="Tue, 12 May 2015 17:25:24 +0000"  >&lt;p&gt;This problem is causing about 1/3 of all test failures in review-zfs, increasing to blocker status.&lt;/p&gt;

&lt;p&gt;Hong Chao, can you please treat this as a priority.&lt;/p&gt;</comment>
                            <comment id="115130" author="hongchao.zhang" created="Wed, 13 May 2015 00:23:11 +0000"  >&lt;p&gt;Hi Andreas, Okay, I&apos;ll try to analysis and create the patch asap, thanks!&lt;/p&gt;</comment>
                            <comment id="115183" author="hongchao.zhang" created="Wed, 13 May 2015 14:31:00 +0000"  >&lt;p&gt;this problem is caused by the disconnected import state of MGC at the client, which is not recovered yet after the MDS is failed over in &quot;test_58b&quot;.&lt;/p&gt;

&lt;p&gt;The log from the MDT:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;00000100:00100000:0.0:1431060944.791714:0:20964:0:(nrs_fifo.c:179:nrs_fifo_req_get()) NRS start fifo request from 12345-10.1.5.31@tcp, seq: 74
00000100:00100000:0.0:1431060944.791720:0:20964:0:(service.c:2075:ptlrpc_server_handle_request()) Handling RPC pname:cluuid+ref:pid:xid:nid:opc ll_mgs_0002:0+-99:2640:x1500570864992708:12345-10.1.5.31@tcp:101
00000020:00080000:0.0:1431060944.791729:0:20964:0:(tgt_handler.c:622:tgt_request_handle()) operation 101 on unconnected OST from 12345-10.1.5.31@tcp
00000100:00100000:0.0:1431060944.791779:0:20964:0:(service.c:2125:ptlrpc_server_handle_request()) Handled RPC pname:cluuid+ref:pid:xid:nid:opc ll_mgs_0002:0+-99:2640:x1500570864992708:12345-10.1.5.31@tcp:101 Request procesed in 59us (193us total) trans 0 rc -107/-107
00000100:00100000:0.0:1431060944.791787:0:20964:0:(nrs_fifo.c:241:nrs_fifo_req_stop()) NRS stop fifo request from 12345-10.1.5.31@tcp, seq: 74
00000100:00100000:0.0:1431060944.792290:0:3344:0:(events.c:349:request_in_callback()) peer: 12345-10.1.5.31@tcp
00000100:00100000:0.0:1431060944.792296:0:20964:0:(service.c:1927:ptlrpc_server_handle_req_in()) got req x1500570864992712
00000100:00100000:0.0:1431060944.792313:0:20964:0:(nrs_fifo.c:179:nrs_fifo_req_get()) NRS start fifo request from 12345-10.1.5.31@tcp, seq: 75
00000100:00100000:0.0:1431060944.792314:0:20964:0:(service.c:2075:ptlrpc_server_handle_request()) Handling RPC pname:cluuid+ref:pid:xid:nid:opc ll_mgs_0002:0+-99:539:x1500570864992712:12345-10.1.5.31@tcp:250
00010000:00080000:0.0:1431060944.792327:0:20964:0:(ldlm_lib.c:1045:target_handle_connect()) MGS: connection from 5108cdc4-cb46-6929-784d-073b1dfbc568@10.1.5.31@tcp t0 exp (null) cur 1431060944 last 0
00000020:00000080:0.0:1431060944.792353:0:20964:0:(genops.c:1146:class_connect()) connect: client 5108cdc4-cb46-6929-784d-073b1dfbc568, cookie 0x96ce524a05765d59
00000020:01000000:0.0:1431060944.792358:0:20964:0:(lprocfs_status_server.c:307:lprocfs_exp_setup()) using hash ffff8800791d5180
00000100:00100000:0.0:1431060944.792411:0:20964:0:(service.c:2125:ptlrpc_server_handle_request()) Handled RPC pname:cluuid+ref:pid:xid:nid:opc ll_mgs_0002:5108cdc4-cb46-6929-784d-073b1dfbc568+4:539:x1500570864992712:12345-10.1.5.31@tcp:250 Request procesed in 96us (122us total) trans 0 rc 0/0
00000100:00100000:0.0:1431060944.792413:0:20964:0:(nrs_fifo.c:241:nrs_fifo_req_stop()) NRS stop fifo request from 12345-10.1.5.31@tcp, seq: 75
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The config lock is enqueued by the MGC at 1431060944.791714 and fails in &quot;tgt_request_handle&quot; with &quot;operation 101 on unconnected OST from 12345-10.1.5.31@tcp&quot;;&lt;br/&gt;
the MGC then sends MGS_CONNECT(250) to the MGS a little later (00000100:00100000:0.0:1431060944.792313).&lt;br/&gt;
In test_58c, the Lustre mount at &quot;/mnt/lustre2&quot; failed for the above reason, so the &quot;getfattr&quot; failed with -2 (ENOENT).&lt;/p&gt;</comment>
                            <comment id="115187" author="gerrit" created="Wed, 13 May 2015 14:57:31 +0000"  >&lt;p&gt;Hongchao Zhang (hongchao.zhang@intel.com) uploaded a new patch: &lt;a href=&quot;http://review.whamcloud.com/14792&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/14792&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-5407&quot; title=&quot;Failover failure on test suite replay-single test_58c: test_58c failed with 2&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-5407&quot;&gt;&lt;del&gt;LU-5407&lt;/del&gt;&lt;/a&gt; test: wait MGC import to reconnect&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 3e529aea4d9e52e433084287c40792700c9bdd63&lt;/p&gt;</comment>
                            <comment id="115928" author="gerrit" created="Tue, 19 May 2015 19:03:50 +0000"  >&lt;p&gt;James Nunez (james.a.nunez@intel.com) uploaded a new patch: &lt;a href=&quot;http://review.whamcloud.com/14864&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/14864&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-5407&quot; title=&quot;Failover failure on test suite replay-single test_58c: test_58c failed with 2&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-5407&quot;&gt;&lt;del&gt;LU-5407&lt;/del&gt;&lt;/a&gt; tests: Disable replay-single test 58c&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 0dbceb9e6d2835c00733a2fd76b950ff77976305&lt;/p&gt;</comment>
                            <comment id="116003" author="gerrit" created="Wed, 20 May 2015 14:47:01 +0000"  >&lt;p&gt;Oleg Drokin (oleg.drokin@intel.com) merged in patch &lt;a href=&quot;http://review.whamcloud.com/14864/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/14864/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-5407&quot; title=&quot;Failover failure on test suite replay-single test_58c: test_58c failed with 2&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-5407&quot;&gt;&lt;del&gt;LU-5407&lt;/del&gt;&lt;/a&gt; tests: Disable replay-single test 58c&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: 5b0ce8303e4033b3c7b09fda50f013e6d9d002b0&lt;/p&gt;</comment>
                            <comment id="116425" author="gerrit" created="Tue, 26 May 2015 17:34:56 +0000"  >&lt;p&gt;Andreas Dilger (andreas.dilger@intel.com) merged in patch &lt;a href=&quot;http://review.whamcloud.com/14792/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/14792/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-5407&quot; title=&quot;Failover failure on test suite replay-single test_58c: test_58c failed with 2&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-5407&quot;&gt;&lt;del&gt;LU-5407&lt;/del&gt;&lt;/a&gt; test: wait MGC import to finish recovery&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: c8602de66d24be2e4cf4750ce79a95e51ef5676d&lt;/p&gt;</comment>
                            <comment id="117974" author="gerrit" created="Tue, 9 Jun 2015 21:04:56 +0000"  >&lt;p&gt;Andreas Dilger (andreas.dilger@intel.com) merged in patch &lt;a href=&quot;http://review.whamcloud.com/14766/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/14766/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-5407&quot; title=&quot;Failover failure on test suite replay-single test_58c: test_58c failed with 2&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-5407&quot;&gt;&lt;del&gt;LU-5407&lt;/del&gt;&lt;/a&gt; tests: Error message for replay-single 58b and 58c&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: 88e555dbabfc35521345851ff41516156217b1ec&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                    <issuelinktype id="10011">
                        <name>Related</name>
                        <outwardlinks description="is related to ">
                            <issuelink>
                                <issuekey id="25764">LU-5420</issuekey>
                            </issuelink>
                            <issuelink>
                                <issuekey id="29885">LU-6573</issuekey>
                            </issuelink>
                        </outwardlinks>
                        <inwardlinks description="is related to">
                        </inwardlinks>
                    </issuelinktype>
                </issuelinks>
                <attachments>
                </attachments>
                <subtasks>
                </subtasks>
                <customfields>
                    <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzws7b:</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>15044</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                </customfields>
    </item>
</channel>
</rss>