<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 02:19:32 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-8668] sanityn test 77a, 77b, 77c, 77d, 77e, 77f all fail with &apos;dd on client failed&apos; </title>
                <link>https://jira.whamcloud.com/browse/LU-8668</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;sanityn test_77a, test_77b, test_77c, test_77d, test_77e, and test_77f all fail with&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;&apos;dd on client failed&apos;
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The test log for 77a looks like:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;== sanityn test 77a: check FIFO NRS policy =========================================================== 23:46:57 (1475538417)
CMD: trevis-66vm4 lctl set_param ost.OSS.*.nrs_policies=fifo
ost.OSS.ost.nrs_policies=fifo
ost.OSS.ost_create.nrs_policies=fifo
ost.OSS.ost_io.nrs_policies=fifo
ost.OSS.ost_out.nrs_policies=fifo
ost.OSS.ost_seq.nrs_policies=fifo
CMD: trevis-66vm5.trevis.hpdd.intel.com,trevis-66vm6 sync
sanityn test_77a: @@@@@@ FAIL: dd on client failed 
  Trace dump:
  = /usr/lib64/lustre/tests/test-framework.sh:4835:error()
  = /usr/lib64/lustre/tests/sanityn.sh:2975:nrs_write_read()
  = /usr/lib64/lustre/tests/sanityn.sh:2999:test_77a()
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Since all of the NRS policy tests fail, the problem is most likely in the routine they all call, nrs_write_read().&lt;/p&gt;

&lt;p&gt;Since the dd commands run in the background, it&#8217;s not clear which of them failed. From the client console:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;23:47:04:[ 8104.314157] Lustre: DEBUG MARKER: == sanityn test 77a: check FIFO NRS policy =========================================================== 23:46:57 (1475538417)
23:47:04:[ 8105.118942] Lustre: DEBUG MARKER: dd if=/dev/zero of=/mnt/lustre/d77a.sanityn/nrs_r_trevis-66vm5.trevis.hpdd.intel.com bs=1M count=16
23:47:04:[ 8109.441422] Lustre: DEBUG MARKER: sync
23:47:04:[ 8109.560257] Lustre: DEBUG MARKER: dd if=/dev/zero of=/mnt/lustre/d77a.sanityn/nrs_w_trevis-66vm5.trevis.hpdd.intel.com bs=1M seek=10 count=1
23:47:04:[ 8109.575839] Lustre: DEBUG MARKER: dd if=/dev/zero of=/mnt/lustre/d77a.sanityn/nrs_w_trevis-66vm5.trevis.hpdd.intel.com bs=1M seek=15 count=1
23:47:04:[ 8109.590028] Lustre: DEBUG MARKER: dd if=/dev/zero of=/mnt/lustre/d77a.sanityn/nrs_w_trevis-66vm5.trevis.hpdd.intel.com bs=1M seek=1 count=1
23:47:04:[ 8109.604393] Lustre: DEBUG MARKER: dd if=/dev/zero of=/mnt/lustre/d77a.sanityn/nrs_w_trevis-66vm5.trevis.hpdd.intel.com bs=1M seek=9 count=1
23:47:04:[ 8109.619225] Lustre: DEBUG MARKER: dd if=/dev/zero of=/mnt/lustre/d77a.sanityn/nrs_w_trevis-66vm5.trevis.hpdd.intel.com bs=1M seek=12 count=1
23:47:04:[ 8109.635401] Lustre: DEBUG MARKER: dd if=/dev/zero of=/mnt/lustre/d77a.sanityn/nrs_w_trevis-66vm5.trevis.hpdd.intel.com bs=1M seek=2 count=1
23:47:04:[ 8109.650030] Lustre: DEBUG MARKER: dd if=/dev/zero of=/mnt/lustre/d77a.sanityn/nrs_w_trevis-66vm5.trevis.hpdd.intel.com bs=1M seek=4 count=1
23:47:04:[ 8109.665628] Lustre: DEBUG MARKER: dd if=/dev/zero of=/mnt/lustre/d77a.sanityn/nrs_w_trevis-66vm5.trevis.hpdd.intel.com bs=1M seek=13 count=1
23:47:04:[ 8109.680064] Lustre: DEBUG MARKER: dd if=/dev/zero of=/mnt/lustre/d77a.sanityn/nrs_w_trevis-66vm5.trevis.hpdd.intel.com bs=1M seek=11 count=1
23:47:35:[ 8109.696518] Lustre: DEBUG MARKER: dd if=/dev/zero of=/mnt/lustre/d77a.sanityn/nrs_w_trevis-66vm5.trevis.hpdd.intel.com bs=1M seek=7 count=1
23:47:35:[ 8109.716382] Lustre: DEBUG MARKER: dd if=/dev/zero of=/mnt/lustre/d77a.sanityn/nrs_w_trevis-66vm5.trevis.hpdd.intel.com bs=1M seek=0 count=1
23:47:35:[ 8109.747046] Lustre: DEBUG MARKER: dd if=/dev/zero of=/mnt/lustre/d77a.sanityn/nrs_w_trevis-66vm5.trevis.hpdd.intel.com bs=1M seek=8 count=1
23:47:35:[ 8109.767675] Lustre: DEBUG MARKER: dd if=/dev/zero of=/mnt/lustre/d77a.sanityn/nrs_w_trevis-66vm5.trevis.hpdd.intel.com bs=1M seek=5 count=1
23:47:35:[ 8109.784620] Lustre: DEBUG MARKER: dd if=/dev/zero of=/mnt/lustre/d77a.sanityn/nrs_w_trevis-66vm5.trevis.hpdd.intel.com bs=1M seek=6 count=1
23:47:35:[ 8109.801728] Lustre: DEBUG MARKER: dd if=/dev/zero of=/mnt/lustre/d77a.sanityn/nrs_w_trevis-66vm5.trevis.hpdd.intel.com bs=1M seek=14 count=1
23:47:35:[ 8114.267018] Lustre: DEBUG MARKER: dd if=/mnt/lustre/d77a.sanityn/nrs_w_trevis-66vm5.trevis.hpdd.intel.com of=/dev/zero bs=1M seek=0 count=1
23:47:35:[ 8114.467267] Lustre: DEBUG MARKER: /usr/sbin/lctl mark  sanityn test_77a: @@@@@@ FAIL: dd on client failed 
23:47:35:[ 8115.076977] Lustre: DEBUG MARKER: sanityn test_77a: @@@@@@ FAIL: dd on client failed
23:47:35:[ 8115.196188] Lustre: DEBUG MARKER: dd if=/mnt/lustre/d77a.sanityn/nrs_w_trevis-66vm5.trevis.hpdd.intel.com of=/dev/zero bs=1M seek=1 count=1
23:47:35:[ 8115.682787] Lustre: DEBUG MARKER: /usr/sbin/lctl dk &amp;gt; /logdir/test_logs/2016-10-03/lustre-reviews-el7-x86_64--review-zfs-part-1--1_8_1__41796__-70202352898540-210143/sanityn.test_77a.debug_log.$(hostname -s).1475538429.log;
23:47:35:[ 8115.682787]          dmesg &amp;gt; /logdir/test_logs/2016-10-03/lustre-reviews-el7-
23:47:35:[ 8116.138228] Lustre: DEBUG MARKER: dd if=/mnt/lustre/d77a.sanityn/nrs_w_trevis-66vm5.trevis.hpdd.intel.com of=/dev/zero bs=1M seek=2 count=1
23:47:35:[ 8116.960193] Lustre: DEBUG MARKER: dd if=/mnt/lustre/d77a.sanityn/nrs_w_trevis-66vm5.trevis.hpdd.intel.com of=/dev/zero bs=1M seek=3 count=1
23:47:35:[ 8117.502537] Lustre: DEBUG MARKER: dd if=/mnt/lustre/d77a.sanityn/nrs_w_trevis-66vm5.trevis.hpdd.intel.com of=/dev/zero bs=1M seek=4 count=1
23:47:35:[ 8117.900023] Lustre: DEBUG MARKER: lctl set_param -n fail_loc=0 	    fail_val=0 2&amp;gt;/dev/null
23:47:35:[ 8118.524133] Lustre: DEBUG MARKER: dd if=/mnt/lustre/d77a.sanityn/nrs_w_trevis-66vm5.trevis.hpdd.intel.com of=/dev/zero bs=1M seek=5 count=1
23:47:35:[ 8119.059330] Lustre: DEBUG MARKER: dd if=/mnt/lustre/d77a.sanityn/nrs_w_trevis-66vm5.trevis.hpdd.intel.com of=/dev/zero bs=1M seek=6 count=1
23:47:35:[ 8119.505604] Lustre: DEBUG MARKER: dd if=/mnt/lustre/d77a.sanityn/nrs_w_trevis-66vm5.trevis.hpdd.intel.com of=/dev/zero bs=1M seek=7 count=1
23:47:35:[ 8119.938421] Lustre: DEBUG MARKER: dd if=/mnt/lustre/d77a.sanityn/nrs_w_trevis-66vm5.trevis.hpdd.intel.com of=/dev/zero bs=1M seek=8 count=1
23:47:35:[ 8120.365768] Lustre: DEBUG MARKER: dd if=/mnt/lustre/d77a.sanityn/nrs_w_trevis-66vm5.trevis.hpdd.intel.com of=/dev/zero bs=1M seek=9 count=1
23:47:35:[ 8120.859821] Lustre: DEBUG MARKER: dd if=/mnt/lustre/d77a.sanityn/nrs_w_trevis-66vm5.trevis.hpdd.intel.com of=/dev/zero bs=1M seek=10 count=1
23:47:35:[ 8121.303947] Lustre: DEBUG MARKER: dd if=/mnt/lustre/d77a.sanityn/nrs_w_trevis-66vm5.trevis.hpdd.intel.com of=/dev/zero bs=1M seek=11 count=1
23:47:35:[ 8121.747911] Lustre: DEBUG MARKER: dd if=/mnt/lustre/d77a.sanityn/nrs_w_trevis-66vm5.trevis.hpdd.intel.com of=/dev/zero bs=1M seek=12 count=1
23:47:35:[ 8122.180064] Lustre: DEBUG MARKER: dd if=/mnt/lustre/d77a.sanityn/nrs_w_trevis-66vm5.trevis.hpdd.intel.com of=/dev/zero bs=1M seek=13 count=1
23:47:35:[ 8122.619751] Lustre: DEBUG MARKER: dd if=/mnt/lustre/d77a.sanityn/nrs_w_trevis-66vm5.trevis.hpdd.intel.com of=/dev/zero bs=1M seek=14 count=1
23:47:35:[ 8123.046963] Lustre: DEBUG MARKER: dd if=/mnt/lustre/d77a.sanityn/nrs_w_trevis-66vm5.trevis.hpdd.intel.com of=/dev/zero bs=1M seek=15 count=1
23:47:35:[ 8123.590134] Lustre: DEBUG MARKER: lctl set_param -n fail_loc=0 	    fail_val=0 2&amp;gt;/dev/null
23:47:35:[ 8124.254671] Lustre: DEBUG MARKER: rc=0;
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;


&lt;p&gt;These tests started failing on October 1, 2016. Logs for failures are at&lt;br/&gt;
2016-10-04  - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/b0ff1952-8a03-11e6-a9b0-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/b0ff1952-8a03-11e6-a9b0-5254006e85c2&lt;/a&gt;&lt;br/&gt;
2016-10-03 - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/371f30a8-89d7-11e6-a9b0-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/371f30a8-89d7-11e6-a9b0-5254006e85c2&lt;/a&gt;&lt;br/&gt;
2016-10-01 - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/903c4160-87b3-11e6-a9b0-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/903c4160-87b3-11e6-a9b0-5254006e85c2&lt;/a&gt;&lt;br/&gt;
2016-10-01 -&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/332ebf58-87df-11e6-91aa-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/332ebf58-87df-11e6-91aa-5254006e85c2&lt;/a&gt;&lt;br/&gt;
2016-10-01 -&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/e0e6f028-87ff-11e6-a8b7-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/e0e6f028-87ff-11e6-a8b7-5254006e85c2&lt;/a&gt;&lt;br/&gt;
2016-10-01 &#8211;&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/30dee3fe-87df-11e6-a9b0-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/30dee3fe-87df-11e6-a9b0-5254006e85c2&lt;/a&gt;&lt;/p&gt;
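&lt;p&gt;For illustration, the background-writer pattern used here can be sketched as follows. This is a minimal Python sketch, not the actual nrs_write_read() shell code; it assumes dd is on PATH and uses a temporary file as a stand-in for the Lustre test file. It launches one dd per 1MB offset in the background, then waits on each and reports failure if any exits nonzero, which is the point at which the test prints &apos;dd on client failed&apos;:&lt;/p&gt;

```python
# Simplified sketch (illustrative only, not the actual sanityn.sh code) of the
# pattern nrs_write_read() uses: launch many dd writers in the background,
# then wait on each one and fail if any exits nonzero.
import subprocess
import tempfile
import os

tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.close()

procs = []
for seek in range(16):
    # one 1MB chunk per writer, each at a different offset, matching the
    # "bs=1M seek=N count=1" dd invocations seen in the test log
    procs.append(subprocess.Popen(
        ["dd", "if=/dev/zero", "of=" + tmp.name,
         "bs=1M", "seek=" + str(seek), "count=1", "conv=notrunc"],
        stderr=subprocess.DEVNULL))

rc = 0
for p in procs:
    # wait() returns the exit status of each background dd; a nonzero
    # status from any writer marks the whole run as failed
    if p.wait() != 0:
        rc = 1

os.unlink(tmp.name)
print("all dd succeeded" if rc == 0 else "dd on client failed")
```

&lt;p&gt;Note that when the writers are dispatched through pdsh rather than run locally, the status collected by the wait step is pdsh&apos;s exit status, not dd&apos;s, which is relevant to the analysis in the comments below.&lt;/p&gt;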
</description>
                <environment>autotest</environment>
        <key id="40282">LU-8668</key>
            <summary>sanityn test 77a, 77b, 77c, 77d, 77e, 77f all fail with &apos;dd on client failed&apos; </summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="2" iconUrl="https://jira.whamcloud.com/images/icons/priorities/critical.svg">Critical</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="3">Duplicate</resolution>
                                        <assignee username="niu">Niu Yawei</assignee>
                                    <reporter username="jamesanunez">James Nunez</reporter>
                        <labels>
                    </labels>
                <created>Tue, 4 Oct 2016 18:14:54 +0000</created>
                <updated>Fri, 14 Oct 2016 23:44:11 +0000</updated>
                            <resolved>Fri, 14 Oct 2016 23:44:11 +0000</resolved>
                                    <version>Lustre 2.9.0</version>
                                                        <due></due>
                            <votes>0</votes>
                                    <watches>11</watches>
                                                                            <comments>
                            <comment id="168506" author="green" created="Thu, 6 Oct 2016 15:45:06 +0000"  >&lt;p&gt;So patches landed on Sep 29 are:&lt;br/&gt;
f2b457a &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-8632&quot; title=&quot;Address of function &amp;#39;page_count&amp;#39; is used instead of a local variable&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-8632&quot;&gt;&lt;del&gt;LU-8632&lt;/del&gt;&lt;/a&gt; osc: remove of usage removed &apos;page_count&apos; local variable&lt;br/&gt;
3ed0a4a &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-8579&quot; title=&quot;oti_dev is never set, but is used&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-8579&quot;&gt;&lt;del&gt;LU-8579&lt;/del&gt;&lt;/a&gt; osd-ldiskfs: code cleanup for osd_ldiskfs_add_entry&lt;br/&gt;
5207c48 &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-6401&quot; title=&quot;Untangle lustre userland and kernel headers&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-6401&quot;&gt;&lt;del&gt;LU-6401&lt;/del&gt;&lt;/a&gt; header: remove assert from interval_set()&lt;br/&gt;
4dfa29e &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-3888&quot; title=&quot;lfs getstripe should print FID&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-3888&quot;&gt;&lt;del&gt;LU-3888&lt;/del&gt;&lt;/a&gt; utils: print lmm_fid as part of getstripe&lt;br/&gt;
7597c67 &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-4474&quot; title=&quot;deadlock of ldiskfs_quota_off()&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-4474&quot;&gt;&lt;del&gt;LU-4474&lt;/del&gt;&lt;/a&gt; osd: Add nodelalloc to ldiskfs mount options&lt;br/&gt;
091739b &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-6910&quot; title=&quot;Configurable values for OST reserved size&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-6910&quot;&gt;&lt;del&gt;LU-6910&lt;/del&gt;&lt;/a&gt; osp: add procfs values for OST reserved size&lt;br/&gt;
4a5e355 &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-8446&quot; title=&quot;metadata-updates: FAIL: wrong timestamps&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-8446&quot;&gt;&lt;del&gt;LU-8446&lt;/del&gt;&lt;/a&gt; llite: clear inode timestamps after losing UPDATE lock&lt;br/&gt;
72ec6eb &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-8544&quot; title=&quot;recovery-double-scale test_pairwise_fail: start client on trevis-54vm5 failed&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-8544&quot;&gt;&lt;del&gt;LU-8544&lt;/del&gt;&lt;/a&gt; test: using lfs df in client_up&lt;/p&gt;

&lt;p&gt;None of them seems to be an obvious culprit, though.&lt;/p&gt;</comment>
                            <comment id="168572" author="gerrit" created="Thu, 6 Oct 2016 20:11:59 +0000"  >&lt;p&gt;Andreas Dilger (andreas.dilger@intel.com) uploaded a new patch: &lt;a href=&quot;http://review.whamcloud.com/22990&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/22990&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-8668&quot; title=&quot;sanityn test 77a, 77b, 77c, 77d, 77e, 77f all fail with &amp;#39;dd on client failed&amp;#39; &quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-8668&quot;&gt;&lt;del&gt;LU-8668&lt;/del&gt;&lt;/a&gt; tests: print more information about dd failure&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: c722e57af3e433b50933f2488dbfcf89d1fa53fa&lt;/p&gt;</comment>
                            <comment id="168576" author="bogl" created="Thu, 6 Oct 2016 20:31:18 +0000"  >&lt;p&gt;more on master:&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/14cc482c-8bff-11e6-a9b0-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/14cc482c-8bff-11e6-a9b0-5254006e85c2&lt;/a&gt;&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/0422b016-8c2b-11e6-91aa-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/0422b016-8c2b-11e6-91aa-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="168818" author="gerrit" created="Sat, 8 Oct 2016 16:40:03 +0000"  >&lt;p&gt;Oleg Drokin (oleg.drokin@intel.com) merged in patch &lt;a href=&quot;http://review.whamcloud.com/22990/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/22990/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-8668&quot; title=&quot;sanityn test 77a, 77b, 77c, 77d, 77e, 77f all fail with &amp;#39;dd on client failed&amp;#39; &quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-8668&quot;&gt;&lt;del&gt;LU-8668&lt;/del&gt;&lt;/a&gt; tests: print more information about dd failure&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: d3409e4a55c8c9a18a63743014a1e1c1bfc8b86d&lt;/p&gt;</comment>
                            <comment id="168860" author="niu" created="Sun, 9 Oct 2016 15:20:36 +0000"  >&lt;p&gt;Didn&apos;t find any error message in the logs. The debug patch from Andreas has landed; it may help us identify what kind of error the &apos;dd&apos; got.&lt;/p&gt;</comment>
                            <comment id="168922" author="jamesanunez" created="Mon, 10 Oct 2016 14:54:39 +0000"  >&lt;p&gt;Failures with the debug information from Andreas&apos; patch:&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/dcf408e8-8e9e-11e6-91aa-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/dcf408e8-8e9e-11e6-91aa-5254006e85c2&lt;/a&gt;&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/b63dd56c-8e71-11e6-a9b0-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/b63dd56c-8e71-11e6-a9b0-5254006e85c2&lt;/a&gt;&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/6167e6ee-8e95-11e6-91aa-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/6167e6ee-8e95-11e6-91aa-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="169034" author="niu" created="Tue, 11 Oct 2016 02:33:57 +0000"  >&lt;p&gt;Ok, take &lt;a href=&quot;https://testing.hpdd.intel.com/test_logs/66d51444-8e95-11e6-91aa-5254006e85c2/show_text&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_logs/66d51444-8e95-11e6-91aa-5254006e85c2/show_text&lt;/a&gt; as an example:&lt;/p&gt;

&lt;p&gt;We can see 4 &quot;ssh_exchange_identification&quot; errors:&lt;/p&gt;
&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;CMD: trevis-66vm1.trevis.hpdd.intel.com,trevis-66vm2 dd &lt;span class=&quot;code-keyword&quot;&gt;if&lt;/span&gt;=/dev/zero of=/mnt/lustre/d77a.sanityn/nrs_w_trevis-66vm1.trevis.hpdd.intel.com bs=1M seek=5 count=1
CMD: trevis-66vm1.trevis.hpdd.intel.com,trevis-66vm2 dd &lt;span class=&quot;code-keyword&quot;&gt;if&lt;/span&gt;=/dev/zero of=/mnt/lustre/d77a.sanityn/nrs_w_trevis-66vm1.trevis.hpdd.intel.com bs=1M seek=4 count=1
CMD: trevis-66vm1.trevis.hpdd.intel.com,trevis-66vm2 dd &lt;span class=&quot;code-keyword&quot;&gt;if&lt;/span&gt;=/dev/zero of=/mnt/lustre/d77a.sanityn/nrs_w_trevis-66vm1.trevis.hpdd.intel.com bs=1M seek=0 count=1
trevis-66vm2: 1+0 records in
trevis-66vm2: 1+0 records out
trevis-66vm2: 1048576 bytes (1.0 MB) copied, 0.0365618 s, 28.7 MB/s
trevis-66vm1: ssh_exchange_identification: Connection closed by remote host
pdsh@trevis-66vm1: trevis-66vm1: ssh exited with exit code 255
trevis-66vm1: ssh_exchange_identification: Connection closed by remote host
pdsh@trevis-66vm1: trevis-66vm1: ssh exited with exit code 255
trevis-66vm2: ssh_exchange_identification: Connection closed by remote host
pdsh@trevis-66vm1: trevis-66vm2: ssh exited with exit code 255
trevis-66vm1: ssh_exchange_identification: Connection closed by remote host
pdsh@trevis-66vm1: trevis-66vm1: ssh exited with exit code 255
trevis-66vm2: 1+0 records in
trevis-66vm2: 1+0 records out
trevis-66vm2: 1048576 bytes (1.0 MB) copied, 0.0534754 s, 19.6 MB/s
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;I think this is the reason for the 4 &apos;dd failed&apos; errors:&lt;/p&gt;
&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;sanityn test_77a: @@@@@@ FAIL: dd at 10MB on client failed (2)
...
sanityn test_77a: @@@@@@ FAIL: dd at 15MB on client failed (2)
...
sanityn test_77a: @@@@@@ FAIL: dd at 4MB on client failed (2)
...
sanityn test_77a: @@@@@@ FAIL: dd at 6MB on client failed (2)
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The dd itself actually didn&apos;t fail, but the pdsh command failed. Are there any network-related hardware or configuration changes in our testing system?&lt;/p&gt;</comment>
                            <comment id="169035" author="niu" created="Tue, 11 Oct 2016 02:40:35 +0000"  >&lt;p&gt;All failures happened on &apos;trevis&apos;; should we ask the DCO team to look into what kind of change to &apos;trevis&apos; may be causing such frequent ssh connection failures?&lt;/p&gt;</comment>
                            <comment id="169241" author="adilger" created="Wed, 12 Oct 2016 05:12:27 +0000"  >&lt;p&gt;All of the last 50 failures are trevis66-71.&lt;/p&gt;</comment>
                            <comment id="169480" author="colmstea" created="Thu, 13 Oct 2016 17:10:45 +0000"  >&lt;p&gt;From the testing I ran, the cause was narrowed down to Autotest changing the pdsh rcmd module from mrsh to ssh:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;PDSH=&quot;pdsh -t 120 -S -Rssh -w&quot;
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;whereas before it was&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;PDSH=&quot;pdsh -t 120 -S -Rmrsh -w&quot;
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;I can easily change pdsh back to mrsh, but should these 77* tests fail just because pdsh is using ssh?&lt;/p&gt;</comment>
                            <comment id="169594" author="niu" created="Fri, 14 Oct 2016 03:09:04 +0000"  >&lt;p&gt;Yes, I think so. Many test cases rely on pdsh to issue commands remotely; if pdsh returns an error, the test can&apos;t tell whether the command completed successfully, so failing the test looks like a reasonable choice to me.&lt;/p&gt;</comment>
                            <comment id="169705" author="pjones" created="Fri, 14 Oct 2016 18:30:53 +0000"  >&lt;p&gt;So I see that a change has been made to autotest under DCO-6061. Will this change just take effect at the next autotest upgrade? Should we close this LU ticket as a duplicate of the DCO ticket?&lt;/p&gt;</comment>
                            <comment id="169805" author="adilger" created="Fri, 14 Oct 2016 23:44:11 +0000"  >&lt;p&gt;Problem was caused by test environment.&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                            <outwardlinks description="is related to ">
                                                        </outwardlinks>
                                                        </issuelinktype>
                    </issuelinks>
                <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzyqfr:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>