<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 02:22:13 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92">
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
<language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-8982] replay-vbr test_7g: @@@@@@ replay-vbr test_7g: @@@@@@ FAIL: Test 7g.2 failed; FAIL: Test 7g.1 failed</title>
                <link>https://jira.whamcloud.com/browse/LU-8982</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;stdout&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;== replay-vbr test 7g: rename, {lost}, create ======================================================== 14:24:28 (1475850268)
Starting client: fre0132:  -o user_xattr,flock fre0129@tcp:/lustre /mnt/lustre2
fre0132: mount.lustre: according to /etc/mtab fre0129@tcp:/lustre is already mounted on /mnt/lustre2
pdsh@fre0131: fre0132: ssh exited with exit code 17
start cycle: test_7g.1
mdd.lustre-MDT0000.sync_permission=0
mdt.lustre-MDT0000.commit_on_sharing=0
Filesystem                 1K-blocks  Used Available Use% Mounted on
192.168.101.29@tcp:/lustre   1345184 35424   1209144   3% /mnt/lustre
test_7g.1 first: createmany -o /mnt/lustre/d7g.replay-vbr/f7g.replay-vbr- 1; mv /mnt/lustre/d7g.replay-vbr/f7g.replay-vbr-0 /mnt/lustre/d7g.replay-vbr/f7g.replay-vbr-1
total: 1 creates in 0.00 seconds: 435.91 creates/second
test_7g.1 lost: mkdir /mnt/lustre2/d7g.replay-vbr/f7g.replay-vbr-0;rmdir /mnt/lustre2/d7g.replay-vbr/f7g.replay-vbr-0
test_7g.1 last: createmany -o /mnt/lustre/d7g.replay-vbr/f7g.replay-vbr- 1
total: 1 creates in 0.00 seconds: 880.23 creates/second
Stopping client fre0132 /mnt/lustre2 (opts:)
pdsh@fre0131: fre0132: ssh exited with exit code 1
Failing mds1 on fre0129
Stopping /mnt/mds1 (opts:) on fre0129
reboot facets: mds1
Failover mds1 to fre0129
14:24:41 (1475850281) waiting for fre0129 network 900 secs ...
14:24:41 (1475850281) network interface is UP
mount facets: mds1
Starting mds1: -o rw,user_xattr  /dev/vdc /mnt/mds1
Started lustre-MDT0000
affected facets: mds1
fre0129: *.lustre-MDT0000.recovery_status status: COMPLETE
Waiting for orphan cleanup...
osp.lustre-OST0000-osc-MDT0000.old_sync_processed
osp.lustre-OST0000-osc-MDT0001.old_sync_processed
osp.lustre-OST0001-osc-MDT0000.old_sync_processed
osp.lustre-OST0001-osc-MDT0001.old_sync_processed
wait 40 secs maximumly for fre0129 mds-ost sync done.
Starting client: fre0132:  -o user_xattr,flock fre0129@tcp:/lustre /mnt/lustre2
start cycle: test_7g.2
mdd.lustre-MDT0000.sync_permission=0
mdt.lustre-MDT0000.commit_on_sharing=0
Filesystem                 1K-blocks  Used Available Use% Mounted on
192.168.101.29@tcp:/lustre   1345184 35424   1209144   3% /mnt/lustre
test_7g.2 first: createmany -o /mnt/lustre/d7g.replay-vbr/f7g.replay-vbr- 2; mv /mnt/lustre/d7g.replay-vbr/f7g.replay-vbr-0 /mnt/lustre/d7g.replay-vbr/f7g.replay-vbr-1
total: 2 creates in 0.00 seconds: 739.61 creates/second
test_7g.2 lost: createmany -o /mnt/lustre2/d7g.replay-vbr/f7g.replay-vbr- 1; rm /mnt/lustre2/d7g.replay-vbr/f7g.replay-vbr-0
total: 1 creates in 0.00 seconds: 392.76 creates/second
test_7g.2 last: mkdir /mnt/lustre/d7g.replay-vbr/f7g.replay-vbr-0
Stopping client fre0132 /mnt/lustre2 (opts:)
pdsh@fre0131: fre0132: ssh exited with exit code 1
Failing mds1 on fre0129
Stopping /mnt/mds1 (opts:) on fre0129
reboot facets: mds1
Failover mds1 to fre0129
14:26:07 (1475850367) waiting for fre0129 network 900 secs ...
14:26:07 (1475850367) network interface is UP
mount facets: mds1
Starting mds1: -o rw,user_xattr  /dev/vdc /mnt/mds1
Started lustre-MDT0000
fre0131: stat: cannot read file system information for &#8216;/mnt/lustre&#8217;: Input/output error
pdsh@fre0131: fre0131: ssh exited with exit code 1
affected facets: mds1
fre0129: *.lustre-MDT0000.recovery_status status: COMPLETE
Waiting for orphan cleanup...
osp.lustre-OST0000-osc-MDT0000.old_sync_processed
osp.lustre-OST0000-osc-MDT0001.old_sync_processed
osp.lustre-OST0001-osc-MDT0000.old_sync_processed
osp.lustre-OST0001-osc-MDT0001.old_sync_processed
wait 40 secs maximumly for fre0129 mds-ost sync done.
 replay-vbr test_7g: @@@@@@ FAIL: Test 7g.2 failed 
  Trace dump:
  = /usr/lib64/lustre/tests/test-framework.sh:4863:error()
  = /usr/lib64/lustre/tests/replay-vbr.sh:891:test_7g()
  = /usr/lib64/lustre/tests/test-framework.sh:5123:run_one()
  = /usr/lib64/lustre/tests/test-framework.sh:5161:run_one_logged()
  = /usr/lib64/lustre/tests/test-framework.sh:4965:run_test()
  = /usr/lib64/lustre/tests/replay-vbr.sh:906:main()
Dumping lctl log to /tmp/test_logs/1475850252/replay-vbr.test_7g.*.1475850473.log
fre0130: Warning: Permanently added &apos;fre0131,192.168.101.31&apos; (ECDSA) to the list of known hosts.

fre0129: Warning: Permanently added &apos;fre0131,192.168.101.31&apos; (ECDSA) to the list of known hosts.

fre0132: Warning: Permanently added &apos;fre0131,192.168.101.31&apos; (ECDSA) to the list of known hosts.

fre0130: error: set_param: setting debug=: Invalid argument
pdsh@fre0131: fre0130: ssh exited with exit code 22
fre0129: error: set_param: setting debug=: Invalid argument
pdsh@fre0131: fre0129: ssh exited with exit code 22
Resetting fail_loc on all nodes...done.
FAIL 7g (208s)
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;cmd&lt;/p&gt;

&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;SLOW=YES NAME=ncli mgs_HOST=fre0129 MGSDEV=/dev/vdb NETTYPE=tcp mds1_HOST=fre0129 MDSDEV1=/dev/vdc mds_HOST=fre0129 MDSDEV=/dev/vdc mds2_HOST=fre0129 MDSDEV2=/dev/vdd MDSCOUNT=2 ost1_HOST=fre0130 OSTDEV1=/dev/vdb ost2_HOST=fre0130 OSTDEV2=/dev/vdc OSTCOUNT=2 CLIENTS=fre0131 RCLIENTS=&lt;span class=&quot;code-quote&quot;&gt;&quot;fre0132&quot;&lt;/span&gt;   PDSH=&lt;span class=&quot;code-quote&quot;&gt;&quot;/usr/bin/pdsh -R ssh -S -w &quot;&lt;/span&gt; ONLY=7g MDS_MOUNT_OPTS=&lt;span class=&quot;code-quote&quot;&gt;&quot;-o rw,user_xattr&quot;&lt;/span&gt; OST_MOUNT_OPTS=&lt;span class=&quot;code-quote&quot;&gt;&quot;-o user_xattr&quot;&lt;/span&gt; MDSSIZE=0 OSTSIZE=0 MDSJOURNALSIZE=&lt;span class=&quot;code-quote&quot;&gt;&quot;22&quot;&lt;/span&gt; ENABLE_QUOTA=&lt;span class=&quot;code-quote&quot;&gt;&quot;yes&quot;&lt;/span&gt;
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</description>
                <environment>Release : 191_3.10.0_327.13.1.x3.0.86.x86_64_g8e08a98 &lt;br/&gt;
Client 2.7.14.x8 Server 2.7.14.x8 &lt;br/&gt;
4 node DNE SingleMDS - KVM setup </environment>
        <key id="42683">LU-8982</key>
            <summary>replay-vbr test_7g: @@@@@@ replay-vbr test_7g: @@@@@@ FAIL: Test 7g.2 failed; FAIL: Test 7g.1 failed</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="4" iconUrl="https://jira.whamcloud.com/images/icons/priorities/minor.svg">Minor</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="3">Duplicate</resolution>
                                        <assignee username="hongchao.zhang">Hongchao Zhang</assignee>
                                    <reporter username="hongchao.zhang">Hongchao Zhang</reporter>
                        <labels>
                    </labels>
                <created>Fri, 30 Dec 2016 03:29:03 +0000</created>
                <updated>Tue, 18 Apr 2017 12:53:37 +0000</updated>
                            <resolved>Tue, 18 Apr 2017 12:53:37 +0000</resolved>
                                                                        <due></due>
                            <votes>0</votes>
                                    <watches>3</watches>
                <comments>
                            <comment id="179215" author="gerrit" created="Fri, 30 Dec 2016 03:32:02 +0000"  >&lt;p&gt;Hongchao Zhang (hongchao.zhang@intel.com) uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/24541&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/24541&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-8982&quot; title=&quot;replay-vbr test_7g: @@@@@@ replay-vbr test_7g: @@@@@@ FAIL: Test 7g.2 failed; FAIL: Test 7g.1 failed&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-8982&quot;&gt;&lt;del&gt;LU-8982&lt;/del&gt;&lt;/a&gt; ldlm: limit recovery timer to allow VBR&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 50a59a4dc35590cc54382e5489283ab6c7e605d3&lt;/p&gt;</comment>
                            <comment id="192213" author="hongchao.zhang" created="Mon, 17 Apr 2017 07:33:57 +0000"  >&lt;p&gt;the issue has been fixed by the patch &lt;a href=&quot;https://review.whamcloud.com/#/c/23716/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/#/c/23716/&lt;/a&gt; in &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-8826&quot; title=&quot;recovery hard time should not be shrunk for IR&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-8826&quot;&gt;&lt;del&gt;LU-8826&lt;/del&gt;&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="192478" author="pjones" created="Tue, 18 Apr 2017 12:53:37 +0000"  >&lt;p&gt;IIUC this is a duplicate&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10010">
                    <name>Duplicate</name>
                                            <outwardlinks description="duplicates">
                                                        </outwardlinks>
                                                        </issuelinktype>
                    </issuelinks>
                <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                    <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzyzfr:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                </customfields>
    </item>
</channel>
</rss>