<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 01:07:17 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92">
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
<language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-463] orphan recovery happens too late, causing writes to fail with ENOENT after recovery</title>
                <link>https://jira.whamcloud.com/browse/LU-463</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;While running recovery-mds-scale with FLAVOR=OSS, it failed as follows after running 3 hours:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;==== Checking the clients loads AFTER  failover -- failure NOT OK
ost5 has failed over 5 times, and counting...
sleeping 246 seconds ... 
tar: etc/rc.d/rc6.d/K88rsyslog: Cannot stat: No such file or directory
tar: Exiting with failure status due to previous errors
Found the END_RUN_FILE file: /home/yujian/test_logs/end_run_file
client-21-ib
Client load failed on node client-21-ib

client client-21-ib load stdout and debug files :
              /tmp/recovery-mds-scale.log_run_tar.sh-client-21-ib
              /tmp/recovery-mds-scale.log_run_tar.sh-client-21-ib.debug
2011-06-26 08:08:03 Terminating clients loads ...
Duration:                86400
Server failover period: 600 seconds
Exited after:           13565 seconds
Number of failovers before exit:
mds: 0 times
ost1: 2 times
ost2: 6 times
ost3: 3 times
ost4: 4 times
ost5: 5 times
ost6: 3 times
Status: FAIL: rc=1
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Syslog on client node client-21-ib showed that:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;Jun 26 08:03:55 client-21 kernel: Lustre: DEBUG MARKER: ost5 has failed over 5 times, and counting...
Jun 26 08:04:20 client-21 kernel: LustreError: 18613:0:(client.c:2347:ptlrpc_replay_interpret()) @@@ status -2, old was 0  req@ffff88031daf6c00 x1372677268199869/t98784270264 o2-&amp;gt;lustre-OST0005_UUID@192.168.4.132@o2ib:28/4 lens 400/592 e 0 to 1 dl 1309100718 ref 2 fl Interpret:R/4/0 rc -2/-2
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Syslog on the MDS node client-10-ib showed that:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;Jun 26 08:03:57 client-10-ib kernel: Lustre: DEBUG MARKER: ost5 has failed over 5 times, and counting...
Jun 26 08:04:22 client-10-ib kernel: LustreError: 17651:0:(client.c:2347:ptlrpc_replay_interpret()) @@@ status -2, old was 0  req@ffff810320674400 x1372677249608261/t98784270265 o2-&amp;gt;lustre-OST0005_UUID@192.168.4.132@o2ib:28/4 lens 400/592 e 0 to 1 dl 1309100720 ref 2 fl Interpret:R/4/0 rc -2/-2
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Syslog on the OSS node fat-amd-1-ib showed that:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;Jun 26 08:03:57 fat-amd-1-ib kernel: Lustre: DEBUG MARKER: ost5 has failed over 5 times, and counting...
Jun 26 08:04:21 fat-amd-1-ib kernel: Lustre: 6278:0:(ldlm_lib.c:1815:target_queue_last_replay_reply()) lustre-OST0005: 5 recoverable clients remain
Jun 26 08:04:21 fat-amd-1-ib kernel: Lustre: 6278:0:(ldlm_lib.c:1815:target_queue_last_replay_reply()) Skipped 2 previous similar messages
Jun 26 08:04:21 fat-amd-1-ib kernel: LustreError: 6336:0:(ldlm_resource.c:862:ldlm_resource_add()) filter-lustre-OST0005_UUID: lvbo_init failed for resource 161916: rc -2
Jun 26 08:04:21 fat-amd-1-ib kernel: LustreError: 6336:0:(ldlm_resource.c:862:ldlm_resource_add()) Skipped 18 previous similar messages
Jun 26 08:04:25 fat-amd-1-ib kernel: LustreError: 7708:0:(filter_log.c:135:filter_cancel_cookies_cb()) error cancelling log cookies: rc = -19
Jun 26 08:04:25 fat-amd-1-ib kernel: LustreError: 7708:0:(filter_log.c:135:filter_cancel_cookies_cb()) Skipped 8 previous similar messages
Jun 26 08:04:25 fat-amd-1-ib kernel: Lustre: lustre-OST0005: Recovery period over after 0:05, of 6 clients 6 recovered and 0 were evicted.
Jun 26 08:04:25 fat-amd-1-ib kernel: Lustre: lustre-OST0005: sending delayed replies to recovered clients
Jun 26 08:04:25 fat-amd-1-ib kernel: Lustre: lustre-OST0005: received MDS connection from 192.168.4.10@o2ib
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Maloo report: &lt;a href=&quot;https://maloo.whamcloud.com/test_sets/f1c2fd72-a067-11e0-aee5-52540025f9af&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://maloo.whamcloud.com/test_sets/f1c2fd72-a067-11e0-aee5-52540025f9af&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Please find the debug logs in the attachment.&lt;/p&gt;

&lt;p&gt;This is a known issue: &lt;a href=&quot;https://bugzilla.lustre.org/show_bug.cgi?id=22777&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;bug 22777&lt;/a&gt;&lt;/p&gt;</description>
                <environment>&lt;br/&gt;
Lustre Branch: v1_8_6_RC3&lt;br/&gt;
Lustre Build: &lt;a href=&quot;http://newbuild.whamcloud.com/job/lustre-b1_8/90/&quot;&gt;http://newbuild.whamcloud.com/job/lustre-b1_8/90/&lt;/a&gt;&lt;br/&gt;
e2fsprogs Build: &lt;a href=&quot;http://newbuild.whamcloud.com/job/e2fsprogs-master/42/&quot;&gt;http://newbuild.whamcloud.com/job/e2fsprogs-master/42/&lt;/a&gt;&lt;br/&gt;
Distro/Arch: RHEL6/x86_64(patchless client, in-kernel OFED, kernel version: 2.6.32-131.2.1.el6)&lt;br/&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;RHEL5/x86_64(server, OFED 1.5.3.1, kernel version: 2.6.18-238.12.1.el5_lustre)&lt;br/&gt;
ENABLE_QUOTA=yes&lt;br/&gt;
FAILURE_MODE=HARD&lt;br/&gt;
&lt;br/&gt;
MGS/MDS Nodes: client-10-ib(active), client-12-ib(passive)&lt;br/&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;\  /&lt;br/&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;1 combined MGS/MDT&lt;br/&gt;
&lt;br/&gt;
OSS Nodes: fat-amd-1-ib(active), fat-amd-2-ib(active)&lt;br/&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;\  /&lt;br/&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;OST1 (active in fat-amd-1-ib)&lt;br/&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;OST2 (active in fat-amd-2-ib)&lt;br/&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;OST3 (active in fat-amd-1-ib)&lt;br/&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;OST4 (active in fat-amd-2-ib)&lt;br/&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;OST5 (active in fat-amd-1-ib)&lt;br/&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;OST6 (active in fat-amd-2-ib)&lt;br/&gt;
&lt;br/&gt;
Client Nodes:  fat-amd-3-ib, client-[6,7,16,21,24]-ib&lt;br/&gt;
</environment>
        <key id="11241">LU-463</key>
            <summary>orphan recovery happens too late, causing writes to fail with ENOENT after recovery</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="1" iconUrl="https://jira.whamcloud.com/images/icons/priorities/blocker.svg">Blocker</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="3">Duplicate</resolution>
                                        <assignee username="hongchao.zhang">Hongchao Zhang</assignee>
                                    <reporter username="yujian">Jian Yu</reporter>
                        <labels>
                    </labels>
                <created>Sun, 26 Jun 2011 23:46:50 +0000</created>
                <updated>Tue, 5 Apr 2016 22:57:03 +0000</updated>
                            <resolved>Tue, 5 Apr 2016 22:57:03 +0000</resolved>
                                    <version>Lustre 2.1.0</version>
                    <version>Lustre 2.2.0</version>
                    <version>Lustre 2.1.1</version>
                    <version>Lustre 2.1.2</version>
                    <version>Lustre 2.1.3</version>
                    <version>Lustre 2.1.4</version>
                    <version>Lustre 2.1.5</version>
                    <version>Lustre 1.8.8</version>
                    <version>Lustre 1.8.6</version>
                    <version>Lustre 1.8.9</version>
                    <version>Lustre 2.1.6</version>
                                                        <due></due>
                            <votes>0</votes>
                                    <watches>15</watches>
                                                                            <comments>
                            <comment id="16979" author="pjones" created="Mon, 27 Jun 2011 01:19:40 +0000"  >&lt;p&gt;johann&lt;/p&gt;

&lt;p&gt;Do you agree that this is a known issue? If so, does this mean that this is not a blocker to 1.8.6-wc?&lt;/p&gt;

&lt;p&gt;Peter&lt;/p&gt;</comment>
                            <comment id="16994" author="johann" created="Mon, 27 Jun 2011 08:03:19 +0000"  >&lt;p&gt;Yes, this is a known issue that we hit sometimes with those tests.&lt;/p&gt;</comment>
                            <comment id="18175" author="yujian" created="Mon, 25 Jul 2011 03:18:44 +0000"  >&lt;p&gt;Lustre Tag: v2_0_65_0&lt;br/&gt;
Lustre Build: &lt;a href=&quot;http://newbuild.whamcloud.com/job/lustre-master/204/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://newbuild.whamcloud.com/job/lustre-master/204/&lt;/a&gt;&lt;br/&gt;
e2fsprogs Build: &lt;a href=&quot;http://newbuild.whamcloud.com/job/e2fsprogs-master/42/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://newbuild.whamcloud.com/job/e2fsprogs-master/42/&lt;/a&gt;&lt;br/&gt;
Distro/Arch: RHEL6/x86_64(in-kernel OFED, kernel version: 2.6.32-131.2.1.el6)&lt;br/&gt;
ENABLE_QUOTA=yes&lt;br/&gt;
FAILURE_MODE=HARD&lt;br/&gt;
FLAVOR=OSS&lt;/p&gt;

&lt;p&gt;Lustre cluster configuration:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;MGS/MDS Node: client-7-ib
OSS Nodes:    fat-amd-1-ib(active), fat-amd-2-ib(active)
                                 \  /
                                 OST1 (active in fat-amd-1-ib)
                                 OST2 (active in fat-amd-2-ib)
                                 OST3 (active in fat-amd-1-ib)
                                 OST4 (active in fat-amd-2-ib)
                                 OST5 (active in fat-amd-1-ib)
                                 OST6 (active in fat-amd-2-ib)
              client-8-ib (OST7)
Client Nodes: fat-amd-3-ib, client-[9,11,12,13]-ib
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;While running recovery-mds-scale with FLAVOR=OSS, it failed as follows:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;==== Checking the clients loads AFTER  failover -- failure NOT OK
ost5 has failed over 1 times, and counting...
sleeping 417 seconds ... 
tar: etc/selinux/targeted/modules/active/modules/postgrey.pp: Cannot write: No such file or directory
tar: Exiting with failure status due to previous errors
Found the END_RUN_FILE file: /home/yujian/test_logs/end_run_file
client-13-ib
client-12-ib
Client load failed on node client-13-ib

client client-13-ib load stdout and debug files :
              /tmp/recovery-mds-scale.log_run_dbench.sh-client-13-ib
              /tmp/recovery-mds-scale.log_run_dbench.sh-client-13-ib.debug
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;/tmp/recovery-mds-scale.log_run_dbench.sh-client-13-ib:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;copying /usr/share/dbench/client.txt to /mnt/lustre/d0.dbench-client-13-ib/client.txt
running &apos;dbench 2&apos; on /mnt/lustre/d0.dbench-client-13-ib at Sun Jul 24 21:00:21 PDT 2011
dbench PID=29959
dbench version 4.00 - Copyright Andrew Tridgell 1999-2004

Running for 600 seconds with load &apos;client.txt&apos; and minimum warmup 120 secs
0 of 2 processes prepared for launch   0 sec
2 of 2 processes prepared for launch   0 sec
releasing clients
   2       666    42.32 MB/sec  warmup   1 sec  latency 354.182 ms
   2      1436    26.02 MB/sec  warmup   2 sec  latency 386.166 ms
&amp;lt;~snip~&amp;gt;
   2     18452     0.00 MB/sec  execute  67 sec  latency 171356.589 ms
   2     18452     0.00 MB/sec  execute  68 sec  latency 172356.684 ms
[18468] write failed on handle 13839 (Cannot send after transport endpoint shutdown)
Child failed with status 1
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;/tmp/recovery-mds-scale.log_run_dbench.sh-client-13-ib.debug:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;2011-07-24 21:00:21: dbench run starting
+ mkdir -p /mnt/lustre/d0.dbench-client-13-ib
+ load_pid=29927
+ wait 29927
+ rundbench -D /mnt/lustre/d0.dbench-client-13-ib 2
touch: missing file operand
Try `touch --help&apos; for more information.
+ &apos;[&apos; 1 -eq 0 &apos;]&apos;
++ date &apos;+%F %H:%M:%S&apos;
+ echoerr &apos;2011-07-24 21:03:30: dbench failed&apos;
+ echo &apos;2011-07-24 21:03:30: dbench failed&apos;
2011-07-24 21:03:30: dbench failed
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;/tmp/recovery-mds-scale.log_run_tar.sh-client-12-ib:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;tar: etc/selinux/targeted/modules/active/modules/postgrey.pp: Cannot write: No such file or directory
tar: Exiting with failure status due to previous errors
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;/tmp/recovery-mds-scale.log_run_tar.sh-client-12-ib.debug:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;2011-07-24 21:00:20: tar run starting
+ mkdir -p /mnt/lustre/d0.tar-client-12-ib
+ cd /mnt/lustre/d0.tar-client-12-ib
+ wait 29934
+ do_tar
+ tar cf - /etc
+ tar xf -
+ tee /tmp/recovery-mds-scale.log_run_tar.sh-client-12-ib
tar: Removing leading `/&apos; from member names
+ return 2
+ RC=2
++ grep &apos;exit delayed from previous errors&apos; /tmp/recovery-mds-scale.log_run_tar.sh-client-12-ib
+ PREV_ERRORS=
+ true
+ &apos;[&apos; 2 -ne 0 -a &apos;&apos; -a &apos;&apos; &apos;]&apos;
+ &apos;[&apos; 2 -eq 0 &apos;]&apos;
++ date &apos;+%F %H:%M:%S&apos;
+ echoerr &apos;2011-07-24 21:03:32: tar failed&apos;
+ echo &apos;2011-07-24 21:03:32: tar failed&apos;
2011-07-24 21:03:32: tar failed
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Syslog on client node client-13-ib showed that:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;Jul 24 21:03:27 client-13 kernel: Lustre: DEBUG MARKER: ost5 has failed over 1 times, and counting...
Jul 24 21:03:29 client-13 kernel: Lustre: 29024:0:(client.c:2527:ptlrpc_replay_interpret()) @@@ Version mismatch during replay
Jul 24 21:03:29 client-13 kernel:  req@ffff8802fcbb5000 x1375277041099764/t470(470) o-1-&amp;gt;lustre-OST0004_UUID@192.168.4.133@o2ib:6/4 lens 512/400 e 1 to 0 dl 1311566653 ref 2 fl Interpret:R/ffffffff/ffffffff rc -75/-1
Jul 24 21:03:30 client-13 kernel: Lustre: 29024:0:(import.c:1190:completed_replay_interpret()) lustre-OST0002-osc-ffff88030ec62400: version recovery fails, reconnecting
Jul 24 21:03:30 client-13 kernel: LustreError: 167-0: This client was evicted by lustre-OST0002; in progress operations using this service will fail.
Jul 24 21:03:30 client-13 kernel: LustreError: 29023:0:(client.c:1057:ptlrpc_import_delay_req()) @@@ IMP_INVALID  req@ffff8802fcbdf400 x1375277041107454/t0(0) o-1-&amp;gt;lustre-OST0002_UUID@192.168.4.133@o2ib:28/4 lens 296/352 e 0 to 0 dl 0 ref 1 fl Rpc:/ffffffff/ffffffff rc 0/-1
Jul 24 21:03:30 client-13 kernel: LustreError: 29021:0:(client.c:1057:ptlrpc_import_delay_req()) @@@ IMP_INVALID  req@ffff8803071b3800 x1375277041107455/t0(0) o-1-&amp;gt;lustre-OST0002_UUID@192.168.4.133@o2ib:6/4 lens 456/416 e 0 to 0 dl 0 ref 2 fl Rpc:/ffffffff/ffffffff rc 0/-1
Jul 24 21:03:30 client-13 kernel: Lustre: lustre-OST0002-osc-ffff88030ec62400: Connection restored to service lustre-OST0002 using nid 192.168.4.133@o2ib.
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Syslog on client node client-12-ib showed that:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;Jul 24 21:03:27 client-12 kernel: Lustre: DEBUG MARKER: ost5 has failed over 1 times, and counting...
Jul 24 21:03:29 client-12 kernel: LustreError: 29051:0:(client.c:2570:ptlrpc_replay_interpret()) @@@ status -2, old was 0  req@ffff880302aa3800 x1375277041064805/t507(507) o-1-&amp;gt;lustre-OST0004_UUID@192.168.4.133@o2ib:28/4 lens 408/400 e 0 to 0 dl 1311566655 ref 2 fl Interpret:R/ffffffff/ffffffff rc -2/-1
Jul 24 21:03:29 client-12 kernel: LustreError: 29051:0:(client.c:2570:ptlrpc_replay_interpret()) Skipped 10 previous similar messages
Jul 24 21:03:29 client-12 kernel: Lustre: lustre-OST0004-osc-ffff88030b964000: Connection restored to service lustre-OST0004 using nid 192.168.4.133@o2ib.
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Syslog on MDS node client-7-ib showed that:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;Jul 24 21:03:27 client-7 kernel: Lustre: DEBUG MARKER: ost5 has failed over 1 times, and counting...
Jul 24 21:03:29 client-7 kernel: LustreError: 29520:0:(client.c:2570:ptlrpc_replay_interpret()) @@@ status -2, old was 0  req@ffff8802a519e000 x1375277024283756/t508(508) o-1-&amp;gt;lustre-OST0004_UUID@192.168.4.133@o2ib:28/4 lens 408/400 e 0 to 0 dl 1311566655 ref 2 fl Interpret:R/ffffffff/ffffffff rc -2/-1
Jul 24 21:03:29 client-7 kernel: LustreError: 29520:0:(client.c:2570:ptlrpc_replay_interpret()) Skipped 10 previous similar messages
Jul 24 21:03:29 client-7 kernel: Lustre: lustre-OST0004-osc-MDT0000: Connection restored to service lustre-OST0004 using nid 192.168.4.133@o2ib.
Jul 24 21:03:29 client-7 kernel: Lustre: MDS mdd_obd-lustre-MDT0000: lustre-OST0004_UUID now active, resetting orphans
Jul 24 21:03:29 client-7 kernel: Lustre: 31049:0:(quota_master.c:1760:mds_quota_recovery()) Only 6/7 OSTs are active, abort quota recovery
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Syslog on OSS node fat-amd-2-ib showed that:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;Jul 24 21:03:27 fat-amd-2 kernel: Lustre: DEBUG MARKER: ost5 has failed over 1 times, and counting...
Jul 24 21:03:29 fat-amd-2 kernel: LustreError: 32133:0:(filter.c:4111:filter_destroy())  lustre-OST0004: can not find olg of group 0
Jul 24 21:03:29 fat-amd-2 kernel: LustreError: 32133:0:(filter.c:4111:filter_destroy()) Skipped 9 previous similar messages
Jul 24 21:03:29 fat-amd-2 kernel: LustreError: 32133:0:(genops.c:1267:class_disconnect_stale_exports()) lustre-OST0004: disconnect stale client de6f48d2-f9b1-c66d-ae70-0bfaf5c8e6b5@192.168.4.13@o2ib
Jul 24 21:03:29 fat-amd-2 kernel: LustreError: 32133:0:(filter.c:2927:filter_grant_sanity_check()) filter_disconnect: tot_granted 5345280 != fo_tot_granted 11374592
Jul 24 21:03:29 fat-amd-2 kernel: LustreError: 32133:0:(ldlm_resource.c:1084:ldlm_resource_get()) lvbo_init failed for resource 326: rc -2
Jul 24 21:03:29 fat-amd-2 kernel: LustreError: 32133:0:(ldlm_resource.c:1084:ldlm_resource_get()) Skipped 10 previous similar messages
Jul 24 21:03:29 fat-amd-2 kernel: Lustre: lustre-OST0004: sending delayed replies to recovered clients
Jul 24 21:03:29 fat-amd-2 kernel: Lustre: lustre-OST0004: received MDS connection from 192.168.4.7@o2ib
Jul 24 21:03:29 fat-amd-2 kernel: Lustre: 30378:0:(filter.c:2550:filter_llog_connect()) lustre-OST0004: Recovery from log 0xff506/0x0:5a402c04
Jul 24 21:03:29 fat-amd-2 kernel: LustreError: 30513:0:(filter_io.c:723:filter_preprw_write()) lustre-OST0004: BRW to missing obj 342/0:rc -2
Jul 24 21:03:30 fat-amd-2 kernel: Lustre: 30368:0:(filter.c:2846:filter_connect()) lustre-OST0002: Received MDS connection (0x2d0794f47e7586a3); group 0
Jul 24 21:03:30 fat-amd-2 kernel: Lustre: 30368:0:(filter.c:2846:filter_connect()) Skipped 9 previous similar messages
Jul 24 21:04:15 fat-amd-2 kernel: Lustre: 30360:0:(ldlm_lib.c:871:target_handle_connect()) lustre-OST0004: connection from de6f48d2-f9b1-c66d-ae70-0bfaf5c8e6b5@192.168.4.13@o2ib t470 exp (null) cur 1311566655 last 0
Jul 24 21:04:15 fat-amd-2 kernel: Lustre: 30360:0:(ldlm_lib.c:871:target_handle_connect()) Skipped 5 previous similar messages
Jul 24 21:04:15 fat-amd-2 kernel: Lustre: 30360:0:(filter.c:2846:filter_connect()) lustre-OST0004: Received MDS connection (0x2d0794f47e759c6e); group 0
Jul 24 21:04:15 fat-amd-2 kernel: Lustre: 30360:0:(sec.c:1474:sptlrpc_import_sec_adapt()) import lustre-OST0004-&amp;gt;NET_0x50000c0a8040d_UUID netid 50000: select flavor null
Jul 24 21:04:15 fat-amd-2 kernel: Lustre: 30360:0:(sec.c:1474:sptlrpc_import_sec_adapt()) Skipped 5 previous similar messages
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Maloo report: &lt;a href=&quot;https://maloo.whamcloud.com/test_sets/eff97732-b67e-11e0-8bdf-52540025f9af&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://maloo.whamcloud.com/test_sets/eff97732-b67e-11e0-8bdf-52540025f9af&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Please refer to the attached recovery-oss-scale-1311567030.tar.bz2 for more syslogs and debug logs.&lt;/p&gt;</comment>
                            <comment id="20103" author="yujian" created="Fri, 9 Sep 2011 00:11:37 +0000"  >&lt;p&gt;Lustre Branch: master&lt;br/&gt;
Lustre Build: &lt;a href=&quot;http://newbuild.whamcloud.com/job/lustre-master/276/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://newbuild.whamcloud.com/job/lustre-master/276/&lt;/a&gt;&lt;br/&gt;
e2fsprogs Build: &lt;a href=&quot;http://newbuild.whamcloud.com/job/e2fsprogs-master/54/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://newbuild.whamcloud.com/job/e2fsprogs-master/54/&lt;/a&gt;&lt;br/&gt;
Distro/Arch: RHEL5/x86_64(in-kernel OFED, kernel version: 2.6.18-238.19.1.el5)&lt;br/&gt;
ENABLE_QUOTA=yes&lt;br/&gt;
FAILURE_MODE=HARD&lt;br/&gt;
FLAVOR=OSS&lt;/p&gt;

&lt;p&gt;recovery-mds-scale(FLAVOR=OSS) failed with the same issue: &lt;a href=&quot;https://maloo.whamcloud.com/test_sets/d0a83c78-da97-11e0-8d02-52540025f9af&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://maloo.whamcloud.com/test_sets/d0a83c78-da97-11e0-8d02-52540025f9af&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Please refer to the attached recovery-oss-scale.1315539020.log.tar.bz2 for more logs.&lt;/p&gt;</comment>
                            <comment id="20352" author="yujian" created="Tue, 20 Sep 2011 04:35:41 +0000"  >&lt;p&gt;Lustre Tag: v2_1_0_0_RC2&lt;br/&gt;
Lustre Build: &lt;a href=&quot;http://newbuild.whamcloud.com/job/lustre-master/283/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://newbuild.whamcloud.com/job/lustre-master/283/&lt;/a&gt;&lt;br/&gt;
e2fsprogs Build: &lt;a href=&quot;http://newbuild.whamcloud.com/job/e2fsprogs-master/54/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://newbuild.whamcloud.com/job/e2fsprogs-master/54/&lt;/a&gt;&lt;br/&gt;
Distro/Arch: RHEL6/x86_64(in-kernel OFED, kernel version: 2.6.32-131.6.1.el6.x86_64)&lt;br/&gt;
ENABLE_QUOTA=yes&lt;br/&gt;
FAILURE_MODE=HARD&lt;br/&gt;
FLAVOR=OSS&lt;/p&gt;

&lt;p&gt;After running about 2 hours (OSS failed over 7 times), recovery-mds-scale (FLAVOR=OSS) test failed with the same issue:&lt;br/&gt;
&lt;a href=&quot;https://maloo.whamcloud.com/test_sets/50e83640-e362-11e0-9909-52540025f9af&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://maloo.whamcloud.com/test_sets/50e83640-e362-11e0-9909-52540025f9af&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Please refer to the attached recovery-oss-scale.1316503823.log.tar.bz2 for more logs.&lt;/p&gt;</comment>
                            <comment id="21176" author="yujian" created="Thu, 13 Oct 2011 01:07:25 +0000"  >&lt;p&gt;Lustre Tag: v1_8_7_WC1_RC1&lt;br/&gt;
Lustre Build: &lt;a href=&quot;http://newbuild.whamcloud.com/job/lustre-b1_8/142/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://newbuild.whamcloud.com/job/lustre-b1_8/142/&lt;/a&gt;&lt;br/&gt;
e2fsprogs Build: &lt;a href=&quot;http://newbuild.whamcloud.com/job/e2fsprogs-master/65/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://newbuild.whamcloud.com/job/e2fsprogs-master/65/&lt;/a&gt;&lt;br/&gt;
Distro/Arch: RHEL5/x86_64(server, OFED 1.5.3.2, ext4-based ldiskfs), RHEL6/x86_64(client, in-kernel OFED)&lt;br/&gt;
ENABLE_QUOTA=yes&lt;br/&gt;
FAILURE_MODE=HARD&lt;br/&gt;
FLAVOR=OSS&lt;/p&gt;

&lt;p&gt;recovery-mds-scale (FLAVOR=OSS) test failed with the same issue: &lt;a href=&quot;https://maloo.whamcloud.com/test_sets/004f464c-f550-11e0-908b-52540025f9af&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://maloo.whamcloud.com/test_sets/004f464c-f550-11e0-908b-52540025f9af&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Please refer to the attached recovery-oss-scale.1318474116.log.tar.bz2 for more logs.&lt;/p&gt;</comment>
                            <comment id="29540" author="yujian" created="Wed, 22 Feb 2012 07:49:12 +0000"  >&lt;p&gt;Lustre Tag: v2_1_1_0_RC4&lt;br/&gt;
Lustre Build: &lt;a href=&quot;http://build.whamcloud.com/job/lustre-b2_1/44/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://build.whamcloud.com/job/lustre-b2_1/44/&lt;/a&gt;&lt;br/&gt;
e2fsprogs Build: &lt;a href=&quot;http://build.whamcloud.com/job/e2fsprogs-master/217/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://build.whamcloud.com/job/e2fsprogs-master/217/&lt;/a&gt;&lt;br/&gt;
Distro/Arch: RHEL6/x86_64 (kernel version: 2.6.32-220.el6)&lt;br/&gt;
Network: IB (in-kernel OFED)&lt;br/&gt;
ENABLE_QUOTA=yes&lt;br/&gt;
FAILURE_MODE=HARD&lt;br/&gt;
FLAVOR=OSS&lt;/p&gt;

&lt;p&gt;Configuration:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;MGS/MDS Nodes: client-8-ib

OSS Nodes: client-18-ib(active), client-19-ib(active)
                              \ /
                              OST1 (active in client-18-ib)
                              OST2 (active in client-19-ib)
                              OST3 (active in client-18-ib)
                              OST4 (active in client-19-ib)
                              OST5 (active in client-18-ib)
                              OST6 (active in client-19-ib)
           client-9-ib(OST7)

Client Nodes: client-[1,4,17],fat-amd-2,fat-intel-2

Network Addresses:
client-1-ib: 192.168.4.1
client-4-ib: 192.168.4.4
client-8-ib: 192.168.4.8
client-9-ib: 192.168.4.9
client-17-ib: 192.168.4.17
client-18-ib: 192.168.4.18
client-19-ib: 192.168.4.19
fat-amd-2-ib: 192.168.4.133
fat-intel-2-ib: 192.168.4.129
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;While running recovery-mds-scale with FLAVOR=OSS, it failed as follows:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;==== Checking the clients loads AFTER  failover -- failure NOT OK
ost1 has failed over 1 times, and counting...
sleeping 717 seconds ...
tar: etc/selinux/targeted/contexts/users/root: Cannot write: No such file or directory
tar: Exiting with failure status due to previous errors
Found the END_RUN_FILE file: /home/yujian/test_logs/end_run_file
client-1-ib
Client load failed on node client-1-ib

client client-1-ib load stdout and debug files :
              /tmp/recovery-mds-scale.log_run_tar.sh-client-1-ib
              /tmp/recovery-mds-scale.log_run_tar.sh-client-1-ib.debug
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;/tmp/recovery-mds-scale.log_run_tar.sh-client-1-ib:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;tar: etc/selinux/targeted/contexts/users/root: Cannot write: No such file or directory
tar: Exiting with failure status due to previous errors
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;/tmp/recovery-mds-scale.log_run_tar.sh-client-1-ib.debug:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;&amp;lt;~snip~&amp;gt;
2012-02-22 03:56:04: tar run starting
+ mkdir -p /mnt/lustre/d0.tar-client-1-ib
+ cd /mnt/lustre/d0.tar-client-1-ib
+ wait 11196
+ do_tar
+ tar cf - /etc
+ tar xf -
+ tee /tmp/recovery-mds-scale.log_run_tar.sh-client-1-ib
tar: Removing leading `/&apos; from member names
+ return 2
+ RC=2
++ grep &apos;exit delayed from previous errors&apos; /tmp/recovery-mds-scale.log_run_tar.sh-client-1-ib
+ PREV_ERRORS=
+ true
+ &apos;[&apos; 2 -ne 0 -a &apos;&apos; -a &apos;&apos; &apos;]&apos;
+ &apos;[&apos; 2 -eq 0 &apos;]&apos;
++ date &apos;+%F %H:%M:%S&apos;
+ echoerr &apos;2012-02-22 03:59:25: tar failed&apos;
+ echo &apos;2012-02-22 03:59:25: tar failed&apos;
2012-02-22 03:59:25: tar failed
&amp;lt;~snip~&amp;gt;
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Syslog on client node client-1-ib showed that:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;Feb 22 03:59:12 client-1 kernel: Lustre: DEBUG MARKER: ost1 has failed over 1 times, and counting...
Feb 22 03:59:19 client-1 kernel: LustreError: 10064:0:(client.c:2590:ptlrpc_replay_interpret()) @@@ status -2, old was 0  req@ffff88031d605c00 x1394513519058221/t379(379) o-1-&amp;gt;lustre-OST0004_UUID@192.168.4.19@o2ib:28/4 lens 408/400 e 0 to 0 dl 1329912005 ref 2 fl Interpret:R/ffffffff/ffffffff rc -2/-1
Feb 22 03:59:19 client-1 kernel: LustreError: 10064:0:(client.c:2590:ptlrpc_replay_interpret()) Skipped 4 previous similar messages
Feb 22 03:59:19 client-1 kernel: Lustre: lustre-OST0004-osc-ffff88032c89a400: Connection restored to service lustre-OST0004 using nid 192.168.4.19@o2ib.
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Syslog on MDS node client-8-ib showed that:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;Feb 22 03:59:12 client-8-ib kernel: Lustre: DEBUG MARKER: ost1 has failed over 1 times, and counting...
Feb 22 03:59:19 client-8-ib kernel: LustreError: 5628:0:(client.c:2590:ptlrpc_replay_interpret()) @@@ status -2, old was 0  req@ffff88030708c400 x1394513506470444/t380(380) o-1-&amp;gt;lustre-OST0004_UUID@192.168.4.19@o2ib:28/4 lens 408/400 e 0 to 0 dl 1329912005 ref 2 fl Interpret:R/ffffffff/ffffffff rc -2/-1
Feb 22 03:59:19 client-8-ib kernel: LustreError: 5628:0:(client.c:2590:ptlrpc_replay_interpret()) Skipped 4 previous similar messages
Feb 22 03:59:19 client-8-ib kernel: Lustre: lustre-OST0004-osc-MDT0000: Connection restored to service lustre-OST0004 using nid 192.168.4.19@o2ib.
Feb 22 03:59:19 client-8-ib kernel: Lustre: MDS mdd_obd-lustre-MDT0000: lustre-OST0004_UUID now active, resetting orphans
Feb 22 03:59:19 client-8-ib kernel: Lustre: 7395:0:(quota_master.c:1760:mds_quota_recovery()) Only 3/7 OSTs are active, abort quota recovery
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Syslog on OSS node client-19-ib showed that:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;Feb 22 03:59:12 client-19-ib kernel: Lustre: DEBUG MARKER: ost1 has failed over 1 times, and counting...
Feb 22 03:59:18 client-19-ib kernel: Lustre: 7501:0:(filter.c:2697:filter_connect_internal()) lustre-OST0004: Received MDS connection for group 0
Feb 22 03:59:18 client-19-ib kernel: LustreError: 9874:0:(filter.c:4141:filter_destroy())  lustre-OST0004: can not find olg of group 0
Feb 22 03:59:18 client-19-ib kernel: LustreError: 9874:0:(filter.c:4141:filter_destroy()) Skipped 22 previous similar messages
Feb 22 03:59:19 client-19-ib kernel: Lustre: lustre-OST0004: sending delayed replies to recovered clients
Feb 22 03:59:19 client-19-ib kernel: Lustre: lustre-OST0004: received MDS connection from 192.168.4.8@o2ib
Feb 22 03:59:19 client-19-ib kernel: Lustre: 7530:0:(filter.c:2553:filter_llog_connect()) lustre-OST0004: Recovery from log 0xff506/0x0:8f36a744
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Please refer to /scratch/logs/2.1.1/recovery-oss-scale.1329912676.log.tar.bz2 on brent node for debug and other logs.&lt;/p&gt;</comment>
                            <comment id="32828" author="yujian" created="Thu, 29 Mar 2012 09:17:51 +0000"  >&lt;p&gt;Lustre Tag: v2_2_0_0_RC2&lt;br/&gt;
Lustre Build: &lt;a href=&quot;http://build.whamcloud.com/job/lustre-b2_2/17/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://build.whamcloud.com/job/lustre-b2_2/17/&lt;/a&gt;&lt;br/&gt;
Distro/Arch: SLES11SP1/x86_64(client), RHEL6.2/x86_64(server)&lt;br/&gt;
Network: TCP (1GigE)&lt;br/&gt;
ENABLE_QUOTA=yes&lt;br/&gt;
FAILURE_MODE=HARD&lt;/p&gt;

&lt;p&gt;The same issue occurred while failing over OST: &lt;a href=&quot;https://maloo.whamcloud.com/test_sets/b6eb20c8-799f-11e1-9d2a-5254004bbbd3&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://maloo.whamcloud.com/test_sets/b6eb20c8-799f-11e1-9d2a-5254004bbbd3&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="38814" author="yujian" created="Tue, 15 May 2012 08:05:45 +0000"  >&lt;p&gt;Lustre Tag: v1_8_8_WC1_RC1&lt;br/&gt;
Lustre Build: &lt;a href=&quot;http://build.whamcloud.com/job/lustre-b1_8/195/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://build.whamcloud.com/job/lustre-b1_8/195/&lt;/a&gt;&lt;br/&gt;
Distro/Arch: RHEL5.8/x86_64(server), RHEL6.2/x86_64(client)&lt;br/&gt;
Network: TCP (1GigE)&lt;br/&gt;
ENABLE_QUOTA=yes&lt;br/&gt;
FAILURE_MODE=HARD&lt;/p&gt;

&lt;p&gt;The same issue occurred while failing over OST: &lt;a href=&quot;https://maloo.whamcloud.com/test_sets/be9c60e0-9e82-11e1-9080-52540035b04c&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://maloo.whamcloud.com/test_sets/be9c60e0-9e82-11e1-9080-52540035b04c&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="39799" author="yujian" created="Fri, 1 Jun 2012 05:43:01 +0000"  >&lt;p&gt;Lustre Tag: v2_1_2_RC2&lt;br/&gt;
Lustre Build: &lt;a href=&quot;http://build.whamcloud.com/job/lustre-b2_1/86/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://build.whamcloud.com/job/lustre-b2_1/86/&lt;/a&gt;&lt;br/&gt;
Distro/Arch: RHEL6.2/x86_64&lt;br/&gt;
Network: TCP (1GigE)&lt;br/&gt;
ENABLE_QUOTA=yes&lt;br/&gt;
FAILURE_MODE=HARD&lt;/p&gt;

&lt;p&gt;The same issue occurred while failing over OST: &lt;a href=&quot;https://maloo.whamcloud.com/test_sets/c9193e08-abca-11e1-9b8f-52540035b04c&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://maloo.whamcloud.com/test_sets/c9193e08-abca-11e1-9b8f-52540035b04c&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="43175" author="yujian" created="Tue, 14 Aug 2012 07:19:30 +0000"  >&lt;p&gt;Lustre Tag: v2_1_3_RC1&lt;br/&gt;
Lustre Build: &lt;a href=&quot;http://build.whamcloud.com/job/lustre-b2_1/113/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://build.whamcloud.com/job/lustre-b2_1/113/&lt;/a&gt;&lt;br/&gt;
Distro/Arch: RHEL6.3/x86_64 (kernel version: 2.6.32-279.2.1.el6)&lt;br/&gt;
Network: IB (in-kernel OFED)&lt;br/&gt;
ENABLE_QUOTA=yes&lt;br/&gt;
FAILURE_MODE=HARD&lt;/p&gt;

&lt;p&gt;The issue occurred again while running recovery-mds-scale failover_ost test:&lt;br/&gt;
&lt;a href=&quot;https://maloo.whamcloud.com/test_sets/b18a1330-e5ad-11e1-ae4e-52540035b04c&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://maloo.whamcloud.com/test_sets/b18a1330-e5ad-11e1-ae4e-52540035b04c&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="43548" author="yujian" created="Tue, 21 Aug 2012 07:34:30 +0000"  >&lt;p&gt;Another instance:&lt;br/&gt;
&lt;a href=&quot;https://maloo.whamcloud.com/test_sets/f99459d2-eb26-11e1-b137-52540035b04c&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://maloo.whamcloud.com/test_sets/f99459d2-eb26-11e1-b137-52540035b04c&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="48814" author="pjones" created="Wed, 5 Dec 2012 13:22:29 +0000"  >&lt;p&gt;Hongchao&lt;/p&gt;

&lt;p&gt;Could you please look into this one?&lt;/p&gt;

&lt;p&gt;Thanks&lt;/p&gt;

&lt;p&gt;Peter&lt;/p&gt;</comment>
                            <comment id="48853" author="hongchao.zhang" created="Thu, 6 Dec 2012 06:37:28 +0000"  >&lt;p&gt;How about fixing the bug by waiting for some time when -2 (ENOENT) is encountered on an OST that is still in recovery mode?&lt;br/&gt;
I will produce a patch along these lines.&lt;/p&gt;</comment>
                            <comment id="49110" author="yujian" created="Wed, 12 Dec 2012 06:24:55 +0000"  >&lt;p&gt;This has been blocking the recovery-mds-scale failover_ost test.&lt;/p&gt;</comment>
                            <comment id="49369" author="hongchao.zhang" created="Tue, 18 Dec 2012 05:04:17 +0000"  >&lt;p&gt;The patch against b2_1 is being created and tested.&lt;/p&gt;</comment>
                            <comment id="49440" author="hongchao.zhang" created="Wed, 19 Dec 2012 05:56:34 +0000"  >&lt;p&gt;The patch is tracked at &lt;a href=&quot;http://review.whamcloud.com/#change,4868&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/#change,4868&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="50778" author="hongchao.zhang" created="Fri, 18 Jan 2013 04:37:18 +0000"  >&lt;p&gt;The patch is updated: it will try to create the missing object if it was not created during recovery, since precreation RPCs are not replayable.&lt;br/&gt;
It will wait for the creation to complete, or return EINPROGRESS if OBD_CONNECT_EINPROGRESS is set in obd_export-&amp;gt;exp_connect_flags.&lt;/p&gt;</comment>
                            <comment id="51171" author="yujian" created="Thu, 24 Jan 2013 21:33:30 +0000"  >&lt;p&gt;Lustre Branch: b1_8&lt;br/&gt;
Lustre Build: &lt;a href=&quot;http://build.whamcloud.com/job/lustre-b1_8/251/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://build.whamcloud.com/job/lustre-b1_8/251/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The same issue occurred: &lt;a href=&quot;https://maloo.whamcloud.com/test_sets/9734baba-661f-11e2-a42b-52540035b04c&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://maloo.whamcloud.com/test_sets/9734baba-661f-11e2-a42b-52540035b04c&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="52630" author="yujian" created="Mon, 18 Feb 2013 10:44:41 +0000"  >&lt;p&gt;Lustre Tag: v1_8_9_WC1_RC1&lt;br/&gt;
Lustre Build: &lt;a href=&quot;http://build.whamcloud.com/job/lustre-b1_8/256&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://build.whamcloud.com/job/lustre-b1_8/256&lt;/a&gt;&lt;br/&gt;
Distro/Arch: RHEL5.9/x86_64&lt;br/&gt;
Network: IB (in-kernel OFED)&lt;br/&gt;
ENABLE_QUOTA=yes&lt;br/&gt;
FAILURE_MODE=HARD&lt;/p&gt;

&lt;p&gt;The same issue occurred: &lt;a href=&quot;https://maloo.whamcloud.com/test_sets/d7e82752-79db-11e2-8fd2-52540035b04c&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://maloo.whamcloud.com/test_sets/d7e82752-79db-11e2-8fd2-52540035b04c&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="52739" author="yujian" created="Wed, 20 Feb 2013 06:25:12 +0000"  >&lt;p&gt;Lustre Tag: v1_8_9_WC1_RC2&lt;br/&gt;
Lustre Build: &lt;a href=&quot;http://build.whamcloud.com/job/lustre-b1_8/258&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://build.whamcloud.com/job/lustre-b1_8/258&lt;/a&gt;&lt;br/&gt;
Distro/Arch: RHEL5.9/x86_64(server), RHEL6.3/x86_64(client)&lt;br/&gt;
Network: TCP (1GigE)&lt;br/&gt;
ENABLE_QUOTA=yes&lt;br/&gt;
FAILURE_MODE=HARD&lt;/p&gt;

&lt;p&gt;The same issue occurred: &lt;a href=&quot;https://maloo.whamcloud.com/test_sets/79cd620e-7af3-11e2-b916-52540035b04c&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://maloo.whamcloud.com/test_sets/79cd620e-7af3-11e2-b916-52540035b04c&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="54350" author="yujian" created="Tue, 19 Mar 2013 07:49:59 +0000"  >&lt;p&gt;Lustre Branch: b2_1&lt;br/&gt;
Lustre Build: &lt;a href=&quot;http://build.whamcloud.com/job/lustre-b2_1/189/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://build.whamcloud.com/job/lustre-b2_1/189/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The same issue occurred: &lt;a href=&quot;https://maloo.whamcloud.com/test_sets/278684ba-902b-11e2-9b28-52540035b04c&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://maloo.whamcloud.com/test_sets/278684ba-902b-11e2-9b28-52540035b04c&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Dmesg on OSS node showed that:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;Lustre: DEBUG MARKER: ==== Checking the clients loads AFTER failover -- failure NOT OK
cannot allocate a tage (271)
cannot allocate a tage (271)
Lustre: DEBUG MARKER: /usr/sbin/lctl mark ost6 has failed over 5 times, and counting...
Lustre: DEBUG MARKER: ost6 has failed over 5 times, and counting...
Lustre: lustre-OST0006: sending delayed replies to recovered clients
LustreError: 3691:0:(ldlm_resource.c:1090:ldlm_resource_get()) lvbo_init failed for resource 14662: rc -2
Lustre: lustre-OST0006: received MDS connection from 10.10.4.190@tcp
__ratelimit: 12 callbacks suppressed
cannot allocate a tage (402)
cannot allocate a tage (402)
cannot allocate a tage (402)
cannot allocate a tage (402)
cannot allocate a tage (402)
cannot allocate a tage (402)
cannot allocate a tage (402)
cannot allocate a tage (402)
cannot allocate a tage (402)
cannot allocate a tage (402)
__ratelimit: 8 callbacks suppressed
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
                            <comment id="60072" author="yujian" created="Thu, 6 Jun 2013 00:04:21 +0000"  >&lt;p&gt;Lustre Tag: v2_1_6_RC1&lt;br/&gt;
Lustre Build: &lt;a href=&quot;http://build.whamcloud.com/job/lustre-b2_1/208&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://build.whamcloud.com/job/lustre-b2_1/208&lt;/a&gt;&lt;br/&gt;
Distro/Arch: RHEL6.4/x86_64&lt;br/&gt;
ENABLE_QUOTA=yes&lt;br/&gt;
FAILURE_MODE=HARD&lt;/p&gt;

&lt;p&gt;The issue occurred again while running recovery-mds-scale failover_ost test:&lt;br/&gt;
&lt;a href=&quot;https://maloo.whamcloud.com/test_sets/bca04cd4-cdf2-11e2-ba28-52540035b04c&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://maloo.whamcloud.com/test_sets/bca04cd4-cdf2-11e2-ba28-52540035b04c&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="60803" author="hongchao.zhang" created="Tue, 18 Jun 2013 11:07:13 +0000"  >&lt;p&gt;The patch is updated: &lt;a href=&quot;http://review.hpdd.intel.com/#/c/4868/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.hpdd.intel.com/#/c/4868/&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="138272" author="jfc" created="Thu, 7 Jan 2016 23:55:34 +0000"  >&lt;p&gt;Incomplete and out of date.&lt;br/&gt;
~ jfc.&lt;/p&gt;</comment>
                            <comment id="147681" author="adilger" created="Sun, 3 Apr 2016 04:04:14 +0000"  >&lt;p&gt;I noticed that the patch &lt;a href=&quot;http://review.whamcloud.com/4868&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/4868&lt;/a&gt; (against b2_1) was never landed to any branch.  Was this bug fixed in some other way (in which case it can be closed again) or are we just ignoring these test failures now?&lt;/p&gt;</comment>
                            <comment id="147792" author="yujian" created="Tue, 5 Apr 2016 05:42:51 +0000"  >&lt;p&gt;Hi Andreas,&lt;/p&gt;

&lt;p&gt;The failure is still blocking recovery-mds-scale failover_ost testing on all of the Lustre branches. &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-4621&quot; title=&quot;recovery-mds-scale: test_failover_ost&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-4621&quot;&gt;&lt;del&gt;LU-4621&lt;/del&gt;&lt;/a&gt; and &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-6200&quot; title=&quot;Failover recovery-mds-scale test_failover_ost: test_failover_ost returned 1&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-6200&quot;&gt;&lt;del&gt;LU-6200&lt;/del&gt;&lt;/a&gt; were created later to track the failures.&lt;/p&gt;</comment>
                            <comment id="147799" author="hongchao.zhang" created="Tue, 5 Apr 2016 08:11:49 +0000"  >&lt;p&gt;Yes, this should be a duplicate of &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-6200&quot; title=&quot;Failover recovery-mds-scale test_failover_ost: test_failover_ost returned 1&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-6200&quot;&gt;&lt;del&gt;LU-6200&lt;/del&gt;&lt;/a&gt;, which contains a patch that fixes the problem in a similar way.&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10010">
                    <name>Duplicate</name>
                                            <outwardlinks description="duplicates">
                                        <issuelink>
            <issuekey id="28511">LU-6200</issuekey>
        </issuelink>
                            </outwardlinks>
                                                        </issuelinktype>
                    </issuelinks>
                <attachments>
                            <attachment id="10289" name="recovery-mds-scale-1309100884.tar.bz2" size="2277196" author="yujian" created="Sun, 26 Jun 2011 23:46:50 +0000"/>
                            <attachment id="10325" name="recovery-oss-scale-1311567030.tar.bz2" size="1092024" author="yujian" created="Mon, 25 Jul 2011 03:18:44 +0000"/>
                            <attachment id="10419" name="recovery-oss-scale.1315539020.log.tar.bz2" size="3754914" author="yujian" created="Fri, 9 Sep 2011 00:14:36 +0000"/>
                            <attachment id="10540" name="recovery-oss-scale.1318474116.log.tar.bz2" size="1648101" author="yujian" created="Thu, 13 Oct 2011 01:13:53 +0000"/>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                    <customfield id="customfield_10020" key="com.atlassian.jira.plugin.system.customfieldtypes:float">
                        <customfieldname>Bugzilla ID</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>22777.0</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzvcvz:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>5680</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>