<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 02:12:13 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-7821] job crash in complex error scenario</title>
                <link>https://jira.whamcloud.com/browse/LU-7821</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;Errors occurred during soak testing of build &apos;20160224&apos; (b2_8 RC2) (see:&lt;br/&gt;
&lt;a href=&quot;https://wiki.hpdd.intel.com/pages/viewpage.action?title=Soak+Testing+on+Lola&amp;amp;&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://wiki.hpdd.intel.com/pages/viewpage.action?title=Soak+Testing+on+Lola&amp;amp;&lt;/a&gt; spaceKey=Releases#SoakTestingonLola-20150224). DNE is enabled.&lt;br/&gt;
MDSes had been formatted using ldiskfs, OSTs using zfs. MDSes are configured in active-active HA failover configuration.&lt;/p&gt;

&lt;p&gt;The error occurred several times during execution of the &lt;tt&gt;mdtest&lt;/tt&gt; application (1 file per process) on client nodes &lt;tt&gt;lola-&lt;span class=&quot;error&quot;&gt;&amp;#91;33,34&amp;#93;&lt;/span&gt;&lt;/tt&gt; and reads:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt; JOBID             ERROR - MESSAGE
445852 :  201602 25 21:12:01 : Process 1(lola-33.lola.whamcloud.com): FAILED in create_remove_items_helper, unable to remove directory: Input/output error
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;Lustre error messages that can be correlated with the event are:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;lola-10.log:Feb 25 21:12:01 lola-10 kernel: Lustre: soaked-MDT0003-osp-MDT0005: Connection restored to 192.168.1.109@o2ib10 (at 192.168.1.109@o2ib10)
lola-11.log:Feb 25 21:12:01 lola-11 kernel: LustreError: 11-0: soaked-MDT0003-osp-MDT0006: operation out_update to node 192.168.1.109@o2ib10 failed: rc = -107
lola-11.log:Feb 25 21:12:01 lola-11 kernel: LustreError: 167-0: soaked-MDT0003-osp-MDT0006: This client was evicted by soaked-MDT0003; in progress operations using this service will fail.
lola-11.log:Feb 25 21:12:01 lola-11 kernel: LustreError: 8253:0:(ldlm_resource.c:887:ldlm_resource_complain()) soaked-MDT0003-osp-MDT0006: namespace resource [0x2c000e6a3:0xa8a2:0x0].0x0 (ffff8806fb30dbc0) refcount nonzero (1) after lock cleanup; forcing cleanup.
lola-11.log:Feb 25 21:12:01 lola-11 kernel: LustreError: 8253:0:(ldlm_resource.c:1502:ldlm_resource_dump()) --- Resource: [0x2c000e6a3:0xa8a2:0x0].0x0 (ffff8806fb30dbc0) refcount = 2
lola-11.log:Feb 25 21:12:01 lola-11 kernel: LustreError: 8253:0:(ldlm_resource.c:1505:ldlm_resource_dump()) Granted locks (in reverse order):
lola-11.log:Feb 25 21:12:01 lola-11 kernel: LustreError: 8253:0:(ldlm_resource.c:1508:ldlm_resource_dump()) ### ### ns: soaked-MDT0003-osp-MDT0006 lock: ffff8806c99d0880/0xaacb8c6ebe9816d2 lrc: 2/0,1 mode: EX/EX res: [0x2c000e6a3:0xa8a2:0x0].0x0 bits 0x2 rrc: 2 type: IBT flags: 0x1106401000000 nid: local remote: 0x4af49d2c5913727e expref: -99 pid: 4773 timeout: 0 lvb_type: 0
lola-11.log:Feb 25 21:12:01 lola-11 kernel: LustreError: 8253:0:(ldlm_resource.c:1502:ldlm_resource_dump()) --- Resource: [0x2c000e6a3:0xa92a:0x0].0x0 (ffff88077f835e40) refcount = 2
lola-11.log:Feb 25 21:12:01 lola-11 kernel: LustreError: 8253:0:(ldlm_resource.c:1505:ldlm_resource_dump()) Granted locks (in reverse order):
lola-11.log:Feb 25 21:12:01 lola-11 kernel: LustreError: 8253:0:(ldlm_resource.c:1502:ldlm_resource_dump()) --- Resource: [0x2c000e6a3:0xa433:0x0].0x0 (ffff8806b09aa780) refcount = 2
lola-11.log:Feb 25 21:12:01 lola-11 kernel: LustreError: 8253:0:(ldlm_resource.c:1505:ldlm_resource_dump()) Granted locks (in reverse order):
lola-11.log:Feb 25 21:12:01 lola-11 kernel: LustreError: 8253:0:(ldlm_resource.c:1502:ldlm_resource_dump()) --- Resource: [0x38000dec1:0x14895:0x0].0x0 (ffff8807ff657180) refcount = 2
lola-11.log:Feb 25 21:12:01 lola-11 kernel: LustreError: 8253:0:(ldlm_resource.c:1505:ldlm_resource_dump()) Granted locks (in reverse order):
lola-11.log:Feb 25 21:12:01 lola-11 kernel: LustreError: 8253:0:(ldlm_resource.c:1502:ldlm_resource_dump()) --- Resource: [0x2c000e6a3:0xa8b9:0x0].0x0 (ffff8807d74972c0) refcount = 2
lola-11.log:Feb 25 21:12:01 lola-11 kernel: LustreError: 8253:0:(ldlm_resource.c:1505:ldlm_resource_dump()) Granted locks (in reverse order):
lola-11.log:Feb 25 21:12:01 lola-11 kernel: LustreError: 8253:0:(ldlm_resource.c:1502:ldlm_resource_dump()) --- Resource: [0x2c000e6a3:0xa89b:0x0].0x0 (ffff8807e5c5f2c0) refcount = 2
lola-11.log:Feb 25 21:12:01 lola-11 kernel: LustreError: 8253:0:(ldlm_resource.c:1505:ldlm_resource_dump()) Granted locks (in reverse order):
lola-2.log:Feb 25 21:12:01 lola-2 kernel: Lustre: soaked-OST0000: deleting orphan objects from 0x400000403:1549433 to 0x400000403:1549585
lola-2.log:Feb 25 21:12:01 lola-2 kernel: Lustre: soaked-OST0004: deleting orphan objects from 0x500000405:1544672 to 0x500000405:1544849
lola-2.log:Feb 25 21:12:01 lola-2 kernel: Lustre: soaked-OST0008: deleting orphan objects from 0x600000402:1548216 to 0x600000402:1548417
lola-2.log:Feb 25 21:12:01 lola-2 kernel: Lustre: soaked-OST000c: deleting orphan objects from 0x700000401:1545068 to 0x700000401:1545153
lola-30.log:Feb 25 21:12:01 lola-30 kernel: Lustre: soaked-MDT0003-mdc-ffff88106fa1f800: Connection restored to 192.168.1.109@o2ib10 (at 192.168.1.109@o2ib10)
lola-31.log:Feb 25 21:12:01 lola-31 kernel: Lustre: soaked-MDT0003-mdc-ffff88086597e800: Connection restored to 192.168.1.109@o2ib10 (at 192.168.1.109@o2ib10)
lola-32.log:Feb 25 21:12:01 lola-32 kernel: Lustre: soaked-MDT0003-mdc-ffff88082f4c4000: Connection restored to 192.168.1.109@o2ib10 (at 192.168.1.109@o2ib10)
lola-32.log:Feb 25 21:12:01 lola-32 kernel: Lustre: Skipped 1 previous similar message
lola-33.log:Feb 25 21:12:01 lola-33 kernel: LustreError: 11-0: soaked-MDT0003-mdc-ffff881032461c00: operation mds_reint to node 192.168.1.109@o2ib10 failed: rc = -107
lola-33.log:Feb 25 21:12:01 lola-33 kernel: LustreError: 167-0: soaked-MDT0003-mdc-ffff881032461c00: This client was evicted by soaked-MDT0003; in progress operations using this service will fail.
lola-33.log:Feb 25 21:12:01 lola-33 kernel: LustreError: 157072:0:(lmv_obd.c:1325:lmv_fid_alloc()) Can&apos;t alloc new fid, rc -19
lola-33.log:Feb 25 21:12:01 lola-33 kernel: Lustre: soaked-MDT0003-mdc-ffff881032461c00: Connection restored to 192.168.1.109@o2ib10 (at 192.168.1.109@o2ib10)
lola-34.log:Feb 25 21:12:01 lola-34 kernel: Lustre: soaked-MDT0003-mdc-ffff88102fa38000: Connection restored to 192.168.1.109@o2ib10 (at 192.168.1.109@o2ib10)
lola-3.log:Feb 25 21:12:01 lola-3 kernel: Lustre: soaked-OST000d: deleting orphan objects from 0x740000405:1544703 to 0x740000405:1544833
lola-3.log:Feb 25 21:12:01 lola-3 kernel: Lustre: soaked-OST0005: deleting orphan objects from 0x540000403:1536015 to 0x540000403:1536097
lola-3.log:Feb 25 21:12:01 lola-3 kernel: Lustre: soaked-OST0001: deleting orphan objects from 0x440000401:1553755 to 0x440000401:1553873
lola-3.log:Feb 25 21:12:01 lola-3 kernel: Lustre: soaked-OST0009: deleting orphan objects from 0x640000402:1547689 to 0x640000402:1547777
lola-4.log:Feb 25 21:12:01 lola-4 kernel: Lustre: soaked-OST000e: deleting orphan objects from 0x780000403:1542237 to 0x780000403:1542337
lola-4.log:Feb 25 21:12:01 lola-4 kernel: Lustre: soaked-OST000a: deleting orphan objects from 0x6c0000401:1544440 to 0x6c0000401:1544513
lola-4.log:Feb 25 21:12:01 lola-4 kernel: Lustre: soaked-OST0002: deleting orphan objects from 0x480000401:1548270 to 0x480000401:1548385
lola-4.log:Feb 25 21:12:01 lola-4 kernel: Lustre: soaked-OST0006: deleting orphan objects from 0x580000405:1541804 to 0x580000405:1541889
lola-5.log:Feb 25 21:12:01 lola-5 kernel: Lustre: soaked-OST0003: deleting orphan objects from 0x4c0000401:1539783 to 0x4c0000401:1540289
lola-5.log:Feb 25 21:12:01 lola-5 kernel: Lustre: soaked-OST000f: deleting orphan objects from 0x7c0000403:1549006 to 0x7c0000403:1549265
lola-5.log:Feb 25 21:12:01 lola-5 kernel: Lustre: soaked-OST000b: deleting orphan objects from 0x680000401:1548710 to 0x680000401:1548801
lola-5.log:Feb 25 21:12:01 lola-5 kernel: Lustre: soaked-OST0007: deleting orphan objects from 0x5c0000405:1544139 to 0x5c0000405:1544513
lola-8.log:Feb 25 21:12:01 lola-8 kernel: LustreError: 11-0: soaked-MDT0003-osp-MDT0001: operation out_update to node 192.168.1.109@o2ib10 failed: rc = -107
lola-8.log:Feb 25 21:12:01 lola-8 kernel: LustreError: 167-0: soaked-MDT0003-osp-MDT0001: This client was evicted by soaked-MDT0003; in progress operations using this service will fail.
lola-8.log:Feb 25 21:12:01 lola-8 kernel: Lustre: soaked-MDT0003-osp-MDT0000: Connection restored to 192.168.1.109@o2ib10 (at 192.168.1.109@o2ib10)
lola-8.log:Feb 25 21:12:01 lola-8 kernel: Lustre: Skipped 2 previous similar messages
lola-9.log:Feb 25 21:12:01 lola-9 kernel: LustreError: 4484:0:(update_records.c:72:update_records_dump()) master transno = 98785366528 batchid = 73014961802 flags = 0 ops = 4 params = 7
lola-9.log:Feb 25 21:12:01 lola-9 kernel: LustreError: 4484:0:(update_records.c:72:update_records_dump()) master transno = 98785366528 batchid = 73014961809 flags = 0 ops = 4 params = 7
lola-9.log:Feb 25 21:12:01 lola-9 kernel: LustreError: 4484:0:(update_records.c:72:update_records_dump()) master transno = 98785366528 batchid = 73014961816 flags = 0 ops = 4 params = 7
lola-9.log:Feb 25 21:12:01 lola-9 kernel: LustreError: 4484:0:(update_records.c:72:update_records_dump()) master transno = 98785366528 batchid = 73014961822 flags = 0 ops = 4 params = 7
lola-9.log:Feb 25 21:12:01 lola-9 kernel: LustreError: 4484:0:(update_records.c:72:update_records_dump()) master transno = 98785366528 batchid = 73014961830 flags = 0 ops = 4 params = 7
lola-9.log:Feb 25 21:12:01 lola-9 kernel: LustreError: 4484:0:(update_records.c:72:update_records_dump()) master transno = 98785366576 batchid = 94491702660 flags = 0 ops = 53 params = 38
lola-9.log:Feb 25 21:12:01 lola-9 kernel: LustreError: 4484:0:(update_records.c:72:update_records_dump()) master transno = 98785366584 batchid = 94491702661 flags = 0 ops = 53 params = 38
lola-9.log:Feb 25 21:12:01 lola-9 kernel: LustreError: 4484:0:(update_records.c:72:update_records_dump()) master transno = 98785366585 batchid = 94491702662 flags = 0 ops = 53 params = 38
lola-9.log:Feb 25 21:12:01 lola-9 kernel: Lustre: soaked-MDT0003: disconnecting 7 stale clients
lola-9.log:Feb 25 21:12:01 lola-9 kernel: Lustre: 4484:0:(ldlm_lib.c:1586:abort_req_replay_queue()) @@@ aborted:  req@ffff88040a497c80 x1527085464220748/t0(98785366530) o101-&amp;gt;bf8a1d5c-0dc5-b3c9-6b26-84d56ad880b2@192.168.1.126@o2ib100:585/0 lens 976/0 e 2 to 0 dl 1456463535 ref 1 fl Complete:/4/ffffffff rc 0/-1
lola-9.log:Feb 25 21:12:01 lola-9 kernel: Lustre: 4484:0:(ldlm_lib.c:2011:target_recovery_overseer()) recovery is aborted, evict exports in recovery
lola-9.log:Feb 25 21:12:01 lola-9 kernel: Lustre: soaked-MDT0002: Client 2cb76067-9b42-1736-a64a-e2cc0037f63b (at 192.168.1.132@o2ib100) reconnecting, waiting for 16 clients in recovery for 2:09
lola-9.log:Feb 25 21:12:01 lola-9 kernel: Lustre: Skipped 5 previous similar messages
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;Immediately before the error, a restart of MDT &lt;tt&gt;lola-9&lt;/tt&gt; had finished:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;mds_restart      : 2016-02-25 20:59:00,754 - 2016-02-25 21:11:17,795    lola-9
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</description>
                <environment>lola&lt;br/&gt;
build: &lt;a href=&quot;https://build.hpdd.intel.com/job/lustre-b2_8/8/&quot;&gt;https://build.hpdd.intel.com/job/lustre-b2_8/8/&lt;/a&gt;</environment>
        <key id="34996">LU-7821</key>
        <summary>job crash in complex error scenario</summary>
        <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
        <priority id="2" iconUrl="https://jira.whamcloud.com/images/icons/priorities/critical.svg">Critical</priority>
        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
        <statusCategory id="3" key="done" colorName="success"/>
        <resolution id="5">Cannot Reproduce</resolution>
        <assignee username="wc-triage">WC Triage</assignee>
        <reporter username="heckes">Frank Heckes</reporter>
        <labels>
            <label>soak</label>
        </labels>
        <created>Fri, 26 Feb 2016 12:38:56 +0000</created>
        <updated>Mon, 20 Jul 2020 22:35:17 +0000</updated>
        <resolved>Mon, 20 Jul 2020 22:35:17 +0000</resolved>
        <version>Lustre 2.8.0</version>
        <due></due>
        <votes>0</votes>
        <watches>4</watches>
        <comments>
                            <comment id="144539" author="cliffw" created="Thu, 3 Mar 2016 19:26:13 +0000"  >&lt;p&gt;We are seeing this failure again on 2.8.0-RC4. &lt;br/&gt;
Job errors:&lt;/p&gt;
&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;449195-mdtestfpp.out:03/03/2016 09:51:14: &lt;span class=&quot;code-object&quot;&gt;Process&lt;/span&gt; 0(lola-26.lola.whamcloud.com): FAILED in create_remove_items_helper, unable to create directory: Input/output error
449195-mdtestfpp.out:03/03/2016 09:51:14: &lt;span class=&quot;code-object&quot;&gt;Process&lt;/span&gt; 1(lola-26.lola.whamcloud.com): FAILED in create_remove_items_helper, unable to create directory: Input/output error
449195-mdtestfpp.out:03/03/2016 09:51:14: &lt;span class=&quot;code-object&quot;&gt;Process&lt;/span&gt; 6(lola-34.lola.whamcloud.com): FAILED in create_remove_items_helper, unable to create directory: Input/output error
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;Server errors:&lt;/p&gt;
&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;lola-10.log:Mar  3 09:51:14 lola-10 kernel: LustreError: 4805:0:(llog_cat.c:712:llog_cat_cancel_records()) soaked-MDT0006-osp-MDT0005: fail to cancel 1 of 1 llog-records: rc = -116
lola-11.log:Mar  3 09:51:14 lola-11 kernel: Lustre: 4655:0:(ldlm_lib.c:2001:target_recovery_overseer()) soaked-MDT0006 recovery is aborted by hard timeout
lola-11.log:Mar  3 09:51:14 lola-11 kernel: Lustre: 4655:0:(ldlm_lib.c:2011:target_recovery_overseer()) recovery is aborted, evict exports in recovery
lola-11.log:Mar  3 09:51:14 lola-11 kernel: Lustre: soaked-MDT0006: Recovery over after 7:31, of 16 clients 8 recovered and 8 were evicted.
lola-16.log:Mar  3 09:51:14 lola-16 kernel: Lustre: soaked-MDT0006-mdc-ffff8807eb5cd400: Connection restored to 192.168.1.111@o2ib10 (at 192.168.1.111@o2ib10)
lola-16.log:Mar  3 09:51:14 lola-16 kernel: LustreError: 36131:0:(llite_lib.c:2309:ll_prep_inode()) new_inode -fatal: rc -2
lola-26.log:Mar  3 09:51:14 lola-26 kernel: Lustre: soaked-MDT0006-mdc-ffff88081ab31400: Connection restored to 192.168.1.111@o2ib10 (at 192.168.1.111@o2ib10)
lola-2.log:Mar  3 09:51:14 lola-2 kernel: Lustre: soaked-OST0000: deleting orphan objects from 0x400000406:4125908 to 0x400000406:4127809
lola-2.log:Mar  3 09:51:14 lola-2 kernel: Lustre: soaked-OST000c: deleting orphan objects from 0x700000406:4144394 to 0x700000406:4147105
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
                            <comment id="144583" author="green" created="Fri, 4 Mar 2016 02:04:50 +0000" >&lt;p&gt;So, of those 8 clients that did not rejoin: did they crash? Are there any interesting logs on them? What about the server logs before the eviction, anything interesting there?&lt;/p&gt;

&lt;p&gt;Unfortunately, for now we only know that recovery failed because half of the clients failed to rejoin, but we have no idea why that happened.&lt;/p&gt;</comment>
                            <comment id="144769" author="heckes" created="Mon, 7 Mar 2016 17:32:12 +0000" >&lt;p&gt;The clients didn&apos;t crash, but the Slurm jobs did (and still do).&lt;/p&gt;
&lt;ul&gt;
	&lt;li&gt;I attached extracted content (~1 hour) from the messages files of the MDS nodes (&lt;tt&gt;lola-&lt;span class=&quot;error&quot;&gt;&amp;#91;8-11&amp;#93;&lt;/span&gt;&lt;/tt&gt;) and clients (&lt;tt&gt;lola-&lt;span class=&quot;error&quot;&gt;&amp;#91;30-34&amp;#93;&lt;/span&gt;&lt;/tt&gt;) for the first event (--&amp;gt; logfiles *-20160225).&lt;br/&gt;
I&apos;m not sure whether the errors that happened earlier are relevant to the problem.&lt;/li&gt;
	&lt;li&gt;The same for the second event added by Cliff (--&amp;gt; logfiles *-20160303).&lt;/li&gt;
	&lt;li&gt;Sorry, no debug logs covering the time intervals of interest were written after the events happened.&lt;/li&gt;
&lt;/ul&gt;


&lt;p&gt;The error might be correlated with &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-7848&quot; title=&quot;Recovery process on MDS stalled&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-7848&quot;&gt;&lt;del&gt;LU-7848&lt;/del&gt;&lt;/a&gt;, as the evictions always happen during or shortly after MDS restarts and failovers.&lt;/p&gt;</comment>
                            <comment id="275821" author="adilger" created="Mon, 20 Jul 2020 22:35:17 +0000"  >&lt;p&gt;Close old issue that has not been hit again.&lt;/p&gt;</comment>
                    </comments>
                    <attachments>
                            <attachment id="20682" name="messages-lola-10.log-20160225.bz2" size="9100" author="heckes" created="Mon, 7 Mar 2016 17:37:09 +0000"/>
                            <attachment id="20683" name="messages-lola-10.log-20160303.bz2" size="5687" author="heckes" created="Mon, 7 Mar 2016 17:37:09 +0000"/>
                            <attachment id="20684" name="messages-lola-11.log-20160225.bz2" size="5699" author="heckes" created="Mon, 7 Mar 2016 17:37:09 +0000"/>
                            <attachment id="20685" name="messages-lola-11.log-20160303.bz2" size="5375" author="heckes" created="Mon, 7 Mar 2016 17:37:09 +0000"/>
                            <attachment id="20686" name="messages-lola-30.log-20160225.bz2" size="4253" author="heckes" created="Mon, 7 Mar 2016 17:37:09 +0000"/>
                            <attachment id="20687" name="messages-lola-31.log-20160225.bz2" size="3984" author="heckes" created="Mon, 7 Mar 2016 17:37:09 +0000"/>
                            <attachment id="20688" name="messages-lola-32.log-20160225.bz2" size="4391" author="heckes" created="Mon, 7 Mar 2016 17:37:09 +0000"/>
                            <attachment id="20689" name="messages-lola-33.log-20160225.bz2" size="4426" author="heckes" created="Mon, 7 Mar 2016 17:37:09 +0000"/>
                            <attachment id="20690" name="messages-lola-34.log-20160225.bz2" size="4123" author="heckes" created="Mon, 7 Mar 2016 17:37:09 +0000"/>
                            <attachment id="20678" name="messages-lola-8.log-20160225.bz2" size="5367" author="heckes" created="Mon, 7 Mar 2016 17:37:09 +0000"/>
                            <attachment id="20679" name="messages-lola-8.log-20160303.bz2" size="4002" author="heckes" created="Mon, 7 Mar 2016 17:37:09 +0000"/>
                            <attachment id="20680" name="messages-lola-9.log-20160225.bz2" size="13216" author="heckes" created="Mon, 7 Mar 2016 17:37:09 +0000"/>
                            <attachment id="20681" name="messages-lola-9.log-20160303.bz2" size="3642" author="heckes" created="Mon, 7 Mar 2016 17:37:09 +0000"/>
                    </attachments>
                <subtasks>
                </subtasks>
                <customfields>
                    <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzy2w7:</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                </customfields>
    </item>
</channel>
</rss>