<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 02:05:04 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-6994] MDT recovery timer goes negative, recovery never ends</title>
                <link>https://jira.whamcloud.com/browse/LU-6994</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;When attempting to mount a client, the recovery timer counts down and then apparently rolls over to a negative value; recovery never ends.&lt;/p&gt;
&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;Lustre: soaked-MDT0000: Denying connection for new client 7f50b61a-34a7-dd26-60bd-7487f4a8a6ee(at 192.168.1.116@o2ib100), waiting for 7 known clients (0 recovered, 0 in progress, and 0 evicted) to recover in 0:24
LustreError: 137-5: soaked-MDT0001_UUID: not available for connect from 192.168.1.116@o2ib100 (no target). If you are running an HA pair check that the target is mounted on the other server.
LustreError: Skipped 13 previous similar messages
Lustre: Skipped 2 previous similar messages
LustreError: 11-0: soaked-MDT0003-osp-MDT0000: operation mds_connect to node 0@lo failed: rc = -19
Lustre: soaked-MDT0000: Denying connection for new client 7f50b61a-34a7-dd26-60bd-7487f4a8a6ee(at 192.168.1.116@o2ib100), waiting for 7 known clients (0 recovered, 0 in progress, and 0 evicted) to recover in 7:55
Lustre: Skipped 4 previous similar messages
LustreError: 137-5: soaked-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
Lustre: 4255:0:(client.c:2020:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1439394552/real 1439394552]  req@ffff880815c0dcc0 x1509313907525360/t0(0) o38-&amp;gt;soaked-MDT0003-osp-MDT0000@192.168.1.109@o2ib10:24/4 lens 520/544 e 0 to 1 dl 1439394607 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1
Lustre: 4255:0:(client.c:2020:ptlrpc_expire_one_request()) Skipped 109 previous similar messages
LustreError: Skipped 23 previous similar messages
Lustre: soaked-MDT0000: Denying connection for new client 7f50b61a-34a7-dd26-60bd-7487f4a8a6ee(at 192.168.1.116@o2ib100), waiting for 7 known clients (0 recovered, 0 in progress, and 0 evicted) to recover in 3:20
Lustre: Skipped 10 previous similar messages
LustreError: 137-5: soaked-MDT0002_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
Lustre: 4255:0:(client.c:2020:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1439395077/real 1439395077]  req@ffff880812ded9c0 x1509313907526388/t0(0) o38-&amp;gt;soaked-MDT0001-osp-MDT0000@192.168.1.109@o2ib10:24/4 lens 520/544 e 0 to 1 dl 1439395088 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1
Lustre: 4255:0:(client.c:2020:ptlrpc_expire_one_request()) Skipped 183 previous similar messages
LustreError: Skipped 46 previous similar messages
LustreError: 11-0: soaked-MDT0003-osp-MDT0000: operation mds_connect to node 0@lo failed: rc = -19
LustreError: Skipped 1 previous similar message
Lustre: soaked-MDT0000: Denying connection for new client 7f50b61a-34a7-dd26-60bd-7487f4a8a6ee(at 192.168.1.116@o2ib100), waiting for 7 known clients (0 recovered, 0 in progress, and 0 evicted) to recover in 21188499:54
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</description>
                <environment></environment>
        <key id="31450">LU-6994</key>
            <summary>MDT recovery timer goes negative, recovery never ends</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="3" iconUrl="https://jira.whamcloud.com/images/icons/priorities/major.svg">Major</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="3">Duplicate</resolution>
                                        <assignee username="tappro">Mikhail Pershin</assignee>
                                    <reporter username="cliffw">Cliff White</reporter>
                        <labels>
                            <label>llnl</label>
                    </labels>
                <created>Wed, 12 Aug 2015 16:16:19 +0000</created>
                <updated>Fri, 5 Aug 2016 23:17:10 +0000</updated>
                            <resolved>Fri, 5 Aug 2016 23:17:10 +0000</resolved>
                                    <version>Lustre 2.8.0</version>
                                                        <due></due>
                            <votes>0</votes>
                                    <watches>10</watches>
                                                                            <comments>
                            <comment id="123956" author="cliffw" created="Wed, 12 Aug 2015 16:16:38 +0000"  >&lt;p&gt;Lustre log from the failed mount attached&lt;/p&gt;</comment>
                            <comment id="124168" author="cliffw" created="Fri, 14 Aug 2015 17:13:26 +0000"  >&lt;p&gt;I checked some previous versions &lt;br/&gt;
2.7.55 - no issue&lt;br/&gt;
2.7.56 - no issue&lt;br/&gt;
2.7.57 - issue appears. I reproduced the problem after a full re-format of the filesystem.&lt;/p&gt;</comment>
                            <comment id="132044" author="jgmitter" created="Thu, 29 Oct 2015 17:57:48 +0000"  >&lt;p&gt;Hi Mike,&lt;br/&gt;
Can you have a look at this issue?&lt;br/&gt;
Thanks.&lt;br/&gt;
Joe&lt;/p&gt;</comment>
                            <comment id="132046" author="adilger" created="Thu, 29 Oct 2015 17:58:21 +0000"  >&lt;p&gt;Chris, have you hit this problem in your 2.7.x testing?&lt;/p&gt;

&lt;p&gt;Cliff, have you hit this problem again in testing since 2.7.57, assuming you have re-run similar tests since then?&lt;/p&gt;</comment>
                            <comment id="132502" author="dinatale2" created="Tue, 3 Nov 2015 16:46:53 +0000"  >&lt;p&gt;I am currently experiencing this issue on a test cluster running 2.7.62. Let me know if there is any info I can provide which will help.&lt;/p&gt;</comment>
                            <comment id="132523" author="pjones" created="Tue, 3 Nov 2015 18:44:34 +0000"  >&lt;p&gt;Giusseppe&lt;/p&gt;

&lt;p&gt;Do you have any idea what you were doing prior to getting into this state?&lt;/p&gt;

&lt;p&gt;Peter&lt;/p&gt;</comment>
                            <comment id="132560" author="dinatale2" created="Tue, 3 Nov 2015 22:50:30 +0000"  >&lt;p&gt;Peter,&lt;/p&gt;

&lt;p&gt;I am currently running a test cluster with DNE that has 4 MDSs and 2 OSTs. No failover is set up at the moment. An MDS crashed while I was running a multi-node job that was writing to the Lustre filesystem. I rebooted the MDS and it managed to re-establish a connection with the other MDSs. I believe I powered off the clients for some reason. The recovering MDS&apos;s timer went negative and it displayed messages similar to those in this ticket. The clients were never evicted by the hard timeout that was supposed to occur. Attempting to abort the recovery against the MDT on the recovering MDS would evict the clients, but the abort would eventually hang and never finish. Unfortunately, I had to reformat the filesystem and am currently trying to reproduce the above.&lt;/p&gt;

&lt;p&gt;Giuseppe&lt;/p&gt;</comment>
                            <comment id="133288" author="dinatale2" created="Wed, 11 Nov 2015 19:09:12 +0000"  >&lt;p&gt;I managed to reproduce this issue. Using the setup in my previous comment, I determined that a kernel panic occurred while initializing a new llog catalog record (details below) and an assertion was hit that was attempting to ensure that the llog_handle had a NULL pointer to a llog header struct. The panic occurred while I was running an mdtest job which was writing to a striped directory from 32 client nodes running 4 threads each. The panic caused an llog file to become corrupt. I manually repaired the llog file and restarted my MDSs and recovery now completes. Perhaps a negative timer is a symptom of an &quot;unrecoverable&quot; error? If so, is it reflected somewhere that recovery cannot complete?&lt;/p&gt;

&lt;p&gt;The details:&lt;br/&gt;
To summarize my setup, I am running a test cluster with 4 MDSs, 2 OSTs, and 32 client nodes with the filesystem mounted. No failover. The Lustre version running is 2.7.62. The error messages and call stack from MDS1 are below:&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;2015-11-04 13:36:26 LustreError: 38007:0:(llog.c:342:llog_init_handle()) ASSERTION( handle-&amp;gt;lgh_hdr == ((void *)0) ) failed:
2015-11-04 13:36:26 LustreError: 38007:0:(llog.c:342:llog_init_handle()) LBUG
2015-11-04 13:36:26 Pid: 38007, comm: mdt01_005
2015-11-04 13:36:26 Nov  4 13:36:26
...
2015-11-04 13:36:26 Kernel panic - not syncing: LBUG
2015-11-04 13:36:26 Pid: 38007, comm: mdt01_005 Tainted: P           ---------------    2.6.32-504.16.2.1chaos.ch5.3.x86_64 #1
2015-11-04 13:36:26 Call Trace:
2015-11-04 13:36:26  [&amp;lt;ffffffff8152d471&amp;gt;] ? panic+0xa7/0x16f
2015-11-04 13:36:26  [&amp;lt;ffffffffa0847f2b&amp;gt;] ? lbug_with_loc+0x9b/0xb0 [libcfs]
2015-11-04 13:36:26  [&amp;lt;ffffffffa09a62cf&amp;gt;] ? llog_init_handle+0x86f/0xb10 [obdclass]
2015-11-04 13:36:26  [&amp;lt;ffffffffa09ac809&amp;gt;] ? llog_cat_new_log+0x3d9/0xdc0 [obdclass]
2015-11-04 13:36:26  [&amp;lt;ffffffffa09a4663&amp;gt;] ? llog_declare_write_rec+0x93/0x210 [obdclass]
2015-11-04 13:36:26  [&amp;lt;ffffffffa09ad616&amp;gt;] ? llog_cat_declare_add_rec+0x426/0x430 [obdclass]
2015-11-04 13:36:26  [&amp;lt;ffffffffa09a406f&amp;gt;] ? llog_declare_add+0x7f/0x1b0 [obdclass]
2015-11-04 13:36:26  [&amp;lt;ffffffffa0c9c19c&amp;gt;] ? top_trans_start+0x17c/0x960 [ptlrpc]
2015-11-04 13:36:26  [&amp;lt;ffffffffa127cc11&amp;gt;] ? lod_trans_start+0x61/0x70 [lod]
2015-11-04 13:36:26  [&amp;lt;ffffffffa13248b4&amp;gt;] ? mdd_trans_start+0x14/0x20 [mdd]
2015-11-04 13:36:26  [&amp;lt;ffffffffa1313333&amp;gt;] ? mdd_create+0xe53/0x1aa0 [mdd]
2015-11-04 13:36:26  [&amp;lt;ffffffffa11c6784&amp;gt;] ? mdt_version_save+0x84/0x1a0 [mdt]
2015-11-04 13:36:26  [&amp;lt;ffffffffa11c8f46&amp;gt;] ? mdt_reint_create+0xbb6/0xcc0 [mdt]
2015-11-04 13:36:26  [&amp;lt;ffffffffa0a13230&amp;gt;] ? lu_ucred+0x20/0x30 [obdclass]
2015-11-04 13:36:26  [&amp;lt;ffffffffa11a8675&amp;gt;] ? mdt_ucred+0x15/0x20 [mdt]
2015-11-04 13:36:26  [&amp;lt;ffffffffa11c183c&amp;gt;] ? mdt_root_squash+0x2c/0x3f0 [mdt]
2015-11-04 13:36:26  [&amp;lt;ffffffffa0c43d32&amp;gt;] ? __req_capsule_get+0x162/0x6e0 [ptlrpc]
2015-11-04 13:36:26  [&amp;lt;ffffffffa11c597d&amp;gt;] ? mdt_reint_rec+0x5d/0x200 [mdt]
2015-11-04 13:36:26  [&amp;lt;ffffffffa11b177b&amp;gt;] ? mdt_reint_internal+0x62b/0xb80 [mdt]
2015-11-04 13:36:26  [&amp;lt;ffffffffa11b216b&amp;gt;] ? mdt_reint+0x6b/0x120 [mdt]
2015-11-04 13:36:26  [&amp;lt;ffffffffa0c8621c&amp;gt;] ? tgt_request_handle+0x8bc/0x12e0 [ptlrpc]
2015-11-04 13:36:26  [&amp;lt;ffffffffa0c2da21&amp;gt;] ? ptlrpc_main+0xe41/0x1910 [ptlrpc]
2015-11-04 13:36:26  [&amp;lt;ffffffff8106d740&amp;gt;] ? pick_next_task_fair+0xd0/0x130
2015-11-04 13:36:26  [&amp;lt;ffffffff8152d8f6&amp;gt;] ? schedule+0x176/0x3a0
2015-11-04 13:36:26  [&amp;lt;ffffffffa0c2cbe0&amp;gt;] ? ptlrpc_main+0x0/0x1910 [ptlrpc]
2015-11-04 13:36:26  [&amp;lt;ffffffff8109fffe&amp;gt;] ? kthread+0x9e/0xc0
2015-11-04 13:36:27  [&amp;lt;ffffffff8100c24a&amp;gt;] ? child_rip+0xa/0x20
2015-11-04 13:36:27  [&amp;lt;ffffffff8109ff60&amp;gt;] ? kthread+0x0/0xc0
2015-11-04 13:36:27  [&amp;lt;ffffffff8100c240&amp;gt;] ? child_rip+0x0/0x20
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;After rebooting MDS1, I started to see llog corruption messages, shown below, for an llog file that was on MDS4 (remember, the panic was on MDS1):&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;2015-11-04 14:15:59 LustreError: 11466:0:(llog_osd.c:833:llog_osd_next_block()) ldne-MDT0003-osp-MDT0000: can&apos;t read llog block from log [0x300000401:0x1:0x0] offset 32768: rc = -5
2015-11-04 14:15:59 LustreError: 11466:0:(llog.c:578:llog_process_thread()) Local llog found corrupted
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Eventually, the recovery timer went negative and displayed messages similar to those in the ticket. I manually fixed the llog file on MDS4 and recovery now completes. I think that covers it. If necessary, I can put all this information in another ticket, and I should be able to provide the corrupted and fixed llog file for diagnosis as well.&lt;/p&gt;</comment>
                            <comment id="133290" author="pjones" created="Wed, 11 Nov 2015 19:26:18 +0000"  >&lt;p&gt;Excellent work Giuseppe (or Joe if you prefer)! &lt;img class=&quot;emoticon&quot; src=&quot;https://jira.whamcloud.com/images/icons/emoticons/smile.png&quot; height=&quot;16&quot; width=&quot;16&quot; align=&quot;absmiddle&quot; alt=&quot;&quot; border=&quot;0&quot;/&gt; &lt;/p&gt;</comment>
                            <comment id="133308" author="morrone" created="Wed, 11 Nov 2015 22:00:12 +0000"  >&lt;p&gt;Yes, that is good work!&lt;/p&gt;

&lt;p&gt;One way or another we do probably need another ticket.  I suspect that fixing the assertion and llog corruption will be the blocker that needs fixing for 2.8.  At that point fixing the fact that the recovery timer can go negative will drop in priority.  The recovery timer really needs to have more sane behavior in the face of errors, and we don&apos;t want to lose track of that issue.&lt;/p&gt;</comment>
                            <comment id="133317" author="dinatale2" created="Wed, 11 Nov 2015 23:56:53 +0000"  >&lt;p&gt;Thanks! Went ahead and created a new ticket with the details and I attached the llog files I mentioned.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://jira.hpdd.intel.com/browse/LU-7419&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://jira.hpdd.intel.com/browse/LU-7419&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="133384" author="di.wang" created="Thu, 12 Nov 2015 18:50:04 +0000"  >&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;2015-11-04 14:15:59 LustreError: 11466:0:(llog_osd.c:833:llog_osd_next_block()) ldne-MDT0003-osp-MDT0000: can&apos;t read llog block from log [0x300000401:0x1:0x0] offset 32768: rc = -5
2015-11-04 14:15:59 LustreError: 11466:0:(llog.c:578:llog_process_thread()) Local llog found corrupted
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;This should be fixed by the patch &lt;a href=&quot;http://review.whamcloud.com/#/c/16969/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/#/c/16969/&lt;/a&gt; in &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-7039&quot; title=&quot;llog_osd.c:778:llog_osd_next_block()) ASSERTION( last_rec-&amp;gt;lrh_index == tail-&amp;gt;lrt_index ) failed:&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-7039&quot;&gt;&lt;del&gt;LU-7039&lt;/del&gt;&lt;/a&gt;.&lt;/p&gt;</comment>
                            <comment id="133437" author="tappro" created="Fri, 13 Nov 2015 13:39:45 +0000"  >&lt;p&gt;yes, that looks like &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-7039&quot; title=&quot;llog_osd.c:778:llog_osd_next_block()) ASSERTION( last_rec-&amp;gt;lrh_index == tail-&amp;gt;lrt_index ) failed:&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-7039&quot;&gt;&lt;del&gt;LU-7039&lt;/del&gt;&lt;/a&gt; &lt;/p&gt;</comment>
                            <comment id="133439" author="pjones" created="Fri, 13 Nov 2015 13:42:55 +0000"  >&lt;p&gt;Giuseppe&lt;/p&gt;

&lt;p&gt;Does the recommended fix solve the issue for your reproducer?&lt;/p&gt;

&lt;p&gt;Peter&lt;/p&gt;</comment>
                            <comment id="133900" author="di.wang" created="Thu, 19 Nov 2015 05:44:27 +0000"  >&lt;p&gt;Just found another problem which might contribute to this issue &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-7450&quot; title=&quot;call dcb commit callback in osd_trans_stop()&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-7450&quot;&gt;&lt;del&gt;LU-7450&lt;/del&gt;&lt;/a&gt;.&lt;/p&gt;</comment>
                            <comment id="134463" author="pjones" created="Tue, 24 Nov 2015 22:45:30 +0000"  >&lt;p&gt;The current belief is that this is a duplicate of &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-7039&quot; title=&quot;llog_osd.c:778:llog_osd_next_block()) ASSERTION( last_rec-&amp;gt;lrh_index == tail-&amp;gt;lrt_index ) failed:&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-7039&quot;&gt;&lt;del&gt;LU-7039&lt;/del&gt;&lt;/a&gt; and/or &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-7450&quot; title=&quot;call dcb commit callback in osd_trans_stop()&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-7450&quot;&gt;&lt;del&gt;LU-7450&lt;/del&gt;&lt;/a&gt;. We can reopen if evidence comes to light that contradicts this.&lt;/p&gt;</comment>
                            <comment id="135981" author="dinatale2" created="Thu, 10 Dec 2015 23:58:09 +0000"  >&lt;p&gt;Peter,&lt;/p&gt;

&lt;p&gt;I still believe this is a minor issue, nothing critical. I think the underlying problem is that recovery can in fact fail (or enter an unrecoverable state) and that this is not being reported properly. I suggest using this ticket to implement reporting of a failed/unrecoverable recovery status when the timer expires. Thoughts?&lt;/p&gt;

&lt;p&gt;Giuseppe&lt;/p&gt;</comment>
                            <comment id="135983" author="pjones" created="Fri, 11 Dec 2015 00:08:58 +0000"  >&lt;p&gt;Ok Giuseppe I&apos;ll reopen the ticket and defer to Mike to comment. For now, I&apos;ll drop the priority and move this to 2.9 to reflect the reduced criticality of the issue.&lt;/p&gt;</comment>
                            <comment id="160999" author="di.wang" created="Fri, 5 Aug 2016 23:15:26 +0000"  >&lt;p&gt;This recovery status reporting issue will be resolved by &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-8407&quot; title=&quot;Recovery timer hangs at zero on DNE MDTs&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-8407&quot;&gt;&lt;del&gt;LU-8407&lt;/del&gt;&lt;/a&gt;.&lt;/p&gt;</comment>
                            <comment id="161001" author="pjones" created="Fri, 5 Aug 2016 23:17:10 +0000"  >&lt;p&gt;Thanks for the tipoff Di&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                            <outwardlinks description="is related to ">
                                        <issuelink>
            <issuekey id="31663">LU-7039</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="33208">LU-7450</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="38208">LU-8407</issuekey>
        </issuelink>
                            </outwardlinks>
                                                                <inwardlinks description="is related to">
                                        <issuelink>
            <issuekey id="33100">LU-7419</issuekey>
        </issuelink>
                            </inwardlinks>
                                    </issuelinktype>
                    </issuelinks>
                <attachments>
                            <attachment id="18609" name="mount.fail.txt.gz" size="179074" author="cliffw" created="Wed, 12 Aug 2015 16:16:19 +0000"/>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzxkbz:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>