<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 01:54:40 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
<language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-5805] tgt_recov blocked and &quot;waking for gap in transno&quot;</title>
                <link>https://jira.whamcloud.com/browse/LU-5805</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;We are testing our 2.5.3-based branch using osd-zfs.  The clients, lnet router, and server nodes all had version 2.5.3-1chaos installed (see github.com/chaos/lustre).&lt;/p&gt;

&lt;p&gt;On the recommendation from &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-5803&quot; title=&quot;This server is not able to keep up with request traffic&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-5803&quot;&gt;&lt;del&gt;LU-5803&lt;/del&gt;&lt;/a&gt;, I made a test build of lustre that consists of 2.5.3-1chaos + &lt;a href=&quot;http://review.whamcloud.com/12365&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/12365&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I installed this on the servers only.  At the time, we had the SWL IO test running (mixture of ior, mdtest, simul, etc. all running at the same time).&lt;/p&gt;

&lt;p&gt;I then rebooted just the servers onto the test build.  The OSS nodes show lots of startup error messages, many of which we didn&apos;t see without this new patch.  Granted, it was just one time.&lt;/p&gt;

&lt;p&gt;See attached file named simply &quot;log&quot;.  This is the console log from one of the OSS nodes.&lt;/p&gt;

&lt;p&gt;Here&apos;s my initial view of what is going on:&lt;/p&gt;

&lt;p&gt;The OSS nodes boot significantly faster than the MGS/MDS node.  We have retry set to 32.  I suspect that this noise is related to the MGS not yet having started:&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;2014-10-24 14:21:38 LustreError: 7421:0:(client.c:1083:ptlrpc_import_delay_req()) @@@ send limit expired   req@ffff880fef274800 x1482881169883144/t0(0) o253-&amp;gt;MGC10.1.1.169@o2ib9@10.1.1.1
69@o2ib9:26/25 lens 4768/4768 e 0 to 0 dl 0 ref 2 fl Rpc:W/0/ffffffff rc 0/-1
2014-10-24 14:21:38 LustreError: 7421:0:(obd_mount_server.c:1120:server_register_target()) lcy-OST0001: error registering with the MGS: rc = -5 (not fatal)
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The MGS/MDS node doesn&apos;t start mounting the MGS and MDS devices until 14:25:47 and 14:25:47, respectively.&lt;/p&gt;

&lt;p&gt;The MDS enters recovery at this time:&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;2014-10-24 14:26:23 zwicky-lcy-mds1 login: Lustre: lcy-MDT0000: Will be in recovery for at least 5:00, or until 134 clients reconnect.
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;So there are at least 4 problems here.  We may need to split them up into separate subtickets:&lt;/p&gt;

&lt;ol&gt;
	&lt;li&gt;OSS noise before MGS/MDS has started&lt;/li&gt;
	&lt;li&gt;tgt_recov &quot;blocked for more than 102 seconds&quot; (some of the 16 OSS nodes did this)&lt;/li&gt;
	&lt;li&gt;&quot;waking for gap in transno&quot;, the MDS and some of the OSS nodes show a swath of these&lt;/li&gt;
	&lt;li&gt;Many OSS nodes hit &quot;2014-10-24 14:36:23 LustreError: 7479:0:(ost_handler.c:1776:ost_blocking_ast()) Error -2 syncing data on lock cancel&quot; within a few minutes of recovery being complete&lt;/li&gt;
&lt;/ol&gt;
</description>
                <environment></environment>
        <key id="27313">LU-5805</key>
            <summary>tgt_recov blocked and &quot;waking for gap in transno&quot;</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="2" iconUrl="https://jira.whamcloud.com/images/icons/priorities/critical.svg">Critical</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="5">Cannot Reproduce</resolution>
                                        <assignee username="green">Oleg Drokin</assignee>
                                    <reporter username="morrone">Christopher Morrone</reporter>
                        <labels>
                            <label>llnl</label>
                    </labels>
                <created>Fri, 24 Oct 2014 21:48:01 +0000</created>
                <updated>Mon, 18 Jul 2016 21:52:01 +0000</updated>
                            <resolved>Mon, 25 Apr 2016 20:26:41 +0000</resolved>
                                    <version>Lustre 2.5.3</version>
                                                        <due></due>
                            <votes>0</votes>
                                    <watches>7</watches>
                                                                            <comments>
                            <comment id="97489" author="morrone" created="Sat, 25 Oct 2014 00:02:19 +0000"  >&lt;p&gt;We need to triage these various startup problems ASAP.  We need to know if we can start rolling this out into production in a week, and to do that we need to tag a local lustre release on Oct 28.  I am not feeling terribly confident about 2.5&apos;s recovery situation at this point.&lt;/p&gt;</comment>
                            <comment id="97504" author="green" created="Sat, 25 Oct 2014 13:32:13 +0000"  >&lt;p&gt;1. the send limit expired message is normal considering the MGS is not up for a prolonged period of time (the client would just refuse to start at such a time, since it cannot get a config out of anywhere).&lt;/p&gt;
&lt;ul class=&quot;alternate&quot; type=&quot;square&quot;&gt;
	&lt;li&gt;Since the FS is not really usable until MGS starts (for initial client mounts), I wonder if you can just delay ost startup to better match initial mgs starting time?&lt;br/&gt;
2. the tgt recov blocked message is also normal; it&apos;s an artifact of the sleep timer watchdog you configured in the kernel. We probably should explore how to silence it by letting the kernel know that we are not really stuck yet and that the prolonged delay is actually expected (the recovery timeout).&lt;br/&gt;
3. waking for gap in transno - this one indicates that while all of the expected clients reconnected, some of them failed to present the transaction we think should be next (a reply lost to the client before the server went down?)&lt;br/&gt;
4. Is the subset of nodes displaying this message the same as the nodes displaying the message in #3 (ignoring the MDS)? The message means the object went away while a lock on this object was held. It could potentially be caused by the orphans code if the MDS somehow did not remember correctly what the last touched object was, though not so far back into the past as to trigger the catastrophic fallout reported in &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-5648&quot; title=&quot;corrupt files contain extra data&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-5648&quot;&gt;&lt;del&gt;LU-5648&lt;/del&gt;&lt;/a&gt;. If it only happens after recovery and never again, that would be one of the lead suspicions, and the forgetful MDS issue would need to be addressed instead. You did not abort recovery in this case, did you?&lt;/li&gt;
&lt;/ul&gt;
</comment>
                            <comment id="97605" author="morrone" created="Mon, 27 Oct 2014 18:02:27 +0000"  >&lt;blockquote&gt;&lt;p&gt;Since the FS is not really usable until MGS starts (for initial client mounts), I wonder if you can just delay ost startup to better match initial mgs starting time?&lt;/p&gt;&lt;/blockquote&gt;

&lt;p&gt;Sure, we could also hop on one foot, juggle, and chew bubblegum while restarting Lustre.  But we don&apos;t want to.&lt;/p&gt;

&lt;p&gt;We can add this to the &quot;Things That Suck About Lustre&quot; list, but not let it hold up our 2.5 roll out.&lt;/p&gt;

&lt;blockquote&gt;&lt;p&gt;the tgt recov blocked message is also normal, it&apos;s an artifact of the sleep timer watchdog you configured in the kernel. We probably should explore how to shut it by letting kernel know we are not really stuck yet and the prolonged delay is actually expected&lt;/p&gt;&lt;/blockquote&gt;

&lt;p&gt;Right.  Lustre is behaving badly in the kernel.  Has been for quite some time.  I agree.  We would like to see it fixed.&lt;/p&gt;

&lt;blockquote&gt;&lt;p&gt;waking for gap in transno. - this one indicates that while all of the expected clients reconnected, some of them failed to present the transaction we think should be next (lost reply to client before server went down?)&lt;/p&gt;&lt;/blockquote&gt;

&lt;p&gt;That sounds like a rather worrying bug.  Do you think that &lt;a href=&quot;http://review.whamcloud.com/12365&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/12365&lt;/a&gt; is causing that?  Should I take that back out?&lt;/p&gt;

&lt;p&gt;Would you recommend running the code as is at a major HPC center?&lt;/p&gt;

&lt;blockquote&gt;&lt;p&gt; is the subset of nodes displaying this message same as the nodes displaying message in #3 (ignoring mds)?&lt;/p&gt;&lt;/blockquote&gt;

&lt;p&gt;No, I don&apos;t see a connection.  After a weekend of testing, it is clear that the &quot;Error -2 syncing data on lock cancel&quot; is an ongoing problem:&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;2014-10-27 08:12:52 LustreError: 99105:0:(ost_handler.c:1776:ost_blocking_ast()) Skipped 3 previous similar messages
2014-10-27 08:17:34 LustreError: 8795:0:(ost_handler.c:1776:ost_blocking_ast()) Error -2 syncing data on lock cancel
2014-10-27 08:25:46 LustreError: 7471:0:(ost_handler.c:1776:ost_blocking_ast()) Error -2 syncing data on lock cancel
2014-10-27 08:25:46 LustreError: 7471:0:(ost_handler.c:1776:ost_blocking_ast()) Skipped 1 previous similar message
2014-10-27 08:29:34 LustreError: 7471:0:(ost_handler.c:1776:ost_blocking_ast()) Error -2 syncing data on lock cancel
2014-10-27 08:29:34 LustreError: 27295:0:(ost_handler.c:1776:ost_blocking_ast()) Error -2 syncing data on lock cancel
2014-10-27 08:29:34 LustreError: 86007:0:(ost_handler.c:1776:ost_blocking_ast()) Error -2 syncing data on lock cancel
2014-10-27 08:41:25 LustreError: 8001:0:(ost_handler.c:1776:ost_blocking_ast()) Error -2 syncing data on lock cancel
2014-10-27 08:41:25 LustreError: 8001:0:(ost_handler.c:1776:ost_blocking_ast()) Skipped 1 previous similar message
2014-10-27 09:09:51 LustreError: 8064:0:(ost_handler.c:1776:ost_blocking_ast()) Error -2 syncing data on lock cancel
2014-10-27 09:14:45 LustreError: 8354:0:(ost_handler.c:1776:ost_blocking_ast()) Error -2 syncing data on lock cancel
2014-10-27 09:28:07 LustreError: 7704:0:(ost_handler.c:1776:ost_blocking_ast()) Error -2 syncing data on lock cancel
2014-10-27 09:54:12 LustreError: 8301:0:(ost_handler.c:1776:ost_blocking_ast()) Error -2 syncing data on lock cancel
2014-10-27 09:54:12 LustreError: 8562:0:(ost_handler.c:1776:ost_blocking_ast()) Error -2 syncing data on lock cancel
2014-10-27 09:54:12 LustreError: 8562:0:(ost_handler.c:1776:ost_blocking_ast()) Skipped 2 previous similar messages
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;I am thinking that I need to take out the &lt;a href=&quot;http://review.whamcloud.com/12365&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/12365&lt;/a&gt; patch.&lt;/p&gt;</comment>
                            <comment id="97663" author="green" created="Tue, 28 Oct 2014 04:34:14 +0000"  >&lt;p&gt;Hm, patch 12365 would certainly play with timeouts, but it&apos;s not obvious how it would cause message #3. It is suggestive if you never saw the problem without this patch (though I understand you never really deployed 2.5 before, so you likely do not have enough data to look back at). Also, I am not sure that we can refer to it as a &quot;bug&quot;; the issue can occur genuinely.&lt;br/&gt;
Consider that there&apos;s a bunch of requests being handled at about the same time and then the server crashes; some replies made it out and some did not. If the one with the lower-numbered transaction did not make it out, you&apos;ll see a message like that. Though whether you should care or not I am less sure, so perhaps we can just make it never show on the console after all, like we already do for vbr.&lt;/p&gt;
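The gap handling described above can be sketched roughly like this (a minimal illustrative model, not the actual tgt_recov code; all names here are made up):

```python
# Illustrative model of "waking for gap in transno": during recovery the
# server replays client requests in transaction-number order, and when the
# lowest queued transno is higher than the one expected next (e.g. the
# reply for the missing transaction was lost before the crash), the
# recovery thread notes the gap and skips ahead rather than waiting forever.

def replay(queued_transnos, next_transno):
    """Replay requests in transno order, noting gaps (hypothetical names)."""
    log = []
    for transno in sorted(queued_transnos):
        if transno > next_transno:
            log.append(f"waking for gap in transno {next_transno} to {transno}")
        log.append(f"replayed {transno}")
        next_transno = transno + 1
    return log

# A client replays transnos 101 and 103; the request for 102 never arrives:
for line in replay([101, 103], next_transno=101):
    print(line)
```

In the real server the recovery thread waits a while before giving up on the missing transno; the sketch skips straight ahead for brevity.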

&lt;p&gt;As for the patch itself, basically it&apos;s adding time to requests in recovery before dropping them on the floor:&lt;/p&gt;
&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;+++ b/lustre/ptlrpc/service.c
@@ -1302,7 +1302,9 @@ &lt;span class=&quot;code-keyword&quot;&gt;static&lt;/span&gt; &lt;span class=&quot;code-object&quot;&gt;int&lt;/span&gt; ptlrpc_at_send_early_reply(struct ptlrpc_reques
                 * during the recovery period send at least 4 early replies,
                 * spacing them every at_extra &lt;span class=&quot;code-keyword&quot;&gt;if&lt;/span&gt; we can. at_estimate should
                 * always equal &lt;span class=&quot;code-keyword&quot;&gt;this&lt;/span&gt; fixed value during recovery. */
-               at_measured(&amp;amp;svcpt-&amp;gt;scp_at_estimate, min(at_extra,
+               at_measured(&amp;amp;svcpt-&amp;gt;scp_at_estimate,
+                           cfs_time_current_sec() -
+                           req-&amp;gt;rq_arrival_time.tv_sec + min(at_extra,
                            req-&amp;gt;rq_export-&amp;gt;exp_obd-&amp;gt;obd_recovery_timeout / 4))
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;This seems to be really simple and not harmful in any way to me.&lt;/p&gt;
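To see what the one-line change does, here is a rough numeric model of the estimate bump (names and default values are illustrative assumptions, not the actual Lustre code; at_extra and the recovery timeout vary by configuration):

```python
# Rough model of review 12365: during recovery, the early-reply service
# estimate is bumped not just by min(at_extra, recovery_timeout / 4) but
# also by how long the request has already been waiting, so long-queued
# replay requests keep receiving early replies instead of timing out.

AT_EXTRA = 30           # seconds; assumed at_extra value
RECOVERY_TIMEOUT = 300  # seconds; assumed obd_recovery_timeout

def estimate_bump(now, arrival, patched):
    base = min(AT_EXTRA, RECOVERY_TIMEOUT // 4)
    if patched:
        # patched: account for the time the request has already spent queued
        return (now - arrival) + base
    return base

# A replay request that has been queued for 200 seconds mid-recovery:
print(estimate_bump(now=1000, arrival=800, patched=False))  # 30
print(estimate_bump(now=1000, arrival=800, patched=True))   # 230
```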

&lt;p&gt;Also for message #4, this apparently was hit by various people as early as 2.4; sample tickets are &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-3421&quot; title=&quot;(ost_handler.c:1762:ost_blocking_ast()) Error -2 syncing data on lock cancel causes first ENOSPC client issues then MDS server locks up&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-3421&quot;&gt;&lt;del&gt;LU-3421&lt;/del&gt;&lt;/a&gt; and LU-5653. This is currently believed to be harmless, though we do not yet fully understand what&apos;s going on here. Since you are able to hit this on a test system, could we somehow get access to investigate it further? If you know how to cause this reliably, that would be good too.&lt;/p&gt;

&lt;p&gt;For message #1 - I am not sure I really understand what you think would be a better solution. The clients need some way to obtain a configuration (i.e. the MGS being up); without it, they would not be able to mount. Are you just unhappy about the noise in the messages? The error is genuine after all, so just plain silencing of the message might not make a lot of sense.&lt;/p&gt;

&lt;p&gt;I filed &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-5816&quot; title=&quot;Silence misleading kernel message&amp;quot;task tgt_recov:XXX blocked for more than 120 seconds&amp;quot;&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-5816&quot;&gt;&lt;del&gt;LU-5816&lt;/del&gt;&lt;/a&gt; for message #2.&lt;/p&gt;
</comment>
                            <comment id="98001" author="morrone" created="Thu, 30 Oct 2014 22:02:16 +0000"  >&lt;p&gt;For message #4, we agree that this has been around a while.  We have found instances of it under 2.4.&lt;/p&gt;

&lt;p&gt;Note however that our recent hit was under 2.5.3+, so the &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-3421&quot; title=&quot;(ost_handler.c:1762:ost_blocking_ast()) Error -2 syncing data on lock cancel causes first ENOSPC client issues then MDS server locks up&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-3421&quot;&gt;&lt;del&gt;LU-3421&lt;/del&gt;&lt;/a&gt; solution did not address whatever issue is currently causing it.  LU-5653, if that is the correct ticket number, is hidden from me.&lt;/p&gt;

&lt;p&gt;No, I don&apos;t think we will be able to give you access to this system.  But Intel does have access to hyperion.&lt;/p&gt;

&lt;p&gt;Our new testing guy thought he was on the track of reproducing the issue, but then it suddenly stopped happening.  He backtracked to earlier tests that had caused it and he wasn&apos;t able to see it there either.&lt;/p&gt;</comment>
                            <comment id="98015" author="green" created="Fri, 31 Oct 2014 00:28:57 +0000"  >&lt;p&gt;The &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-3421&quot; title=&quot;(ost_handler.c:1762:ost_blocking_ast()) Error -2 syncing data on lock cancel causes first ENOSPC client issues then MDS server locks up&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-3421&quot;&gt;&lt;del&gt;LU-3421&lt;/del&gt;&lt;/a&gt; patch in the end was fixing a hang, not this particular message, which was just a (possibly) related symptom, I think.&lt;br/&gt;
Similarly in &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-5652&quot; title=&quot;client eviction if lock enqueue reply is lost&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-5652&quot;&gt;LU-5652&lt;/a&gt; (that I see now was transformed into a customer closed ticket somehow since I last looked) this message was just noted while other problems were happening with the system.&lt;br/&gt;
While we do have access to Hyperion, what we really need is a way to reliably reproduce the issue (I checked my own logs on my test systems and I saw this message only twice, both on the same day Sep 4th, but on two different systems in different unrelated tests).&lt;br/&gt;
If you can no longer reliably reproduce this either, then there&apos;s no point in any access at this stage anyway; and if there is a solid reproducer that works everywhere, then we can try it under our own control too.&lt;/p&gt;</comment>
                            <comment id="98077" author="morrone" created="Fri, 31 Oct 2014 17:59:30 +0000"  >&lt;p&gt;It is not terribly reasonable to expect that customers will give full access to their machines, or that the burden is on them to have solid reproducers.  You will need to find other ways to make progress on the many issues like this.&lt;/p&gt;

&lt;p&gt;In the long term that means stronger requirements on code quality, but of course that does not help us today.  In the short term, since no one understands the code well enough to have any clue what is happening it probably means that debugging patches are necessary to start the process of learning what is going on.&lt;/p&gt;</comment>
                            <comment id="98124" author="green" created="Sat, 1 Nov 2014 02:14:13 +0000"  >&lt;p&gt;I don&apos;t think we need a debug patch yet. I envision having full or close to full debug on the OSS where this happens (with debug buffer big enough to contain the whole thing + margin until you can trigger the dump) should be enough. Naturally this requires a reproducer and some time to setup and babysit to get the logs.&lt;/p&gt;</comment>
                            <comment id="98198" author="morrone" created="Mon, 3 Nov 2014 17:56:40 +0000"  >&lt;p&gt;Didn&apos;t we just establish that a reproducer is not forthcoming?  Additionally, LLNL staff time to babysit and get logs is not in great abundance.  You need to find another way to make progress.&lt;/p&gt;</comment>
                            <comment id="123219" author="adilger" created="Tue, 4 Aug 2015 17:11:37 +0000"  >&lt;p&gt;The patch &lt;a href=&quot;http://review.whamcloud.com/12672&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/12672&lt;/a&gt; &quot;&lt;a href=&quot;https://jira.whamcloud.com/browse/LU-5816&quot; title=&quot;Silence misleading kernel message&amp;quot;task tgt_recov:XXX blocked for more than 120 seconds&amp;quot;&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-5816&quot;&gt;&lt;del&gt;LU-5816&lt;/del&gt;&lt;/a&gt; target: don&apos;t trigger watchdog waiting in recovery&quot; addresses issue #2 and was landed for 2.7.0 and 2.5.4.&lt;/p&gt;</comment>
                            <comment id="143021" author="marc@llnl.gov" created="Fri, 19 Feb 2016 19:19:03 +0000"  >&lt;p&gt;Is Intel waiting on LLNL for this?&lt;/p&gt;</comment>
                            <comment id="143218" author="pjones" created="Mon, 22 Feb 2016 17:59:24 +0000"  >&lt;p&gt;I think that a level set is reasonable - which of the initial four reported problems (if any) still manifest themselves on the latest 2.5.5 version in production at LLNL?&lt;/p&gt;</comment>
                            <comment id="146399" author="charr" created="Mon, 21 Mar 2016 21:41:02 +0000"  >&lt;p&gt;We haven&apos;t seen this error since 2/17/16. &lt;/p&gt;</comment>
                            <comment id="146410" author="pjones" created="Mon, 21 Mar 2016 22:51:31 +0000"  >&lt;p&gt;Cameron&lt;/p&gt;

&lt;p&gt;How frequently was it happening prior to that date and when did your 2.5.5 FE version get rolled out?&lt;/p&gt;

&lt;p&gt;Peter&lt;/p&gt;</comment>
                            <comment id="146523" author="charr" created="Tue, 22 Mar 2016 19:53:57 +0000"  >&lt;p&gt;Peter,&lt;br/&gt;
The system giving the errors in February was a T&amp;amp;D system and I don&apos;t know when it might have been updated to 2.5.5 (likely around Jan-Feb). &lt;/p&gt;

&lt;p&gt;As for frequency prior to that, it doesn&apos;t look like we&apos;ve seen it since last July on a handful of nodes from multiple clusters. I&apos;m in favor of closing. &lt;/p&gt;

&lt;p&gt;Chris?&lt;/p&gt;
</comment>
                            <comment id="150102" author="pjones" created="Mon, 25 Apr 2016 20:26:41 +0000"  >&lt;p&gt;Thanks Cameron&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                            <outwardlinks description="is related to ">
                                        <issuelink>
            <issuekey id="27340">LU-5816</issuekey>
        </issuelink>
                            </outwardlinks>
                                                                <inwardlinks description="is related to">
                                        <issuelink>
            <issuekey id="24752">LU-5079</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="30429">LU-6664</issuekey>
        </issuelink>
                            </inwardlinks>
                                    </issuelinktype>
                    </issuelinks>
                <attachments>
                            <attachment id="16223" name="log" size="11293" author="morrone" created="Fri, 24 Oct 2014 21:48:01 +0000"/>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzwzhr:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>16286</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>