<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 01:38:49 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-4006] LNET Messages staying in Queue</title>
                <link>https://jira.whamcloud.com/browse/LU-4006</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;We&apos;ll need some guidance on what data to gather server-side, but when the Titan compute platform is shut down, the queued-message value in /proc/sys/lnet/stats on the server remains constant until the platform returns to service. We have seen this during the weekly maintenance on Titan as well as during a large-scale test shot with 2.4.0 servers and 2.4.0 clients using SLES11 SP2. &lt;/p&gt;

&lt;p&gt;We have a home-grown monitor for the backlog of messages for a particular server (and LNET RTR, but at the time of reporting the LNET RTRs are all down from a hardware perspective) &amp;#8211; we can attach that script if it may be useful. &lt;/p&gt;

&lt;p&gt;Please provide the data gathering techniques we should employ to make problem diagnosis more informative. We will likely have a shot at data gathering every Tuesday.&lt;/p&gt;

&lt;p&gt;While there are a large number of LNET messages queued (to what I assume are the LNET peers for the routers), LNET messages continue to be processed for other peers (either directly connected or through other routers); which is why I marked this as Minor.&lt;/p&gt;</description>
                <environment>RHEL5 server, SLES11 SP1 router/client as well as RHEL6 server w/ SLES 11 SP1 or SP2 client</environment>
        <key id="21115">LU-4006</key>
            <summary>LNET Messages staying in Queue</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="4" iconUrl="https://jira.whamcloud.com/images/icons/priorities/minor.svg">Minor</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="isaac">Isaac Huang</assignee>
                                    <reporter username="hilljjornl">Jason Hill</reporter>
                        <labels>
                            <label>mn4</label>
                    </labels>
                <created>Wed, 25 Sep 2013 01:14:44 +0000</created>
                <updated>Fri, 26 Jan 2024 21:19:31 +0000</updated>
                            <resolved>Thu, 6 Mar 2014 16:11:58 +0000</resolved>
                                    <version>Lustre 1.8.9</version>
                    <version>Lustre 2.4.1</version>
                                    <fixVersion>Lustre 2.6.0</fixVersion>
                    <fixVersion>Lustre 2.5.3</fixVersion>
                                        <due></due>
                            <votes>0</votes>
                                    <watches>10</watches>
                                                                            <comments>
                            <comment id="67497" author="jamesanunez" created="Wed, 25 Sep 2013 01:45:26 +0000"  >&lt;p&gt;Jason, &lt;br/&gt;
Please attach your script. It might be helpful to see how you are monitoring the messages. &lt;/p&gt;

&lt;p&gt;Also, is this the latest 2.4.0 that you are using, i.e. 2.4.1-RC2?&lt;/p&gt;

&lt;p&gt;Thanks, &lt;br/&gt;
James&lt;/p&gt;</comment>
                            <comment id="67498" author="hilljjornl" created="Wed, 25 Sep 2013 01:58:40 +0000"  >&lt;p&gt;James,&lt;/p&gt;

&lt;p&gt;The 2.4.X variant was actually a 2.4.0RC or pre-RC. I will inquire with my colleagues who were running the test, I don&apos;t have direct knowledge on that side. &lt;/p&gt;</comment>
                            <comment id="67503" author="liang" created="Wed, 25 Sep 2013 03:55:19 +0000"  >&lt;p&gt;/proc/sys/lnet/stats does not show queued messages; it&apos;s a counter of bytes already sent/received, and it will remain constant if all peers are down and no I/O goes through LNet. &lt;br/&gt;
Queued messages can be observed via /proc/sys/lnet/peers; also, upper layers control the number of messages queued to LNet as well (both mdc and osc have their own message queues).&lt;/p&gt;</comment>
                            <comment id="67517" author="blakecaldwell" created="Wed, 25 Sep 2013 12:34:03 +0000"  >&lt;p&gt;Here is what we observed with 2.4.0RC. All Titan osc clients were evicted at this point, but we noticed messages on the client side coming from the servers.&lt;/p&gt;

&lt;p&gt;&lt;span class=&quot;error&quot;&gt;&amp;#91;2013-09-03 15:58:54&amp;#93;&lt;/span&gt;&lt;span class=&quot;error&quot;&gt;&amp;#91;c13-0c1s0n0&amp;#93;&lt;/span&gt;LNet: 15622:0:(lib-move.c:1828:lnet_parse_put()) Dropping PUT from 12345-10.36.226.46@o2ib235 portal 4 match 1443971358080092 offset 192 length 192: 2&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;2013-09-03 15:58:54&amp;#93;&lt;/span&gt;&lt;span class=&quot;error&quot;&gt;&amp;#91;c13-0c1s0n0&amp;#93;&lt;/span&gt;LNet: 15622:0:(lib-move.c:1828:lnet_parse_put()) Skipped 16 previous similar messages&lt;/p&gt;

&lt;p&gt;On the server side at 04:36:07 PM:&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;root@atlas-oss4f7 ~&amp;#93;&lt;/span&gt;# cat /proc/sys/lnet/peers&lt;br/&gt;
nid                      refs state  last   max   rtr   min    tx   min queue&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;snip&amp;#93;&lt;/span&gt;&lt;br/&gt;
10.36.230.208@o2ib234    3201  down    -1     8     8     8 -3187 -124974 843288&lt;/p&gt;

&lt;p&gt;The number of refs and queue were decreasing. Some had more than 8k messages queued. There were 11 servers with messages still in their queue at 4:40 PM.&lt;/p&gt;

&lt;p&gt;This entry would appear every 100s in the lctl dk debug output:&lt;/p&gt;

&lt;p&gt;00000800:00000100:3.0:1378240617.211971:0:3150:0:(o2iblnd_cb.c:2844:kiblnd_cm_callback()) 10.36.230.181@o2ib234: ADDR ERROR -110&lt;br/&gt;
00000800:00000100:3.0:1378240617.211980:0:3150:0:(o2iblnd_cb.c:2072:kiblnd_peer_connect_failed()) Deleting messages for 10.36.230.181@o2ib234: connection failed&lt;/p&gt;

&lt;p&gt;At 04:48:54 PM connectivity was restored to this LNet router (up vs. down as above) and all LNet messages had drained.&lt;br/&gt;
 10.36.230.208@o2ib234       6    up    -1     8     8     8     8 -124974 0&lt;/p&gt;</comment>
                            <comment id="67550" author="jlevi" created="Wed, 25 Sep 2013 16:06:36 +0000"  >&lt;p&gt;Liang,&lt;br/&gt;
Would you be able to comment on this one?&lt;br/&gt;
Thank you!&lt;/p&gt;</comment>
                            <comment id="67591" author="hilljjornl" created="Wed, 25 Sep 2013 19:01:24 +0000"  >&lt;p&gt;Liang:&lt;/p&gt;

&lt;p&gt;On pages 32-36 of the 1.8 Lustre Operations Manual, routerstat references /proc/sys/lnet/stats &amp;#8211; the first field is msgs_alloc, which, as we understood it, is the count of currently processing messages for a host. We are looking at a more &quot;global&quot; scale &amp;#8211; knowing what is happening on a per-peer basis is interesting but harder to alert and monitor on. Are we incorrect about the first field in /proc/sys/lnet/stats?&lt;/p&gt;

&lt;p&gt;&amp;#8211;&lt;br/&gt;
-Jason&lt;/p&gt;</comment>
                            <comment id="67620" author="dillowda" created="Wed, 25 Sep 2013 21:28:22 +0000"  >&lt;p&gt;It is worth noting that the machine was taken down just after 8am on Sep 3rd, and we still had several thousand messages queued to the machine that afternoon; even some of the evictions were happening &lt;em&gt;way&lt;/em&gt; late.&lt;/p&gt;</comment>
                            <comment id="67622" author="dillowda" created="Wed, 25 Sep 2013 21:31:24 +0000"  >&lt;p&gt;Also, the messages that had been queued up were being sent to the routers after they came back, except that they were destined for LNets that no longer existed on Titan &amp;#8211; this is how we noticed the problem during the Atlas test on the 3rd. Titan had been down long enough (6+ hours) that it should have been completely evicted long before then, and none of this traffic should have made it into the Gemini side.&lt;/p&gt;</comment>
                            <comment id="67642" author="liang" created="Thu, 26 Sep 2013 04:24:49 +0000"  >&lt;p&gt;Yes, you are right: the allocated-message count in lnet/stats is the global number of messages (all together). However, LNet itself will not automatically send a huge number of messages; the router checker sends ping messages to routers, but at a low frequency (every 60 seconds by default), so I think that traffic is not really from LNet itself but from upper-layer services. BTW, you can see queued messages on each NI (or LNet network) by checking lnet/nis.&lt;br/&gt;
Also, it would be helpful if you could get D_RPCTRACE from the server, to see what kinds of messages are queued to LNet, so we can analyse what&apos;s going on.&lt;/p&gt;</comment>
                            <comment id="68083" author="hilljjornl" created="Tue, 1 Oct 2013 17:15:16 +0000"  >&lt;p&gt;Gathering data now from a handful of servers. Our monitoring alerts when the allocated message count exceeds 30,000 for 2 samples, each sample being taken every 30 seconds. Data upload forthcoming. I used the following to generate the data:&lt;/p&gt;

&lt;p&gt;echo +rpctrace &amp;gt; /proc/sys/lnet/debug; lctl dk &amp;gt; /dev/null; sleep 60; lctl dk &amp;gt; /tmp/rpctrace.$HOSTNAME.20131010; echo -rpctrace &amp;gt; /proc/sys/lnet/debug; lctl dk &amp;gt; /dev/null&lt;/p&gt;</comment>
                            <comment id="68084" author="hilljjornl" created="Tue, 1 Oct 2013 17:15:48 +0000"  >&lt;p&gt;output of lctl dk with +rpctrace enabled in the debug flag.&lt;/p&gt;</comment>
                            <comment id="68352" author="isaac" created="Fri, 4 Oct 2013 05:56:24 +0000"  >&lt;p&gt;To summarize, my understanding is that after a client cluster (both the clients and routers that connect them to servers) disappears all at once, queued messages on servers drain out very very slowly. Please correct me if this is wrong.&lt;/p&gt;

&lt;p&gt;I believe there are two things that caused the problem:&lt;br/&gt;
1. When a router is dead, messages already queued on that router stay on the queue, and it&apos;d take the LND a very long time to drain the queue. As Blake pointed out previously, there was a connection error about once every 100s, so the LND drained about 8 messages every 100 seconds. It&apos;d take hours and hours to drain thousands of queued messages if the router stays down.&lt;br/&gt;
2. Even after upper layers have already evicted the clients, the messages would still stay queued. This is because LNetMDUnlink wouldn&apos;t be able to unlink the MDs, as the queued messages still hold references on them. This is very wasteful &amp;#8211; there&apos;s no point letting the network drain messages already aborted by upper layers.&lt;/p&gt;

&lt;p&gt;How to fix them:&lt;br/&gt;
1. When a router becomes dead, all messages queued on it, except the ones whose final destination is the router itself (e.g. router pinger traffic), should be retried on other available routes, and completed immediately with an error if no route is available. &lt;br/&gt;
2. When the upper layer calls LNetMDUnlink and the MD is busy, we should try to abort the messages that reference the MD, rather than simply wait for the network to complete them. If a message is already queued at the LND, it&apos;d be difficult to abort. But if it&apos;s still queued on the peer of the next hop at the LNet layer, it should not be hard to complete it immediately with an error. Since at most peer_credits messages can be queued at the LND at a time, the majority of queued messages would be drained immediately, which is low-hanging fruit we can get much more easily than aborting messages already queued at LNDs.&lt;/p&gt;

&lt;p&gt;I think we should fix 2 first. If messages still stay a long time, then upper layers aren&apos;t unlinking them fast enough, e.g. very slow evictions; in which case we should also look at upper layers for potential problems. In contrast, if we fix 1 first, then such potential problems at upper layers won&apos;t have a chance to show up at all. However, neither fix would be trivial.&lt;/p&gt;</comment>
                            <comment id="68437" author="liang" created="Sat, 5 Oct 2013 08:46:31 +0000"  >&lt;p&gt;Isaac, thanks for looking into it. I agree that it&apos;s because of very slow draining of queued messages; there could be many clients and far fewer routers, so each router has thousands of queued messages.&lt;br/&gt;
I&apos;m thinking that if we want a relatively simple way of aborting an LNet message, we probably need a new flag for the MD, e.g. LNET_MD_CAN_ABORT (or LNET_MD_EXCL), with LNet allowing exactly one message to be associated with an MD carrying this flag (LNet would return an error if the user tries to send multiple messages on such an MD). That way we don&apos;t need to maintain a list of messages on each MD and scan them all on unlink, which is more complex in the latest LNet because of the locking changes. This should satisfy the current use case in Lustre/ptlrpc, although it is still not trivial.&lt;br/&gt;
Also, we would need to add a new RPC API in ptlrpc to allow aborting.&lt;/p&gt;
</comment>
                            <comment id="68729" author="isaac" created="Thu, 10 Oct 2013 06:01:40 +0000"  >&lt;p&gt;Liang, I haven&apos;t looked at the locking changes, but I have no plan to queue messages on the MD. How often does Lustre/PTLRPC send multiple messages from the same MD? A quick code scan under lustre/ptlrpc/ showed zero such uses. The FF server collectives do this, but only after gossip confirms the target node to be alive, which minimizes the chance of sending a message to a dead node in the first place.&lt;/p&gt;

&lt;p&gt;My current plan is to simply save a pointer to the latest message on the MD: NULL the pointer when the message goes to the LND, replace it when another message is sent over the same MD, and complete the message, if any, on LNetMDUnlink. If Lustre in the future does send multiple messages from the same MD, then we&apos;d still have a chance to abort the most recent message, which is the one most likely still on the queue. This should cover most cases. Also, I don&apos;t see a need to add a new MD flag; this should always be the unlink behavior.&lt;/p&gt;</comment>
                            <comment id="68812" author="liang" created="Fri, 11 Oct 2013 13:27:35 +0000"  >&lt;p&gt;One concern of mine is that if we don&apos;t add any new flag (or an explicit API for abort), then we are changing the semantics of the current API: as it stands, LNetMDUnlink will not implicitly abort any in-flight PUT/GET (although Lustre doesn&apos;t rely on this). So would it be reasonable to add a new flag to allow Unlink to abort in-flight messages?&lt;/p&gt;</comment>
                            <comment id="69075" author="isaac" created="Wed, 16 Oct 2013 04:19:25 +0000"  >&lt;p&gt;If I haven&apos;t missed something, I don&apos;t see any semantic change. Currently callers would eventually get a SENT event with the unlinked flag set and likely an error status code. With the proposed change, it&apos;s still the same thing, i.e. a SENT event with unlinked=1 and status=-ECANCELED; the only difference is that now the event could come much sooner. Callers are supposed to handle that event anyway, sooner or later. I don&apos;t think this is a semantic change. The API semantics never (and can&apos;t) say how soon such a piggybacked unlink event would happen. If for some reason Lustre can&apos;t handle an instantaneous piggybacked unlink, then it&apos;s a bug in Lustre that should be fixed.&lt;/p&gt;</comment>
                            <comment id="69498" author="isaac" created="Tue, 22 Oct 2013 06:35:33 +0000"  >&lt;p&gt;Patch posted: &lt;a href=&quot;http://review.whamcloud.com/#/c/8041/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/#/c/8041/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Though the idea was straightforward, the original approach I suggested turned out to be too messy to implement: the MD needs to keep a pointer to a message that is not reference-counted, it&apos;s very tricky to remove a message from its peer/NI queue, and so on.&lt;/p&gt;

&lt;p&gt;Instead I chose to abort messages in lnet_post_send_locked(). The drawback is that it&apos;d take an additional LND timeout for messages to be aborted. But the advantages are much, much simpler code, and that all queued messages on an unlinked MD will be aborted rather than just one. I think this is a good trade-off between instantaneous unlink and code complexity.&lt;/p&gt;</comment>
                            <comment id="69501" author="liang" created="Tue, 22 Oct 2013 08:16:02 +0000"  >&lt;p&gt;Yes, I think this way is better. The other reason I proposed using a new flag for the MD is that even if we only track the last message associated with the MD, we still need some complex locking operations, because different messages are protected by different locks; this approach totally avoids the complex lock operations.&lt;br/&gt;
Still, I think that unlink implying abort is a slight semantic change: e.g., if a user creates an MD with threshold == LNET_MD_THRESH_INF, pings an arbitrary number of peers, then unlinks immediately and relies on callbacks to count the results, it will just work with the current LNet, but it might fail some pings if unlink implied aborting queued messages. But I think it is probably fine if nobody relies on this semantic (at least Lustre doesn&apos;t), so I don&apos;t insist on this.&lt;/p&gt;</comment>
                            <comment id="69516" author="hilljjornl" created="Tue, 22 Oct 2013 13:37:33 +0000"  >&lt;p&gt;Can I get an idea of the time to solution? Are we looking at something that will be tested and ready to install in the next month or something longer?&lt;/p&gt;

&lt;p&gt;Thx.&lt;br/&gt;
&amp;#8211;&lt;br/&gt;
-Jason&lt;/p&gt;</comment>
                            <comment id="69517" author="simmonsja" created="Tue, 22 Oct 2013 13:52:04 +0000"  >&lt;p&gt;There is a patch ready for testing &lt;img class=&quot;emoticon&quot; src=&quot;https://jira.whamcloud.com/images/icons/emoticons/smile.png&quot; height=&quot;16&quot; width=&quot;16&quot; align=&quot;absmiddle&quot; alt=&quot;&quot; border=&quot;0&quot;/&gt;&lt;/p&gt;</comment>
                            <comment id="69580" author="isaac" created="Tue, 22 Oct 2013 20:03:59 +0000"  >&lt;p&gt;Liang, yes, it&apos;s a change in the case you described. But I&apos;d regard it as an abuse of an obscure part of the API semantics rather than a valid use case we&apos;d support: unlinking an MD while there&apos;s an active message you don&apos;t want to give up. I&apos;ve pushed an update that adds a comment above LNetMDUnlink: &quot;As a result, active messages associated with the MD may get aborted.&quot;&lt;/p&gt;</comment>
                            <comment id="70230" author="isaac" created="Wed, 30 Oct 2013 05:40:45 +0000"  >&lt;p&gt;Patch updated: we can&apos;t use LNET_MD_FLAG_ZOMBIE, which can be set as a result of MD auto-unlink, where it&apos;s not an abort and active messages should not be canceled as a result (e.g. a REPLY message from an MD exhausted by the corresponding GET) &amp;#8211; the only way is to add a new flag, LNET_MD_FLAG_ABORTED, set by LNetM&lt;span class=&quot;error&quot;&gt;&amp;#91;DE&amp;#93;&lt;/span&gt;Unlink.&lt;/p&gt;</comment>
                            <comment id="70506" author="isaac" created="Fri, 1 Nov 2013 17:09:09 +0000"  >&lt;p&gt;Hi, we&apos;ve tested the patch on our internal test systems, and there will be some additional testing at Hyperion. Meanwhile, if you want to test it at ORNL, I&apos;d suggest:&lt;/p&gt;
&lt;ul class=&quot;alternate&quot; type=&quot;square&quot;&gt;
	&lt;li&gt;Put it on a few clients first. Although it&apos;s supposed to solve a server-side problem, it changes the code path for every outgoing message.&lt;/li&gt;
	&lt;li&gt;Then put it on a couple of servers, though avoid the MDS/MGS.&lt;/li&gt;
&lt;/ul&gt;
</comment>
                            <comment id="73115" author="simmonsja" created="Mon, 9 Dec 2013 19:11:57 +0000"  >&lt;p&gt;Will this patch be cherry-picked to b2_4 and b2_5?&lt;/p&gt;</comment>
                            <comment id="73123" author="jamesanunez" created="Mon, 9 Dec 2013 20:18:21 +0000"  >&lt;p&gt;Patch &lt;a href=&quot;http://review.whamcloud.com/#/c/8041/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/#/c/8041/&lt;/a&gt; landed to master. &lt;/p&gt;

&lt;p&gt;I&apos;ll look into plans for landing in b2_4 and b2_5.&lt;/p&gt;</comment>
                            <comment id="73125" author="jamesanunez" created="Mon, 9 Dec 2013 20:40:33 +0000"  >&lt;p&gt;James, &lt;/p&gt;

&lt;p&gt;Has anyone at ORNL tested the patch and, if so, did it fix or reduce the number of messages in the queue? Feedback from you will help us determine whether the patch should be cherry-picked/back-ported to b2_4 and/or b2_5.&lt;/p&gt;

&lt;p&gt;Thanks,&lt;br/&gt;
James&lt;/p&gt;</comment>
                            <comment id="73295" author="simmonsja" created="Wed, 11 Dec 2013 17:06:21 +0000"  >&lt;p&gt;Is it a server-side-only patch? If so, I can arrange to have it tested.&lt;/p&gt;</comment>
                            <comment id="73588" author="jamesanunez" created="Mon, 16 Dec 2013 16:30:50 +0000"  >&lt;p&gt;James, &lt;/p&gt;

&lt;p&gt;The patch can be applied to the servers only, and you should see a benefit from it, but the patch is intended for both clients and servers. A server-only configuration is fine for testing, but we recommend patching both clients and servers in production.&lt;/p&gt;

&lt;p&gt;Thanks, &lt;br/&gt;
James&lt;/p&gt;</comment>
                            <comment id="74201" author="jamesanunez" created="Tue, 31 Dec 2013 20:00:24 +0000"  >&lt;p&gt;Jason or James, &lt;/p&gt;

&lt;p&gt;Is this still an issue or should we close this ticket?&lt;/p&gt;

&lt;p&gt;Thanks,&lt;br/&gt;
James&lt;/p&gt;</comment>
                            <comment id="74252" author="simmonsja" created="Thu, 2 Jan 2014 18:10:18 +0000"  >&lt;p&gt;I plan to test this patch at scale with 2.4 clients.&lt;/p&gt;</comment>
                            <comment id="74664" author="simmonsja" created="Thu, 9 Jan 2014 17:47:36 +0000"  >&lt;p&gt;Had a discussion at work about testing this patch. It was decided not to test this patch in the near future since it is a small case. You can close this ticket; if we run into it in the future we can reopen it.&lt;/p&gt;</comment>
                            <comment id="74673" author="jamesanunez" created="Thu, 9 Jan 2014 18:32:19 +0000"  >&lt;p&gt;James, &lt;br/&gt;
Thank you for the update. I&apos;m going to close this ticket, but we can reopen it if this patch does not solve your problem.&lt;/p&gt;

&lt;p&gt;James&lt;/p&gt;</comment>
                            <comment id="75432" author="simmonsja" created="Wed, 22 Jan 2014 15:10:41 +0000"  >&lt;p&gt;On January 21 we performed a test shot with our Lustre 2.4 file system and 2.4 clients; before that we were on 1.8 clients. During startup we ran into this issue. We have another test February 4th and will perform the upgrade on the 28th. I plan to run with this patch server-side to see if it resolves the issues we are seeing.&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                                                <inwardlinks description="is related to">
                                                        </inwardlinks>
                                    </issuelinktype>
                    </issuelinks>
                <attachments>
                            <attachment id="13571" name="LU-4006.tgz" size="217" author="hilljjornl" created="Tue, 1 Oct 2013 17:15:48 +0000"/>
                            <attachment id="13546" name="lnet_stats.sh" size="2020" author="hilljjornl" created="Wed, 25 Sep 2013 02:00:10 +0000"/>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzw3vj:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>10722</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>