<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 03:02:38 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
<language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-13600] limit number of RPCs in flight during recovery</title>
                <link>https://jira.whamcloud.com/browse/LU-13600</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;It seems that if there are many uncommitted RPCs on a client when the server fails, the client may end up sending a very large number of RPCs to the server during recovery replay/resend.  This can cause the MDS/OSS to run out of memory because the incoming request queue grows too large, as seen in &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-9372&quot; title=&quot;OOM happens on OSS during Lustre recovery for more than 5000 clients&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-9372&quot;&gt;&lt;del&gt;LU-9372&lt;/del&gt;&lt;/a&gt;.  This can happen with very fast MDS/OSS nodes with large journals that can process a large number of requests before the journal has committed.&lt;/p&gt;

&lt;p&gt;The patch &lt;a href=&quot;https://review.whamcloud.com/31622&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/31622&lt;/a&gt; &quot;&lt;tt&gt;&lt;a href=&quot;https://jira.whamcloud.com/browse/LU-9372&quot; title=&quot;OOM happens on OSS during Lustre recovery for more than 5000 clients&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-9372&quot;&gt;&lt;del&gt;LU-9372&lt;/del&gt;&lt;/a&gt; ptlrpc: fix req_buffers_max and req_history_max setting&lt;/tt&gt;&quot; added the &lt;tt&gt;req_buffers_max&lt;/tt&gt; parameter to limit the number of RPCs in the incoming request queue (excess RPCs will be dropped by the server until some of the existing RPCs are processed).&lt;/p&gt;

&lt;p&gt;However, that parameter is off/unlimited by default, as it isn&apos;t obvious how to set it on a particular system (it depends on the number of clients, their &lt;tt&gt;max_rpcs_in_flight&lt;/tt&gt;, and the server RAM size).  Also, if a subset of clients consume all of the spots in the request queue during recovery, then it is possible that other clients with uncommitted RPCs cannot get &lt;em&gt;any&lt;/em&gt; of their RPCs into the queue, and this may cause recovery to fail due to missing sequence numbers.&lt;/p&gt;

&lt;p&gt;Instead, it makes sense for &lt;em&gt;clients&lt;/em&gt; to limit the number of RPCs that they send to the server during recovery, so that the MDS/OSS doesn&apos;t get overwhelmed by unprocessed RPCs.  As long as each client has &lt;b&gt;at least&lt;/b&gt; one RPC in flight to the target, this will ensure that recovery can complete properly.  This may slightly slow down recovery, but is much better than limiting the number of uncommitted RPCs at the server side during normal operations, since that could force extra journal commits and slow down RPC processing.&lt;/p&gt;

&lt;p&gt;My suggestion would be to limit clients to &quot;&lt;tt&gt;min(max_rpcs_in_flight, 8)&lt;/tt&gt;&quot; RPCs in flight during recovery, which is enough to avoid most of the RPC round-trip latency during recovery, but should not overwhelm the server (since it needs to handle this many RPCs in flight anyway).  In the analysis of &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-9372&quot; title=&quot;OOM happens on OSS during Lustre recovery for more than 5000 clients&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-9372&quot;&gt;&lt;del&gt;LU-9372&lt;/del&gt;&lt;/a&gt;, it showed up to 1M RPCs pending on the OSS during recovery of 5000 clients, about 2000 RPCs/client, which is far too many even if there are multiple OSTs per OSS.&lt;/p&gt;
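As an illustration of the arithmetic above, here is a small hypothetical sketch; the helper names are invented for illustration and are not Lustre source:

```python
def recovery_rpcs_in_flight(max_rpcs_in_flight, cap=8):
    """Per-client in-flight limit during recovery: min(max_rpcs_in_flight, 8),
    but never below 1 so recovery can always make progress."""
    return max(1, min(max_rpcs_in_flight, cap))

def worst_case_server_queue(num_clients, max_rpcs_in_flight, cap=8):
    """Upper bound on RPCs queued at one target if every client fills
    its recovery window at once."""
    return num_clients * recovery_rpcs_in_flight(max_rpcs_in_flight, cap)

# 5000 clients with the default window of 8 bound the queue at 40000 RPCs,
# versus the roughly 1M (about 2000 per client) observed in LU-9372.
```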

&lt;p&gt;Even with this in place, it also makes sense for the OSS to avoid clients overwhelming it during recovery.  There should be a separate patch to default &lt;tt&gt;req_buffers_max&lt;/tt&gt; to be limited by the OSS RAM size, so that the server doesn&apos;t OOM if there are older clients that do not limit their RPCs during recovery, or too many clients for some reason, even if this means recovery &lt;em&gt;may&lt;/em&gt; not finish correctly (though this is very unlikely).  A reasonable default limit would be something like &lt;tt&gt;(cfs_totalram_pages() * PAGE_SIZE / 1048576)&lt;/tt&gt;.  For the reported cases, this would be easily large enough to allow recovery (max 60k or 90k RPCs for 60GB or 90GB RAM, for 2000 and 5000 clients respectively), without overwhelming the OSS (1 RPC per 1MB of RAM).&lt;/p&gt;</description>
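The RAM-based default suggested in the description works out numerically as follows; this is a back-of-the-envelope sketch under the assumption of 4KB pages, not the actual Lustre code:

```python
PAGE_SIZE = 4096  # assumed page size in bytes, typical for x86_64 servers

def default_req_buffers_max(totalram_pages):
    """Roughly one queued request buffer per 1MB of server RAM."""
    return totalram_pages * PAGE_SIZE // 1048576

GB = 2**30
# 60GB of RAM permits about 61440 queued RPCs and 90GB about 92160,
# comfortably above the 2000 or 5000 clients in the reported cases.
```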
                <environment></environment>
        <key id="59319">LU-13600</key>
            <summary>limit number of RPCs in flight during recovery</summary>
                <type id="4" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11310&amp;avatarType=issuetype">Improvement</type>
                                            <priority id="4" iconUrl="https://jira.whamcloud.com/images/icons/priorities/minor.svg">Minor</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="tappro">Mikhail Pershin</assignee>
                                    <reporter username="adilger">Andreas Dilger</reporter>
                        <labels>
                            <label>LTS12</label>
                    </labels>
                <created>Mon, 25 May 2020 18:31:18 +0000</created>
                <updated>Wed, 9 Dec 2020 21:22:59 +0000</updated>
                            <resolved>Fri, 19 Jun 2020 22:01:20 +0000</resolved>
                                                    <fixVersion>Lustre 2.14.0</fixVersion>
                    <fixVersion>Lustre 2.12.6</fixVersion>
                                        <due></due>
                            <votes>0</votes>
                                    <watches>9</watches>
                                                                            <comments>
                            <comment id="271081" author="adilger" created="Mon, 25 May 2020 18:39:52 +0000"  >&lt;p&gt;Mike, could you please take a look at this.&lt;/p&gt;

&lt;p&gt;I think limiting the server &lt;tt&gt;req_buffers_max&lt;/tt&gt; should be easily done in one patch.  Since this is only a default, and can be changed at runtime, it doesn&apos;t have to be perfect.  IMHO, this would still be a lot better than the current OOM problem during recovery.  It has to be large enough to avoid problems during normal processing, but small enough to avoid OOM.&lt;/p&gt;

&lt;p&gt;The client-side limit may or may not be easy, but I haven&apos;t looked into the details.  In &lt;em&gt;theory&lt;/em&gt; the client should only have 1-2 RPCs in flight during recovery, or maybe the recovery code doesn&apos;t check &lt;tt&gt;max_rpcs_in_flight&lt;/tt&gt;?  It might be that the 1-2 RPCs in flight during recovery is how many the &lt;em&gt;server&lt;/em&gt; has processed from the request queue so that it can get a contiguous sequence of RPCs to process, but the &lt;em&gt;clients&lt;/em&gt; eagerly try to send all of their uncommitted requests so that they are available to the server for processing?&lt;/p&gt;</comment>
                            <comment id="271108" author="tappro" created="Tue, 26 May 2020 07:01:25 +0000"  >&lt;p&gt;Andreas, yes, considering the symptoms, it looks like the server accepts many more requests than it can process, and all of them wait in the processing queue consuming memory. We know for sure that the server processes recovery requests one by one, with no concurrent execution; at the same time, &lt;tt&gt;ptlrpc_replay_next&lt;/tt&gt; has no in-flight request control as far as I can see. It accounts outgoing replays in &lt;tt&gt;imp_replay_inflight&lt;/tt&gt;, but only to know when it becomes empty. That could be a good place to add per-client control.&lt;/p&gt;</comment>
                            <comment id="272287" author="adilger" created="Mon, 8 Jun 2020 17:54:17 +0000"  >&lt;p&gt;Mike, any chance you could make a patch to try and limit the client-side outgoing RPCs in flight?  A reasonable limit would be &lt;tt&gt;min(8, max_rpcs_in_flight)&lt;/tt&gt;.&lt;/p&gt;</comment>
                            <comment id="272306" author="tappro" created="Mon, 8 Jun 2020 20:04:33 +0000"  >&lt;p&gt;yes, sure&lt;/p&gt;</comment>
                            <comment id="272421" author="tappro" created="Tue, 9 Jun 2020 20:34:05 +0000"  >&lt;p&gt;Andreas, on closer inspection it seems the client still sends replays one by one: &lt;tt&gt;ptlrpc_import_recovery_state_machine()&lt;/tt&gt; chooses the request to replay and is called again from the replay interpreter, so the next replay is sent only when the reply for the previous one is received. Therefore either this is broken somehow, or the server hit OOM even with a single replay per client. Could that be just due to OST write replay specifics and several OSTs per node?&lt;/p&gt;</comment>
                            <comment id="272422" author="tappro" created="Tue, 9 Jun 2020 20:43:22 +0000"  >&lt;p&gt;Well, while the previous comment is true, I think there is still one place where requests are sent without rate control: &lt;tt&gt;ldlm_replay_locks()&lt;/tt&gt;. All locks are replayed at once, and this looks like the only real place where clients can overwhelm the server with a bunch of RPCs.&lt;/p&gt;</comment>
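The windowed replay idea discussed in this thread can be modelled with a toy simulation; this is a sketch only (the real change lives in the Lustre lock replay path, and `replay_with_window` is an invented name):

```python
from collections import deque

def replay_with_window(num_locks, window=8):
    """Simulate replaying lock RPCs with at most `window` in flight;
    returns the peak number of replays outstanding at once."""
    pending = deque(range(num_locks))
    in_flight = 0
    peak = 0
    while pending or in_flight:
        # refill the window: send replays until `window` are outstanding
        while pending and window > in_flight:
            pending.popleft()
            in_flight += 1
        peak = max(peak, in_flight)
        in_flight -= 1  # one reply arrives, freeing a slot
    return peak
```

Replaying all locks at once corresponds to an unbounded window; the fix caps the peak at the window size regardless of how many locks a client holds.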
                            <comment id="272438" author="adilger" created="Wed, 10 Jun 2020 07:39:03 +0000"  >&lt;p&gt;Definitely there have been several reports with servers having millions of outstanding RPCs that cause OOM. I don&apos;t know if there are logs in one of these tickets that could show the type of RPC being sent. &lt;/p&gt;</comment>
                            <comment id="272441" author="tappro" created="Wed, 10 Jun 2020 07:54:46 +0000"  >&lt;p&gt;Speaking about locks, they don&apos;t look like the source of the problem; all of them were on the server before recovery as well. So that is interesting; maybe this is related to &lt;tt&gt;RESENT&lt;/tt&gt; requests somehow. I will check the logs in the tickets.&lt;/p&gt;</comment>
                            <comment id="272498" author="adilger" created="Wed, 10 Jun 2020 18:37:04 +0000"  >&lt;p&gt;I agree that it may not be the locks themselves, but rather the RPCs enqueued for replaying the locks that are causing problems.  It may be that the RPC size is larger than the size of the lock itself.&lt;/p&gt;</comment>
                            <comment id="272557" author="chunteraa" created="Thu, 11 Jun 2020 02:01:17 +0000"  >&lt;p&gt;Would a &lt;a href=&quot;http://doc.lustre.org/lustre_manual.xhtml#dbdoclet.nrstuning&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;NRS policy&lt;/a&gt; help with this issue ?&lt;/p&gt;</comment>
                            <comment id="272669" author="tappro" created="Thu, 11 Jun 2020 17:02:19 +0000"  >&lt;p&gt;From the latest logs:&lt;/p&gt;
&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;
ldlm_lib.c:1639:abort_lock_replay_queue()) Skipped 4416881 previous similar messages
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;There are 1661 clients and 4M locks in replay queue, so it seems that is the source of problem as we discussed.&lt;/p&gt;</comment>
                            <comment id="272698" author="adilger" created="Thu, 11 Jun 2020 22:29:48 +0000"  >&lt;p&gt;Chris, no, an NRS policy will not help in this case, because NRS still requires the server to accept and queue all of the RPCs, and the problem here is that too many RPCs are arriving.&lt;/p&gt;</comment>
                            <comment id="272701" author="adilger" created="Thu, 11 Jun 2020 23:35:28 +0000"  >&lt;blockquote&gt;
&lt;p&gt;There are 1661 clients and 4M locks in replay queue, so it seems that is the source of problem as we discussed.&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;This also makes sense because clients have a limited number of outstanding RPCs to replay, but they may have thousands of locks each.&lt;/p&gt;

&lt;p&gt;As a workaround to avoid this, the compute nodes could limit the DLM LRU size so that they don&apos;t have so many locks to replay. Something like:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;lctl set_param ldlm.namespaces.\*.lru_size=1000
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;That would keep the maximum number of locks per MDT/OST to 1.6M for 1600 clients, which is about 1/3 of the current number.&lt;/p&gt;</comment>
                            <comment id="272752" author="gerrit" created="Fri, 12 Jun 2020 14:19:54 +0000"  >&lt;p&gt;Mike Pershin (mpershin@whamcloud.com) uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/38920&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/38920&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-13600&quot; title=&quot;limit number of RPCs in flight during recovery&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-13600&quot;&gt;&lt;del&gt;LU-13600&lt;/del&gt;&lt;/a&gt; ptlrpc: limit rate of lock replays&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: dc1bcf18d4d115d9b1bdbe0eacfa6d39f95a307a&lt;/p&gt;</comment>
                            <comment id="272755" author="tappro" created="Fri, 12 Jun 2020 14:25:56 +0000"  >&lt;p&gt;I&apos;ve made a patch to limit the lock replay rate from the client. Meanwhile, I still wonder why we started seeing this effect only recently. It could just be the result of specific workloads on clients, etc., but it is suspicious that several different sites reported the same problem at almost the same time. There could be other issues, e.g. something causing too many identical locks on a client, or similar.&lt;/p&gt;</comment>
                            <comment id="273130" author="tappro" created="Wed, 17 Jun 2020 17:47:28 +0000"  >&lt;p&gt;It could also be that an unstable network causes replays to be resent; in that case they will also stay in the recovery queue on the server until processed. That means each replayed lock may have not one but several requests in the queue waiting for processing. That could explain why this issue was seen recently at a couple of sites: in both cases there were network errors, and the OOM issue disappeared when the network was stabilised.&lt;/p&gt;</comment>
                            <comment id="273265" author="jpeyrard" created="Fri, 19 Jun 2020 10:15:44 +0000"  >&lt;p&gt;Hi Mikhail and all,&lt;/p&gt;

&lt;p&gt;Having worked on multiple cases like this one in the past, I have seen a common pattern between them.&lt;/p&gt;

&lt;p&gt;I can say that if we mount one OST at a time, we see memory usage increase when recovery comes into play, and this memory is freed when recovery is finished; it is a constant and monotonic increase.&lt;/p&gt;

&lt;p&gt;So for example you can have 10G of memory used on the OSS, and you enter recovery, which will increase the memory by something like 1G per second to reach 20G of used memory. Then at the end of the recovery, you go back to 10G of used memory.&lt;/p&gt;

&lt;p&gt;This is just an example, but it is the pattern I always see.&lt;/p&gt;

&lt;p&gt;And I have seen an OST take around 50G of memory on the OSS.&lt;/p&gt;

&lt;p&gt;It seems weird to me that the &quot;memory used&quot; on the OSS (looking at &quot;free -g&quot;) increases and is then freed at the end of the recovery.&lt;/p&gt;

&lt;p&gt;So I would think this OOM-in-recovery issue may be related to how memory is managed during recovery, and maybe to the number of locks involved in the recovery.&lt;/p&gt;

&lt;p&gt;On every cluster where I have seen this issue, there were at least 1500 clients mounting the FS.&lt;/p&gt;</comment>
                            <comment id="273266" author="tappro" created="Fri, 19 Jun 2020 10:22:29 +0000"  >&lt;p&gt;Johann, yes, that is the most rational explanation at the moment, and the supplied patch should decrease that pressure. I&apos;d appreciate it if you could check how the patch changes that pattern.&lt;/p&gt;</comment>
                            <comment id="273268" author="jpeyrard" created="Fri, 19 Jun 2020 11:02:23 +0000"  >&lt;p&gt;Hi Mikhail,&lt;/p&gt;

&lt;p&gt;We should be able to test that patch, and at the same time we have also been advised to use &lt;tt&gt;ldlm.namespaces.*.lru_size=1000&lt;/tt&gt; on the client nodes.&lt;/p&gt;

</comment>
                            <comment id="273310" author="gerrit" created="Fri, 19 Jun 2020 16:50:12 +0000"  >&lt;p&gt;Oleg Drokin (green@whamcloud.com) merged in patch &lt;a href=&quot;https://review.whamcloud.com/38920/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/38920/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-13600&quot; title=&quot;limit number of RPCs in flight during recovery&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-13600&quot;&gt;&lt;del&gt;LU-13600&lt;/del&gt;&lt;/a&gt; ptlrpc: limit rate of lock replays&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: 3b613a442b8698596096b23ce82e157c158a5874&lt;/p&gt;</comment>
                            <comment id="273366" author="gerrit" created="Fri, 19 Jun 2020 21:49:54 +0000"  >&lt;p&gt;Mike Pershin (mpershin@whamcloud.com) uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/39111&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/39111&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-13600&quot; title=&quot;limit number of RPCs in flight during recovery&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-13600&quot;&gt;&lt;del&gt;LU-13600&lt;/del&gt;&lt;/a&gt; ptlrpc: limit rate of lock replays&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: b2_12&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 115c8b69d3ff2dfb3b3843c21a7752e5e0034c91&lt;/p&gt;</comment>
                            <comment id="273367" author="pjones" created="Fri, 19 Jun 2020 22:01:20 +0000"  >&lt;p&gt;Landed for 2.14&lt;/p&gt;</comment>
                            <comment id="273492" author="gerrit" created="Mon, 22 Jun 2020 19:16:52 +0000"  >&lt;p&gt;Mike Pershin (mpershin@whamcloud.com) uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/39140&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/39140&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-13600&quot; title=&quot;limit number of RPCs in flight during recovery&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-13600&quot;&gt;&lt;del&gt;LU-13600&lt;/del&gt;&lt;/a&gt; ptlrpc: re-enterable signal_completed_replay()&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: fedadd783f9b1b113ec27c40d1c6f87e0ebce9aa&lt;/p&gt;</comment>
                            <comment id="274373" author="gerrit" created="Fri, 3 Jul 2020 15:01:27 +0000"  >&lt;p&gt;Oleg Drokin (green@whamcloud.com) merged in patch &lt;a href=&quot;https://review.whamcloud.com/39140/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/39140/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-13600&quot; title=&quot;limit number of RPCs in flight during recovery&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-13600&quot;&gt;&lt;del&gt;LU-13600&lt;/del&gt;&lt;/a&gt; ptlrpc: re-enterable signal_completed_replay()&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: 24451f379050373cb05ad1df7dd19134f21abba7&lt;/p&gt;</comment>
                            <comment id="275113" author="gerrit" created="Sat, 11 Jul 2020 07:29:05 +0000"  >&lt;p&gt;Oleg Drokin (green@whamcloud.com) merged in patch &lt;a href=&quot;https://review.whamcloud.com/39111/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/39111/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-13600&quot; title=&quot;limit number of RPCs in flight during recovery&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-13600&quot;&gt;&lt;del&gt;LU-13600&lt;/del&gt;&lt;/a&gt; ptlrpc: limit rate of lock replays&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: b2_12&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: 6b6d9c0911e45a9f38c1fdedfbb91293bd21cfb5&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10010">
                    <name>Duplicate</name>
                                                                <inwardlinks description="is duplicated by">
                                                        </inwardlinks>
                                    </issuelinktype>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                            <outwardlinks description="is related to ">
                                        <issuelink>
            <issuekey id="45601">LU-9372</issuekey>
        </issuelink>
                            </outwardlinks>
                                                                <inwardlinks description="is related to">
                                        <issuelink>
            <issuekey id="61196">LU-14027</issuekey>
        </issuelink>
                            </inwardlinks>
                                    </issuelinktype>
                    </issuelinks>
                <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|i0116v:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                                                                                </customfields>
    </item>
</channel>
</rss>