<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 02:21:53 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary, append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
<language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-8943] Enable Multiple IB/OPA Endpoints Between Nodes</title>
                <link>https://jira.whamcloud.com/browse/LU-8943</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;OPA driver optimizations are based on the MPI model where it is expected to have multiple endpoints between two given nodes. To enable this optimization for Lustre, we need to make it possible, via an LND-specific tuneable, to create multiple endpoints and to balance the traffic over them.&lt;/p&gt;

&lt;p&gt;I have already created an experimental patch to test this theory out.  I was able to push OPA performance to 12.4GB/s by just having 2 QPs between the nodes and round robin messages between them.&lt;/p&gt;

&lt;p&gt;This Jira ticket is for productizing my patch and testing it out thoroughly for OPA and IB.  Test results will be posted to this ticket.&lt;/p&gt;</description>
                <environment></environment>
        <key id="42449">LU-8943</key>
            <summary>Enable Multiple IB/OPA Endpoints Between Nodes</summary>
                <type id="4" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11310&amp;avatarType=issuetype">Improvement</type>
                                            <priority id="2" iconUrl="https://jira.whamcloud.com/images/icons/priorities/critical.svg">Critical</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="doug">Doug Oucharek</assignee>
                                    <reporter username="doug">Doug Oucharek</reporter>
                        <labels>
                            <label>lnet</label>
                    </labels>
                <created>Thu, 15 Dec 2016 19:02:29 +0000</created>
                <updated>Tue, 5 Dec 2017 08:40:39 +0000</updated>
                            <resolved>Fri, 12 May 2017 12:22:45 +0000</resolved>
                                                    <fixVersion>Lustre 2.10.0</fixVersion>
                                        <due></due>
                            <votes>0</votes>
                                    <watches>21</watches>
                                                                            <comments>
                            <comment id="182673" author="gerrit" created="Tue, 31 Jan 2017 00:34:34 +0000"  >&lt;p&gt;Doug Oucharek (doug.s.oucharek@intel.com) uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/25168&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/25168&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-8943&quot; title=&quot;Enable Multiple IB/OPA Endpoints Between Nodes&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-8943&quot;&gt;&lt;del&gt;LU-8943&lt;/del&gt;&lt;/a&gt; lnd: Enable Multiple OPA Endpoints between Nodes&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: c10acb41f474365f2b3026bdb04890a065cb9849&lt;/p&gt;</comment>
                            <comment id="184802" author="doug" created="Tue, 14 Feb 2017 16:55:32 +0000"  >&lt;p&gt;To activate this patch, you need to use the following option:&lt;/p&gt;

&lt;p&gt;options ko2iblnd conns_per_peer=&amp;lt;n&amp;gt;&lt;/p&gt;

&lt;p&gt;Where &amp;lt;n&amp;gt; is the number of QPs you want per peer connection. &#160;At the moment, both sides of the connection must have the same setting (I need to fix this in the patch...only the client side should need this).&lt;/p&gt;

&lt;p&gt;I found that setting &amp;lt;n&amp;gt; to 6 gave me amazing performance. &#160;Note: I have not tried this patch yet with the recommended hfi tunings. &#160;They &quot;will&quot; interfere with this patch and should initially be avoided.&lt;/p&gt;

&lt;p&gt;Another note: I believe there is a race condition in the hfi driver we trigger when there is too much parallelism. &#160;A couple of times running this patch, I found the hfi driver &quot;missed&quot; an event. &#160;I am talking to the OPA developers about this.&lt;/p&gt;</comment>
                            <comment id="186481" author="doug" created="Tue, 28 Feb 2017 18:45:48 +0000"  >&lt;p&gt;The patch for this ticket is showing a lot of promise. &#160;To productize it so we can land it to master, I need to do the following:&lt;/p&gt;
&lt;ul&gt;
	&lt;li&gt;Work the code so only the active side of a connection needs to have conns_per_peer set. &#160;The passive side should just adapt automatically.&lt;/li&gt;
	&lt;li&gt;Make sure backwards compatibility is not broken when this feature is turned on in either the active side or passive side.&lt;/li&gt;
	&lt;li&gt;The code which implements the round-robin behaviour has a potential infinite loop when things go wrong. &#160;I need to add protection against that happening.&lt;/li&gt;
	&lt;li&gt;There is no code to recover a downed connection to get us back to the conns_per_peer level. &#160;I&apos;m not sure I will add that but need to evaluate the situation more.&lt;/li&gt;
	&lt;li&gt;Right now, there is no easy way to see if this feature is active and how well it is working. &#160;I need to add some connection-based stats to be queried by lnetctl so we have a way to validate this feature and monitor it.&lt;/li&gt;
&lt;/ul&gt;


&lt;p&gt;In addition, testing needs to be done to see how much more CPU this feature consumes when it is activated. &#160;We need to measure the costs as well as the benefits. &#160;This needs to all be done with MLX hardware as well as OPA just to see what happens if this is activated on MLX-based networks.&lt;/p&gt;</comment>
                            <comment id="192753" author="doug" created="Wed, 19 Apr 2017 23:44:47 +0000"  >&lt;p&gt;I have attached an Excel spreadsheet showing the performance changes with different conns_per_peer settings for both OPA and MLX-QDR. &#160;For OPA, there is a tab showing the change without any HFI1 tunings (i.e. just the defaults) and with the recommended HFI1 tunings.&lt;/p&gt;

&lt;p&gt;Summary: Using this patch with conns_per_peer of 3 and the recommended HFI1 tunings provides good and consistent performance.&lt;/p&gt;

&lt;p&gt;Still to be done: Testing this patch for backwards compatibility. &#160;&lt;/p&gt;</comment>
                            <comment id="193284" author="doug" created="Mon, 24 Apr 2017 21:05:27 +0000"  >&lt;p&gt;Backwards compatibility testing looks good. &#160;An upgraded node that initiates connections will create conns_per_peer connections and the non-upgraded receiver node will allow that many connections to be created. &#160;However, the non-upgraded node will not &quot;use&quot; all the connections to send messages, only the first one. &#160;So performance will not improve.&lt;/p&gt;

&lt;p&gt;If things are reversed (non-upgraded initiator to upgraded receiver), the connection will work as if neither side is upgraded, because it is the initiator who decides how many connections to have, and in this case it will just be one.&lt;/p&gt;

&lt;p&gt;So, to get the performance benefit, both sides of a connection need to be upgraded with this patch and the initiator needs to have conns_per_peer set &amp;gt; 1.&lt;/p&gt;

&lt;p&gt;Based on the attached spreadsheet, I recommend OPA systems with many cores use conns_per_peer = 3 and these HFI1 parameters:&lt;/p&gt;

&lt;p&gt;options hfi1 krcvqs=8 piothreshold=0 sge_copy_mode=2 wss_threshold=70&lt;/p&gt;

&lt;p&gt;However, if you are on a VM or have a limited number of cores, set conns_per_peer = 4 and&#160;krcvqs = 4 in the HFI1 parameters.&lt;/p&gt;</comment>
                            <comment id="193289" author="adilger" created="Mon, 24 Apr 2017 21:43:28 +0000"  >&lt;blockquote&gt;
&lt;p&gt;I recommend OPA systems with many cores use &lt;tt&gt;conns_per_peer = 3&lt;/tt&gt; and these HFI1 parameters:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;options hfi1 krcvqs=8 piothreshold=0 sge_copy_mode=2 wss_threshold=70
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;&lt;/blockquote&gt;

&lt;p&gt;Are you going to add these to the &lt;tt&gt;/usr/sbin/ko2iblnd-probe&lt;/tt&gt; script, or be set by default in some other manner, or will this be up to the user to discover and set?  At a very minimum there should be an update to the Lustre User Manual (see &lt;a href=&quot;https://wiki.hpdd.intel.com/display/PUB/Making+changes+to+the+Lustre+Manual&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://wiki.hpdd.intel.com/display/PUB/Making+changes+to+the+Lustre+Manual&lt;/a&gt;), but providing good performance out of the box is preferred.&lt;/p&gt;</comment>
                            <comment id="194512" author="doug" created="Thu, 4 May 2017 22:41:46 +0000"  >&lt;p&gt;I did update the OPA defaults to set conns_per_peer to 4 when OPA is detected. &#160;I&apos;ll also update the manual under &lt;a href=&quot;https://jira.whamcloud.com/browse/LUDOC-374&quot; title=&quot;Add notes about conns_per_peer ko2iblnd parameter&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LUDOC-374&quot;&gt;&lt;del&gt;LUDOC-374&lt;/del&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I bumped conns_per_peer to 4 from 3 because the OPA team is going to start recommending a krcvqs default of 4, especially for a low number of cores (e.g. VMs). &#160;Having a conns_per_peer of 4 helps to compensate for the lower krcvqs number, so we should work well out of the box whether krcvqs is 4 or 8.&lt;/p&gt;</comment>
                            <comment id="195613" author="gerrit" created="Fri, 12 May 2017 05:06:07 +0000"  >&lt;p&gt;Oleg Drokin (oleg.drokin@intel.com) merged in patch &lt;a href=&quot;https://review.whamcloud.com/25168/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/25168/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-8943&quot; title=&quot;Enable Multiple IB/OPA Endpoints Between Nodes&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-8943&quot;&gt;&lt;del&gt;LU-8943&lt;/del&gt;&lt;/a&gt; lnd: Enable Multiple OPA Endpoints between Nodes&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: 7241e68f37962991ef43a6c01b3a83ff67282d88&lt;/p&gt;</comment>
                            <comment id="195646" author="pjones" created="Fri, 12 May 2017 12:22:45 +0000"  >&lt;p&gt;Landed for 2.10&lt;/p&gt;</comment>
                            <comment id="195764" author="dmiter" created="Sat, 13 May 2017 19:49:12 +0000"  >&lt;p&gt;I observed strange behavior. It looks like after this commit I cannot unload the&#160;ko2iblnd module. LNet is busy even though everything unmounted successfully. Only a reboot helps.&lt;/p&gt;</comment>
                            <comment id="195765" author="adilger" created="Sun, 14 May 2017 00:13:43 +0000"  >&lt;p&gt;Does &quot;&lt;tt&gt;lctl network down&lt;/tt&gt;&quot; or &quot;&lt;tt&gt;lnetctl lnet unconfigure&lt;/tt&gt;&quot; help?&lt;/p&gt;</comment>
                            <comment id="195770" author="dmiter" created="Sun, 14 May 2017 06:59:47 +0000"  >&lt;p&gt;No, as I mentioned before, only a reboot helps.&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;# lustre_rmmod                                                                  
rmmod: ERROR: Module ko2iblnd is in use

# lsmod|less                                                                    
Module                  Size  Used by
ko2iblnd              233790  1 
ptlrpc               1343928  0 
obdclass             1744518  1 ptlrpc
lnet                  483843  3 ko2iblnd,obdclass,ptlrpc
libcfs                416336  4 lnet,ko2iblnd,obdclass,ptlrpc
[...]

# lctl network down                                                             
LNET busy

lnetctl &amp;gt; lnet unconfigure
unconfigure:
    - lnet:
          errno: -16
          descr: &quot;LNet unconfigure error: Device or resource busy&quot;
lnetctl &amp;gt; lnet unconfigure --all
unconfigure:
    - lnet:
          errno: -16
          descr: &quot;LNet unconfigure error: Device or resource busy&quot;

# lustre_rmmod                                                                  
rmmod: ERROR: Module ko2iblnd is in use



&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
                            <comment id="195874" author="doug" created="Mon, 15 May 2017 18:47:30 +0000"  >&lt;p&gt;When I created the performance spreadsheet, I needed to keep changing conns_per_peer. &#160;I had no problems taking down and bringing up LNet using these commands:&lt;/p&gt;

&lt;p&gt;Up:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;modprobe lnet
lctl network configure
modprobe lnet-selftest


&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;Down:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;rmmod lnet-selftest
lctl network down
rmmod ko2iblnd
rmmod lnet


&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;There must be something different about what you are doing that is triggering ref counters to not be released. Are you using DLC? What is your environment? &#160;Are both nodes running the latest code with this patch?&lt;/p&gt;</comment>
                            <comment id="195876" author="dmiter" created="Mon, 15 May 2017 19:04:52 +0000"  >&lt;p&gt;I&apos;m using a new Lustre client with this patch and old Lustre servers without this patch. So, I just mount the Lustre FS, use it, and then try to unload after umount. I don&apos;t use DLC. I have CentOS 7.3 on both sides.&lt;/p&gt;</comment>
                            <comment id="195877" author="doug" created="Mon, 15 May 2017 19:08:18 +0000"  >&lt;p&gt;That might be the reason. &#160;The client will create multiple connections, but the server will only have one they are all talking to. &#160;When one connection on the client is closed, the connection on the server will be closed. &#160;I suspect the remaining connections on the client can&apos;t be closed. &#160;I&apos;ll have to look at the code to see what I can do in this situation.&lt;/p&gt;

&lt;p&gt;I suspect if the server has the patch, you would not have a problem.&lt;/p&gt;</comment>
                            <comment id="195925" author="doug" created="Tue, 16 May 2017 01:05:36 +0000"  >&lt;p&gt;I just tried to reproduce with the passive node being unpatched. &#160;I was not able to reproduce your issue. &#160;The &quot;lctl network down&quot; takes a long time, but does succeed. &#160;There must be something else going on here. &#160;Do you know if your parameters like map_on_demand are different? &#160;Is a reconnection happening to renegotiate the parameters? &#160;This is something I have not tried.&lt;/p&gt;</comment>
                            <comment id="196082" author="doug" created="Tue, 16 May 2017 21:25:48 +0000"  >&lt;p&gt;Dmitry, when you get the file system mounted, can you issue the following sequence on both nodes to ensure we are creating 4 connections on each:&lt;/p&gt;

&lt;p&gt;&#160;&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;lctl
&amp;gt; network o2ib
&amp;gt; conn_list
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;You should see 4 connections to the peer if the initiator (usually the client) has the MultiQP patch, and 1 connection to the peer if it doesn&apos;t.&lt;/p&gt;</comment>
                            <comment id="196084" author="dmiter" created="Tue, 16 May 2017 21:45:28 +0000"  >&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;# lctl
lctl &amp;gt; network o2ib
lctl &amp;gt; conn_list
192.168.213.125@o2ib mtu -1
192.168.213.125@o2ib mtu -1
192.168.213.125@o2ib mtu -1
192.168.213.125@o2ib mtu -1
192.168.213.125@o2ib mtu -1
192.168.213.125@o2ib mtu -1
192.168.213.125@o2ib mtu -1
192.168.213.125@o2ib mtu -1
192.168.213.231@o2ib mtu -1
192.168.213.231@o2ib mtu -1
192.168.213.231@o2ib mtu -1
192.168.213.231@o2ib mtu -1
192.168.213.232@o2ib mtu -1
192.168.213.232@o2ib mtu -1
192.168.213.232@o2ib mtu -1
192.168.213.232@o2ib mtu -1
192.168.213.233@o2ib mtu -1
192.168.213.233@o2ib mtu -1
192.168.213.233@o2ib mtu -1
192.168.213.233@o2ib mtu -1
192.168.213.234@o2ib mtu -1
192.168.213.234@o2ib mtu -1
192.168.213.234@o2ib mtu -1
192.168.213.234@o2ib mtu -1
192.168.213.235@o2ib mtu -1
192.168.213.235@o2ib mtu -1
192.168.213.235@o2ib mtu -1
192.168.213.235@o2ib mtu -1
192.168.213.236@o2ib mtu -1
192.168.213.236@o2ib mtu -1
192.168.213.236@o2ib mtu -1
192.168.213.236@o2ib mtu -1

# lnetctl lnet unconfigure --all                                                
unconfigure:
    - lnet:
          errno: -16
          descr: &quot;LNet unconfigure error: Device or resource busy&quot;



&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt; Client: 2.9.57_48_g0386263&lt;br/&gt;
 Servers:&lt;br/&gt;
 lustre: 2.7.19.10&lt;br/&gt;
 kernel: patchless_client&lt;br/&gt;
 build: 2.7.19.10--PRISTINE-3.10.0-514.10.2.el7_lustre.x86_64&lt;/p&gt;</comment>
                            <comment id="196085" author="dmiter" created="Tue, 16 May 2017 21:46:18 +0000"  >&lt;p&gt;192.168.213.125@o2ib - client&lt;/p&gt;</comment>
                            <comment id="196086" author="dmiter" created="Tue, 16 May 2017 21:52:25 +0000"  >&lt;p&gt;From server:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;# lctl                                                                          
lctl &amp;gt; network o2ib
lctl &amp;gt; conn_list
192.168.213.125@o2ib mtu -1
192.168.213.125@o2ib mtu -1
192.168.213.125@o2ib mtu -1
192.168.213.125@o2ib mtu -1
...
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
                            <comment id="196191" author="doug" created="Wed, 17 May 2017 16:27:38 +0000"  >&lt;p&gt;Cliff is seeing this same problem on the soak cluster but there is no OPA, only MLX IB.  I&apos;m beginning to wonder if this is a problem with the Multi-Rail drop rather than this change.&lt;/p&gt;</comment>
                            <comment id="196193" author="pjones" created="Wed, 17 May 2017 16:29:53 +0000"  >&lt;p&gt;Would it be a good idea to track all this under a new ticket instead of tacking onto an already closed one?&lt;/p&gt;</comment>
                            <comment id="196241" author="doug" created="Wed, 17 May 2017 20:58:15 +0000"  >&lt;p&gt;Cliff created a ticket for this already: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-9524&quot; title=&quot;LNET Fails to unload. &quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-9524&quot;&gt;&lt;del&gt;LU-9524&lt;/del&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Summary: this appears to have been introduced in patch: &lt;a href=&quot;https://review.whamcloud.com/#/c/26959/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/#/c/26959/&lt;/a&gt; and not the change under this ticket.  ptlrpc is no longer being unloaded by lustre_rmmod, so lnet won&apos;t unload.&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10010">
                    <name>Duplicate</name>
                                            <outwardlinks description="duplicates">
                                                        </outwardlinks>
                                                                <inwardlinks description="is duplicated by">
                                                        </inwardlinks>
                                    </issuelinktype>
                            <issuelinktype id="10323">
                    <name>Gantt End to End</name>
                                                                <inwardlinks description="has to be finished together with">
                                        <issuelink>
            <issuekey id="45697">LUDOC-374</issuekey>
        </issuelink>
                            </inwardlinks>
                                    </issuelinktype>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                            <outwardlinks description="is related to ">
                                                        </outwardlinks>
                                                                <inwardlinks description="is related to">
                                                        </inwardlinks>
                                    </issuelinktype>
                    </issuelinks>
                <attachments>
                            <attachment id="26374" name="MultiQP-Tests.xlsx" size="88778" author="doug" created="Wed, 19 Apr 2017 23:41:57 +0000"/>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzyyj3:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                                                                                </customfields>
    </item>
</channel>
</rss>