<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 02:56:15 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-12857] recovery-mds-scale test_failover_ost fails with &#8220;import is not in FULL state&#8221;</title>
                <link>https://jira.whamcloud.com/browse/LU-12857</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;recovery-mds-scale test_failover_ost fails with &#8220;import is not in FULL state&#8221;. In the suite_log, we see:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;trevis-50vm3:  rpc : @@@@@@ FAIL: can&apos;t put import for osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid into FULL state after 1475 sec, have DISCONN 
trevis-50vm3:   Trace dump:
trevis-50vm3:   = /usr/lib64/lustre/tests/test-framework.sh:5864:error()
&#8230;
recovery-mds-scale test_failover_ost: @@@@@@ FAIL: import is not in FULL state 
  Trace dump:
  = /usr/lib64/lustre/tests/test-framework.sh:5864:error()
  = /usr/lib64/lustre/tests/test-framework.sh:7245:wait_clients_import_state()
  = /usr/lib64/lustre/tests/recovery-mds-scale.sh:159:failover_target()
  = /usr/lib64/lustre/tests/recovery-mds-scale.sh:242:test_failover_ost()
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;then the test suite hangs and times out.&lt;/p&gt;
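
&lt;p&gt;For reference, the import state that wait_clients_import_state() polls can be checked by hand on a client (a minimal sketch; the exact OSC device names vary per mount):&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;# A healthy import reports FULL; the failing runs above report DISCONN
lctl get_param osc.lustre-OST*.ost_server_uuid
# Full detail for one import, including its connection state
lctl get_param osc.lustre-OST0000-osc-*.import
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;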

&lt;p&gt;In the case of &lt;a href=&quot;https://testing.whamcloud.com/test_sets/e816cdac-eb87-11e9-be86-52540065bddc&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.whamcloud.com/test_sets/e816cdac-eb87-11e9-be86-52540065bddc&lt;/a&gt;, we fail over ost4 from vm5 to vm6 and the OST failover looks successful. Checking the clients after failover, we see:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[86033.161862] Lustre: DEBUG MARKER: /usr/sbin/lctl mark ==== Checking the clients loads AFTER failover -- failure NOT OK
[86033.416887] Lustre: DEBUG MARKER: ==== Checking the clients loads AFTER failover -- failure NOT OK
[86035.230364] Lustre: DEBUG MARKER: /usr/sbin/lctl mark ost4 has failed over 1 times, and counting...
[86035.476561] Lustre: DEBUG MARKER: ost4 has failed over 1 times, and counting...
[86079.403400] LNetError: 24068:0:(peer.c:3451:lnet_peer_ni_add_to_recoveryq_locked()) lpni 10.9.50.8@tcp added to recovery queue. Health = 0
[86079.406508] LNetError: 24068:0:(peer.c:3451:lnet_peer_ni_add_to_recoveryq_locked()) Skipped 8 previous similar messages
[86344.411364] LNetError: 24068:0:(peer.c:3451:lnet_peer_ni_add_to_recoveryq_locked()) lpni 10.9.50.8@tcp added to recovery queue. Health = 0
[86344.413633] LNetError: 24068:0:(peer.c:3451:lnet_peer_ni_add_to_recoveryq_locked()) Skipped 14 previous similar messages
[86827.848075] Lustre: DEBUG MARKER: /usr/sbin/lctl mark ==== Checking the clients loads BEFORE failover -- failure NOT OK              ELAPSED=408 DURATION=86400 PERIOD=1200
[86828.072043] Lustre: DEBUG MARKER: ==== Checking the clients loads BEFORE failover -- failure NOT OK ELAPSED=408 DURATION=86400 PERIOD=1200
[86829.772372] Lustre: DEBUG MARKER: /usr/sbin/lctl mark Wait ost6 recovery complete before doing next failover...
[86830.015147] Lustre: DEBUG MARKER: Wait ost6 recovery complete before doing next failover...
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;We check the recovery_status of each OST and then wait for the clients to reach FULL state, but the LNet errors continue until the check fails:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[86841.794013] Lustre: DEBUG MARKER: trevis-50vm6.trevis.whamcloud.com: executing _wait_recovery_complete *.lustre-OST0006.recovery_status 1475
[86842.076196] Lustre: DEBUG MARKER: /usr/sbin/lctl mark Checking clients are in FULL state before doing next failover...
[86842.297866] Lustre: DEBUG MARKER: Checking clients are in FULL state before doing next failover...
[86859.446351] LNetError: 24068:0:(peer.c:3451:lnet_peer_ni_add_to_recoveryq_locked()) lpni 10.9.50.8@tcp added to recovery queue. Health = 0
[86859.448604] LNetError: 24068:0:(peer.c:3451:lnet_peer_ni_add_to_recoveryq_locked()) Skipped 29 previous similar messages
[87464.461375] LNetError: 24068:0:(peer.c:3451:lnet_peer_ni_add_to_recoveryq_locked()) lpni 10.9.50.8@tcp added to recovery queue. Health = 0
[87464.463680] LNetError: 24068:0:(peer.c:3451:lnet_peer_ni_add_to_recoveryq_locked()) Skipped 33 previous similar messages
[88064.484412] LNetError: 24068:0:(peer.c:3451:lnet_peer_ni_add_to_recoveryq_locked()) lpni 10.9.50.8@tcp added to recovery queue. Health = 0
[88064.486869] LNetError: 24068:0:(peer.c:3451:lnet_peer_ni_add_to_recoveryq_locked()) Skipped 33 previous similar messages
[88350.603831] Lustre: DEBUG MARKER: /usr/sbin/lctl mark  recovery-mds-scale test_failover_ost: @@@@@@ FAIL: import is not in FULL state 
[88350.872125] Lustre: DEBUG MARKER: recovery-mds-scale test_failover_ost: @@@@@@ FAIL: import is not in FULL state
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
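
&lt;p&gt;(The recovery check logged above can be reproduced manually on the OSS; a sketch using the standard obdfilter parameter:)&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;# Reports COMPLETE or RECOVERING, plus connected/evicted client counts
lctl get_param obdfilter.lustre-OST0006.recovery_status
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;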

&lt;p&gt;We then exit the test, but the LNet errors continue:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[88354.805689] Server failover period: 1200 seconds
[88354.805689] Exited after:           408 seconds
[88354.805689] Number of failovers before exit:
[88354.805689] mds1: 0 times
[88354.805689] ost1: 0 times
[88354.805689] ost2: 0 times
[88354.805689] ost3: 0 times
[88354.805689] ost4: 1 times
[88354.805689] ost5: 0 times
[88354.805689] ost6: 0 times
[88354.805689] o
[88355.027288] Lustre: DEBUG MARKER: Duration: 86400
[88355.647928] Lustre: DEBUG MARKER: /usr/sbin/lctl dk &amp;gt; /autotest/autotest2/2019-10-08/lustre-b2_12-el7_7-x86_64--failover--1_25__52___8d15cfb4-3473-4202-941b-c914ac734bd4/recovery-mds-scale.test_failover_ost.debug_log.$(hostname -s).1570642546.log;
[88355.647928]          dmesg &amp;gt; /autotest/autotest2/2019
[88360.734604] Lustre: lustre-OST0001: Connection restored to d9d4bfb9-3fbd-d3f0-7667-bf145a641dfe (at 10.9.50.1@tcp)
[88360.736451] Lustre: Skipped 7 previous similar messages
[88669.517330] LNetError: 24068:0:(peer.c:3451:lnet_peer_ni_add_to_recoveryq_locked()) lpni 10.9.50.8@tcp added to recovery queue. Health = 0
[88669.519558] LNetError: 24068:0:(peer.c:3451:lnet_peer_ni_add_to_recoveryq_locked()) Skipped 33 previous similar messages
[89279.521338] LNetError: 24068:0:(peer.c:3451:lnet_peer_ni_add_to_recoveryq_locked()) lpni 10.9.50.8@tcp added to recovery queue. Health = 0
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
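
&lt;p&gt;The repeating &quot;added to recovery queue. Health = 0&quot; messages suggest the health value of that peer NI never recovers. A hedged way to inspect it (lnetctl options as in 2.12-era LNet):&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;# Peer details, including per-NI health statistics
lnetctl peer show --nid 10.9.50.8@tcp -v
# Global LNet counters (drops, retries, health events)
lnetctl stats show
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;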

&lt;p&gt;Call traces are then dumped; we see ll_ost00_00*, ll_ost_io00_00*, and ll_ost_create00* traces for some OSTs. For example:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[172080.643164] ll_ost00_000    S ffff92afba003150     0 24093      2 0x00000080
[172080.644487] Call Trace:
[172080.644949]  [&amp;lt;ffffffff9e77f229&amp;gt;] schedule+0x29/0x70
[172080.645903]  [&amp;lt;ffffffffc0bdb355&amp;gt;] ptlrpc_wait_event+0x345/0x360 [ptlrpc]
[172080.647064]  [&amp;lt;ffffffff9e0da0b0&amp;gt;] ? wake_up_state+0x20/0x20
[172080.648125]  [&amp;lt;ffffffffc0be1ad2&amp;gt;] ptlrpc_main+0xa02/0x1460 [ptlrpc]
[172080.649264]  [&amp;lt;ffffffff9e0d3efe&amp;gt;] ? finish_task_switch+0x4e/0x1c0
[172080.650356]  [&amp;lt;ffffffff9e77ec88&amp;gt;] ? __schedule+0x448/0x9c0
[172080.651428]  [&amp;lt;ffffffffc0be10d0&amp;gt;] ? ptlrpc_register_service+0xf80/0xf80 [ptlrpc]
[172080.652739]  [&amp;lt;ffffffff9e0c50d1&amp;gt;] kthread+0xd1/0xe0
[172080.653638]  [&amp;lt;ffffffff9e0c5000&amp;gt;] ? insert_kthread_work+0x40/0x40
[172080.654737]  [&amp;lt;ffffffff9e78cd37&amp;gt;] ret_from_fork_nospec_begin+0x21/0x21
[172080.655906]  [&amp;lt;ffffffff9e0c5000&amp;gt;] ? insert_kthread_work+0x40/0x40
[172080.682767] ll_ost_create00 S ffff92afbc14a0e0     0 24096      2 0x00000080
[172080.684064] Call Trace:
[172080.684525]  [&amp;lt;ffffffff9e77f229&amp;gt;] schedule+0x29/0x70
[172080.685421]  [&amp;lt;ffffffffc0bdb355&amp;gt;] ptlrpc_wait_event+0x345/0x360 [ptlrpc]
[172080.686731]  [&amp;lt;ffffffff9e0da0b0&amp;gt;] ? wake_up_state+0x20/0x20
[172080.687779]  [&amp;lt;ffffffffc0be1ad2&amp;gt;] ptlrpc_main+0xa02/0x1460 [ptlrpc]
[172080.688908]  [&amp;lt;ffffffff9e0d3efe&amp;gt;] ? finish_task_switch+0x4e/0x1c0
[172080.689999]  [&amp;lt;ffffffff9e77ec88&amp;gt;] ? __schedule+0x448/0x9c0
[172080.691013]  [&amp;lt;ffffffffc0be10d0&amp;gt;] ? ptlrpc_register_service+0xf80/0xf80 [ptlrpc]
[172080.692314]  [&amp;lt;ffffffff9e0c50d1&amp;gt;] kthread+0xd1/0xe0
[172080.693223]  [&amp;lt;ffffffff9e0c5000&amp;gt;] ? insert_kthread_work+0x40/0x40
[172080.694371]  [&amp;lt;ffffffff9e78cd37&amp;gt;] ret_from_fork_nospec_begin+0x21/0x21
[172080.695537]  [&amp;lt;ffffffff9e0c5000&amp;gt;] ? insert_kthread_work+0x40/0x40
[172080.710767] ll_ost_io00_000 S ffff92afb8075230     0 24098      2 0x00000080
[172080.712129] Call Trace:
[172080.712586]  [&amp;lt;ffffffff9e77f229&amp;gt;] schedule+0x29/0x70
[172080.713487]  [&amp;lt;ffffffffc0bdb355&amp;gt;] ptlrpc_wait_event+0x345/0x360 [ptlrpc]
[172080.714784]  [&amp;lt;ffffffff9e0da0b0&amp;gt;] ? wake_up_state+0x20/0x20
[172080.715887]  [&amp;lt;ffffffffc0be1ad2&amp;gt;] ptlrpc_main+0xa02/0x1460 [ptlrpc]
[172080.717015]  [&amp;lt;ffffffff9e0d3efe&amp;gt;] ? finish_task_switch+0x4e/0x1c0
[172080.718104]  [&amp;lt;ffffffff9e77ec88&amp;gt;] ? __schedule+0x448/0x9c0
[172080.719126]  [&amp;lt;ffffffffc0be10d0&amp;gt;] ? ptlrpc_register_service+0xf80/0xf80 [ptlrpc]
[172080.720443]  [&amp;lt;ffffffff9e0c50d1&amp;gt;] kthread+0xd1/0xe0
[172080.721352]  [&amp;lt;ffffffff9e0c5000&amp;gt;] ? insert_kthread_work+0x40/0x40
[172080.722448]  [&amp;lt;ffffffff9e78cd37&amp;gt;] ret_from_fork_nospec_begin+0x21/0x21
[172080.723612]  [&amp;lt;ffffffff9e0c5000&amp;gt;] ? insert_kthread_work+0x40/0x40
[172080.752571] ll_ost_seq00_00 S ffff92afb8071070     0 24101      2 0x00000080
[172080.753858] Call Trace:
[172080.754332]  [&amp;lt;ffffffff9e77f229&amp;gt;] schedule+0x29/0x70
[172080.755326]  [&amp;lt;ffffffffc0bdb355&amp;gt;] ptlrpc_wait_event+0x345/0x360 [ptlrpc]
[172080.756527]  [&amp;lt;ffffffff9e0da0b0&amp;gt;] ? wake_up_state+0x20/0x20
[172080.757566]  [&amp;lt;ffffffffc0be1ad2&amp;gt;] ptlrpc_main+0xa02/0x1460 [ptlrpc]
[172080.758745]  [&amp;lt;ffffffff9e0d3efe&amp;gt;] ? finish_task_switch+0x4e/0x1c0
[172080.759848]  [&amp;lt;ffffffff9e77ec88&amp;gt;] ? __schedule+0x448/0x9c0
[172080.760868]  [&amp;lt;ffffffffc0be10d0&amp;gt;] ? ptlrpc_register_service+0xf80/0xf80 [ptlrpc]
[172080.762189]  [&amp;lt;ffffffff9e0c50d1&amp;gt;] kthread+0xd1/0xe0
[172080.763085]  [&amp;lt;ffffffff9e0c5000&amp;gt;] ? insert_kthread_work+0x40/0x40
[172080.764193]  [&amp;lt;ffffffff9e78cd37&amp;gt;] ret_from_fork_nospec_begin+0x21/0x21
[172080.765353]  [&amp;lt;ffffffff9e0c5000&amp;gt;] ? insert_kthread_work+0x40/0x40
&#8230;
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</description>
                <environment></environment>
        <key id="57148">LU-12857</key>
            <summary>recovery-mds-scale test_failover_ost fails with &#8220;import is not in FULL state&#8221;</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="4" iconUrl="https://jira.whamcloud.com/images/icons/priorities/minor.svg">Minor</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="ashehata">Amir Shehata</assignee>
                                    <reporter username="jamesanunez">James Nunez</reporter>
                        <labels>
                    </labels>
                <created>Mon, 14 Oct 2019 16:26:24 +0000</created>
                <updated>Tue, 25 Apr 2023 07:20:05 +0000</updated>
                            <resolved>Tue, 30 Nov 2021 14:14:34 +0000</resolved>
                                    <version>Lustre 2.13.0</version>
                    <version>Lustre 2.12.3</version>
                                    <fixVersion>Lustre 2.15.0</fixVersion>
                                        <due></due>
                            <votes>0</votes>
                                    <watches>4</watches>
                <comments>
                            <comment id="256399" author="adilger" created="Tue, 15 Oct 2019 07:25:02 +0000"  >&lt;p&gt;The OST thread stack traces with &lt;tt&gt;ptlrpc_wait_event()&lt;/tt&gt; are normal - these are idle threads waiting for requests to process, so it looks like the OST is mounted, but not getting requests.&lt;/p&gt;

&lt;p&gt;AFAIK, the messages &quot;&lt;tt&gt;lnet_peer_ni_add_to_recoveryq_locked() ... Health = 0&lt;/tt&gt;&quot; indicate that LNet identified that there were errors sending to that node, but it seems that it is never recovering from that problem.  This might be a case where LNet Health is making a bad decision when there is only a single interface to a peer, and it can never recover from this case because it isn&apos;t sending any messages over that interface that would increase the health value?&lt;/p&gt;

&lt;p&gt;There were a few LNet-related patches landed recently to b2_12 that might be the source of this problem, in particular:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;https://review.whamcloud.com/34252 LU-11816 lnet: setup health timeout defaults
https://review.whamcloud.com/34967 LU-12344 lnet: handle remote health error
https://review.whamcloud.com/33304 LU-11478 lnet: misleading discovery seqno
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
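
&lt;p&gt;(For reference, health can also be disabled administratively without a revert; a sketch, assuming the lnet_health_sensitivity tunable that ships with the feature, where 0 turns health evaluation off:)&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;# At runtime:
lnetctl set health_sensitivity 0
# Or persistently, via modprobe configuration:
# options lnet lnet_health_sensitivity=0
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;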

&lt;p&gt;The first patch (34252) is enabling LNet Health on b2_12 for the first time, which may not be prudent given that this hasn&apos;t been widely deployed in production yet (it originally landed as v2_12_53-66-g8632e94aeb so 2.13 would be the first release to use it).  Since we are trying to get 2.12.3 out the door ASAP, if we can&apos;t understand and resolve this quickly, it probably makes sense to revert 34252, or make a small patch that defaults to having health disabled until we can get more operational experience in this mode.  &lt;/p&gt;</comment>
                            <comment id="256402" author="pjones" created="Tue, 15 Oct 2019 08:53:40 +0000"  >&lt;p&gt;I would strongly suggest that we re-disable health by default to avoid this issue for users who are not intending to use this feature.&lt;/p&gt;</comment>
                            <comment id="316160" author="adilger" created="Thu, 21 Oct 2021 01:47:34 +0000"  >&lt;p&gt;In an &lt;tt&gt;recovery-mds-scale&lt;/tt&gt; test failure I looked at, I see that the client import state is failing because it is &lt;tt&gt;IDLE&lt;/tt&gt; instead of &lt;tt&gt;FULL&lt;/tt&gt;.  That is normal if &lt;tt&gt;osc.&amp;#42;.idle_disconnect&lt;/tt&gt; is enabled, which is the default on master:&lt;br/&gt;
&lt;a href=&quot;https://testing.whamcloud.com/test_sets/df019c9d-d46a-48ea-90fd-f6fa0665ebe1&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.whamcloud.com/test_sets/df019c9d-d46a-48ea-90fd-f6fa0665ebe1&lt;/a&gt;&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;trevis-55vm7:  rpc : @@@@@@ FAIL: can&apos;t put import for osc.lustre-OST0003-osc-[-0-9a-f]*.ost_server_uuid into FULL state after 1475 sec, have IDLE 
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
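
&lt;p&gt;(A quick way to confirm or avoid this on a test client; a sketch, assuming the osc.*.idle_timeout tunable that controls idle disconnect, where 0 disables it:)&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;lctl get_param osc.*.idle_timeout     # nonzero: idle disconnect is enabled
lctl set_param osc.*.idle_timeout=0   # keep imports FULL instead of going IDLE
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>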
                            <comment id="316161" author="gerrit" created="Thu, 21 Oct 2021 01:59:54 +0000"  >&lt;p&gt;&quot;Andreas Dilger &amp;lt;adilger@whamcloud.com&amp;gt;&quot; uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/45318&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/45318&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-12857&quot; title=&quot;recovery-mds-scale test_failover_ost fails with &#8220;import is not in FULL state&#8221;&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-12857&quot;&gt;&lt;del&gt;LU-12857&lt;/del&gt;&lt;/a&gt; tests: allow clients to be IDLE after recovery&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 22573acdaf7fa4dfbdd86067971a170a45ae6a6f&lt;/p&gt;</comment>
                            <comment id="319485" author="gerrit" created="Tue, 30 Nov 2021 03:52:12 +0000"  >&lt;p&gt;&quot;Oleg Drokin &amp;lt;green@whamcloud.com&amp;gt;&quot; merged in patch &lt;a href=&quot;https://review.whamcloud.com/45318/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/45318/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-12857&quot; title=&quot;recovery-mds-scale test_failover_ost fails with &#8220;import is not in FULL state&#8221;&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-12857&quot;&gt;&lt;del&gt;LU-12857&lt;/del&gt;&lt;/a&gt; tests: allow clients to be IDLE after recovery&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: af666bef058c5b7997527fc851a84a89375912fb&lt;/p&gt;</comment>
                            <comment id="319559" author="pjones" created="Tue, 30 Nov 2021 14:14:34 +0000"  >&lt;p&gt;Landed for 2.15&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                                                <inwardlinks description="is related to">
                                        <issuelink>
            <issuekey id="67508">LU-15342</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="60085">LU-13813</issuekey>
        </issuelink>
                            </inwardlinks>
                                    </issuelinktype>
                    </issuelinks>
                <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|i00nzz:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                </customfields>
    </item>
</channel>
</rss>