<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 03:05:21 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-13922] Client blocked in lstat()</title>
                <link>https://jira.whamcloud.com/browse/LU-13922</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;Recently, with Lustre 2.12.5, we experienced a blocked Lustre client on the Robinhood server with the following task info:&#160;&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;Aug 24 23:05:15 fir-rbh01 kernel: INFO: task robinhood:10783 blocked for more than 120 seconds.
Aug 24 23:05:15 fir-rbh01 kernel: &quot;echo 0 &amp;gt; /proc/sys/kernel/hung_task_timeout_secs&quot; disables this message.
Aug 24 23:05:15 fir-rbh01 kernel: robinhood       D ffffa13de5779040     0 10783      1 0x00000080
Aug 24 23:05:15 fir-rbh01 kernel: Call Trace:
Aug 24 23:05:15 fir-rbh01 kernel:  [&amp;lt;ffffffffa84fa3d2&amp;gt;] ? security_inode_permission+0x22/0x30
Aug 24 23:05:15 fir-rbh01 kernel:  [&amp;lt;ffffffffa896be39&amp;gt;] schedule_preempt_disabled+0x29/0x70
Aug 24 23:05:15 fir-rbh01 kernel:  [&amp;lt;ffffffffa8969db7&amp;gt;] __mutex_lock_slowpath+0xc7/0x1d0
Aug 24 23:05:15 fir-rbh01 kernel:  [&amp;lt;ffffffffa896919f&amp;gt;] mutex_lock+0x1f/0x2f
Aug 24 23:05:15 fir-rbh01 kernel:  [&amp;lt;ffffffffa8961dd2&amp;gt;] lookup_slow+0x33/0xa7
Aug 24 23:05:15 fir-rbh01 kernel:  [&amp;lt;ffffffffa8453268&amp;gt;] path_lookupat+0x838/0x8b0
Aug 24 23:05:15 fir-rbh01 kernel:  [&amp;lt;ffffffffa84342b2&amp;gt;] ? __mem_cgroup_commit_charge+0xe2/0x2f0
Aug 24 23:05:15 fir-rbh01 kernel:  [&amp;lt;ffffffffa841d665&amp;gt;] ? kmem_cache_alloc+0x35/0x1f0
Aug 24 23:05:15 fir-rbh01 kernel:  [&amp;lt;ffffffffa845415f&amp;gt;] ? getname_flags+0x4f/0x1a0
Aug 24 23:05:15 fir-rbh01 kernel:  [&amp;lt;ffffffffa845330b&amp;gt;] filename_lookup+0x2b/0xc0
Aug 24 23:05:15 fir-rbh01 kernel:  [&amp;lt;ffffffffa84552f7&amp;gt;] user_path_at_empty+0x67/0xc0
Aug 24 23:05:15 fir-rbh01 kernel:  [&amp;lt;ffffffffa83ed3ad&amp;gt;] ? handle_mm_fault+0x39d/0x9b0
Aug 24 23:05:15 fir-rbh01 kernel:  [&amp;lt;ffffffffa8455361&amp;gt;] user_path_at+0x11/0x20
Aug 24 23:05:15 fir-rbh01 kernel:  [&amp;lt;ffffffffa8448223&amp;gt;] vfs_fstatat+0x63/0xc0
Aug 24 23:05:15 fir-rbh01 kernel:  [&amp;lt;ffffffffa8448641&amp;gt;] SYSC_newlstat+0x31/0x60
Aug 24 23:05:15 fir-rbh01 kernel:  [&amp;lt;ffffffffa8972925&amp;gt;] ? do_page_fault+0x35/0x90
Aug 24 23:05:15 fir-rbh01 kernel:  [&amp;lt;ffffffffa8448aae&amp;gt;] SyS_newlstat+0xe/0x10
Aug 24 23:05:15 fir-rbh01 kernel:  [&amp;lt;ffffffffa8977ddb&amp;gt;] system_call_fastpath+0x22/0x27
Aug 24 23:05:15 fir-rbh01 kernel: INFO: task robinhood:10784 blocked for more than 120 seconds.
Aug 24 23:05:15 fir-rbh01 kernel: &quot;echo 0 &amp;gt; /proc/sys/kernel/hung_task_timeout_secs&quot; disables this message.
Aug 24 23:05:15 fir-rbh01 kernel: robinhood       D ffffa13de5778000     0 10784      1 0x00000080
Aug 24 23:05:15 fir-rbh01 kernel: Call Trace:
Aug 24 23:05:15 fir-rbh01 kernel:  [&amp;lt;ffffffffa84fa3d2&amp;gt;] ? security_inode_permission+0x22/0x30
Aug 24 23:05:15 fir-rbh01 kernel:  [&amp;lt;ffffffffa896be39&amp;gt;] schedule_preempt_disabled+0x29/0x70
Aug 24 23:05:15 fir-rbh01 kernel:  [&amp;lt;ffffffffa8969db7&amp;gt;] __mutex_lock_slowpath+0xc7/0x1d0
Aug 24 23:05:15 fir-rbh01 kernel:  [&amp;lt;ffffffffa896919f&amp;gt;] mutex_lock+0x1f/0x2f
Aug 24 23:05:15 fir-rbh01 kernel:  [&amp;lt;ffffffffa8961dd2&amp;gt;] lookup_slow+0x33/0xa7
Aug 24 23:05:15 fir-rbh01 kernel:  [&amp;lt;ffffffffa8453268&amp;gt;] path_lookupat+0x838/0x8b0
Aug 24 23:05:15 fir-rbh01 kernel:  [&amp;lt;ffffffffa836a57d&amp;gt;] ? tracing_record_cmdline+0x1d/0x120
Aug 24 23:05:15 fir-rbh01 kernel:  [&amp;lt;ffffffffa841d665&amp;gt;] ? kmem_cache_alloc+0x35/0x1f0
Aug 24 23:05:15 fir-rbh01 kernel:  [&amp;lt;ffffffffa845415f&amp;gt;] ? getname_flags+0x4f/0x1a0
Aug 24 23:05:15 fir-rbh01 kernel:  [&amp;lt;ffffffffa845330b&amp;gt;] filename_lookup+0x2b/0xc0
Aug 24 23:05:15 fir-rbh01 kernel:  [&amp;lt;ffffffffa84552f7&amp;gt;] user_path_at_empty+0x67/0xc0
Aug 24 23:05:15 fir-rbh01 kernel:  [&amp;lt;ffffffffa8455361&amp;gt;] user_path_at+0x11/0x20
Aug 24 23:05:15 fir-rbh01 kernel:  [&amp;lt;ffffffffa8448223&amp;gt;] vfs_fstatat+0x63/0xc0
Aug 24 23:05:15 fir-rbh01 kernel:  [&amp;lt;ffffffffa8448641&amp;gt;] SYSC_newlstat+0x31/0x60
Aug 24 23:05:15 fir-rbh01 kernel:  [&amp;lt;ffffffffa833a176&amp;gt;] ? __audit_syscall_exit+0x1e6/0x280
Aug 24 23:05:15 fir-rbh01 kernel:  [&amp;lt;ffffffffa8448aae&amp;gt;] SyS_newlstat+0xe/0x10
Aug 24 23:05:15 fir-rbh01 kernel:  [&amp;lt;ffffffffa8977ddb&amp;gt;] system_call_fastpath+0x22/0x27
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;The rest of the clients were OK, but I guess an inode was blocked. The only solution I found was to restart the MDS (here, fir-MDT0003).&lt;/p&gt;

&lt;p&gt;I took a crash dump of the MDS when this happened, and it is available on the FTP server as &lt;tt&gt;fir-md1-s4_vmcore_2020-08-25-08-27-56&lt;/tt&gt;.&lt;/p&gt;

&lt;p&gt;Also, I&apos;m providing the following files from the crash dump:&lt;/p&gt;
&lt;ul&gt;
	&lt;li&gt;vmcore dmesg as &lt;span class=&quot;nobr&quot;&gt;&lt;a href=&quot;https://jira.whamcloud.com/secure/attachment/35693/35693_fir-md1-s4_vmcore-dmesg_2020-08-25-08-27-56.txt&quot; title=&quot;fir-md1-s4_vmcore-dmesg_2020-08-25-08-27-56.txt attached to LU-13922&quot;&gt;fir-md1-s4_vmcore-dmesg_2020-08-25-08-27-56.txt&lt;sup&gt;&lt;img class=&quot;rendericon&quot; src=&quot;https://jira.whamcloud.com/images/icons/link_attachment_7.gif&quot; height=&quot;7&quot; width=&quot;7&quot; align=&quot;absmiddle&quot; alt=&quot;&quot; border=&quot;0&quot;/&gt;&lt;/sup&gt;&lt;/a&gt;&lt;/span&gt;&lt;/li&gt;
	&lt;li&gt;vmcore&apos;s foreach bt as &lt;span class=&quot;nobr&quot;&gt;&lt;a href=&quot;https://jira.whamcloud.com/secure/attachment/35694/35694_fir-md1-s4_foreach-bt_2020-08-25-08-27-56.txt&quot; title=&quot;fir-md1-s4_foreach-bt_2020-08-25-08-27-56.txt attached to LU-13922&quot;&gt;fir-md1-s4_foreach-bt_2020-08-25-08-27-56.txt&lt;sup&gt;&lt;img class=&quot;rendericon&quot; src=&quot;https://jira.whamcloud.com/images/icons/link_attachment_7.gif&quot; height=&quot;7&quot; width=&quot;7&quot; align=&quot;absmiddle&quot; alt=&quot;&quot; border=&quot;0&quot;/&gt;&lt;/sup&gt;&lt;/a&gt;&lt;/span&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;p&gt;Lately, our MDS has been highly loaded with a lot of &lt;tt&gt;mdt_rdpg&lt;/tt&gt; threads, but until now it hadn&apos;t blocked on any inode. This is different and, in my opinion, worth investigating.&lt;/p&gt;

&lt;p&gt;I&apos;m wondering if this could be due to DNE as I&apos;m seeing a lot of those:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;PID: 21603  TASK: ffff9cf0f86ec100  CPU: 41  COMMAND: &quot;mdt_rdpg01_017&quot;
 #0 [ffff9d00ff888e48] crash_nmi_callback at ffffffff87456027
 #1 [ffff9d00ff888e58] nmi_handle at ffffffff87b6f91c
 #2 [ffff9d00ff888eb0] do_nmi at ffffffff87b6fb3d
 #3 [ffff9d00ff888ef0] end_repeat_nmi at ffffffff87b6ed89
    [exception RIP: fld_cache_lookup+157]
    RIP: ffffffffc089d9dd  RSP: ffff9ceff320ba50  RFLAGS: 00000202
    RAX: 0000000000000003  RBX: ffff9d10f868cf00  RCX: ffff9d10f27ad6c0
    RDX: ffff9d103898cbb8  RSI: ffff9d10f27ad6c0  RDI: ffff9d10f868cf28
    RBP: ffff9ceff320ba68   R8: 00000002800432da   R9: 0000000000000007
    R10: ffff9ce79d0a6700  R11: ffff9ce79d0a6700  R12: 00000002800432e6
    R13: ffff9d103898cbb8  R14: 00000002800432e6  R15: ffff9d00456b7000
    ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
--- &amp;lt;NMI exception stack&amp;gt; ---
 #4 [ffff9ceff320ba50] fld_cache_lookup at ffffffffc089d9dd [fld]
 #5 [ffff9ceff320ba70] fld_local_lookup at ffffffffc089ef02 [fld]
 #6 [ffff9ceff320bab8] osd_fld_lookup at ffffffffc13ae0e8 [osd_ldiskfs]
 #7 [ffff9ceff320bac8] osd_remote_fid at ffffffffc13ae233 [osd_ldiskfs]
 #8 [ffff9ceff320bb08] osd_it_ea_rec at ffffffffc13b73eb [osd_ldiskfs]
 #9 [ffff9ceff320bb70] lod_it_rec at ffffffffc15f6fb7 [lod]
#10 [ffff9ceff320bb80] mdd_dir_page_build at ffffffffc167edd7 [mdd]
#11 [ffff9ceff320bbe8] dt_index_walk at ffffffffc0d082a0 [obdclass]
#12 [ffff9ceff320bc58] mdd_readpage at ffffffffc16805bf [mdd]
#13 [ffff9ceff320bc90] mdt_readpage at ffffffffc14dc82a [mdt]
#14 [ffff9ceff320bcd0] tgt_request_handle at ffffffffc0ff866a [ptlrpc]
#15 [ffff9ceff320bd58] ptlrpc_server_handle_request at ffffffffc0f9b44b [ptlrpc]
#16 [ffff9ceff320bdf8] ptlrpc_main at ffffffffc0f9edb4 [ptlrpc]
#17 [ffff9ceff320bec8] kthread at ffffffff874c2e81
#18 [ffff9ceff320bf50] ret_from_fork_nospec_begin at ffffffff87b77c24
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;Thanks!&lt;br/&gt;
 Stephane&lt;br/&gt;
 &#160;&lt;/p&gt;</description>
                <environment>CentOS 7.6 (3.10.0-957.27.2.el7_lustre.pl2.x86_64, Lustre 2.12.5 + 2 patches from &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-13599&quot; title=&quot;LustreError: 30166:0:(service.c:189:ptlrpc_save_lock()) ASSERTION( rs-&amp;gt;rs_nlocks &amp;lt; 8 ) failed&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-13599&quot;&gt;&lt;strike&gt;LU-13599&lt;/strike&gt;&lt;/a&gt;)</environment>
        <key id="60483">LU-13922</key>
            <summary>Client blocked in lstat()</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="3" iconUrl="https://jira.whamcloud.com/images/icons/priorities/major.svg">Major</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="laisiyao">Lai Siyao</assignee>
                                    <reporter username="sthiell">Stephane Thiell</reporter>
                        <labels>
                    </labels>
                <created>Tue, 25 Aug 2020 18:02:06 +0000</created>
                <updated>Thu, 22 Oct 2020 13:47:59 +0000</updated>
                            <resolved>Sun, 20 Sep 2020 13:07:14 +0000</resolved>
                                    <version>Lustre 2.12.5</version>
                                    <fixVersion>Lustre 2.14.0</fixVersion>
                    <fixVersion>Lustre 2.12.6</fixVersion>
                                        <due></due>
                            <votes>0</votes>
                                    <watches>4</watches>
                                                                            <comments>
                            <comment id="278133" author="pjones" created="Wed, 26 Aug 2020 17:27:36 +0000"  >&lt;p&gt;Lai&lt;/p&gt;

&lt;p&gt;Could you please advise?&lt;/p&gt;

&lt;p&gt;Thanks&lt;/p&gt;

&lt;p&gt;Peter&lt;/p&gt;</comment>
                            <comment id="278503" author="laisiyao" created="Tue, 1 Sep 2020 13:03:02 +0000"  >&lt;p&gt;fld_cache_lookup() doesn&apos;t look to be the cause of this deadlock, because all the places it&apos;s called from take a read lock. But it can be optimized: it doesn&apos;t look necessary to check whether a FID is remote in readdir. I&apos;ll upload a patch for this.&lt;/p&gt;</comment>
                            <comment id="278505" author="gerrit" created="Tue, 1 Sep 2020 13:08:18 +0000"  >&lt;p&gt;Lai Siyao (lai.siyao@whamcloud.com) uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/39782&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/39782&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-13922&quot; title=&quot;Client blocked in lstat()&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-13922&quot;&gt;&lt;del&gt;LU-13922&lt;/del&gt;&lt;/a&gt; osd-ldiskfs: no need to add OI cache in readdir&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 03120b1a1266fcbd77e00709623d11803a91f655&lt;/p&gt;</comment>
                            <comment id="278535" author="sthiell" created="Tue, 1 Sep 2020 16:43:49 +0000"  >&lt;p&gt;Lai, that sounds great! Thanks! After migrating a lot of files to this MDT0003, we&apos;ve seen the overall MDS load increase, with mdt_rdpg03 threads taking a lot of CPU time. We will be happy to test your patch when it&apos;s ready.&lt;/p&gt;</comment>
                            <comment id="278927" author="sthiell" created="Sat, 5 Sep 2020 00:12:07 +0000"  >&lt;p&gt;Hi Lai,&lt;/p&gt;

&lt;p&gt;Thanks so much! We&apos;re running with your patch on this MDS. It seems to help. MDS load went down from 200+ to 10-20 with the same workload. BTW, we have identified a user who is likely generating this load: she has 2500 running jobs and uses Python&apos;s glob on large directories (110k+ entries). Perfect for testing readdir scalability! &lt;img class=&quot;emoticon&quot; src=&quot;https://jira.whamcloud.com/images/icons/emoticons/smile.png&quot; height=&quot;16&quot; width=&quot;16&quot; align=&quot;absmiddle&quot; alt=&quot;&quot; border=&quot;0&quot;/&gt;&lt;/p&gt;</comment>
                            <comment id="278932" author="laisiyao" created="Sat, 5 Sep 2020 03:38:51 +0000"  >&lt;p&gt;Great, it will be landed once review passes.&lt;/p&gt;</comment>
                            <comment id="278982" author="sthiell" created="Tue, 8 Sep 2020 04:57:13 +0000"  >&lt;p&gt;We have been using this patch on top of 2.12.5 for 3 full days now on an MDS in production, and the server load is definitely reduced. The MDS is also more responsive from clients, even under heavy readdir() load from specific jobs. No side effects noticed so far. Thanks Lai!&lt;/p&gt;</comment>
                            <comment id="280086" author="gerrit" created="Sun, 20 Sep 2020 04:27:13 +0000"  >&lt;p&gt;Oleg Drokin (green@whamcloud.com) merged in patch &lt;a href=&quot;https://review.whamcloud.com/39782/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/39782/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-13922&quot; title=&quot;Client blocked in lstat()&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-13922&quot;&gt;&lt;del&gt;LU-13922&lt;/del&gt;&lt;/a&gt; osd-ldiskfs: no need to add OI cache in readdir&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: bc5934632df10aaa02b32b8254a473c14c6f8104&lt;/p&gt;</comment>
                            <comment id="280087" author="pjones" created="Sun, 20 Sep 2020 13:07:14 +0000"  >&lt;p&gt;Landed for 2.14&lt;/p&gt;</comment>
                            <comment id="281404" author="gerrit" created="Sat, 3 Oct 2020 18:35:38 +0000"  >&lt;p&gt;Minh Diep (mdiep@whamcloud.com) uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/40135&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/40135&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-13922&quot; title=&quot;Client blocked in lstat()&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-13922&quot;&gt;&lt;del&gt;LU-13922&lt;/del&gt;&lt;/a&gt; osd-ldiskfs: no need to add OI cache in readdir&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: b2_12&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 71927c664d6cfe8b1d931394739a23e7ceec79bd&lt;/p&gt;</comment>
                            <comment id="282951" author="gerrit" created="Thu, 22 Oct 2020 06:18:45 +0000"  >&lt;p&gt;Oleg Drokin (green@whamcloud.com) merged in patch &lt;a href=&quot;https://review.whamcloud.com/40135/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/40135/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-13922&quot; title=&quot;Client blocked in lstat()&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-13922&quot;&gt;&lt;del&gt;LU-13922&lt;/del&gt;&lt;/a&gt; osd-ldiskfs: no need to add OI cache in readdir&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: b2_12&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: bdef16d60713743e832fbb9d14ecb5cd116398c7&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                            <outwardlinks description="is related to ">
                                                        </outwardlinks>
                                                        </issuelinktype>
                    </issuelinks>
                <attachments>
                            <attachment id="35694" name="fir-md1-s4_foreach-bt_2020-08-25-08-27-56.txt" size="910923" author="sthiell" created="Tue, 25 Aug 2020 17:55:10 +0000"/>
                            <attachment id="35693" name="fir-md1-s4_vmcore-dmesg_2020-08-25-08-27-56.txt" size="403467" author="sthiell" created="Tue, 25 Aug 2020 17:55:00 +0000"/>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|i018bz:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>