<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 03:26:21 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
<language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-16361] Lustre client crashed when I used WBC</title>
                <link>https://jira.whamcloud.com/browse/LU-16361</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;h1&gt;&lt;a name=&quot;Overview&quot;&gt;&lt;/a&gt;Overview&lt;/h1&gt;

&lt;p&gt;I&apos;m interested in the WBC (writeback cache) feature &amp;#91;1&amp;#93;, so I&apos;m testing the in-development WBC patch.&lt;/p&gt;

&lt;p&gt;WBC mostly works well, but my Lustre client machine crashed during some tests.&lt;/p&gt;

&lt;p&gt;&amp;#91;1&amp;#93; &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-10938&quot; class=&quot;external-link&quot; rel=&quot;nofollow&quot;&gt;https://jira.whamcloud.com/browse/LU-10938&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;&lt;a name=&quot;Detailofcrash&quot;&gt;&lt;/a&gt;Detail of crash&lt;/h2&gt;

&lt;p&gt;During some tests, error messages appeared and I could no longer access the Lustre-mounted directory, nor unmount it. Sometimes my machine crashed after that.&lt;/p&gt;

&lt;p&gt;My machine crashed at one of the following points:&lt;/p&gt;
&lt;ul&gt;
	&lt;li&gt;Immediately after an error occurred.&lt;/li&gt;
	&lt;li&gt;A few hours after an error occurred.&lt;/li&gt;
	&lt;li&gt;During shutdown: the machine&apos;s shutdown process hung after the error occurred.&lt;/li&gt;
&lt;/ul&gt;


&lt;p&gt;There were several patterns of error messages.&lt;/p&gt;
&lt;h3&gt;&lt;a name=&quot;Pattern1%3ARequestsenthastimedoutforsentdelay&quot;&gt;&lt;/a&gt;Pattern 1: Request sent has timed out for sent delay&lt;/h3&gt;


&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;Nov  9 18:49:20 anila kernel: [ 3667.591798] Lustre: 6549:0:(client.c:2284:ptlrpc_expire_one_request()) @@@ Request sent has timed out for sent delay: [sent 1667987353/real 0]     req@0000000002a4f44e x1749008903039168/t0(0) o36-&amp;gt;exafs-MDT0000-mdc-ffff97086cd1d800@192.168.10.43@tcp:12/10 lens 520/576 e 0 to 1 dl 1667987360 ref 3 fl Rpc:Xr/0/ffffffff rc 0/-1 job:&apos;&apos;
Nov  9 18:49:20 anila kernel: [ 3667.591810] Lustre: exafs-MDT0000-mdc-ffff97086cd1d800: Connection to exafs-MDT0000 (at 192.168.10.43@tcp) was lost; in progress operations using this service     will wait for recovery to complete
Nov  9 18:49:24 anila kernel: [ 3671.431810] Lustre: 4874:0:(client.c:2284:ptlrpc_expire_one_request()) @@@ Request sent has timed out for sent delay: [sent 1667987355/real 0]     req@000000003297c839 x1749008903114560/t0(0) o4-&amp;gt;exafs-OST0000-osc-ffff97086cd1d800@192.168.10.43@tcp:6/4 lens 488/448 e 0 to 1 dl 1667987364 ref 3 fl Rpc:Xr/0/ffffffff rc 0/-1 job:&apos;&apos;
Nov  9 18:49:24 anila kernel: [ 3671.431814] Lustre: 4874:0:(client.c:2284:ptlrpc_expire_one_request()) Skipped 1 previous similar message
Nov  9 18:49:24 anila kernel: [ 3671.431833] Lustre: exafs-OST0000-osc-ffff97086cd1d800: Connection to exafs-OST0000 (at 192.168.10.43@tcp) was lost; in progress operations using this service     will wait for recovery to complete
Nov  9 18:49:25 anila kernel: [ 3672.395923] Lustre: exafs-MDT0000-mdc-ffff97086cd1d800: Connection restored to 192.168.10.43@tcp (at 192.168.10.43@tcp)
Nov  9 18:49:33 anila kernel: [ 3680.231831] Lustre: 4901:0:(client.c:2284:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1667987365/real 1667987371]    req@00000000be7eddcb x1749008903114624/t0(0) o4-&amp;gt;exafs-OST0000-osc-ffff97086cd1d800@192.168.10.43@tcp:6/4 lens 488/448 e 0 to 1 dl 1667987373 ref 2 fl Rpc:XQr/2/ffffffff rc 0/-1 job:&apos;&apos;
Nov  9 18:49:33 anila kernel: [ 3680.231835] Lustre: 4901:0:(client.c:2284:ptlrpc_expire_one_request()) Skipped 1 previous similar message
Nov  9 18:49:33 anila kernel: [ 3680.231853] Lustre: exafs-OST0000-osc-ffff97086cd1d800: Connection to exafs-OST0000 (at 192.168.10.43@tcp) was lost; in progress operations using this service     will wait for recovery to complete
Nov  9 18:49:41 anila kernel: [ 3688.839825] Lustre: 4877:0:(client.c:2284:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1667987369/real 1667987371]    req@00000000eedcb0b0 x1749008903115136/t0(0) o4-&amp;gt;exafs-OST0000-osc-ffff97086cd1d800@192.168.10.43@tcp:6/4 lens 488/448 e 0 to 1 dl 1667987381 ref 2 fl Rpc:XQr/2/ffffffff rc -11/-1 job:&apos;&apos;
Nov  9 18:49:41 anila kernel: [ 3688.839829] Lustre: 4877:0:(client.c:2284:ptlrpc_expire_one_request()) Skipped 1 previous similar message
Nov  9 18:49:43 anila kernel: [ 3690.631834] Lustre: exafs-MDT0000-mdc-ffff97086cd1d800: Connection to exafs-MDT0000 (at 192.168.10.43@tcp) was lost; in progress operations using this service     will wait for recovery to complete
Nov  9 18:49:46 anila kernel: [ 3693.991839] Lustre: 4892:0:(client.c:2284:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1667987365/real 1667987371]    req@000000006d74b528 x1749008903117952/t0(0) o4-&amp;gt;exafs-OST0000-osc-ffff97086cd1d800@192.168.10.43@tcp:6/4 lens 488/448 e 0 to 1 dl 1667987386 ref 2 fl Rpc:XQr/0/ffffffff rc 0/-1 job:&apos;&apos;
Nov  9 18:49:46 anila kernel: [ 3693.991843] Lustre: 4892:0:(client.c:2284:ptlrpc_expire_one_request()) Skipped 2 previous similar messages
Nov  9 18:49:56 anila kernel: [ 3703.943822] Lustre: 4872:0:(client.c:2284:ptlrpc_expire_one_request()) @@@ Request sent has timed out for sent delay: [sent 1667987388/real 0]     req@0000000017b47a96 x1749008903197504/t0(0) o400-&amp;gt;exafs-OST0002-osc-ffff97086cd1d800@192.168.10.44@tcp:28/4 lens 224/224 e 0 to 1 dl 1667987396 ref 2 fl Rpc:XNr/0/ffffffff rc 0/-1 job:&apos;&apos;
Nov  9 18:49:56 anila kernel: [ 3703.943826] Lustre: 4872:0:(client.c:2284:ptlrpc_expire_one_request()) Skipped 1 previous similar message
Nov  9 18:49:56 anila kernel: [ 3703.943840] Lustre: exafs-OST0002-osc-ffff97086cd1d800: Connection to exafs-OST0002 (at 192.168.10.44@tcp) was lost; in progress operations using this service     will wait for recovery to complete
Nov  9 18:49:56 anila kernel: [ 3703.943846] LustreError: 166-1: MGC192.168.10.43@tcp: Connection to MGS (at 192.168.10.43@tcp) was lost; in progress operations using this service will fail
Nov  9 18:49:58 anila kernel: [ 3705.255880] LustreError: 4849:0:(events.c:211:client_bulk_callback()) event type 1, status -5, desc 000000005fad487b
Nov  9 18:50:10 anila kernel: [ 3717.255825] Lustre: exafs-MDT0001-mdc-ffff97086cd1d800: Connection to exafs-MDT0001 (at 192.168.10.44@tcp) was lost; in progress operations using this service     will wait for recovery to complete
Nov  9 18:50:10 anila kernel: [ 3717.953515] Lustre: exafs-MDT0001-mdc-ffff97086cd1d800: Connection restored to 192.168.10.44@tcp (at 192.168.10.44@tcp)
Nov  9 18:50:16 anila kernel: [ 3723.655861] LustreError: 4841:0:(events.c:211:client_bulk_callback()) event type 1, status -113, desc 00000000eea9e18b
Nov  9 18:50:16 anila kernel: [ 3723.655955] LustreError: 4841:0:(events.c:211:client_bulk_callback()) Skipped 7 previous similar messages
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;h3&gt;&lt;a name=&quot;Pattern2%3ArevalidateFIDerror%28whilethemachineshutdownprocess%29&quot;&gt;&lt;/a&gt;Pattern 2: revalidate FID error (during the machine shutdown process)&lt;/h3&gt;


&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;LNetError: 4850:0:(lib-move.c:2250:lnet_handle_find_routed_path()) peer 192.168.10.43@tcp has no available nets
LNetError: 4860:0:(lib-move.c:2250:lnet_handle_find_routed_path()) Skipped 27 previous similar messages
systemd-shutdown[1]: Syncing filesystems and block devices - timed out, issuing SIGKILL to PID 12341.
LustreError: 9158:0:(file.c:5124:ll_inode_revalidate_fini()) exafs: revalidate FID [0x200000007:0x1:0x0] error: rc = -4
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;


&lt;h3&gt;&lt;a name=&quot;Pattern3%3ABadRIPvalue&quot;&gt;&lt;/a&gt;Pattern 3: Bad RIP value&lt;/h3&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;Nov 15 18:19:34 anila kernel: [ 2660.738158] INFO: task sync:5644 blocked for more than 120 seconds.
Nov 15 18:19:34 anila kernel: [ 2660.738223] Tainted: P OE 5.4.0-131-generic #147-Ubuntu
Nov 15 18:19:34 anila kernel: [ 2660.738274] &quot;echo 0 &amp;gt; /proc/sys/kernel/hung_task_timeout_secs&quot; disables this message.
Nov 15 18:19:34 anila kernel: [ 2660.738333] sync D 0 5644 5172 0x00004000
Nov 15 18:19:34 anila kernel: [ 2660.738337] Call Trace:
Nov 15 18:19:34 anila kernel: [ 2660.738350] __schedule+0x2e3/0x740
Nov 15 18:19:34 anila kernel: [ 2660.738355] schedule+0x42/0xb0
Nov 15 18:19:34 anila kernel: [ 2660.738364] wb_wait_for_completion+0x56/0x90
Nov 15 18:19:34 anila kernel: [ 2660.738371] ? __wake_up_pollfree+0x40/0x40
Nov 15 18:19:34 anila kernel: [ 2660.738375] sync_inodes_sb+0xd8/0x2a0
Nov 15 18:19:34 anila kernel: [ 2660.738382] sync_inodes_one_sb+0x15/0x20
Nov 15 18:19:34 anila kernel: [ 2660.738389] iterate_supers+0xa3/0x100
Nov 15 18:19:34 anila kernel: [ 2660.738393] ? default_file_splice_write+0x30/0x30
Nov 15 18:19:34 anila kernel: [ 2660.738398] ksys_sync+0x42/0xb0
Nov 15 18:19:34 anila kernel: [ 2660.738402] __ia32_sys_sync+0xe/0x20
Nov 15 18:19:34 anila kernel: [ 2660.738409] do_syscall_64+0x57/0x190
Nov 15 18:19:34 anila kernel: [ 2660.738415] entry_SYSCALL_64_after_hwframe+0x44/0xa9
Nov 15 18:19:34 anila kernel: [ 2660.738419] RIP: 0033:0x7f34a384022b
Nov 15 18:19:34 anila kernel: [ 2660.738428] Code: Bad RIP value.
Nov 15 18:19:34 anila kernel: [ 2660.738431] RSP: 002b:00007ffd325c6158 EFLAGS: 00000246 ORIG_RAX: 00000000000000a2
Nov 15 18:19:34 anila kernel: [ 2660.738434] RAX: ffffffffffffffda RBX: 00007ffd325c6298 RCX: 00007f34a384022b
Nov 15 18:19:34 anila kernel: [ 2660.738436] RDX: 00007f34a391a501 RSI: 0000000000000000 RDI: 00007f34a38e0bc0
Nov 15 18:19:34 anila kernel: [ 2660.738437] RBP: 0000000000000001 R08: 0000000000000000 R09: 0000000000000000
Nov 15 18:19:34 anila kernel: [ 2660.738439] R10: 00007f34a3914640 R11: 0000000000000246 R12: 0000000000000000
Nov 15 18:19:34 anila kernel: [ 2660.738440] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;h3&gt;&lt;a name=&quot;Pattern4%3AOutofmemory&quot;&gt;&lt;/a&gt;Pattern 4: Out of memory&lt;/h3&gt;

&lt;p&gt;The kernel logged an out-of-memory message.&lt;/p&gt;

&lt;h2&gt;&lt;a name=&quot;Teststhatcausedtheerror&quot;&gt;&lt;/a&gt;Tests that caused the error&lt;/h2&gt;

&lt;h3&gt;&lt;a name=&quot;Writea20GBfile&quot;&gt;&lt;/a&gt;Write a 20GB file&lt;/h3&gt;
&lt;ol&gt;
	&lt;li&gt;I added 100ms of latency to the network used for Lustre with the tc command (a sketch appears after the IOR command below). The purpose of the latency is to simulate a client that is far away from the Lustre server.&lt;/li&gt;
	&lt;li&gt;I ran the following command to write a 20GB file.&lt;/li&gt;
&lt;/ol&gt;


&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt; # ./ior --dataPacketType=timestamp -C -Q 1 -g -G 1836258435 -k -e -o /mnt/lustre/io500/test/ior_file_easy -t 1m -F -w -D 300 -O stoneWallingWearOut=1 -a POSIX -b 20480m
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
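&lt;p&gt;A minimal sketch of adding and removing such latency with tc netem (eth0 is a placeholder for the actual Lustre-facing interface):&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt; # tc qdisc add dev eth0 root netem delay 100ms
 # tc qdisc show dev eth0
 # tc qdisc del dev eth0 root
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;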

&lt;ul&gt;
	&lt;li&gt;fsync was executed because the -e option was specified.&lt;/li&gt;
	&lt;li&gt;The error didn&apos;t occur when the sync was not executed.&lt;/li&gt;
	&lt;li&gt;I think that syncing a 20GB file is an ordinary operation (a write-then-fsync sketch without IOR follows this list).&lt;/li&gt;
&lt;/ul&gt;
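&lt;p&gt;For reference, a minimal sketch of the same write-then-fsync pattern without IOR (the output path is a placeholder; dd&apos;s conv=fsync issues an fsync after all data is written):&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt; # dd if=/dev/zero of=/mnt/lustre/io500/test/ddfile bs=1M count=20480 conv=fsync
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;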


&lt;h3&gt;&lt;a name=&quot;ExecuteIOR%2CMDTestcontinuously.&quot;&gt;&lt;/a&gt;Execute IOR and MDTest continuously&lt;/h3&gt;
&lt;ol&gt;
	&lt;li&gt;I didn&apos;t add latency.&lt;/li&gt;
	&lt;li&gt;I executed IOR and MDTest back to back, without fsync.&lt;/li&gt;
&lt;/ol&gt;


&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt; # mpiexec -np 4 ./ior --dataPacketType=timestamp -g -G 1836258167 -k -o /mnt/lustre/io500/test/ior-easy/ior_file_easy -t 1m -b 25600m -F -w -D 300 -O stoneWallingWearOut=1 -a POSIX
 # sudo -n bash -c &quot;echo 3 &amp;gt; /proc/sys/vm/drop_caches&quot;
 # mpiexec -np 4 ./mdtest --dataPacketType=timestamp -n 1000000 -u -L -F -P -G -1572531335 -N 0 -d /mnt/lustre/io500/test/mdtest-easy -C -W 300 -a POSIX
 # sudo -n bash -c &quot;echo 3 &amp;gt; /proc/sys/vm/drop_caches&quot;
 # mpiexec -np 4 ./ior --dataPacketType=timestamp -g -G=1836268855 -k -o /mnt/lustre/io500/test/ior-hard/file -t 47008 -b 47008 -s 500000 -w -D 300 -a POSIX -O stoneWallingWearOut=1
 # sudo -n bash -c &quot;echo 3 &amp;gt; /proc/sys/vm/drop_caches&quot;
 # mpiexec -np 4 ./mdtest --dataPacketType=timestamp -n 1000000 -t -w 3901 -e 3901 -P -G=-1572520647 -N 0 -F -d /mnt/lustre/io500/test/mdtest-hard -C -W 300 -a POSIX
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;ul&gt;
	&lt;li&gt;IOR-easy write: 100GB&lt;/li&gt;
	&lt;li&gt;MDTest-easy write: 4,000,000 files&lt;/li&gt;
	&lt;li&gt;IOR-hard write: 500,000 segments (about 85GB)&lt;/li&gt;
	&lt;li&gt;MDTest-hard write: 4,000,000 files&lt;/li&gt;
	&lt;li&gt;The error didn&apos;t occur when I ran the same workload with fsync (a sketch for checking the accumulated dirty cache follows this list).&lt;/li&gt;
&lt;/ul&gt;
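&lt;p&gt;A sketch (not part of the original test) of checking how much dirty data has accumulated in the client&apos;s page cache before it is flushed:&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt; $ grep -E &apos;^(Dirty|Writeback):&apos; /proc/meminfo
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;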


&lt;h3&gt;&lt;a name=&quot;Syncalotoffiles&quot;&gt;&lt;/a&gt;Sync a lot of files&lt;/h3&gt;
&lt;ol&gt;
	&lt;li&gt;I didn&apos;t add latency.&lt;/li&gt;
	&lt;li&gt;I ran MDTest without fsync, so a lot of files were cached in WBC.&lt;/li&gt;
	&lt;li&gt;I ran sync.&lt;/li&gt;
&lt;/ol&gt;


&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt; # mpiexec -np 4 ./mdtest --dataPacketType=timestamp -n 1000000 -u -L -F -P -G -1572531335 -N 0 -d /mnt/lustre/test/mdtest-easy -C -W 300 -a POSIX
 # sync
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;h2&gt;&lt;a name=&quot;myprediction&quot;&gt;&lt;/a&gt;My prediction&lt;/h2&gt;

&lt;p&gt;Releasing (flushing) the WBC takes a long time when latency is added or when a lot of files are synced at once. As a result, requests time out.&lt;/p&gt;

&lt;p&gt;No error occurred when I ran IOR/MDTest with the fsync option in a no-latency environment, so I think that syncing in small steps avoids this error. A sketch of that idea follows.&lt;/p&gt;
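&lt;p&gt;A minimal sketch of &quot;syncing in small steps&quot;: writing 20GB as twenty 1GB chunks and fsyncing each chunk via dd&apos;s conv=fsync (the path is a placeholder):&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt; # for i in $(seq 1 20); do dd if=/dev/zero of=/mnt/lustre/chunk_$i bs=1M count=1024 conv=fsync; done
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</description>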
                <environment>[Lustre version]&lt;br/&gt;
&lt;a href=&quot;https://review.whamcloud.com/#/c/fs/lustre-release/+/47730/&quot;&gt;https://review.whamcloud.com/#/c/fs/lustre-release/+/47730/&lt;/a&gt;&lt;br/&gt;
=&amp;gt; patch set 1&lt;br/&gt;
&lt;br/&gt;
[Lustre client machine]&lt;br/&gt;
- OS: Ubuntu 20.04&lt;br/&gt;
- kernel: 5.4.0-131-generic&lt;br/&gt;
- Cache: 175GB[1]&lt;br/&gt;
- remarks:&lt;br/&gt;
&amp;nbsp;&amp;nbsp;- dirty_writeback_centisecs=0&lt;br/&gt;
&amp;nbsp;&amp;nbsp;- writeback cache: aging_keep mode&lt;br/&gt;
&amp;nbsp;&amp;nbsp;- Lustre&amp;#39;s maximum inode number is about 160,000,000 [2].&lt;br/&gt;
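&amp;nbsp;&amp;nbsp;- (dirty_writeback_centisecs can be set with, e.g., sysctl -w vm.dirty_writeback_centisecs=0)&lt;br/&gt;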
&lt;br/&gt;
[1] Total RAM(250GB) * dirty_background_ratio(70%) = 175GB&lt;br/&gt;
&lt;br/&gt;
[2]&lt;br/&gt;
$ lfs df -i&lt;br/&gt;
UUID                      Inodes       IUsed       IFree IUse% Mounted on&lt;br/&gt;
exafs-MDT0000_UUID      83050496        6256    83044240   1% /mnt/lustre[MDT:0] &lt;br/&gt;
exafs-MDT0001_UUID      83050496         267    83050229   1% /mnt/lustre[MDT:1] &lt;br/&gt;
exafs-OST0000_UUID      26214400        1927    26212473   1% /mnt/lustre[OST:0] &lt;br/&gt;
exafs-OST0001_UUID      26214400        1834    26212566   1% /mnt/lustre[OST:1] &lt;br/&gt;
exafs-OST0002_UUID      26214400        1995    26212405   1% /mnt/lustre[OST:2] &lt;br/&gt;
exafs-OST0003_UUID      26214400        1898    26212502   1% /mnt/lustre[OST:3] &lt;br/&gt;
&lt;br/&gt;
filesystem_summary:    104856469        6523   104849946   1% /mnt/lustre</environment>
        <key id="73467">LU-16361</key>
            <summary>Lustre client crashed when I used WBC</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="3" iconUrl="https://jira.whamcloud.com/images/icons/priorities/major.svg">Major</priority>
                        <status id="1" iconUrl="https://jira.whamcloud.com/images/icons/statuses/open.png" description="The issue is open and ready for the assignee to start work on it.">Open</status>
                    <statusCategory id="2" key="new" colorName="default"/>
                                    <resolution id="-1">Unresolved</resolution>
                                        <assignee username="wc-triage">WC Triage</assignee>
                                    <reporter username="honjo.rikimaru">Honjo Rikimaru</reporter>
                        <labels>
                    </labels>
                <created>Fri, 2 Dec 2022 08:58:23 +0000</created>
                <updated>Tue, 6 Dec 2022 00:25:07 +0000</updated>
                                                                                <due></due>
                            <votes>0</votes>
                                    <watches>4</watches>
                                                                                <issuelinks>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                            <outwardlinks description="is related to ">
                                        <issuelink>
            <issuekey id="51932">LU-10938</issuekey>
        </issuelink>
                            </outwardlinks>
                                                        </issuelinktype>
                    </issuelinks>
                <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|i036z3:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>