<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 02:02:54 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-6747] Intermittent rc=-EROFS from lod_statfs_and_check</title>
                <link>https://jira.whamcloud.com/browse/LU-6747</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;Running mdtest, I get failures like this:&lt;/p&gt;

&lt;p&gt;06/18/2015 16:22:12: Process 26(zwicky29): FAILED in create_remove_items_helper, unable to create file file.mdtest.26.0 (cwd=/p/lburn/faaland1/zfs-crada/mdtest/2/#test-dir.0/mdtest_tree.26.0): No space left on device                    &lt;br/&gt;
06/18/2015 16:22:12: Process 19(zwicky22): FAILED in create_remove_items_helper, unable to create file file.mdtest.19.0 (cwd=/p/lburn/faaland1/zfs-crada/mdtest/2/#test-dir.0/mdtest_tree.19.0): No space left on device&lt;/p&gt;

&lt;p&gt;The three servers involved are using the zfs backend and their pools have lots of free space; all are &amp;lt;1% full.&lt;/p&gt;

&lt;p&gt;In the Lustre debug log on the MDS, I&apos;m seeing&lt;br/&gt;
lod_qos.c:238:lod_statfs_and_check() return -30 (-LUSTRE_EROFS)&lt;br/&gt;
lod_qos.c:1016:lod_alloc_rr() return -28 (-ENOSPC)&lt;/p&gt;

&lt;p&gt;Many other functions report exiting with -28 as well:&lt;/p&gt;

&lt;p&gt;lod_object.c:2104:lod_declare_xattr_set()&lt;br/&gt;
lod_object.c:3352:lod_declare_striped_object()&lt;br/&gt;
lod_object.c:3384:lod_declare_striped_object()&lt;br/&gt;
lod_object.c:3463:lod_declare_object_create()&lt;br/&gt;
lod_qos.c:1913:lod_qos_prep_create()&lt;br/&gt;
mdd_dir.c:1786:mdd_create_data()&lt;br/&gt;
mdd_dir.c:1807:mdd_create_data()&lt;br/&gt;
mdd_dir.c:1983:mdd_declare_object_create()&lt;br/&gt;
mdd_dir.c:2054:mdd_declare_create()&lt;br/&gt;
mdd_dir.c:2354:mdd_create()&lt;br/&gt;
mdd_object.c:352:mdd_declare_object_create_internal()&lt;br/&gt;
mdt_open.c:1105:mdt_open_by_fid_lock()&lt;br/&gt;
mdt_open.c:1255:mdt_reint_open()&lt;br/&gt;
mdt_open.c:1374:mdt_reint_open()&lt;br/&gt;
mdt_open.c:138:mdt_create_data()&lt;br/&gt;
mdt_open.c:347:mdt_mfd_open()&lt;br/&gt;
mdt_open.c:607:mdt_finish_open()&lt;br/&gt;
mdt_reint.c:1997:mdt_reint_rec()&lt;/p&gt;

&lt;p&gt;I&apos;ve attached a few thousand lines of debug output from the MDS with both debug and debug_subsys set to -1.  I can reproduce easily, so I can get debug output with specific subsystems turned off or on.&lt;/p&gt;</description>
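<!--
A hedged illustration of the error path described above: the MDS-side QOS
allocator walks the OSTs, skips any whose statfs check fails (here with
rc=-30/-EROFS from lod_statfs_and_check), and falls back to rc=-28/-ENOSPC
once no OST qualifies, which is how "No space left on device" can surface on
pools that are under 1% full.  The function below is a simplified sketch,
not the verbatim 2.7 lod_qos.c; the name lod_alloc_rr_sketch and the exact
fields used are illustrative.

    static int lod_alloc_rr_sketch(const struct lu_env *env,
                                   struct lod_device *d,
                                   struct obd_statfs *sfs, int stripe_count)
    {
            int usable = 0;
            int i;

            for (i = 0; i < d->lod_ostnr; i++) {
                    /* An OST whose statfs check fails is skipped, no
                     * matter how much free space it reports; the
                     * intermittent -EROFS seen in the logs lands here. */
                    if (lod_statfs_and_check(env, d, i, sfs) != 0)
                            continue;
                    usable++;
            }

            /* With every OST skipped, the allocator has nothing to offer
             * and the create fails with -ENOSPC despite the free space. */
            return usable >= stripe_count ? 0 : -ENOSPC;
    }
-->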
                <environment>lustre 2.7.54&lt;br/&gt;
spl/zfs 0.6.4.1&lt;br/&gt;
single MDS with one MDT and MGS&lt;br/&gt;
two OSSs with one OST each</environment>
        <key id="30741">LU-6747</key>
            <summary>Intermittent rc=-EROFS from lod_statfs_and_check</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="2" iconUrl="https://jira.whamcloud.com/images/icons/priorities/critical.svg">Critical</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="utopiabound">Nathaniel Clark</assignee>
                                    <reporter username="ofaaland">Olaf Faaland</reporter>
                        <labels>
                            <label>llnl</label>
                    </labels>
                <created>Fri, 19 Jun 2015 00:06:16 +0000</created>
                <updated>Sat, 13 Feb 2016 14:25:03 +0000</updated>
                            <resolved>Fri, 10 Jul 2015 12:19:52 +0000</resolved>
                                    <version>Lustre 2.7.0</version>
                                    <fixVersion>Lustre 2.8.0</fixVersion>
                                        <due></due>
                            <votes>0</votes>
                                    <watches>5</watches>
                    <comments>
                            <comment id="119047" author="ofaaland" created="Fri, 19 Jun 2015 00:08:30 +0000"  >&lt;p&gt;mdtest is being run as follows:&lt;/p&gt;

&lt;p&gt;mdtest-1.8.3 was launched with 32 total task(s) on 32 nodes&lt;br/&gt;
Command line used: /g/g0/faaland1/projects/zfs-crada/mdtest/mdtest -v -d /p/lburn/faaland1/zfs-crada/mdtest/2/ -u -t -w 2048 -i 100 -n 1000                                                                                                 &lt;br/&gt;
Path: /p/lburn/faaland1/zfs-crada/mdtest/2                                                                            &lt;br/&gt;
FS: 42.2 TiB   Used FS: 0.0%   Inodes: 7.2 Mi   Used Inodes: 0.1%                                                     &lt;/p&gt;

&lt;p&gt;32 tasks, 32000 files/directories&lt;/p&gt;

&lt;p&gt;   Operation               Duration              Rate&lt;br/&gt;
   ---------               --------              ----&lt;/p&gt;
&lt;ul&gt;
	&lt;li&gt;iteration 1 06/18/2015 16:22:01 *&lt;br/&gt;
   Tree creation     :      0.010 sec,    100.583 ops/sec&lt;br/&gt;
   Directory creation:      3.964 sec,   8072.493 ops/sec&lt;br/&gt;
   Directory stat    :      1.346 sec,  23773.545 ops/sec&lt;br/&gt;
   Directory removal :      5.736 sec,   5578.833 ops/sec&lt;br/&gt;
06/18/2015 16:22:12: Process 26(zwicky29): FAILED in create_remove_items_helper, unable to create file file.mdtest.26.0 (cwd=/p/lburn/faaland1/zfs-crada/mdtest/2/#test-dir.0/mdtest_tree.26.0): No space left on device                    &lt;br/&gt;
06/18/2015 16:22:12: Process 19(zwicky22): FAILED in create_remove_items_helper, unable to create file file.mdtest.19.0 (cwd=/p/lburn/faaland1/zfs-crada/mdtest/2/#test-dir.0/mdtest_tree.19.0): No space left on device&lt;/li&gt;
&lt;/ul&gt;


&lt;p&gt;This occurs intermittently; sometimes mdtest creates tens of thousands of files before the failure, other times none or only a few.  When it does occur, all the clients in the mdtest job (32 in the example above) report the same failure.&lt;/p&gt;

&lt;p&gt;There is no console log output on any of the servers, between the time I start running mdtest and after the error has occurred.&lt;/p&gt;

&lt;p&gt;The problem disappears on its own; re-running mdtest after it fails, with a new target directory on the same filesystem, runs with no problem.  This is true whether or not mdtest runs on the same nodes as the previous invocation.&lt;/p&gt;

&lt;p&gt;I sampled prealloc_status, prealloc_next_id, and prealloc_last_id for each of the two OSTs at a 1-second interval, starting when I ran mdtest and stopping after the job failed.  prealloc_status was 0 the entire time for both OSTs, and $(cat prealloc_last_id) - $(cat prealloc_next_id) got as low as 5030 for one OST.&lt;/p&gt;

&lt;p&gt;I also noticed that files are not created evenly on the OSTs according to ltop; it shows the lock count increasing on one OST for several seconds, then that OST&apos;s lock count stops changing and the other OST&apos;s lock count starts increasing.  In the case of one 32-node mdtest run, it created all 32000 files on OST0001 and none on OST0000.  Other times it finishes with the same number of locks on each OST, but they see-saw up.&lt;/p&gt;</comment>
                            <comment id="119048" author="ofaaland" created="Fri, 19 Jun 2015 01:04:17 +0000"  >&lt;p&gt;osp_statfs() reports many free blocks and many free files for OST0000 when called by lod_statfs_and_check:&lt;/p&gt;

&lt;p&gt;274 00020000:00000001:19.0:1434585022.784732:0:63759:0:(lod_qos.c:193:lod_statfs_and_check()) Process entered&lt;br/&gt;
275 00000004:00000001:19.0:1434585022.784733:0:63759:0:(osp_dev.c:631:osp_statfs()) Process entered&lt;br/&gt;
276 00000004:00001000:19.0:1434585022.784733:0:63759:0:(osp_dev.c:662:osp_statfs()) lburn-OST0000-osc-MDT0000: 5658306464 blocks, 5658303744 free, 5658295552 avail, 5658321151 files, 5658303744 free files&lt;br/&gt;
277 00000004:00000001:19.0:1434585022.784735:0:63759:0:(osp_dev.c:663:osp_statfs()) Process leaving (rc=0 : 0 : 0)&lt;br/&gt;
278 00020000:00000001:19.0:1434585022.784736:0:63759:0:(lod_qos.c:238:lod_statfs_and_check()) Process leaving (rc=18446744073709551586 : -30 : ffffffffffffffe2)&lt;/p&gt;</comment>
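<!--
The trace above shows osp_statfs() succeeding (rc=0, ample free blocks and
files) while lod_statfs_and_check() still leaves with rc=-30 (-EROFS).  A
minimal sketch of the kind of post-statfs check in lod_qos.c that produces
that behavior; simplified and hedged, not the verbatim 2.7 source, and the
_sketch name is illustrative:

    static int lod_statfs_and_check_sketch(const struct lu_env *env,
                                           struct lod_device *d, int index,
                                           struct obd_statfs *sfs)
    {
            struct lod_tgt_desc *ost = OST_TGT(d, index);
            int rc;

            rc = dt_statfs(env, ost->ltd_ost, sfs);
            if (rc)
                    return rc;

            /* Even after a successful statfs, a READONLY flag in os_state
             * makes the OST ineligible for new objects.  If the OSD never
             * initialized os_state, stale memory can set this flag at
             * random, matching the intermittent failures reported here. */
            if (sfs->os_state & OS_STATE_READONLY)
                    rc = -EROFS;

            return rc;
    }
-->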
                            <comment id="119067" author="gerrit" created="Fri, 19 Jun 2015 07:49:30 +0000"  >&lt;p&gt;Andreas Dilger (andreas.dilger@intel.com) uploaded a new patch: &lt;a href=&quot;http://review.whamcloud.com/15346&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/15346&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-6747&quot; title=&quot;Intermittent rc=-EROFS from lod_statfs_and_check&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-6747&quot;&gt;&lt;del&gt;LU-6747&lt;/del&gt;&lt;/a&gt; osd-zfs: initialize obd_statfs in osd_statfs()&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 8d468ec53f908d8a2d067eabab817d04d334d737&lt;/p&gt;</comment>
                            <comment id="119068" author="adilger" created="Fri, 19 Jun 2015 07:49:34 +0000"  >&lt;p&gt;The &lt;tt&gt;-EROFS&lt;/tt&gt; return is because the OST returned &lt;tt&gt;os_state=OS_STATE_READONLY&lt;/tt&gt; to the MDS, and if all of the OSTs are returning this then there are no available OST objects and the MDS needs to return &lt;tt&gt;-ENOSPC&lt;/tt&gt;.  The main point of investigation should be why the OST(s) are returning OS_STATE_READONLY.&lt;/p&gt;

&lt;p&gt;As far as I can see, the only place that sets OS_STATE_READONLY is in osd-ldiskfs, but you are testing ZFS OSTs, so it isn&apos;t clear where that is coming from (there is only a commented-out section in osd-zfs/osd_handler.c which references it).&lt;/p&gt;

&lt;p&gt;If you run with &lt;tt&gt;D_CACHE&lt;/tt&gt; enabled on the OSS (&lt;tt&gt;lctl set_param debug=+cache&lt;/tt&gt;) it would print the os_state field on the OST during statfs processing.&lt;/p&gt;

&lt;p&gt;Looking at the osd-zfs osd_statfs() method, it seems possible that os_state is not being initialized properly and is just holding random data?  I&apos;d think this would have triggered earlier if that were the case, or maybe I&apos;m mistaken and it is initialized somewhere else, but in the osd-ldiskfs case it is initialized within its own osd_statfs() call.&lt;/p&gt;</comment>
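<!--
A minimal sketch of the direction patch 15346 takes, per its subject
"osd-zfs: initialize obd_statfs in osd_statfs()": zero the caller-supplied
obd_statfs before filling in the fields osd-zfs knows about, so os_state
(and any other field the OSD does not set) can never carry stale memory.
The landed change may differ in detail; osd_statfs_sketch is illustrative.

    static int osd_statfs_sketch(const struct lu_env *env,
                                 struct dt_device *d,
                                 struct obd_statfs *osfs)
    {
            /* Start from a known-zero state so untouched fields, os_state
             * in particular, cannot inherit random bits such as
             * OS_STATE_READONLY. */
            memset(osfs, 0, sizeof(*osfs));

            /* ... fill os_blocks / os_bfree / os_bavail / os_files /
             * os_ffree from dmu_objset_space() as before ... */

            return 0;
    }
-->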
                            <comment id="119069" author="adilger" created="Fri, 19 Jun 2015 07:50:18 +0000"  >&lt;p&gt;Totally untested patch, but I think it is pretty reasonable and will be tested out soon enough.&lt;/p&gt;</comment>
                            <comment id="119089" author="pjones" created="Fri, 19 Jun 2015 12:23:08 +0000"  >&lt;p&gt;Assigning to Nathaniel for any follow on questions&lt;/p&gt;</comment>
                            <comment id="119150" author="ofaaland" created="Fri, 19 Jun 2015 21:01:10 +0000"  >&lt;p&gt;Both OSTs are indeed reporting a nonzero state intermittently:&lt;/p&gt;

&lt;p&gt;zwicky-lburn-oss1.out:00002000:00000020:0.0:1434584977.251447:0:12631:0:(ofd_obd.c:841:ofd_statfs()) 176822077 blocks: 176821990 free, 176813342 avail; 5658325578 objects: 5658303680 free; state 9125230e&lt;br/&gt;
zwicky-lburn-oss1.out:00002000:00000020:0.0:1434585002.260232:0:12631:0:(ofd_obd.c:841:ofd_statfs()) 176822077 blocks: 176821990 free, 176821809 avail; 5658327093 objects: 5658303680 free; state 9125230e&lt;br/&gt;
zwicky-lburn-oss1.out:00002000:00000020:9.0:1434585002.890186:0:12554:0:(ofd_obd.c:841:ofd_statfs()) 176822077 blocks: 176821990 free, 176805734 avail; 5658327093 objects: 5658303680 free; state 9125230e&lt;br/&gt;
zwicky-lburn-oss1.out:00002000:00000020:9.0:1434585007.890294:0:12554:0:(ofd_obd.c:841:ofd_statfs()) 176822077 blocks: 176821988 free, 176812034 avail; 5658330734 objects: 5658303616 free; state 9125230e&lt;br/&gt;
zwicky-lburn-oss1.out:00002000:00000020:0.0:1434585017.266203:0:12631:0:(ofd_obd.c:841:ofd_statfs()) 176822077 blocks: 176821992 free, 176821815 avail; 5658321151 objects: 5658303744 free; state 9125230e&lt;br/&gt;
zwicky-lburn-oss1.out:00002000:00000020:10.0:1434585017.891472:0:12649:0:(ofd_obd.c:841:ofd_statfs()) 176822077 blocks: 176821992 free, 176821736 avail; 5658321151 objects: 5658303744 free; state 9125230e&lt;br/&gt;
zwicky-lburn-oss1.out:00002000:00000020:0.0:1434585022.267691:0:12631:0:(ofd_obd.c:841:ofd_statfs()) 176822077 blocks: 176821992 free, 176821815 avail; 5658321151 objects: 5658303744 free; state 9125230e&lt;br/&gt;
zwicky-lburn-oss1.out:00002000:00000020:10.0:1434585022.912333:0:12649:0:(ofd_obd.c:841:ofd_statfs()) 176822077 blocks: 176821992 free, 176821736 avail; 5658321151 objects: 5658303744 free; state 9125230e&lt;br/&gt;
zwicky-lburn-oss1.out:00002000:00000020:0.0:1434585027.269289:0:12631:0:(ofd_obd.c:841:ofd_statfs()) 176822077 blocks: 176821992 free, 176821815 avail; 5658321151 objects: 5658303744 free; state 9125230e&lt;br/&gt;
zwicky-lburn-oss1.out:00002000:00000020:10.0:1434585027.913373:0:12649:0:(ofd_obd.c:841:ofd_statfs()) 176822077 blocks: 176821992 free, 176821736 avail; 5658321151 objects: 5658303744 free; state 9125230e&lt;br/&gt;
zwicky-lburn-oss1.out:00002000:00000020:0.0:1434585032.270956:0:12631:0:(ofd_obd.c:841:ofd_statfs()) 176822077 blocks: 176821992 free, 176821815 avail; 5658321151 objects: 5658303744 free; state 9125230e&lt;br/&gt;
zwicky-lburn-oss1.out:00002000:00000020:10.0:1434585032.913344:0:12649:0:(ofd_obd.c:841:ofd_statfs()) 176822077 blocks: 176821992 free, 176821736 avail; 5658321151 objects: 5658303744 free; state 9125230e&lt;br/&gt;
zwicky-lburn-oss1.out:00002000:00000020:0.0:1434585037.272373:0:12631:0:(ofd_obd.c:841:ofd_statfs()) 176822077 blocks: 176821992 free, 176821815 avail; 5658321151 objects: 5658303744 free; state 9125230e&lt;br/&gt;
zwicky-lburn-oss1.out:00002000:00000020:10.0:1434585037.914346:0:12649:0:(ofd_obd.c:841:ofd_statfs()) 176822077 blocks: 176821992 free, 176821736 avail; 5658321151 objects: 5658303744 free; state 9125230e&lt;br/&gt;
zwicky-lburn-oss1.out:00002000:00000020:0.0:1434585042.273878:0:12631:0:(ofd_obd.c:841:ofd_statfs()) 176822077 blocks: 176821992 free, 176821815 avail; 5658321151 objects: 5658303744 free; state 9125230e&lt;br/&gt;
zwicky-lburn-oss1.out:00002000:00000020:10.0:1434585042.914405:0:12649:0:(ofd_obd.c:841:ofd_statfs()) 176822077 blocks: 176821992 free, 176821736 avail; 5658321151 objects: 5658303744 free; state 9125230e&lt;br/&gt;
zwicky-lburn-oss2.out:00002000:00000020:1.0:1434584984.055391:0:12541:0:(ofd_obd.c:841:ofd_statfs()) 176822073 blocks: 176821934 free, 176821857 avail; 5658316107 objects: 5658301888 free; state c9c4f476&lt;br/&gt;
zwicky-lburn-oss2.out:00002000:00000020:1.0:1434584989.057246:0:12541:0:(ofd_obd.c:841:ofd_statfs()) 176822076 blocks: 176821943 free, 176821866 avail; 5658315946 objects: 5658302176 free; state c9c4f476&lt;br/&gt;
zwicky-lburn-oss2.out:00002000:00000020:1.0:1434585014.066824:0:12541:0:(ofd_obd.c:841:ofd_statfs()) 176822076 blocks: 176821940 free, 176821863 avail; 5658319856 objects: 5658302080 free; state c9c4f476&lt;br/&gt;
zwicky-lburn-oss2.out:00002000:00000020:1.0:1434585019.068752:0:12541:0:(ofd_obd.c:841:ofd_statfs()) 176822076 blocks: 176821940 free, 176821863 avail; 5658319856 objects: 5658302080 free; state c9c4f476&lt;br/&gt;
zwicky-lburn-oss2.out:00002000:00000020:1.0:1434585029.072516:0:12541:0:(ofd_obd.c:841:ofd_statfs()) 176822076 blocks: 176821921 free, 176821844 avail; 5658339248 objects: 5658301472 free; state c9c4f476&lt;br/&gt;
zwicky-lburn-oss2.out:00002000:00000020:8.0:1434585031.588445:0:12764:0:(ofd_obd.c:841:ofd_statfs()) 176822076 blocks: 176821922 free, 176790984 avail; 5658338073 objects: 5658301504 free; state c9c4f476&lt;br/&gt;
zwicky-lburn-oss2.out:00002000:00000020:1.0:1434585034.074483:0:12541:0:(ofd_obd.c:841:ofd_statfs()) 176822076 blocks: 176821924 free, 176795098 avail; 5658334103 objects: 5658301568 free; state c9c4f476&lt;br/&gt;
zwicky-lburn-oss2.out:00002000:00000020:4.0:1434585044.078171:0:12541:0:(ofd_obd.c:841:ofd_statfs()) 176822073 blocks: 176821940 free, 176820539 avail; 4863994422 objects: 4863984771 free; state c9c4f476&lt;br/&gt;
zwicky-lburn-oss2.out:00002000:00000020:4.0:1434585049.080189:0:12541:0:(ofd_obd.c:841:ofd_statfs()) 176822076 blocks: 176821945 free, 176821872 avail; 3106822606 objects: 3106816830 free; state c9c4f476&lt;/p&gt;

&lt;p&gt;I&apos;ll try the patch.&lt;/p&gt;</comment>
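<!--
A small, runnable check of why the garbage states above can intermittently
read as read-only.  It assumes OS_STATE_READONLY is the 0x00000002 bit of
os_state, as declared in the Lustre headers of that era (an assumption
worth verifying); both sampled values happen to have that bit set:

    #include <stdio.h>

    int main(void)
    {
            /* os_state values captured in the ofd_statfs() log lines */
            const unsigned int states[] = { 0x9125230e, 0xc9c4f476 };
            /* assumed value of OS_STATE_READONLY */
            const unsigned int readonly = 0x00000002;
            unsigned int i;

            for (i = 0; i < 2; i++)
                    printf("state %08x: readonly bit %s\n", states[i],
                           (states[i] & readonly) ? "set" : "clear");
            return 0;
    }
-->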
                            <comment id="119176" author="ofaaland" created="Sat, 20 Jun 2015 05:05:53 +0000"  >&lt;p&gt;Nathaniel,&lt;/p&gt;

&lt;p&gt;The patch had an error in its usage of offsetof(), which I fixed, but the patch still fails to build on SLES12.  It builds fine on RHEL.  I looked at the SLES12 build console output but am not able to tell what the problem is.  Can you take a look?&lt;/p&gt;

&lt;p&gt;thanks,&lt;br/&gt;
Olaf&lt;/p&gt;</comment>
                            <comment id="119186" author="ofaaland" created="Sat, 20 Jun 2015 22:58:29 +0000"  >&lt;p&gt;The patch appears to have resolved the problem; mdtest has been running now for several hours without encountering the issue, and ran for several hours last night as well.&lt;/p&gt;

&lt;p&gt;I&apos;m surprised I didn&apos;t see this problem earlier.  I ran mdtest in the same manner, against the same SPL/ZFS/Lustre versions, on the same clients and servers, with SPL and ZFS built using the --enable-debug configure option, and didn&apos;t observe this problem.  I then rebuilt SPL and ZFS without debug, and rebuilt Lustre against them, which is when I began to see this.&lt;/p&gt;</comment>
                            <comment id="120931" author="gerrit" created="Fri, 10 Jul 2015 03:27:20 +0000"  >&lt;p&gt;Oleg Drokin (oleg.drokin@intel.com) merged in patch &lt;a href=&quot;http://review.whamcloud.com/15346/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/15346/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-6747&quot; title=&quot;Intermittent rc=-EROFS from lod_statfs_and_check&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-6747&quot;&gt;&lt;del&gt;LU-6747&lt;/del&gt;&lt;/a&gt; osd-zfs: initialize obd_statfs in osd_statfs()&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: 1ef0185e8c12aa11a4c87c4956b1ba408c0e3d08&lt;/p&gt;</comment>
                            <comment id="120960" author="pjones" created="Fri, 10 Jul 2015 12:19:52 +0000"  >&lt;p&gt;Landed for 2.8&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                    <issuelinktype id="10011">
                        <name>Related</name>
                        <outwardlinks description="is related to"/>
                        <inwardlinks description="is related to">
                            <issuelink>
                                <issuekey id="30833">LU-6767</issuekey>
                            </issuelink>
                        </inwardlinks>
                    </issuelinktype>
                </issuelinks>
                <attachments>
                            <attachment id="18222" name="zwicky-lburn-mds.snippet.out.gz" size="32693" author="ofaaland" created="Fri, 19 Jun 2015 00:06:16 +0000"/>
                    </attachments>
                <subtasks></subtasks>
                <customfields>
                    <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues/>
                    </customfield>
                    <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzxg5r:</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                </customfields>
    </item>
</channel>
</rss>