<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 01:51:39 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92">
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-5457] lvbo_init failed for resource</title>
                <link>https://jira.whamcloud.com/browse/LU-5457</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;OST  goes into disconn state on MDS We see the following error on OSS&lt;/p&gt;

&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;00010000:00020000:9.0:1407349941.597616:0:10879:0:(ldlm_resource.c:1165:ldlm_resource_get()) nbp8-OST001d: lvbo_init failed &lt;span class=&quot;code-keyword&quot;&gt;for&lt;/span&gt; resource 0x8b3e13:0x0: rc = -2
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Will upload debug logs to ftp site&lt;/p&gt;</description>
                <environment>lustre2.4.3 server</environment>
        <key id="25891">LU-5457</key>
            <summary>lvbo_init failed for resource</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="1" iconUrl="https://jira.whamcloud.com/images/icons/priorities/blocker.svg">Blocker</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="2">Won&apos;t Fix</resolution>
                                        <assignee username="green">Oleg Drokin</assignee>
                                    <reporter username="mhanafi">Mahmoud Hanafi</reporter>
                        <labels>
                    </labels>
                <created>Wed, 6 Aug 2014 18:40:53 +0000</created>
                <updated>Fri, 16 Oct 2015 03:41:03 +0000</updated>
                            <resolved>Fri, 16 Oct 2015 03:41:03 +0000</resolved>
                                    <version>Lustre 2.4.3</version>
                                                        <due></due>
                            <votes>0</votes>
                                    <watches>7</watches>
                                                                            <comments>
                            <comment id="90988" author="mhanafi" created="Wed, 6 Aug 2014 18:46:28 +0000"  >&lt;p&gt;We tried to unmount and remount the OST but it just hung. Took a crash dump. Rebooted and remounted. It was OK for a few minutes after recovery but went into disconn state again.&lt;/p&gt;

&lt;p&gt;uploaded to ftp.whamcloud.com&lt;br/&gt;
uploads/&lt;a href=&quot;https://jira.whamcloud.com/browse/LU-5457&quot; title=&quot;lvbo_init failed for resource&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-5457&quot;&gt;&lt;del&gt;LU-5457&lt;/del&gt;&lt;/a&gt;/&lt;a href=&quot;https://jira.whamcloud.com/browse/LU-5457&quot; title=&quot;lvbo_init failed for resource&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-5457&quot;&gt;&lt;del&gt;LU-5457&lt;/del&gt;&lt;/a&gt;.debug.beforereboot.gz&lt;br/&gt;
uploads/&lt;a href=&quot;https://jira.whamcloud.com/browse/LU-5457&quot; title=&quot;lvbo_init failed for resource&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-5457&quot;&gt;&lt;del&gt;LU-5457&lt;/del&gt;&lt;/a&gt;/&lt;a href=&quot;https://jira.whamcloud.com/browse/LU-5457&quot; title=&quot;lvbo_init failed for resource&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-5457&quot;&gt;&lt;del&gt;LU-5457&lt;/del&gt;&lt;/a&gt;.debug.afterrecovery.gz&lt;/p&gt;</comment>
                            <comment id="90990" author="mhanafi" created="Wed, 6 Aug 2014 18:50:09 +0000"  >&lt;p&gt;Recovery stuck&lt;/p&gt;
&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;nbp8-oss4 /proc/fs/lustre/obdfilter/nbp8-OST001d # cat recovery_status 
status: RECOVERING
recovery_start: 1407348997
time_remaining: 0
connected_clients: 5792/6929
req_replay_clients: 0
lock_repay_clients: 17
completed_clients: 5775
evicted_clients: 1137
replayed_requests: 0
queued_requests: 0
next_transno: 30065297379
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
                            <comment id="90992" author="green" created="Wed, 6 Aug 2014 19:01:50 +0000"  >&lt;p&gt;Can you please also attach kernel logs from the OST and MDS?&lt;/p&gt;

&lt;p&gt;I would suspect you have the preallocated object state between the OST and MDT out of alignment, which would make the MDT reject this OST, similar to some other tickets that I cannot remember right now; I will update this when I find them, or based on your other logs.&lt;/p&gt;

&lt;p&gt;The -2 lvbo_init error does not really have much to do with the disconnect problem you are having. It basically means that an object referenced from an MDT does not exist on the OST, which might be a valid race or a result of aborted recovery, though it might also mean some slight damage to the filesystem if the OST did not properly flush object creates the last time around.&lt;/p&gt;</comment>
                            <comment id="90999" author="mhanafi" created="Wed, 6 Aug 2014 19:49:26 +0000"  >&lt;p&gt;uploaded to ftp site.&lt;br/&gt;
oss.messages.gz&lt;br/&gt;
mds.messages.gz&lt;/p&gt;</comment>
                            <comment id="91015" author="mhanafi" created="Wed, 6 Aug 2014 21:09:41 +0000"  >&lt;p&gt;I unmounted and remounted using abort_recovery and that cleared up the issue for now. Will watch to see if it happens again.&lt;/p&gt;</comment>
                            <comment id="91017" author="green" created="Wed, 6 Aug 2014 21:12:47 +0000"  >&lt;p&gt;In the OSS log there appears to be a stuck thread that is trying to sync the disk.&lt;br/&gt;
That also prevents the MDS from connecting. I wonder if it ever completed. The thread is 11603; the trace stamp is at &quot;Aug 5 11:20:35&quot;: INFO: task tgt_recov:11603 blocked for more than 120 seconds.&lt;/p&gt;</comment>
                            <comment id="91026" author="jfc" created="Wed, 6 Aug 2014 22:59:57 +0000"  >&lt;p&gt;Thank you Oleg.&lt;/p&gt;</comment>
                            <comment id="91107" author="mhanafi" created="Thu, 7 Aug 2014 20:55:45 +0000"  >&lt;p&gt;priority on this can be lowered&lt;/p&gt;</comment>
                            <comment id="114688" author="adilger" created="Fri, 8 May 2015 00:26:27 +0000"  >&lt;p&gt;Mahmoud, it looks like this issue could be closed at this point?&lt;/p&gt;</comment>
                            <comment id="121029" author="mhanafi" created="Fri, 10 Jul 2015 19:15:33 +0000"  >&lt;p&gt;We have hit this issue again running with 2.5.3. From the client we get&lt;br/&gt;
pfe21 /nobackupp8/tmrogers/nobackupp30/tmrogers/dougwaves/nv4-lk # ls -l ntrace.431035&lt;br/&gt;
ls: cannot access ntrace.431035: Cannot allocate memory&lt;/p&gt;

&lt;p&gt;and on the server:&lt;br/&gt;
LustreError: 44987:0:(ldlm_resource.c:1188:ldlm_resource_get()) nbp8-OST008f: lvbo_init failed for resource 0xf1a94b:0x0: rc = -2&lt;/p&gt;</comment>
                            <comment id="121032" author="mhanafi" created="Fri, 10 Jul 2015 19:29:47 +0000"  >&lt;p&gt;Looks like there was a network error between the MDS and OSS that may have triggered this. Why wouldn&apos;t the MDS resend its request?&lt;/p&gt;
&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;Jul 10 11:50:00 nbp8-mds1 kernel: LNet: 5824:0:(o2iblnd_cb.c:1895:kiblnd_close_conn_locked()) Closing conn to 10.151.27.75@o2ib: error -110(sending)
Jul 10 11:50:00 nbp8-mds1 kernel: Lustre: 6270:0:(client.c:1940:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1436554008/real 1436554200]  req@ffff8801de9f3c00 x1503598763749580/t0(0) o6-&amp;gt;nbp8-OST00c3-osc-MDT0000@10.151.27.75@o2ib:28/4 lens 664/432 e 0 to 1 dl 1436554238 ref 1 fl Rpc:X/0/ffffffff rc 0/-1
Jul 10 11:50:00 nbp8-mds1 kernel: Lustre: nbp8-OST012b-osc-MDT0000: Connection to nbp8-OST012b (at 10.151.27.75@o2ib) was lost; in progress operations using &lt;span class=&quot;code-keyword&quot;&gt;this&lt;/span&gt; service will wait &lt;span class=&quot;code-keyword&quot;&gt;for&lt;/span&gt; recovery to complete
Jul 10 11:50:00 nbp8-mds1 kernel: Lustre: 6270:0:(client.c:1940:ptlrpc_expire_one_request()) Skipped 7 previous similar messages
Jul 10 11:50:30 nbp8-mds1 kernel: Lustre: nbp8-MDT0000: haven&lt;span class=&quot;code-quote&quot;&gt;&apos;t heard from client nbp8-MDT0000-lwp-OST000d_UUID (at 10.151.27.75@o2ib) in 227 seconds. I think it&apos;&lt;/span&gt;s dead, and I am evicting it. exp ffff881d30834400, cur 1436554230 expire 1436554080 last 1436554003
Jul 10 11:50:34 nbp8-mds1 kernel: Lustre: nbp8-MDT0000: Client a32f8183-5ffd-56e1-9cce-c045e3ed6606 (at 10.153.12.159@o2ib233) reconnecting
Jul 10 11:50:39 nbp8-mds1 kernel: Lustre: 6277:0:(client.c:1940:ptlrpc_expire_one_request()) @@@ Request sent has timed out &lt;span class=&quot;code-keyword&quot;&gt;for&lt;/span&gt; sent delay: [sent 1436554008/real 0]  req@ffff8820df0b3c00 x1503598763750488/t0(0) o13-&amp;gt;nbp8-OST008f-osc-MDT0000@10.151.27.75@o2ib:7/4 lens 224/368 e 0 to 1 dl 1436554238 ref 2 fl Rpc:X/0/ffffffff rc 0/-1
Jul 10 11:50:39 nbp8-mds1 kernel: Lustre: nbp8-OST0111-osc-MDT0000: Connection to nbp8-OST0111 (at 10.151.27.75@o2ib) was lost; in progress operations using &lt;span class=&quot;code-keyword&quot;&gt;this&lt;/span&gt; service will wait &lt;span class=&quot;code-keyword&quot;&gt;for&lt;/span&gt; recovery to complete
Jul 10 11:50:39 nbp8-mds1 kernel: Lustre: Skipped 6 previous similar messages
Jul 10 11:50:39 nbp8-mds1 kernel: Lustre: 6277:0:(client.c:1940:ptlrpc_expire_one_request()) Skipped 4 previous similar messages
Jul 10 11:50:40 nbp8-mds1 kernel: Lustre: 6295:0:(client.c:1940:ptlrpc_expire_one_request()) @@@ Request sent has timed out &lt;span class=&quot;code-keyword&quot;&gt;for&lt;/span&gt; slow reply: [sent 1436554007/real 1436554007]  req@ffff880119fce400 x1503598763748812/t0(0) o13-&amp;gt;nbp8-OST0041-osc-MDT0000@10.151.27.75@o2ib:7/4 lens 224/368 e 0 to 1 dl 1436554237 ref 1 fl Rpc:X/0/ffffffff rc 0/-1
Jul 10 11:50:40 nbp8-mds1 kernel: Lustre: nbp8-OST00dd-osc-MDT0000: Connection to nbp8-OST00dd (at 10.151.27.75@o2ib) was lost; in progress operations using &lt;span class=&quot;code-keyword&quot;&gt;this&lt;/span&gt; service will wait &lt;span class=&quot;code-keyword&quot;&gt;for&lt;/span&gt; recovery to complete
Jul 10 11:50:41 nbp8-mds1 kernel: Lustre: Skipped 1 previous similar message
Jul 10 11:50:43 nbp8-mds1 kernel: Lustre: 6268:0:(client.c:1940:ptlrpc_expire_one_request()) @@@ Request sent has timed out &lt;span class=&quot;code-keyword&quot;&gt;for&lt;/span&gt; sent delay: [sent 1436554013/real 0]  req@ffff8820e9604c00 x1503598763751740/t0(0) o6-&amp;gt;nbp8-OST00dd-osc-MDT0000@10.151.27.75@o2ib:28/4 lens 664/432 e 0 to 1 dl 1436554243 ref 2 fl Rpc:X/0/ffffffff rc 0/-1
Jul 10 11:50:43 nbp8-mds1 kernel: Lustre: 6268:0:(client.c:1940:ptlrpc_expire_one_request()) Skipped 6 previous similar messages
Jul 10 11:50:55 nbp8-mds1 kernel: Lustre: nbp8-MDT0000: Client f3b17551-baab-0ace-17ef-34be763ba268 (at 10.153.10.161@o2ib233) reconnecting
Jul 10 11:50:55 nbp8-mds1 kernel: Lustre: Skipped 1 previous similar message
Jul 10 11:51:04 nbp8-mds1 kernel: Lustre: 6285:0:(client.c:1940:ptlrpc_expire_one_request()) @@@ Request sent has timed out &lt;span class=&quot;code-keyword&quot;&gt;for&lt;/span&gt; sent delay: [sent 1436554032/real 0]  req@ffff8820e1a31c00 x1503598763756756/t0(0) o6-&amp;gt;nbp8-OST00dd-osc-MDT0000@10.151.27.75@o2ib:28/4 lens 664/432 e 0 to 1 dl 1436554262 ref 2 fl Rpc:X/0/ffffffff rc 0/-1
Jul 10 11:51:04 nbp8-mds1 kernel: Lustre: 6285:0:(client.c:1940:ptlrpc_expire_one_request()) Skipped 3 previous similar messages
Jul 10 11:51:13 nbp8-mds1 kernel: Lustre: 6292:0:(client.c:1940:ptlrpc_expire_one_request()) @@@ Request sent has timed out &lt;span class=&quot;code-keyword&quot;&gt;for&lt;/span&gt; sent delay: [sent 1436554042/real 0]  req@ffff883244b63400 x1503598763759424/t0(0) o6-&amp;gt;nbp8-OST000d-osc-MDT0000@10.151.27.75@o2ib:28/4 lens 664/432 e 0 to 1 dl 1436554272 ref 2 fl Rpc:X/0/ffffffff rc 0/-1
Jul 10 11:51:13 nbp8-mds1 kernel: Lustre: 6292:0:(client.c:1940:ptlrpc_expire_one_request()) Skipped 9 previous similar messages
Jul 10 11:51:30 nbp8-mds1 kernel: Lustre: 6272:0:(client.c:1940:ptlrpc_expire_one_request()) @@@ Request sent has timed out &lt;span class=&quot;code-keyword&quot;&gt;for&lt;/span&gt; sent delay: [sent 1436554058/real 0]  req@ffff8820e3c28c00 x1503598763763336/t0(0) o6-&amp;gt;nbp8-OST00a9-osc-MDT0000@10.151.27.75@o2ib:28/4 lens 664/432 e 0 to 1 dl 1436554288 ref 2 fl Rpc:X/0/ffffffff rc 0/-1
Jul 10 11:51:30 nbp8-mds1 kernel: Lustre: 6272:0:(client.c:1940:ptlrpc_expire_one_request()) Skipped 3 previous similar messages
Jul 10 11:51:35 nbp8-mds1 kernel: Lustre: nbp8-MDT0000: Client 6910b65a-5f75-621c-ce1c-d96b21b50225 (at 10.153.11.198@o2ib233) reconnecting
Jul 10 11:51:35 nbp8-mds1 kernel: Lustre: Skipped 5 previous similar messages
Jul 10 11:52:04 nbp8-mds1 kernel: Lustre: 6285:0:(client.c:1940:ptlrpc_expire_one_request()) @@@ Request sent has timed out &lt;span class=&quot;code-keyword&quot;&gt;for&lt;/span&gt; sent delay: [sent 1436554094/real 0]  req@ffff882392070c00 x1503598763774164/t0(0) o6-&amp;gt;nbp8-OST00c3-osc-MDT0000@10.151.27.75@o2ib:28/4 lens 664/432 e 0 to 1 dl 1436554324 ref 2 fl Rpc:X/0/ffffffff rc 0/-1
Jul 10 11:52:04 nbp8-mds1 kernel: Lustre: 6285:0:(client.c:1940:ptlrpc_expire_one_request()) Skipped 28 previous similar messages
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
                            <comment id="121248" author="green" created="Tue, 14 Jul 2015 16:50:58 +0000"  >&lt;p&gt;The MDS does resend requests; the error you see is just informing us that the RPC failed to send, and then we retry.&lt;br/&gt;
In the log I see that statfs and destroy RPCs are affected.&lt;/p&gt;

&lt;p&gt;None of them should result in missing objects on OSTs even if the RPCs never succeeded in resending.&lt;br/&gt;
In fact, no single RPC send failure from the MDS should result in a problem like this; at least I cannot think of one.&lt;/p&gt;

&lt;p&gt;Now, if you have older pre-2.4 clients, those could initiate their own destroys that might lead to this. Or this could be fallout from earlier sync failures, where the OST announced it created some objects but failed to sync that to disk, so after dying and restarting, the objects that were handed out by the MDT from this pool are no longer there. You can probably verify this with some sort of creation-date check on the file that is missing objects.&lt;/p&gt;</comment>
                            <comment id="121961" author="mhanafi" created="Wed, 22 Jul 2015 23:11:48 +0000"  >&lt;p&gt;Is there a way we can map this error message to the specific file?&lt;/p&gt;

&lt;p&gt;Jul 22 15:27:38 nbp8-oss6 kernel: LustreError: 44478:0:(ldlm_resource.c:1188:ldlm_resource_get()) nbp8-OST0039: lvbo_init failed for resource 0x17d2529:0x0: rc = -2&lt;/p&gt;</comment>
                            <comment id="122062" author="green" created="Thu, 23 Jul 2015 20:40:40 +0000"  >&lt;p&gt;Hm, that&apos;s a bit of a tough one.&lt;/p&gt;

&lt;p&gt;If the object were really there, it&apos;s somewhat simple: you mount your OST as ldiskfs:&lt;br/&gt;
mount /device /mnt/somewhere -t ldiskfs -o ro # (OK to do even while the OST is up, if done on the same node)&lt;br/&gt;
then you find the object id in the /mnt/somewhere/O directory (it&apos;s hashed out)&lt;br/&gt;
and then you do something like:&lt;/p&gt;
&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;[root@centos6-9 tests]# ../utils/ll_decode_filter_fid  /mnt/nfs/O/0/d1/4897
/mnt/nfs/O/0/d1/4897: parent=[0x200002342:0xb1:0x0] stripe=0
[root@centos6-9 tests]# ../utils/lfs fid2path /mnt/lustre/ &lt;span class=&quot;code-quote&quot;&gt;&apos;[0x200002342:0xb1:0x0]&apos;&lt;/span&gt;
/mnt/lustre/d102d/d102d.sanity/file2-1-2
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;So ll_decode_filter_fid gives you parent file fid and then you use that with lfs fid2path to get to the file name.&lt;/p&gt;

&lt;p&gt;Now, with no object you cannot get the parent file name, but the object itself seems to be in the old format (is this an old filesystem that was upgraded multiple times in the past?).&lt;br/&gt;
There&apos;s certainly a more labor-intensive way of finding what file it belongs to:&lt;br/&gt;
Basically, you print out the striping for all files on the OST in question with:&lt;/p&gt;
&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;&lt;span class=&quot;code-keyword&quot;&gt;for&lt;/span&gt; i in `../utils/lfs  find /mnt/lustre --ost lustre-OST0000_UUID ` ; &lt;span class=&quot;code-keyword&quot;&gt;do&lt;/span&gt; ../utils/lfs getstripe $i ; done
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;and then in this output you need to look for your object id (0x17d2529 = 24978729, so that is the number you are looking for) in entries like:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;lmm_stripe_count:   1
lmm_stripe_size:    1048576
lmm_pattern:        1
lmm_layout_gen:     0
lmm_stripe_offset:  0
	obdidx		 objid		 objid		 group
	     0	           232	         0xe8	             0
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;Note that obdidx needs to match the OST you are looking for (for multi-striped files); in your case 0x39 converted to decimal is 57. So you are essentially looking for &apos;57\s+24978729&apos; in the output.&lt;/p&gt;
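&lt;p&gt;A minimal shell sketch of this lookup (the helper name find_file_for_objid is hypothetical, and it assumes a client mountpoint at /mnt/lustre and the OST UUID nbp8-OST0039_UUID; adjust both to your system):&lt;/p&gt;

```shell
# Hypothetical helper sketching the procedure above: convert the object id
# from the lvbo_init message to decimal, then scan lfs getstripe output for
# a stripe entry whose obdidx/objid pair matches.
find_file_for_objid() {
    mnt=$1; ost_uuid=$2; ostidx_hex=$3; objid_hex=$4
    objid=$(printf '%d' "$objid_hex")      # e.g. 0x17d2529 is 24978729
    ostidx=$(printf '%d' "$ostidx_hex")    # e.g. 0x39 is 57
    # List every file with a stripe on this OST, then match the objid.
    lfs find "$mnt" --ost "$ost_uuid" | while read -r f; do
        if lfs getstripe "$f" | grep -qE "^[[:space:]]*${ostidx}[[:space:]]+${objid}([[:space:]]|\$)"; then
            echo "$f"
        fi
    done
}
# Example invocation (requires a live Lustre client mount):
# find_file_for_objid /mnt/lustre nbp8-OST0039_UUID 0x39 0x17d2529
```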

&lt;p&gt;This will only work if the file referencing this object still exists and is visible in the namespace.&lt;br/&gt;
Typically you can spot such files easily: when you do &quot;ls&quot; in a directory, you get a bunch of errors from ls about being unable to stat some files, followed by the listing of the ones it was able to stat (ls -l will show the problematic ones with a bunch of question marks in the permission-bits space).&lt;/p&gt;</comment>
                            <comment id="130565" author="mhanafi" created="Thu, 15 Oct 2015 23:47:44 +0000"  >&lt;p&gt;This can be closed&lt;/p&gt;</comment>
                            <comment id="130587" author="pjones" created="Fri, 16 Oct 2015 03:41:03 +0000"  >&lt;p&gt;ok Mahmoud&lt;/p&gt;</comment>
                    </comments>
                    <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzwt2n:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>15197</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>