<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 01:19:21 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary, append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
<language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-1749] llog_lvfs_create()) error looking up logfile</title>
                <link>https://jira.whamcloud.com/browse/LU-1749</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;We had an MDS (running &lt;a href=&quot;https://github.com/chaos/lustre/tree/2.1.1-17chaos&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;2.1.1-17chaos&lt;/a&gt;) lock up due to bug &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-1276&quot; title=&quot;MDS threads all stuck in jbd2_journal_start&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-1276&quot;&gt;&lt;del&gt;LU-1276&lt;/del&gt;&lt;/a&gt;.  The admins rebooted the node to work around the issue, and after the reboot when the MDS started up we hit the following error:&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;2012-08-14 13:48:55 LustreError: 3697:0:(llog_lvfs.c:616:llog_lvfs_create()) error looking up logfile 0x1a768b0a:0xa2f17dca: rc -116
2012-08-14 13:48:55 LustreError: 3697:0:(llog_cat.c:174:llog_cat_id2handle()) error opening log id 0x1a768b0a:a2f17dca: rc -116
2012-08-14 13:48:55 LustreError: 3697:0:(llog_obd.c:318:cat_cancel_cb()) Cannot find handle for log 0x1a768b0a
2012-08-14 13:48:55 LustreError: 3472:0:(llog_obd.c:391:llog_obd_origin_setup()) llog_process() with cat_cancel_cb failed: -116
2012-08-14 13:48:55 LustreError: 3472:0:(llog_obd.c:218:llog_setup_named()) obd lsa-OST00bd-osc ctxt 2 lop_setup=ffffffffa05b0a60 failed -116
2012-08-14 13:48:55 LustreError: 3472:0:(osc_request.c:4186:__osc_llog_init()) failed LLOG_MDS_OST_ORIG_CTXT
2012-08-14 13:48:55 LustreError: 3472:0:(osc_request.c:4203:__osc_llog_init()) osc &apos;lsa-OST00bd-osc&apos; tgt &apos;mdd_obd-lsa-MDT0000&apos; catid ffff8808312bd860 rc=-116
2012-08-14 13:48:55 LustreError: 3472:0:(osc_request.c:4205:__osc_llog_init()) logid 0x1a7680f6:0x615e782e
2012-08-14 13:48:55 LustreError: 3472:0:(osc_request.c:4233:osc_llog_init()) rc: -116
2012-08-14 13:48:55 LustreError: 3472:0:(lov_log.c:248:lov_llog_init()) error osc_llog_init idx 189 osc &apos;lsa-OST00bd-osc&apos; tgt &apos;mdd_obd-lsa-MDT0000&apos; (rc=-116)
2012-08-14 13:48:55 LustreError: 3698:0:(llog_lvfs.c:616:llog_lvfs_create()) error looking up logfile 0x1a768b0a:0xa2f17dca: rc -116
2012-08-14 13:48:55 LustreError: 3698:0:(llog_cat.c:174:llog_cat_id2handle()) error opening log id 0x1a768b0a:a2f17dca: rc -116
2012-08-14 13:48:55 LustreError: 3698:0:(llog_obd.c:318:cat_cancel_cb()) Cannot find handle for log 0x1a768b0a
2012-08-14 13:48:55 LustreError: 3407:0:(llog_obd.c:391:llog_obd_origin_setup()) llog_process() with cat_cancel_cb failed: -116
2012-08-14 13:48:55 LustreError: 3407:0:(llog_obd.c:218:llog_setup_named()) obd lsa-OST00bd-osc ctxt 2 lop_setup=ffffffffa05b0a60 failed -116
2012-08-14 13:48:55 LustreError: 3407:0:(osc_request.c:4186:__osc_llog_init()) failed LLOG_MDS_OST_ORIG_CTXT
2012-08-14 13:48:55 LustreError: 3407:0:(osc_request.c:4203:__osc_llog_init()) osc &apos;lsa-OST00bd-osc&apos; tgt &apos;mdd_obd-lsa-MDT0000&apos; catid ffff8804333af960 rc=-116
2012-08-14 13:48:55 LustreError: 3407:0:(osc_request.c:4205:__osc_llog_init()) logid 0x1a7680f6:0x615e782e
2012-08-14 13:48:56 LustreError: 3407:0:(osc_request.c:4233:osc_llog_init()) rc: -116
2012-08-14 13:48:56 LustreError: 3407:0:(lov_log.c:248:lov_llog_init()) error osc_llog_init idx 189 osc &apos;lsa-OST00bd-osc&apos; tgt &apos;mdd_obd-lsa-MDT0000&apos; (rc=-116)
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;I don&apos;t see a file in the OBJECTS directory that seems to match 0x1a768b0a:0xa2f17dca (if that is where we are looking).  However, -116 is -ESTALE, so I&apos;m not sure that we&apos;re even getting to the lower-level lookup.  It may be that mds_lvfs_fid2dentry() is returning -ESTALE because the id is 0.&lt;/p&gt;

&lt;p&gt;This has left the OST connection &quot;inactive&quot; on the MDS, so any users with data on that OST are currently dead in the water.&lt;/p&gt;
</description>
                <environment>&lt;a href=&quot;https://github.com/chaos/lustre/tree/2.1.1-17chaos&quot;&gt;https://github.com/chaos/lustre/tree/2.1.1-17chaos&lt;/a&gt;</environment>
        <key id="15496">LU-1749</key>
            <summary>llog_lvfs_create()) error looking up logfile</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="3" iconUrl="https://jira.whamcloud.com/images/icons/priorities/major.svg">Major</priority>
                        <status id="6" iconUrl="https://jira.whamcloud.com/images/icons/statuses/closed.png" description="The issue is considered finished, the resolution is correct. Issues which are closed can be reopened.">Closed</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="2">Won&apos;t Fix</resolution>
                                        <assignee username="laisiyao">Lai Siyao</assignee>
                                    <reporter username="morrone">Christopher Morrone</reporter>
                        <labels>
                            <label>llnl</label>
                    </labels>
                <created>Wed, 15 Aug 2012 14:37:18 +0000</created>
                <updated>Tue, 16 Aug 2016 16:37:36 +0000</updated>
                            <resolved>Tue, 16 Aug 2016 16:37:36 +0000</resolved>
                                                    <fixVersion>Lustre 2.3.0</fixVersion>
                    <fixVersion>Lustre 2.4.0</fixVersion>
                    <fixVersion>Lustre 2.1.4</fixVersion>
                                        <due></due>
                            <votes>0</votes>
                                    <watches>8</watches>
                    <comments>
                            <comment id="43281" author="pjones" created="Wed, 15 Aug 2012 15:10:07 +0000"  >&lt;p&gt;Oleg is looking into this one&lt;/p&gt;</comment>
                            <comment id="43282" author="green" created="Wed, 15 Aug 2012 15:10:40 +0000"  >&lt;p&gt;Do you have any confirmation of the clients&apos; inability to access data on that OST?&lt;br/&gt;
If it&apos;s only the MDT that cannot connect, then the end result would be an inability to create any new objects there, but clients talk to OSTs directly and so should not be affected by this glitch.&lt;/p&gt;</comment>
                            <comment id="43287" author="green" created="Wed, 15 Aug 2012 15:56:41 +0000"  >&lt;p&gt;Ok, now regarding the ESTALE theory, there are way too many ways to get that.&lt;/p&gt;

&lt;p&gt;The oid 0x1a768b0a is itself suspect; you need to have a pretty sizeable fs and use a lot of the inodes there, otherwise (quoting from ext3_nfs_get_inode):&lt;/p&gt;
&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;        &lt;span class=&quot;code-keyword&quot;&gt;if&lt;/span&gt; (ino &amp;gt; le32_to_cpu(EXT3_SB(sb)-&amp;gt;s_es-&amp;gt;s_inodes_count))
                &lt;span class=&quot;code-keyword&quot;&gt;return&lt;/span&gt; ERR_PTR(-ESTALE);
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;Do you have more than 443976458 inodes on this fs? If not, then it appears that the CATALOGS file or a catalog in the llogs is corrupted and contains garbage.&lt;br/&gt;
(Now, how the corruption occurred I don&apos;t know.)&lt;/p&gt;

&lt;p&gt;Another possibility is if this inode (443976458) does exist, but has a different generation. (can you check that please too with debugfs?)&lt;/p&gt;</comment>
                            <comment id="43288" author="morrone" created="Wed, 15 Aug 2012 16:31:48 +0000"  >&lt;p&gt;Sigh, actually I didn&apos;t even think about that.  The admins led me to believe it was a serious problem, but now that I look it appears that the clients are all connected to the OST just fine.&lt;/p&gt;

&lt;p&gt;I think we can lower the priority on this one.&lt;/p&gt;</comment>
                            <comment id="43289" author="morrone" created="Wed, 15 Aug 2012 16:41:03 +0000"  >&lt;p&gt;We created the filesystem with 1074003968 inodes.  210379586 are in use and 863624382 are free (as reported by df -i on the mds).&lt;/p&gt;</comment>
                            <comment id="43293" author="green" created="Wed, 15 Aug 2012 17:40:45 +0000"  >&lt;p&gt;ok, so the inode is real and valid. Can you please check whether it&apos;s in use and what generation it has?&lt;br/&gt;
(also what file does it correspond to)&lt;/p&gt;</comment>
                            <comment id="43295" author="green" created="Wed, 15 Aug 2012 17:46:06 +0000"  >&lt;p&gt;Overall (pending verification of inode being in use by some other file), I suspect this will turn out to be another instance of bug 22658 (commit id c4f5d67193d61a3948bc1e01c5d602b8ffb7d011), only instead of llog being already deleted, it&apos;s deleted and the inode reused for something else producing ESTALE.&lt;br/&gt;
We could probably deal with ESTALE in the same way we do with ENOENT, though that would mask some pathological cases like asking for an llog with an id of 0 (or otherwise invalid); those I guess we&apos;ll need to catch in a more visible way.&lt;/p&gt;</comment>
                            <comment id="43298" author="morrone" created="Wed, 15 Aug 2012 19:04:57 +0000"  >&lt;p&gt;It looks like inode 443976458 is /OBJECTS/1a768b0a:ea7675ee, which debugfs&apos;s &quot;stat&quot; shows as generation number 3933631982.  Is that the right generation number?  Is a number that high normal?&lt;/p&gt;

&lt;p&gt;So it looks like you are correct about the inode being reused.&lt;/p&gt;
</comment>
                            <comment id="43304" author="green" created="Wed, 15 Aug 2012 21:26:58 +0000"  >&lt;p&gt;Well, 0xea7675ee IS 3933631982, which means the inode now corresponds to a valid llog, but a different one than we expect. Weird that it coincided like that. There is no upper limit on generation, so that&apos;s fine.&lt;/p&gt;

&lt;p&gt;After all, I suspect the end cause is the same: a reused llog inode.&lt;br/&gt;
As such your problem will resolve all by itself once this new llog is deleted, or you can expedite events by deleting the file now (at the potential expense of some object leakage, or not).&lt;/p&gt;

&lt;p&gt;And as a fix we need to also handle ESTALE in a similar way to how we handle ENOENT now in the llog create path.&lt;/p&gt;</comment>
                            <comment id="43329" author="pjones" created="Thu, 16 Aug 2012 08:58:54 +0000"  >&lt;p&gt;Lai&lt;/p&gt;

&lt;p&gt;Could you please create a fix to address the issue Oleg has identified in his analysis?&lt;/p&gt;

&lt;p&gt;Thanks&lt;/p&gt;

&lt;p&gt;Peter&lt;/p&gt;</comment>
                            <comment id="43412" author="laisiyao" created="Fri, 17 Aug 2012 11:20:01 +0000"  >&lt;p&gt;review is on &lt;a href=&quot;http://review.whamcloud.com/#change,3708&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/#change,3708&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="43823" author="nedbass" created="Mon, 27 Aug 2012 14:56:24 +0000"  >&lt;blockquote&gt;&lt;p&gt;As such your problem will resolve all by itself once this new llog is deleted,&lt;/p&gt;&lt;/blockquote&gt;

&lt;p&gt;What normally causes the llog to be deleted by itself?  The inactive OST breaks quota reporting, so we need to decide whether to take down the MDS to manually remove the file.&lt;/p&gt;</comment>
                            <comment id="44680" author="dmoreno" created="Wed, 12 Sep 2012 09:23:23 +0000"  >&lt;p&gt;At CEA we also hit this bug, so we&apos;re interested in this patch.&lt;/p&gt;

&lt;p&gt;Thanks,&lt;/p&gt;</comment>
                            <comment id="45820" author="marc@llnl.gov" created="Mon, 1 Oct 2012 17:17:06 +0000"  >&lt;p&gt;Has any progress been made on this?&lt;/p&gt;</comment>
                            <comment id="45911" author="laisiyao" created="Wed, 3 Oct 2012 01:15:52 +0000"  >&lt;p&gt;Patch for b2_3 is at: &lt;a href=&quot;http://review.whamcloud.com/#change,4163&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/#change,4163&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="46651" author="pjones" created="Tue, 16 Oct 2012 22:50:36 +0000"  >&lt;p&gt;Landed for 2.3 and 2.4&lt;/p&gt;</comment>
                            <comment id="46675" author="marc@llnl.gov" created="Wed, 17 Oct 2012 11:48:04 +0000"  >&lt;p&gt;Is there a patch for 2.1?&lt;/p&gt;</comment>
                            <comment id="46679" author="morrone" created="Wed, 17 Oct 2012 12:56:30 +0000"  >&lt;p&gt;Patch looks pretty simple, applies cleanly to 2.1.&lt;/p&gt;</comment>
                            <comment id="48786" author="emoly.liu" created="Wed, 5 Dec 2012 01:22:27 +0000"  >&lt;p&gt;Patch for b2_1 is at &lt;a href=&quot;http://review.whamcloud.com/4742&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/4742&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="55123" author="nedbass" created="Fri, 29 Mar 2013 23:08:01 +0000"  >&lt;p&gt;We&apos;re still affected by this bug. OST lsa-OST00bd remains inactive on the MDS. MDS is at tag 2.1.4-3chaos which includes patch  &lt;a href=&quot;http://review.whamcloud.com/4742&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/4742&lt;/a&gt;.  The file /OBJECTS/1a7680f6:615e782e does not exist on the MDT.&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;2013-03-20 10:05:33 LustreError: 3488:0:(llog_lvfs.c:616:llog_lvfs_create()) error looking up logfile 0x1a7680f6:0x615e782e: rc -116
2013-03-20 10:05:33 LustreError: 3488:0:(llog_obd.c:220:llog_setup_named()) obd lsa-OST00bd-osc ctxt 2 lop_setup=ffffffffa06a0f20 failed -116
2013-03-20 10:05:33 LustreError: 3488:0:(osc_request.c:4229:__osc_llog_init()) failed LLOG_MDS_OST_ORIG_CTXT
2013-03-20 10:05:33 LustreError: 3488:0:(osc_request.c:4246:__osc_llog_init()) osc &apos;lsa-OST00bd-osc&apos; tgt &apos;mdd_obd-lsa-MDT0000&apos; catid ffff88022f6ff8c0 rc=-116
2013-03-20 10:05:33 LustreError: 3488:0:(osc_request.c:4248:__osc_llog_init()) logid 0x1a7680f6:0x615e782e
2013-03-20 10:05:33 LustreError: 3488:0:(osc_request.c:4276:osc_llog_init()) rc: -116
2013-03-20 10:05:33 LustreError: 3488:0:(lov_log.c:248:lov_llog_init()) error osc_llog_init idx 189 osc &apos;lsa-OST00bd-osc&apos; tgt &apos;mdd_obd-lsa-MDT0000&apos; (rc=-116)
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
</comment>
                            <comment id="55124" author="nedbass" created="Fri, 29 Mar 2013 23:08:34 +0000"  >&lt;p&gt;Reopening.&lt;/p&gt;</comment>
                            <comment id="55419" author="nedbass" created="Wed, 3 Apr 2013 19:37:33 +0000"  >&lt;p&gt;It appears that inodes 443973878 and 443975021 were again reused for valid llogs (note the inode and generation numbers match the file names):&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;# levi-mds1 /root &amp;gt; debugfs -c -R &apos;stat /OBJECTS/1a7680f6:46b56639&apos; /dev/sdb    
debugfs 1.41.12 (17-May-2010)                                                   
/dev/sdb: catastrophic mode - not reading inode or group bitmaps                
Inode: 443973878   Type: regular    Mode:  0666   Flags: 0x80000                
Generation: 1186293305    Version: 0x00000000:00000000        

# levi-mds1 /root &amp;gt; printf &quot;%d:%d\n&quot; 0x1a7680f6 0x46b56639
443973878:1186293305

# levi-mds1 /root &amp;gt; debugfs -c -R &apos;stat /OBJECTS/1a76856d:a619a721&apos; /dev/sdb 
debugfs 1.41.12 (17-May-2010)
/dev/sdb: catastrophic mode - not reading inode or group bitmaps
Inode: 443975021   Type: regular    Mode:  0666   Flags: 0x0
Generation: 2786699041    Version: 0x00000000:00000000

# levi-mds1 /root &amp;gt; printf &quot;%d:%d\n&quot; 0x1a76856d 0xa619a721
443975021:2786699041
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;This explains why we are still seeing -ESTALE, as there still seems to be a catalog entry referring to those inodes but with a different generation number:&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;LustreError: 3488:0:(llog_lvfs.c:616:llog_lvfs_create()) error looking up logfile 0x1a7680f6:0x615e782e: rc -116
LustreError: 3488:0:(llog_obd.c:220:llog_setup_named()) obd lsa-OST00bd-osc ctxt 2 lop_setup=ffffffffa06a0f20 failed -116
LustreError: 3488:0:(osc_request.c:4229:__osc_llog_init()) failed LLOG_MDS_OST_ORIG_CTXT
LustreError: 3488:0:(osc_request.c:4246:__osc_llog_init()) osc &apos;lsa-OST00bd-osc&apos; tgt &apos;mdd_obd-lsa-MDT0000&apos; catid ffff88022f6ff8c0 rc=-116
LustreError: 3488:0:(osc_request.c:4248:__osc_llog_init()) logid 0x1a7680f6:0x615e782e
LustreError: 3488:0:(osc_request.c:4276:osc_llog_init()) rc: -116
LustreError: 3488:0:(lov_log.c:248:lov_llog_init()) error osc_llog_init idx 189 osc &apos;lsa-OST00bd-osc&apos; tgt &apos;mdd_obd-lsa-MDT0000&apos; (rc=-116)
LustreError: 3770:0:(llog_lvfs.c:616:llog_lvfs_create()) error looking up logfile 0x1a768d8e:0xead6b12b: rc -116
LustreError: 3770:0:(llog_cat.c:174:llog_cat_id2handle()) error opening log id 0x1a768d8e:ead6b12b: rc -116
LustreError: 3770:0:(llog_obd.c:320:cat_cancel_cb()) Cannot find handle for log 0x1a768d8e: -116
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;But, even with the patch for this bug, ESTALE is not handled gracefully and the OST is left deactivated:&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;LustreError: 5254:0:(lov_log.c:160:lov_llog_origin_connect()) error osc_llog_connect tgt 189 (-107)
LustreError: 5254:0:(mds_lov.c:872:__mds_lov_synchronize()) mdd_obd-lsa-MDT0000: lsa-OST00bd_UUID failed at llog_origin_connect: -107
Lustre: lsa-OST00bd_UUID: Sync failed deactivating: rc -107
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
                            <comment id="55420" author="nedbass" created="Wed, 3 Apr 2013 19:39:06 +0000"  >&lt;p&gt;Raising priority because this bug breaks quota reporting.&lt;/p&gt;</comment>
                            <comment id="57710" author="laisiyao" created="Mon, 6 May 2013 05:45:53 +0000"  >&lt;p&gt;landed.&lt;/p&gt;</comment>
                            <comment id="162059" author="simmonsja" created="Tue, 16 Aug 2016 16:37:36 +0000"  >&lt;p&gt;Old ticket for unsupported version&lt;/p&gt;</comment>
                    </comments>
                    <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                    <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10490" key="com.atlassian.jira.plugin.system.customfieldtypes:datepicker">
                        <customfieldname>End date</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>Thu, 26 Jun 2014 14:37:18 +0000</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzv5ev:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>4411</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10493" key="com.atlassian.jira.plugin.system.customfieldtypes:datepicker">
                        <customfieldname>Start date</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>Wed, 15 Aug 2012 14:37:18 +0000</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                </customfields>
    </item>
</channel>
</rss>