<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 02:12:48 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92">
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
<language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-7888] kernel: INFO: task mount.lustre:22219 blocked for more than 120 seconds</title>
                <link>https://jira.whamcloud.com/browse/LU-7888</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;During a test the following caused the test to fail:&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;Mar  2 15:12:36 lotus-42vm5 kernel: LDISKFS-fs (sdc): mounted filesystem with ordered data mode. quota=on. Opts: 
Mar  2 15:12:36 lotus-42vm5 kernel: 
Mar  2 15:12:38 lotus-42vm5 kernel: LDISKFS-fs (sda): mounted filesystem with ordered data mode. quota=on. Opts: 
Mar  2 15:12:38 lotus-42vm5 kernel: 
Mar  2 15:12:38 lotus-42vm5 kernel: LDISKFS-fs (sde): mounted filesystem with ordered data mode. quota=on. Opts: 
Mar  2 15:12:38 lotus-42vm5 kernel: 
Mar  2 15:12:38 lotus-42vm5 kernel: LDISKFS-fs (sdd): mounted filesystem with ordered data mode. quota=on. Opts: 
Mar  2 15:12:38 lotus-42vm5 kernel: 
Mar  2 15:13:03 lotus-42vm5 kernel: LDISKFS-fs (sdc): mounted filesystem with ordered data mode. quota=on. Opts: 
Mar  2 15:13:03 lotus-42vm5 kernel: 
Mar  2 15:13:03 lotus-42vm5 kernel: LDISKFS-fs (sdc): mounted filesystem with ordered data mode. quota=on. Opts: 
Mar  2 15:13:03 lotus-42vm5 kernel: 
Mar  2 15:13:09 lotus-42vm5 kernel: LDISKFS-fs (sdd): mounted filesystem with ordered data mode. quota=on. Opts: 
Mar  2 15:13:09 lotus-42vm5 kernel: 
Mar  2 15:13:09 lotus-42vm5 kernel: LDISKFS-fs (sdd): mounted filesystem with ordered data mode. quota=on. Opts: 
Mar  2 15:13:09 lotus-42vm5 kernel: 
Mar  2 15:13:09 lotus-42vm5 kernel: Lustre: ctl-testfs-MDT0000: No data found on store. Initialize space
Mar  2 15:13:09 lotus-42vm5 kernel: Lustre: testfs-MDT0000: new disk, initializing
Mar  2 15:13:09 lotus-42vm5 kernel: Lustre: Failing over testfs-MDT0000
Mar  2 15:13:10 lotus-42vm5 kernel: Lustre: server umount testfs-MDT0000 complete
Mar  2 15:13:10 lotus-42vm5 kernel: LDISKFS-fs (sda): mounted filesystem with ordered data mode. quota=on. Opts: 
Mar  2 15:13:10 lotus-42vm5 kernel: 
Mar  2 15:13:10 lotus-42vm5 kernel: LDISKFS-fs (sda): mounted filesystem with ordered data mode. quota=on. Opts: 
Mar  2 15:13:10 lotus-42vm5 kernel: 
Mar  2 15:13:10 lotus-42vm5 kernel: Lustre: srv-testfs-MDT0001: No data found on store. Initialize space
Mar  2 15:13:10 lotus-42vm5 kernel: Lustre: Skipped 1 previous similar message
Mar  2 15:13:10 lotus-42vm5 kernel: Lustre: testfs-MDT0001: new disk, initializing
Mar  2 15:13:10 lotus-42vm5 kernel: LustreError: 137-5: testfs-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
Mar  2 15:13:41 lotus-42vm5 kernel: Lustre: 7592:0:(client.c:2048:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1456960415/real 1456960415]  req@ffff880061d19c80 x1527730997821688/t0(0) o38-&amp;gt;testfs-MDT0000-osp-MDT0001@10.14.82.129@tcp:24/4 lens 520/544 e 0 to 1 dl 1456960421 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1
Mar  2 15:13:42 lotus-42vm5 kernel: LDISKFS-fs (sdd): mounted filesystem with ordered data mode. quota=on. Opts: 
Mar  2 15:13:42 lotus-42vm5 kernel: 
Mar  2 15:14:00 lotus-42vm5 kernel: LustreError: 137-5: testfs-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
Mar  2 15:14:00 lotus-42vm5 kernel: LustreError: Skipped 1 previous similar message
Mar  2 15:14:36 lotus-42vm5 kernel: Lustre: 7592:0:(client.c:2048:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1456960465/real 1456960465]  req@ffff88005e254980 x1527730997821712/t0(0) o38-&amp;gt;testfs-MDT0000-osp-MDT0001@10.14.82.129@tcp:24/4 lens 520/544 e 0 to 1 dl 1456960476 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1
Mar  2 15:14:36 lotus-42vm5 kernel: Lustre: 7592:0:(client.c:2048:ptlrpc_expire_one_request()) Skipped 1 previous similar message
Mar  2 15:14:50 lotus-42vm5 kernel: LustreError: 137-5: testfs-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
Mar  2 15:14:50 lotus-42vm5 kernel: LustreError: Skipped 1 previous similar message
Mar  2 15:15:31 lotus-42vm5 kernel: Lustre: 7592:0:(client.c:2048:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1456960515/real 1456960515]  req@ffff88005e254c80 x1527730997821736/t0(0) o38-&amp;gt;testfs-MDT0000-osp-MDT0001@10.14.82.129@tcp:24/4 lens 520/544 e 0 to 1 dl 1456960531 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1
Mar  2 15:15:31 lotus-42vm5 kernel: Lustre: 7592:0:(client.c:2048:ptlrpc_expire_one_request()) Skipped 1 previous similar message
Mar  2 15:15:40 lotus-42vm5 kernel: LustreError: 137-5: testfs-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
Mar  2 15:15:40 lotus-42vm5 kernel: LustreError: Skipped 1 previous similar message
Mar  2 15:15:44 lotus-42vm5 kernel: INFO: task mount.lustre:22219 blocked for more than 120 seconds.
Mar  2 15:15:44 lotus-42vm5 kernel:      Not tainted 2.6.32-573.18.1.el6_lustre.x86_64 #1
Mar  2 15:15:44 lotus-42vm5 kernel: &quot;echo 0 &amp;gt; /proc/sys/kernel/hung_task_timeout_secs&quot; disables this message.
Mar  2 15:15:44 lotus-42vm5 kernel: mount.lustre  D 0000000000000000     0 22219  22217 0x00000080
Mar  2 15:15:44 lotus-42vm5 kernel: ffff88007840f808 0000000000000082 ffff880002215a00 ffff88007bb26f4b
Mar  2 15:15:44 lotus-42vm5 kernel: ffffffffa12a9e1b ffffffffa12a9e19 ffff88007840f828 ffff88007bb26f78
Mar  2 15:15:44 lotus-42vm5 kernel: ffff88007840f818 ffffffff8129caf8 ffff88007c5c45f8 ffff88007840ffd8
Mar  2 15:15:44 lotus-42vm5 kernel: Call Trace:
Mar  2 15:15:44 lotus-42vm5 kernel: [&amp;lt;ffffffff8129caf8&amp;gt;] ? vsnprintf+0x218/0x5e0
Mar  2 15:15:44 lotus-42vm5 kernel: [&amp;lt;ffffffff8153bc06&amp;gt;] __mutex_lock_slowpath+0x96/0x210
Mar  2 15:15:44 lotus-42vm5 kernel: [&amp;lt;ffffffff8129cf64&amp;gt;] ? snprintf+0x34/0x40
Mar  2 15:15:44 lotus-42vm5 kernel: [&amp;lt;ffffffffa125404f&amp;gt;] ? server_name2fsname+0x6f/0x90 [obdclass]
Mar  2 15:15:44 lotus-42vm5 kernel: [&amp;lt;ffffffffa040a1f0&amp;gt;] ? qsd_conn_callback+0x0/0x180 [lquota]
Mar  2 15:15:44 lotus-42vm5 kernel: [&amp;lt;ffffffff8153b72b&amp;gt;] mutex_lock+0x2b/0x50
Mar  2 15:15:44 lotus-42vm5 kernel: [&amp;lt;ffffffffa128ac3f&amp;gt;] lustre_register_lwp_item+0xdf/0xab0 [obdclass]
Mar  2 15:15:44 lotus-42vm5 kernel: [&amp;lt;ffffffffa040bd09&amp;gt;] qsd_prepare+0x949/0x11b0 [lquota]
Mar  2 15:15:44 lotus-42vm5 kernel: [&amp;lt;ffffffffa0464c82&amp;gt;] osd_prepare+0x132/0x720 [osd_ldiskfs]
Mar  2 15:15:44 lotus-42vm5 kernel: [&amp;lt;ffffffffa08e5640&amp;gt;] lod_prepare+0xf0/0x1e0 [lod]
Mar  2 15:15:44 lotus-42vm5 kernel: [&amp;lt;ffffffffa095d0af&amp;gt;] mdd_prepare+0xef/0x1280 [mdd]
Mar  2 15:15:44 lotus-42vm5 kernel: [&amp;lt;ffffffffa149f81b&amp;gt;] ? tgt_ses_key_init+0x6b/0x190 [ptlrpc]
Mar  2 15:15:44 lotus-42vm5 kernel: [&amp;lt;ffffffffa125ff2d&amp;gt;] ? keys_fill+0xdd/0x1c0 [obdclass]
Mar  2 15:15:44 lotus-42vm5 kernel: [&amp;lt;ffffffffa07faf76&amp;gt;] mdt_prepare+0x56/0x3b0 [mdt]
Mar  2 15:15:44 lotus-42vm5 kernel: [&amp;lt;ffffffffa1264403&amp;gt;] ? lu_context_init+0xa3/0x240 [obdclass]
Mar  2 15:15:44 lotus-42vm5 kernel: [&amp;lt;ffffffffa128a51f&amp;gt;] server_start_targets+0x176f/0x1db0 [obdclass]
Mar  2 15:15:44 lotus-42vm5 kernel: [&amp;lt;ffffffffa1252550&amp;gt;] ? class_config_llog_handler+0x0/0x17b0 [obdclass]
Mar  2 15:15:44 lotus-42vm5 kernel: [&amp;lt;ffffffffa1291595&amp;gt;] server_fill_super+0xbe5/0x1a7c [obdclass]
Mar  2 15:15:44 lotus-42vm5 kernel: [&amp;lt;ffffffffa125cc82&amp;gt;] lustre_fill_super+0xa82/0x2150 [obdclass]
Mar  2 15:15:44 lotus-42vm5 kernel: [&amp;lt;ffffffffa125c200&amp;gt;] ? lustre_fill_super+0x0/0x2150 [obdclass]
Mar  2 15:15:44 lotus-42vm5 kernel: [&amp;lt;ffffffff8119567f&amp;gt;] get_sb_nodev+0x5f/0xa0
Mar  2 15:15:44 lotus-42vm5 kernel: [&amp;lt;ffffffffa1253de5&amp;gt;] lustre_get_sb+0x25/0x30 [obdclass]
Mar  2 15:15:44 lotus-42vm5 kernel: [&amp;lt;ffffffff81194cbb&amp;gt;] vfs_kern_mount+0x7b/0x1b0
Mar  2 15:15:44 lotus-42vm5 kernel: [&amp;lt;ffffffff81194e62&amp;gt;] do_kern_mount+0x52/0x130
Mar  2 15:15:44 lotus-42vm5 kernel: [&amp;lt;ffffffff811b6e1b&amp;gt;] do_mount+0x2fb/0x930
Mar  2 15:15:44 lotus-42vm5 kernel: [&amp;lt;ffffffff811b74e0&amp;gt;] sys_mount+0x90/0xe0
Mar  2 15:15:44 lotus-42vm5 kernel: [&amp;lt;ffffffff8100b0d2&amp;gt;] system_call_fastpath+0x16/0x1b
Mar  2 15:15:44 lotus-42vm5 kernel: INFO: task mount.lustre:22496 blocked for more than 120 seconds.
Mar  2 15:15:44 lotus-42vm5 kernel:      Not tainted 2.6.32-573.18.1.el6_lustre.x86_64 #1
Mar  2 15:15:44 lotus-42vm5 kernel: &quot;echo 0 &amp;gt; /proc/sys/kernel/hung_task_timeout_secs&quot; disables this message.
Mar  2 15:15:44 lotus-42vm5 kernel: mount.lustre  D 0000000000000000     0 22496  22495 0x00000080
Mar  2 15:15:44 lotus-42vm5 kernel: ffff88006083f908 0000000000000082 ffffffffffffffff 0000000000000000
Mar  2 15:15:44 lotus-42vm5 kernel: 0000000000000001 77fa41c5810ac58a 0000000000000e3b ffff880077fa41c5
Mar  2 15:15:44 lotus-42vm5 kernel: 0000000000000000 0000000affffffff ffff880077e805f8 ffff88006083ffd8
Mar  2 15:15:44 lotus-42vm5 kernel: Call Trace:
Mar  2 15:15:44 lotus-42vm5 kernel: [&amp;lt;ffffffffa036db53&amp;gt;] ? libcfs_debug_vmsg2+0x5e3/0xbe0 [libcfs]
Mar  2 15:15:44 lotus-42vm5 kernel: [&amp;lt;ffffffff8153bc06&amp;gt;] __mutex_lock_slowpath+0x96/0x210
Mar  2 15:15:44 lotus-42vm5 kernel: [&amp;lt;ffffffff8153b72b&amp;gt;] mutex_lock+0x2b/0x50
Mar  2 15:15:44 lotus-42vm5 kernel: [&amp;lt;ffffffffa0341d10&amp;gt;] mgc_set_info_async+0x450/0x1a50 [mgc]
Mar  2 15:15:44 lotus-42vm5 kernel: [&amp;lt;ffffffffa036e191&amp;gt;] ? libcfs_debug_msg+0x41/0x50 [libcfs]
Mar  2 15:15:44 lotus-42vm5 kernel: [&amp;lt;ffffffffa1284745&amp;gt;] server_mgc_set_fs+0x115/0x4e0 [obdclass]
Mar  2 15:15:44 lotus-42vm5 kernel: [&amp;lt;ffffffffa1288e5f&amp;gt;] server_start_targets+0xaf/0x1db0 [obdclass]
Mar  2 15:15:44 lotus-42vm5 kernel: [&amp;lt;ffffffffa125ab48&amp;gt;] ? lustre_start_mgc+0xac8/0x2180 [obdclass]
Mar  2 15:15:44 lotus-42vm5 kernel: [&amp;lt;ffffffffa1291595&amp;gt;] server_fill_super+0xbe5/0x1a7c [obdclass]
Mar  2 15:15:44 lotus-42vm5 kernel: [&amp;lt;ffffffffa125cc82&amp;gt;] lustre_fill_super+0xa82/0x2150 [obdclass]
Mar  2 15:15:44 lotus-42vm5 kernel: [&amp;lt;ffffffffa125c200&amp;gt;] ? lustre_fill_super+0x0/0x2150 [obdclass]
Mar  2 15:15:44 lotus-42vm5 kernel: [&amp;lt;ffffffff8119567f&amp;gt;] get_sb_nodev+0x5f/0xa0
Mar  2 15:15:44 lotus-42vm5 kernel: [&amp;lt;ffffffffa1253de5&amp;gt;] lustre_get_sb+0x25/0x30 [obdclass]
Mar  2 15:15:44 lotus-42vm5 kernel: [&amp;lt;ffffffff81194cbb&amp;gt;] vfs_kern_mount+0x7b/0x1b0
Mar  2 15:15:44 lotus-42vm5 kernel: [&amp;lt;ffffffff81194e62&amp;gt;] do_kern_mount+0x52/0x130
Mar  2 15:15:44 lotus-42vm5 kernel: [&amp;lt;ffffffff811b6e1b&amp;gt;] do_mount+0x2fb/0x930
Mar  2 15:15:44 lotus-42vm5 kernel: [&amp;lt;ffffffff811b74e0&amp;gt;] sys_mount+0x90/0xe0
Mar  2 15:15:44 lotus-42vm5 kernel: [&amp;lt;ffffffff8100b0d2&amp;gt;] system_call_fastpath+0x16/0x1b
Mar  2 15:16:26 lotus-42vm5 kernel: Lustre: 7592:0:(client.c:2048:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1456960565/real 1456960565]  req@ffff88005e254980 x1527730997821760/t0(0) o38-&amp;gt;testfs-MDT0000-osp-MDT0001@10.14.82.129@tcp:24/4 lens 520/544 e 0 to 1 dl 1456960586 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1
Mar  2 15:16:26 lotus-42vm5 kernel: Lustre: 7592:0:(client.c:2048:ptlrpc_expire_one_request()) Skipped 1 previous similar message
Mar  2 15:16:30 lotus-42vm5 kernel: LustreError: 137-5: testfs-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
Mar  2 15:16:30 lotus-42vm5 kernel: LustreError: Skipped 1 previous similar message
Mar  2 15:17:21 lotus-42vm5 kernel: Lustre: 7592:0:(client.c:2048:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1456960615/real 1456960615]  req@ffff88005e254c80 x1527730997821784/t0(0) o38-&amp;gt;testfs-MDT0000-osp-MDT0001@10.14.82.129@tcp:24/4 lens 520/544 e 0 to 1 dl 1456960641 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1
class_config_llog_handler+0x0/0x17b0 [obdclass]
Mar  2 15:19:00 lotus-42vm5 kernel: LustreError: 137-5: testfs-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</description>
                <environment></environment>
        <key id="35469">LU-7888</key>
            <summary>kernel: INFO: task mount.lustre:22219 blocked for more than 120 seconds</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="3" iconUrl="https://jira.whamcloud.com/images/icons/priorities/major.svg">Major</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="yong.fan">nasf</assignee>
                                    <reporter username="adilger">Andreas Dilger</reporter>
                        <labels>
                    </labels>
                <created>Sat, 19 Mar 2016 15:21:19 +0000</created>
                <updated>Wed, 17 Aug 2016 21:11:03 +0000</updated>
                            <resolved>Mon, 16 May 2016 21:40:48 +0000</resolved>
                                    <version>Lustre 2.7.0</version>
                    <version>Lustre 2.9.0</version>
                                    <fixVersion>Lustre 2.9.0</fixVersion>
                                        <due></due>
                            <votes>0</votes>
                                    <watches>3</watches>
                <comments>
                            <comment id="146282" author="gerrit" created="Mon, 21 Mar 2016 00:50:54 +0000"  >&lt;p&gt;Fan Yong (fan.yong@intel.com) uploaded a new patch: &lt;a href=&quot;http://review.whamcloud.com/19034&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/19034&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-7888&quot; title=&quot;kernel: INFO: task mount.lustre:22219 blocked for more than 120 seconds&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-7888&quot;&gt;&lt;del&gt;LU-7888&lt;/del&gt;&lt;/a&gt; obdclass: not hold global lock when lwp callback&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: d0e4a83866bfead0106671e67efa78fa0ed6ecc5&lt;/p&gt;</comment>
                            <comment id="152434" author="gerrit" created="Mon, 16 May 2016 16:47:57 +0000"  >&lt;p&gt;Oleg Drokin (oleg.drokin@intel.com) merged in patch &lt;a href=&quot;http://review.whamcloud.com/19034/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/19034/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-7888&quot; title=&quot;kernel: INFO: task mount.lustre:22219 blocked for more than 120 seconds&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-7888&quot;&gt;&lt;del&gt;LU-7888&lt;/del&gt;&lt;/a&gt; obdclass: not hold global lock when lwp callback&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: bccfc65d04dbd59bedb5dc1509bbdc732fc09b53&lt;/p&gt;</comment>
                            <comment id="152487" author="pjones" created="Mon, 16 May 2016 21:40:48 +0000"  >&lt;p&gt;Landed for 2.9&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                    <issuelinktype id="10010">
                        <name>Duplicate</name>
                        <inwardlinks description="is duplicated by">
                        </inwardlinks>
                    </issuelinktype>
                    <issuelinktype id="10011">
                        <name>Related</name>
                        <outwardlinks description="is related to ">
                        </outwardlinks>
                    </issuelinktype>
                </issuelinks>
                <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                    <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzy50n:</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                </customfields>
    </item>
</channel>
</rss>