<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 02:17:17 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92">
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-8408] mgc_request.c:141:config_log_put()) ASSERTION( atomic_read(&amp;cld-&gt;cld_refcount) &gt; 0 )</title>
                <link>https://jira.whamcloud.com/browse/LU-8408</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;Running Lustre 2.8.0, we hit the following assertion on many clients at the same time while attempting to umount a DNE filesystem.&lt;/p&gt;

&lt;p&gt;One thing to note is that the clients were mounting two Lustre filesystems at the time, one DNE and one non-DNE.  The DNE filesystem was the only one that I was trying to unmount.&lt;/p&gt;

&lt;p&gt;The DNE filesystem was also not quite happy at the time.  Two of the 16 MDTs were not running.  The other 14 MDTs were stuck in recovery with a recovery timer of 0 seconds.  The assertion happened, I believe, after I finished bringing up the two missing MDTs.  The umount was just stuck until they came up.&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;2016-07-15 10:36:13 [162519.584734] Lustre: 7006:0:(client.c:2063:ptlrpc_expire_one_request()) Skipped 109 previous similar messages
2016-07-15 10:46:55 [163161.876527] Lustre: 7006:0:(client.c:2063:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1468604725/real 1468604725]  req@ffff881feee70300 x1539773254358284/t0(0) o400-&amp;gt;lquake2-MDT000f-mdc-ffff88201e8e2800@172.19.1.126@o2ib100:12/10 lens 224/224 e 2 to 1 dl 1468604815 ref 1 fl Rpc:X/c0/ffffffff rc 0/-1
2016-07-15 10:46:55 [163161.917624] Lustre: 7006:0:(client.c:2063:ptlrpc_expire_one_request()) Skipped 111 previous similar messages
2016-07-15 10:57:25 [163792.204109] Lustre: 7006:0:(client.c:2063:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1468605355/real 1468605355]  req@ffff881feee70300 x1539773254361464/t0(0) o400-&amp;gt;lquake2-MDT000f-mdc-ffff88201e8e2800@172.19.1.126@o2ib100:12/10 lens 224/224 e 2 to 1 dl 1468605445 ref 1 fl Rpc:X/c0/ffffffff rc 0/-1
2016-07-15 10:57:25 [163792.245207] Lustre: 7006:0:(client.c:2063:ptlrpc_expire_one_request()) Skipped 109 previous similar messages

&amp;lt;ConMan&amp;gt; Console [opal99] log at 2016-07-15 11:00:00 PDT.
2016-07-15 11:07:53 [164420.530684] Lustre: 7006:0:(client.c:2063:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1468606018/real 1468606018]  req@ffff881fee80ad00 x1539773254364936/t0(0) o38-&amp;gt;lquake2-MDT0001-mdc-ffff88201e8e2800@172.19.1.112@o2ib100:12/10 lens 520/544 e 0 to 1 dl 1468606073 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1
2016-07-15 11:07:53 [164420.571687] Lustre: 7006:0:(client.c:2063:ptlrpc_expire_one_request()) Skipped 109 previous similar messages
2016-07-15 11:16:26 [164934.492190] Lustre: lquake2-MDT000f-mdc-ffff88201e8e2800: Connection restored to 172.19.1.126@o2ib100 (at 172.19.1.126@o2ib100)
2016-07-15 11:16:26 [164934.510252] Lustre: Skipped 3 previous similar messages
2016-07-15 11:16:32 [164940.031847] Lustre: lquake2-MDT0008-mdc-ffff88201e8e2800: Connection restored to 172.19.1.119@o2ib100 (at 172.19.1.119@o2ib100)
2016-07-15 11:16:53 [164960.817277] Lustre: lquake2-MDT000e-mdc-ffff88201e8e2800: Connection restored to 172.19.1.125@o2ib100 (at 172.19.1.125@o2ib100)
2016-07-15 11:16:53 [164960.835337] Lustre: Skipped 8 previous similar messages
2016-07-15 11:24:52 [165440.838390] hsi0: can&apos;t use GFP_NOIO for QPs on device hfi1_0, using GFP_KERNEL
2016-07-15 11:24:54 [165442.178261] LustreError: 152795:0:(mgc_request.c:141:config_log_put()) ASSERTION( atomic_read(&amp;amp;cld-&amp;gt;cld_refcount) &amp;gt; 0 ) failed: 
2016-07-15 11:24:54 [165442.196348] LustreError: 152795:0:(mgc_request.c:141:config_log_put()) LBUG
2016-07-15 11:24:54 [165442.206772] Pid: 152795, comm: umount
2016-07-15 11:24:54 [165442.213463] 
2016-07-15 11:24:54 [165442.213463] Call Trace:
2016-07-15 11:24:54 [165442.223668]  [&amp;lt;ffffffffa0add7e3&amp;gt;] libcfs_debug_dumpstack+0x53/0x80 [libcfs]
2016-07-15 11:24:54 [165442.235364]  [&amp;lt;ffffffffa0addd85&amp;gt;] lbug_with_loc+0x45/0xc0 [libcfs]
2016-07-15 11:24:54 [165442.245430]  [&amp;lt;ffffffffa11fc2e3&amp;gt;] config_log_put+0x3a3/0x3e0 [mgc]
2016-07-15 11:24:54 [165442.255502]  [&amp;lt;ffffffffa12037be&amp;gt;] mgc_process_config+0x77e/0x1280 [mgc]
2016-07-15 11:24:54 [165442.266077]  [&amp;lt;ffffffff810c6cae&amp;gt;] ? dequeue_task_fair+0x42e/0x640
2016-07-15 11:24:54 [165442.276080]  [&amp;lt;ffffffff810c0045&amp;gt;] ? sched_clock_cpu+0xa5/0xe0
2016-07-15 11:24:54 [165442.285669]  [&amp;lt;ffffffff8101560b&amp;gt;] ? __switch_to+0x17b/0x4d0
2016-07-15 11:24:54 [165442.295087]  [&amp;lt;ffffffffa0c7f715&amp;gt;] obd_process_config.constprop.13+0x85/0x2d0 [obdclass]
2016-07-15 11:24:54 [165442.307221]  [&amp;lt;ffffffffa0c7fae0&amp;gt;] ? lustre_cfg_new+0x180/0x400 [obdclass]
2016-07-15 11:24:54 [165442.318001]  [&amp;lt;ffffffffa0c81760&amp;gt;] lustre_end_log+0xf0/0x5c0 [obdclass]
2016-07-15 11:24:54 [165442.328469]  [&amp;lt;ffffffffa1156a5d&amp;gt;] ll_put_super+0x8d/0xae0 [lustre]
2016-07-15 11:24:54 [165442.338523]  [&amp;lt;ffffffff8122dc17&amp;gt;] ? fsnotify_clear_marks_by_inode+0xa7/0x140
2016-07-15 11:24:54 [165442.349585]  [&amp;lt;ffffffff8112af1d&amp;gt;] ? call_rcu_sched+0x1d/0x20
2016-07-15 11:24:54 [165442.359112]  [&amp;lt;ffffffffa1182a5c&amp;gt;] ? ll_destroy_inode+0x1c/0x20 [lustre]
2016-07-15 11:24:54 [165442.369662]  [&amp;lt;ffffffff81203fa8&amp;gt;] ? destroy_inode+0x38/0x60
2016-07-15 11:24:54 [165442.378003]  [&amp;lt;ffffffff812040d6&amp;gt;] ? evict+0x106/0x170
2016-07-15 11:24:54 [165442.385728]  [&amp;lt;ffffffff8120417e&amp;gt;] ? dispose_list+0x3e/0x50
2016-07-15 11:24:54 [165442.393919]  [&amp;lt;ffffffff81204e24&amp;gt;] ? evict_inodes+0x114/0x140
2016-07-15 11:24:54 [165442.402261]  [&amp;lt;ffffffff811ea176&amp;gt;] generic_shutdown_super+0x56/0xe0
2016-07-15 11:24:54 [165442.411149]  [&amp;lt;ffffffff811ea552&amp;gt;] kill_anon_super+0x12/0x20
2016-07-15 11:24:54 [165442.419336]  [&amp;lt;ffffffffa0c7f2b5&amp;gt;] lustre_kill_super+0x45/0x50 [obdclass]
2016-07-15 11:24:54 [165442.428768]  [&amp;lt;ffffffff811ea909&amp;gt;] deactivate_locked_super+0x49/0x60
2016-07-15 11:24:54 [165442.437678]  [&amp;lt;ffffffff811eaf06&amp;gt;] deactivate_super+0x46/0x60
2016-07-15 11:24:54 [165442.445858]  [&amp;lt;ffffffff81208a25&amp;gt;] mntput_no_expire+0xc5/0x120
2016-07-15 11:24:54 [165442.454095]  [&amp;lt;ffffffff81209b9f&amp;gt;] SyS_umount+0x9f/0x3c0
2016-07-15 11:24:54 [165442.461739]  [&amp;lt;ffffffff8165d709&amp;gt;] system_call_fastpath+0x16/0x1b
2016-07-15 11:24:54 [165442.470213] 
2016-07-15 11:24:54 [165442.473946] Kernel panic - not syncing: LBUG
2016-07-15 11:24:54 [165442.480405] CPU: 0 PID: 152795 Comm: umount Tainted: P           OE  ------------   3.10.0-327.22.2.1chaos.ch6.x86_64 #1
2016-07-15 11:24:54 [165442.494284] Hardware name: Penguin Computing Relion OCP1930e/S2600KPR, BIOS SE5C610.86B.01.01.0016.033120161139 03/31/2016
2016-07-15 11:24:54 [165442.508355]  ffffffffa0af9e0f 00000000389f16da ffff88080bc67ab8 ffffffff8164c6b4
2016-07-15 11:24:54 [165442.518381]  ffff88080bc67b38 ffffffff816456af ffffffff00000008 ffff88080bc67b48
2016-07-15 11:24:54 [165442.528375]  ffff88080bc67ae8 00000000389f16da ffffffffa1206417 0000000000000246
2016-07-15 11:24:54 [165442.538336] Call Trace:
2016-07-15 11:24:54 [165442.542668]  [&amp;lt;ffffffff8164c6b4&amp;gt;] dump_stack+0x19/0x1b
2016-07-15 11:24:54 [165442.550011]  [&amp;lt;ffffffff816456af&amp;gt;] panic+0xd8/0x1e7
2016-07-15 11:24:54 [165442.556931]  [&amp;lt;ffffffffa0adddeb&amp;gt;] lbug_with_loc+0xab/0xc0 [libcfs]
2016-07-15 11:24:54 [165442.565366]  [&amp;lt;ffffffffa11fc2e3&amp;gt;] config_log_put+0x3a3/0x3e0 [mgc]
2016-07-15 11:24:54 [165442.573761]  [&amp;lt;ffffffffa12037be&amp;gt;] mgc_process_config+0x77e/0x1280 [mgc]
2016-07-15 11:24:54 [165442.582608]  [&amp;lt;ffffffff810c6cae&amp;gt;] ? dequeue_task_fair+0x42e/0x640
2016-07-15 11:24:54 [165442.590863]  [&amp;lt;ffffffff810c0045&amp;gt;] ? sched_clock_cpu+0xa5/0xe0
2016-07-15 11:24:54 [165442.598713]  [&amp;lt;ffffffff8101560b&amp;gt;] ? __switch_to+0x17b/0x4d0
2016-07-15 11:24:54 [165442.606353]  [&amp;lt;ffffffffa0c7f715&amp;gt;] obd_process_config.constprop.13+0x85/0x2d0 [obdclass]
2016-07-15 11:24:54 [165442.616688]  [&amp;lt;ffffffffa0c7fae0&amp;gt;] ? lustre_cfg_new+0x180/0x400 [obdclass]
2016-07-15 11:24:54 [165442.625648]  [&amp;lt;ffffffffa0c81760&amp;gt;] lustre_end_log+0xf0/0x5c0 [obdclass]
2016-07-15 11:24:54 [165442.634304]  [&amp;lt;ffffffffa1156a5d&amp;gt;] ll_put_super+0x8d/0xae0 [lustre]
2016-07-15 11:24:54 [165442.642562]  [&amp;lt;ffffffff8122dc17&amp;gt;] ? fsnotify_clear_marks_by_inode+0xa7/0x140
2016-07-15 11:24:54 [165442.651820]  [&amp;lt;ffffffff8112af1d&amp;gt;] ? call_rcu_sched+0x1d/0x20
2016-07-15 11:24:54 [165442.659521]  [&amp;lt;ffffffffa1182a5c&amp;gt;] ? ll_destroy_inode+0x1c/0x20 [lustre]
2016-07-15 11:24:54 [165442.668280]  [&amp;lt;ffffffff81203fa8&amp;gt;] ? destroy_inode+0x38/0x60
2016-07-15 11:24:54 [165442.675870]  [&amp;lt;ffffffff812040d6&amp;gt;] ? evict+0x106/0x170
2016-07-15 11:24:54 [165442.682867]  [&amp;lt;ffffffff8120417e&amp;gt;] ? dispose_list+0x3e/0x50
2016-07-15 11:24:54 [165442.690342]  [&amp;lt;ffffffff81204e24&amp;gt;] ? evict_inodes+0x114/0x140
2016-07-15 11:24:54 [165442.698012]  [&amp;lt;ffffffff811ea176&amp;gt;] generic_shutdown_super+0x56/0xe0
2016-07-15 11:24:54 [165442.706269]  [&amp;lt;ffffffff811ea552&amp;gt;] kill_anon_super+0x12/0x20
2016-07-15 11:24:54 [165442.713850]  [&amp;lt;ffffffffa0c7f2b5&amp;gt;] lustre_kill_super+0x45/0x50 [obdclass]
2016-07-15 11:24:54 [165442.722674]  [&amp;lt;ffffffff811ea909&amp;gt;] deactivate_locked_super+0x49/0x60
2016-07-15 11:24:54 [165442.731014]  [&amp;lt;ffffffff811eaf06&amp;gt;] deactivate_super+0x46/0x60
2016-07-15 11:24:54 [165442.738662]  [&amp;lt;ffffffff81208a25&amp;gt;] mntput_no_expire+0xc5/0x120
2016-07-15 11:24:54 [165442.746397]  [&amp;lt;ffffffff81209b9f&amp;gt;] SyS_umount+0x9f/0x3c0
2016-07-15 11:24:54 [165442.753539]  [&amp;lt;ffffffff8165d709&amp;gt;] system_call_fastpath+0x16/0x1b
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</description>
                <environment></environment>
        <key id="38210">LU-8408</key>
            <summary>mgc_request.c:141:config_log_put()) ASSERTION( atomic_read(&amp;cld-&gt;cld_refcount) &gt; 0 )</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="2" iconUrl="https://jira.whamcloud.com/images/icons/priorities/critical.svg">Critical</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="yong.fan">nasf</assignee>
                                    <reporter username="morrone">Christopher Morrone</reporter>
                        <labels>
                            <label>llnl</label>
                    </labels>
                <created>Fri, 15 Jul 2016 18:37:34 +0000</created>
                <updated>Thu, 16 Feb 2017 19:57:05 +0000</updated>
                            <resolved>Fri, 2 Sep 2016 02:50:11 +0000</resolved>
                                    <version>Lustre 2.8.0</version>
                                    <fixVersion>Lustre 2.9.0</fixVersion>
                                        <due></due>
                            <votes>0</votes>
                                    <watches>5</watches>
                    <comments>
                            <comment id="159036" author="pjones" created="Sat, 16 Jul 2016 13:29:06 +0000"  >&lt;p&gt;Fan Yong&lt;/p&gt;

&lt;p&gt;Could you please advise on this issue?&lt;/p&gt;

&lt;p&gt;Thanks&lt;/p&gt;

&lt;p&gt;Peter&lt;/p&gt;</comment>
                            <comment id="160443" author="yong.fan" created="Mon, 1 Aug 2016 16:27:16 +0000"  >&lt;p&gt;Honestly, only from the bug descriptions, it is not easy to exactly locate the root reason. I have tried many times in my VM environment, but cannot get the same failure trace. Then I studied through the MGC logic carefully, and find that the current &apos;config_llog_data::cld_refcount&apos; logic is some confusing, that may cause reference leak or over dereferenced under some cases. I will make a patch to clean the &apos;config_llog_data::cld_refcount&apos; logic entirely. It is quite possible that the patch has contained the solution for current trouble in this ticket. I would suggest to verify the patch on your system after passing Maloo tests.&lt;/p&gt;</comment>
                            <comment id="160444" author="gerrit" created="Mon, 1 Aug 2016 16:29:02 +0000"  >&lt;p&gt;Fan Yong (fan.yong@intel.com) uploaded a new patch: &lt;a href=&quot;http://review.whamcloud.com/21616&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/21616&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-8408&quot; title=&quot;mgc_request.c:141:config_log_put()) ASSERTION( atomic_read(&amp;amp;cld-&amp;gt;cld_refcount) &amp;gt; 0 )&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-8408&quot;&gt;&lt;del&gt;LU-8408&lt;/del&gt;&lt;/a&gt; mgc: handle config_llog_data::cld_refcount properly&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 6454c370f5afcdc5b91ea792653973888c326b68&lt;/p&gt;</comment>
                            <comment id="160870" author="morrone" created="Fri, 5 Aug 2016 01:25:12 +0000"  >&lt;p&gt;If you provide a patch for 2.8, I&apos;ll try it out.  The one for master does not apply.&lt;/p&gt;</comment>
                            <comment id="160890" author="yong.fan" created="Fri, 5 Aug 2016 12:49:40 +0000"  >&lt;p&gt;The patch for b2_8_fe:&lt;br/&gt;
&lt;a href=&quot;http://review.whamcloud.com/21740&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/21740&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="162454" author="morrone" created="Thu, 18 Aug 2016 21:22:09 +0000"  >&lt;p&gt;We have not hit this since running the b2_8_fe backported &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-8404&quot; title=&quot;When service node nid is incorrect, MDT log message missing bad nid&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-8404&quot;&gt;&lt;del&gt;LU-8404&lt;/del&gt;&lt;/a&gt; patch (&lt;a href=&quot;http://review.whamcloud.com/21740&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/21740&lt;/a&gt;).&lt;/p&gt;</comment>
                            <comment id="162495" author="yong.fan" created="Fri, 19 Aug 2016 05:20:31 +0000"  >&lt;p&gt;Excellent! Thanks Chris for the verification.&lt;/p&gt;</comment>
                            <comment id="164733" author="gerrit" created="Fri, 2 Sep 2016 02:21:29 +0000"  >&lt;p&gt;Oleg Drokin (oleg.drokin@intel.com) merged in patch &lt;a href=&quot;http://review.whamcloud.com/21616/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/21616/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-8408&quot; title=&quot;mgc_request.c:141:config_log_put()) ASSERTION( atomic_read(&amp;amp;cld-&amp;gt;cld_refcount) &amp;gt; 0 )&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-8408&quot;&gt;&lt;del&gt;LU-8408&lt;/del&gt;&lt;/a&gt; mgc: handle config_llog_data::cld_refcount properly&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: 48d24ebd6d51873a6c560000ea3b638fdae22a27&lt;/p&gt;</comment>
                            <comment id="164753" author="yong.fan" created="Fri, 2 Sep 2016 02:50:11 +0000"  >&lt;p&gt;The patch has been landed on master.&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                            <outwardlinks description="is related to ">
                                                        </outwardlinks>
                                                        </issuelinktype>
                    </issuelinks>
                <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzyhlz:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                </customfields>
    </item>
</channel>
</rss>