<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 02:26:26 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary, append 'field=key&field=summary' to the URL of your request.
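
A minimal sketch of building such a request URL (the issueviews endpoint path below is an assumption based on typical JIRA deployments, not something stated in this document):

```shell
# Build the issue-XML URL restricted to the key and summary fields.
# The issueviews path is an assumed/typical JIRA endpoint, shown for illustration.
BASE="https://jira.whamcloud.com/si/jira.issueviews:issue-xml/LU-9465/LU-9465.xml"
URL="${BASE}?field=key&field=summary"
echo "$URL"   # pass this URL to curl or wget to fetch the trimmed XML
```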
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
<language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-9465] Kernel NULL pointer: osd_object.c:427:osd_object_init()) soaked-OST0005: lookup [0x440000401:0x195026b:0x0]/0x920ea8 failed: rc = 17</title>
                <link>https://jira.whamcloud.com/browse/LU-9465</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;soak-7 survived several failovers, last failover at 2017-05-07 07:41:31&lt;br/&gt;
The soak cluster failed over soak-10 at 2017-05-07 18:23:22&lt;br/&gt;
Immediately after finishing recovery, soak-7 crashed.&lt;br/&gt;
The OSS is reconnected to the recently failed-over MDT on soak-10/11&lt;/p&gt;
&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;May  7 18:22:39 soak-7 kernel: LustreError: 11-0: soaked-MDT0003-lwp-OST0011: operation obd_ping to node 192.168.1.110@o2ib10 failed: rc = -107
May  7 18:22:39 soak-7 kernel: Lustre: soaked-MDT0003-lwp-OST0005: Connection to soaked-MDT0003 (at 192.168.1.110@o2ib10) was lost; in progress operations using &lt;span class=&quot;code-keyword&quot;&gt;this&lt;/span&gt; service will wait &lt;span class=&quot;code-keyword&quot;&gt;for&lt;/span&gt; recovery to complete
May  7 18:22:39 soak-7 kernel: Lustre: Skipped 2 previous similar messages
May  7 18:22:39 soak-7 kernel: LustreError: Skipped 3 previous similar messages
May  7 18:23:21 soak-7 kernel: LNet: 228:0:(o2iblnd_cb.c:2421:kiblnd_passive_connect()) Conn stale 192.168.1.111@o2ib10 version 12/12 incarnation 1494181401470091/1494181401470091
May  7 18:23:21 soak-7 kernel: Lustre: soaked-OST0005: Connection restored to  (at 192.168.1.111@o2ib10)
May  7 18:23:21 soak-7 kernel: Lustre: Skipped 2 previous similar messages
May  7 18:23:22 soak-7 kernel: LNet: 7422:0:(o2iblnd_cb.c:1377:kiblnd_reconnect_peer()) Abort reconnection of 192.168.1.111@o2ib10: connected
May  7 18:23:29 soak-7 kernel: LustreError: 167-0: soaked-MDT0003-lwp-OST0011: This client was evicted by soaked-MDT0003; in progress operations using &lt;span class=&quot;code-keyword&quot;&gt;this&lt;/span&gt; service will fail.
May  7 18:23:29 soak-7 kernel: LustreError: Skipped 1 previous similar message
May  7 18:23:43 soak-7 kernel: Lustre: soaked-OST0005: deleting orphan objects from 0x440000401:26279429 to 0x440000401:26291121
May  7 18:23:43 soak-7 kernel: Lustre: soaked-OST0011: deleting orphan objects from 0x780000401:26209136 to 0x780000401:26218273
May  7 18:23:43 soak-7 kernel: Lustre: soaked-OST000b: deleting orphan objects from 0x5c0000400:26329949 to 0x5c0000400:26339745
May  7 18:23:43 soak-7 kernel: Lustre: soaked-OST0017: deleting orphan objects from 0x8c0000401:26229632 to 0x8c0000401:26238017
May  7 18:23:54 soak-7 kernel: LustreError: 167-0: soaked-MDT0003-lwp-OST000b: This client was evicted by soaked-MDT0003; in progress operations using &lt;span class=&quot;code-keyword&quot;&gt;this&lt;/span&gt; service will fail.
May  7 18:23:54 soak-7 kernel: Lustre: soaked-MDT0003-lwp-OST0017: Connection restored to 192.168.1.111@o2ib10 (at 192.168.1.111@o2ib10)
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;Then, a hard crash&lt;/p&gt;
&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;[38854.133273] Lustre: soaked-MDT0003-lwp-OST0017: Connection restored to 192.168.1.111@o2ib10 (at 192.168.1.111@o2ib10)
[38854.147850] Lustre: Skipped 3 previous similar messages
[55622.538966] perf: interrupt took too &lt;span class=&quot;code-object&quot;&gt;long&lt;/span&gt; (5010 &amp;gt; 5007), lowering kernel.perf_event_max_sample_rate to 39000
[60371.183844] LustreError: 16407:0:(osd_object.c:427:osd_object_init()) soaked-OST0005: lookup [0x440000401:0x195026b:0x0]/0x920ea8 failed: rc = 17
[60371.201275] BUG: unable to handle kernel NULL pointer dereference at 0000000000000011
[60371.211442] IP: [&amp;lt;ffffffffa0a0d328&amp;gt;] lu_object_find_try+0x178/0x2b0 [obdclass]
[60371.221570] PGD 0
[60371.225825] Oops: 0000 [#1] SMP
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;There is a crash dump available on the node, vmcore-dmesg attached.&lt;/p&gt;</description>
                <environment>Soak stress cluster</environment>
        <key id="45915">LU-9465</key>
            <summary>Kernel NULL pointer: osd_object.c:427:osd_object_init()) soaked-OST0005: lookup [0x440000401:0x195026b:0x0]/0x920ea8 failed: rc = 17</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="2" iconUrl="https://jira.whamcloud.com/images/icons/priorities/critical.svg">Critical</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="laisiyao">Lai Siyao</assignee>
                                    <reporter username="cliffw">Cliff White</reporter>
                        <labels>
                            <label>soak</label>
                    </labels>
                <created>Mon, 8 May 2017 15:04:59 +0000</created>
                <updated>Sat, 8 Jul 2017 02:13:10 +0000</updated>
                            <resolved>Wed, 10 May 2017 17:40:28 +0000</resolved>
                                    <version>Lustre 2.10.0</version>
                                                        <due></due>
                            <votes>0</votes>
                                    <watches>5</watches>
                                                                            <comments>
                            <comment id="194866" author="pjones" created="Mon, 8 May 2017 17:48:26 +0000"  >&lt;p&gt;Lai&lt;/p&gt;

&lt;p&gt;Could you please advise on this one?&lt;/p&gt;

&lt;p&gt;Thanks&lt;/p&gt;

&lt;p&gt;Peter&lt;/p&gt;</comment>
                            <comment id="195117" author="jamesanunez" created="Tue, 9 May 2017 15:27:35 +0000"  >&lt;p&gt;Just a couple of notes for this ticket:&lt;br/&gt;
LFSCK was turned off&lt;br/&gt;
Soak testing ran for approximately 48 hours before the crash&lt;br/&gt;
Running master - build #3573 with no other patches applied&lt;/p&gt;</comment>
                            <comment id="195202" author="cliffw" created="Tue, 9 May 2017 23:34:16 +0000"  >&lt;p&gt;Restarted testing with latest master, 3577&lt;/p&gt;
&lt;ul class=&quot;alternate&quot; type=&quot;square&quot;&gt;
	&lt;li&gt;Completed two failovers with lfsck turned off.&lt;/li&gt;
	&lt;li&gt;Restarted with lfsck turned on&lt;/li&gt;
	&lt;li&gt;soak-5 (OSS) completed failover:&lt;br/&gt;
2017-05-09 20:08:27
&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;,327:fsmgmt.fsmgmt:INFO     oss_failover completed, running lfsck&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;p&gt;MDS reported a single error:&lt;/p&gt;
&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;May  9 20:08:39 soak-8 kernel: LustreError: 5550:0:(lfsck_lib.c:2680:lfsck_load_one_trace_file()) soaked-MDT0000-osd: unlink lfsck sub trace file lfsck_namespace_00: rc = 0
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Before soak hit its timeout, the MDS had wedged:&lt;/p&gt;
&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;May  9 20:12:41 soak-8 kernel: NMI watchdog: BUG: soft lockup - CPU#6 stuck &lt;span class=&quot;code-keyword&quot;&gt;for&lt;/span&gt; 23s! [OI_scrub:5551]
May  9 20:12:41 soak-8 kernel: Modules linked in: osp(OE) mdd(OE) lod(OE) mdt(OE) lfsck(OE) mgs(OE) mgc(OE) osd_ldiskfs(OE) ldiskfs(OE) lquota(OE) fid(OE) fld(OE) ko2iblnd(OE) ptlrpc(OE) obdclass(OE) lnet(OE) libcfs(OE) zfs(POE) zunicode(POE) zavl(POE) zcommon(POE) znvpair(POE) spl(OE) zlib_deflate 8021q garp mrp stp llc rpcrdma ib_isert iscsi_target_mod ib_iser libiscsi scsi_transport_iscsi ib_srpt target_core_mod ib_srp scsi_transport_srp scsi_tgt ib_ipoib rdma_ucm ib_ucm ib_uverbs ib_umad rdma_cm ib_cm iw_cm mlx4_ib ib_core intel_powerclamp coretemp intel_rapl iosf_mbi kvm irqbypass crc32_pclmul ghash_clmulni_intel aesni_intel lrw gf128mul glue_helper ablk_helper cryptd dm_round_robin ipmi_ssif sb_edac ipmi_devintf ntb sg iTCO_wdt ioatdma shpchp edac_core mei_me iTCO_vendor_support mei lpc_ich ipmi_si pcspkr i2c_i801
May  9 20:12:41 soak-8 kernel: ipmi_msghandler wmi nfsd dm_multipath dm_mod nfs_acl lockd grace auth_rpcgss sunrpc ip_tables ext4 mbcache jbd2 sd_mod crc_t10dif crct10dif_generic mlx4_en mgag200 drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops isci igb ttm ahci crct10dif_pclmul crct10dif_common ptp libsas crc32c_intel libahci pps_core drm mlx4_core mpt2sas libata dca raid_class i2c_algo_bit scsi_transport_sas devlink i2c_core fjes
May  9 20:12:41 soak-8 kernel: CPU: 6 PID: 5551 Comm: OI_scrub Tainted: P           OE  ------------   3.10.0-514.16.1.el7_lustre.x86_64 #1
May  9 20:12:41 soak-8 kernel: Hardware name: Intel Corporation SandyBridge Platform/To be filled by O.E.M., BIOS SE5C600.86B.01.08.0003.022620131521 02/26/2013
May  9 20:12:41 soak-8 kernel: task: ffff88083fde6dd0 ti: ffff880703600000 task.ti: ffff880703600000
May  9 20:12:41 soak-8 kernel: RIP: 0010:[&amp;lt;ffffffffa121d1d9&amp;gt;]  [&amp;lt;ffffffffa121d1d9&amp;gt;] osd_inode_iteration+0x489/0xcc0 [osd_ldiskfs]
May  9 20:12:41 soak-8 kernel: RSP: 0018:ffff880703603d18  EFLAGS: 00000293
May  9 20:12:41 soak-8 kernel: RAX: 0000000000000004 RBX: 0000000023f30a01 RCX: 0000000000000000
May  9 20:12:41 soak-8 kernel: RDX: ffff880703603d78 RSI: ffff8800b2a36000 RDI: ffff8803162f6000
May  9 20:12:41 soak-8 kernel: RBP: ffff880703603df0 R08: ffff880703603d57 R09: 0000000000000004
May  9 20:12:41 soak-8 kernel: R10: 0000000023f30a01 R11: ffffea000c8fcc00 R12: 0000000023f30a01
May  9 20:12:41 soak-8 kernel: R13: ffff880703603d08 R14: 0000000023f30a01 R15: ffff880703603d08
May  9 20:12:41 soak-8 kernel: FS:  0000000000000000(0000) GS:ffff88042e180000(0000) knlGS:0000000000000000
May  9 20:12:41 soak-8 kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May  9 20:12:41 soak-8 kernel: CR2: 00007f64d55202e0 CR3: 00000000019be000 CR4: 00000000000407e0
May  9 20:12:41 soak-8 kernel: DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
May  9 20:12:41 soak-8 kernel: DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
May  9 20:12:41 soak-8 kernel: Stack:
May  9 20:12:41 soak-8 kernel: ffffffffa121b990 ffffffffa1217a20 ffff8800b2a36000 00000000810d354f
May  9 20:12:41 soak-8 kernel: ffff8803162f6000 ffff8800b2a37468 0000000020000000 010000000000000c
May  9 20:12:41 soak-8 kernel: 0000000000000000 0000000000000000 ffff8800b2a36000 0000000000000000
May  9 20:12:41 soak-8 kernel: Call Trace:
May  9 20:12:41 soak-8 kernel: [&amp;lt;ffffffffa121b990&amp;gt;] ? osd_ios_ROOT_scan+0x300/0x300 [osd_ldiskfs]
May  9 20:12:41 soak-8 kernel: [&amp;lt;ffffffffa1217a20&amp;gt;] ? osd_preload_next+0xb0/0xb0 [osd_ldiskfs]
May  9 20:12:41 soak-8 kernel: [&amp;lt;ffffffffa121e370&amp;gt;] osd_scrub_main+0x960/0xf30 [osd_ldiskfs]
May  9 20:12:41 soak-8 kernel: [&amp;lt;ffffffff810c54c0&amp;gt;] ? wake_up_state+0x20/0x20
May  9 20:12:41 soak-8 kernel: [&amp;lt;ffffffffa121da10&amp;gt;] ? osd_inode_iteration+0xcc0/0xcc0 [osd_ldiskfs]
May  9 20:12:41 soak-8 kernel: [&amp;lt;ffffffff810b0a4f&amp;gt;] kthread+0xcf/0xe0
May  9 20:12:41 soak-8 kernel: [&amp;lt;ffffffff810b0980&amp;gt;] ? kthread_create_on_node+0x140/0x140
May  9 20:12:41 soak-8 kernel: [&amp;lt;ffffffff81697318&amp;gt;] ret_from_fork+0x58/0x90
May  9 20:12:41 soak-8 kernel: [&amp;lt;ffffffff810b0980&amp;gt;] ? kthread_create_on_node+0x140/0x140
May  9 20:12:41 soak-8 kernel: Code: 00 e8 7c eb 97 ff e9 0f fc ff ff 0f 1f 80 00 00 00 00 45 89 e9 4c 8d 85 67 ff ff ff 48 8b 4d a8 48 8d 55 88 48 8b b5 38 ff ff ff &amp;lt;48&amp;gt; 8b bd 48 ff ff ff 48 8b 85 28 ff ff ff ff d0 85 c0 41 89 c5
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;lfsck hit the 600-second timeout, abort attempted:&lt;/p&gt;
&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;2017-05-09 20:18:46,982:fsmgmt.fsmgmt:ERROR    lfsck still running after 600s, aborting
2017-05-09 20:18:46,983:fsmgmt.fsmgmt:INFO     executing cmd: lctl lfsck_stop -M soaked-MDT0000
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Once the MDS hits the lockup, it hits it over and over, and within minutes it is doing nothing other than hitting the lockup. &lt;br/&gt;
At this point, we decided not to wait for the crash, dumped stacks, then forced a crash dump.&lt;/p&gt;

&lt;p&gt;Crash dump is available on soak-8 in /var/crash. &lt;br/&gt;
console log, vmcore-dmesg are attached. &lt;/p&gt;</comment>
                            <comment id="195249" author="laisiyao" created="Wed, 10 May 2017 14:06:38 +0000"  >&lt;p&gt;It looks to be a code error; I will make a patch soon.&lt;/p&gt;</comment>
                            <comment id="195258" author="laisiyao" created="Wed, 10 May 2017 14:28:46 +0000"  >&lt;p&gt;ahh, just found it&apos;s fixed in &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-9394&quot; title=&quot;lu_object_find_try - kernel NULL pointer dereference&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-9394&quot;&gt;&lt;del&gt;LU-9394&lt;/del&gt;&lt;/a&gt; already, and the fix is in latest master.&lt;/p&gt;</comment>
                            <comment id="195259" author="laisiyao" created="Wed, 10 May 2017 14:36:18 +0000"  >&lt;p&gt;sorry, I tried to mark it as a duplicate of &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-9394&quot; title=&quot;lu_object_find_try - kernel NULL pointer dereference&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-9394&quot;&gt;&lt;del&gt;LU-9394&lt;/del&gt;&lt;/a&gt;, but I can&apos;t fill &apos;&lt;a href=&quot;https://jira.whamcloud.com/browse/LU-9394&quot; title=&quot;lu_object_find_try - kernel NULL pointer dereference&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-9394&quot;&gt;&lt;del&gt;LU-9394&lt;/del&gt;&lt;/a&gt;&apos; in the Bugzilla-Id field. Can someone help me fix it?&lt;/p&gt;</comment>
                            <comment id="195260" author="cliffw" created="Wed, 10 May 2017 14:36:33 +0000"  >&lt;p&gt;I tested master build 3577 and hit a very similar problem to the one reported above. I don&apos;t think it&apos;s fixed.&lt;/p&gt;</comment>
                            <comment id="195261" author="cliffw" created="Wed, 10 May 2017 14:36:52 +0000"  >&lt;p&gt;Tests of the latest master build failed. That&apos;s not a fix.&lt;/p&gt;</comment>
                            <comment id="195301" author="adilger" created="Wed, 10 May 2017 17:31:07 +0000"  >&lt;p&gt;The &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-9394&quot; title=&quot;lu_object_find_try - kernel NULL pointer dereference&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-9394&quot;&gt;&lt;del&gt;LU-9394&lt;/del&gt;&lt;/a&gt; fix is for the original problem reported in this bug, &quot;&lt;tt&gt;NULL pointer dereference at lu&amp;#95;object&amp;#95;find&amp;#95;try()&lt;/tt&gt;&quot;.  The stack trace added yesterday is for something completely different, &quot;&lt;tt&gt;soft lockup in osd&amp;#95;inode&amp;#95;iteration()&lt;/tt&gt;&quot;, which looks to be related to LFSCK/Scrub.  If the testing with the latest master build hit the soft lockup in osd&amp;#95;inode&amp;#95;iteration() then it should go into a separate ticket.&lt;/p&gt;</comment>
                            <comment id="195305" author="cliffw" created="Wed, 10 May 2017 17:37:28 +0000"  >&lt;p&gt;I have created &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-9488&quot; title=&quot;soft lockup in osd_inode_iteration()&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-9488&quot;&gt;&lt;del&gt;LU-9488&lt;/del&gt;&lt;/a&gt; and transferred the information there.&lt;/p&gt;</comment>
                            <comment id="195306" author="jamesanunez" created="Wed, 10 May 2017 17:40:28 +0000"  >&lt;p&gt;This issue should be fixed with the patch for &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-9394&quot; title=&quot;lu_object_find_try - kernel NULL pointer dereference&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-9394&quot;&gt;&lt;del&gt;LU-9394&lt;/del&gt;&lt;/a&gt;&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                            <outwardlinks description="is related to ">
                                        <issuelink>
            <issuekey id="45698">LU-9394</issuekey>
        </issuelink>
                            </outwardlinks>
                                                        </issuelinktype>
                    </issuelinks>
                <attachments>
                            <attachment id="26634" name="soak-8.console.log" size="2334734" author="cliffw" created="Tue, 9 May 2017 23:35:10 +0000"/>
                            <attachment id="26633" name="soak-8.vmcore-dmesg.txt" size="1033888" author="cliffw" created="Tue, 9 May 2017 23:34:59 +0000"/>
                            <attachment id="26617" name="vmcore-dmesg.txt" size="231534" author="cliffw" created="Mon, 8 May 2017 15:04:49 +0000"/>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                    <customfield id="customfield_10020" key="com.atlassian.jira.plugin.system.customfieldtypes:float">
                        <customfieldname>Bugzilla ID</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9394.0</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzzc3r:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>