<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 02:38:55 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
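A complete request might look like this (assuming JIRA's standard 'si/jira.issueviews:issue-xml' XML view path):
    curl 'https://jira.whamcloud.com/si/jira.issueviews:issue-xml/LU-10871/LU-10871.xml?field=key&field=summary'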
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-10871] MDS hit LBUG: LustreError: 2566:0:(osd_handler.c:3304:osd_destroy()) ASSERTION( !lu_object_is_dying(dt-&gt;do_lu.lo_header) ) failed</title>
                <link>https://jira.whamcloud.com/browse/LU-10871</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;After soak had been running for about 7 hours, the MDS hit an LBUG:&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;
Apr 1 23:58:08 soak-11 multipathd: sdp: mark as failed
Apr 1 23:58:08 soak-11 multipathd: 360080e50001fedb80000015952012962: Entering recovery mode: max_retries=300
Apr 1 23:58:08 soak-11 multipathd: 360080e50001fedb80000015952012962: remaining active paths: 0
Apr 1 23:58:09 soak-11 kernel: LustreError: 2566:0:(osd_handler.c:3853:osd_ref_del()) soaked-MDT0003: nlink == 0 on [0x2c000d70d:0x389:0x0], maybe an upgraded file? (LU-3915)
Apr 1 23:58:09 soak-11 kernel: LustreError: 2566:0:(osd_handler.c:3304:osd_destroy()) ASSERTION( !lu_object_is_dying(dt-&amp;gt;do_lu.lo_header) ) failed: 
Apr 1 23:58:09 soak-11 kernel: LustreError: 2566:0:(osd_handler.c:3304:osd_destroy()) LBUG
Apr 1 23:58:09 soak-11 kernel: Pid: 2566, comm: mdt_out00_004
Apr 1 23:58:09 soak-11 kernel: Call Trace:
Apr 1 23:58:09 soak-11 kernel: [&amp;lt;ffffffffc0dc47ae&amp;gt;] libcfs_call_trace+0x4e/0x60 [libcfs]
Apr 1 23:58:09 soak-11 kernel: [&amp;lt;ffffffffc0dc483c&amp;gt;] lbug_with_loc+0x4c/0xb0 [libcfs]
Apr 1 23:58:09 soak-11 kernel: [&amp;lt;ffffffffc142a0b0&amp;gt;] osd_destroy+0x4a0/0x760 [osd_ldiskfs]
Apr 1 23:58:09 soak-11 kernel: [&amp;lt;ffffffffc1429237&amp;gt;] ? osd_ref_del+0x2c7/0x6a0 [osd_ldiskfs]
Apr 1 23:58:09 soak-11 kernel: [&amp;lt;ffffffffc1428589&amp;gt;] ? osd_attr_set+0x199/0xb80 [osd_ldiskfs]
Apr 1 23:58:09 soak-11 kernel: [&amp;lt;ffffffff816b3232&amp;gt;] ? down_write+0x12/0x3d
Apr 1 23:58:09 soak-11 kernel: [&amp;lt;ffffffffc11c9141&amp;gt;] out_obj_destroy+0x101/0x2c0 [ptlrpc]
Apr 1 23:58:09 soak-11 kernel: [&amp;lt;ffffffffc11c93b0&amp;gt;] out_tx_destroy_exec+0x20/0x190 [ptlrpc]
Apr 1 23:58:09 soak-11 kernel: [&amp;lt;ffffffffc11c3e91&amp;gt;] out_tx_end+0xe1/0x5c0 [ptlrpc]
Apr 1 23:58:09 soak-11 kernel: [&amp;lt;ffffffffc11c7a72&amp;gt;] out_handle+0x1442/0x1bb0 [ptlrpc]
Apr 1 23:58:09 soak-11 kernel: [&amp;lt;ffffffffc11568a2&amp;gt;] ? lustre_msg_get_opc+0x22/0xf0 [ptlrpc]
Apr 1 23:58:09 soak-11 kernel: [&amp;lt;ffffffffc11be0a9&amp;gt;] ? tgt_request_preprocess.isra.28+0x299/0x7a0 [ptlrpc]
Apr 1 23:58:09 soak-11 kernel: [&amp;lt;ffffffffc11beeda&amp;gt;] tgt_request_handle+0x92a/0x13b0 [ptlrpc]
Apr 1 23:58:09 soak-11 kernel: [&amp;lt;ffffffffc1164813&amp;gt;] ptlrpc_server_handle_request+0x253/0xab0 [ptlrpc]
Apr 1 23:58:09 soak-11 kernel: [&amp;lt;ffffffffc11616c8&amp;gt;] ? ptlrpc_wait_event+0x98/0x340 [ptlrpc]
Apr 1 23:58:09 soak-11 kernel: [&amp;lt;ffffffff810c7c82&amp;gt;] ? default_wake_function+0x12/0x20
Apr 1 23:58:09 soak-11 kernel: [&amp;lt;ffffffff810bdc4b&amp;gt;] ? __wake_up_common+0x5b/0x90
Apr 1 23:58:10 soak-11 kernel: [&amp;lt;ffffffffc1167fc2&amp;gt;] ptlrpc_main+0xa92/0x1e40 [ptlrpc]
Apr 1 23:58:10 soak-11 kernel: [&amp;lt;ffffffffc1167530&amp;gt;] ? ptlrpc_main+0x0/0x1e40 [ptlrpc]
Apr 1 23:58:10 soak-11 kernel: [&amp;lt;ffffffff810b4031&amp;gt;] kthread+0xd1/0xe0
Apr 1 23:58:10 soak-11 kernel: [&amp;lt;ffffffff810b3f60&amp;gt;] ? kthread+0x0/0xe0
Apr 1 23:58:10 soak-11 kernel: [&amp;lt;ffffffff816c0577&amp;gt;] ret_from_fork+0x77/0xb0
Apr 1 23:58:10 soak-11 kernel: [&amp;lt;ffffffff810b3f60&amp;gt;] ? kthread+0x0/0xe0
Apr 1 23:58:10 soak-11 kernel:

Apr 1 23:58:10 soak-11 kernel: Kernel panic - not syncing: LBUG
Apr 2 00:02:02 soak-11 systemd: Starting Stop Read-Ahead Data Collection...
Apr 2 00:02:02 soak-11 rsyslogd: action &apos;action 0&apos; resumed (module &apos;builtin:omfwd&apos;) [v8.24.0 try http://www.rsyslog.com/e/2359 ]
Apr 2 00:02:02 soak-11 rsyslogd: action &apos;action 0&apos; resumed (module &apos;builtin:omfwd&apos;) [v8.24.0 try http://www.rsyslog.com/e/2359 ]
Apr 2 00:02:02 soak-11 systemd: Started Stop Read-Ahead Data Collection.
Apr 2 00:03:37 soak-11 chronyd[1280]: Source 64.6.144.6 replaced with 69.10.161.7
Apr 2 00:10:01 soak-11 systemd: Created slice User Slice of root.
Apr 2 00:10:01 soak-11 systemd: Starting User Slice of root.
Apr 2 00:10:01 soak-11 systemd: Started Session 1 of user root.
Apr 2 00:10:01 soak-11 systemd: Starting Session 1 of user root.
Apr 2 00:10:01 soak-11 CROND[2234]: (root) CMD (/usr/lib64/sa/sa1 1 1)
Apr 2 00:10:01 soak-11 systemd: Removed slice User Slice of root.
Apr 2 00:10:01 soak-11 systemd: Stopping User Slice of root.
Apr 2 00:15:54 soak-11 systemd: Starting Cleanup of Temporary Directories...
Apr 2 00:15:54 soak-11 systemd: Started Cleanup of Temporary Directories.

&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</description>
                <environment>2.11.0 RC3&lt;br/&gt;
lustre-master-ib build 73</environment>
        <key id="51624">LU-10871</key>
            <summary>MDS hit LBUG: LustreError: 2566:0:(osd_handler.c:3304:osd_destroy()) ASSERTION( !lu_object_is_dying(dt-&gt;do_lu.lo_header) ) failed</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="4" iconUrl="https://jira.whamcloud.com/images/icons/priorities/minor.svg">Minor</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="5">Cannot Reproduce</resolution>
                                        <assignee username="hongchao.zhang">Hongchao Zhang</assignee>
                                    <reporter username="sarah">Sarah Liu</reporter>
                        <labels>
                            <label>soak</label>
                    </labels>
                <created>Mon, 2 Apr 2018 17:47:31 +0000</created>
                <updated>Wed, 30 Aug 2023 17:32:19 +0000</updated>
                            <resolved>Wed, 30 Aug 2023 17:32:19 +0000</resolved>
                                    <version>Lustre 2.11.0</version>
                    <version>Lustre 2.14.0</version>
                                                        <due></due>
                            <votes>0</votes>
                                    <watches>7</watches>
                                                                            <comments>
                            <comment id="224973" author="cliffw" created="Mon, 2 Apr 2018 17:59:34 +0000"  >&lt;p&gt;Core dump is available on soak at /scratch/dumps&lt;br/&gt;
vmcore-dmesg is attached. &lt;/p&gt;</comment>
                            <comment id="224979" author="adilger" created="Mon, 2 Apr 2018 20:43:07 +0000"  >&lt;p&gt;The &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-9808&quot; title=&quot;recovery-small test 102: osd_destroy()) ASSERTION( !lu_object_is_dying(dt-&amp;gt;do_lu.lo_header) ) failed&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-9808&quot;&gt;&lt;del&gt;LU-9808&lt;/del&gt;&lt;/a&gt; failure is on the same assertion, but it is in the &lt;tt&gt;mdt&lt;/tt&gt; code instead of the &lt;tt&gt;out&lt;/tt&gt;.  Since the &lt;tt&gt;recovery-small test_102&lt;/tt&gt; failure is easily reproducible, it might make sense to debug and fix that, rather than spending time trying to understand this less common failure.&lt;/p&gt;</comment>
                            <comment id="225013" author="pjones" created="Tue, 3 Apr 2018 12:08:13 +0000"  >&lt;p&gt;Hongchao&lt;/p&gt;

&lt;p&gt;Can you please investigate?&lt;/p&gt;

&lt;p&gt;Thanks&lt;/p&gt;

&lt;p&gt;Peter&lt;/p&gt;</comment>
                            <comment id="225090" author="hongchao.zhang" created="Wed, 4 Apr 2018 08:38:07 +0000"  >&lt;p&gt;Hi Cliff&lt;/p&gt;

&lt;p&gt;Is the core dump at /scratch/dumps on onyx.hpdd.intel.com? I can&apos;t find it in that directory.&lt;br/&gt;
Thanks!&lt;/p&gt;</comment>
                            <comment id="225115" author="cliffw" created="Wed, 4 Apr 2018 14:57:00 +0000"  >&lt;p&gt;The dump is on spirit. spirit.hpdd.intel.com &lt;/p&gt;</comment>
                            <comment id="225391" author="hongchao.zhang" created="Sun, 8 Apr 2018 10:52:42 +0000"  >&lt;p&gt;Hi Cliff,&lt;br/&gt;
Thanks, but I can&apos;t log in to spirit anymore (it worked previously). I have reopened DCO-5820 to request access to spirit.&lt;/p&gt;</comment>
                            <comment id="263518" author="sarah" created="Tue, 18 Feb 2020 22:05:23 +0000"  >&lt;p&gt;Hit the problem again on master branch&lt;br/&gt;
tag-2.13.52&lt;br/&gt;
crash dump can be found on spirit at /scratch/dumps/soak-8.spirit.whamcloud.com/10.10.1.108-2020-02-15-11\:36\:21/&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[ 2575.647060] Lustre: Skipped 17 previous similar messages
[ 2575.694765] LustreError: dumping log to /tmp/lustre-log.1581765979.5236
[ 2576.220180] LNet: 4683:0:(o2iblnd_cb.c:3393:kiblnd_check_conns()) Timed out tx for 192.168.1.110@o2ib: 0 seconds
[ 2576.231562] LNet: 4683:0:(o2iblnd_cb.c:3393:kiblnd_check_conns()) Skipped 6 previous similar messages
[ 2576.241901] LNetError: 4683:0:(lib-msg.c:479:lnet_handle_local_failure()) ni 192.168.1.108@o2ib added to recovery queue. Health = 900
[ 2606.654791] LustreError: 6202:0:(ldlm_request.c:124:ldlm_expired_completion_wait()) ### lock timed out (enqueued at 1581765710, 300s ago); not entering recovery in server code, just going back to sleep ns: mdt-soaked-MDT0000_UUID lock: ffff88f8c632d580/0xa21a133420fc92f4 lrc: 3/1,0 mode: --/PR res: [0x20000040e:0x3:0x0].0x0 bits 0x13/0x0 rrc: 13 type: IBT flags: 0x40210000000000 nid: local remote: 0x0 expref: -99 pid: 6202 timeout: 0 lvb_type: 0
[ 2606.698855] LustreError: 6202:0:(ldlm_request.c:124:ldlm_expired_completion_wait()) Skipped 3 previous similar messages
[ 2643.197397] LNetError: 4683:0:(lib-msg.c:479:lnet_handle_local_failure()) ni 192.168.1.108@o2ib added to recovery queue. Health = 900
[ 2643.210833] LNetError: 4683:0:(lib-msg.c:479:lnet_handle_local_failure()) Skipped 1 previous similar message
[ 2767.192219] Lustre: MGS: Connection restored to 5c9f140e-4423-4 (at 192.168.1.110@o2ib)
[ 2767.201198] Lustre: Skipped 1 previous similar message
[ 2769.137827] Lustre: soaked-MDT0000: Received new LWP connection from 192.168.1.110@o2ib, removing former export from same NID
[ 2769.150490] Lustre: Skipped 1 previous similar message
[ 2853.947102] Lustre: soaked-MDT0002-osp-MDT0000: Connection restored to 192.168.1.110@o2ib (at 192.168.1.110@o2ib)
[ 2853.958590] Lustre: Skipped 2 previous similar messages
[ 2870.647167] Lustre: 4940:0:(service.c:1440:ptlrpc_at_send_early_reply()) @@@ Could not add any time (5/5), not sending early reply  req@ffff88f8af2fda00 x1658449052245760/t0(0) o36-&amp;gt;775f92d9-5437-4@192.168.1.127@o2ib:509/0 lens 528/2888 e 24 to 0 dl 1581766279 ref 2 fl Interpret:/0/0 rc 0/0 job:&apos;&apos;
[ 2871.318872] Lustre: 4858:0:(service.c:1440:ptlrpc_at_send_early_reply()) @@@ Could not add any time (5/5), not sending early reply  req@ffff88f4963b2d00 x1658449307111936/t0(0) o36-&amp;gt;4d2dbe7c-0a77-4@192.168.1.126@o2ib:509/0 lens 528/2888 e 24 to 0 dl 1581766279 ref 2 fl Interpret:/0/0 rc 0/0 job:&apos;&apos;
[ 2871.348299] Lustre: 4858:0:(service.c:1440:ptlrpc_at_send_early_reply()) Skipped 2 previous similar messages
[ 2877.572936] Lustre: soaked-MDT0000: Client 4d2dbe7c-0a77-4 (at 192.168.1.126@o2ib) reconnecting
[ 3153.983138] Lustre: soaked-MDT0000: Received new LWP connection from 192.168.1.110@o2ib, removing former export from same NID
[ 3153.995796] Lustre: Skipped 1 previous similar message
[ 3153.996927] Lustre: soaked-MDT0000: Connection restored to soaked-MDT0001-mdtlov_UUID (at 192.168.1.109@o2ib)
[ 3153.996931] Lustre: Skipped 4 previous similar messages
[ 3154.093652] LustreError: 5720:0:(osd_handler.c:4169:osd_ref_del()) soaked-MDT0000: nlink == 0 on [0x200011d4e:0x9eaf:0x0], maybe an upgraded file? (LU-3915)
[ 3154.109319] LustreError: 5720:0:(osd_handler.c:3592:osd_destroy()) ASSERTION( !lu_object_is_dying(dt-&amp;gt;do_lu.lo_header) ) failed:
[ 3154.122342] LustreError: 5720:0:(osd_handler.c:3592:osd_destroy()) LBUG
[ 3154.129735] Pid: 5720, comm: mdt_out01_006 3.10.0-1062.9.1.el7_lustre.x86_64 #1 SMP Wed Feb 12 06:45:58 UTC 2020
[ 3154.141104] Call Trace:
[ 3154.143850]  [&amp;lt;ffffffffc0b0ffac&amp;gt;] libcfs_call_trace+0x8c/0xc0 [libcfs]
[ 3154.151193]  [&amp;lt;ffffffffc0b1005c&amp;gt;] lbug_with_loc+0x4c/0xa0 [libcfs]
[ 3154.158133]  [&amp;lt;ffffffffc1224f80&amp;gt;] osd_destroy+0x4a0/0x760 [osd_ldiskfs]
[ 3154.165558]  [&amp;lt;ffffffffc0f92321&amp;gt;] out_obj_destroy+0x101/0x2c0 [ptlrpc]
[ 3154.172976]  [&amp;lt;ffffffffc0f92590&amp;gt;] out_tx_destroy_exec+0x20/0x190 [ptlrpc]
[ 3154.180640]  [&amp;lt;ffffffffc0f8c991&amp;gt;] out_tx_end+0xe1/0x5c0 [ptlrpc]
[ 3154.187425]  [&amp;lt;ffffffffc0f90c52&amp;gt;] out_handle+0x1442/0x1bb0 [ptlrpc]
[ 3154.194490]  [&amp;lt;ffffffffc0f8972a&amp;gt;] tgt_request_handle+0x95a/0x1610 [ptlrpc]
[ 3154.202224]  [&amp;lt;ffffffffc0f2b0f6&amp;gt;] ptlrpc_server_handle_request+0x256/0xb10 [ptlrpc]
[ 3154.210846]  [&amp;lt;ffffffffc0f2f4f4&amp;gt;] ptlrpc_main+0xbb4/0x1550 [ptlrpc]
[ 3154.217898]  [&amp;lt;ffffffffa58c61f1&amp;gt;] kthread+0xd1/0xe0
[ 3154.223385]  [&amp;lt;ffffffffa5f8dd37&amp;gt;] ret_from_fork_nospec_end+0x0/0x39
[ 3154.230407]  [&amp;lt;ffffffffffffffff&amp;gt;] 0xffffffffffffffff
[ 3154.235988] Kernel panic - not syncing: LBUG
[ 3154.240763] CPU: 30 PID: 5720 Comm: mdt_out01_006 Kdump: loaded Tainted: G           OE  ------------   3.10.0-1062.9.1.el7_lustre.x86_64 #1
[ 3154.254834] Hardware name: Intel Corporation S2600GZ ........../S2600GZ, BIOS SE5C600.86B.01.08.0003.022620131521 02/26/2013
[ 3154.267355] Call Trace:
[ 3154.270096]  [&amp;lt;ffffffffa5f7ac23&amp;gt;] dump_stack+0x19/0x1b
[ 3154.275834]  [&amp;lt;ffffffffa5f74967&amp;gt;] panic+0xe8/0x21f
[ 3154.281187]  [&amp;lt;ffffffffc0b100ab&amp;gt;] lbug_with_loc+0x9b/0xa0 [libcfs]
[ 3154.288092]  [&amp;lt;ffffffffc1224f80&amp;gt;] osd_destroy+0x4a0/0x760 [osd_ldiskfs]
[ 3154.295481]  [&amp;lt;ffffffffa5f80caa&amp;gt;] ? _cond_resched+0x3a/0x50
[ 3154.301699]  [&amp;lt;ffffffffa5f7fd42&amp;gt;] ? down_write+0x12/0x3d
[ 3154.307674]  [&amp;lt;ffffffffc0f92321&amp;gt;] out_obj_destroy+0x101/0x2c0 [ptlrpc]
[ 3154.314999]  [&amp;lt;ffffffffc0f92590&amp;gt;] out_tx_destroy_exec+0x20/0x190 [ptlrpc]
[ 3154.322612]  [&amp;lt;ffffffffc0f8c991&amp;gt;] out_tx_end+0xe1/0x5c0 [ptlrpc]
[ 3154.329353]  [&amp;lt;ffffffffc0f90c52&amp;gt;] out_handle+0x1442/0x1bb0 [ptlrpc]
[ 3154.336372]  [&amp;lt;ffffffffa596ad65&amp;gt;] ? tracing_is_on+0x15/0x30
[ 3154.342627]  [&amp;lt;ffffffffc0f8972a&amp;gt;] tgt_request_handle+0x95a/0x1610 [ptlrpc]
[ 3154.350305]  [&amp;lt;ffffffffc0af700e&amp;gt;] ? ktime_get_real_seconds+0xe/0x10 [libcfs]
[ 3154.358212]  [&amp;lt;ffffffffc0f2b0f6&amp;gt;] ptlrpc_server_handle_request+0x256/0xb10 [ptlrpc]
[ 3154.366791]  [&amp;lt;ffffffffc0f2787b&amp;gt;] ? ptlrpc_wait_event+0x12b/0x4f0 [ptlrpc]
[ 3154.374495]  [&amp;lt;ffffffffa58d3360&amp;gt;] ? task_rq_unlock+0x20/0x20
[ 3154.380825]  [&amp;lt;ffffffffa58d3903&amp;gt;] ? __wake_up+0x13/0x20
[ 3154.386689]  [&amp;lt;ffffffffc0f2f4f4&amp;gt;] ptlrpc_main+0xbb4/0x1550 [ptlrpc]
[ 3154.393738]  [&amp;lt;ffffffffc0f2e940&amp;gt;] ? ptlrpc_register_service+0xf90/0xf90 [ptlrpc]
[ 3154.401985]  [&amp;lt;ffffffffa58c61f1&amp;gt;] kthread+0xd1/0xe0
[ 3154.407429]  [&amp;lt;ffffffffa58c6120&amp;gt;] ? insert_kthread_work+0x40/0x40
[ 3154.414231]  [&amp;lt;ffffffffa5f8dd37&amp;gt;] ret_from_fork_nospec_begin+0x21/0x21
[ 3154.421517]  [&amp;lt;ffffffffa58c6120&amp;gt;] ? insert_kthread_work+0x40/0x40
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
                            <comment id="274814" author="sarah" created="Wed, 8 Jul 2020 23:49:20 +0000"  >&lt;p&gt;hit this again on lustre-master-ib build#437&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                            <outwardlinks description="is related to ">
                                        <issuelink>
            <issuekey id="47570">LU-9808</issuekey>
        </issuelink>
                            </outwardlinks>
                                                        </issuelinktype>
                    </issuelinks>
                <attachments>
                            <attachment id="29926" name="vmcore-dmesg.txt" size="873522" author="cliffw" created="Mon, 2 Apr 2018 18:00:09 +0000"/>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzzv3j:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                </customfields>
    </item>
</channel>
</rss>