<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 02:29:44 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary, append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92">
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-9839] lov_object.c:879:lov_layout_change()) ASSERTION( atomic_read(&amp;lov-&gt;lo_active_ios) == 0 ) failed</title>
                <link>https://jira.whamcloud.com/browse/LU-9839</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;Just hit this assertion on master-next that was immediately followed by a NULL pointer deref&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[60342.200382] Lustre: DEBUG MARKER: == sanity test 405: Various layout swap lock tests =================================================== 16:21:48 (1501964508)
[60385.832096] LustreError: 10934:0:(lov_object.c:879:lov_layout_change()) ASSERTION( atomic_read(&amp;amp;lov-&amp;gt;lo_active_ios) == 0 ) failed: 
[60385.832103] BUG: unable to handle kernel NULL pointer dereference at           (null)
[60385.832111] IP: [&amp;lt;ffffffffa078decc&amp;gt;] lov_sub_get+0x1ec/0x760 [lov]
[60385.832113] PGD 0 
[60385.832114] Oops: 0000 [#1] SMP DEBUG_PAGEALLOC
[60385.832130] Modules linked in: lustre(OE) ofd(OE) osp(OE) lod(OE) ost(OE) mdt(OE) mdd(OE) mgs(OE) osd_ldiskfs(OE) ldiskfs(OE) lquota(OE) lfsck(OE) obdecho(OE) mgc(OE) lov(OE) osc(OE) mdc(OE) lmv(OE) fid(OE) fld(OE) ptlrpc_gss(OE) ptlrpc(OE) obdclass(OE) ksocklnd(OE) lnet(OE) libcfs(OE) brd ext4 loop mbcache jbd2 rpcsec_gss_krb5 syscopyarea sysfillrect ata_generic sysimgblt pata_acpi ttm virtio_balloon drm_kms_helper pcspkr ata_piix serio_raw virtio_console virtio_blk floppy drm i2c_piix4 libata i2c_core nfsd ip_tables [last unloaded: libcfs]
[60385.832132] CPU: 2 PID: 5577 Comm: kworker/u16:0 Tainted: G        W  OE  ------------   3.10.0-debug #1
[60385.832133] Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011
[60385.832138] Workqueue: writeback bdi_writeback_workfn (flush-lustre-3)
[60385.832138] task: ffff88009d1c88c0 ti: ffff880058194000 task.ti: ffff880058194000
[60385.832143] RIP: 0010:[&amp;lt;ffffffffa078decc&amp;gt;]  [&amp;lt;ffffffffa078decc&amp;gt;] lov_sub_get+0x1ec/0x760 [lov]
[60385.832145] RSP: 0018:ffff8800581978c0  EFLAGS: 00010297
[60385.832146] RAX: ffff880014330f68 RBX: ffff880078ddedf0 RCX: 0000000000000000
[60385.832146] RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffff880014330f68
[60385.832147] RBP: ffff880058197900 R08: 0000000000000001 R09: 0000000000000000
[60385.832147] R10: 0000000000000000 R11: ffffffffffffffff R12: 0000000000000000
[60385.832148] R13: ffff880078ddefd0 R14: ffff880078ddeda0 R15: ffff88000c9b7e10
[60385.832149] FS:  0000000000000000(0000) GS:ffff8800bc680000(0000) knlGS:0000000000000000
[60385.832149] CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
[60385.832150] CR2: 0000000000000000 CR3: 00000000ae6b8000 CR4: 00000000000006e0
[60385.832152] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[60385.832152] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[60385.832153] Stack:
[60385.832155]  0000000000000000 0000000000000000 ffff880021d0fed0 0000000000000000
[60385.832156]  0000000000000000 ffff88002e8451d0 0000000000000000 ffff880078ddeda0
[60385.832157]  ffff8800581979b0 ffffffffa078ecda ffff88000c9b7e10 ffff880021dabfa0
[60385.832157] Call Trace:
[60385.832162]  [&amp;lt;ffffffffa078ecda&amp;gt;] lov_io_iter_init+0x1ca/0x8b0 [lov]
[60385.832185]  [&amp;lt;ffffffffa037794c&amp;gt;] cl_io_iter_init+0x5c/0x120 [obdclass]
[60385.832201]  [&amp;lt;ffffffffa0379c5c&amp;gt;] cl_io_loop+0x19c/0xb30 [obdclass]
[60385.832213]  [&amp;lt;ffffffffa0dcae9b&amp;gt;] cl_sync_file_range+0x2db/0x380 [lustre]
[60385.832222]  [&amp;lt;ffffffffa0dec5ba&amp;gt;] ll_writepages+0x7a/0x200 [lustre]
[60385.832225]  [&amp;lt;ffffffff8117e8b1&amp;gt;] do_writepages+0x21/0x50
[60385.832227]  [&amp;lt;ffffffff81218f50&amp;gt;] __writeback_single_inode+0x40/0x2b0
[60385.832228]  [&amp;lt;ffffffff81219c11&amp;gt;] writeback_sb_inodes+0x2b1/0x4d0
[60385.832230]  [&amp;lt;ffffffff81219ecf&amp;gt;] __writeback_inodes_wb+0x9f/0xd0
[60385.832232]  [&amp;lt;ffffffff8121a76b&amp;gt;] wb_writeback+0x28b/0x340
[60385.832233]  [&amp;lt;ffffffff8121c9cc&amp;gt;] bdi_writeback_workfn+0x20c/0x4e0
[60385.832235]  [&amp;lt;ffffffff8109add6&amp;gt;] process_one_work+0x206/0x5b0
[60385.832236]  [&amp;lt;ffffffff8109ad6b&amp;gt;] ? process_one_work+0x19b/0x5b0
[60385.832238]  [&amp;lt;ffffffff8109b29b&amp;gt;] worker_thread+0x11b/0x3a0
[60385.832239]  [&amp;lt;ffffffff8109b180&amp;gt;] ? process_one_work+0x5b0/0x5b0
[60385.832240]  [&amp;lt;ffffffff810a2eda&amp;gt;] kthread+0xea/0xf0
[60385.832242]  [&amp;lt;ffffffff810a2df0&amp;gt;] ? kthread_create_on_node+0x140/0x140
[60385.832245]  [&amp;lt;ffffffff8170fbd8&amp;gt;] ret_from_fork+0x58/0x90
[60385.832246]  [&amp;lt;ffffffff810a2df0&amp;gt;] ? kthread_create_on_node+0x140/0x140
[60385.832257] Code: 85 bc 04 00 00 44 8b 82 08 01 00 00 8b 4d cc 48 8b 75 c0 44 39 c1 0f 83 aa 04 00 00 48 8b 92 10 01 00 00 48 89 c7 4a 8b 54 22 18 &amp;lt;48&amp;gt; 8b 0c f2 c7 83 b0 01 00 00 00 00 00 00 4c 89 7b 38 48 81 c1 
[60385.832261] RIP  [&amp;lt;ffffffffa078decc&amp;gt;] lov_sub_get+0x1ec/0x760 [lov]
[60385.832262]  RSP &amp;lt;ffff8800581978c0&amp;gt;
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;modules and crashdump are in /exports/crashdumps/192.168.10.224-2017-08-05-16\:22\:38 on my node&lt;br/&gt;
tag in my tree: master-20170804&lt;/p&gt;</description>
                <environment></environment>
        <key id="47675">LU-9839</key>
            <summary>lov_object.c:879:lov_layout_change()) ASSERTION( atomic_read(&amp;lov-&gt;lo_active_ios) == 0 ) failed</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                <priority id="3" iconUrl="https://jira.whamcloud.com/images/icons/priorities/major.svg">Major</priority>
                <status id="1" iconUrl="https://jira.whamcloud.com/images/icons/statuses/open.png" description="The issue is open and ready for the assignee to start work on it.">Open</status>
                <statusCategory id="2" key="new" colorName="default"/>
                <resolution id="-1">Unresolved</resolution>
                <assignee username="simmonsja">James A Simmons</assignee>
                <reporter username="green">Oleg Drokin</reporter>
                <labels>
                    <label>ornl</label>
                </labels>
                <created>Sun, 6 Aug 2017 05:19:50 +0000</created>
                <updated>Mon, 14 Aug 2023 19:34:48 +0000</updated>
                <due></due>
                <votes>0</votes>
                <watches>7</watches>
                <comments>
                            <comment id="204571" author="green" created="Sun, 6 Aug 2017 05:25:05 +0000"  >&lt;p&gt;Also the backtrace on the asserting thread is:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;crash&amp;gt; bt 10934
PID: 10934  TASK: ffff88003811e540  CPU: 3   COMMAND: &quot;swap_lock_test&quot;
 #0 [ffff8800bc6c5e68] crash_nmi_callback at ffffffff81047152
 #1 [ffff8800bc6c5e78] nmi_handle at ffffffff81708369
 #2 [ffff8800bc6c5ec8] do_nmi at ffffffff817084a8
 #3 [ffff8800bc6c5ef0] end_repeat_nmi at ffffffff817077e3
    [exception RIP: vgacon_scroll+877]
    RIP: ffffffff813d405d  RSP: ffff880014dc3680  RFLAGS: 00000083
    RAX: 0000000000000004  RBX: ffff8800b8de4800  RCX: 00000000000000a0
    RDX: ffff8800000bd6e0  RSI: ffff8800b8df8520  RDI: 00000000000000a0
    RBP: ffff880014dc36b8   R8: 0000000000008520   R9: 0000000000000730
    R10: 0000000000000000  R11: 0000000000000050  R12: ffff8800000bd6e0
    R13: 0000000000000198  R14: 0000000000000198  R15: 000000000000ffa0
    ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
--- &amp;lt;NMI exception stack&amp;gt; ---
 #4 [ffff880014dc3680] vgacon_scroll at ffffffff813d405d
 #5 [ffff880014dc36c0] scrup at ffffffff8144eacc
 #6 [ffff880014dc36f0] lf at ffffffff8144eb80
 #7 [ffff880014dc3720] vt_console_print at ffffffff8144ee62
 #8 [ffff880014dc3788] call_console_drivers.constprop.17 at ffffffff81077e41
 #9 [ffff880014dc37b0] console_unlock at ffffffff8107862c
#10 [ffff880014dc37e8] vprintk_emit at ffffffff81078926
#11 [ffff880014dc3858] printk at ffffffff816f8e55
#12 [ffff880014dc38b8] cfs_print_to_console at ffffffffa01bb59a [libcfs]
#13 [ffff880014dc38e8] libcfs_debug_vmsg2 at ffffffffa01c65ee [libcfs]
#14 [ffff880014dc3a30] libcfs_debug_msg at ffffffffa01c6cb7 [libcfs]
#15 [ffff880014dc3a90] lov_conf_set at ffffffffa07a19e8 [lov]
#16 [ffff880014dc3b00] cl_conf_set at ffffffffa0370240 [obdclass]
#17 [ffff880014dc3b30] ll_layout_conf at ffffffffa0dceea1 [lustre]
#18 [ffff880014dc3bb8] ll_layout_refresh at ffffffffa0dcf7dd [lustre]
#19 [ffff880014dc3c68] vvp_io_init at ffffffffa0e14c67 [lustre]
#20 [ffff880014dc3cb8] cl_io_init0 at ffffffffa0377d48 [obdclass]
#21 [ffff880014dc3cf0] cl_io_init at ffffffffa0377eda [obdclass]
#22 [ffff880014dc3d20] cl_get_grouplock at ffffffffa0e0d02a [lustre]
#23 [ffff880014dc3d70] ll_get_grouplock at ffffffffa0dd03ee [lustre]
#24 [ffff880014dc3e08] ll_file_ioctl at ffffffffa0dd47cf [lustre]
#25 [ffff880014dc3eb8] do_vfs_ioctl at ffffffff81201985
#26 [ffff880014dc3f30] sys_ioctl at ffffffff81201c41
#27 [ffff880014dc3f80] system_call_fastpath at ffffffff8170fc89
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
                            <comment id="211129" author="green" created="Sun, 15 Oct 2017 18:35:43 +0000"  >&lt;p&gt;Just hit this assertion again on master-next in sanity test 405:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[ 6650.701477] Lustre: DEBUG MARKER: == sanity test 405: Various layout swap lock tests =================================================== 21:10:20 (1507857020)
[ 6718.387820] LustreError: 7955:0:(lov_object.c:879:lov_layout_change()) ASSERTION( atomic_read(&amp;amp;lov-&amp;gt;lo_active_ios) == 0 ) failed: 
[ 6718.387821] LustreError: 7955:0:(lov_object.c:879:lov_layout_change()) LBUG
[ 6718.387822] Pid: 7955, comm: swap_lock_test
[ 6718.387822] 
Call Trace:
[ 6718.387836]  [&amp;lt;ffffffffa01d57fe&amp;gt;] libcfs_call_trace+0x4e/0x60 [libcfs]
[ 6718.387843]  [&amp;lt;ffffffffa01d588c&amp;gt;] lbug_with_loc+0x4c/0xb0 [libcfs]
[ 6718.387853]  [&amp;lt;ffffffffa08de974&amp;gt;] lov_conf_set+0xab4/0xac0 [lov]
[ 6718.387869]  [&amp;lt;ffffffffa0f70001&amp;gt;] ? ll_file_flock+0x841/0xd80 [lustre]
[ 6718.387898]  [&amp;lt;ffffffffa03931b0&amp;gt;] cl_conf_set+0x60/0x120 [obdclass]
[ 6718.387908]  [&amp;lt;ffffffffa0f7d301&amp;gt;] ll_layout_conf+0x81/0x400 [lustre]
[ 6718.387918]  [&amp;lt;ffffffffa0f7dc3d&amp;gt;] ll_layout_refresh+0x5bd/0xb10 [lustre]
[ 6718.387933]  [&amp;lt;ffffffffa0fc3797&amp;gt;] vvp_io_init+0x347/0x440 [lustre]
[ 6718.387957]  [&amp;lt;ffffffffa039acb8&amp;gt;] cl_io_init0.isra.17+0x88/0x160 [obdclass]
[ 6718.387974]  [&amp;lt;ffffffffa039ae4a&amp;gt;] cl_io_init+0x3a/0x80 [obdclass]
[ 6718.387986]  [&amp;lt;ffffffffa0fbbc2a&amp;gt;] cl_get_grouplock+0xca/0x2f0 [lustre]
[ 6718.387995]  [&amp;lt;ffffffffa0f7e84e&amp;gt;] ll_get_grouplock+0x22e/0x6d0 [lustre]
[ 6718.388004]  [&amp;lt;ffffffffa0f834b2&amp;gt;] ll_file_ioctl+0x47c2/0x48a0 [lustre]
[ 6718.388008]  [&amp;lt;ffffffff81201985&amp;gt;] do_vfs_ioctl+0x305/0x520
[ 6718.388011]  [&amp;lt;ffffffff81706487&amp;gt;] ? _raw_spin_unlock_irq+0x27/0x50
[ 6718.388013]  [&amp;lt;ffffffff81201c41&amp;gt;] SyS_ioctl+0xa1/0xc0
[ 6718.388016]  [&amp;lt;ffffffff8170fc89&amp;gt;] system_call_fastpath+0x16/0x1b
[ 6718.388017] 
[ 6718.388018] Kernel panic - not syncing: LBUG
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Crashdump is in 192.168.10.224-2017-10-12-21\:11\:34 on my system&lt;/p&gt;</comment>
                            <comment id="213299" author="green" created="Fri, 10 Nov 2017 00:56:23 +0000"  >&lt;p&gt;Just hit it once more on current master-next&lt;/p&gt;</comment>
                            <comment id="368869" author="simmonsja" created="Sat, 8 Apr 2023 03:18:16 +0000"  >&lt;p&gt;Just ran into this on 2.15 LTS production system.&lt;/p&gt;</comment>
                            <comment id="370850" author="simmonsja" created="Thu, 27 Apr 2023 19:06:05 +0000"  >&lt;p&gt;This bug shouldn&apos;t happen. In lov_layout_change() we call llo_delete(), which in turn calls lov_layout_wait(), which waits on lo_waitq until lo_active_ios drops to zero. The reference count should therefore be zero, yet the LASSERT fires only a few steps later. How could new IO have been started in that window?&lt;/p&gt;</comment>
                            <comment id="370927" author="gerrit" created="Fri, 28 Apr 2023 14:23:08 +0000"  >&lt;p&gt;&quot;James Simmons &amp;lt;jsimmons@infradead.org&amp;gt;&quot; uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/c/fs/lustre-release/+/50800&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/c/fs/lustre-release/+/50800&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-9839&quot; title=&quot;lov_object.c:879:lov_layout_change()) ASSERTION( atomic_read(&amp;amp;lov-&amp;gt;lo_active_ios) == 0 ) failed&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-9839&quot;&gt;LU-9839&lt;/a&gt; lov: ensure lo_waitq is drained for layout change&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 9bfd454ba335124f8b98dc51eb5f727f58d972a7&lt;/p&gt;</comment>
                            <comment id="375063" author="gerrit" created="Sat, 10 Jun 2023 14:41:55 +0000"  >&lt;p&gt;&quot;Alexander Zarochentsev &amp;lt;alexander.zarochentsev@hpe.com&amp;gt;&quot; uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/c/fs/lustre-release/+/51269&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/c/fs/lustre-release/+/51269&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-9839&quot; title=&quot;lov_object.c:879:lov_layout_change()) ASSERTION( atomic_read(&amp;amp;lov-&amp;gt;lo_active_ios) == 0 ) failed&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-9839&quot;&gt;LU-9839&lt;/a&gt; clio: debug active_ios race&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 8705d17265065eeab1f058218e1fa870cc6f5cb6&lt;/p&gt;</comment>
                            <comment id="375064" author="zam" created="Sat, 10 Jun 2023 14:44:06 +0000"  >
&lt;blockquote&gt;
&lt;p&gt;&lt;a href=&quot;https://review.whamcloud.com/c/fs/lustre-release/+/51269&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/c/fs/lustre-release/+/51269&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-9839&quot; title=&quot;lov_object.c:879:lov_layout_change()) ASSERTION( atomic_read(&amp;amp;lov-&amp;gt;lo_active_ios) == 0 ) failed&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-9839&quot;&gt;LU-9839&lt;/a&gt; clio: debug active_ios race&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;The debug patch crashed and gave additional information about a parallel I/O:&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[2577729.757681] LustreError: 11039:0:(lov_object.c:1286:lov_layout_change()) ASSERTION( atomic_read(&amp;amp;lov-&amp;gt;lo_active_ios) == 0 ) failed: ios: 1, last 
io: osc_lru_shrink
[2577729.776963] LustreError: 11039:0:(lov_object.c:1286:lov_layout_change()) LBUG
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
                            <comment id="378373" author="gerrit" created="Wed, 12 Jul 2023 11:34:02 +0000"  >&lt;p&gt;&quot;Alexander Zarochentsev &amp;lt;alexander.zarochentsev@hpe.com&amp;gt;&quot; uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/c/fs/lustre-release/+/51638&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/c/fs/lustre-release/+/51638&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-9839&quot; title=&quot;lov_object.c:879:lov_layout_change()) ASSERTION( atomic_read(&amp;amp;lov-&amp;gt;lo_active_ios) == 0 ) failed&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-9839&quot;&gt;LU-9839&lt;/a&gt; clio: lov active ios accounting fix&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 72990b262c4dd763a109e7755f697acb0046c543&lt;/p&gt;</comment>
                    </comments>
                    <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                    <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzzhvz:</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                </customfields>
    </item>
</channel>
</rss>