<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 03:12:04 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-14705] ASSERTION( llog_osd_exist(loghandle) ) failed: with concurrent &quot;lfs changelog_clear&quot;</title>
                <link>https://jira.whamcloud.com/browse/LU-14705</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;On a client, several instances of robinhood were started, reading and clearing changelogs with the same user &quot;cl1&quot;.&lt;/p&gt;

&lt;p&gt;Error -22 (-EINVAL) is returned on the client and on the server:&lt;/p&gt;
&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;
mdd_changelog_clear()) scratch2-MDD0002: Failure to clear the changelog &lt;span class=&quot;code-keyword&quot;&gt;for&lt;/span&gt; user 1: -22
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;This seems normal: one rbh instance is trying to clear changelog records that have already been cleared by the other instance.&lt;/p&gt;

&lt;p&gt;After a while, the MDS crashes with the following backtrace:&lt;/p&gt;
&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;
[125289.545832] LustreError: 142168:0:(llog_osd.c:905:llog_osd_next_block()) ASSERTION( llog_osd_exist(loghandle) ) failed: 
[125289.556790] LustreError: 142168:0:(llog_osd.c:905:llog_osd_next_block()) LBUG
[125289.564017] Pid: 142168, comm: mdt01_107 3.10.0-1062.18.1.el7.x86_64 #1 SMP Tue Mar 17 23:49:17 UTC 2020
[125289.564017] Call Trace:
[125289.564026]  [&amp;lt;ffffffffc0ae57cc&amp;gt;] libcfs_call_trace+0x8c/0xc0 [libcfs]
[125289.570676]  [&amp;lt;ffffffffc0ae587c&amp;gt;] lbug_with_loc+0x4c/0xa0 [libcfs]
[125289.576977]  [&amp;lt;ffffffffc0f82158&amp;gt;] llog_osd_next_block+0xb28/0xbc0 [obdclass]
[125289.584162]  [&amp;lt;ffffffffc0f75088&amp;gt;] llog_process_thread+0x338/0x1a10 [obdclass]
[125289.591417]  [&amp;lt;ffffffffc0f7681c&amp;gt;] llog_process_or_fork+0xbc/0x450 [obdclass]
[125289.598584]  [&amp;lt;ffffffffc0f7bbf9&amp;gt;] llog_cat_process_cb+0x239/0x250 [obdclass]
[125289.605751]  [&amp;lt;ffffffffc0f755af&amp;gt;] llog_process_thread+0x85f/0x1a10 [obdclass]
[125289.613006]  [&amp;lt;ffffffffc0f7681c&amp;gt;] llog_process_or_fork+0xbc/0x450 [obdclass]
[125289.620173]  [&amp;lt;ffffffffc0f78581&amp;gt;] llog_cat_process_or_fork+0x1e1/0x360 [obdclass]
[125289.627776]  [&amp;lt;ffffffffc0f7872e&amp;gt;] llog_cat_process+0x2e/0x30 [obdclass]
[125289.634509]  [&amp;lt;ffffffffc197ca34&amp;gt;] llog_changelog_cancel.isra.16+0x54/0x1c0 [mdd]
[125289.642023]  [&amp;lt;ffffffffc197eb60&amp;gt;] mdd_changelog_llog_cancel+0xd0/0x270 [mdd]
[125289.649190]  [&amp;lt;ffffffffc1981c13&amp;gt;] mdd_changelog_clear+0x503/0x690 [mdd]
[125289.655920]  [&amp;lt;ffffffffc1984d03&amp;gt;] mdd_iocontrol+0x163/0x540 [mdd]
[125289.662130]  [&amp;lt;ffffffffc18063ec&amp;gt;] mdt_iocontrol+0x5ec/0xb00 [mdt]
[125289.668355]  [&amp;lt;ffffffffc1806d84&amp;gt;] mdt_set_info+0x484/0x490 [mdt]
[125289.674479]  [&amp;lt;ffffffffc1392f5a&amp;gt;] tgt_request_handle+0xada/0x1570 [ptlrpc]
[125289.681535]  [&amp;lt;ffffffffc13378cb&amp;gt;] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc]
[125289.689336]  [&amp;lt;ffffffffc133b234&amp;gt;] ptlrpc_main+0xb34/0x1470 [ptlrpc]
[125289.695749]  [&amp;lt;ffffffffa6ac6321&amp;gt;] kthread+0xd1/0xe0
[125289.700751]  [&amp;lt;ffffffffa718dd1d&amp;gt;] ret_from_fork_nospec_begin+0x7/0x21
[125289.707314]  [&amp;lt;ffffffffffffffff&amp;gt;] 0xffffffffffffffff
[125289.712416] Kernel panic - not syncing: LBUG
[125289.716774] CPU: 18 PID: 142168 Comm: mdt01_107 Kdump: loaded Tainted: G           OE  ------------ T 3.10.0-1062.18.1.el7.x86_64 #1
[125289.728760] Hardware name: Bull SAS BullSequana X430-E5 2U-1N/X11DPi-NT, BIOS 3.3 02/24/2020
[125289.737279] Call Trace:
[125289.739826]  [&amp;lt;ffffffffa717b416&amp;gt;] dump_stack+0x19/0x1b
[125289.745046]  [&amp;lt;ffffffffa7174a0b&amp;gt;] panic+0xe8/0x21f
[125289.749931]  [&amp;lt;ffffffffc0ae58cb&amp;gt;] lbug_with_loc+0x9b/0xa0 [libcfs]
[125289.756211]  [&amp;lt;ffffffffc0f82158&amp;gt;] llog_osd_next_block+0xb28/0xbc0 [obdclass]
[125289.763349]  [&amp;lt;ffffffffc0aebd68&amp;gt;] ? libcfs_debug_vmsg2+0x6d8/0xb30 [libcfs]
[125289.770410]  [&amp;lt;ffffffffc0f75088&amp;gt;] llog_process_thread+0x338/0x1a10 [obdclass]
[125289.777630]  [&amp;lt;ffffffffc1980160&amp;gt;] ? mdd_obd_set_info_async+0x440/0x440 [mdd]
[125289.784769]  [&amp;lt;ffffffffc0f7681c&amp;gt;] llog_process_or_fork+0xbc/0x450 [obdclass]
[125289.791912]  [&amp;lt;ffffffffc0f7bbf9&amp;gt;] llog_cat_process_cb+0x239/0x250 [obdclass]
[125289.799052]  [&amp;lt;ffffffffc0f755af&amp;gt;] llog_process_thread+0x85f/0x1a10 [obdclass]
[125289.806283]  [&amp;lt;ffffffffc0f7b9c0&amp;gt;] ? llog_cat_cancel_records+0x3d0/0x3d0 [obdclass]
[125289.813940]  [&amp;lt;ffffffffc0f7681c&amp;gt;] llog_process_or_fork+0xbc/0x450 [obdclass]
[125289.821083]  [&amp;lt;ffffffffc0f7b9c0&amp;gt;] ? llog_cat_cancel_records+0x3d0/0x3d0 [obdclass]
[125289.828745]  [&amp;lt;ffffffffc0f78581&amp;gt;] llog_cat_process_or_fork+0x1e1/0x360 [obdclass]
[125289.836313]  [&amp;lt;ffffffffc1980160&amp;gt;] ? mdd_obd_set_info_async+0x440/0x440 [mdd]
[125289.843452]  [&amp;lt;ffffffffc0f7872e&amp;gt;] llog_cat_process+0x2e/0x30 [obdclass]
[125289.850153]  [&amp;lt;ffffffffc197ca34&amp;gt;] llog_changelog_cancel.isra.16+0x54/0x1c0 [mdd]
[125289.857631]  [&amp;lt;ffffffffa6b07632&amp;gt;] ? ktime_get+0x52/0xe0
[125289.862944]  [&amp;lt;ffffffffc197eb60&amp;gt;] mdd_changelog_llog_cancel+0xd0/0x270 [mdd]
[125289.870080]  [&amp;lt;ffffffffc1981c13&amp;gt;] mdd_changelog_clear+0x503/0x690 [mdd]
[125289.876777]  [&amp;lt;ffffffffc1984d03&amp;gt;] mdd_iocontrol+0x163/0x540 [mdd]
[125289.882975]  [&amp;lt;ffffffffc0fc08c3&amp;gt;] ? lu_context_init+0xd3/0x1f0 [obdclass]
[125289.889853]  [&amp;lt;ffffffffc18063ec&amp;gt;] mdt_iocontrol+0x5ec/0xb00 [mdt]
[125289.896039]  [&amp;lt;ffffffffc1806d84&amp;gt;] mdt_set_info+0x484/0x490 [mdt]
[125289.902155]  [&amp;lt;ffffffffc1392f5a&amp;gt;] tgt_request_handle+0xada/0x1570 [ptlrpc]
[125289.909141]  [&amp;lt;ffffffffc136c6a1&amp;gt;] ? ptlrpc_nrs_req_get_nolock0+0xd1/0x170 [ptlrpc]
[125289.916794]  [&amp;lt;ffffffffc0ae5bde&amp;gt;] ? ktime_get_real_seconds+0xe/0x10 [libcfs]
[125289.923948]  [&amp;lt;ffffffffc13378cb&amp;gt;] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc]
[125289.931713]  [&amp;lt;ffffffffc13346e5&amp;gt;] ? ptlrpc_wait_event+0xa5/0x360 [ptlrpc]
[125289.938590]  [&amp;lt;ffffffffa6ad3a33&amp;gt;] ? __wake_up+0x13/0x20
[125289.943924]  [&amp;lt;ffffffffc133b234&amp;gt;] ptlrpc_main+0xb34/0x1470 [ptlrpc]
[125289.950272]  [&amp;lt;ffffffffa7180d92&amp;gt;] ? __schedule+0x402/0x840
[125289.955870]  [&amp;lt;ffffffffc133a700&amp;gt;] ? ptlrpc_register_service+0xf80/0xf80 [ptlrpc]
[125289.963349]  [&amp;lt;ffffffffa6ac6321&amp;gt;] kthread+0xd1/0xe0
[125289.968315]  [&amp;lt;ffffffffa6ac6250&amp;gt;] ? insert_kthread_work+0x40/0x40
[125289.974497]  [&amp;lt;ffffffffa718dd1d&amp;gt;] ret_from_fork_nospec_begin+0x7/0x21
[125289.981023]  [&amp;lt;ffffffffa6ac6250&amp;gt;] ? insert_kthread_work+0x40/0x40
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;We were able to extract the dk log from the crash dump:&lt;/p&gt;
&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;
00000004:00000080:25.0:1619726994.236714:0:142143:0:(mdt_handler.c:6728:mdt_iocontrol()) handling ioctl cmd 0x424066b3
00000004:00000080:25.0:1619726994.236723:0:142143:0:(mdd_device.c:1780:mdd_changelog_clear()) scratch2-MDD0002: Purge request: id=1, endrec=255008878
00000040:00080000:25.0:1619726994.236727:0:142143:0:(llog.c:655:llog_process_thread()) index: 1, lh_last_idx: 1 synced_idx: 1 lgh_last_idx: 1
00000040:00080000:25.0:1619726994.236728:0:142143:0:(llog_cat.c:794:llog_cat_process_common()) processing log [0xb2:0x1:0x0]:0 at index 1 of catalog [0x6:0xa:0x0]
00000004:00000080:25.0:1619726994.236732:0:142143:0:(mdd_device.c:1753:mdd_changelog_clear_cb()) Rewriting changelog user 1 endrec to 255008878
00000040:00080000:25.0:1619726994.236738:0:142143:0:(llog.c:705:llog_process_thread()) stop processing plain 0xb2:1:0 index 2 count 2
00000040:00080000:25.0:1619726994.236739:0:142143:0:(llog.c:705:llog_process_thread()) stop processing catalog 0x6:10:0 index 2 count 2
00000004:00000080:25.0:1619726994.236740:0:142143:0:(mdd_device.c:1818:mdd_changelog_clear()) scratch2-MDD0002: Purging changelog entries up to 255008878
00000040:00080000:25.0:1619726994.236753:0:142143:0:(llog.c:655:llog_process_thread()) index: 4033, lh_last_idx: 4325 synced_idx: 0 lgh_last_idx: 4325
00000040:00080000:25.0:1619726994.236754:0:142143:0:(llog_cat.c:794:llog_cat_process_common()) processing log [0x1c1e:0x1:0x0]:0 at index 4033 of catalog [0x5:0xa:0x0]
00000004:00000080:18.0:1619726994.237619:0:142168:0:(mdt_handler.c:6728:mdt_iocontrol()) handling ioctl cmd 0x424066b3
00000004:00000080:18.0:1619726994.237630:0:142168:0:(mdd_device.c:1780:mdd_changelog_clear()) scratch2-MDD0002: Purge request: id=1, endrec=255008878
00000040:00080000:18.0:1619726994.237637:0:142168:0:(llog.c:655:llog_process_thread()) index: 1, lh_last_idx: 1 synced_idx: 1 lgh_last_idx: 1
00000040:00080000:18.0:1619726994.237638:0:142168:0:(llog_cat.c:794:llog_cat_process_common()) processing log [0xb2:0x1:0x0]:0 at index 1 of catalog [0x6:0xa:0x0]
00000004:00000080:18.0:1619726994.237644:0:142168:0:(mdd_device.c:1753:mdd_changelog_clear_cb()) Rewriting changelog user 1 endrec to 255008878
00000040:00080000:18.0:1619726994.237650:0:142168:0:(llog.c:705:llog_process_thread()) stop processing plain 0xb2:1:0 index 2 count 2
00000040:00080000:18.0:1619726994.237651:0:142168:0:(llog.c:705:llog_process_thread()) stop processing catalog 0x6:10:0 index 2 count 2
00000004:00000080:18.0:1619726994.237652:0:142168:0:(mdd_device.c:1818:mdd_changelog_clear()) scratch2-MDD0002: Purging changelog entries up to 255008878
00000040:00080000:18.0:1619726994.237666:0:142168:0:(llog.c:655:llog_process_thread()) index: 4033, lh_last_idx: 4325 synced_idx: 0 lgh_last_idx: 4325
00000040:00080000:18.0:1619726994.237666:0:142168:0:(llog_cat.c:794:llog_cat_process_common()) processing log [0x1c1e:0x1:0x0]:0 at index 4033 of catalog [0x5:0xa:0x0]
00000040:00080000:25.0:1619726994.237839:0:142143:0:(llog_cat.c:1123:llog_cat_set_first_idx()) catlog [0x5:0xa:0x0] first idx 4033, last_idx 4325
00000040:00080000:25.0:1619726994.237843:0:142143:0:(llog_cat.c:1162:llog_cat_cleanup()) cancel plain log [0x1c1e:0x1:0x0] at index 4033 of catalog [0x5:0xa:0x0]
00000040:00080000:25.0:1619726994.237844:0:142143:0:(llog.c:705:llog_process_thread()) stop processing plain 0x1c1e:1:0 index 64767 count 1
00000040:00040000:18.0:1619726994.237845:0:142168:0:(llog_osd.c:905:llog_osd_next_block()) ASSERTION( llog_osd_exist(loghandle) ) failed: 
00000040:00080000:25.0:1619726994.237845:0:142143:0:(llog.c:655:llog_process_thread()) index: 4034, lh_last_idx: 4325 synced_idx: 0 lgh_last_idx: 4325
00000040:00080000:25.0:1619726994.237846:0:142143:0:(llog_cat.c:794:llog_cat_process_common()) processing log [0x1c1f:0x1:0x0]:0 at index 4034 of catalog [0x5:0xa:0x0]
00000040:00080000:25.0:1619726994.238798:0:142143:0:(llog.c:705:llog_process_thread()) stop processing plain 0x1c1f:1:0 index 530 count 62869
00000040:00080000:25.0:1619726994.238799:0:142143:0:(llog.c:705:llog_process_thread()) stop processing catalog 0x5:10:0 index 4034 count 293
00000040:00040000:18.0:1619726994.248804:0:142168:0:(llog_osd.c:905:llog_osd_next_block()) LBUG
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;It seems that one mdd_changelog_clear instance deletes a plain llog while the other instance is trying to retrieve the next record of the same plain llog.&lt;/p&gt;
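&lt;p&gt;As an illustration only, here is a minimal user-space model of this check-then-use race. All names (PlainLlog, clear_worker, read_worker) are hypothetical and do not correspond to Lustre APIs; the interleaving is forced with an event to match the ordering seen in the dk log above, where instance A cancels the plain log while instance B still holds a handle to it:&lt;/p&gt;

```python
# Hypothetical user-space model of the LU-14705 race: two "changelog_clear"
# workers share a plain log. Worker A deletes the fully-cleared plain log;
# worker B, already iterating the same log, then trips its "log still
# exists" check (the analogue of the llog_osd_exist() assertion).
import threading

class PlainLlog:
    def __init__(self, records):
        self.records = list(records)
        self.exists = True  # analogue of the on-disk llog object

    def next_block(self):
        # analogue of llog_osd_next_block(): asserts the backing
        # object still exists before reading the next record
        assert self.exists, "ASSERTION( llog_osd_exist(loghandle) ) failed"
        return self.records.pop(0) if self.records else None

def clear_worker(log, deleted_evt):
    # instance A: all records cleared, cancel the plain log
    log.exists = False
    deleted_evt.set()

def read_worker(log, deleted_evt, out):
    # instance B: resumes iteration only after A has deleted the log
    deleted_evt.wait()
    try:
        log.next_block()
        out.append("ok")
    except AssertionError as exc:
        out.append(str(exc))

log = PlainLlog(records=[1, 2, 3])
deleted = threading.Event()
hit = []
a = threading.Thread(target=clear_worker, args=(log, deleted))
b = threading.Thread(target=read_worker, args=(log, deleted, hit))
b.start(); a.start(); a.join(); b.join()
print(hit[0])  # the reader trips the existence assertion, as on the MDS
```

&lt;p&gt;This only models the interleaving recorded in the dk log, not the actual llog code paths; in the kernel the check is a hard LASSERT, so the outcome is an LBUG rather than a caught exception.&lt;/p&gt;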

&lt;p&gt;I have unsuccessfully tried to reproduce the issue on VMs with several concurrent &quot;lfs changelog_clear&quot; and &quot;lfs changelog&quot; instances.&lt;/p&gt;

&lt;p&gt;I have backported the &lt;a href=&quot;https://review.whamcloud.com/#/c/43572/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/43572/&lt;/a&gt; (&quot;&lt;a href=&quot;https://jira.whamcloud.com/browse/LU-14606&quot; title=&quot;llog_changelog_cancel_cb returns ENOENT(-2)&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-14606&quot;&gt;&lt;del&gt;LU-14606&lt;/del&gt;&lt;/a&gt; llog: hide ENOENT for cancelling record&quot;) because this patch seems to handle &quot;llog cancel&quot; for changelog more cleanly (&quot;llog_changelog_cancel_cb&quot; --&amp;gt; RETURN(LLOG_DEL_RECORD)). But I don&apos;t think it will resolve the issue.&lt;/p&gt;

</description>
                <environment>Lustre 2.12.6 + patches</environment>
        <key id="64390">LU-14705</key>
                <summary>ASSERTION( llog_osd_exist(loghandle) ) failed: with concurrent &quot;lfs changelog_clear&quot;</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="4" iconUrl="https://jira.whamcloud.com/images/icons/priorities/minor.svg">Minor</priority>
                        <status id="1" iconUrl="https://jira.whamcloud.com/images/icons/statuses/open.png" description="The issue is open and ready for the assignee to start work on it.">Open</status>
                    <statusCategory id="2" key="new" colorName="default"/>
                                    <resolution id="-1">Unresolved</resolution>
                                        <assignee username="eaujames">Etienne Aujames</assignee>
                                    <reporter username="eaujames">Etienne Aujames</reporter>
                        <labels>
                            <label>CEA</label>
                            <label>changelogs</label>
                    </labels>
                <created>Tue, 25 May 2021 08:33:02 +0000</created>
                <updated>Fri, 17 Sep 2021 15:55:50 +0000</updated>
                                                                                <due></due>
                            <votes>1</votes>
                                    <watches>6</watches>
                                                                            <comments>
                            <comment id="303270" author="gerrit" created="Wed, 2 Jun 2021 14:25:25 +0000"  >&lt;p&gt;Etienne AUJAMES (eaujames@ddn.com) uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/43896&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/43896&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-14705&quot; title=&quot;ASSERTION( llog_osd_exist(loghandle) ) failed: with concurent &amp;quot;lfs changelog_clear&amp;quot;&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-14705&quot;&gt;LU-14705&lt;/a&gt; test: reproducer for the changelog_clear race&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: b2_12&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: b5878e8f7fc9d04fefa9210d56e88cc8f145db09&lt;/p&gt;</comment>
                            <comment id="304311" author="eaujames" created="Fri, 11 Jun 2021 18:21:31 +0000"  >&lt;p&gt;The issue seems to have been already resolved (on master) by the: &lt;br/&gt;
 &lt;a href=&quot;https://review.whamcloud.com/#/c/43719/2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/43719&lt;/a&gt; (&quot;&lt;a href=&quot;https://jira.whamcloud.com/browse/LU-14688&quot; title=&quot;Changelog cancel improvement&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-14688&quot;&gt;&lt;del&gt;LU-14688&lt;/del&gt;&lt;/a&gt; mdt: changelog purge deletes plain llog&quot;)&lt;/p&gt;</comment>
                            <comment id="304324" author="eaujames" created="Fri, 11 Jun 2021 20:10:47 +0000"  >&lt;p&gt;b2_12 backport is available here:&lt;br/&gt;
 &lt;a href=&quot;https://review.whamcloud.com/43990&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/43990&lt;/a&gt; (&quot;LU-14688 mdt: changelog purge deletes plain llog&quot;)&lt;/p&gt;</comment>
                            <comment id="313231" author="sthiell" created="Fri, 17 Sep 2021 15:55:50 +0000"  >&lt;p&gt;Hello,&lt;br/&gt;
We hit this problem with 2.12.7 on Oak:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[623468.203914] LustreError: 6125:0:(llog_osd.c:905:llog_osd_next_block()) ASSERTION( llog_osd_exist(loghandle) ) failed: 
[623468.215956] LustreError: 6125:0:(llog_osd.c:905:llog_osd_next_block()) LBUG
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;I have just applied the backported patch (thanks Etienne for the info!). It would be nice to have it included in Lustre 2.12.8. Thanks!&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10010">
                    <name>Duplicate</name>
                                                                <inwardlinks description="is duplicated by">
                                                        </inwardlinks>
                                    </issuelinktype>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                            <outwardlinks description="is related to ">
                                        <issuelink>
            <issuekey id="63744">LU-14606</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="64300">LU-14688</issuekey>
        </issuelink>
                            </outwardlinks>
                                                        </issuelinktype>
                    </issuelinks>
                <attachments>
                            <attachment id="38821" name="crash_changelog.tgz" size="76566" author="eaujames" created="Tue, 25 May 2021 09:00:47 +0000"/>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                    <customfield id="customfield_10030" key="com.atlassian.jira.plugin.system.customfieldtypes:labels">
                        <customfieldname>Epic/Theme</customfieldname>
                        <customfieldvalues>
                                        <label>changelog</label>
    
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|i01vav:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>