<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 02:44:54 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary, append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92">
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
<language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-11556] conf-sanity test 32b crashes on MDT umount with &#8220;ASSERTION( atomic_read(&amp;d-&gt;ld_ref) == 0 ) failed: Refcount is 1&#8221;</title>
                <link>https://jira.whamcloud.com/browse/LU-11556</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;conf-sanity test_32b crashes with master clients and 2.10.5 servers. Looking at the client test_log at &lt;a href=&quot;https://testing.whamcloud.com/test_sets/b1b83d10-cedf-11e8-9238-52540065bddc&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.whamcloud.com/test_sets/b1b83d10-cedf-11e8-9238-52540065bddc&lt;/a&gt;, we see that the test crashes during the umount of mdt1.&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;CMD: /usr/sbin/lctl get_param -n mdc.t32fs-MDT0000*.max_rpcs_in_flight 14
sed: -e expression #1, char 9: unknown option to `s&apos;
pdsh@onyx-37vm9: no remote hosts specified
CMD: /usr/sbin/lctl get_param -n mdc.t32fs-MDT0000*.max_rpcs_in_flight 14
sed: -e expression #1, char 9: unknown option to `s&apos;
pdsh@onyx-37vm9: no remote hosts specified
CMD: onyx-37vm12 umount -d /tmp/t32/mnt/mdt
CMD: onyx-37vm12 umount -d /tmp/t32/mnt/mdt1
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Looking at the MDS (vm12) console log, we see&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[23682.075589] Lustre: DEBUG MARKER: cat /tmp/t32/list2
[23682.368852] LustreError: 15627:0:(mdt_lib.c:961:mdt_attr_valid_xlate()) Unknown attr bits: 0x60000
[23682.888492] LustreError: 15627:0:(mdt_lib.c:961:mdt_attr_valid_xlate()) Unknown attr bits: 0x60000
[23682.889529] LustreError: 15627:0:(mdt_lib.c:961:mdt_attr_valid_xlate()) Skipped 7 previous similar messages
[23688.214366] Lustre: DEBUG MARKER: /usr/sbin/lctl set_param -n osd*.*.force_sync=1
[23688.389563] LustreError: 15627:0:(mdt_lib.c:961:mdt_attr_valid_xlate()) Unknown attr bits: 0x60000
[23688.557868] Lustre: DEBUG MARKER: test -f /tmp/t32/sha1sums
[23688.885389] Lustre: DEBUG MARKER: cat /tmp/t32/sha1sums
[23690.182337] Lustre: DEBUG MARKER: /usr/sbin/lctl get_param -n version 2&amp;gt;/dev/null ||
[23690.182337] 				/usr/sbin/lctl lustre_build_version 2&amp;gt;/dev/null ||
[23690.182337] 				/usr/sbin/lctl --version 2&amp;gt;/dev/null | cut -d&apos; &apos; -f2
[23690.526084] Lustre: DEBUG MARKER: /usr/sbin/lctl conf_param t32fs-MDT0000.mdc.max_rpcs_in_flight=&apos;14&apos;
[23690.690187] Lustre: Modifying parameter t32fs-MDT0000-mdc.mdc.max_rpcs_in_flight in log t32fs-client
[23690.691354] Lustre: Skipped 6 previous similar messages
[23691.038484] Lustre: DEBUG MARKER: umount -d /tmp/t32/mnt/mdt
[23691.209348] Lustre: Failing over t32fs-MDT0000
[23691.377221] Lustre: server umount t32fs-MDT0000 complete
[23691.678261] Lustre: DEBUG MARKER: umount -d /tmp/t32/mnt/mdt1
[23691.844378] LustreError: 11-0: t32fs-MDT0000-lwp-MDT0001: operation mds_disconnect to node 0@lo failed: rc = -107
[23692.002188] LustreError: 11816:0:(osp_dev.c:1277:osp_device_free()) header@ffff89bbb6989360[0x0, 1, [0x240000401:0x1:0x0] hash exist]{
[23692.002188] 
[23692.003646] LustreError: 11816:0:(osp_dev.c:1277:osp_device_free()) ....mdt@ffff89bbb69893b0mdt-object@ffff89bbb6989360( , writecount=0)
[23692.003646] 
[23692.005001] LustreError: 11816:0:(osp_dev.c:1277:osp_device_free()) ....mdd@ffff89bb9d517870mdd-object@ffff89bb9d517870(open_count=0, valid=0, cltime=0, flags=0)
[23692.005001] 
[23692.006540] LustreError: 11816:0:(osp_dev.c:1277:osp_device_free()) ....lod@ffff89bbe46fd5b0lod-object@ffff89bbe46fd5b0
[23692.006540] 
[23692.007729] LustreError: 11816:0:(osp_dev.c:1277:osp_device_free()) ....osp@ffff89bbe218f2d0osp-object@ffff89bbe218f280
[23692.007729] 
[23692.008942] LustreError: 11816:0:(osp_dev.c:1277:osp_device_free()) } header@ffff89bbb6989360
[23692.008942] 
[23692.009948] LustreError: 11816:0:(osp_dev.c:1277:osp_device_free()) header@ffff89bbb6843510[0x0, 1, [0x280000401:0x1:0x0] hash exist]{
[23692.009948] 
[23692.011273] LustreError: 11816:0:(osp_dev.c:1277:osp_device_free()) ....mdt@ffff89bbb6843560mdt-object@ffff89bbb6843510( , writecount=0)
[23692.011273] 
[23692.012624] LustreError: 11816:0:(osp_dev.c:1277:osp_device_free()) ....mdd@ffff89bbe47c2e60mdd-object@ffff89bbe47c2e60(open_count=0, valid=0, cltime=0, flags=0)
[23692.012624] 
[23692.014163] LustreError: 11816:0:(osp_dev.c:1277:osp_device_free()) ....lod@ffff89bbf9a19b60lod-object@ffff89bbf9a19b60
[23692.014163] 
[23692.015351] LustreError: 11816:0:(osp_dev.c:1277:osp_device_free()) ....osd-ldiskfs@ffff89bbf93f8400osd-ldiskfs-object@ffff89bbf93f8400(i:ffff89bbdb2b9a38:25056/936018527)[plain]
[23692.015351] 
[23692.017030] LustreError: 11816:0:(osp_dev.c:1277:osp_device_free()) } header@ffff89bbb6843510
[23692.017030] 
[23692.025094] LustreError: 11816:0:(lu_object.c:1177:lu_de[    0.000000] Initializing cgroup subsys cpuset
[    0.000000] Initializing cgroup subsys cpu
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;


&lt;p&gt;Looking at the kernel crash log, we can see the call trace&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[23691.678261] Lustre: DEBUG MARKER: umount -d /tmp/t32/mnt/mdt1
[23691.844378] LustreError: 11-0: t32fs-MDT0000-lwp-MDT0001: operation mds_disconnect to node 0@lo failed: rc = -107
[23692.002188] LustreError: 11816:0:(osp_dev.c:1277:osp_device_free()) header@ffff89bbb6989360[0x0, 1, [0x240000401:0x1:0x0] hash exist]{

[23692.003646] LustreError: 11816:0:(osp_dev.c:1277:osp_device_free()) ....mdt@ffff89bbb69893b0mdt-object@ffff89bbb6989360( , writecount=0)

[23692.005001] LustreError: 11816:0:(osp_dev.c:1277:osp_device_free()) ....mdd@ffff89bb9d517870mdd-object@ffff89bb9d517870(open_count=0, valid=0, cltime=0, flags=0)

[23692.006540] LustreError: 11816:0:(osp_dev.c:1277:osp_device_free()) ....lod@ffff89bbe46fd5b0lod-object@ffff89bbe46fd5b0

[23692.007729] LustreError: 11816:0:(osp_dev.c:1277:osp_device_free()) ....osp@ffff89bbe218f2d0osp-object@ffff89bbe218f280

[23692.008942] LustreError: 11816:0:(osp_dev.c:1277:osp_device_free()) } header@ffff89bbb6989360

[23692.009948] LustreError: 11816:0:(osp_dev.c:1277:osp_device_free()) header@ffff89bbb6843510[0x0, 1, [0x280000401:0x1:0x0] hash exist]{

[23692.011273] LustreError: 11816:0:(osp_dev.c:1277:osp_device_free()) ....mdt@ffff89bbb6843560mdt-object@ffff89bbb6843510( , writecount=0)

[23692.012624] LustreError: 11816:0:(osp_dev.c:1277:osp_device_free()) ....mdd@ffff89bbe47c2e60mdd-object@ffff89bbe47c2e60(open_count=0, valid=0, cltime=0, flags=0)

[23692.014163] LustreError: 11816:0:(osp_dev.c:1277:osp_device_free()) ....lod@ffff89bbf9a19b60lod-object@ffff89bbf9a19b60

[23692.015351] LustreError: 11816:0:(osp_dev.c:1277:osp_device_free()) ....osd-ldiskfs@ffff89bbf93f8400osd-ldiskfs-object@ffff89bbf93f8400(i:ffff89bbdb2b9a38:25056/936018527)[plain]

[23692.017030] LustreError: 11816:0:(osp_dev.c:1277:osp_device_free()) } header@ffff89bbb6843510

[23692.025094] LustreError: 11816:0:(lu_object.c:1177:lu_device_fini()) ASSERTION( atomic_read(&amp;amp;d-&amp;gt;ld_ref) == 0 ) failed: Refcount is 1
[23692.025214] LustreError: 17697:0:(mdt_handler.c:4808:mdt_fini()) ASSERTION( atomic_read(&amp;amp;d-&amp;gt;ld_ref) == 0 ) failed: 
[23692.025215] LustreError: 17697:0:(mdt_handler.c:4808:mdt_fini()) LBUG
[23692.025217] Pid: 17697, comm: umount 3.10.0-862.9.1.el7_lustre.x86_64 #1 SMP Mon Aug 27 17:48:12 UTC 2018
[23692.025217] Call Trace:
[23692.025249]  [&amp;lt;ffffffffc09287cc&amp;gt;] libcfs_call_trace+0x8c/0xc0 [libcfs]
[23692.025254]  [&amp;lt;ffffffffc092887c&amp;gt;] lbug_with_loc+0x4c/0xa0 [libcfs]
[23692.025266]  [&amp;lt;ffffffffc13d2872&amp;gt;] mdt_device_fini+0x8f2/0x930 [mdt]
[23692.025302]  [&amp;lt;ffffffffc0afc4b7&amp;gt;] class_cleanup+0x987/0xce0 [obdclass]
[23692.025318]  [&amp;lt;ffffffffc0afe83f&amp;gt;] class_process_config+0x19bf/0x2420 [obdclass]
[23692.025332]  [&amp;lt;ffffffffc0aff466&amp;gt;] class_manual_cleanup+0x1c6/0x710 [obdclass]
[23692.025353]  [&amp;lt;ffffffffc0b2c6de&amp;gt;] server_put_super+0x8de/0xcd0 [obdclass]
[23692.025376]  [&amp;lt;ffffffffa9c1debd&amp;gt;] generic_shutdown_super+0x6d/0x100
[23692.025378]  [&amp;lt;ffffffffa9c1e2a2&amp;gt;] kill_anon_super+0x12/0x20
[23692.025393]  [&amp;lt;ffffffffc0b01eb2&amp;gt;] lustre_kill_super+0x32/0x50 [obdclass]
[23692.025395]  [&amp;lt;ffffffffa9c1e65e&amp;gt;] deactivate_locked_super+0x4e/0x70
[23692.025397]  [&amp;lt;ffffffffa9c1ede6&amp;gt;] deactivate_super+0x46/0x60
[23692.025402]  [&amp;lt;ffffffffa9c3cd4f&amp;gt;] cleanup_mnt+0x3f/0x80
[23692.025404]  [&amp;lt;ffffffffa9c3cde2&amp;gt;] __cleanup_mnt+0x12/0x20
[23692.025417]  [&amp;lt;ffffffffa9ab803b&amp;gt;] task_work_run+0xbb/0xe0
[23692.025426]  [&amp;lt;ffffffffa9a2ac55&amp;gt;] do_notify_resume+0xa5/0xc0
[23692.025440]  [&amp;lt;ffffffffaa120ad8&amp;gt;] int_signal+0x12/0x17
[23692.025460]  [&amp;lt;ffffffffffffffff&amp;gt;] 0xffffffffffffffff
[23692.025463] Kernel panic - not syncing: LBUG
[23692.025469] CPU: 1 PID: 17697 Comm: umount Kdump: loaded Tainted: G           OE  ------------   3.10.0-862.9.1.el7_lustre.x86_64 #1
[23692.025469] Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011
[23692.025470] Call Trace:
[23692.025478]  [&amp;lt;ffffffffaa10e84e&amp;gt;] dump_stack+0x19/0x1b
[23692.025480]  [&amp;lt;ffffffffaa108b50&amp;gt;] panic+0xe8/0x21f
[23692.025486]  [&amp;lt;ffffffffc09288cb&amp;gt;] lbug_with_loc+0x9b/0xa0 [libcfs]
[23692.025493]  [&amp;lt;ffffffffc13d2872&amp;gt;] mdt_device_fini+0x8f2/0x930 [mdt]
[23692.025508]  [&amp;lt;ffffffffc0afc4b7&amp;gt;] class_cleanup+0x987/0xce0 [obdclass]
[23692.025522]  [&amp;lt;ffffffffc0afe83f&amp;gt;] class_process_config+0x19bf/0x2420 [obdclass]
[23692.025529]  [&amp;lt;ffffffffc0933bd7&amp;gt;] ? libcfs_debug_msg+0x57/0x80 [libcfs]
[23692.025543]  [&amp;lt;ffffffffc0aff466&amp;gt;] class_manual_cleanup+0x1c6/0x710 [obdclass]
[23692.025558]  [&amp;lt;ffffffffc0b2c6de&amp;gt;] server_put_super+0x8de/0xcd0 [obdclass]
[23692.025561]  [&amp;lt;ffffffffa9c1debd&amp;gt;] generic_shutdown_super+0x6d/0x100
[23692.025563]  [&amp;lt;ffffffffa9c1e2a2&amp;gt;] kill_anon_super+0x12/0x20
[23692.025577]  [&amp;lt;ffffffffc0b01eb2&amp;gt;] lustre_kill_super+0x32/0x50 [obdclass]
[23692.025578]  [&amp;lt;ffffffffa9c1e65e&amp;gt;] deactivate_locked_super+0x4e/0x70
[23692.025580]  [&amp;lt;ffffffffa9c1ede6&amp;gt;] deactivate_super+0x46/0x60
[23692.025581]  [&amp;lt;ffffffffa9c3cd4f&amp;gt;] cleanup_mnt+0x3f/0x80
[23692.025583]  [&amp;lt;ffffffffa9c3cde2&amp;gt;] __cleanup_mnt+0x12/0x20
[23692.025584]  [&amp;lt;ffffffffa9ab803b&amp;gt;] task_work_run+0xbb/0xe0
[23692.025586]  [&amp;lt;ffffffffa9a2ac55&amp;gt;] do_notify_resume+0xa5/0xc0
[23692.025588]  [&amp;lt;ffffffffaa120ad8&amp;gt;] int_signal+0x12/0x17
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;We&#8217;ve seen conf-sanity test 32b crash with 2.11.0 servers at &lt;a href=&quot;https://testing.whamcloud.com/test_sets/fd4e6636-d44e-11e8-9238-52540065bddc&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.whamcloud.com/test_sets/fd4e6636-d44e-11e8-9238-52540065bddc&lt;/a&gt;&lt;/p&gt;</description>
                <environment>master clients with 2.11.0 or 2.10.5 servers</environment>
        <key id="53689">LU-11556</key>
            <summary>conf-sanity test 32b crashes on MDT umount with &#8220;ASSERTION( atomic_read(&amp;d-&gt;ld_ref) == 0 ) failed: Refcount is 1&#8221;</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="3" iconUrl="https://jira.whamcloud.com/images/icons/priorities/major.svg">Major</priority>
                        <status id="1" iconUrl="https://jira.whamcloud.com/images/icons/statuses/open.png" description="The issue is open and ready for the assignee to start work on it.">Open</status>
                    <statusCategory id="2" key="new" colorName="default"/>
                                    <resolution id="-1">Unresolved</resolution>
                                        <assignee username="wc-triage">WC Triage</assignee>
                                    <reporter username="jamesanunez">James Nunez</reporter>
                        <labels>
                    </labels>
                <created>Mon, 22 Oct 2018 21:42:06 +0000</created>
                <updated>Wed, 2 Mar 2022 08:18:44 +0000</updated>
                                            <version>Lustre 2.12.0</version>
                    <version>Lustre 2.13.0</version>
                    <version>Lustre 2.12.1</version>
                    <version>Lustre 2.12.4</version>
                                                        <due></due>
                            <votes>0</votes>
                                    <watches>8</watches>
                                                                            <comments>
                            <comment id="235229" author="gerrit" created="Mon, 22 Oct 2018 23:18:40 +0000"  >&lt;p&gt;Andreas Dilger (adilger@whamcloud.com) uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/33422&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/33422&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-11556&quot; title=&quot;conf-sanity test 32b crashes on MDT umount with &#8220;ASSERTION( atomic_read(&amp;amp;d-&amp;gt;ld_ref) == 0 ) failed: Refcount is 1&#8221;&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-11556&quot;&gt;LU-11556&lt;/a&gt; tests: fix set_persistent_param_and_check breakage&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 319003a39aee9ec683341d79e1ab0bd681c03c2a&lt;/p&gt;</comment>
                            <comment id="235233" author="adilger" created="Mon, 22 Oct 2018 23:26:05 +0000"  >&lt;p&gt;Note that the above patch is &lt;b&gt;not&lt;/b&gt; intended to solve the problem of the crash, it is just fixing another test script problem I saw while looking at the logs. It was introduced by patch &lt;a href=&quot;https://review.whamcloud.com/30087&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/30087&lt;/a&gt; &quot;&lt;a href=&quot;https://jira.whamcloud.com/browse/LU-7004&quot; title=&quot;fix &amp;quot;lctl set_param -P&amp;quot; to allow deprecation of &amp;quot;lctl conf_param&amp;quot;&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-7004&quot;&gt;&lt;del&gt;LU-7004&lt;/del&gt;&lt;/a&gt; tests: move from lctl conf_param to lctl set_param -P&quot; which recently landed, because that is calling &quot;set_persistent_param_and_check $node&quot; when it should be calling &quot;set_persistent_param_and_check $facet&quot;&lt;/p&gt;</comment>
                            <comment id="235833" author="jhammond" created="Mon, 29 Oct 2018 20:44:35 +0000"  >&lt;p&gt;This is a 2.10.5 bug, not a 2.12-2.10.5 interop bug. (But it may be that this 2.10.5 bug is still present in 2.12.)&lt;/p&gt;</comment>
                            <comment id="236197" author="gerrit" created="Fri, 2 Nov 2018 07:17:00 +0000"  >&lt;p&gt;Oleg Drokin (green@whamcloud.com) merged in patch &lt;a href=&quot;https://review.whamcloud.com/33422/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/33422/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-11556&quot; title=&quot;conf-sanity test 32b crashes on MDT umount with &#8220;ASSERTION( atomic_read(&amp;amp;d-&amp;gt;ld_ref) == 0 ) failed: Refcount is 1&#8221;&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-11556&quot;&gt;LU-11556&lt;/a&gt; tests: fix set_persistent_param_and_check breakage&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: cf9745943f6e003ef207adcab039cd472e6f3068&lt;/p&gt;</comment>
                            <comment id="241570" author="adilger" created="Thu, 7 Feb 2019 22:56:41 +0000"  >&lt;p&gt;+1 on b2_10: &lt;a href=&quot;https://testing.whamcloud.com/test_sets/f0c5be5c-2af0-11e9-9e7f-52540065bddc&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.whamcloud.com/test_sets/f0c5be5c-2af0-11e9-9e7f-52540065bddc&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="244473" author="mdiep" created="Fri, 22 Mar 2019 00:39:59 +0000"  >&lt;p&gt;+1 on b2_12 &lt;a href=&quot;https://testing.whamcloud.com/test_sessions/949cecde-fa29-47f8-8b0b-7469db2b2989&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.whamcloud.com/test_sessions/949cecde-fa29-47f8-8b0b-7469db2b2989&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="244859" author="jamesanunez" created="Thu, 28 Mar 2019 22:54:40 +0000"  >&lt;p&gt;I&apos;ve hit this issue three times in a row for conf-sanity test 32b while testing patch &lt;a href=&quot;https://review.whamcloud.com/#/c/33954/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/#/c/33954/&lt;/a&gt; with master (future 2.13.0) clients and 2.10.x servers.&lt;/p&gt;

&lt;p&gt;A couple of test crashes at&lt;br/&gt;
&lt;a href=&quot;https://testing.whamcloud.com/test_sets/10949eb8-46f0-11e9-8e92-52540065bddc&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.whamcloud.com/test_sets/10949eb8-46f0-11e9-8e92-52540065bddc&lt;/a&gt;&lt;br/&gt;
&lt;a href=&quot;https://testing.whamcloud.com/test_sets/6984d84c-51a6-11e9-a256-52540065bddc&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.whamcloud.com/test_sets/6984d84c-51a6-11e9-a256-52540065bddc&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="269561" author="hornc" created="Thu, 7 May 2020 14:54:28 +0000"  >&lt;p&gt;+1 on master &lt;a href=&quot;https://testing.whamcloud.com/test_sessions/7e1b1932-32a6-4df8-bcc2-cf205d60f7e9&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.whamcloud.com/test_sessions/7e1b1932-32a6-4df8-bcc2-cf205d60f7e9&lt;/a&gt;&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                            <outwardlinks description="is related to ">
                                        <issuelink>
            <issuekey id="35313">LU-7872</issuekey>
        </issuelink>
                            </outwardlinks>
                                                                <inwardlinks description="is related to">
                                        <issuelink>
            <issuekey id="53693">LU-11558</issuekey>
        </issuelink>
                            </inwardlinks>
                                    </issuelinktype>
                    </issuelinks>
                <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                    <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|i004r3:</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                </customfields>
    </item>
</channel>
</rss>