<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 01:45:43 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
<language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-4772] MGS is waiting for obd_unlinked_exports</title>
                <link>https://jira.whamcloud.com/browse/LU-4772</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;While running acceptance-small for HSM testing on 2.5.1-RC4, replay-single hangs in test 53g. The test results are at &lt;a href=&quot;https://maloo.whamcloud.com/test_sets/23724c76-abad-11e3-a696-52540035b04c&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://maloo.whamcloud.com/test_sets/23724c76-abad-11e3-a696-52540035b04c&lt;/a&gt;, but there are no logs for test 53g, except at the end of the suite_log:&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;== replay-single test 53g: |X| drop open reply and close request while close and open are both in flight == 18:37:54 (1394761074)
fail_loc=0x119
fail_loc=0x80000115
fail_loc=0
Replay barrier on lscratch-MDT0000
Failing mds1 on c16
Stopping /lustre/lscratch/mdt0 (opts:) on c16
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Looking at dmesg on the MDT:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;Lustre: DEBUG MARKER: == replay-single test 53g: |X| drop open reply and close request while close and open are both in flight == 18:37:54 (1394761074)
Lustre: *** cfs_fail_loc=119, val=2147483648***
Lustre: Skipped 1 previous similar message
LustreError: 18010:0:(ldlm_lib.c:2415:target_send_reply_msg()) @@@ dropping reply  req@ffff88081e682000 x1462504360037684/t274877906958(0) o36-&amp;gt;97aaf730-d78d-08d9-43ce-9e768c9c685f@192.168.2.120@o2ib:0/0 lens 488/448 e 0 to 0 dl 1394761114 ref 1 fl Interpret:/0/0 rc 0/0
LustreError: 18010:0:(ldlm_lib.c:2415:target_send_reply_msg()) Skipped 1 previous similar message
Lustre: *** cfs_fail_loc=115, val=2147483648***
Turning device sda (0x800003) read-only
Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lscratch-MDT0000
Lustre: MGS is waiting for obd_unlinked_exports more than 8 seconds. The obd refcount = 5. Is it stuck?
Lustre: MGS is waiting for obd_unlinked_exports more than 16 seconds. The obd refcount = 5. Is it stuck?
LustreError: 166-1: MGC192.168.2.116@o2ib: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail
Lustre: MGS is waiting for obd_unlinked_exports more than 32 seconds. The obd refcount = 5. Is it stuck?
Lustre: MGS is waiting for obd_unlinked_exports more than 64 seconds. The obd refcount = 5. Is it stuck?
Lustre: MGS is waiting for obd_unlinked_exports more than 128 seconds. The obd refcount = 5. Is it stuck?
Lustre: 1776:0:(client.c:1901:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1394761310/real 1394761310]  req@ffff8808227c9400 x1462503945016504/t0(0) o250-&amp;gt;MGC192.168.2.116@o2ib@0@lo:26/25 lens 400/544 e 0 to 1 dl 1394761346 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1
Lustre: 1776:0:(client.c:1901:ptlrpc_expire_one_request()) Skipped 14 previous similar messages
LustreError: 137-5: lscratch-MDT0000_UUID: not available for connect from 192.168.2.119@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
LustreError: Skipped 66 previous similar messages
INFO: task umount:18492 blocked for more than 120 seconds.
      Not tainted 2.6.32-431.5.1.el6_lustre.x86_64 #1
&quot;echo 0 &amp;gt; /proc/sys/kernel/hung_task_timeout_secs&quot; disables this message.
umount        D 0000000000000000     0 18492  18491 0x00000080
 ffff880803de3aa8 0000000000000082 ffff880803de3a08 ffff88082104b000
 ffffffffa05a6985 0000000000000000 ffff8808102d6084 ffffffffa05a6985
 ffff88082a385af8 ffff880803de3fd8 000000000000fbc8 ffff88082a385af8
Call Trace:
 [&amp;lt;ffffffff81528eb2&amp;gt;] schedule_timeout+0x192/0x2e0
 [&amp;lt;ffffffff81084220&amp;gt;] ? process_timeout+0x0/0x10
 [&amp;lt;ffffffffa0528eeb&amp;gt;] obd_exports_barrier+0xab/0x180 [obdclass]
 [&amp;lt;ffffffffa0d4052e&amp;gt;] mgs_device_fini+0xfe/0x580 [mgs]
 [&amp;lt;ffffffffa0554523&amp;gt;] class_cleanup+0x573/0xd30 [obdclass]
 [&amp;lt;ffffffffa052b086&amp;gt;] ? class_name2dev+0x56/0xe0 [obdclass]
 [&amp;lt;ffffffffa055624a&amp;gt;] class_process_config+0x156a/0x1ad0 [obdclass]
 [&amp;lt;ffffffffa054f3a3&amp;gt;] ? lustre_cfg_new+0x2d3/0x6e0 [obdclass]
 [&amp;lt;ffffffffa0556929&amp;gt;] class_manual_cleanup+0x179/0x6f0 [obdclass]
 [&amp;lt;ffffffffa052b086&amp;gt;] ? class_name2dev+0x56/0xe0 [obdclass]
 [&amp;lt;ffffffffa0591dfd&amp;gt;] server_put_super+0x45d/0xf60 [obdclass]
 [&amp;lt;ffffffff8118b23b&amp;gt;] generic_shutdown_super+0x5b/0xe0
 [&amp;lt;ffffffff8118b326&amp;gt;] kill_anon_super+0x16/0x60
 [&amp;lt;ffffffffa05587d6&amp;gt;] lustre_kill_super+0x36/0x60 [obdclass]
 [&amp;lt;ffffffff8118bac7&amp;gt;] deactivate_super+0x57/0x80
 [&amp;lt;ffffffff811aaaff&amp;gt;] mntput_no_expire+0xbf/0x110
 [&amp;lt;ffffffff811ab64b&amp;gt;] sys_umount+0x7b/0x3a0
 [&amp;lt;ffffffff8100b072&amp;gt;] system_call_fastpath+0x16/0x1b
Lustre: MGS is waiting for obd_unlinked_exports more than 256 seconds. The obd refcount = 5. Is it stuck?
INFO: task umount:18492 blocked for more than 120 seconds.
      Not tainted 2.6.32-431.5.1.el6_lustre.x86_64 #1
&quot;echo 0 &amp;gt; /proc/sys/kernel/hung_task_timeout_secs&quot; disables this message.
umount        D 0000000000000000     0 18492  18491 0x00000080
 ffff880803de3aa8 0000000000000082 ffff880803de3a08 ffff88082104b000
 ffffffffa05a6985 0000000000000000 ffff8808102d6084 ffffffffa05a6985
 ffff88082a385af8 ffff880803de3fd8 000000000000fbc8 ffff88082a385af8
Call Trace:
 [&amp;lt;ffffffff81528eb2&amp;gt;] schedule_timeout+0x192/0x2e0
 [&amp;lt;ffffffff81084220&amp;gt;] ? process_timeout+0x0/0x10
 [&amp;lt;ffffffffa0528eeb&amp;gt;] obd_exports_barrier+0xab/0x180 [obdclass]
 [&amp;lt;ffffffffa0d4052e&amp;gt;] mgs_device_fini+0xfe/0x580 [mgs]
 [&amp;lt;ffffffffa0554523&amp;gt;] class_cleanup+0x573/0xd30 [obdclass]
 [&amp;lt;ffffffffa052b086&amp;gt;] ? class_name2dev+0x56/0xe0 [obdclass]
 [&amp;lt;ffffffffa055624a&amp;gt;] class_process_config+0x156a/0x1ad0 [obdclass]
 [&amp;lt;ffffffffa054f3a3&amp;gt;] ? lustre_cfg_new+0x2d3/0x6e0 [obdclass]
 [&amp;lt;ffffffffa0556929&amp;gt;] class_manual_cleanup+0x179/0x6f0 [obdclass]
 [&amp;lt;ffffffffa052b086&amp;gt;] ? class_name2dev+0x56/0xe0 [obdclass]
 [&amp;lt;ffffffffa0591dfd&amp;gt;] server_put_super+0x45d/0xf60 [obdclass]
 [&amp;lt;ffffffff8118b23b&amp;gt;] generic_shutdown_super+0x5b/0xe0
 [&amp;lt;ffffffff8118b326&amp;gt;] kill_anon_super+0x16/0x60
 [&amp;lt;ffffffffa05587d6&amp;gt;] lustre_kill_super+0x36/0x60 [obdclass]
 [&amp;lt;ffffffff8118bac7&amp;gt;] deactivate_super+0x57/0x80
 [&amp;lt;ffffffff811aaaff&amp;gt;] mntput_no_expire+0xbf/0x110
 [&amp;lt;ffffffff811ab64b&amp;gt;] sys_umount+0x7b/0x3a0
 [&amp;lt;ffffffff8100b072&amp;gt;] system_call_fastpath+0x16/0x1b
INFO: task umount:18492 blocked for more than 120 seconds.
      Not tainted 2.6.32-431.5.1.el6_lustre.x86_64 #1
&quot;echo 0 &amp;gt; /proc/sys/kernel/hung_task_timeout_secs&quot; disables this message.
umount        D 0000000000000000     0 18492  18491 0x00000080
 ffff880803de3aa8 0000000000000082 ffff880803de3a08 ffff88082104b000
 ffffffffa05a6985 0000000000000000 ffff8808102d6084 ffffffffa05a6985
 ffff88082a385af8 ffff880803de3fd8 000000000000fbc8 ffff88082a385af8
Call Trace:
 [&amp;lt;ffffffff81528eb2&amp;gt;] schedule_timeout+0x192/0x2e0
 [&amp;lt;ffffffff81084220&amp;gt;] ? process_timeout+0x0/0x10
 [&amp;lt;ffffffffa0528eeb&amp;gt;] obd_exports_barrier+0xab/0x180 [obdclass]
 [&amp;lt;ffffffffa0d4052e&amp;gt;] mgs_device_fini+0xfe/0x580 [mgs]
 [&amp;lt;ffffffffa0554523&amp;gt;] class_cleanup+0x573/0xd30 [obdclass]
 [&amp;lt;ffffffffa052b086&amp;gt;] ? class_name2dev+0x56/0xe0 [obdclass]
 [&amp;lt;ffffffffa055624a&amp;gt;] class_process_config+0x156a/0x1ad0 [obdclass]
 [&amp;lt;ffffffffa054f3a3&amp;gt;] ? lustre_cfg_new+0x2d3/0x6e0 [obdclass]
 [&amp;lt;ffffffffa0556929&amp;gt;] class_manual_cleanup+0x179/0x6f0 [obdclass]
 [&amp;lt;ffffffffa052b086&amp;gt;] ? class_name2dev+0x56/0xe0 [obdclass]
 [&amp;lt;ffffffffa0591dfd&amp;gt;] server_put_super+0x45d/0xf60 [obdclass]
 [&amp;lt;ffffffff8118b23b&amp;gt;] generic_shutdown_super+0x5b/0xe0
 [&amp;lt;ffffffff8118b326&amp;gt;] kill_anon_super+0x16/0x60
 [&amp;lt;ffffffffa05587d6&amp;gt;] lustre_kill_super+0x36/0x60 [obdclass]
 [&amp;lt;ffffffff8118bac7&amp;gt;] deactivate_super+0x57/0x80
 [&amp;lt;ffffffff811aaaff&amp;gt;] mntput_no_expire+0xbf/0x110
 [&amp;lt;ffffffff811ab64b&amp;gt;] sys_umount+0x7b/0x3a0
 [&amp;lt;ffffffff8100b072&amp;gt;] system_call_fastpath+0x16/0x1b
Lustre: 1776:0:(client.c:1901:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1394761910/real 1394761910]  req@ffff8808049d6000 x1462503945016540/t0(0) o250-&amp;gt;MGC192.168.2.116@o2ib@0@lo:26/25 lens 400/544 e 0 to 1 dl 1394761965 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1
Lustre: 1776:0:(client.c:1901:ptlrpc_expire_one_request()) Skipped 8 previous similar messages
LustreError: 137-5: lscratch-MDT0000_UUID: not available for connect from 192.168.2.118@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
LustreError: Skipped 120 previous similar messages
INFO: task umount:18492 blocked for more than 120 seconds.
      Not tainted 2.6.32-431.5.1.el6_lustre.x86_64 #1
&quot;echo 0 &amp;gt; /proc/sys/kernel/hung_task_timeout_secs&quot; disables this message.
umount        D 0000000000000000     0 18492  18491 0x00000080
 ffff880803de3aa8 0000000000000082 ffff880803de3a08 ffff88082104b000
 ffffffffa05a6985 0000000000000000 ffff8808102d6084 ffffffffa05a6985
 ffff88082a385af8 ffff880803de3fd8 000000000000fbc8 ffff88082a385af8
Call Trace:
 [&amp;lt;ffffffff81528eb2&amp;gt;] schedule_timeout+0x192/0x2e0
 [&amp;lt;ffffffff81084220&amp;gt;] ? process_timeout+0x0/0x10
 [&amp;lt;ffffffffa0528eeb&amp;gt;] obd_exports_barrier+0xab/0x180 [obdclass]
 [&amp;lt;ffffffffa0d4052e&amp;gt;] mgs_device_fini+0xfe/0x580 [mgs]
 [&amp;lt;ffffffffa0554523&amp;gt;] class_cleanup+0x573/0xd30 [obdclass]
 [&amp;lt;ffffffffa052b086&amp;gt;] ? class_name2dev+0x56/0xe0 [obdclass]
 [&amp;lt;ffffffffa055624a&amp;gt;] class_process_config+0x156a/0x1ad0 [obdclass]
 [&amp;lt;ffffffffa054f3a3&amp;gt;] ? lustre_cfg_new+0x2d3/0x6e0 [obdclass]
 [&amp;lt;ffffffffa0556929&amp;gt;] class_manual_cleanup+0x179/0x6f0 [obdclass]
 [&amp;lt;ffffffffa052b086&amp;gt;] ? class_name2dev+0x56/0xe0 [obdclass]
 [&amp;lt;ffffffffa0591dfd&amp;gt;] server_put_super+0x45d/0xf60 [obdclass]
 [&amp;lt;ffffffff8118b23b&amp;gt;] generic_shutdown_super+0x5b/0xe0
 [&amp;lt;ffffffff8118b326&amp;gt;] kill_anon_super+0x16/0x60
 [&amp;lt;ffffffffa05587d6&amp;gt;] lustre_kill_super+0x36/0x60 [obdclass]
 [&amp;lt;ffffffff8118bac7&amp;gt;] deactivate_super+0x57/0x80
 [&amp;lt;ffffffff811aaaff&amp;gt;] mntput_no_expire+0xbf/0x110
 [&amp;lt;ffffffff811ab64b&amp;gt;] sys_umount+0x7b/0x3a0
 [&amp;lt;ffffffff8100b072&amp;gt;] system_call_fastpath+0x16/0x1b
Lustre: MGS is waiting for obd_unlinked_exports more than 512 seconds. The obd refcount = 5. Is it stuck?
INFO: task umount:18492 blocked for more than 120 seconds.
      Not tainted 2.6.32-431.5.1.el6_lustre.x86_64 #1
&quot;echo 0 &amp;gt; /proc/sys/kernel/hung_task_timeout_secs&quot; disables this message.
umount        D 0000000000000000     0 18492  18491 0x00000080
 ffff880803de3aa8 0000000000000082 ffff880803de3a08 ffff88082104b000
 ffffffffa05a6985 0000000000000000 ffff8808102d6084 ffffffffa05a6985
 ffff88082a385af8 ffff880803de3fd8 000000000000fbc8 ffff88082a385af8
Call Trace:
 [&amp;lt;ffffffff81528eb2&amp;gt;] schedule_timeout+0x192/0x2e0
 [&amp;lt;ffffffff81084220&amp;gt;] ? process_timeout+0x0/0x10
 [&amp;lt;ffffffffa0528eeb&amp;gt;] obd_exports_barrier+0xab/0x180 [obdclass]
 [&amp;lt;ffffffffa0d4052e&amp;gt;] mgs_device_fini+0xfe/0x580 [mgs]
 [&amp;lt;ffffffffa0554523&amp;gt;] class_cleanup+0x573/0xd30 [obdclass]
 [&amp;lt;ffffffffa052b086&amp;gt;] ? class_name2dev+0x56/0xe0 [obdclass]
 [&amp;lt;ffffffffa055624a&amp;gt;] class_process_config+0x156a/0x1ad0 [obdclass]
 [&amp;lt;ffffffffa054f3a3&amp;gt;] ? lustre_cfg_new+0x2d3/0x6e0 [obdclass]
 [&amp;lt;ffffffffa0556929&amp;gt;] class_manual_cleanup+0x179/0x6f0 [obdclass]
 [&amp;lt;ffffffffa052b086&amp;gt;] ? class_name2dev+0x56/0xe0 [obdclass]
 [&amp;lt;ffffffffa0591dfd&amp;gt;] server_put_super+0x45d/0xf60 [obdclass]
 [&amp;lt;ffffffff8118b23b&amp;gt;] generic_shutdown_super+0x5b/0xe0
 [&amp;lt;ffffffff8118b326&amp;gt;] kill_anon_super+0x16/0x60
 [&amp;lt;ffffffffa05587d6&amp;gt;] lustre_kill_super+0x36/0x60 [obdclass]
 [&amp;lt;ffffffff8118bac7&amp;gt;] deactivate_super+0x57/0x80
 [&amp;lt;ffffffff811aaaff&amp;gt;] mntput_no_expire+0xbf/0x110
 [&amp;lt;ffffffff811ab64b&amp;gt;] sys_umount+0x7b/0x3a0
 [&amp;lt;ffffffff8100b072&amp;gt;] system_call_fastpath+0x16/0x1b
INFO: task umount:18492 blocked for more than 120 seconds.
      Not tainted 2.6.32-431.5.1.el6_lustre.x86_64 #1
&quot;echo 0 &amp;gt; /proc/sys/kernel/hung_task_timeout_secs&quot; disables this message.
umount        D 0000000000000000     0 18492  18491 0x00000080
 ffff880803de3aa8 0000000000000082 ffff880803de3a08 ffff88082104b000
 ffffffffa05a6985 0000000000000000 ffff8808102d6084 ffffffffa05a6985
 ffff88082a385af8 ffff880803de3fd8 000000000000fbc8 ffff88082a385af8
Call Trace:
 [&amp;lt;ffffffff81528eb2&amp;gt;] schedule_timeout+0x192/0x2e0
 [&amp;lt;ffffffff81084220&amp;gt;] ? process_timeout+0x0/0x10
 [&amp;lt;ffffffffa0528eeb&amp;gt;] obd_exports_barrier+0xab/0x180 [obdclass]
 [&amp;lt;ffffffffa0d4052e&amp;gt;] mgs_device_fini+0xfe/0x580 [mgs]
 [&amp;lt;ffffffffa0554523&amp;gt;] class_cleanup+0x573/0xd30 [obdclass]
 [&amp;lt;ffffffffa052b086&amp;gt;] ? class_name2dev+0x56/0xe0 [obdclass]
 [&amp;lt;ffffffffa055624a&amp;gt;] class_process_config+0x156a/0x1ad0 [obdclass]
 [&amp;lt;ffffffffa054f3a3&amp;gt;] ? lustre_cfg_new+0x2d3/0x6e0 [obdclass]
 [&amp;lt;ffffffffa0556929&amp;gt;] class_manual_cleanup+0x179/0x6f0 [obdclass]
 [&amp;lt;ffffffffa052b086&amp;gt;] ? class_name2dev+0x56/0xe0 [obdclass]
 [&amp;lt;ffffffffa0591dfd&amp;gt;] server_put_super+0x45d/0xf60 [obdclass]
 [&amp;lt;ffffffff8118b23b&amp;gt;] generic_shutdown_super+0x5b/0xe0
 [&amp;lt;ffffffff8118b326&amp;gt;] kill_anon_super+0x16/0x60
 [&amp;lt;ffffffffa05587d6&amp;gt;] lustre_kill_super+0x36/0x60 [obdclass]
 [&amp;lt;ffffffff8118bac7&amp;gt;] deactivate_super+0x57/0x80
 [&amp;lt;ffffffff811aaaff&amp;gt;] mntput_no_expire+0xbf/0x110
 [&amp;lt;ffffffff811ab64b&amp;gt;] sys_umount+0x7b/0x3a0
 [&amp;lt;ffffffff8100b072&amp;gt;] system_call_fastpath+0x16/0x1b
LustreError: 137-5: lscratch-MDT0000_UUID: not available for connect from 192.168.2.120@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
LustreError: Skipped 113 previous similar messages
Lustre: 1776:0:(client.c:1901:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1394762585/real 1394762585]  req@ffff880819252800 x1462503945016576/t0(0) o250-&amp;gt;MGC192.168.2.116@o2ib@0@lo:26/25 lens 400/544 e 0 to 1 dl 1394762640 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1
Lustre: 1776:0:(client.c:1901:ptlrpc_expire_one_request()) Skipped 8 previous similar messages
Lustre: MGS is waiting for obd_unlinked_exports more than 1024 seconds. The obd refcount = 5. Is it stuck?
LustreError: 137-5: lscratch-MDT0000_UUID: not available for connect from 192.168.2.117@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
LustreError: Skipped 120 previous similar messages
Lustre: 1776:0:(client.c:1901:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1394763260/real 1394763260]  req@ffff8808202e3800 x1462503945016612/t0(0) o250-&amp;gt;MGC192.168.2.116@o2ib@0@lo:26/25 lens 400/544 e 0 to 1 dl 1394763315 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1
Lustre: 1776:0:(client.c:1901:ptlrpc_expire_one_request()) Skipped 8 previous similar messages
LustreError: 137-5: lscratch-MDT0000_UUID: not available for connect from 192.168.2.119@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
LustreError: Skipped 116 previous similar messages
Lustre: 1776:0:(client.c:1901:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1394763935/real 1394763935]  req@ffff8808040e9800 x1462503945016648/t0(0) o250-&amp;gt;MGC192.168.2.116@o2ib@0@lo:26/25 lens 400/544 e 0 to 1 dl 1394763990 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1
Lustre: 1776:0:(client.c:1901:ptlrpc_expire_one_request()) Skipped 8 previous similar messages
LustreError: 137-5: lscratch-MDT0000_UUID: not available for connect from 192.168.2.119@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
LustreError: Skipped 113 previous similar messages
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;On the client, we see:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;Lustre: DEBUG MARKER: == replay-single test 53g: |X| drop open reply and close request while close and open are both in flight == 18:37:54 (1394761074)
Lustre: DEBUG MARKER: cancel_lru_locks mdc start
Lustre: DEBUG MARKER: cancel_lru_locks mdc stop
Lustre: DEBUG MARKER: cancel_lru_locks mdc start
Lustre: DEBUG MARKER: cancel_lru_locks mdc stop
Lustre: DEBUG MARKER: local REPLAY BARRIER on lscratch-MDT0000
Lustre: 7328:0:(client.c:1901:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1394761618/real 1394761618]  req@ffff88080f538800 x1462504360038020/t0(0) o250-&amp;gt;MGC192.168.2.116@o2ib@192.168.2.116@o2ib:26/25 lens 400/544 e 0 to 1 dl 1394761673 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1
Lustre: 7328:0:(client.c:1901:ptlrpc_expire_one_request()) Skipped 15 previous similar messages
Lustre: 7328:0:(client.c:1901:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1394762293/real 1394762293]  req@ffff880820549400 x1462504360038380/t0(0) o250-&amp;gt;MGC192.168.2.116@o2ib@192.168.2.116@o2ib:26/25 lens 400/544 e 0 to 1 dl 1394762348 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</description>
                <environment>OpenSFS cluster with one MGS/MDS (the MDT is a partition of /dev/sda), one OSS with two OSTs, a node running robinhood, one combined client/agent node, and one client</environment>
        <key id="23643">LU-4772</key>
            <summary>MGS is waiting for obd_unlinked_exports</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="1" iconUrl="https://jira.whamcloud.com/images/icons/priorities/blocker.svg">Blocker</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="tappro">Mikhail Pershin</assignee>
                                    <reporter username="jamesanunez">James Nunez</reporter>
                        <labels>
                    </labels>
                <created>Fri, 14 Mar 2014 19:38:29 +0000</created>
                <updated>Tue, 19 Mar 2019 15:12:56 +0000</updated>
                            <resolved>Mon, 13 Jul 2015 13:29:48 +0000</resolved>
                                    <version>Lustre 2.5.1</version>
                    <version>Lustre 2.7.0</version>
                    <version>Lustre 2.8.0</version>
                                    <fixVersion>Lustre 2.8.0</fixVersion>
                                        <due></due>
                            <votes>0</votes>
                                    <watches>14</watches>
                                                                            <comments>
                            <comment id="85335" author="adilger" created="Fri, 30 May 2014 23:24:40 +0000"  >&lt;p&gt;Also hit this on master at unmount time in conf-sanity.sh test_38:&lt;br/&gt;
&lt;a href=&quot;https://maloo.whamcloud.com/test_sets/aadb2e7a-e60a-11e3-87f3-52540035b04c&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://maloo.whamcloud.com/test_sets/aadb2e7a-e60a-11e3-87f3-52540035b04c&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="108343" author="tappro" created="Sun, 1 Mar 2015 16:19:18 +0000"  >&lt;p&gt;Another hit in replay-single.sh test_28 &lt;a href=&quot;https://testing.hpdd.intel.com/test_sessions/08c1bd60-bfb3-11e4-88dc-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sessions/08c1bd60-bfb3-11e4-88dc-5254006e85c2&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I checked other logs for this issue and found more details:&lt;br/&gt;
00000020:00080000:0.0:1408673672.851541:0:15843:0:(genops.c:1541:print_export_data()) MGS: UNLINKED ffff88007b1f4800 ac3b9fd6-6587-c04b-9265-166e701c0e80 10.1.4.113@tcp 1 (0 0 0) 1 0 0 0: (null)  0&lt;/p&gt;

&lt;p&gt;That means the export has 1 reference, but it is not a reference from a lock, request, or transaction callback - those counters are all zero.&lt;/p&gt;</comment>
                            <comment id="108344" author="gerrit" created="Sun, 1 Mar 2015 17:29:37 +0000"  >&lt;p&gt;Mike Pershin (mike.pershin@intel.com) uploaded a new patch: &lt;a href=&quot;http://review.whamcloud.com/13920&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/13920&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-4772&quot; title=&quot;MGS is waiting for obd_unlinked_exports&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-4772&quot;&gt;&lt;del&gt;LU-4772&lt;/del&gt;&lt;/a&gt; mgs: free MGS fsdb before export barrier&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 05b9891447670042bf552a59422156efd628af97&lt;/p&gt;</comment>
                            <comment id="108345" author="tappro" created="Sun, 1 Mar 2015 17:34:19 +0000"  >&lt;p&gt;I think the reason for this issue is the mgs_fsc structure, which holds an export reference while it is alive. But mgs_device_fini() waits for the exports before cleaning up those structures; they are cleaned up through the corresponding FSDB cleanup path: mgs_fsdb_free() -&amp;gt; mgs_ir_fini_fs() -&amp;gt; mgs_fsc_cleanup_by_fsdb()&lt;/p&gt;</comment>
                            <comment id="111706" author="gerrit" created="Wed, 8 Apr 2015 02:04:24 +0000"  >&lt;p&gt;Oleg Drokin (oleg.drokin@intel.com) merged in patch &lt;a href=&quot;http://review.whamcloud.com/13920/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/13920/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-4772&quot; title=&quot;MGS is waiting for obd_unlinked_exports&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-4772&quot;&gt;&lt;del&gt;LU-4772&lt;/del&gt;&lt;/a&gt; mgs: check MGS refcounting before export barrier&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: 61cc0fd9636ccf4d302a9f776fe98910e4b0333d&lt;/p&gt;</comment>
                            <comment id="111806" author="lixi" created="Thu, 9 Apr 2015 12:02:47 +0000"  >&lt;p&gt;Hi Mikhail Pershin,&lt;/p&gt;

&lt;p&gt;We are currently hitting this issue pretty frequently. I saw your comments. I guess you are really close to finishing a fix patch? Do you have an initial patch that we can use to confirm whether it works or not? Please let me know if there is anything I can do.&lt;/p&gt;</comment>
                            <comment id="111955" author="tappro" created="Sat, 11 Apr 2015 04:56:19 +0000"  >&lt;p&gt;Li Xi, the initial patch has landed on master; check it here: &lt;a href=&quot;http://review.whamcloud.com/13920/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/13920/&lt;/a&gt;&lt;br/&gt;
It is a debug patch, but if you use it, note that it contains an assertion. If that is the reason for this issue, the assertion may fire on your side; it may be better to replace it with a CERROR to just output an error message in your case.&lt;/p&gt;</comment>
                            <comment id="111958" author="tappro" created="Sat, 11 Apr 2015 05:45:13 +0000"  >&lt;p&gt;It happened again: &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/f4f6624e-df02-11e4-9454-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/f4f6624e-df02-11e4-9454-5254006e85c2&lt;/a&gt;&lt;br/&gt;
now with debug:&lt;/p&gt;
&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;18:50:33:LustreError: 30857:0:(mgs_handler.c:1379:mgs_fsc_debug()) ASSERTION( list_empty(&amp;amp;fsdb-&amp;gt;fsdb_clients) ) failed: Find FSC after cleanup, FSDB lustre
18:50:33:LustreError: 30857:0:(mgs_handler.c:1379:mgs_fsc_debug()) LBUG
18:50:33:Pid: 30857, comm: umount
18:50:33:
18:50:33:Call Trace:
18:50:33: [&amp;lt;ffffffffa0885875&amp;gt;] libcfs_debug_dumpstack+0x55/0x80 [libcfs]
18:50:33: [&amp;lt;ffffffffa0885e77&amp;gt;] lbug_with_loc+0x47/0xb0 [libcfs]
18:50:33: [&amp;lt;ffffffffa10d0523&amp;gt;] mgs_fsc_debug+0x93/0xa0 [mgs]
18:50:33: [&amp;lt;ffffffffa10d36b5&amp;gt;] mgs_device_fini+0x115/0x5b0 [mgs]
18:50:33: [&amp;lt;ffffffffa098fc42&amp;gt;] class_cleanup+0x552/0xd10 [obdclass]
18:50:33: [&amp;lt;ffffffffa0970286&amp;gt;] ? class_name2dev+0x56/0xe0 [obdclass]
18:50:33: [&amp;lt;ffffffffa09923ea&amp;gt;] class_process_config+0x1fea/0x27c0 [obdclass]
18:50:33: [&amp;lt;ffffffffa0892161&amp;gt;] ? libcfs_debug_msg+0x41/0x50 [libcfs]
18:50:33: [&amp;lt;ffffffffa098b3a5&amp;gt;] ? lustre_cfg_new+0x435/0x630 [obdclass]
18:50:33: [&amp;lt;ffffffffa0992ce1&amp;gt;] class_manual_cleanup+0x121/0x870 [obdclass]
18:50:33: [&amp;lt;ffffffffa0970286&amp;gt;] ? class_name2dev+0x56/0xe0 [obdclass]
18:50:33: [&amp;lt;ffffffffa09cd0bf&amp;gt;] server_put_super+0x81f/0xe50 [obdclass]
18:50:33: [&amp;lt;ffffffff8119083b&amp;gt;] generic_shutdown_super+0x5b/0xe0
18:50:33: [&amp;lt;ffffffff81190926&amp;gt;] kill_anon_super+0x16/0x60
18:50:33: [&amp;lt;ffffffffa0994f36&amp;gt;] lustre_kill_super+0x36/0x60 [obdclass]
18:50:33: [&amp;lt;ffffffff811910c7&amp;gt;] deactivate_super+0x57/0x80
18:50:33: [&amp;lt;ffffffff811b0cff&amp;gt;] mntput_no_expire+0xbf/0x110
18:50:33: [&amp;lt;ffffffff811b184b&amp;gt;] sys_umount+0x7b/0x3a0
18:50:33: [&amp;lt;ffffffff8100b072&amp;gt;] system_call_fastpath+0x16/0x1b
18:50:33:
18:50:33:Kernel panic - not syncing: LBUG
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Jinshan, it seems that the FSCs are not fully cleaned up during class_disconnect_exports()&lt;/p&gt;</comment>
                            <comment id="111959" author="gerrit" created="Sat, 11 Apr 2015 06:28:35 +0000"  >&lt;p&gt;Mike Pershin (mike.pershin@intel.com) uploaded a new patch: &lt;a href=&quot;http://review.whamcloud.com/14443&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/14443&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-4772&quot; title=&quot;MGS is waiting for obd_unlinked_exports&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-4772&quot;&gt;&lt;del&gt;LU-4772&lt;/del&gt;&lt;/a&gt; mgs: free MGS fsdb before export barrier&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: ac5a43d780bb4d28edd4a1492ec202e482c92705&lt;/p&gt;</comment>
                            <comment id="114904" author="adilger" created="Mon, 11 May 2015 18:19:40 +0000"  >&lt;p&gt;Saw this today in replay-single test_35 &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/a504e786-f557-11e4-8a1d-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/a504e786-f557-11e4-8a1d-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="115734" author="jay" created="Mon, 18 May 2015 23:24:17 +0000"  >&lt;p&gt;The assertion in the debug patch is hit in &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-6616&quot; title=&quot;recovery-small test_27: (mgs_handler.c:1379:mgs_fsc_debug()) ASSERTION( list_empty(&amp;amp;fsdb-&amp;gt;fsdb_clients) ) failed&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-6616&quot;&gt;&lt;del&gt;LU-6616&lt;/del&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;08:12:16:Lustre: DEBUG MARKER: == recovery-small test 27: fail LOV while using OSC&apos;s == 21:44:09 (1431578649)
08:12:16:Lustre: DEBUG MARKER: grep -c /mnt/mds1&apos; &apos; /proc/mounts
08:12:16:Lustre: DEBUG MARKER: umount -d /mnt/mds1
08:12:16:Lustre: Failing over lustre-MDT0000
08:12:16:LustreError: 12100:0:(mgs_handler.c:1379:mgs_fsc_debug()) ASSERTION( list_empty(&amp;amp;fsdb-&amp;gt;fsdb_clients) ) failed: Find FSC after cleanup, FSDB lustre
08:12:16:LustreError: 12100:0:(mgs_handler.c:1379:mgs_fsc_debug()) LBUG
08:12:17:Pid: 12100, comm: umount
08:12:18:
08:12:18:Call Trace:
08:12:18: [&amp;lt;ffffffffa0704875&amp;gt;] libcfs_debug_dumpstack+0x55/0x80 [libcfs]
08:12:18: [&amp;lt;ffffffffa0704e77&amp;gt;] lbug_with_loc+0x47/0xb0 [libcfs]
08:12:19: [&amp;lt;ffffffffa0e7c523&amp;gt;] mgs_fsc_debug+0x93/0xa0 [mgs]
08:12:19: [&amp;lt;ffffffffa0e7f6b5&amp;gt;] mgs_device_fini+0x115/0x5b0 [mgs]
08:12:19: [&amp;lt;ffffffffa083d582&amp;gt;] class_cleanup+0x552/0xd10 [obdclass]
08:12:19: [&amp;lt;ffffffffa081dbc6&amp;gt;] ? class_name2dev+0x56/0xe0 [obdclass]
08:12:19: [&amp;lt;ffffffffa083fd2a&amp;gt;] class_process_config+0x1fea/0x27c0 [obdclass]
08:12:19: [&amp;lt;ffffffffa0710c31&amp;gt;] ? libcfs_debug_msg+0x41/0x50 [libcfs]
08:12:19: [&amp;lt;ffffffffa0838ce5&amp;gt;] ? lustre_cfg_new+0x435/0x630 [obdclass]
08:12:20: [&amp;lt;ffffffffa0840621&amp;gt;] class_manual_cleanup+0x121/0x870 [obdclass]
08:12:20: [&amp;lt;ffffffffa081dbc6&amp;gt;] ? class_name2dev+0x56/0xe0 [obdclass]
08:12:20: [&amp;lt;ffffffffa087b774&amp;gt;] server_put_super+0xc44/0xea0 [obdclass]
08:12:20: [&amp;lt;ffffffff81190adb&amp;gt;] generic_shutdown_super+0x5b/0xe0
08:12:20: [&amp;lt;ffffffff81190bc6&amp;gt;] kill_anon_super+0x16/0x60
08:12:20: [&amp;lt;ffffffffa0842876&amp;gt;] lustre_kill_super+0x36/0x60 [obdclass]
08:12:21: [&amp;lt;ffffffff81191367&amp;gt;] deactivate_super+0x57/0x80
08:12:22: [&amp;lt;ffffffff811b0fbf&amp;gt;] mntput_no_expire+0xbf/0x110
08:12:23: [&amp;lt;ffffffff811b1b0b&amp;gt;] sys_umount+0x7b/0x3a0
08:12:23: [&amp;lt;ffffffff8100b072&amp;gt;] system_call_fastpath+0x16/0x1b
08:12:23:
08:12:23:Kernel panic - not syncing: LBUG
08:12:23:Pid: 12100, comm: umount Tainted: P           ---------------    2.6.32-504.16.2.el6_lustre.gd805a88.x86_64 #1
08:12:24:Call Trace:
08:12:24: [&amp;lt;ffffffff81529fbc&amp;gt;] ? panic+0xa7/0x16f
08:12:25: [&amp;lt;ffffffffa0704ecb&amp;gt;] ? lbug_with_loc+0x9b/0xb0 [libcfs]
08:12:25: [&amp;lt;ffffffffa0e7c523&amp;gt;] ? mgs_fsc_debug+0x93/0xa0 [mgs]
08:12:25: [&amp;lt;ffffffffa0e7f6b5&amp;gt;] ? mgs_device_fini+0x115/0x5b0 [mgs]
08:12:25: [&amp;lt;ffffffffa083d582&amp;gt;] ? class_cleanup+0x552/0xd10 [obdclass]
08:12:25: [&amp;lt;ffffffffa081dbc6&amp;gt;] ? class_name2dev+0x56/0xe0 [obdclass]
08:12:25: [&amp;lt;ffffffffa083fd2a&amp;gt;] ? class_process_config+0x1fea/0x27c0 [obdclass]
08:12:25: [&amp;lt;ffffffffa0710c31&amp;gt;] ? libcfs_debug_msg+0x41/0x50 [libcfs]
08:12:26: [&amp;lt;ffffffffa0838ce5&amp;gt;] ? lustre_cfg_new+0x435/0x630 [obdclass]
08:12:26: [&amp;lt;ffffffffa0840621&amp;gt;] ? class_manual_cleanup+0x121/0x870 [obdclass]
08:12:27: [&amp;lt;ffffffffa081dbc6&amp;gt;] ? class_name2dev+0x56/0xe0 [obdclass]
08:12:27: [&amp;lt;ffffffffa087b774&amp;gt;] ? server_put_super+0xc44/0xea0 [obdclass]
08:12:27: [&amp;lt;ffffffff81190adb&amp;gt;] ? generic_shutdown_super+0x5b/0xe0
08:12:28: [&amp;lt;ffffffff81190bc6&amp;gt;] ? kill_anon_super+0x16/0x60
08:12:28: [&amp;lt;ffffffffa0842876&amp;gt;] ? lustre_kill_super+0x36/0x60 [obdclass]
08:12:28: [&amp;lt;ffffffff81191367&amp;gt;] ? deactivate_super+0x57/0x80
08:12:28: [&amp;lt;ffffffff811b0fbf&amp;gt;] ? mntput_no_expire+0xbf/0x110
08:12:29: [&amp;lt;ffffffff811b1b0b&amp;gt;] ? sys_umount+0x7b/0x3a0
08:12:29: [&amp;lt;ffffffff8100b072&amp;gt;] ? system_call_fastpath+0x16/0x1b
08:12:29:Initializing cgroup subsys cpuset
08:12:30:Initializing cgroup subsys cpu
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Maloo test result is at: &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/32d97c9c-fa45-11e4-8c8b-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/32d97c9c-fa45-11e4-8c8b-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="115753" author="jay" created="Tue, 19 May 2015 00:04:57 +0000"  >&lt;p&gt;I was trying to find the root cause, but I couldn&apos;t fetch the debug log. However, patch 14443 seems harmless anyway, so I&apos;d like to give it a try.&lt;/p&gt;</comment>
                            <comment id="116048" author="gerrit" created="Wed, 20 May 2015 19:16:16 +0000"  >&lt;p&gt;Oleg Drokin (oleg.drokin@intel.com) merged in patch &lt;a href=&quot;http://review.whamcloud.com/14443/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/14443/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-4772&quot; title=&quot;MGS is waiting for obd_unlinked_exports&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-4772&quot;&gt;&lt;del&gt;LU-4772&lt;/del&gt;&lt;/a&gt; mgs: free MGS fsdb before export barrier&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: 319acc84feeb66d08c7db408ad732889b46765e0&lt;/p&gt;</comment>
                            <comment id="121129" author="pjones" created="Mon, 13 Jul 2015 13:29:48 +0000"  >&lt;p&gt;Landed for 2.8&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                    <issuelinktype id="10010">
                        <name>Duplicate</name>
                        <outwardlinks description="duplicates">
                            <issuelink>
                                <issuekey id="26132">LU-5539</issuekey>
                            </issuelink>
                            <issuelink>
                                <issuekey id="21262">LU-4062</issuekey>
                            </issuelink>
                        </outwardlinks>
                        <inwardlinks description="is duplicated by">
                            <issuelink>
                                <issuekey id="25065">LU-5161</issuekey>
                            </issuelink>
                            <issuelink>
                                <issuekey id="28113">LU-6103</issuekey>
                            </issuelink>
                        </inwardlinks>
                    </issuelinktype>
                    <issuelinktype id="10011">
                        <name>Related</name>
                        <outwardlinks description="is related to">
                        </outwardlinks>
                        <inwardlinks description="is related to">
                            <issuelink>
                                <issuekey id="30222">LU-6616</issuekey>
                            </issuelink>
                        </inwardlinks>
                    </issuelinktype>
                </issuelinks>
                <attachments>
                </attachments>
                <subtasks>
                </subtasks>
                <customfields>
                    <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzwhpb:</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>13125</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                </customfields>
    </item>
</channel>
</rss>