<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 01:43:52 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
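A complete request of that form (issue-XML view path assumed; adjust for your instance) would be:
https://jira.whamcloud.com/si/jira.issueviews:issue-xml/LU-4565/LU-4565.xml?field=key&field=summary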
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


    <item>
        <title>[LU-4565] recovery-mds-scale test_failover_ost: failed mounting ost after reboot</title>
        <link>https://jira.whamcloud.com/browse/LU-4565</link>
        <project id="10000" key="LU">Lustre</project>
        <description>&lt;p&gt;This issue was created by maloo for sarah &amp;lt;sarah@whamcloud.com&amp;gt;&lt;/p&gt;

&lt;p&gt;This issue relates to the following test suite run: &lt;a href=&quot;http://maloo.whamcloud.com/test_sets/ed6584da-847f-11e3-9133-52540035b04c&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://maloo.whamcloud.com/test_sets/ed6584da-847f-11e3-9133-52540035b04c&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The sub-test test_failover_ost failed with the following error:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;test failed to respond and timed out&lt;/p&gt;&lt;/blockquote&gt;

&lt;p&gt;OST dmesg:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;Lustre: DEBUG MARKER: zpool list -H lustre-ost2 &amp;gt;/dev/null 2&amp;gt;&amp;amp;1 ||
			zpool import -f -o cachefile=none -d /dev/lvm-Role_OSS lustre-ost2
Lustre: DEBUG MARKER: mkdir -p /mnt/ost2; mount -t lustre lustre-ost2/ost2 /mnt/ost2
Lustre: 3388:0:(client.c:1903:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1390291785/real 1390291785]  req@ffff88006c21d800 x1457826572533816/t0(0) o38-&amp;gt;lustre-MDT0000-lwp-OST0000@10.10.4.198@tcp:12/10 lens 400/544 e 0 to 1 dl 1390291790 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1
LustreError: 13a-8: Failed to get MGS log params and no local copy.
Lustre: 3388:0:(client.c:1903:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1390291791/real 1390291791]  req@ffff88006b122c00 x1457826572533912/t0(0) o38-&amp;gt;lustre-MDT0000-lwp-OST0001@10.10.4.198@tcp:12/10 lens 400/544 e 0 to 1 dl 1390291796 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1
Lustre: lustre-OST0000: Will be in recovery for at least 1:00, or until 3 clients reconnect
LustreError: 137-5: lustre-OST0002_UUID: not available for connect from 10.10.4.200@tcp (no target). If you are running an HA pair check that the target is mounted on the other server.
Lustre: 3388:0:(client.c:1903:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1390291801/real 1390291801]  req@ffff88006c21dc00 x1457826572533940/t0(0) o38-&amp;gt;lustre-MDT0000-lwp-OST0001@10.10.4.198@tcp:12/10 lens 400/544 e 0 to 1 dl 1390291811 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1
Lustre: lustre-OST0000: Recovery over after 0:07, of 3 clients 3 recovered and 0 were evicted.
Lustre: lustre-OST0000: deleting orphan objects from 0x0:833 to 0x0:833
Lustre: 3388:0:(client.c:1903:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1390291816/real 1390291816]  req@ffff880037b0ec00 x1457826572533968/t0(0) o38-&amp;gt;lustre-MDT0000-lwp-OST0001@10.10.4.198@tcp:12/10 lens 400/544 e 0 to 1 dl 1390291831 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1
LustreError: 137-5: lustre-OST0002_UUID: not available for connect from 10.10.4.200@tcp (no target). If you are running an HA pair check that the target is mounted on the other server.
LustreError: Skipped 18 previous similar messages
Lustre: 3388:0:(client.c:1903:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1390291836/real 1390291836]  req@ffff8800680a7000 x1457826572534004/t0(0) o38-&amp;gt;lustre-MDT0000-lwp-OST0001@10.10.4.198@tcp:12/10 lens 400/544 e 0 to 1 dl 1390291856 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1
Lustre: 3388:0:(client.c:1903:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1390291866/real 1390291866]  req@ffff880067d01000 x1457826572534060/t0(0) o38-&amp;gt;lustre-MDT0000-lwp-OST0001@10.10.4.198@tcp:12/10 lens 400/544 e 0 to 1 dl 1390291891 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1
Lustre: 3388:0:(client.c:1903:ptlrpc_expire_one_request()) Skipped 1 previous similar message
LustreError: 137-5: lustre-OST0002_UUID: not available for connect from 10.10.4.200@tcp (no target). If you are running an HA pair check that the target is mounted on the other server.
LustreError: Skipped 14 previous similar messages
Lustre: 3388:0:(client.c:1903:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1390291871/real 1390291871]  req@ffff88006addb800 x1457826572534072/t0(0) o38-&amp;gt;lustre-MDT0000-lwp-OST0001@10.10.4.198@tcp:12/10 lens 400/544 e 0 to 1 dl 1390291896 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1
LustreError: 137-5: lustre-OST0002_UUID: not available for connect from 10.10.4.200@tcp (no target). If you are running an HA pair check that the target is mounted on the other server.
LustreError: Skipped 14 previous similar messages
Lustre: 3388:0:(client.c:1903:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1390291931/real 1390291931]  req@ffff88006addbc00 x1457826572534196/t0(0) o38-&amp;gt;lustre-MDT0000-lwp-OST0001@10.10.4.198@tcp:12/10 lens 400/544 e 0 to 1 dl 1390291956 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1
Lustre: 3388:0:(client.c:1903:ptlrpc_expire_one_request()) Skipped 6 previous similar messages
LustreError: 137-5: lustre-OST0002_UUID: not available for connect from 10.10.4.200@tcp (no target). If you are running an HA pair check that the target is mounted on the other server.
LustreError: Skipped 35 previous similar messages
INFO: task mount.lustre:3972 blocked for more than 120 seconds.
&quot;echo 0 &amp;gt; /proc/sys/kernel/hung_task_timeout_secs&quot; disables this message.
mount.lustre  D 0000000000000001     0  3972   3971 0x00000080
 ffff88006af93718 0000000000000086 ffff88006af936a8 ffffffff81065c75
 000000106af936b8 ffff88007e4b2ad8 ffff8800022167a8 ffff880002216740
 ffff88006a935098 ffff88006af93fd8 000000000000fb88 ffff88006a935098
Call Trace:
 [&amp;lt;ffffffff81065c75&amp;gt;] ? enqueue_entity+0x125/0x410
 [&amp;lt;ffffffff8103c7d8&amp;gt;] ? pvclock_clocksource_read+0x58/0xd0
 [&amp;lt;ffffffff8150f475&amp;gt;] schedule_timeout+0x215/0x2e0
 [&amp;lt;ffffffff81065c75&amp;gt;] ? enqueue_entity+0x125/0x410
 [&amp;lt;ffffffff810572f4&amp;gt;] ? check_preempt_wakeup+0x1a4/0x260
 [&amp;lt;ffffffff8106605b&amp;gt;] ? enqueue_task_fair+0xfb/0x100
 [&amp;lt;ffffffff8150f0f3&amp;gt;] wait_for_common+0x123/0x180
 [&amp;lt;ffffffff81063990&amp;gt;] ? default_wake_function+0x0/0x20
 [&amp;lt;ffffffff8150f20d&amp;gt;] wait_for_completion+0x1d/0x20
 [&amp;lt;ffffffffa0858c03&amp;gt;] llog_process_or_fork+0x353/0x5f0 [obdclass]
 [&amp;lt;ffffffffa0858eb4&amp;gt;] llog_process+0x14/0x20 [obdclass]
 [&amp;lt;ffffffffa088d444&amp;gt;] class_config_parse_llog+0x1e4/0x330 [obdclass]
 [&amp;lt;ffffffffa105c2d2&amp;gt;] mgc_process_log+0xd22/0x18e0 [mgc]
 [&amp;lt;ffffffffa1056360&amp;gt;] ? mgc_blocking_ast+0x0/0x810 [mgc]
 [&amp;lt;ffffffffa0aaf2a0&amp;gt;] ? ldlm_completion_ast+0x0/0x930 [ptlrpc]
 [&amp;lt;ffffffffa105e4f5&amp;gt;] mgc_process_config+0x645/0x11d0 [mgc]
 [&amp;lt;ffffffffa089d476&amp;gt;] lustre_process_log+0x256/0xa70 [obdclass]
 [&amp;lt;ffffffffa086f832&amp;gt;] ? class_name2dev+0x42/0xe0 [obdclass]
 [&amp;lt;ffffffff81168043&amp;gt;] ? kmem_cache_alloc_trace+0x1a3/0x1b0
 [&amp;lt;ffffffffa086f8de&amp;gt;] ? class_name2obd+0xe/0x30 [obdclass]
 [&amp;lt;ffffffffa08ce71c&amp;gt;] server_start_targets+0x1c4c/0x1e00 [obdclass]
 [&amp;lt;ffffffffa08a0a7b&amp;gt;] ? lustre_start_mgc+0x48b/0x1e60 [obdclass]
 [&amp;lt;ffffffffa08989e0&amp;gt;] ? class_config_llog_handler+0x0/0x1880 [obdclass]
 [&amp;lt;ffffffffa08d3708&amp;gt;] server_fill_super+0xb98/0x1a64 [obdclass]
 [&amp;lt;ffffffffa08a2628&amp;gt;] lustre_fill_super+0x1d8/0x530 [obdclass]
 [&amp;lt;ffffffffa08a2450&amp;gt;] ? lustre_fill_super+0x0/0x530 [obdclass]
 [&amp;lt;ffffffff811845df&amp;gt;] get_sb_nodev+0x5f/0xa0
 [&amp;lt;ffffffffa089a365&amp;gt;] lustre_get_sb+0x25/0x30 [obdclass]
 [&amp;lt;ffffffff81183c1b&amp;gt;] vfs_kern_mount+0x7b/0x1b0
 [&amp;lt;ffffffff81183dc2&amp;gt;] do_kern_mount+0x52/0x130
 [&amp;lt;ffffffff81195382&amp;gt;] ? vfs_ioctl+0x22/0xa0
 [&amp;lt;ffffffff811a3f82&amp;gt;] do_mount+0x2d2/0x8d0
 [&amp;lt;ffffffff811a4610&amp;gt;] sys_mount+0x90/0xe0
 [&amp;lt;ffffffff8100b072&amp;gt;] system_call_fastpath+0x16/0x1b
Lustre: 3388:0:(client.c:1903:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1390291996/real 1390291996]  req@ffff88006adf3400 x1457826572534332/t0(0) o38-&amp;gt;lustre-MDT0000-lwp-OST0001@10.10.4.198@tcp:12/10 lens 400/544 e 0 to 1 dl 1390292021 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1
Lustre: 3388:0:(client.c:1903:ptlrpc_expire_one_request()) Skipped 7 previous similar messages
LustreError: 137-5: lustre-OST0003_UUID: not available for connect from 10.10.4.202@tcp (no target). If you are running an HA pair check that the target is mounted on the other server.
LustreError: Skipped 124 previous similar messages
INFO: task mount.lustre:3972 blocked for more than 120 seconds.
&quot;echo 0 &amp;gt; /proc/sys/kernel/hung_task_timeout_secs&quot; disables this message.
mount.lustre  D 0000000000000001     0  3972   3971 0x00000080
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</description>
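        <!--
        A minimal shell sketch of the failover mount sequence, reconstructed from the
        DEBUG MARKER lines in the dmesg above (pool name, device path, and mount point
        are taken from the log; options on other setups may differ). The final mount is
        the step that hangs while the MGS is unreachable:

            # import the OST pool on the failover OSS if it is not already visible
            zpool list -H lustre-ost2 >/dev/null 2>&1 ||
                zpool import -f -o cachefile=none -d /dev/lvm-Role_OSS lustre-ost2

            # mount the OST; with the MGS down, mount.lustre blocks in llog processing
            mkdir -p /mnt/ost2
            mount -t lustre lustre-ost2/ost2 /mnt/ost2
        -->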
        <environment>lustre-master build # 1837  RHEL6 zfs</environment>
        <key id="22929">LU-4565</key>
        <summary>recovery-mds-scale test_failover_ost: failed mounting ost after reboot</summary>
        <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
        <priority id="2" iconUrl="https://jira.whamcloud.com/images/icons/priorities/critical.svg">Critical</priority>
        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
        <statusCategory id="3" key="done" colorName="success"/>
        <resolution id="1">Fixed</resolution>
        <assignee username="tappro">Mikhail Pershin</assignee>
        <reporter username="maloo">Maloo</reporter>
        <labels>
            <label>zfs</label>
        </labels>
        <created>Thu, 30 Jan 2014 07:49:16 +0000</created>
        <updated>Tue, 24 Jun 2014 17:52:18 +0000</updated>
        <resolved>Tue, 24 Jun 2014 17:52:18 +0000</resolved>
        <fixVersion>Lustre 2.6.0</fixVersion>
        <due></due>
        <votes>0</votes>
        <watches>7</watches>
        <comments>
                            <comment id="76001" author="adilger" created="Fri, 31 Jan 2014 18:29:24 +0000"  >&lt;p&gt;It looks like the OST is stuck waiting to connect to the MGS but the MGS has not been started yet. I think the OST shouldn&apos;t block on the MGS, since that was never a requirement to mount the MGS first.  This is doubly true for 2.4+ since the OST already has its index assigned. &lt;/p&gt;</comment>
                            <comment id="82520" author="adilger" created="Fri, 25 Apr 2014 18:03:33 +0000"  >&lt;p&gt;It may be that fixing &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-2059&quot; title=&quot;mgc to backup configuration on osd-based llogs&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-2059&quot;&gt;&lt;del&gt;LU-2059&lt;/del&gt;&lt;/a&gt; &quot;mgc to backup configuration on osd-based llogs&quot; would also fix this problem - then the OST can start with its local logs and avoid waiting for the MGS to start.&lt;/p&gt;</comment>
                            <comment id="84804" author="jlevi" created="Fri, 23 May 2014 18:49:20 +0000"  >&lt;p&gt;Now that &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-2059&quot; title=&quot;mgc to backup configuration on osd-based llogs&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-2059&quot;&gt;&lt;del&gt;LU-2059&lt;/del&gt;&lt;/a&gt; is fixed, who can confirm Andreas&apos; thoughts that this would be fixed as well?&lt;br/&gt;
Thanks!&lt;/p&gt;</comment>
                            <comment id="85475" author="jlevi" created="Mon, 2 Jun 2014 17:20:41 +0000"  >&lt;p&gt;Please reopen this ticket if the issue reoccurs. &lt;/p&gt;</comment>
                            <comment id="87106" author="sarah" created="Thu, 19 Jun 2014 23:21:43 +0000"  >&lt;p&gt;Hit this error again in lustre-master tag-2.5.60 zfs failover testing:&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://maloo.whamcloud.com/test_sets/1173f18a-f62d-11e3-8491-52540035b04c&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://maloo.whamcloud.com/test_sets/1173f18a-f62d-11e3-8491-52540035b04c&lt;/a&gt;&lt;br/&gt;
OST dmesg:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;Lustre: 26711:0:(client.c:1924:ptlrpc_expire_one_request()) Skipped 6 previous similar messages
INFO: task mount.lustre:27292 blocked for more than 120 seconds.
      Tainted: P           ---------------    2.6.32-431.17.1.el6_lustre.g8d5344f.x86_64 #1
&quot;echo 0 &amp;gt; /proc/sys/kernel/hung_task_timeout_secs&quot; disables this message.
mount.lustre  D 0000000000000000     0 27292  27291 0x00000080
 ffff88005e7d9718 0000000000000086 ffff88005e7d96a8 ffffffff81069f75
 000000105e7d96b8 ffff88007e4c2ad8 ffff8800023168e8 ffff880002316880
 ffff88005f2685f8 ffff88005e7d9fd8 000000000000fbc8 ffff88005f2685f8
Call Trace:
 [&amp;lt;ffffffff81069f75&amp;gt;] ? enqueue_entity+0x125/0x450
 [&amp;lt;ffffffff8103f9d8&amp;gt;] ? pvclock_clocksource_read+0x58/0xd0
 [&amp;lt;ffffffff81528f05&amp;gt;] schedule_timeout+0x215/0x2e0
 [&amp;lt;ffffffff81069f75&amp;gt;] ? enqueue_entity+0x125/0x450
 [&amp;lt;ffffffff8105ad54&amp;gt;] ? check_preempt_wakeup+0x1a4/0x260
 [&amp;lt;ffffffff8106a39b&amp;gt;] ? enqueue_task_fair+0xfb/0x100
 [&amp;lt;ffffffff81528b83&amp;gt;] wait_for_common+0x123/0x180
 [&amp;lt;ffffffff81061d00&amp;gt;] ? default_wake_function+0x0/0x20
 [&amp;lt;ffffffffa08eaa80&amp;gt;] ? client_lwp_config_process+0x0/0x199a [obdclass]
 [&amp;lt;ffffffff81528c9d&amp;gt;] wait_for_completion+0x1d/0x20
 [&amp;lt;ffffffffa086ea34&amp;gt;] llog_process_or_fork+0x344/0x550 [obdclass]
 [&amp;lt;ffffffffa086ec54&amp;gt;] llog_process+0x14/0x30 [obdclass]
 [&amp;lt;ffffffffa08a0524&amp;gt;] class_config_parse_llog+0x1e4/0x330 [obdclass]
 [&amp;lt;ffffffffa103d252&amp;gt;] mgc_process_log+0xdc2/0x1970 [mgc]
 [&amp;lt;ffffffffa1037290&amp;gt;] ? mgc_blocking_ast+0x0/0x810 [mgc]
 [&amp;lt;ffffffffa0acb250&amp;gt;] ? ldlm_completion_ast+0x0/0x930 [ptlrpc]
 [&amp;lt;ffffffffa103f485&amp;gt;] mgc_process_config+0x645/0x11d0 [mgc]
 [&amp;lt;ffffffffa08b07af&amp;gt;] lustre_process_log+0x20f/0xac0 [obdclass]
 [&amp;lt;ffffffffa08dfa3c&amp;gt;] ? server_find_mount+0xbc/0x160 [obdclass]
 [&amp;lt;ffffffff8116f303&amp;gt;] ? kmem_cache_alloc_trace+0x1a3/0x1b0
 [&amp;lt;ffffffffa08ad8af&amp;gt;] ? server_name2fsname+0x6f/0x90 [obdclass]
 [&amp;lt;ffffffffa08e5a16&amp;gt;] server_start_targets+0x12b6/0x1ae0 [obdclass]
 [&amp;lt;ffffffffa08b3eeb&amp;gt;] ? lustre_start_mgc+0x48b/0x1df0 [obdclass]
 [&amp;lt;ffffffffa08abdc0&amp;gt;] ? class_config_llog_handler+0x0/0x18b0 [obdclass]
 [&amp;lt;ffffffffa08e9ff8&amp;gt;] server_fill_super+0xb98/0x1620 [obdclass]
 [&amp;lt;ffffffffa08b5a28&amp;gt;] lustre_fill_super+0x1d8/0x530 [obdclass]
 [&amp;lt;ffffffffa08b5850&amp;gt;] ? lustre_fill_super+0x0/0x530 [obdclass]
 [&amp;lt;ffffffff8118be5f&amp;gt;] get_sb_nodev+0x5f/0xa0
 [&amp;lt;ffffffffa08ad775&amp;gt;] lustre_get_sb+0x25/0x30 [obdclass]
 [&amp;lt;ffffffff8118b4bb&amp;gt;] vfs_kern_mount+0x7b/0x1b0
 [&amp;lt;ffffffff8118b662&amp;gt;] do_kern_mount+0x52/0x130
 [&amp;lt;ffffffff8119d862&amp;gt;] ? vfs_ioctl+0x22/0xa0
 [&amp;lt;ffffffff811ac63b&amp;gt;] do_mount+0x2fb/0x930
 [&amp;lt;ffffffff811acd00&amp;gt;] sys_mount+0x90/0xe0
 [&amp;lt;ffffffff8100b072&amp;gt;] system_call_fastpath+0x16/0x1b
Lustre: 26711:0:(client.c:1924:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1402948490/real 1402948490]  req@ffff88005d6a5c00 x1471097868387440/t0(0) o38-&amp;gt;lustre-MDT0000-lwp-OST0001@10.1.6.21@tcp:12/10 lens 400/544 e 0 to 1 dl 1402948515 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1
Lustre: 26711:0:(client.c:1924:ptlrpc_expire_one_request()) Skipped 7 previous similar messages
LustreError: 137-5: lustre-OST0002_UUID: not available for connect from 10.1.6.23@tcp (no target). If you are running an HA pair check that the target is mounted on the other server.
LustreError: Skipped 135 previous similar messages
INFO: task mount.lustre:27292 blocked for more than 120 seconds.
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
                            <comment id="87187" author="adilger" created="Fri, 20 Jun 2014 17:36:51 +0000"  >&lt;p&gt;Mike, could you please take a look at this again.&lt;/p&gt;</comment>
                            <comment id="87224" author="pjones" created="Sat, 21 Jun 2014 04:04:19 +0000"  >&lt;p&gt;I note that this fix tracked under &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-2059&quot; title=&quot;mgc to backup configuration on osd-based llogs&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-2059&quot;&gt;&lt;del&gt;LU-2059&lt;/del&gt;&lt;/a&gt; was not in 2.5.60 - &lt;a href=&quot;http://git.whamcloud.com/fs/lustre-release.git/commit/8c21986e79f50131b0f381e5fe0311294328d660&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://git.whamcloud.com/fs/lustre-release.git/commit/8c21986e79f50131b0f381e5fe0311294328d660&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="87299" author="tappro" created="Mon, 23 Jun 2014 18:17:49 +0000"  >&lt;p&gt;Peter, in that case we need just &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-2059&quot; title=&quot;mgc to backup configuration on osd-based llogs&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-2059&quot;&gt;&lt;del&gt;LU-2059&lt;/del&gt;&lt;/a&gt; ported to b2_5, right?&lt;/p&gt;</comment>
        </comments>
        <issuelinks>
            <issuelinktype id="10011">
                <name>Related</name>
                <outwardlinks description="is related to ">
                    <issuelink>
                        <issuekey id="16192">LU-2059</issuekey>
                    </issuelink>
                </outwardlinks>
            </issuelinktype>
        </issuelinks>
        <attachments>
        </attachments>
        <subtasks>
        </subtasks>
        <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzwdw7:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>12460</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>