<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 02:02:21 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary, append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92">
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-6684] lctl lfsck_stop hangs</title>
                <link>https://jira.whamcloud.com/browse/LU-6684</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;As mentioned in  &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-6683&quot; title=&quot;OSS crash when starting lfsck layout check&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-6683&quot;&gt;&lt;del&gt;LU-6683&lt;/del&gt;&lt;/a&gt;, I ran into a situation where lctl lfsck_stop just hangs indefinitely.&lt;/p&gt;

&lt;p&gt;I have managed to reproduce this twice: &lt;/p&gt;

&lt;p&gt;Start lfsck (using lctl lfsck_start -M play01-MDT0000 -t layout); this crashes the OSS servers. Reboot the servers and restart the OSTs. Attempting to stop the lfsck in this state just hangs. I have waited &amp;gt;1h and it was still hanging. Unmounting the MDT in this situation also appears to hang (after 30 minutes I power cycled the MDS).&lt;/p&gt;</description>
                <environment></environment>
        <key id="30488">LU-6684</key>
            <summary>lctl lfsck_stop hangs</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="1" iconUrl="https://jira.whamcloud.com/images/icons/priorities/blocker.svg">Blocker</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="yong.fan">nasf</assignee>
                                    <reporter username="ferner">Frederik Ferner</reporter>
                        <labels>
                    </labels>
                <created>Wed, 3 Jun 2015 16:43:22 +0000</created>
                <updated>Wed, 6 Dec 2017 16:29:24 +0000</updated>
                            <resolved>Tue, 2 Feb 2016 04:38:14 +0000</resolved>
                                    <version>Lustre 2.7.0</version>
                                    <fixVersion>Lustre 2.8.0</fixVersion>
                                        <due></due>
                            <votes>0</votes>
                                    <watches>13</watches>
                                                                            <comments>
                            <comment id="117333" author="pjones" created="Wed, 3 Jun 2015 18:54:25 +0000"  >&lt;p&gt;Fan Yong&lt;/p&gt;

&lt;p&gt;Could you please advise on this one?&lt;/p&gt;

&lt;p&gt;Thanks&lt;/p&gt;

&lt;p&gt;Peter&lt;/p&gt;</comment>
                            <comment id="117411" author="yong.fan" created="Thu, 4 Jun 2015 08:40:02 +0000"  >&lt;p&gt;Hi Frederik, if you can reproduce the issue, then please do the following on the MDT:&lt;/p&gt;

&lt;p&gt;1) echo -1 &amp;gt; /proc/sys/lnet/debug&lt;br/&gt;
2) lctl clear&lt;br/&gt;
3) dmesg -c&lt;br/&gt;
4) when the lfsck_stop hangs, &quot;lctl dk &amp;gt; /tmp/lustre.log&quot;&lt;br/&gt;
5) echo t &amp;gt; /proc/sysrq-trigger&lt;br/&gt;
6) dmesg &amp;gt; /tmp/lustre.dmesg&lt;/p&gt;

&lt;p&gt;Please attach the lustre.log and lustre.dmesg. Thanks!&lt;/p&gt;</comment>
                            <comment id="117419" author="ferner" created="Thu, 4 Jun 2015 11:09:17 +0000"  >&lt;p&gt;I have reproduced it; files are attached.&lt;/p&gt;

&lt;p&gt;(Note this was before applying the patch from &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-6683&quot; title=&quot;OSS crash when starting lfsck layout check&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-6683&quot;&gt;&lt;del&gt;LU-6683&lt;/del&gt;&lt;/a&gt;, so all OSSes were down while the lfsck_stop was hanging.)&lt;/p&gt;</comment>
                            <comment id="117658" author="yong.fan" created="Sat, 6 Jun 2015 01:54:08 +0000"  >&lt;p&gt;According to the log, the lfsck_stop was waiting for the layout LFSCK thread to exit, but the latter was sending an RPC to the OST. At that time, the connection between the MDT and the OST was broken, and the MDT was trying to reconnect, but the reconnect RPC expired and the reconnection was retried over and over...&lt;/p&gt;</comment>
                            <comment id="117687" author="adilger" created="Sun, 7 Jun 2015 04:44:38 +0000"  >&lt;p&gt;I think that stopping the MDT or OST in such a case is too much. Is the RPC stuck at the ptlrpc layer?  Is it the RPC sent by lfsck_stop itself to the OSS to stop layout lfsck that is stuck or is lfsck_stop stuck waiting for something else?  Is this RPC sent by ptlrpcd or could ctrl-C interrupt the wait like some normal user process?&lt;/p&gt;

&lt;p&gt;Having a stack trace would be useful. Fan Yong, can you please create a sanity-lfsck test case for this and then collect a stack trace so it is clearer what is stuck and where.&lt;/p&gt;</comment>
                            <comment id="117688" author="yong.fan" created="Sun, 7 Jun 2015 06:00:01 +0000"  >&lt;p&gt;The stack trace is clear, as follows:&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;lfsck_layout  S 0000000000000003     0  3643      2 0x00000000
 ffff880158e75a40 0000000000000046 0000000000000000 0000000000000000
 ffff8802a40b0ef0 ffff8802a40b0ec0 00020c3c3569f55d ffff8802a40b0ef0
 ffff880158e75a10 000000012256a121 ffff88012c1b05f8 ffff880158e75fd8
Call Trace:
 [&amp;lt;ffffffff8152b102&amp;gt;] schedule_timeout+0x192/0x2e0
 [&amp;lt;ffffffff810874f0&amp;gt;] ? process_timeout+0x0/0x10
 [&amp;lt;ffffffffa09bd0e2&amp;gt;] ptlrpc_set_wait+0x2b2/0x890 [ptlrpc]
 [&amp;lt;ffffffffa09b29c0&amp;gt;] ? ptlrpc_interrupted_set+0x0/0x110 [ptlrpc]
 [&amp;lt;ffffffff81064b90&amp;gt;] ? default_wake_function+0x0/0x20
 [&amp;lt;ffffffffa09c7dc6&amp;gt;] ? lustre_msg_set_jobid+0xb6/0x140 [ptlrpc]
 [&amp;lt;ffffffffa09bd741&amp;gt;] ptlrpc_queue_wait+0x81/0x220 [ptlrpc]
 [&amp;lt;ffffffffa0a356d1&amp;gt;] out_remote_sync+0x111/0x200 [ptlrpc]
 [&amp;lt;ffffffffa144ca92&amp;gt;] osp_attr_get+0x352/0x600 [osp]
 [&amp;lt;ffffffffa1219e50&amp;gt;] lfsck_layout_assistant_handler_p1+0x530/0x19f0 [lfsck]
 [&amp;lt;ffffffffa066b1c1&amp;gt;] ? libcfs_debug_msg+0x41/0x50 [libcfs]
 [&amp;lt;ffffffffa11e06e6&amp;gt;] lfsck_assistant_engine+0x496/0x1de0 [lfsck]
 [&amp;lt;ffffffff81064b90&amp;gt;] ? default_wake_function+0x0/0x20
 [&amp;lt;ffffffffa11e0250&amp;gt;] ? lfsck_assistant_engine+0x0/0x1de0 [lfsck]
 [&amp;lt;ffffffff8109e66e&amp;gt;] kthread+0x9e/0xc0
 [&amp;lt;ffffffff8100c20a&amp;gt;] child_rip+0xa/0x20
 [&amp;lt;ffffffff8109e5d0&amp;gt;] ? kthread+0x0/0xc0
 [&amp;lt;ffffffff8100c200&amp;gt;] ? child_rip+0x0/0x20
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;It was inside the ptlrpc layer; the LFSCK control flags could not wake up such a thread. At that time, the target OSS was down, so the RPC (for attr_get, not for lfsck_stop) expired, which triggered the reconnection, and the RPC would be re-sent after the connection recovered. Because LFSCK shares the same RPC handling logic with cross-MDT operations (from the OSP =&amp;gt; ptlrpc view, they are indistinguishable), we cannot simply mark the RPC as no_resend.&lt;br/&gt;
On the other hand, the LFSCK engine is a background thread and cannot receive ctrl-C, but maybe we could use &quot;kill -9 $PID&quot; for that. However, I am not sure whether we should allow someone to kill the background LFSCK engine via SIGKILL instead of through the lfsck_stop interface.&lt;/p&gt;</comment>
                            <comment id="117696" author="adilger" created="Sun, 7 Jun 2015 16:30:33 +0000"  >&lt;p&gt;It would be possible for &quot;lctl lfsck_stop&quot; to send SIGINT or SIGKILL to the lfsck thread to interrupt it, if it has the right LWI handler in out_remote_sync().&lt;/p&gt;</comment>
                            <comment id="117714" author="yong.fan" created="Mon, 8 Jun 2015 07:44:28 +0000"  >&lt;p&gt;In theory, we can do that. But the LWI is declared inside the ptlrpc layer. If we make the (LFSCK) thread that is waiting on the LWI handle SIGKILL, that means any thread (not only the LFSCK engine, but also other RPC service threads, ptlrpcd threads, and so on) could be killed by a user via &quot;kill -9 $PID&quot;. That is not what we want, especially since someone may do it by mistake.&lt;/p&gt;

&lt;p&gt;If we want the SIGKILL to be handled only by the LFSCK engine, then we need some mechanism to make the ptlrpc layer distinguish the LFSCK engine from other threads. But within the current server-side API and stack framework, it is difficult to do that without some very ugly hack.&lt;/p&gt;</comment>
                            <comment id="123300" author="adilger" created="Wed, 5 Aug 2015 08:23:16 +0000"  >&lt;p&gt;Nasf, I don&apos;t think we can require users to do things &quot;in the right order&quot; for them to work (i.e. to deactivate the OST/MDT manually before running &quot;lctl lfsck_stop&quot;) if the OST is down.  It definitely seems preferable to allow lfsck_stop to work properly regardless of the connection state.&lt;/p&gt;

&lt;p&gt;Would it be possible to allow the threads to be woken up by SIGINT but have them return -EINTR or -EAGAIN to the callers, letting them decide whether to retry in that case? I agree it isn&apos;t good to actually kill the ptlrpc threads. Maybe &lt;tt&gt;ptlrpc_set_wait()&lt;/tt&gt; could be interruptible and cause ptlrpcd to abort those RPCs? It seems that something like this is already close to possible.&lt;/p&gt;</comment>
                            <comment id="123330" author="yong.fan" created="Wed, 5 Aug 2015 14:19:19 +0000"  >&lt;p&gt;It is NOT important whether the OST/MDT is deactivated manually before or after the &quot;lctl lfsck_stop&quot;, so it does not involve &quot;the right order&quot;. Since the ptlrpcd thread can handle the deactivate event, is it still necessary to introduce new SIGINT handlers?&lt;/p&gt;</comment>
                            <comment id="132491" author="gerrit" created="Tue, 3 Nov 2015 15:59:06 +0000"  >&lt;p&gt;Fan Yong (fan.yong@intel.com) uploaded a new patch: &lt;a href=&quot;http://review.whamcloud.com/17032&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/17032&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-6684&quot; title=&quot;lctl lfsck_stop hangs&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-6684&quot;&gt;&lt;del&gt;LU-6684&lt;/del&gt;&lt;/a&gt; lfsck: stop lfsck even if some servers offline&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 51f3f69fb300c5f65cbed46a99ec8307cdc9a4f4&lt;/p&gt;</comment>
                            <comment id="134705" author="maximus" created="Mon, 30 Nov 2015 11:01:53 +0000"  >&lt;p&gt;Andreas and Fan,&lt;br/&gt;
What will happen in the dry-run mode of OI scrub if MDS recovery happens, or if an MDT/OST goes down and is reconnecting? Attaching log file 15.lctl.tgz for reference.&lt;/p&gt;
&lt;ul&gt;
	&lt;li&gt;Here the MDS is going into recovery while the OI scrub operation is underway.&lt;/li&gt;
	&lt;li&gt;The lfsck namespace assistant stage2 is restarted and the post operation is done.&lt;/li&gt;
	&lt;li&gt;The test expects the dry-run to complete in 6 sec, but due to failover and the MDS undergoing recovery, it is taking more time (&amp;gt; 6 sec).&lt;/li&gt;
&lt;/ul&gt;
</comment>
                            <comment id="135644" author="yong.fan" created="Wed, 9 Dec 2015 12:33:53 +0000"  >&lt;p&gt;There are several cases:&lt;/p&gt;

&lt;p&gt;1) The LFSCK/OI scrub is running on the MDS that is to be remounted.&lt;/p&gt;

&lt;p&gt;1.1) If the MDT is umounted while the LFSCK/OI scrub is running in the background, then the LFSCK/OI scrub status will be marked as paused. When the MDT is remounted, after the recovery is done, the paused LFSCK/OI scrub will be resumed from the latest checkpoint, and its status will be set to the one it had before being paused.&lt;/p&gt;

&lt;p&gt;1.2) If the MDT crashed while the LFSCK/OI scrub was running in the background, then there was no time for the LFSCK/OI scrub to change its status. When the MDT is remounted, its status will be marked as crashed, and after the recovery is done, the crashed LFSCK/OI scrub will be resumed from the latest checkpoint, with its status set to the one it had before the crash.&lt;/p&gt;

&lt;p&gt;2) Assume the LFSCK/OI scrub is running on one MDT (MDT_a), and another related server (MDT_b/OST_c) is to be remounted.&lt;/p&gt;

&lt;p&gt;2.1) If the LFSCK on MDT_a needs to talk with MDT_b/OST_c (which is umounted/crashed) for verification, then the LFSCK on MDT_a will get the related connection failure, so it knows that one of the peer servers has left the LFSCK. The LFSCK on MDT_a will then go ahead and verify the rest of the system, neither waiting forever nor failing out, unless you specified &quot;-e abort&quot;. So the LFSCK on MDT_a can finish eventually, and its status will be &apos;partial&apos; if no other failure happened.&lt;/p&gt;

&lt;p&gt;2.2) If we want to stop the LFSCK on MDT_a, then MDT_a needs to notify the related peers MDT_b/OST_c to stop the LFSCK as well. If it finds that the peer server MDT_b/OST_c is already offline, then the LFSCK on MDT_a will go ahead with the stop process.&lt;/p&gt;

&lt;p&gt;In this ticket, we hit trouble in case 2.2). Because the LFSCK did not detect that OST_c was offline, the lfsck_stop was blocked by the reconnection to OST_c.&lt;/p&gt;</comment>
                            <comment id="138880" author="gerrit" created="Thu, 14 Jan 2016 03:59:27 +0000"  >&lt;p&gt;Oleg Drokin (oleg.drokin@intel.com) merged in patch &lt;a href=&quot;http://review.whamcloud.com/17032/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/17032/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-6684&quot; title=&quot;lctl lfsck_stop hangs&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-6684&quot;&gt;&lt;del&gt;LU-6684&lt;/del&gt;&lt;/a&gt; lfsck: stop lfsck even if some servers offline&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: afcf3026c6ad203b9882eaeac76326357f26fe71&lt;/p&gt;</comment>
                            <comment id="138916" author="yong.fan" created="Thu, 14 Jan 2016 15:22:31 +0000"  >&lt;p&gt;The patch has been landed to master.&lt;/p&gt;</comment>
                            <comment id="139050" author="yujian" created="Fri, 15 Jan 2016 17:49:28 +0000"  >&lt;p&gt;sanity-lfsck test 32 still hung on master branch:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;stop LFSCK
CMD: onyx-57vm7 /usr/sbin/lctl lfsck_stop -M lustre-MDT0000
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/e45d9b64-bbac-11e5-acbb-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/e45d9b64-bbac-11e5-acbb-5254006e85c2&lt;/a&gt;&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/34b63ba8-bb61-11e5-acbb-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/34b63ba8-bb61-11e5-acbb-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="139167" author="adilger" created="Mon, 18 Jan 2016 08:39:57 +0000"  >&lt;p&gt;And I verified that these two failures are on commits that include the fix that was recently landed here. &lt;/p&gt;</comment>
                            <comment id="139251" author="jamesanunez" created="Tue, 19 Jan 2016 16:37:16 +0000"  >&lt;p&gt;More failures on master, all with the previous patch for this ticket landed:&lt;br/&gt;
2016-01-15 15:29:21 - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/48126330-bbce-11e5-8506-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/48126330-bbce-11e5-8506-5254006e85c2&lt;/a&gt;&lt;br/&gt;
2016-01-15 20:20:20 - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/7ec04c5e-bbfa-11e5-acbb-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/7ec04c5e-bbfa-11e5-acbb-5254006e85c2&lt;/a&gt;&lt;br/&gt;
2016-01-16 00:40:11 - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/4988556c-bc05-11e5-8f65-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/4988556c-bc05-11e5-8f65-5254006e85c2&lt;/a&gt;&lt;br/&gt;
2016-01-18 22:08:02 - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/3a54dfd8-be63-11e5-92e8-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/3a54dfd8-be63-11e5-92e8-5254006e85c2&lt;/a&gt;&lt;br/&gt;
2016-01-18 22:59:29 - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/642d055a-be69-11e5-92e8-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/642d055a-be69-11e5-92e8-5254006e85c2&lt;/a&gt;&lt;br/&gt;
2016-01-18 23:21:01 - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/c75e157e-be6e-11e5-b113-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/c75e157e-be6e-11e5-b113-5254006e85c2&lt;/a&gt;&lt;br/&gt;
2016-01-19 07:37:19 - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/325db7ae-beb4-11e5-8c8a-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/325db7ae-beb4-11e5-8c8a-5254006e85c2&lt;/a&gt;&lt;br/&gt;
2016-01-19 12:10:06 - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/144d9d36-bed9-11e5-ad7e-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/144d9d36-bed9-11e5-ad7e-5254006e85c2&lt;/a&gt;&lt;br/&gt;
2016-01-19 22:11:45 - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/a2f0fede-bf2e-11e5-a659-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/a2f0fede-bf2e-11e5-a659-5254006e85c2&lt;/a&gt;&lt;br/&gt;
2016-01-19 22:26:33 - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/dc0ed974-bf2f-11e5-8f04-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/dc0ed974-bf2f-11e5-8f04-5254006e85c2&lt;/a&gt;&lt;br/&gt;
2016-01-19 23:59:17 - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/01d6b960-bf3f-11e5-8f04-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/01d6b960-bf3f-11e5-8f04-5254006e85c2&lt;/a&gt;&lt;br/&gt;
2016-01-21 11:12:25 - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/cd343b46-c061-11e5-a8e5-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/cd343b46-c061-11e5-a8e5-5254006e85c2&lt;/a&gt;&lt;br/&gt;
2016-01-21 13:03:12 - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/c0b04f0e-c070-11e5-956d-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/c0b04f0e-c070-11e5-956d-5254006e85c2&lt;/a&gt;&lt;br/&gt;
2016-01-21 14:41:59 - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/e4cffce0-c07f-11e5-a8e5-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/e4cffce0-c07f-11e5-a8e5-5254006e85c2&lt;/a&gt;&lt;br/&gt;
2016-01-21 21:40:44 - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/85d45ece-c0bc-11e5-9620-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/85d45ece-c0bc-11e5-9620-5254006e85c2&lt;/a&gt;&lt;br/&gt;
2016-01-22 03:45:40 - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/abf22bb0-c0ec-11e5-8d88-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/abf22bb0-c0ec-11e5-8d88-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="139490" author="gerrit" created="Wed, 20 Jan 2016 20:08:11 +0000"  >&lt;p&gt;James Nunez (james.a.nunez@intel.com) uploaded a new patch: &lt;a href=&quot;http://review.whamcloud.com/18059&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/18059&lt;/a&gt;&lt;br/&gt;
Subject: Revert &quot;&lt;a href=&quot;https://jira.whamcloud.com/browse/LU-6684&quot; title=&quot;lctl lfsck_stop hangs&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-6684&quot;&gt;&lt;del&gt;LU-6684&lt;/del&gt;&lt;/a&gt; lfsck: stop lfsck even if some servers offline&quot;&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 2505fd07b29ebfddcd29f16954908f6fe4670276&lt;/p&gt;</comment>
                            <comment id="139597" author="gerrit" created="Thu, 21 Jan 2016 16:46:57 +0000"  >&lt;p&gt;Fan Yong (fan.yong@intel.com) uploaded a new patch: &lt;a href=&quot;http://review.whamcloud.com/18082&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/18082&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-6684&quot; title=&quot;lctl lfsck_stop hangs&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-6684&quot;&gt;&lt;del&gt;LU-6684&lt;/del&gt;&lt;/a&gt; lfsck: set the lfsck notify as interruptable&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 68c078328be253735658fcf43fa98afff936ec6c&lt;/p&gt;</comment>
                            <comment id="139734" author="bogl" created="Fri, 22 Jan 2016 14:44:25 +0000"  >&lt;p&gt;another on master:&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/85d45ece-c0bc-11e5-9620-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/85d45ece-c0bc-11e5-9620-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="140205" author="simmonsja" created="Wed, 27 Jan 2016 15:26:50 +0000"  >&lt;p&gt;This is also delaying the landing of several patches.&lt;/p&gt;</comment>
                            <comment id="140355" author="bogl" created="Thu, 28 Jan 2016 15:49:39 +0000"  >&lt;p&gt;another on master:&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/150c07e2-c575-11e5-825e-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/150c07e2-c575-11e5-825e-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="140620" author="yujian" created="Sun, 31 Jan 2016 20:58:05 +0000"  >&lt;p&gt;This is blocking patch review testing on master branch:&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/a29caebe-c709-11e5-9b6d-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/a29caebe-c709-11e5-9b6d-5254006e85c2&lt;/a&gt;&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/fbfee2be-c70f-11e5-a037-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/fbfee2be-c70f-11e5-a037-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="140737" author="gerrit" created="Tue, 2 Feb 2016 04:30:42 +0000"  >&lt;p&gt;Oleg Drokin (oleg.drokin@intel.com) merged in patch &lt;a href=&quot;http://review.whamcloud.com/18082/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/18082/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-6684&quot; title=&quot;lctl lfsck_stop hangs&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-6684&quot;&gt;&lt;del&gt;LU-6684&lt;/del&gt;&lt;/a&gt; lfsck: set the lfsck notify as interruptable&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: 069a9cf551c2e985ea254a1c570b22ed1d72d914&lt;/p&gt;</comment>
                            <comment id="140745" author="yong.fan" created="Tue, 2 Feb 2016 04:38:14 +0000"  >&lt;p&gt;The patch has been landed to master.&lt;/p&gt;</comment>
                            <comment id="141018" author="standan" created="Wed, 3 Feb 2016 17:48:47 +0000"  >&lt;p&gt;Another instance found for tag 2.7.66 for Full - EL6.7 Server/EL6.7 Client&lt;br/&gt;
On master, build# 3314&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/35490a0c-ca6e-11e5-9215-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/35490a0c-ca6e-11e5-9215-5254006e85c2&lt;/a&gt;&lt;br/&gt;
Date : 02/02/2016 Time: 9:20 am MST&lt;/p&gt;</comment>
                            <comment id="141103" author="yong.fan" created="Thu, 4 Feb 2016 02:37:51 +0000"  >&lt;blockquote&gt;
&lt;p&gt;Another instance found for tag 2.7.66 for Full - EL6.7 Server/EL6.7 Client&lt;br/&gt;
On master, build# 3314&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/35490a0c-ca6e-11e5-9215-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/35490a0c-ca6e-11e5-9215-5254006e85c2&lt;/a&gt;&lt;br/&gt;
Date : 02/02/2016 Time: 9:20 am MST&lt;/p&gt;&lt;/blockquote&gt;

&lt;p&gt;Patch 18082 was landed just after the new tag 2.7.66; please test the latest master.&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10010">
                    <name>Duplicate</name>
                                                                <inwardlinks description="is duplicated by">
                                        <issuelink>
            <issuekey id="34098">LU-7662</issuekey>
        </issuelink>
                            </inwardlinks>
                                    </issuelinktype>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                            <outwardlinks description="is related to ">
                                                        </outwardlinks>
                                                                <inwardlinks description="is related to">
                                        <issuelink>
            <issuekey id="49541">LU-10321</issuekey>
        </issuelink>
                            </inwardlinks>
                                    </issuelinktype>
                    </issuelinks>
                <attachments>
                            <attachment id="19753" name="15.lctl.tgz" size="646613" author="maximus" created="Mon, 30 Nov 2015 11:00:37 +0000"/>
                            <attachment id="18057" name="lustre.dmesg.bz2" size="37599" author="ferner" created="Thu, 4 Jun 2015 11:09:17 +0000"/>
                            <attachment id="18058" name="lustre.log.bz2" size="1444920" author="ferner" created="Thu, 4 Jun 2015 11:09:17 +0000"/>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzxeuf:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>