<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 01:57:54 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92">
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-6173] CPU stalled with obd_zombid running</title>
                <link>https://jira.whamcloud.com/browse/LU-6173</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;Yesterday we experienced a network problem. Consequently, a number of clients stalled; at least four hung in this situation. We captured a vmcore on one of the systems.&lt;/p&gt;

&lt;p&gt;Console logs showed that one of the CPUs was detected to stall:&lt;br/&gt;
&quot;INFO: rcu_sched_state detected stall on CPU 9.&quot;&lt;/p&gt;

&lt;p&gt;All CPUs on r305i7n2 except CPU 9 were running the migration process, and&lt;br/&gt;
the CPU flagged by rcu_sched_state was running obd_zombid.&lt;br/&gt;
The console logs of the other three systems confirmed that their stalled CPUs were&lt;br/&gt;
also running obd_zombid, but without a vmcore I cannot say for sure that their&lt;br/&gt;
other CPUs were running &apos;migration&apos; as on r305i7n2.&lt;/p&gt;

&lt;p&gt;The stack trace is:&lt;/p&gt;

&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;PID: 5070   TASK: ffff88046f086300  CPU: 9   COMMAND: &quot;obd_zombid&quot;
 #0 [ffff88087fc27e40] crash_nmi_callback at ffffffff810245af
 #1 [ffff88087fc27e50] notifier_call_chain at ffffffff81475847
 #2 [ffff88087fc27e80] __atomic_notifier_call_chain at ffffffff8147588d
 #3 [ffff88087fc27e90] notify_die at ffffffff814758dd
 #4 [ffff88087fc27ec0] default_do_nmi at ffffffff81472d37
 #5 [ffff88087fc27ee0] do_nmi at ffffffff81472f68
 #6 [ffff88087fc27ef0] restart_nmi at ffffffff814724b1
    [exception RIP: native_halt+1]
    RIP: ffffffff810300b1  RSP: ffff88087fc23de0  RFLAGS: 00000082
    RAX: 0000000000000000  RBX: 0000000000000000  RCX: 000000000000080f
    RDX: 0000000000000000  RSI: 00000000000000ff  RDI: 000000000000080f
    RBP: ffff88046d96fd78   R8: 0000000000000150   R9: ffffe8ffffc20738
    R10: 0000000000000006  R11: ffffffff8102b430  R12: 0000000000000000
    R13: 0000000000000006  R14: 0000000000000006  R15: 00000000fffffffb
    ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
--- &amp;lt;NMI exception stack&amp;gt; ---
 #7 [ffff88087fc23de0] native_halt at ffffffff810300b1
 #8 [ffff88087fc23de0] halt_current_cpu at ffffffff81024959
 #9 [ffff88087fc23df0] lkdb_main_loop at ffffffff812548ec
#10 [ffff88087fc23ef0] kdba_main_loop at ffffffff8139bef2
#11 [ffff88087fc23f20] kdb at ffffffff8125199f
#12 [ffff88087fc23f80] kdb_ipi at ffffffff8124ea07
#13 [ffff88087fc23f90] smp_kdb_interrupt at ffffffff8139b656
#14 [ffff88087fc23fb0] kdb_interrupt at ffffffff8147aca3
--- &amp;lt;IRQ stack&amp;gt; ---
#15 [ffff88046d96fd78] kdb_interrupt at ffffffff8147aca3
    [exception RIP: _raw_spin_lock+24]
    RIP: ffffffff81471a88  RSP: ffff88046d96fe28  RFLAGS: 00000206
    RAX: 0000000000001700  RBX: ffff880867d28810  RCX: ffff880856c3be00
    RDX: 0000000000008000  RSI: ffff880856c3be00  RDI: ffff880430b100f8
    RBP: ffff880864634078   R8: 0000000000000002   R9: 0000000000000000
    R10: 0000000010000008  R11: 0000000000000000  R12: ffffffff8147ac9e
    R13: ffffffff811458be  R14: ffff880867d28810  R15: 0000000000000206
    ORIG_RAX: ffffffffffffff01  CS: 0010  SS: 0018
#16 [ffff88046d96fe28] osc_cleanup at ffffffffa0a48829 [osc]
#17 [ffff88046d96fe38] class_decref at ffffffffa076eed4 [obdclass]
#18 [ffff88046d96fea8] class_export_destroy at ffffffffa074c1de [obdclass]
#19 [ffff88046d96fec8] obd_zombie_impexp_cull at ffffffffa074c61d [obdclass]
#20 [ffff88046d96fee8] obd_zombie_impexp_thread at ffffffffa074c7bd [obdclass]
#21 [ffff88046d96ff48] kernel_thread_helper at ffffffff8147aae4
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</description>
                <environment>Git repo can be found at &lt;a href=&quot;https://github.com/jlan/lustre-nas&quot;&gt;https://github.com/jlan/lustre-nas&lt;/a&gt;&lt;br/&gt;
Server: centos 6.4 2.6.32-358.23.2.el6, lustre 2.4.3-12nasS&lt;br/&gt;
Client: sles11sp3 3.0.101-0.31.1, lustre 2.4.3-11nasC</environment>
        <key id="28442">LU-6173</key>
            <summary>CPU stalled with obd_zombid running</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="3" iconUrl="https://jira.whamcloud.com/images/icons/priorities/major.svg">Major</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="emoly.liu">Emoly Liu</assignee>
                                    <reporter username="jaylan">Jay Lan</reporter>
                        <labels>
                    </labels>
                <created>Wed, 28 Jan 2015 23:52:06 +0000</created>
                <updated>Thu, 14 Jun 2018 21:41:37 +0000</updated>
                            <resolved>Mon, 25 May 2015 22:41:49 +0000</resolved>
                                    <version>Lustre 2.7.0</version>
                    <version>Lustre 2.4.3</version>
                    <version>Lustre 2.5.3</version>
                                    <fixVersion>Lustre 2.8.0</fixVersion>
                                        <due></due>
                            <votes>0</votes>
                                    <watches>7</watches>
                                                                            <comments>
                            <comment id="105070" author="pjones" created="Thu, 29 Jan 2015 07:33:26 +0000"  >&lt;p&gt;Emoly&lt;/p&gt;

&lt;p&gt;Could you please advise?&lt;/p&gt;

&lt;p&gt;Peter&lt;/p&gt;</comment>
                            <comment id="105482" author="emoly.liu" created="Tue, 3 Feb 2015 09:03:41 +0000"  >&lt;p&gt;Jay, could you please upload the full vmcore file for further investigation? Thanks.&lt;/p&gt;</comment>
                            <comment id="105483" author="emoly.liu" created="Tue, 3 Feb 2015 09:16:48 +0000"  >&lt;p&gt;BTW, do you have the dmesg log? I want to know what happened during the network problem.&lt;/p&gt;</comment>
                            <comment id="105571" author="jaylan" created="Tue, 3 Feb 2015 19:10:55 +0000"  >&lt;p&gt;File r305i7n2-20150128.bz2 contains the console log of the system when the system getting into trouble. Since stack traces of all CPU&apos;s were printed every 10 minutes, the network errors messages were flushed out from the demsg buffer, so I attached the console log here instead.&lt;/p&gt;

&lt;p&gt;The user ID in the log has been replaced with &apos;xxx&apos;.&lt;/p&gt;

&lt;p&gt;The vmcore can only be seen by US citizens. I can send crash analysis information to you if that is OK. Otherwise, I will consult with others on how to send you an encrypted vmcore. Please advise.&lt;/p&gt;</comment>
                            <comment id="105764" author="emoly.liu" created="Thu, 5 Feb 2015 02:11:22 +0000"  >&lt;p&gt;Jay, I am not a US citizen. You can send the crash analysis information to me first. If necessary, I will ask for my other US citizen colleague to help.&lt;/p&gt;</comment>
                            <comment id="105766" author="jaylan" created="Thu, 5 Feb 2015 02:48:04 +0000"  >&lt;p&gt;This tarball contains output of &apos;bt -a&apos;, &apos;ps -a&apos;, &apos;kmem -i&apos; and &apos;kmem -s&apos;.&lt;/p&gt;

&lt;p&gt;Let me know if you want me to provide the output of other crash commands.&lt;/p&gt;</comment>
                            <comment id="105948" author="green" created="Thu, 5 Feb 2015 22:44:56 +0000"  >&lt;p&gt;Hm, this really reminds me of &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-2543&quot; title=&quot;obd_zombid oops&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-2543&quot;&gt;&lt;del&gt;LU-2543&lt;/del&gt;&lt;/a&gt; that I was hitting in pre 2.4 times, but we had a pretty effective patch for that that you do have in your tree.&lt;br/&gt;
I wonder if there was another corner case left that was closed later.&lt;/p&gt;</comment>
                            <comment id="105961" author="jaylan" created="Thu, 5 Feb 2015 23:51:07 +0000"  >&lt;p&gt;Slightly different. &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-2543&quot; title=&quot;obd_zombid oops&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-2543&quot;&gt;&lt;del&gt;LU-2543&lt;/del&gt;&lt;/a&gt; resulted in a kernel panic while we experienced system hang in this case.&lt;/p&gt;</comment>
                            <comment id="105969" author="green" created="Fri, 6 Feb 2015 00:25:17 +0000"  >&lt;p&gt;Well, you see - I hit kernel panic in &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-2543&quot; title=&quot;obd_zombid oops&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-2543&quot;&gt;&lt;del&gt;LU-2543&lt;/del&gt;&lt;/a&gt; because I was running with a special kernel with a lot of extra debug turned on. So every time there&apos;s access to freed memory - it&apos;l crash, or if there&apos;s invalid spinlock magic - It&apos;ll crash and so on.&lt;br/&gt;
Those options are very heavy weight, so nobody runs with them in production.&lt;br/&gt;
So in your kernel if you encounter such a spinlock with random garbage in memory (there is not even a magic in the non-debug code so that it could be faster) - it&apos;ll just take this garbage at face value and will wait until the other holed (imagined) will go away and since there&apos;s nobody actually there holding tha tspinlock - the code will wait there forever - this is what we see in your backtrace.&lt;/p&gt;
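
&lt;p&gt;To make the failure mode concrete, here is a minimal kernel-style sketch (not actual Lustre code; the struct and function names are invented):&lt;/p&gt;
&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;#include &amp;lt;linux/slab.h&amp;gt;
#include &amp;lt;linux/spinlock.h&amp;gt;

struct victim {
        spinlock_t v_lock;
};

static void use_after_free(struct victim *v)
{
        kfree(v);                    /* memory may be reused; v_lock is now garbage */
        spin_lock(&amp;amp;v-&amp;gt;v_lock);      /* a debug kernel would catch the bad magic and
                                      * panic; the production fast path just spins on
                                      * whatever bits it finds - potentially forever */
}
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;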

&lt;p&gt;And that&apos;s why I think it&apos;s something very similar.&lt;/p&gt;</comment>
                            <comment id="105991" author="emoly.liu" created="Fri, 6 Feb 2015 03:20:39 +0000"  >&lt;p&gt;I ever suspected &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-2543&quot; title=&quot;obd_zombid oops&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-2543&quot;&gt;&lt;del&gt;LU-2543&lt;/del&gt;&lt;/a&gt; too, but its patch has been included. This issue needs more investigation.&lt;/p&gt;</comment>
                            <comment id="105994" author="green" created="Fri, 6 Feb 2015 04:24:43 +0000"  >&lt;p&gt;Yes, it does need more investigation, no question about that.&lt;br/&gt;
It does look like this is another case of use after free to me.&lt;br/&gt;
I need lustre kernel modules with debug info from your build to get some more info, please.&lt;/p&gt;</comment>
                            <comment id="106069" author="jaylan" created="Fri, 6 Feb 2015 18:32:31 +0000"  >&lt;p&gt;Hi Oleg,&lt;/p&gt;

&lt;p&gt;The Lustre client debuginfo rpm has been uploaded to ftp.whamcloud.com. I appended &quot;.&lt;a href=&quot;https://jira.whamcloud.com/browse/LU-6173&quot; title=&quot;CPU stalled with obd_zombid running&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-6173&quot;&gt;&lt;del&gt;LU-6173&lt;/del&gt;&lt;/a&gt;&quot; to the end of the rpm file name.&lt;/p&gt;</comment>
                            <comment id="106403" author="green" created="Tue, 10 Feb 2015 04:28:47 +0000"  >&lt;p&gt;So, poking around in the crashdump, it looks like it is indeed something very similar to &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-2543&quot; title=&quot;obd_zombid oops&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-2543&quot;&gt;&lt;del&gt;LU-2543&lt;/del&gt;&lt;/a&gt;.&lt;br/&gt;
What we are seeing in the log is that the filesystem with name nbp6 is being unmounted while there are some communication problems to its OSTs (probably network hiccup mentioned).&lt;br/&gt;
By the time the crash happened the umount of nbp6 was already unmounted and the sbi structure freed, but OSC cleanups are still ongoing and those do access content of the sbi struct (ll_cache member of it). Since it contains garbage, attempt to geta spinlock fails.&lt;br/&gt;
This is evident since the only two left lustre filesystems mounted are nbp5 and nbp9.&lt;/p&gt;
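
&lt;p&gt;In other words, the ordering looks roughly like this (a simplified sketch assembled from the backtrace above; the annotations are mine):&lt;/p&gt;
&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;/* 1. umount path: tears down the superblock and frees sbi,
 *    including the ll_cache that the OSCs still point at. */
client_common_put_super(sb);    /* obd_disconnect() only starts teardown;
                                 * sbi (and sbi-&amp;gt;ll_cache) freed here */

/* 2. obd_zombid thread, later (frames #16-#20 of the backtrace):
 *    obd_zombie_impexp_cull() -&amp;gt; class_export_destroy() -&amp;gt;
 *    class_decref() -&amp;gt; osc_cleanup(), which dereferences the
 *    already-freed ll_cache and spins on a garbage spinlock. */
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;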

&lt;p&gt;So, examining the disconnect code, it looks like client_common_put_super assumes that the mere call to obd_disconnect(sbi-&amp;gt;ll_dt_exp) just marks the import disconnected; but if there are any requests in flight (highly likely if you have a broken connection and requests take seconds to time out), the actual final import put will not happen until that last request is finished (every request holds an import reference), and only then will the final class_import_put() happen, which calls obd_zombie_import_add(), increasing the zombie task list count and stalling obd_zombie_barrier().&lt;/p&gt;
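
&lt;p&gt;Roughly, the lifetime just described looks like this (a sketch, not the exact call chain):&lt;/p&gt;
&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;/* at allocation: every request pins its import */
request-&amp;gt;rq_import = class_import_get(imp);

/* ...the connection is broken; the request sits until it times out... */

/* at completion: the pin is dropped */
class_import_put(request-&amp;gt;rq_import);
/* only if this was the final reference does
 * obd_zombie_import_add() queue the import for obd_zombid -
 * long after the umount path has already run obd_zombie_barrier() */
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;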

&lt;p&gt;So the &quot;fix&quot; for &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-2543&quot; title=&quot;obd_zombid oops&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-2543&quot;&gt;&lt;del&gt;LU-2543&lt;/del&gt;&lt;/a&gt; really failed to consider this scenario of in-flight requests for all imports.&lt;br/&gt;
I see that ll_cache itself has a refcounter inside it, and perhaps that might be a much better proxy to determine when it is safe to free the sbi struct. Niu?&lt;/p&gt;
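
&lt;p&gt;Something along these lines, perhaps (a hypothetical sketch of the refcount idea; the names are illustrative, not the real Lustre fields):&lt;/p&gt;
&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;struct client_cache {
        atomic_t cc_users;      /* the sbi plus every OSC using the cache */
        /* ... */
};

static void client_cache_put(struct client_cache *cache)
{
        /* whoever drops the last reference frees the cache, so the
         * sbi can go away while OSC cleanups are still in flight */
        if (atomic_dec_and_test(&amp;amp;cache-&amp;gt;cc_users))
                OBD_FREE_PTR(cache);
}
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;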

&lt;p&gt;Actually, I guess that would lead to the unmount hanging until all requests finish processing, which might not be ideal either in the face of a broken connection, so potentially the sbi freeing could be made asynchronous too.&lt;br/&gt;
This bug exists in master too, btw.&lt;/p&gt;</comment>
                            <comment id="106417" author="niu" created="Tue, 10 Feb 2015 08:23:01 +0000"  >&lt;blockquote&gt;
&lt;p&gt;So, examining the disconnect code, it looks like client_common_put_super assumes that the mere call to obd_disconnect(sbi-&amp;gt;ll_dt_exp) just marks the import disconnected; but if there are any requests in flight (highly likely if you have a broken connection and requests take seconds to time out), the actual final import put will not happen until that last request is finished (every request holds an import reference), and only then will the final class_import_put() happen, which calls obd_zombie_import_add(), increasing the zombie task list count and stalling obd_zombie_barrier().&lt;br/&gt;
So the &quot;fix&quot; for &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-2543&quot; title=&quot;obd_zombid oops&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-2543&quot;&gt;&lt;del&gt;LU-2543&lt;/del&gt;&lt;/a&gt; really failed to consider this scenario of in-flight requests for all imports.&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;Will an in-flight RPC hold the OSC export refcount as well? I was thinking that obd_disconnect() in client_common_put_super() would put the last refcount of the OSC export and make the umount wait in obd_zombie_barrier().&lt;/p&gt;</comment>
                            <comment id="106450" author="green" created="Tue, 10 Feb 2015 16:05:33 +0000"  >&lt;p&gt;Niu: It&apos;s right in the __ptlrpc_request_alloc():&lt;/p&gt;
&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;                request-&amp;gt;rq_import = class_import_get(imp);
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;and the import stays around until all requests are drained, which might take a while if the requests are stuck on the network.&lt;/p&gt;</comment>
                            <comment id="106604" author="gerrit" created="Wed, 11 Feb 2015 09:48:48 +0000"  >&lt;p&gt;Emoly Liu (emoly.liu@intel.com) uploaded a new patch: &lt;a href=&quot;http://review.whamcloud.com/13727&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/13727&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-6173&quot; title=&quot;CPU stalled with obd_zombid running&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-6173&quot;&gt;&lt;del&gt;LU-6173&lt;/del&gt;&lt;/a&gt; llite: allocate and free client cache asynchronously&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: b2_4&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: ae23e1e99d072c3865ca2da538705eb61fc6c7c2&lt;/p&gt;</comment>
                            <comment id="106606" author="emoly.liu" created="Wed, 11 Feb 2015 09:51:03 +0000"  >&lt;p&gt;Thanks for Niu&amp;amp;Oleg&apos;s help! I pushed a patch for b2_4 for review.&lt;/p&gt;</comment>
                            <comment id="106625" author="pjones" created="Wed, 11 Feb 2015 13:55:41 +0000"  >&lt;p&gt;Emoly&lt;/p&gt;

&lt;p&gt;Is this patch also required for master/b2_5?&lt;/p&gt;

&lt;p&gt;Peter&lt;/p&gt;</comment>
                            <comment id="106732" author="emoly.liu" created="Thu, 12 Feb 2015 00:47:48 +0000"  >&lt;p&gt;Peter, yes both master and b2_5 need the patch. I will create one for master later.&lt;/p&gt;</comment>
                            <comment id="106795" author="gerrit" created="Thu, 12 Feb 2015 14:12:56 +0000"  >&lt;p&gt;Emoly Liu (emoly.liu@intel.com) uploaded a new patch: &lt;a href=&quot;http://review.whamcloud.com/13746&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/13746&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-6173&quot; title=&quot;CPU stalled with obd_zombid running&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-6173&quot;&gt;&lt;del&gt;LU-6173&lt;/del&gt;&lt;/a&gt; llite: allocate and free client cache asynchronously&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 570a48915a6935b8d180dafded4befaa2447b585&lt;/p&gt;</comment>
                            <comment id="108586" author="gerrit" created="Tue, 3 Mar 2015 17:20:46 +0000"  >&lt;p&gt;Oleg Drokin (oleg.drokin@intel.com) merged in patch &lt;a href=&quot;http://review.whamcloud.com/13746/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/13746/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-6173&quot; title=&quot;CPU stalled with obd_zombid running&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-6173&quot;&gt;&lt;del&gt;LU-6173&lt;/del&gt;&lt;/a&gt; llite: allocate and free client cache asynchronously&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: 302c5bfebe61e988dbd27063becc4ef90befc6df&lt;/p&gt;</comment>
                            <comment id="116349" author="pjones" created="Mon, 25 May 2015 22:41:49 +0000"  >&lt;p&gt;Landed for 2.8&lt;/p&gt;</comment>
                            <comment id="116466" author="jaylan" created="Tue, 26 May 2015 23:59:38 +0000"  >&lt;p&gt;Could you provide a 2.5 back port? Thanks!&lt;/p&gt;</comment>
                            <comment id="116468" author="pjones" created="Wed, 27 May 2015 00:18:09 +0000"  >&lt;p&gt;Yes this is being worked on&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                            <outwardlinks description="is related to ">
                                                        </outwardlinks>
                                                                <inwardlinks description="is related to">
                                                        </inwardlinks>
                                    </issuelinktype>
                    </issuelinks>
                <attachments>
                            <attachment id="16863" name="LU6173.crash-analysis.tgz" size="14431" author="jaylan" created="Thu, 5 Feb 2015 02:48:04 +0000"/>
                            <attachment id="16838" name="r305i7n2-20150128.bz2" size="320368" author="jaylan" created="Tue, 3 Feb 2015 19:10:55 +0000"/>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzx55b:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>17274</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                </customfields>
    </item>
</channel>
</rss>