<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 01:54:23 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-5771] Crashed OSSs when unmounting OST without cleaning up orphan inodes properly</title>
                <link>https://jira.whamcloud.com/browse/LU-5771</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;During testing, we hit something like the following:&lt;/p&gt;

&lt;p&gt;&amp;lt;2&amp;gt;LDISKFS-fs error (device dm-2): __ldiskfs_ext_check_block: bad header/extent in inode #659: invalid magic - magic e000, entries 456, max 0(0), depth 51424(0)&lt;br/&gt;
&amp;lt;3&amp;gt;Aborting journal on device dm-2-8.&lt;br/&gt;
&amp;lt;2&amp;gt;LDISKFS-fs error (device dm-2) in ldiskfs_free_blocks: Journal has aborted&lt;br/&gt;
&amp;lt;2&amp;gt;LDISKFS-fs error (device dm-2) in ldiskfs_free_blocks: Journal has aborted&lt;br/&gt;
&amp;lt;2&amp;gt;LDISKFS-fs (dm-2): Remounting filesystem read-only&lt;br/&gt;
&amp;lt;2&amp;gt;LDISKFS-fs error (device dm-2) in ldiskfs_free_blocks: Journal has aborted&lt;br/&gt;
&amp;lt;2&amp;gt;LDISKFS-fs error (device dm-2) in ldiskfs_ext_remove_space: Journal has aborted&lt;br/&gt;
&amp;lt;2&amp;gt;LDISKFS-fs error (device dm-2) in ldiskfs_reserve_inode_write: Journal has aborted&lt;br/&gt;
&amp;lt;2&amp;gt;LDISKFS-fs error (device dm-2) in ldiskfs_ext_truncate: Journal has aborted&lt;br/&gt;
&amp;lt;4&amp;gt;LDISKFS-fs warning (device dm-2): ldiskfs_delete_inode: couldn&apos;t extend journal (err -5)&lt;br/&gt;
&amp;lt;3&amp;gt;LDISKFS-fs (dm-2): Inode 280 (ffff8803a9ecb6d8): orphan list check failed!&lt;/p&gt;

&lt;p&gt;Something bad happened that forced the filesystem read-only, and there was still an in-memory orphan inode that had not been cleared, which caused the following problem:&lt;/p&gt;

&lt;p&gt;&amp;lt;4&amp;gt;Pid: 45622, comm: umount Not tainted 2.6.32-431.17.1.el6_lustre.2.5.18.ddn2.x86_64 #1 Dell Inc. PowerEdge R620/01W23F&lt;br/&gt;
&amp;lt;4&amp;gt;RIP: 0010:&lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffffa169081a&amp;gt;&amp;#93;&lt;/span&gt;  &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffffa169081a&amp;gt;&amp;#93;&lt;/span&gt; ldiskfs_put_super+0x33a/0x380 &lt;span class=&quot;error&quot;&gt;&amp;#91;ldiskfs&amp;#93;&lt;/span&gt;&lt;br/&gt;
&amp;lt;4&amp;gt;RSP: 0018:ffff88027ecf39f8  EFLAGS: 00010296&lt;br/&gt;
&amp;lt;4&amp;gt;RAX: 003fffffffffffd4 RBX: ffff88102359c800 RCX: 00400000000000a4&lt;br/&gt;
&amp;lt;4&amp;gt;RDX: 0000000000000000 RSI: 0000000000000046 RDI: ffffffffa16a55b8&lt;br/&gt;
&amp;lt;4&amp;gt;RBP: ffff88027ecf3a38 R08: 0000000000000000 R09: ffffffff81645da0&lt;br/&gt;
&amp;lt;4&amp;gt;R10: 0000000000000001 R11: 0000000000000000 R12: ffff88102359c000&lt;br/&gt;
&amp;lt;4&amp;gt;R13: ffff88102359c980 R14: ffff88102359c9f0 R15: 004000000000006c&lt;br/&gt;
&amp;lt;4&amp;gt;FS:  00007fe81d661740(0000) GS:ffff880061c00000(0000) knlGS:0000000000000000&lt;br/&gt;
&amp;lt;4&amp;gt;CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b&lt;br/&gt;
&amp;lt;4&amp;gt;CR2: 00007fff13c76010 CR3: 0000001d707f0000 CR4: 00000000001407f0&lt;br/&gt;
&amp;lt;4&amp;gt;DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000&lt;br/&gt;
&amp;lt;4&amp;gt;DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400&lt;br/&gt;
&amp;lt;4&amp;gt;Process umount (pid: 45622, threadinfo ffff88027ecf2000, task ffff8810258ac040)&lt;br/&gt;
&amp;lt;4&amp;gt;Stack:&lt;br/&gt;
&amp;lt;4&amp;gt; ffff880200000000 ffff88027ecf39f8 ffff88102359c000 ffff88102359c000&lt;br/&gt;
&amp;lt;4&amp;gt;&amp;lt;d&amp;gt; ffffffffa169d5a0 ffffffff81c06500 ffff88102359c000 ffff880f0371e138&lt;br/&gt;
&amp;lt;4&amp;gt;&amp;lt;d&amp;gt; ffff88027ecf3a58 ffffffff8118af0b ffff881fe54cb540 0000000000000003&lt;br/&gt;
&amp;lt;4&amp;gt;Call Trace:&lt;br/&gt;
&amp;lt;4&amp;gt; &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffff8118af0b&amp;gt;&amp;#93;&lt;/span&gt; generic_shutdown_super+0x5b/0xe0&lt;br/&gt;
&amp;lt;4&amp;gt; &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffff8118afc1&amp;gt;&amp;#93;&lt;/span&gt; kill_block_super+0x31/0x50&lt;br/&gt;
&amp;lt;4&amp;gt; &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffff8118b797&amp;gt;&amp;#93;&lt;/span&gt; deactivate_super+0x57/0x80&lt;br/&gt;
&amp;lt;4&amp;gt; &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffff811aa79f&amp;gt;&amp;#93;&lt;/span&gt; mntput_no_expire+0xbf/0x110&lt;br/&gt;
&amp;lt;4&amp;gt; &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffffa1719d99&amp;gt;&amp;#93;&lt;/span&gt; osd_umount+0x79/0x150 &lt;span class=&quot;error&quot;&gt;&amp;#91;osd_ldiskfs&amp;#93;&lt;/span&gt;&lt;br/&gt;
&amp;lt;4&amp;gt; &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffffa171e9b7&amp;gt;&amp;#93;&lt;/span&gt; osd_device_fini+0x147/0x190 &lt;span class=&quot;error&quot;&gt;&amp;#91;osd_ldiskfs&amp;#93;&lt;/span&gt;&lt;br/&gt;
&amp;lt;4&amp;gt; &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffffa103b973&amp;gt;&amp;#93;&lt;/span&gt; class_cleanup+0x573/0xd30 &lt;span class=&quot;error&quot;&gt;&amp;#91;obdclass&amp;#93;&lt;/span&gt;&lt;br/&gt;
&amp;lt;4&amp;gt; &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffffa100e366&amp;gt;&amp;#93;&lt;/span&gt; ? class_name2dev+0x56/0xe0 &lt;span class=&quot;error&quot;&gt;&amp;#91;obdclass&amp;#93;&lt;/span&gt;&lt;br/&gt;
&amp;lt;4&amp;gt; &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffffa103d69a&amp;gt;&amp;#93;&lt;/span&gt; class_process_config+0x156a/0x1ad0 &lt;span class=&quot;error&quot;&gt;&amp;#91;obdclass&amp;#93;&lt;/span&gt;&lt;br/&gt;
&amp;lt;4&amp;gt; &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffffa1035ff3&amp;gt;&amp;#93;&lt;/span&gt; ? lustre_cfg_new+0x2d3/0x6e0 &lt;span class=&quot;error&quot;&gt;&amp;#91;obdclass&amp;#93;&lt;/span&gt;&lt;br/&gt;
&amp;lt;4&amp;gt; &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffffa103dd79&amp;gt;&amp;#93;&lt;/span&gt; class_manual_cleanup+0x179/0x6f0 &lt;span class=&quot;error&quot;&gt;&amp;#91;obdclass&amp;#93;&lt;/span&gt;&lt;br/&gt;
&amp;lt;4&amp;gt; &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffffa100c83b&amp;gt;&amp;#93;&lt;/span&gt; ? class_export_put+0x10b/0x2c0 &lt;span class=&quot;error&quot;&gt;&amp;#91;obdclass&amp;#93;&lt;/span&gt;&lt;br/&gt;
&amp;lt;4&amp;gt; &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffffa1723c65&amp;gt;&amp;#93;&lt;/span&gt; osd_obd_disconnect+0x1c5/0x1d0 &lt;span class=&quot;error&quot;&gt;&amp;#91;osd_ldiskfs&amp;#93;&lt;/span&gt;&lt;br/&gt;
&amp;lt;4&amp;gt; &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffffa104031b&amp;gt;&amp;#93;&lt;/span&gt; lustre_put_lsi+0x1ab/0x11a0 &lt;span class=&quot;error&quot;&gt;&amp;#91;obdclass&amp;#93;&lt;/span&gt;&lt;br/&gt;
&amp;lt;4&amp;gt; &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffffa10488c8&amp;gt;&amp;#93;&lt;/span&gt; lustre_common_put_super+0x5d8/0xbf0 &lt;span class=&quot;error&quot;&gt;&amp;#91;obdclass&amp;#93;&lt;/span&gt;&lt;br/&gt;
&amp;lt;4&amp;gt; &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffffa1070f6d&amp;gt;&amp;#93;&lt;/span&gt; server_put_super+0x1bd/0xf60 &lt;span class=&quot;error&quot;&gt;&amp;#91;obdclass&amp;#93;&lt;/span&gt;&lt;br/&gt;
&amp;lt;4&amp;gt; &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffff8118af0b&amp;gt;&amp;#93;&lt;/span&gt; generic_shutdown_super+0x5b/0xe0&lt;br/&gt;
&amp;lt;4&amp;gt; &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffff8118aff6&amp;gt;&amp;#93;&lt;/span&gt; kill_anon_super+0x16/0x60&lt;br/&gt;
&amp;lt;4&amp;gt; &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffffa103fc26&amp;gt;&amp;#93;&lt;/span&gt; lustre_kill_super+0x36/0x60 &lt;span class=&quot;error&quot;&gt;&amp;#91;obdclass&amp;#93;&lt;/span&gt;&lt;br/&gt;
&amp;lt;4&amp;gt; &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffff8118b797&amp;gt;&amp;#93;&lt;/span&gt; deactivate_super+0x57/0x80&lt;br/&gt;
&amp;lt;4&amp;gt; &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffff811aa79f&amp;gt;&amp;#93;&lt;/span&gt; mntput_no_expire+0xbf/0x110&lt;br/&gt;
&amp;lt;4&amp;gt; &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffff811ab2eb&amp;gt;&amp;#93;&lt;/span&gt; sys_umount+0x7b/0x3a0&lt;br/&gt;
&amp;lt;4&amp;gt; &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffff8108a281&amp;gt;&amp;#93;&lt;/span&gt; ? sigprocmask+0x71/0x110&lt;br/&gt;
&amp;lt;4&amp;gt; &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffff8100b072&amp;gt;&amp;#93;&lt;/span&gt; system_call_fastpath+0x16/0x1b&lt;br/&gt;
&amp;lt;4&amp;gt;Code: 01 00 00 4d 39 fe 75 11 4c 3b b3 f0 01 00 00 0f 84 81 fe ff ff 0f 0b eb fe 49 8d 87 68 ff ff ff 49 8d 4f 38 48 c7 c7 b8 55 6a a1 &amp;lt;48&amp;gt; 8b b0 d8 01 00 00 44 8b 88 1c 01 00 00 44 0f b7 80 7e 01 00 &lt;br/&gt;
&amp;lt;1&amp;gt;RIP  &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffffa169081a&amp;gt;&amp;#93;&lt;/span&gt; ldiskfs_put_super+0x33a/0x380 &lt;span class=&quot;error&quot;&gt;&amp;#91;ldiskfs&amp;#93;&lt;/span&gt;&lt;br/&gt;
&amp;lt;4&amp;gt; RSP &amp;lt;ffff88027ecf39f8&amp;gt;&lt;/p&gt;

&lt;p&gt;This may be a use-after-free problem: from the code, the inode memory is freed, and in ext4_put_super we access it again, which may cause the problem (I am not sure about this part of the analysis).&lt;/p&gt;

&lt;p&gt;But even if the above analysis is not correct, we can still run into:&lt;/p&gt;

&lt;p&gt;J_ASSERT(list_empty(&amp;amp;sbi-&amp;gt;s_orphan))&lt;/p&gt;

&lt;p&gt;which will crash the kernel, so we need to fix this problem.&lt;/p&gt;</description>
                <environment></environment>
        <key id="27102">LU-5771</key>
            <summary>Crashed OSSs when unmounting OST without cleaning up orphan inodes properly</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="3" iconUrl="https://jira.whamcloud.com/images/icons/priorities/major.svg">Major</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="ys">Yang Sheng</assignee>
                                    <reporter username="wangshilong">Wang Shilong</reporter>
                        <labels>
                            <label>patch</label>
                    </labels>
                <created>Mon, 20 Oct 2014 15:21:13 +0000</created>
                <updated>Sun, 14 Jun 2015 13:38:18 +0000</updated>
                            <resolved>Wed, 31 Dec 2014 05:27:42 +0000</resolved>
                                    <version>Lustre 2.5.3</version>
                                    <fixVersion>Lustre 2.7.0</fixVersion>
                                        <due></due>
                            <votes>0</votes>
                                    <watches>9</watches>
                                                                            <comments>
                            <comment id="96698" author="wangshilong" created="Mon, 20 Oct 2014 15:23:01 +0000"  >&lt;p&gt;This is the patch that I tried in order to fix this problem:&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;http://review.whamcloud.com/#/c/12349/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/#/c/12349/&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="96718" author="pjones" created="Mon, 20 Oct 2014 16:29:39 +0000"  >&lt;p&gt;Yang Sheng&lt;/p&gt;

&lt;p&gt;Could you please advise on this issue and proposed patch?&lt;/p&gt;

&lt;p&gt;Thanks&lt;/p&gt;

&lt;p&gt;Peter&lt;/p&gt;</comment>
                            <comment id="96853" author="adilger" created="Tue, 21 Oct 2014 17:21:11 +0000"  >&lt;p&gt;It looks like this was fixed in the upstream kernel commit in 2.6.35:&lt;/p&gt;
&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;commit 4538821993f4486c76090dfb377c60c0a0e71ba3
Author: Theodore Ts&apos;o &amp;lt;tytso@mit.edu&amp;gt;
Date:   Thu Jul 29 15:06:10 2010 -0400

    ext4: drop inode from orphan list &lt;span class=&quot;code-keyword&quot;&gt;if&lt;/span&gt; ext4_delete_inode() fails
    
    There were some error paths in ext4_delete_inode() which was not
    dropping the inode from the orphan list.  This could lead to a BUG_ON
    on umount when the orphan list is discovered to be non-empty.
    
    Signed-off-by: &lt;span class=&quot;code-quote&quot;&gt;&quot;Theodore Ts&apos;o&quot;&lt;/span&gt; &amp;lt;tytso@mit.edu&amp;gt;

diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index a52d5af..533b607 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -221,6 +221,7 @@ void ext4_delete_inode(struct inode *inode)
                                     &lt;span class=&quot;code-quote&quot;&gt;&quot;couldn&apos;t extend journal (err %d)&quot;&lt;/span&gt;, err);
                stop_handle:
                        ext4_journal_stop(handle);
+                       ext4_orphan_del(NULL, inode);
                        &lt;span class=&quot;code-keyword&quot;&gt;goto&lt;/span&gt; no_delete;
                }
        }
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
                            <comment id="96961" author="wangshilong" created="Wed, 22 Oct 2014 02:03:50 +0000"  >&lt;p&gt;Hello Andreas Dilger,&lt;/p&gt;

&lt;p&gt;Thanks for confirming; I missed it, so I will keep the original commit message and resend my patch.&lt;/p&gt;

&lt;p&gt;Best Regards,&lt;br/&gt;
Wang Shilong&lt;/p&gt;</comment>
                            <comment id="97491" author="adilger" created="Sat, 25 Oct 2014 00:12:30 +0000"  >&lt;p&gt;This patch will also be needed for RHEL6.6.&lt;/p&gt;

&lt;p&gt;Wang Shilong, is it possible for you to submit a bug upstream to RH asking them to merge this patch into their RHEL6 kernel patches?  Please include the reference to the upstream kernel patch.&lt;/p&gt;

&lt;p&gt;If not, Yang Sheng, can you do this?&lt;/p&gt;</comment>
                            <comment id="97492" author="wangshilong" created="Sat, 25 Oct 2014 00:52:29 +0000"  >&lt;p&gt;Hello Andreas Dilger,&lt;/p&gt;

&lt;p&gt;I am glad to do this!&lt;br/&gt;
&lt;a href=&quot;https://bugzilla.redhat.com/show_bug.cgi?id=1156661&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://bugzilla.redhat.com/show_bug.cgi?id=1156661&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Best Regards,&lt;br/&gt;
Wang Shilong&lt;/p&gt;</comment>
                            <comment id="97547" author="pjones" created="Mon, 27 Oct 2014 13:11:59 +0000"  >&lt;p&gt;Thanks Wang Shilong! I have passed along to Red Hat that we are also interested in seeing this fix land.&lt;/p&gt;</comment>
                            <comment id="98031" author="wangshilong" created="Fri, 31 Oct 2014 04:27:31 +0000"  >&lt;p&gt;Hello,&lt;/p&gt;

&lt;p&gt;It seems this patch applies cleanly only to RHEL 6.5; with previous versions there are conflicts,&lt;br/&gt;
so my question is whether I need to provide a separate patch for each version?&lt;/p&gt;


&lt;p&gt;Best Regards,&lt;br/&gt;
Wang Shilong&lt;/p&gt;</comment>
                            <comment id="98046" author="wangshilong" created="Fri, 31 Oct 2014 12:35:40 +0000"  >&lt;p&gt;One more question:&lt;/p&gt;

&lt;p&gt;I noticed that the latest Lustre master does not apply cleanly on RHEL 6.4; see the following messages:&lt;/p&gt;

&lt;p&gt;&lt;span class=&quot;error&quot;&gt;&amp;#91;root@localhost linux-2.6.32-358.el6.x86_64&amp;#93;&lt;/span&gt;# quilt push -av&lt;br/&gt;
Applying patch patches/mpt-fusion-max-sge-rhel6.patch&lt;br/&gt;
patching file drivers/message/fusion/Kconfig&lt;br/&gt;
patching file drivers/message/fusion/mptbase.h&lt;br/&gt;
Hunk #1 succeeded at 166 (offset 1 line).&lt;/p&gt;

&lt;p&gt;Applying patch patches/raid5-mmp-unplug-dev-rhel6.patch&lt;br/&gt;
patching file drivers/md/raid5.c&lt;br/&gt;
Hunk #1 FAILED at 2177.&lt;br/&gt;
Hunk #2 succeeded at 4198 (offset 66 lines).&lt;br/&gt;
1 out of 2 hunks FAILED &amp;#8211; rejects in file drivers/md/raid5.c&lt;br/&gt;
Restoring drivers/md/raid5.c&lt;br/&gt;
Patch patches/raid5-mmp-unplug-dev-rhel6.patch does not apply (enforce with -f)&lt;br/&gt;
Restoring drivers/md/raid5.c&lt;/p&gt;

&lt;p&gt;So the latest Lustre did not apply its patches cleanly on RHEL 6.4, even though the series I used is&lt;br/&gt;
lustre-release/lustre/kernel_patches/series/2.6-rhel6.series&lt;/p&gt;

&lt;p&gt;So my question is: does master not guarantee that patches apply cleanly for all RHEL6 kernel versions?&lt;br/&gt;
Best regards,&lt;br/&gt;
Wang Shilong&lt;/p&gt;</comment>
                            <comment id="98050" author="simmonsja" created="Fri, 31 Oct 2014 13:35:36 +0000"  >&lt;p&gt;We should see if this fix is needed for SLES11SP3.&lt;/p&gt;</comment>
                            <comment id="100506" author="adilger" created="Tue, 2 Dec 2014 20:42:55 +0000"  >&lt;p&gt;James, this shouldn&apos;t be needed for SLES11 since that is based on at least 3.0 kernels, and the bug was fixed in the upstream kernel in 2.6.35.  Only the RHEL6 kernels are originally based on 2.6.32 (with a large number of other ext4 patches, but strangely not this one).&lt;/p&gt;</comment>
                            <comment id="101832" author="gerrit" created="Wed, 17 Dec 2014 17:48:04 +0000"  >&lt;p&gt;Oleg Drokin (oleg.drokin@intel.com) merged in patch &lt;a href=&quot;http://review.whamcloud.com/12349/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/12349/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-5771&quot; title=&quot;Crashed OSSs when unmounting OST without cleaning up orphan inodes properly&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-5771&quot;&gt;&lt;del&gt;LU-5771&lt;/del&gt;&lt;/a&gt; ldiskfs: cleanup orphan inode in error path&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: 2dc56b1132a1680d664e8093a33f5ce799865abb&lt;/p&gt;</comment>
                            <comment id="102444" author="ys" created="Wed, 31 Dec 2014 05:27:42 +0000"  >&lt;p&gt;Patch landed. Close this ticket.&lt;/p&gt;</comment>
                            <comment id="104835" author="gerrit" created="Tue, 27 Jan 2015 11:25:47 +0000"  >&lt;p&gt;Shilong Wang (wshilong@ddn.com) uploaded a new patch: &lt;a href=&quot;http://review.whamcloud.com/13533&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/13533&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-5771&quot; title=&quot;Crashed OSSs when unmounting OST without cleaning up orphan inodes properly&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-5771&quot;&gt;&lt;del&gt;LU-5771&lt;/del&gt;&lt;/a&gt; ldiskfs: cleanup orphan inode in error path&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: b2_5&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: d4c07f1ef0ce637861a0f40de4dfacde11e392af&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                            <outwardlinks description="is related to ">
                                                        </outwardlinks>
                                                        </issuelinktype>
                    </issuelinks>
                <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzwyzr:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>16196</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>