<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 02:51:19 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
<language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-12295] MDS Panic on DNE2 directory removing</title>
                <link>https://jira.whamcloud.com/browse/LU-12295</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;&lt;tt&gt;The MDS panics when handling a remote object fails.&lt;/tt&gt;&lt;/p&gt;

&lt;p&gt;&lt;tt&gt;Steps to reproduce are as follows:&lt;/tt&gt;&lt;/p&gt;
&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;
1) create/delete files and directories under a striped directory
[client]# lfs mkdir -c 2 -i 0 /mnt/lustre/dir
[client]# lfs mkdir -c 2 -i 0 -D /mnt/lustre/dir
[client]# &lt;span class=&quot;code-keyword&quot;&gt;while&lt;/span&gt; :; &lt;span class=&quot;code-keyword&quot;&gt;do&lt;/span&gt; rm -rf /mnt/lustre/dir/*;  ./mdtest -v -n 1000 -p 1 -i 3 -d /mnt/lustre/dir; done

2) simulate an ENOSPC error during remote object handling (that is, in the out_tx_write_exec() function)
[MDS1]# &lt;span class=&quot;code-keyword&quot;&gt;while&lt;/span&gt; :; &lt;span class=&quot;code-keyword&quot;&gt;do&lt;/span&gt; sysctl lnet.fail_loc=0x1704 ; sleep 3; sysctl lnet.fail_loc=0; sleep 5; done
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;&lt;tt&gt;MDS console and dump:&lt;/tt&gt;&lt;/p&gt;

&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;
Message from syslogd@rx200-076 at May 10 20:08:27 ...
 kernel:LustreError: 20269:0:(osd_handler.c:3229:osd_destroy()) ASSERTION( osd_inode_unlinked(inode) || inode-&amp;gt;i_nlink == 1 || inode-&amp;gt;i_nlink == 2 ) failed:

Message from syslogd@rx200-076 at May 10 20:08:27 ...
 kernel:LustreError: 20269:0:(osd_handler.c:3229:osd_destroy()) LBUG

 [9798957.173503] Call Trace:
[9798957.190509]  [&amp;lt;ffffffffb3b0d78e&amp;gt;] dump_stack+0x19/0x1b
[9798957.223630]  [&amp;lt;ffffffffb3b07a90&amp;gt;] panic+0xe8/0x21f
[9798957.254673]  [&amp;lt;ffffffffc0ad18cb&amp;gt;] lbug_with_loc+0x9b/0xa0 [libcfs]
[9798957.294020]  [&amp;lt;ffffffffc1133dd0&amp;gt;] osd_destroy+0x710/0x750 [osd_ldiskfs]
[9798957.335950]  [&amp;lt;ffffffffc1132bcd&amp;gt;] ? osd_ref_del+0x1ad/0x6a0 [osd_ldiskfs]
[9798957.378897]  [&amp;lt;ffffffffc1132141&amp;gt;] ? osd_attr_set+0x201/0xae0 [osd_ldiskfs]
[9798957.422331]  [&amp;lt;ffffffffb3b120d2&amp;gt;] ? down_write+0x12/0x3d
[9798957.456457]  [&amp;lt;ffffffffc0f6c851&amp;gt;] out_obj_destroy+0x101/0x2c0 [ptlrpc]
[9798957.497826]  [&amp;lt;ffffffffc0f6cac0&amp;gt;] out_tx_destroy_exec+0x20/0x190 [ptlrpc]
[9798957.540746]  [&amp;lt;ffffffffc0f67591&amp;gt;] out_tx_end+0xe1/0x5c0 [ptlrpc]
[9798957.578950]  [&amp;lt;ffffffffc0f6b6d3&amp;gt;] out_handle+0x1453/0x1bc0 [ptlrpc]
[9798957.618701]  [&amp;lt;ffffffffc0efbf72&amp;gt;] ? lustre_msg_get_opc+0x22/0xf0 [ptlrpc]
[9798957.661558]  [&amp;lt;ffffffffc0f5fc69&amp;gt;] ? tgt_request_preprocess.isra.26+0x299/0x790 [ptlrpc]
[9798957.711684]  [&amp;lt;ffffffffc0f6138a&amp;gt;] tgt_request_handle+0x92a/0x1370 [ptlrpc]
[9798957.755032]  [&amp;lt;ffffffffc0f09e4b&amp;gt;] ptlrpc_server_handle_request+0x23b/0xaa0 [ptlrpc]
[9798957.803047]  [&amp;lt;ffffffffc0f06478&amp;gt;] ? ptlrpc_wait_event+0x98/0x340 [ptlrpc]
[9798957.845811]  [&amp;lt;ffffffffb34cee92&amp;gt;] ? default_wake_function+0x12/0x20
[9798957.885436]  [&amp;lt;ffffffffb34c4abb&amp;gt;] ? __wake_up_common+0x5b/0x90
[9798957.922487]  [&amp;lt;ffffffffc0f0d592&amp;gt;] ptlrpc_main+0xa92/0x1e40 [ptlrpc]
[9798957.962103]  [&amp;lt;ffffffffc0f0cb00&amp;gt;] ? ptlrpc_register_service+0xe30/0xe30 [ptlrpc]
[9798958.008436]  [&amp;lt;ffffffffb34bae31&amp;gt;] kthread+0xd1/0xe0
[9798958.039672]  [&amp;lt;ffffffffb34bad60&amp;gt;] ? insert_kthread_work+0x40/0x40
[9798958.078163]  [&amp;lt;ffffffffb3b1f5f7&amp;gt;] ret_from_fork_nospec_begin+0x21/0x21
[9798958.119234]  [&amp;lt;ffffffffb34bad60&amp;gt;] ? insert_kthread_work+0x40/0x40

&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;&lt;tt&gt;Could you please look into this one?&lt;/tt&gt;&lt;/p&gt;</description>
                <environment></environment>
        <key id="55623">LU-12295</key>
            <summary>MDS Panic on DNE2 directory removing</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="4" iconUrl="https://jira.whamcloud.com/images/icons/priorities/minor.svg">Minor</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="laisiyao">Lai Siyao</assignee>
                                    <reporter username="takamura">Tatsushi Takamura</reporter>
                        <labels>
                    </labels>
                <created>Mon, 13 May 2019 10:49:25 +0000</created>
                <updated>Sat, 19 Dec 2020 13:15:48 +0000</updated>
                            <resolved>Sat, 12 Sep 2020 15:49:46 +0000</resolved>
                                    <version>Lustre 2.10.5</version>
                                    <fixVersion>Lustre 2.14.0</fixVersion>
                                        <due></due>
                            <votes>0</votes>
                                    <watches>7</watches>
                                                                            <comments>
                            <comment id="247125" author="green" created="Tue, 14 May 2019 23:00:25 +0000"  >&lt;p&gt;hm, it looks like I hit a very similar failure in master-next two days ago and yesterday:&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[ 5930.469393] LustreError: 9370:0:(osd_handler.c:3573:osd_destroy()) ASSERTION( osd_inode_unlinked(inode) || inode-&amp;gt;i_nlink == 1 || inode-&amp;gt;i_nlink == 2 ) failed: 
[ 5930.502768] LustreError: 9370:0:(osd_handler.c:3573:osd_destroy()) LBUG
[ 5930.505164] Pid: 9370, comm: mdt_rdpg07_003 3.10.0-7.6-debug #1 SMP Wed Nov 7 21:55:08 EST 2018
[ 5930.509233] Call Trace:
[ 5930.511319]  [&amp;lt;ffffffffa02b27dc&amp;gt;] libcfs_call_trace+0x8c/0xc0 [libcfs]
[ 5930.514891]  [&amp;lt;ffffffffa02b288c&amp;gt;] lbug_with_loc+0x4c/0xa0 [libcfs]
[ 5930.522770]  [&amp;lt;ffffffffa0c4eeb3&amp;gt;] osd_destroy+0x713/0x750 [osd_ldiskfs]
[ 5930.527762]  [&amp;lt;ffffffffa0e8f83b&amp;gt;] lod_sub_destroy+0x1bb/0x450 [lod]
[ 5930.531206]  [&amp;lt;ffffffffa0e777a0&amp;gt;] lod_destroy+0x140/0x820 [lod]
[ 5930.546681]  [&amp;lt;ffffffffa0d39e26&amp;gt;] mdd_close+0x846/0xf30 [mdd]
[ 5930.549991]  [&amp;lt;ffffffffa0db7aab&amp;gt;] mdt_mfd_close+0x3fb/0x850 [mdt]
[ 5930.555677]  [&amp;lt;ffffffffa0dbd401&amp;gt;] mdt_close_internal+0xb1/0x220 [mdt]
[ 5930.560137]  [&amp;lt;ffffffffa0dbd790&amp;gt;] mdt_close+0x220/0x740 [mdt]
[ 5930.564650]  [&amp;lt;ffffffffa072eb05&amp;gt;] tgt_request_handle+0x915/0x15c0 [ptlrpc]
[ 5930.567750]  [&amp;lt;ffffffffa06d12b9&amp;gt;] ptlrpc_server_handle_request+0x259/0xad0 [ptlrpc]
[ 5930.584402]  [&amp;lt;ffffffffa06d52bc&amp;gt;] ptlrpc_main+0xb6c/0x20b0 [ptlrpc]
[ 5930.585599]  [&amp;lt;ffffffff810b4ed4&amp;gt;] kthread+0xe4/0xf0
[ 5930.587608]  [&amp;lt;ffffffff817c4c5d&amp;gt;] ret_from_fork_nospec_begin+0x7/0x21
[ 5930.588809]  [&amp;lt;ffffffffffffffff&amp;gt;] 0xffffffffffffffff
[ 5930.589680] Kernel panic - not syncing: LBUG
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;and&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[13720.662563] LustreError: 14705:0:(osd_handler.c:3573:osd_destroy()) ASSERTION( osd_inode_unlinked(inode) || inode-&amp;gt;i_nlink == 1 || inode-&amp;gt;i_nlink == 2 ) failed: 
[13720.683253] LustreError: 14705:0:(osd_handler.c:3573:osd_destroy()) LBUG
[13720.684186] Pid: 14705, comm: mdt04_003 3.10.0-7.6-debug #1 SMP Wed Nov 7 21:55:08 EST 2018
[13720.685838] Call Trace:
[13720.686625]  [&amp;lt;ffffffffa02cb7dc&amp;gt;] libcfs_call_trace+0x8c/0xc0 [libcfs]
[13720.688731]  [&amp;lt;ffffffffa02cb88c&amp;gt;] lbug_with_loc+0x4c/0xa0 [libcfs]
[13720.690977]  [&amp;lt;ffffffffa0c2aeb3&amp;gt;] osd_destroy+0x713/0x750 [osd_ldiskfs]
[13720.701737]  [&amp;lt;ffffffffa0e6b83b&amp;gt;] lod_sub_destroy+0x1bb/0x450 [lod]
[13720.707438]  [&amp;lt;ffffffffa0e537a0&amp;gt;] lod_destroy+0x140/0x820 [lod]
[13720.712593]  [&amp;lt;ffffffffa0d0aa63&amp;gt;] mdd_finish_unlink+0x123/0x410 [mdd]
[13720.714811]  [&amp;lt;ffffffffa0d0cce4&amp;gt;] mdd_unlink+0x9c4/0xad0 [mdd]
[13720.719251]  [&amp;lt;ffffffffa0dc177f&amp;gt;] mdo_unlink+0x43/0x45 [mdt]
[13720.721165]  [&amp;lt;ffffffffa0d83c15&amp;gt;] mdt_reint_unlink+0xb25/0x13e0 [mdt]
[13720.728197]  [&amp;lt;ffffffffa0d8a7c0&amp;gt;] mdt_reint_rec+0x80/0x210 [mdt]
[13720.734164]  [&amp;lt;ffffffffa0d66a40&amp;gt;] mdt_reint_internal+0x780/0xb50 [mdt]
[13720.736305]  [&amp;lt;ffffffffa0d71aa7&amp;gt;] mdt_reint+0x67/0x140 [mdt]
[13720.744742]  [&amp;lt;ffffffffa0727b05&amp;gt;] tgt_request_handle+0x915/0x15c0 [ptlrpc]
[13720.758897]  [&amp;lt;ffffffffa06ca2b9&amp;gt;] ptlrpc_server_handle_request+0x259/0xad0 [ptlrpc]
[13720.798963]  [&amp;lt;ffffffffa06ce2bc&amp;gt;] ptlrpc_main+0xb6c/0x20b0 [ptlrpc]
[13720.801378]  [&amp;lt;ffffffff810b4ed4&amp;gt;] kthread+0xe4/0xf0
[13720.822348]  [&amp;lt;ffffffff817c4c5d&amp;gt;] ret_from_fork_nospec_begin+0x7/0x21
[13720.824379]  [&amp;lt;ffffffffffffffff&amp;gt;] 0xffffffffffffffff
[13720.826530] Kernel panic - not syncing: LBUG
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;I have crashdumps too.&lt;/p&gt;</comment>
                            <comment id="264038" author="ofaaland" created="Tue, 25 Feb 2020 18:35:05 +0000"  >&lt;p&gt;I don&apos;t recall seeing this specific bug at LLNL, but we&apos;ve seen a variety of failures when MDTs run out of space. It would be nice to handle these cases so that users can recover on their own by deleting files/directories, and so that readdir/stat/open/close succeed while the housecleaning is being done.&lt;/p&gt;</comment>
                            <comment id="278120" author="gerrit" created="Wed, 26 Aug 2020 15:27:42 +0000"  >&lt;p&gt;Lai Siyao (lai.siyao@whamcloud.com) uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/39734&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/39734&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-12295&quot; title=&quot;MDS Panic on DNE2 directory removing&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-12295&quot;&gt;&lt;del&gt;LU-12295&lt;/del&gt;&lt;/a&gt; osd-ldiskfs: don&apos;t LBUG() if dir nlink is wrong&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 1f563d379c6415b93fbc50d5613e532ebd6a9d34&lt;/p&gt;</comment>
                            <comment id="279428" author="gerrit" created="Sat, 12 Sep 2020 15:43:47 +0000"  >&lt;p&gt;Oleg Drokin (green@whamcloud.com) merged in patch &lt;a href=&quot;https://review.whamcloud.com/39734/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/39734/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-12295&quot; title=&quot;MDS Panic on DNE2 directory removing&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-12295&quot;&gt;&lt;del&gt;LU-12295&lt;/del&gt;&lt;/a&gt; mdd: don&apos;t LBUG() if dir nlink is wrong&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: afa39b3cceabccd19e7c412ff90667e95cbfe3e8&lt;/p&gt;</comment>
                            <comment id="279448" author="pjones" created="Sat, 12 Sep 2020 15:49:46 +0000"  >&lt;p&gt;Landed for 2.14&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                            <outwardlinks description="is related to ">
                                                        </outwardlinks>
                                                                <inwardlinks description="is related to">
                                                        </inwardlinks>
                                    </issuelinktype>
                    </issuelinks>
                <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                    <customfield id="customfield_10030" key="com.atlassian.jira.plugin.system.customfieldtypes:labels">
                        <customfieldname>Epic/Theme</customfieldname>
                        <customfieldvalues>
                                        <label>dne</label>
    
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|i00g6n:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>