<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 02:26:06 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
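For example, fetching just those fields from the command line (a sketch: the /si/jira.issueviews XML-view URL pattern is an assumption based on standard JIRA installations; adjust it for your instance):

```shell
# Build the XML issue-view URL restricted to the key and summary fields.
# The /si/jira.issueviews path is an assumption (standard JIRA URL layout).
issue=LU-9428
url="https://jira.whamcloud.com/si/jira.issueviews:issue-xml/${issue}/${issue}.xml?field=key&field=summary"
# Print it; quote the URL when passing it to curl/wget so the shell
# does not treat '&' as a background operator.
echo "$url"
```

Pass the printed URL, quoted, to curl or wget to retrieve the trimmed document.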
-->
<rss version="0.92">
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
<language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-9428] ASSERTION( de-&gt;d_op == &amp;ll_d_ops)</title>
                <link>https://jira.whamcloud.com/browse/LU-9428</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;We have recently seen frequent occurrences of the LBUG below.&lt;/p&gt;

&lt;p&gt;The affected machines are all exporting our Lustre file system via NFS to other Linux machines.&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;May  2 06:59:03 i05-storage1 kernel: LustreError: 3023:0:(dcache.c:236:ll_d_init()) ASSERTION( de-&amp;gt;d_op == &amp;amp;ll_d_ops ) failed: 
May  2 06:59:03 i05-storage1 kernel: LustreError: 3023:0:(dcache.c:236:ll_d_init()) LBUG
May  2 06:59:03 i05-storage1 kernel: Pid: 3023, comm: nfsd
May  2 06:59:03 i05-storage1 kernel: 
May  2 06:59:03 i05-storage1 kernel: Call Trace:
May  2 06:59:03 i05-storage1 kernel: [&amp;lt;ffffffffa0383895&amp;gt;] libcfs_debug_dumpstack+0x55/0x80 [libcfs]
May  2 06:59:03 i05-storage1 kernel: [&amp;lt;ffffffffa0383e97&amp;gt;] lbug_with_loc+0x47/0xb0 [libcfs]
May  2 06:59:03 i05-storage1 kernel: [&amp;lt;ffffffffa097e69f&amp;gt;] ll_d_init+0x2ff/0x540 [lustre]
May  2 06:59:03 i05-storage1 kernel: [&amp;lt;ffffffffa09c1b5b&amp;gt;] ll_iget_for_nfs+0x20b/0x300 [lustre]
May  2 06:59:03 i05-storage1 kernel: [&amp;lt;ffffffffa09c1d89&amp;gt;] ll_fh_to_dentry+0x99/0xa0 [lustre]
May  2 06:59:03 i05-storage1 kernel: [&amp;lt;ffffffffa0b3871c&amp;gt;] exportfs_decode_fh+0x5c/0x2bc [exportfs]
May  2 06:59:03 i05-storage1 kernel: [&amp;lt;ffffffffa0bcc8e0&amp;gt;] ? nfsd_acceptable+0x0/0x120 [nfsd]
May  2 06:59:03 i05-storage1 kernel: [&amp;lt;ffffffffa0b56da0&amp;gt;] ? cache_check+0x60/0x370 [sunrpc]
May  2 06:59:03 i05-storage1 kernel: [&amp;lt;ffffffff8117f76b&amp;gt;] ? cache_alloc_refill+0x15b/0x240
May  2 06:59:03 i05-storage1 kernel: [&amp;lt;ffffffffa0bccdda&amp;gt;] fh_verify+0x32a/0x640 [nfsd]
May  2 06:59:03 i05-storage1 kernel: [&amp;lt;ffffffffa0bcfda1&amp;gt;] nfsd_open+0x31/0x240 [nfsd]
May  2 06:59:03 i05-storage1 kernel: [&amp;lt;ffffffffa0bd022b&amp;gt;] nfsd_commit+0x3b/0xa0 [nfsd]
May  2 06:59:03 i05-storage1 kernel: [&amp;lt;ffffffff810aff24&amp;gt;] ? groups_free+0x54/0x60
May  2 06:59:03 i05-storage1 kernel: [&amp;lt;ffffffffa0bd769d&amp;gt;] nfsd3_proc_commit+0x9d/0x100 [nfsd]
May  2 06:59:03 i05-storage1 kernel: [&amp;lt;ffffffffa0bc9405&amp;gt;] nfsd_dispatch+0xe5/0x230 [nfsd]
May  2 06:59:03 i05-storage1 kernel: [&amp;lt;ffffffffa0b4ccf4&amp;gt;] svc_process_common+0x344/0x640 [sunrpc]
May  2 06:59:03 i05-storage1 kernel: [&amp;lt;ffffffff8106c500&amp;gt;] ? default_wake_function+0x0/0x20
May  2 06:59:03 i05-storage1 kernel: [&amp;lt;ffffffffa0b4d390&amp;gt;] svc_process+0x110/0x160 [sunrpc]
May  2 06:59:03 i05-storage1 kernel: [&amp;lt;ffffffffa0bc9c82&amp;gt;] nfsd+0xc2/0x160 [nfsd]
May  2 06:59:03 i05-storage1 kernel: [&amp;lt;ffffffffa0bc9bc0&amp;gt;] ? nfsd+0x0/0x160 [nfsd]
May  2 06:59:03 i05-storage1 kernel: [&amp;lt;ffffffff810a640e&amp;gt;] kthread+0x9e/0xc0
May  2 06:59:03 i05-storage1 kernel: [&amp;lt;ffffffff8100c28a&amp;gt;] child_rip+0xa/0x20
May  2 06:59:03 i05-storage1 kernel: [&amp;lt;ffffffff810a6370&amp;gt;] ? kthread+0x0/0xc0
May  2 06:59:03 i05-storage1 kernel: [&amp;lt;ffffffff8100c280&amp;gt;] ? child_rip+0x0/0x20
May  2 06:59:03 i05-storage1 kernel: 
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;This looks similar to &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-9241&quot; title=&quot;ASSERTION( de-&amp;gt;d_op == &amp;amp;ll_d_ops ) failed&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-9241&quot;&gt;&lt;del&gt;LU-9241&lt;/del&gt;&lt;/a&gt;, but the stack trace is not quite the same; also, that patch is against master while we are running b2_7_fe, so we would need a fix against that branch.&lt;/p&gt;

&lt;p&gt;We are still investigating the events leading up to the crash, hoping to find a reproducer.&lt;/p&gt;</description>
                <environment>RHEL6 server</environment>
        <key id="45804">LU-9428</key>
            <summary>ASSERTION( de-&gt;d_op == &amp;ll_d_ops)</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="2" iconUrl="https://jira.whamcloud.com/images/icons/priorities/critical.svg">Critical</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="3">Duplicate</resolution>
                                        <assignee username="laisiyao">Lai Siyao</assignee>
                                    <reporter username="ferner">Frederik Ferner</reporter>
                        <labels>
                    </labels>
                <created>Tue, 2 May 2017 10:23:50 +0000</created>
                <updated>Thu, 29 Jun 2017 12:54:41 +0000</updated>
                            <resolved>Thu, 29 Jun 2017 12:54:41 +0000</resolved>
                                    <version>Lustre 2.7.0</version>
                                                        <due></due>
                            <votes>0</votes>
                                    <watches>3</watches>
                                                                            <comments>
                            <comment id="194172" author="pjones" created="Tue, 2 May 2017 16:39:10 +0000"  >&lt;p&gt;Lai&lt;/p&gt;

&lt;p&gt;Can you please assist with this issue?&lt;/p&gt;

&lt;p&gt;Thanks&lt;/p&gt;

&lt;p&gt;Peter&lt;/p&gt;</comment>
                            <comment id="195440" author="ferner" created="Thu, 11 May 2017 10:31:15 +0000"  >&lt;p&gt;Any updates on this? We are still seeing it frequently; unfortunately we haven&apos;t been able to detect a pattern or develop a reproducer yet, and it is definitely affecting our users.&lt;/p&gt;

&lt;p&gt;Thanks,&lt;br/&gt;
Frederik&lt;/p&gt;</comment>
                            <comment id="196328" author="ferner" created="Thu, 18 May 2017 13:14:56 +0000"  >&lt;p&gt;I noticed that the patch in &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-9241&quot; title=&quot;ASSERTION( de-&amp;gt;d_op == &amp;amp;ll_d_ops ) failed&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-9241&quot;&gt;&lt;del&gt;LU-9241&lt;/del&gt;&lt;/a&gt; has been merged and appears to apply cleanly to our b2.7 tree.&lt;/p&gt;

&lt;p&gt;Can you confirm whether it is safe to cherry-pick this commit and test it on our clients?&lt;/p&gt;

&lt;p&gt;thanks,&lt;br/&gt;
Frederik&lt;/p&gt;</comment>
                            <comment id="196338" author="laisiyao" created="Thu, 18 May 2017 13:48:13 +0000"  >&lt;p&gt;Yes, it&apos;s safe to cherry-pick to 2.7. It&apos;s a trivial fix to client code.&lt;/p&gt;</comment>
                            <comment id="196473" author="ferner" created="Fri, 19 May 2017 16:23:09 +0000"  >&lt;p&gt;Thanks for confirming. We have rebuilt our client with this patch applied and have started testing.&lt;/p&gt;

&lt;p&gt;As we don&apos;t have a known reproducer and it is quite unpredictable when the crash happens, it will take a while before we can be confident that this has fixed our problem. We&apos;ll report back.&lt;/p&gt;

&lt;p&gt;Frederik&lt;/p&gt;</comment>
                            <comment id="199836" author="pjones" created="Wed, 21 Jun 2017 13:44:08 +0000"  >&lt;p&gt;Frederik&lt;/p&gt;

&lt;p&gt;Has this been long enough to ascertain whether the fix has helped?&lt;/p&gt;

&lt;p&gt;Peter&lt;/p&gt;</comment>
                            <comment id="200567" author="ferner" created="Thu, 29 Jun 2017 10:53:14 +0000"  >&lt;p&gt;Peter, All,&lt;/p&gt;

&lt;p&gt;apologies for the delay, I&apos;ve been away.&lt;/p&gt;

&lt;p&gt;Without a clear reproducer it is always going to be hard to be absolutely sure, and the problem seems to come and go in waves. However, we have so far not seen this problem on an NFS server running the patched version, so I feel confident saying it is looking good so far; the patch certainly seems to have helped.&lt;/p&gt;

&lt;p&gt;Thanks,&lt;br/&gt;
Frederik&lt;/p&gt;</comment>
                            <comment id="200573" author="pjones" created="Thu, 29 Jun 2017 12:54:41 +0000"  >&lt;p&gt;Thanks, Frederik. Let&apos;s close out this ticket for now, then, and open a new one if you ever see a recurrence.&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10010">
                    <name>Duplicate</name>
                                            <outwardlinks description="duplicates">
                                        <issuelink>
            <issuekey id="45788">LU-9421</issuekey>
        </issuelink>
                            </outwardlinks>
                                                        </issuelinktype>
                    </issuelinks>
                <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzzblr:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>