<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 01:09:56 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92">
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
<language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-736] LBUG and kernel panic on client unmount</title>
                <link>https://jira.whamcloud.com/browse/LU-736</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;We had a few hundred clients all LBUG and then kernel panic on unmount of a lustre filesystem recently.  All the ones that I checked have the same backtrace.  See the attached sierra32_console.txt.&lt;/p&gt;

&lt;p&gt;It looks like others have hit this in earlier 1.8 versions.  See bugzilla.lustre.org bug 23861.&lt;/p&gt;</description>
                <environment>1.8.5.0-5chaos.  &lt;a href=&quot;https://github.com/chaos/lustre/tree/1.8.5.0-5chaos&quot;&gt;https://github.com/chaos/lustre/tree/1.8.5.0-5chaos&lt;/a&gt;</environment>
        <key id="12018">LU-736</key>
            <summary>LBUG and kernel panic on client unmount</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="4" iconUrl="https://jira.whamcloud.com/images/icons/priorities/minor.svg">Minor</priority>
                        <status id="6" iconUrl="https://jira.whamcloud.com/images/icons/statuses/closed.png" description="The issue is considered finished, the resolution is correct. Issues which are closed can be reopened.">Closed</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="2">Won&apos;t Fix</resolution>
                                        <assignee username="morrone">Christopher Morrone</assignee>
                                    <reporter username="morrone">Christopher Morrone</reporter>
                        <labels>
                            <label>llnl</label>
                    </labels>
                <created>Tue, 4 Oct 2011 19:12:24 +0000</created>
                <updated>Fri, 22 Jan 2016 05:13:08 +0000</updated>
                            <resolved>Fri, 22 Jan 2016 05:13:07 +0000</resolved>
                                                                        <due></due>
                            <votes>0</votes>
                                    <watches>8</watches>
                                                                            <comments>
                            <comment id="20766" author="morrone" created="Tue, 4 Oct 2011 19:15:20 +0000"  >&lt;p&gt;To make this issue more searchable, the LBUG is here:&lt;/p&gt;

&lt;p&gt;2011-09-29 07:51:30 LustreError: 19065:0:(ldlm_lock.c:1568:ldlm_lock_cancel()) ### lock still has references ns: lsa-MDT0000-mdc-ffff810332040400 lock: ffff810263a92e00/0xb23761f5d085be87 lrc: 4/0,1 mode: PW/PW res: 578792285/4020328757 rrc: 2 type: FLK pid: 21451 &lt;span class=&quot;error&quot;&gt;&amp;#91;0-&amp;gt;9223372036854775807&amp;#93;&lt;/span&gt; flags: 0x22002890 remote: 0x1f055096a089059 expref: -99 pid: 21451 timeout: 0&lt;br/&gt;
2011-09-29 07:51:30 LustreError: 19065:0:(ldlm_lock.c:1569:ldlm_lock_cancel()) LBUG&lt;/p&gt;</comment>
                            <comment id="20767" author="pjones" created="Tue, 4 Oct 2011 19:46:20 +0000"  >&lt;p&gt;HongChao&lt;/p&gt;

&lt;p&gt;Could you please look into this one?&lt;/p&gt;

&lt;p&gt;Thanks&lt;/p&gt;

&lt;p&gt;Peter&lt;/p&gt;</comment>
                            <comment id="21210" author="pjones" created="Thu, 13 Oct 2011 11:18:48 +0000"  >&lt;p&gt;Hongchao&lt;/p&gt;

&lt;p&gt;Could you please provide a status update?&lt;/p&gt;

&lt;p&gt;Thanks&lt;/p&gt;

&lt;p&gt;Peter&lt;/p&gt;</comment>
                            <comment id="21417" author="hongchao.zhang" created="Tue, 18 Oct 2011 10:40:13 +0000"  >&lt;p&gt;the readers/writers count of the flock&apos;s LDLM lock isn&apos;t zero until it is canceled by an unlock request; the reference will only be&lt;br/&gt;
dropped by &quot;ldlm_flock_completion_ast&quot; in &quot;cleanup_resource&quot; by setting the &quot;LDLM_FL_LOCAL_ONLY|LDLM_FL_FAILED&quot; flags in the&lt;br/&gt;
LDLM lock.&lt;/p&gt;

&lt;p&gt;in this case, the flag of the lock is &quot;0x22002890&quot;, which contains only LDLM_FL_FAILED, not LDLM_FL_LOCAL_ONLY; and during umount&lt;br/&gt;
this flag will be set only if obd-&amp;gt;obd_force is set.&lt;/p&gt;

&lt;p&gt;if there are flock LDLM locks during umount and obd-&amp;gt;obd_force isn&apos;t set, then this issue will be triggered.&lt;/p&gt;

&lt;p&gt;Hi Chris, &lt;br/&gt;
Did you add the &quot;-f&quot; flag when you umounted the Lustre client? Thanks.&lt;/p&gt;</comment>
                            <comment id="21422" author="hongchao.zhang" created="Tue, 18 Oct 2011 11:22:33 +0000"  >&lt;p&gt;Hi Chris, &lt;br/&gt;
could you please also check whether your application running on the Lustre client leaves some flocks unlocked?&lt;br/&gt;
I have tested locally by deliberately leaving some flocks unlocked and then umounting Lustre, which triggers this LBUG.&lt;/p&gt;

&lt;p&gt;Thanks&lt;/p&gt;</comment>
                            <comment id="21558" author="morrone" created="Thu, 20 Oct 2011 18:07:35 +0000"  >&lt;p&gt;I will find out what the admins did to umount lustre.&lt;/p&gt;

&lt;p&gt;It is going to be rather difficult to track down whether any of the various applications are using flock, and how.  Most of our users won&apos;t know the answer to that, even if their application IS using flock.&lt;/p&gt;

&lt;p&gt;Perhaps it is relevant that we are mounting with the &quot;flock&quot; option enabled.&lt;/p&gt;</comment>
                            <comment id="21560" author="morrone" created="Thu, 20 Oct 2011 18:16:24 +0000"  >&lt;p&gt;As far as they can recall, they did not use the umount -f option.&lt;/p&gt;</comment>
                            <comment id="34690" author="hongchao.zhang" created="Fri, 13 Apr 2012 06:27:45 +0000"  >&lt;p&gt;the initial patch is tracked at &lt;a href=&quot;http://review.whamcloud.com/#change,2535&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/#change,2535&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="34738" author="morrone" created="Fri, 13 Apr 2012 14:11:47 +0000"  >&lt;p&gt;Thanks.  FYI unless this is also a problem for 2.1, this ticket is very low priority compared to our many 2.1 bugs.  We do not plan to fix any 1.8 bugs in production.&lt;/p&gt;</comment>
                            <comment id="139553" author="hongchao.zhang" created="Thu, 21 Jan 2016 13:02:43 +0000"  >&lt;p&gt;Hi Chris,&lt;br/&gt;
Do you need any more work on this ticket? Or are we OK to close it? Thanks&lt;/p&gt;</comment>
                            <comment id="139659" author="morrone" created="Thu, 21 Jan 2016 19:39:55 +0000"  >&lt;p&gt;This is so old that I think you can close it with resolution &quot;Won&apos;t Fix&quot;.&lt;/p&gt;</comment>
                            <comment id="139706" author="hongchao.zhang" created="Fri, 22 Jan 2016 05:13:08 +0000"  >&lt;p&gt;Chris, Thanks!&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                                                <inwardlinks description="is related to">
                                                        </inwardlinks>
                                    </issuelinktype>
                    </issuelinks>
                <attachments>
                            <attachment id="10522" name="sierra32_console.txt" size="3727" author="morrone" created="Tue, 4 Oct 2011 19:12:24 +0000"/>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                    <customfield id="customfield_10020" key="com.atlassian.jira.plugin.system.customfieldtypes:float">
                        <customfieldname>Bugzilla ID</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>23861.0</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzvybb:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9743</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>