<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 01:49:58 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary, append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
<language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-5264] ASSERTION( info-&gt;oti_r_locks == 0 ) at OST umount</title>
                <link>https://jira.whamcloud.com/browse/LU-5264</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;While stopping a Lustre filesystem, the following LBUG occurred on an OSS:&lt;/p&gt;

&lt;p&gt;----8&amp;lt; ----&lt;br/&gt;
LustreError: 4581:0:(osd_handler.c:5343:osd_key_exit()) ASSERTION( info-&amp;gt;oti_r_locks == 0 ) failed:&lt;br/&gt;
Lustre: server umount scratch3-OST0130 complete&lt;br/&gt;
LustreError: 4581:0:(osd_handler.c:5343:osd_key_exit()) LBUG&lt;br/&gt;
Pid: 4581, comm: ll_ost00_070&lt;/p&gt;

&lt;p&gt;Call Trace:&lt;br/&gt;
 &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffffa0d8c895&amp;gt;&amp;#93;&lt;/span&gt; libcfs_debug_dumpstack+0x55/0x80 &lt;span class=&quot;error&quot;&gt;&amp;#91;libcfs&amp;#93;&lt;/span&gt;&lt;br/&gt;
 &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffffa0d8ce97&amp;gt;&amp;#93;&lt;/span&gt; lbug_with_loc+0x47/0xb0 &lt;span class=&quot;error&quot;&gt;&amp;#91;libcfs&amp;#93;&lt;/span&gt;&lt;br/&gt;
 &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffffa14da67b&amp;gt;&amp;#93;&lt;/span&gt; osd_key_exit+0x5b/0xc0 &lt;span class=&quot;error&quot;&gt;&amp;#91;osd_ldiskfs&amp;#93;&lt;/span&gt;&lt;br/&gt;
 &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffffa0e5d9f8&amp;gt;&amp;#93;&lt;/span&gt; lu_context_exit+0x58/0xa0 &lt;span class=&quot;error&quot;&gt;&amp;#91;obdclass&amp;#93;&lt;/span&gt;&lt;br/&gt;
 &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffffa0ffd749&amp;gt;&amp;#93;&lt;/span&gt; ptlrpc_main+0xa59/0x1700 &lt;span class=&quot;error&quot;&gt;&amp;#91;ptlrpc&amp;#93;&lt;/span&gt;&lt;br/&gt;
 &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffffa0ffccf0&amp;gt;&amp;#93;&lt;/span&gt; ? ptlrpc_main+0x0/0x1700 &lt;span class=&quot;error&quot;&gt;&amp;#91;ptlrpc&amp;#93;&lt;/span&gt;&lt;br/&gt;
 &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffff8100c20a&amp;gt;&amp;#93;&lt;/span&gt; child_rip+0xa/0x20&lt;br/&gt;
 &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffffa0ffccf0&amp;gt;&amp;#93;&lt;/span&gt; ? ptlrpc_main+0x0/0x1700 &lt;span class=&quot;error&quot;&gt;&amp;#91;ptlrpc&amp;#93;&lt;/span&gt;&lt;br/&gt;
 &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffffa0ffccf0&amp;gt;&amp;#93;&lt;/span&gt; ? ptlrpc_main+0x0/0x1700 &lt;span class=&quot;error&quot;&gt;&amp;#91;ptlrpc&amp;#93;&lt;/span&gt;&lt;br/&gt;
 &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffff8100c200&amp;gt;&amp;#93;&lt;/span&gt; ? child_rip+0x0/0x20&lt;/p&gt;

&lt;p&gt;Kernel panic - not syncing: LBUG&lt;br/&gt;
Pid: 4581, comm: ll_ost00_070 Tainted: G W --------------- 2.6.32-431.11.2.el6.Bull.48.x86_64 #1&lt;br/&gt;
Call Trace:&lt;br/&gt;
 &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffff81528393&amp;gt;&amp;#93;&lt;/span&gt; ? panic+0xa7/0x16f&lt;br/&gt;
 &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffffa0d8ceeb&amp;gt;&amp;#93;&lt;/span&gt; ? lbug_with_loc+0x9b/0xb0 &lt;span class=&quot;error&quot;&gt;&amp;#91;libcfs&amp;#93;&lt;/span&gt;&lt;br/&gt;
 &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffffa14da67b&amp;gt;&amp;#93;&lt;/span&gt; ? osd_key_exit+0x5b/0xc0 &lt;span class=&quot;error&quot;&gt;&amp;#91;osd_ldiskfs&amp;#93;&lt;/span&gt;&lt;br/&gt;
 &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffffa0e5d9f8&amp;gt;&amp;#93;&lt;/span&gt; ? lu_context_exit+0x58/0xa0 &lt;span class=&quot;error&quot;&gt;&amp;#91;obdclass&amp;#93;&lt;/span&gt;&lt;br/&gt;
 &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffffa0ffd749&amp;gt;&amp;#93;&lt;/span&gt; ? ptlrpc_main+0xa59/0x1700 &lt;span class=&quot;error&quot;&gt;&amp;#91;ptlrpc&amp;#93;&lt;/span&gt;&lt;br/&gt;
 &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffffa0ffccf0&amp;gt;&amp;#93;&lt;/span&gt; ? ptlrpc_main+0x0/0x1700 &lt;span class=&quot;error&quot;&gt;&amp;#91;ptlrpc&amp;#93;&lt;/span&gt;&lt;br/&gt;
 &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffff8100c20a&amp;gt;&amp;#93;&lt;/span&gt; ? child_rip+0xa/0x20&lt;br/&gt;
 &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffffa0ffccf0&amp;gt;&amp;#93;&lt;/span&gt; ? ptlrpc_main+0x0/0x1700 &lt;span class=&quot;error&quot;&gt;&amp;#91;ptlrpc&amp;#93;&lt;/span&gt;&lt;br/&gt;
 &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffffa0ffccf0&amp;gt;&amp;#93;&lt;/span&gt; ? ptlrpc_main+0x0/0x1700 &lt;span class=&quot;error&quot;&gt;&amp;#91;ptlrpc&amp;#93;&lt;/span&gt;&lt;br/&gt;
 &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffff8100c200&amp;gt;&amp;#93;&lt;/span&gt; ? child_rip+0x0/0x20&lt;br/&gt;
----8&amp;lt; ----&lt;/p&gt;

&lt;p&gt;There were 15 OSTs mounted on the OSS. One umount process had completed while the others were still running at crash time. All umount processes were running in parallel because of Shine (shine stop -f scratch3 -n @io).&lt;/p&gt;

&lt;p&gt;----8&amp;lt; ----&lt;br/&gt;
crash&amp;gt; ps | grep umount&lt;br/&gt;
  21639 21636 25 ffff88084445e080 IN 0.0 105176 760 umount&lt;br/&gt;
  21642 21638 25 ffff8807e0f86b40 IN 0.0 105176 760 umount&lt;br/&gt;
  21643 21637 26 ffff880b6a78e100 IN 0.0 105176 760 umount&lt;br/&gt;
  21646 21640 25 ffff880c789fcb40 IN 0.0 106068 756 umount&lt;br/&gt;
  21649 21644 2 ffff880f9c9740c0 IN 0.0 105176 760 umount&lt;br/&gt;
  21651 21645 17 ffff88083ecfb580 IN 0.0 106068 756 umount&lt;br/&gt;
  21653 21648 15 ffff880f97935580 IN 0.0 105176 760 umount&lt;br/&gt;
  21655 21650 6 ffff880fc5c554c0 IN 0.0 105176 760 umount&lt;br/&gt;
  21657 21652 25 ffff881076294080 IN 0.0 105176 756 umount&lt;br/&gt;
  21659 21654 19 ffff880f8f245500 IN 0.0 106068 760 umount&lt;br/&gt;
  21661 21656 11 ffff8807ec1214c0 IN 0.0 105176 764 umount&lt;br/&gt;
  21663 21660 30 ffff8808122a9500 IN 0.0 106068 764 umount&lt;br/&gt;
  21664 21658 3 ffff880b3d1d7500 IN 0.0 105176 764 umount&lt;br/&gt;
  21665 21662 5 ffff8807ec120a80 IN 0.0 106068 764 umount&lt;br/&gt;
----8&amp;lt; ----&lt;/p&gt;

&lt;p&gt;Backtrace of the process:&lt;/p&gt;

&lt;p&gt;----8&amp;lt; ----&lt;br/&gt;
crash&amp;gt; bt&lt;br/&gt;
PID: 4581 TASK: ffff881004af1540 CPU: 8 COMMAND: &quot;ll_ost00_070&quot;&lt;br/&gt;
 #0 &lt;span class=&quot;error&quot;&gt;&amp;#91;ffff881004af3cb8&amp;#93;&lt;/span&gt; machine_kexec at ffffffff8103915b&lt;br/&gt;
 #1 &lt;span class=&quot;error&quot;&gt;&amp;#91;ffff881004af3d18&amp;#93;&lt;/span&gt; crash_kexec at ffffffff810c5e42&lt;br/&gt;
 #2 &lt;span class=&quot;error&quot;&gt;&amp;#91;ffff881004af3de8&amp;#93;&lt;/span&gt; panic at ffffffff8152839a&lt;br/&gt;
 #3 &lt;span class=&quot;error&quot;&gt;&amp;#91;ffff881004af3e68&amp;#93;&lt;/span&gt; lbug_with_loc at ffffffffa0d8ceeb &lt;span class=&quot;error&quot;&gt;&amp;#91;libcfs&amp;#93;&lt;/span&gt;&lt;br/&gt;
 #4 &lt;span class=&quot;error&quot;&gt;&amp;#91;ffff881004af3e88&amp;#93;&lt;/span&gt; osd_key_exit at ffffffffa14da67b &lt;span class=&quot;error&quot;&gt;&amp;#91;osd_ldiskfs&amp;#93;&lt;/span&gt;&lt;br/&gt;
 #5 &lt;span class=&quot;error&quot;&gt;&amp;#91;ffff881004af3e98&amp;#93;&lt;/span&gt; lu_context_exit at ffffffffa0e5d9f8 &lt;span class=&quot;error&quot;&gt;&amp;#91;obdclass&amp;#93;&lt;/span&gt;&lt;br/&gt;
 #6 &lt;span class=&quot;error&quot;&gt;&amp;#91;ffff881004af3eb8&amp;#93;&lt;/span&gt; ptlrpc_main at ffffffffa0ffd749 &lt;span class=&quot;error&quot;&gt;&amp;#91;ptlrpc&amp;#93;&lt;/span&gt;&lt;br/&gt;
 #7 &lt;span class=&quot;error&quot;&gt;&amp;#91;ffff881004af3f48&amp;#93;&lt;/span&gt; kernel_thread at ffffffff8100c20a&lt;br/&gt;
crash&amp;gt; bt -l 4581&lt;br/&gt;
PID: 4581 TASK: ffff881004af1540 CPU: 8 COMMAND: &quot;ll_ost00_070&quot;&lt;br/&gt;
 #0 &lt;span class=&quot;error&quot;&gt;&amp;#91;ffff881004af3cb8&amp;#93;&lt;/span&gt; machine_kexec at ffffffff8103915b&lt;br/&gt;
    /usr/src/debug/kernel-2.6/linux-2.6.32-431.11.2.el6.Bull.48.x86_64/arch/x86/kernel/machine_kexec_64.c: 336&lt;br/&gt;
 #1 &lt;span class=&quot;error&quot;&gt;&amp;#91;ffff881004af3d18&amp;#93;&lt;/span&gt; crash_kexec at ffffffff810c5e42&lt;br/&gt;
    /usr/src/debug/kernel-2.6/linux-2.6.32-431.11.2.el6.Bull.48.x86_64/kernel/kexec.c: 1106&lt;br/&gt;
 #2 &lt;span class=&quot;error&quot;&gt;&amp;#91;ffff881004af3de8&amp;#93;&lt;/span&gt; panic at ffffffff8152839a&lt;br/&gt;
    /usr/src/debug/kernel-2.6/linux-2.6.32-431.11.2.el6.Bull.48.x86_64/kernel/panic.c: 111&lt;br/&gt;
 #3 &lt;span class=&quot;error&quot;&gt;&amp;#91;ffff881004af3e68&amp;#93;&lt;/span&gt; lbug_with_loc at ffffffffa0d8ceeb &lt;span class=&quot;error&quot;&gt;&amp;#91;libcfs&amp;#93;&lt;/span&gt;&lt;br/&gt;
    /usr/src/debug/lustre-2.4.3/libcfs/libcfs/linux/linux-debug.c: 176&lt;br/&gt;
 #4 &lt;span class=&quot;error&quot;&gt;&amp;#91;ffff881004af3e88&amp;#93;&lt;/span&gt; osd_key_exit at ffffffffa14da67b &lt;span class=&quot;error&quot;&gt;&amp;#91;osd_ldiskfs&amp;#93;&lt;/span&gt;&lt;br/&gt;
    /usr/src/debug/lustre-2.4.3/lustre/osd-ldiskfs/osd_handler.c: 5345&lt;br/&gt;
 #5 &lt;span class=&quot;error&quot;&gt;&amp;#91;ffff881004af3e98&amp;#93;&lt;/span&gt; lu_context_exit at ffffffffa0e5d9f8 &lt;span class=&quot;error&quot;&gt;&amp;#91;obdclass&amp;#93;&lt;/span&gt;&lt;br/&gt;
    /usr/src/debug/lustre-2.4.3/lustre/obdclass/lu_object.c: 1662&lt;br/&gt;
 #6 &lt;span class=&quot;error&quot;&gt;&amp;#91;ffff881004af3eb8&amp;#93;&lt;/span&gt; ptlrpc_main at ffffffffa0ffd749 &lt;span class=&quot;error&quot;&gt;&amp;#91;ptlrpc&amp;#93;&lt;/span&gt;&lt;br/&gt;
    /usr/src/debug/lustre-2.4.3/lustre/ptlrpc/service.c: 2514&lt;br/&gt;
 #7 &lt;span class=&quot;error&quot;&gt;&amp;#91;ffff881004af3f48&amp;#93;&lt;/span&gt; kernel_thread at ffffffff8100c20a&lt;br/&gt;
    /usr/src/debug//////////////////////////////////////////////////////////////////kernel-2.6/linux-2.6.32-431.11.2.el6.Bull.48.x86_64/arch/x86/kernel/entry_64.S: 1235&lt;br/&gt;
----8&amp;lt; ----&lt;/p&gt;

&lt;p&gt;You can find attached:&lt;/p&gt;
&lt;ul class=&quot;alternate&quot; type=&quot;square&quot;&gt;
	&lt;li&gt;dmesg.txt: log from the crash&lt;/li&gt;
	&lt;li&gt;bt-all.merged.txt: merged foreach bt from the crash&lt;/li&gt;
&lt;/ul&gt;
</description>
                <environment>RHEL6 w/ kernel 2.6.32-431.17.1.el6.x86_64</environment>
        <key id="25341">LU-5264</key>
            <summary>ASSERTION( info-&gt;oti_r_locks == 0 ) at OST umount</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="4" iconUrl="https://jira.whamcloud.com/images/icons/priorities/minor.svg">Minor</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="bfaccini">Bruno Faccini</assignee>
                                    <reporter username="bruno.travouillon">Bruno Travouillon</reporter>
                        <labels>
                            <label>p4b</label>
                    </labels>
                <created>Fri, 27 Jun 2014 14:44:57 +0000</created>
                <updated>Thu, 14 May 2020 12:06:34 +0000</updated>
                            <resolved>Wed, 20 May 2015 13:03:59 +0000</resolved>
                                    <version>Lustre 2.4.3</version>
                                    <fixVersion>Lustre 2.8.0</fixVersion>
                                        <due></due>
                            <votes>0</votes>
                                    <watches>10</watches>
                                                                            <comments>
<comment id="87699" author="bfaccini" created="Fri, 27 Jun 2014 16:15:09 +0000"  >&lt;p&gt;Hello Bruno, nice to read from you, even in a JIRA!&lt;br/&gt;
Is it a one-shot? Also, is there a crash dump available for (one of) these occurrences?&lt;/p&gt;</comment>
                            <comment id="87701" author="bruno.travouillon" created="Fri, 27 Jun 2014 16:23:26 +0000"  >&lt;p&gt;Hi Bruno,&lt;/p&gt;

&lt;p&gt;Yes, AFAIK, this is a one-shot.&lt;br/&gt;
I won&apos;t be able to provide the crash dump because the LBUG occurred on a black site. However, please tell me if you need some input, and I will try to extract it from the dump.&lt;/p&gt;</comment>
<comment id="88131" author="bfaccini" created="Thu, 3 Jul 2014 18:27:59 +0000"  >&lt;p&gt;Hmm, after having a look at the assembly code of the routines in the panic/LBUG stack, it will be difficult/painful for me to detail the exact actions/crash sub-commands needed to get what I need from the crash dump ... But let&apos;s try: first I would like to get both the &quot;bt -f&quot; and &quot;bt -F&quot; output for the panic/LBUG task. Is that possible?&lt;/p&gt;

&lt;p&gt;Also, this ticket duplicates &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-4776&quot; title=&quot;suite sanity-scrub: ASSERTION( info-&amp;gt;oti_r_locks == 0 )&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-4776&quot;&gt;LU-4776&lt;/a&gt;, generated from similar auto-test failures...&lt;/p&gt;</comment>
                            <comment id="88832" author="bruno.travouillon" created="Fri, 11 Jul 2014 14:48:10 +0000"  >&lt;p&gt;Hi Bruno,&lt;/p&gt;

&lt;p&gt;You will find attached both &apos;bt -f&apos; and &apos;bt -F&apos; from the crash dump.&lt;/p&gt;

&lt;p&gt;HTH,&lt;/p&gt;

&lt;p&gt;Bruno&lt;/p&gt;</comment>
<comment id="89721" author="bfaccini" created="Tue, 22 Jul 2014 12:00:45 +0000"  >&lt;p&gt;Hello Bruno, can you get more info out of the crash dump on site?&lt;/p&gt;

&lt;p&gt;If yes, here is what I need:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;p/x lu_keys
lu_env 0xffff88041e7c5d80
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;This last command will print the lu_env struct containing the context and the address of its lc_value array of pointers.&lt;/p&gt;

&lt;p&gt;And since the current index of interest, in both lu_keys[] and lc_value, seems to be #21:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;p/x *lu_keys[21]
rd &amp;lt;lc_value address&amp;gt; 40
osd_thread_info &amp;lt;lc_value[21] pointer value&amp;gt; 
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;Don&apos;t forget to first load the Lustre modules containing the debuginfo, using &quot;mod -S &amp;lt;&lt;span class=&quot;error&quot;&gt;&amp;#91;debuginfo,modules&amp;#93;&lt;/span&gt; root dir&amp;gt;&quot;.&lt;/p&gt;</comment>
                            <comment id="94357" author="bruno.travouillon" created="Thu, 18 Sep 2014 09:12:50 +0000"  >&lt;p&gt;Hi Bruno,&lt;/p&gt;

&lt;p&gt;Here is the requested output. Sorry for the delay.&lt;/p&gt;

&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;crash&amp;gt; p/x lu_keys
$8 = {0xffffffffa0ec53e0, 0xffffffffa0ec7aa0, 0xffffffffa0ec6280, 0xffffffffa0ecd720, 0xffffffffa0eb2a20, 0xffffffffa10a39c0, 0xffffffffa0820ac0, 0xffffffffa090d320, 0xffffffffa11b77a0, 0xffffffffa11beac0, 0
xffffffffa11b9c60, 0xffffffffa1254960, 0xffffffffa12549a0, 0xffffffffa12ca080, 0xffffffffa12ca0c0, 0xffffffffa1391f20, 0xffffffffa1391f60, 0xffffffffa1393220, 0xffffffffa1393260, 0xffffffffa1438700, 0xffffff
ffa14386c0, 0xffffffffa15197e0, 0xffffffffa1594f60, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0}
crash&amp;gt; lu_env 0xffff88041e7c5d80
struct lu_env {
  le_ctx = {
    lc_tags = 2952790018, 
    lc_state = LCS_LEFT, 
    lc_thread = 0xffff88041227f140, 
    lc_value = 0xffff88041e7c8e00, 
    lc_remember = {
      next = 0xffff881057a58ac0, 
      prev = 0xffff880848277d58
    }, 
    lc_version = 25, 
    lc_cookie = 6
  }, 
  le_ses = 0x0
}
crash&amp;gt; p/x *lu_keys[21]
$9 = {
  lct_tags = 0x400000c3, 
  lct_init = 0xffffffffa14e0330, 
  lct_fini = 0xffffffffa14db1a0, 
  lct_exit = 0xffffffffa14da620, 
  lct_index = 0x15, 
  lct_used = {
    counter = 0x2
  }, 
  lct_owner = 0xffffffffa1526680, 
  lct_reference = {&amp;lt;No data fields&amp;gt;}
}
crash&amp;gt; rd 0xffff88041e7c8e00 40
ffff88041e7c8e00:  ffff88041e7c8c00 0000000000000000   ..|.............
ffff88041e7c8e10:  ffff88041e7c9800 0000000000000000   ..|.............
ffff88041e7c8e20:  0000000000000000 ffff88041e7cac00   ..........|.....
ffff88041e7c8e30:  ffff880418c30d40 ffff88041e7c5cc0   @........\|.....
ffff88041e7c8e40:  ffff88041e7c7dc0 0000000000000000   .}|.............
ffff88041e7c8e50:  ffff88041e7c8a00 0000000000000000   ..|.............
ffff88041e7c8e60:  0000000000000000 0000000000000000   ................
ffff88041e7c8e70:  0000000000000000 0000000000000000   ................
ffff88041e7c8e80:  0000000000000000 0000000000000000   ................
ffff88041e7c8e90:  0000000000000000 0000000000000000   ................
ffff88041e7c8ea0:  0000000000000000 0000000000000000   ................
ffff88041e7c8eb0:  0000000000000000 0000000000000000   ................
ffff88041e7c8ec0:  0000000000000000 0000000000000000   ................
ffff88041e7c8ed0:  0000000000000000 0000000000000000   ................
ffff88041e7c8ee0:  0000000000000000 0000000000000000   ................
ffff88041e7c8ef0:  0000000000000000 0000000000000000   ................
ffff88041e7c8f00:  0000000000000000 0000000000000000   ................
ffff88041e7c8f10:  0000000000000000 0000000000000000   ................
ffff88041e7c8f20:  0000000000000000 0000000000000000   ................
ffff88041e7c8f30:  0000000000000000 0000000000000000   ................
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;There is no &amp;lt;lc_value&lt;span class=&quot;error&quot;&gt;&amp;#91;21&amp;#93;&lt;/span&gt; pointer value&amp;gt; here... Did I miss something?&lt;/p&gt;

&lt;p&gt;We had a new occurrence of this bug last week on the same OSS. I will try to get the output from the new dump.&lt;/p&gt;</comment>
<comment id="95416" author="bfaccini" created="Wed, 1 Oct 2014 14:15:59 +0000"  >&lt;p&gt;Yes, this is strange, because you should have crashed with a kernel Oops/BUG() instead of an LBUG ...&lt;br/&gt;
But this may also be the result of a race where lc_value&lt;span class=&quot;error&quot;&gt;&amp;#91;21&amp;#93;&lt;/span&gt; was still populated when referenced, causing the LBUG thread to trigger the assert after the concurrent/racing thread had zeroed it ...&lt;br/&gt;
BTW, the &quot;good&quot; news is that during the testing of another patch, where I was running MDT/OST mounts/umounts in a loop, I triggered the same LBUG. And its first crash analysis steps show the same strange behavior! More to come soon.&lt;/p&gt;</comment>
<comment id="101450" author="bfaccini" created="Fri, 12 Dec 2014 14:21:49 +0000"  >&lt;p&gt;I have spent some time this week doing more analysis on my own crash dump, and I should now be able to push a patch soon to fix the suspected race during umounts.&lt;/p&gt;

&lt;p&gt;BTW, can you check in your on-site crash dump/logs whether this was the last OST to be unmounted?&lt;/p&gt;
</comment>
                            <comment id="101794" author="gerrit" created="Wed, 17 Dec 2014 10:40:34 +0000"  >&lt;p&gt;Faccini Bruno (bruno.faccini@intel.com) uploaded a new patch: &lt;a href=&quot;http://review.whamcloud.com/13103&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/13103&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-5264&quot; title=&quot;ASSERTION( info-&amp;gt;oti_r_locks == 0 ) at OST umount&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-5264&quot;&gt;&lt;del&gt;LU-5264&lt;/del&gt;&lt;/a&gt; obdclass: fix race during key quiescency&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 66dcaa9824f4ae35fb375bc9f9da18c481b7a8f7&lt;/p&gt;</comment>
<comment id="101796" author="bfaccini" created="Wed, 17 Dec 2014 10:48:10 +0000"  >&lt;p&gt;After more debugging on my own crash dump, I think the problem comes from the fact that upon umount, presumably of the last device using the same OSD back-end, lu_context_key_quiesce() is run to prepare for module unload, removing all of the module&apos;s key references from any context linked on the lu_context_remembered list. Thus threads must protect against such traversal processing when exiting from their context.&lt;br/&gt;
Master patch to implement this is at &lt;a href=&quot;http://review.whamcloud.com/13103&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/13103&lt;/a&gt;.&lt;/p&gt;</comment>
                            <comment id="101797" author="bruno.travouillon" created="Wed, 17 Dec 2014 11:06:49 +0000"  >&lt;p&gt;Hi Bruno,&lt;/p&gt;

&lt;p&gt;I was just looking at the dumps at the customer site. For the first crash, there were still 14 umount processes running for 15 OSTs. For the second, 3 umount processes were remaining for 8 OSTs.&lt;/p&gt;</comment>
<comment id="101798" author="bruno.travouillon" created="Wed, 17 Dec 2014 12:24:07 +0000"  >&lt;p&gt;I mean 14 umount processes still running for 15 OSTs previously mounted, i.e. 1 OST had been successfully unmounted before the first crash and 5 OSTs before the second one.&lt;/p&gt;</comment>
<comment id="101941" author="bfaccini" created="Thu, 18 Dec 2014 14:16:49 +0000"  >&lt;p&gt;Thanks Bruno, but I believe this could be the effect of the Shine tool running all the umounts in parallel ...&lt;/p&gt;</comment>
                            <comment id="106689" author="bruno.travouillon" created="Wed, 11 Feb 2015 20:23:11 +0000"  >&lt;p&gt;Bruno,&lt;/p&gt;

&lt;p&gt;Can we go ahead and backport the patch to b2_5?&lt;/p&gt;</comment>
<comment id="106790" author="bfaccini" created="Thu, 12 Feb 2015 11:52:41 +0000"  >&lt;p&gt;Hmm, the master patch version has successfully passed auto-tests, but it definitely still needs to pass the review step. I will try to get more involvement from the reviewers.&lt;/p&gt;</comment>
                            <comment id="108517" author="gerrit" created="Tue, 3 Mar 2015 02:16:49 +0000"  >&lt;p&gt;Oleg Drokin (oleg.drokin@intel.com) merged in patch &lt;a href=&quot;http://review.whamcloud.com/13103/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/13103/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-5264&quot; title=&quot;ASSERTION( info-&amp;gt;oti_r_locks == 0 ) at OST umount&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-5264&quot;&gt;&lt;del&gt;LU-5264&lt;/del&gt;&lt;/a&gt; obdclass: fix race during key quiescency&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: 42fdf8355791cb682c6120f7950bb2ecd50f97aa&lt;/p&gt;</comment>
                            <comment id="115994" author="pjones" created="Wed, 20 May 2015 13:03:59 +0000"  >&lt;p&gt;Landed for 2.8&lt;/p&gt;</comment>
                            <comment id="121673" author="gerrit" created="Mon, 20 Jul 2015 15:03:19 +0000"  >&lt;p&gt;Gr&#233;goire Pichon (gregoire.pichon@bull.net) uploaded a new patch: &lt;a href=&quot;http://review.whamcloud.com/15647&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/15647&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-5264&quot; title=&quot;ASSERTION( info-&amp;gt;oti_r_locks == 0 ) at OST umount&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-5264&quot;&gt;&lt;del&gt;LU-5264&lt;/del&gt;&lt;/a&gt; obdclass: fix race during key quiescency&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: b2_5&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: cf26b808c1f1187a600de0674baf59061bdefc30&lt;/p&gt;</comment>
                            <comment id="125997" author="bruno.travouillon" created="Wed, 2 Sep 2015 09:09:17 +0000"  >&lt;p&gt;Gr&#233;goire,&lt;/p&gt;

&lt;p&gt;Maybe you should abandon this backport to b2_5 because of &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-6800&quot; title=&quot;Significant performance regression with patch LU-5264&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-6800&quot;&gt;&lt;del&gt;LU-6800&lt;/del&gt;&lt;/a&gt;. Backporting &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-6800&quot; title=&quot;Significant performance regression with patch LU-5264&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-6800&quot;&gt;&lt;del&gt;LU-6800&lt;/del&gt;&lt;/a&gt; would be a better answer to this issue.&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10010">
                    <name>Duplicate</name>
                                                                <inwardlinks description="is duplicated by">
                                                        </inwardlinks>
                                    </issuelinktype>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                            <outwardlinks description="is related to ">
                                                        </outwardlinks>
                                                                <inwardlinks description="is related to">
                                        <issuelink>
            <issuekey id="23696">LU-4776</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="30925">LU-6800</issuekey>
        </issuelink>
                            </inwardlinks>
                                    </issuelinktype>
                    </issuelinks>
                <attachments>
                            <attachment id="15368" name="bt-F" size="2795" author="bruno.travouillon" created="Fri, 11 Jul 2014 14:48:10 +0000"/>
                            <attachment id="15270" name="bt-all.merged.txt" size="152899" author="bruno.travouillon" created="Fri, 27 Jun 2014 14:44:57 +0000"/>
                            <attachment id="15367" name="bt-f" size="2866" author="bruno.travouillon" created="Fri, 11 Jul 2014 14:48:10 +0000"/>
                            <attachment id="15271" name="dmesg.txt" size="216833" author="bruno.travouillon" created="Fri, 27 Jun 2014 14:44:57 +0000"/>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzwq33:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>14690</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>