<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 01:53:51 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92">
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
<language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-5713] Interop 2.5&lt;-&gt;2.7 sanity-lfsck test_8: (mdt_handler.c:4378:mdt_fini()) ASSERTION( atomic_read(&amp;d-&gt;ld_ref) == 0 ) failed</title>
                <link>https://jira.whamcloud.com/browse/LU-5713</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;This issue was created by maloo for sarah &amp;lt;sarah@whamcloud.com&amp;gt;&lt;/p&gt;

&lt;p&gt;This issue relates to the following test suite run: &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/34941bc6-4732-11e4-8a80-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/34941bc6-4732-11e4-8a80-5254006e85c2&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The sub-test test_8 failed with the following error:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;test failed to respond and timed out
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;MDS console&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;09:49:10:Lustre: DEBUG MARKER: == sanity-lfsck test 8: LFSCK state machine == 15:48:59 (1411832939)
09:49:10:Lustre: DEBUG MARKER: grep -c /mnt/mds1&apos; &apos; /proc/mounts
09:49:10:Lustre: DEBUG MARKER: umount -d -f /mnt/mds1
09:49:10:LustreError: 24584:0:(qsd_reint.c:54:qsd_reint_completion()) lustre-MDT0000: failed to enqueue global quota lock, glb fid:[0x200000006:0x10000:0x0], rc:-5
09:49:10:LustreError: 24584:0:(qsd_reint.c:54:qsd_reint_completion()) Skipped 2 previous similar messages
09:49:10:LustreError: 25070:0:(ldlm_lib.c:2106:target_stop_recovery_thread()) lustre-MDT0000: Aborting recovery
09:49:10:Lustre: 24587:0:(ldlm_lib.c:1781:target_recovery_overseer()) recovery is aborted, evict exports in recovery
09:49:10:LustreError: 24567:0:(osp_precreate.c:725:osp_precreate_cleanup_orphans()) lustre-OST0000-osc-MDT0000: cannot cleanup orphans: rc = -5
09:49:10:LustreError: 24569:0:(osp_precreate.c:466:osp_precreate_send()) lustre-OST0001-osc-MDT0000: can&apos;t precreate: rc = -5
09:49:10:LustreError: 24569:0:(osp_precreate.c:976:osp_precreate_thread()) lustre-OST0001-osc-MDT0000: cannot precreate objects: rc = -5
09:49:10:Lustre: *** cfs_fail_loc=1602, val=0***
09:49:10:LustreError: 2658:0:(client.c:1075:ptlrpc_import_delay_req()) @@@ IMP_CLOSED   req@ffff88007be6f400 x1480397462816312/t0(0) o13-&amp;gt;lustre-OST0001-osc-MDT0000@10.1.6.53@tcp:7/4 lens 224/368 e 0 to 0 dl 0 ref 1 fl Rpc:/0/ffffffff rc 0/-1
09:49:10:LustreError: 2658:0:(client.c:1075:ptlrpc_import_delay_req()) Skipped 12 previous similar messages
09:49:10:LustreError: 2643:0:(lod_dev.c:913:lod_device_free()) ASSERTION( atomic_read(&amp;amp;lu-&amp;gt;ld_ref) == 0 ) failed: 
09:49:10:LustreError: 25070:0:(mdt_handler.c:4378:mdt_fini()) ASSERTION( atomic_read(&amp;amp;d-&amp;gt;ld_ref) == 0 ) failed: 
09:49:10:LustreError: 25070:0:(mdt_handler.c:4378:mdt_fini()) LBUG
09:49:10:Pid: 25070, comm: umount
09:49:10:
09:49:10:Call Trace:
09:49:10: [&amp;lt;ffffffffa0483895&amp;gt;] libcfs_debug_dumpstack+0x55/0x80 [libcfs]
09:49:10: [&amp;lt;ffffffffa0483e97&amp;gt;] lbug_with_loc+0x47/0xb0 [libcfs]
09:49:10: [&amp;lt;ffffffffa0ecbb9f&amp;gt;] mdt_device_fini+0xc8f/0xcd0 [mdt]
09:49:10: [&amp;lt;ffffffffa05b95d6&amp;gt;] ? class_disconnect_exports+0x116/0x2f0 [obdclass]
09:49:10: [&amp;lt;ffffffffa05d8f52&amp;gt;] class_cleanup+0x562/0xd20 [obdclass]
09:49:10: [&amp;lt;ffffffffa05b6f16&amp;gt;] ? class_name2dev+0x56/0xe0 [obdclass]
09:49:10: [&amp;lt;ffffffffa05dac6a&amp;gt;] class_process_config+0x155a/0x1ac0 [obdclass]
09:49:10: [&amp;lt;ffffffffa05d39b5&amp;gt;] ? lustre_cfg_new+0x4f5/0x6f0 [obdclass]
09:49:10: [&amp;lt;ffffffffa05db347&amp;gt;] class_manual_cleanup+0x177/0x700 [obdclass]
09:49:10: [&amp;lt;ffffffffa05b6f16&amp;gt;] ? class_name2dev+0x56/0xe0 [obdclass]
09:49:10: [&amp;lt;ffffffffa0617a57&amp;gt;] server_put_super+0xb37/0xe50 [obdclass]
09:49:10: [&amp;lt;ffffffff8118b63b&amp;gt;] generic_shutdown_super+0x5b/0xe0
09:49:10: [&amp;lt;ffffffff8118b726&amp;gt;] kill_anon_super+0x16/0x60
09:49:10: [&amp;lt;ffffffffa05dd246&amp;gt;] lustre_kill_super+0x36/0x60 [obdclass]
09:49:10: [&amp;lt;ffffffff8118bec7&amp;gt;] deactivate_super+0x57/0x80
09:49:10: [&amp;lt;ffffffff811ab8cf&amp;gt;] mntput_no_expire+0xbf/0x110
09:49:10: [&amp;lt;ffffffff811ac41b&amp;gt;] sys_umount+0x7b/0x3a0
09:49:10: [&amp;lt;ffffffff8100b072&amp;gt;] system_call_fastpath+0x16/0x1b
09:49:10:
09:49:10:LustreError: 2643:0:(lod_dev.c:913:lod_device_free()) LBUG
09:49:10:Pid: 2643, comm: obd_zombid
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Info required for matching: sanity-lfsck 8&lt;/p&gt;</description>
                <environment>server: lustre-master build # 2671&lt;br/&gt;
client: 2.5.3</environment>
        <key id="26894">LU-5713</key>
            <summary>Interop 2.5&lt;-&gt;2.7 sanity-lfsck test_8: (mdt_handler.c:4378:mdt_fini()) ASSERTION( atomic_read(&amp;d-&gt;ld_ref) == 0 ) failed</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="4" iconUrl="https://jira.whamcloud.com/images/icons/priorities/minor.svg">Minor</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="5">Cannot Reproduce</resolution>
                                        <assignee username="wc-triage">WC Triage</assignee>
                                    <reporter username="maloo">Maloo</reporter>
                        <labels>
                    </labels>
                <created>Tue, 7 Oct 2014 18:10:08 +0000</created>
                <updated>Mon, 17 Apr 2017 21:34:06 +0000</updated>
                            <resolved>Mon, 17 Apr 2017 21:34:06 +0000</resolved>
                                    <version>Lustre 2.7.0</version>
                    <version>Lustre 2.8.0</version>
                                                        <due></due>
                            <votes>0</votes>
                                    <watches>6</watches>
                                                                            <comments>
                            <comment id="106517" author="adilger" created="Tue, 10 Feb 2015 20:05:56 +0000"  >&lt;p&gt;Hit this again: &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/856f4938-aea4-11e4-98a3-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/856f4938-aea4-11e4-98a3-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="108217" author="bfaccini" created="Fri, 27 Feb 2015 09:46:28 +0000"  >&lt;p&gt;Andreas,&lt;br/&gt;
I also had one failure in my patch&apos;s auto-test sessions (&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/6e9169d6-a179-11e4-9db6-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/6e9169d6-a179-11e4-9db6-5254006e85c2&lt;/a&gt;) that was automatically (??) linked to this ticket, but after an in-depth look at its logs I did not find the &quot;(mdt_handler.c:4378:mdt_fini()) ASSERTION( atomic_read(&amp;amp;d-&amp;gt;ld_ref) == 0 )&quot; assertion addressed in this ticket, but rather a &quot;sanity-lfsck test_4: &apos;(7) unexpected status&apos;&quot; (tracked in &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-6147&quot; title=&quot;sanity-lfsck test_4: &amp;#39;(7) unexpected status&amp;#39; &quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-6147&quot;&gt;&lt;del&gt;LU-6147&lt;/del&gt;&lt;/a&gt;) with a subsequent timeout in sanity-lfsck test_5.&lt;br/&gt;
Looking at your own reported failed session, it seems very similar, and thus much more related to &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-6147&quot; title=&quot;sanity-lfsck test_4: &amp;#39;(7) unexpected status&amp;#39; &quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-6147&quot;&gt;&lt;del&gt;LU-6147&lt;/del&gt;&lt;/a&gt;.&lt;/p&gt;</comment>
                            <comment id="125832" author="sarah" created="Tue, 1 Sep 2015 03:03:31 +0000"  >&lt;p&gt;Hit this error when running sanity test_65j with 2.8 server (DNE mode) and 2.5.3 client:&lt;/p&gt;

&lt;p&gt;MDS&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;Lustre: DEBUG MARKER: == sanity test 65j: set default striping on root directory (bug 6367)=== 19:58:05 (1441076285)
LustreError: 46370:0:(client.c:1138:ptlrpc_import_delay_req()) @@@ IMP_CLOSED   req@ffff880821fb8cc0 x1511074876772396/t0(0) o1000-&amp;gt;lustre-MDT0001-osp-MDT0000@0@lo:24/4 lens 248/16608 e 0 to 0 dl 0 ref 2 fl Rpc:/0/ffffffff rc 0/-1
LustreError: 46370:0:(client.c:1138:ptlrpc_import_delay_req()) Skipped 3 previous similar messages
LustreError: 46370:0:(osp_object.c:586:osp_attr_get()) lustre-MDT0001-osp-MDT0000:osp_attr_get update error [0x240000402:0x1:0x0]: rc = -5
LustreError: 46370:0:(llog.c:180:llog_cancel_rec()) lustre-MDT0001-osp-MDT0000: fail to write header for llog #0x1:1073742850#00000000: rc = -5
Lustre: lustre-MDT0000: Not available for connect from 10.2.4.56@tcp (stopping)
Lustre: Skipped 10 previous similar messages
LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation obd_ping to node 0@lo failed: rc = -107
Lustre: lustre-MDT0000-osp-MDT0002: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
Lustre: Skipped 2 previous similar messages
LustreError: Skipped 7 previous similar messages
Lustre: lustre-MDT0000: Not available for connect from 10.2.4.56@tcp (stopping)
Lustre: Skipped 18 previous similar messages
Lustre: lustre-MDT0000 is waiting for obd_unlinked_exports more than 8 seconds. The obd refcount = 3. Is it stuck?
LustreError: 4752:0:(osp_dev.c:1255:osp_device_free()) header@ffff880400337c80[0x0, 1, [0x1:0x0:0x0] hash exist]{

LustreError: 4752:0:(osp_dev.c:1255:osp_device_free()) ....local_storage@ffff880400337cd0

LustreError: 4752:0:(osp_dev.c:1255:osp_device_free()) ....osd-ldiskfs@ffff8804002f4e40osd-ldiskfs-object@ffff8804002f4e40(i:ffff88041678d6e0:81/3187989141)[plain]

LustreError: 4752:0:(osp_dev.c:1255:osp_device_free()) } header@ffff880400337c80

LustreError: 4752:0:(osp_dev.c:1255:osp_device_free()) header@ffff8804358af840[0x0, 1, [0x200000003:0x0:0x0] hash exist]{

LustreError: 4752:0:(osp_dev.c:1255:osp_device_free()) ....local_storage@ffff8804358af890

LustreError: 4752:0:(osp_dev.c:1255:osp_device_free()) ....osd-ldiskfs@ffff8804012a5200osd-ldiskfs-object@ffff8804012a5200(i:ffff880401c1b560:79/3187989106)[plain]

LustreError: 4752:0:(osp_dev.c:1255:osp_device_free()) } header@ffff8804358af840

LustreError: 4752:0:(osp_dev.c:1255:osp_device_free()) header@ffff8804002476c0[0x0, 1, [0x200000003:0x2:0x0] hash exist]{

LustreError: 4752:0:(osp_dev.c:1255:osp_device_free()) ....local_storage@ffff880400247710

LustreError: 4752:0:(osp_dev.c:1255:osp_device_free()) ....osd-ldiskfs@ffff880400f2ea80osd-ldiskfs-object@ffff880400f2ea80(i:ffff88042a53b350:80/3187989107)[plain]

LustreError: 4752:0:(osp_dev.c:1255:osp_device_free()) } header@ffff8804002476c0

LustreError: 4752:0:(osp_dev.c:1255:osp_device_free()) header@ffff88040004a5c0[0x0, 1, [0xa:0x0:0x0] hash exist]{

LustreError: 4752:0:(osp_dev.c:1255:osp_device_free()) ....local_storage@ffff88040004a610

LustreError: 4752:0:(osp_dev.c:1255:osp_device_free()) ....osd-ldiskfs@ffff88040031d800osd-ldiskfs-object@ffff88040031d800(i:ffff8804166a3110:82/3187989175)[plain]

LustreError: 4752:0:(osp_dev.c:1255:osp_device_free()) } header@ffff88040004a5c0

LustreError: 4752:0:(osp_dev.c:1255:osp_device_free()) header@ffff88040031dec0[0x1, 1, [0x200000001:0x1017:0x0] hash exist]{

LustreError: 4752:0:(osp_dev.c:1255:osp_device_free()) ....local_storage@ffff88040031df10

LustreError: 4752:0:(osp_dev.c:1255:osp_device_free()) ....osd-ldiskfs@ffff88040002f840osd-ldiskfs-object@ffff88040002f840(i:ffff8804220777a0:12/1262305354)[plain]

LustreError: 4752:0:(osp_dev.c:1255:osp_device_free()) } header@ffff88040031dec0

LustreError: 46370:0:(mdt_handler.c:4275:mdt_fini()) ASSERTION( atomic_read(&amp;amp;d-&amp;gt;ld_ref) == 0 ) failed: 
LustreError: 46370:0:(mdt_handler.c:4275:mdt_fini()) LBUG

Message fromLustreError: 4752:0:(osp_dev.c:1255:osp_device_free()) header@ffInitializing cgroup subsys cpuset
Initializing cgroup subsys cpu
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
                            <comment id="133833" author="heckes" created="Wed, 18 Nov 2015 13:33:08 +0000"  >&lt;p&gt;This error also occurs during soak testing of build &apos;20151116&apos; (see &lt;a href=&quot;https://wiki.hpdd.intel.com/display/Releases/Soak+Testing+on+Lola#SoakTestingonLola-20151116&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://wiki.hpdd.intel.com/display/Releases/Soak+Testing+on+Lola#SoakTestingonLola-20151116&lt;/a&gt;) upon unmounting MDTs:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;&amp;lt;0&amp;gt;LustreError: 6529:0:(lod_dev.c:1570:lod_device_free()) ASSERTION( atomic_read(&amp;amp;lu-&amp;gt;ld_ref) == 0 ) failed: lu is ffff8804006dc000
&amp;lt;0&amp;gt;LustreError: 6529:0:(lod_dev.c:1570:lod_device_free()) LBUG
&amp;lt;4&amp;gt;Pid: 6529, comm: obd_zombid
&amp;lt;4&amp;gt;
&amp;lt;4&amp;gt;Call Trace:
&amp;lt;0&amp;gt;LustreError: 13065:0:(mdt_handler.c:4295:mdt_fini()) ASSERTION( atomic_read(&amp;amp;d-&amp;gt;ld_ref) == 0 ) failed: 
&amp;lt;0&amp;gt;LustreError: 13065:0:(mdt_handler.c:4295:mdt_fini()) LBUG
&amp;lt;4&amp;gt;Pid: 13065, comm: umount
&amp;lt;4&amp;gt;
&amp;lt;4&amp;gt;Call Trace:
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa08a5875&amp;gt;] libcfs_debug_dumpstack+0x55/0x80 [libcfs]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa08a5e77&amp;gt;] lbug_with_loc+0x47/0xb0 [libcfs]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa08a5875&amp;gt;] libcfs_debug_dumpstack+0x55/0x80 [libcfs]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa08a5e77&amp;gt;] lbug_with_loc+0x47/0xb0 [libcfs]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa13d7ba1&amp;gt;] lod_device_free+0x2c1/0x330 [lod]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa1308b79&amp;gt;] mdt_device_fini+0xed9/0xf40 [mdt]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa09d00d6&amp;gt;] ? class_disconnect_exports+0x116/0x2f0 [obdclass]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa09ea1f2&amp;gt;] class_cleanup+0x572/0xd20 [obdclass]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa09e3fed&amp;gt;] class_decref+0x3dd/0x4c0 [obdclass]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa09cadb6&amp;gt;] ? class_name2dev+0x56/0xe0 [obdclass]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa09ec876&amp;gt;] class_process_config+0x1ed6/0x2830 [obdclass]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa08b16c1&amp;gt;] ? libcfs_debug_msg+0x41/0x50 [libcfs]
&amp;lt;4&amp;gt; [&amp;lt;ffffffff8117523c&amp;gt;] ? __kmalloc+0x21c/0x230
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa09ceffc&amp;gt;] obd_zombie_impexp_cull+0x61c/0xac0 [obdclass]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa09ed68f&amp;gt;] class_manual_cleanup+0x4bf/0x8e0 [obdclass]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa09cadb6&amp;gt;] ? class_name2dev+0x56/0xe0 [obdclass]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa0a2558c&amp;gt;] server_put_super+0xa0c/0xed0 [obdclass]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa09cf505&amp;gt;] obd_zombie_impexp_thread+0x65/0x190 [obdclass]
&amp;lt;4&amp;gt; [&amp;lt;ffffffff811ac776&amp;gt;] ? invalidate_inodes+0xf6/0x190
&amp;lt;4&amp;gt; [&amp;lt;ffffffff81190b7b&amp;gt;] generic_shutdown_super+0x5b/0xe0
&amp;lt;4&amp;gt; [&amp;lt;ffffffff81190c66&amp;gt;] kill_anon_super+0x16/0x60
&amp;lt;4&amp;gt; [&amp;lt;ffffffff81064c00&amp;gt;] ? default_wake_function+0x0/0x20
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa09f0546&amp;gt;] lustre_kill_super+0x36/0x60 [obdclass]
&amp;lt;4&amp;gt; [&amp;lt;ffffffff81191407&amp;gt;] deactivate_super+0x57/0x80
&amp;lt;4&amp;gt; [&amp;lt;ffffffff811b10df&amp;gt;] mntput_no_expire+0xbf/0x110
&amp;lt;4&amp;gt; [&amp;lt;ffffffff811b1c2b&amp;gt;] sys_umount+0x7b/0x3a0
&amp;lt;4&amp;gt; [&amp;lt;ffffffff8100b0d2&amp;gt;] system_call_fastpath+0x16/0x1b
&amp;lt;4&amp;gt;
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa09cf4a0&amp;gt;] ? obd_zombie_impexp_thread+0x0/0x190 [obdclass]
&amp;lt;4&amp;gt; [&amp;lt;ffffffff8109e78e&amp;gt;] kthread+0x9e/0xc0
&amp;lt;4&amp;gt; [&amp;lt;ffffffff8100c28a&amp;gt;] child_rip+0xa/0x20
&amp;lt;4&amp;gt; [&amp;lt;ffffffff8109e6f0&amp;gt;] ? kthread+0x0/0xc0
&amp;lt;4&amp;gt; [&amp;lt;ffffffff8100c280&amp;gt;] ? child_rip+0x0/0x20
&amp;lt;4&amp;gt;
&amp;lt;0&amp;gt;Kernel panic - not syncing: LBUG
&amp;lt;4&amp;gt;Pid: 13065, comm: umount Tainted: P           ---------------    2.6.32-504.30.3.el6_lustre.gb64632c.x86_64 #1
&amp;lt;4&amp;gt;Call Trace:
&amp;lt;4&amp;gt; [&amp;lt;ffffffff81529c9c&amp;gt;] ? panic+0xa7/0x16f
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa08a5ecb&amp;gt;] ? lbug_with_loc+0x9b/0xb0 [libcfs]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa1308b79&amp;gt;] ? mdt_device_fini+0xed9/0xf40 [mdt]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa09d00d6&amp;gt;] ? class_disconnect_exports+0x116/0x2f0 [obdclass]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa09ea1f2&amp;gt;] ? class_cleanup+0x572/0xd20 [obdclass]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa09cadb6&amp;gt;] ? class_name2dev+0x56/0xe0 [obdclass]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa09ec876&amp;gt;] ? class_process_config+0x1ed6/0x2830 [obdclass]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa08b16c1&amp;gt;] ? libcfs_debug_msg+0x41/0x50 [libcfs]
&amp;lt;4&amp;gt; [&amp;lt;ffffffff8117523c&amp;gt;] ? __kmalloc+0x21c/0x230
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa09ed68f&amp;gt;] ? class_manual_cleanup+0x4bf/0x8e0 [obdclass]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa09cadb6&amp;gt;] ? class_name2dev+0x56/0xe0 [obdclass]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa0a2558c&amp;gt;] ? server_put_super+0xa0c/0xed0 [obdclass]
&amp;lt;4&amp;gt; [&amp;lt;ffffffff811ac776&amp;gt;] ? invalidate_inodes+0xf6/0x190
&amp;lt;4&amp;gt; [&amp;lt;ffffffff81190b7b&amp;gt;] ? generic_shutdown_super+0x5b/0xe0
&amp;lt;4&amp;gt; [&amp;lt;ffffffff81190c66&amp;gt;] ? kill_anon_super+0x16/0x60
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa09f0546&amp;gt;] ? lustre_kill_super+0x36/0x60 [obdclass]
&amp;lt;4&amp;gt; [&amp;lt;ffffffff81191407&amp;gt;] ? deactivate_super+0x57/0x80
&amp;lt;4&amp;gt; [&amp;lt;ffffffff811b10df&amp;gt;] ? mntput_no_expire+0xbf/0x110
&amp;lt;4&amp;gt; [&amp;lt;ffffffff811b1c2b&amp;gt;] ? sys_umount+0x7b/0x3a0
&amp;lt;4&amp;gt; [&amp;lt;ffffffff8100b0d2&amp;gt;] ? system_call_fastpath+0x16/0x1b
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;The crash dump file is available via the head node of cluster &lt;em&gt;lola&lt;/em&gt;: &lt;tt&gt;lhn:/scratch/crashdumps/lu-5713/lola-8-127.0.0.1-2015-11-18-04:11:26&lt;/tt&gt;&lt;/p&gt;</comment>
                            <comment id="136567" author="jamesanunez" created="Wed, 16 Dec 2015 16:49:18 +0000"  >&lt;p&gt;We hit this LBUG on master (not interop testing):&lt;br/&gt;
2015-12-15 19:41:33 - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/7ff1ea68-a392-11e5-9b3d-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/7ff1ea68-a392-11e5-9b3d-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="192364" author="adilger" created="Mon, 17 Apr 2017 21:34:06 +0000"  >&lt;p&gt;Close old issue.&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                            <outwardlinks description="is related to ">
                                        <issuelink>
            <issuekey id="23034">LU-4595</issuekey>
        </issuelink>
                            </outwardlinks>
                                                        </issuelinktype>
                    </issuelinks>
                <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzwy1b:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>16027</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>