<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 02:02:55 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92">
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-6749] kernel panic during umount</title>
                <link>https://jira.whamcloud.com/browse/LU-6749</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;I hit this a few times on the master branch in my local testing. What I did is:&lt;br/&gt;
1. MDSCOUNT=4 sh llmount.sh&lt;br/&gt;
2. sh llmountcleanup.sh&lt;br/&gt;
3. MDSCOUNT=4 sh llmountcleanup.sh&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;&amp;lt;4&amp;gt;Lustre: 71545:0:(client.c:2003:ptlrpc_expire_one_request()) Skipped 5 previous similar messages
&amp;lt;3&amp;gt;LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
&amp;lt;3&amp;gt;LustreError: Skipped 40 previous similar messages
&amp;lt;3&amp;gt;LustreError: 124973:0:(llog.c:155:llog_cancel_rec()) lustre-MDT0000-osp-MDT0001: fail to write header for llog #0x1:1025#00000000: rc = -5
&amp;lt;3&amp;gt;LustreError: 11-0: lustre-MDT0001-osp-MDT0003: operation obd_ping to node 0@lo failed: rc = -107
&amp;lt;4&amp;gt;Lustre: lustre-MDT0001-osp-MDT0002: Connection to lustre-MDT0001 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
&amp;lt;4&amp;gt;Lustre: Skipped 2 previous similar messages
&amp;lt;6&amp;gt;Lustre: lustre-MDT0001: Not available for connect from 0@lo (stopping)
&amp;lt;6&amp;gt;Lustre: Skipped 2 previous similar messages
&amp;lt;3&amp;gt;LustreError: Skipped 3 previous similar messages
&amp;lt;4&amp;gt;general protection fault: 0000 [#1] SMP
&amp;lt;4&amp;gt;last sysfs file: /sys/devices/system/cpu/possible
&amp;lt;4&amp;gt;CPU 6
&amp;lt;4&amp;gt;Modules linked in: zfs(P)(U) zcommon(P)(U) znvpair(P)(U) zavl(P)(U) zunicode(P)(U) spl(U) zlib_deflate ofd(U) osp(U) lod(U) mdt(U) mdd(U) osd_ldiskfs(U) ldiskfs(U) exportfs lquota(U) lfsck(U) jbd mgc(U) fid(U) fld(U) ptlrpc(U) obdclass(U) ksocklnd(U) lnet(U) sha512_generic crc32c_intel libcfs(U) rfcomm ebtable_nat ebtables ipt_MASQUERADE iptable_nat nf_nat xt_CHECKSUM iptable_mangle sco bridge bnep l2cap autofs4 nfs lockd fscache auth_rpcgss nfs_acl sunrpc 8021q garp stp llc ipt_REJECT nf_conntrack_ipv4 nf_defrag_ipv4 iptable_filter ip_tables ip6t_REJECT nf_conntrack_ipv6 nf_defrag_ipv6 xt_state nf_conntrack ip6table_filter ip6_tables ipv6 vhost_net macvtap macvlan tun kvm_intel kvm uinput microcode vmware_balloon btusb bluetooth rfkill snd_ens1371 snd_rawmidi snd_ac97_codec ac97_bus snd_seq snd_seq_device snd_pcm snd_timer snd soundcore snd_page_alloc e1000 sg i2c_piix4 i2c_core shpchp ext4 jbd2 mbcache sd_mod crc_t10dif sr_mod cdrom mptspi mptscsih mptbase scsi_transport_spi pata_acpi ata_generic ata_piix dm_mirror dm_region_hash dm_log dm_mod [last unloaded: lmv]
&amp;lt;4&amp;gt;
&amp;lt;4&amp;gt;Pid: 124973, comm: umount Tainted: P           ---------------    2.6.32-504.3.3.el6_lustre.gf8babaf.x86_64 #1 VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform
&amp;lt;4&amp;gt;RIP: 0010:[&amp;lt;ffffffff81293ec0&amp;gt;]  [&amp;lt;ffffffff81293ec0&amp;gt;] strchr+0x0/0x30
&amp;lt;4&amp;gt;RSP: 0018:ffff880238b8b7b0  EFLAGS: 00010206
&amp;lt;4&amp;gt;RAX: ffffffff81adee60 RBX: ffff880238b8b800 RCX: 0000000000000000
&amp;lt;4&amp;gt;RDX: ffff880238b8b810 RSI: 000000000000002f RDI: 5a5a5a5a5a5a5a5a
&amp;lt;4&amp;gt;RBP: ffff880238b8b7e8 R08: 0000000000000002 R09: 0000000000000000
&amp;lt;4&amp;gt;R10: ffff88023aeefaa0 R11: 0000000000000008 R12: 5a5a5a5a5a5a5a5a
&amp;lt;4&amp;gt;R13: 5a5a5a5a5a5a5a5a R14: 5a5a5a5a5a5a5a5a R15: ffff880238b8b810
&amp;lt;4&amp;gt;FS:  00007faa4b608740(0000) GS:ffff88002f6c0000(0000) knlGS:0000000000000000
&amp;lt;4&amp;gt;CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
&amp;lt;4&amp;gt;CR2: 00007f9427e09000 CR3: 000000019a54a000 CR4: 00000000001407e0
&amp;lt;4&amp;gt;DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
&amp;lt;4&amp;gt;DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
&amp;lt;4&amp;gt;Process umount (pid: 124973, threadinfo ffff880238b8a000, task ffff88023951cae0)
&amp;lt;4&amp;gt;Stack:
&amp;lt;4&amp;gt; ffffffff811ff7f5 ffff880238b8b828 5a5a5a5a5a5a5a5a ffff8801bc39c138
&amp;lt;4&amp;gt;&amp;lt;d&amp;gt; ffffffffa10e30a0 ffff880238b8b8b8 0000000000000000 ffff880238b8b838
&amp;lt;4&amp;gt;&amp;lt;d&amp;gt; ffffffff8120060b 00000000000000a0 5a5a5a5a5a5a5a5a ffff8801b9eb92c0
&amp;lt;4&amp;gt;Call Trace:
&amp;lt;4&amp;gt; [&amp;lt;ffffffff811ff7f5&amp;gt;] ? __xlate_proc_name+0x45/0xf0
&amp;lt;4&amp;gt; [&amp;lt;ffffffff8120060b&amp;gt;] remove_proc_subtree+0x3b/0x180
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa06d1cab&amp;gt;] proc_remove+0x1b/0x20 [obdclass]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa06d1cc9&amp;gt;] lprocfs_remove+0x19/0x30 [obdclass]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa10b9b53&amp;gt;] lod_procfs_fini+0x33/0x70 [lod]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa10ac3f6&amp;gt;] lod_device_fini+0xd6/0x220 [lod]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa06e6ac2&amp;gt;] class_cleanup+0x552/0xd10 [obdclass]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa06c7136&amp;gt;] ? class_name2dev+0x56/0xe0 [obdclass]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa06e926a&amp;gt;] class_process_config+0x1fea/0x27c0 [obdclass]
&amp;lt;4&amp;gt; [&amp;lt;ffffffff81174f4c&amp;gt;] ? __kmalloc+0x20c/0x220
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa06e2225&amp;gt;] ? lustre_cfg_new+0x435/0x630 [obdclass]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa06e9b61&amp;gt;] class_manual_cleanup+0x121/0x870 [obdclass]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa06c62b8&amp;gt;] ? class_disconnect+0xa8/0x4a0 [obdclass]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa10ac88a&amp;gt;] lod_obd_disconnect+0x12a/0x1f0 [lod]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa0f70501&amp;gt;] mdd_process_config+0x331/0x5d0 [mdd]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa0fe5138&amp;gt;] mdt_stack_fini+0x718/0x1240 [mdt]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa0fe6570&amp;gt;] mdt_device_fini+0x910/0x1370 [mdt]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa06ca366&amp;gt;] ? class_disconnect_exports+0x116/0x2f0 [obdclass]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa06e6ac2&amp;gt;] class_cleanup+0x552/0xd10 [obdclass]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa06c7136&amp;gt;] ? class_name2dev+0x56/0xe0 [obdclass]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa06e926a&amp;gt;] class_process_config+0x1fea/0x27c0 [obdclass]
&amp;lt;4&amp;gt; [&amp;lt;ffffffff81174f4c&amp;gt;] ? __kmalloc+0x20c/0x220
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa06e2225&amp;gt;] ? lustre_cfg_new+0x435/0x630 [obdclass]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa06e9b61&amp;gt;] class_manual_cleanup+0x121/0x870 [obdclass]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa06c7136&amp;gt;] ? class_name2dev+0x56/0xe0 [obdclass]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa07225b7&amp;gt;] server_put_super+0xb17/0xea0 [obdclass]
&amp;lt;4&amp;gt; [&amp;lt;ffffffff8119082b&amp;gt;] generic_shutdown_super+0x5b/0xe0
&amp;lt;4&amp;gt; [&amp;lt;ffffffff81190916&amp;gt;] kill_anon_super+0x16/0x60
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa06ebdc6&amp;gt;] lustre_kill_super+0x36/0x60 [obdclass]
&amp;lt;4&amp;gt; [&amp;lt;ffffffff811910b7&amp;gt;] deactivate_super+0x57/0x80
&amp;lt;4&amp;gt; [&amp;lt;ffffffff811b0cef&amp;gt;] mntput_no_expire+0xbf/0x110
&amp;lt;4&amp;gt; [&amp;lt;ffffffff811b183b&amp;gt;] sys_umount+0x7b/0x3a0
&amp;lt;4&amp;gt; [&amp;lt;ffffffff8100b072&amp;gt;] system_call_fastpath+0x16/0x1b
&amp;lt;4&amp;gt;Code: 75 19 48 83 e9 01 84 c0 74 06 48 83 ea 01 75 db 31 c0 c9 c3 0f 1f 80 00 00 00 00 44 38 c0 c9 19 c0 83 c8 01 c3 66 0f 1f 44 00 00 &amp;lt;0f&amp;gt; b6 17 55 48 89 f8 48 89 e5 40 38 f2 75 15 eb 19 0f 1f 80 00
&amp;lt;1&amp;gt;RIP  [&amp;lt;ffffffff81293ec0&amp;gt;] strchr+0x0/0x30
&amp;lt;4&amp;gt; RSP &amp;lt;ffff880238b8b7b0&amp;gt;
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</description>
                <environment></environment>
        <key id="30755">LU-6749</key>
            <summary>kernel panic during umount</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="4" iconUrl="https://jira.whamcloud.com/images/icons/priorities/minor.svg">Minor</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="bobijam">Zhenyu Xu</assignee>
                                    <reporter username="di.wang">Di Wang</reporter>
                        <labels>
                    </labels>
                <created>Sun, 21 Jun 2015 00:18:25 +0000</created>
                <updated>Mon, 31 Aug 2015 13:08:22 +0000</updated>
                            <resolved>Mon, 31 Aug 2015 13:08:22 +0000</resolved>
                                    <version>Lustre 2.8.0</version>
                                    <fixVersion>Lustre 2.8.0</fixVersion>
                                        <due></due>
                            <votes>0</votes>
                                    <watches>4</watches>
                    <comments>
                            <comment id="124342" author="jgmitter" created="Mon, 17 Aug 2015 18:57:53 +0000"  >&lt;p&gt;Hi Bobijam,&lt;br/&gt;
Can you look into this?&lt;br/&gt;
Thanks.&lt;br/&gt;
Joe&lt;/p&gt;</comment>
                            <comment id="124376" author="gerrit" created="Tue, 18 Aug 2015 04:36:19 +0000"  >&lt;p&gt;Bobi Jam (bobijam@hotmail.com) uploaded a new patch: &lt;a href=&quot;http://review.whamcloud.com/16011&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/16011&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-6749&quot; title=&quot;kernel panic during umount&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-6749&quot;&gt;&lt;del&gt;LU-6749&lt;/del&gt;&lt;/a&gt; lod: properly remove proc entry&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: c6a501a8f4669ffe4cf1b4b48ea7286a098b21bf&lt;/p&gt;</comment>
                            <comment id="125633" author="gerrit" created="Sun, 30 Aug 2015 23:06:15 +0000"  >&lt;p&gt;Oleg Drokin (oleg.drokin@intel.com) merged in patch &lt;a href=&quot;http://review.whamcloud.com/16011/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/16011/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-6749&quot; title=&quot;kernel panic during umount&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-6749&quot;&gt;&lt;del&gt;LU-6749&lt;/del&gt;&lt;/a&gt; lod: properly remove proc entry&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: f31d05319bae3640ba9bb047f842d6f12723cf7b&lt;/p&gt;</comment>
                            <comment id="125673" author="jgmitter" created="Mon, 31 Aug 2015 13:08:22 +0000"  >&lt;p&gt;Landed for 2.8.&lt;/p&gt;</comment>
                    </comments>
                    <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                    <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzxg8v:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                <customfields>
    </item>
</channel>
</rss>