<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 01:42:15 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary, append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-4385] replay-single test 61d causes oops in osd_device_fini()</title>
                <link>https://jira.whamcloud.com/browse/LU-4385</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;My local test runs show this bug almost every time in test 61d of replay-single.sh:&lt;/p&gt;

&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;Dec 14 13:09:17 nodez kernel: Lustre: DEBUG MARKER: == replay-single test 61d: error in llog_setup should cleanup the llog context correctly == 13:09:16 (1387012156)
Dec 14 13:09:17 nodez kernel: Lustre: Failing over lustre-MDT0000
Dec 14 13:09:17 nodez kernel: Lustre: server umount lustre-MDT0000 complete
Dec 14 13:09:17 nodez kernel: LDISKFS-fs (loop0): mounted filesystem with ordered data mode. quota=on. Opts: 
Dec 14 13:09:17 nodez kernel: Lustre: *** cfs_fail_loc=605, val=0***
Dec 14 13:09:17 nodez kernel: LustreError: 8279:0:(llog_obd.c:207:llog_setup()) MGS: ctxt 0 lop_setup=ffffffffa0e26d90 failed: rc = -95
Dec 14 13:09:17 nodez kernel: LustreError: 8279:0:(obd_config.c:572:class_setup()) setup MGS failed (-95)
Dec 14 13:09:17 nodez kernel: LustreError: 8279:0:(obd_mount.c:199:lustre_start_simple()) MGS setup error -95
Dec 14 13:09:17 nodez kernel: LustreError: 8279:0:(obd_mount_server.c:134:server_deregister_mount()) MGS not registered
Dec 14 13:09:17 nodez kernel: LustreError: 15e-a: Failed to start MGS &lt;span class=&quot;code-quote&quot;&gt;&apos;MGS&apos;&lt;/span&gt; (-95). Is the &lt;span class=&quot;code-quote&quot;&gt;&apos;mgs&apos;&lt;/span&gt; module loaded?
Dec 14 13:09:17 nodez kernel: LustreError: 8279:0:(obd_mount_server.c:844:lustre_disconnect_lwp()) lustre-MDT0000-lwp-MDT0000: Can&apos;t end config log lustre-client.
Dec 14 13:09:17 nodez kernel: LustreError: 8279:0:(obd_mount_server.c:1419:server_put_super()) lustre-MDT0000: failed to disconnect lwp. (rc=-2)
Dec 14 13:09:17 nodez kernel: LustreError: 8279:0:(obd_mount_server.c:1449:server_put_super()) no obd lustre-MDT0000
Dec 14 13:09:17 nodez kernel: LustreError: 8279:0:(obd_mount_server.c:134:server_deregister_mount()) lustre-MDT0000 not registered
Dec 14 13:09:18 nodez kernel: general protection fault: 0000 [#1] SMP 
Dec 14 13:09:18 nodez kernel: last sysfs file: /sys/devices/system/cpu/possible
Dec 14 13:09:18 nodez kernel: CPU 1 
Dec 14 13:09:18 nodez kernel: Modules linked in: lustre ofd osp lod ost mdt mdd mgs osd_ldiskfs ldiskfs lquota lfsck obdecho mgc lov osc mdc lmv fid fld ptlrpc obdclass ksocklnd lnet libcfs zfs(P) zcommon(P) znvpair(P) zavl(P) zunicode(P) spl vboxsf vboxguest [last unloaded: libcfs]
Dec 14 13:09:18 nodez kernel: 
Dec 14 13:09:18 nodez kernel: Pid: 8279, comm: mount.lustre Tainted: P           ---------------  T 2.6.32 #0 innotek GmbH VirtualBox/VirtualBox
Dec 14 13:09:18 nodez kernel: RIP: 0010:[&amp;lt;ffffffffa0e46f03&amp;gt;]  [&amp;lt;ffffffffa0e46f03&amp;gt;] lprocfs_remove_nolock+0x33/0x100 [obdclass]
Dec 14 13:09:18 nodez kernel: RSP: 0018:ffff88003d34d928  EFLAGS: 00010202
Dec 14 13:09:18 nodez kernel: RAX: ffffffffa0ec08e0 RBX: 6b6b6b6b6b6b6b6b RCX: 0000000000000000
Dec 14 13:09:18 nodez kernel: RDX: 0000000000000000 RSI: 0000000000000030 RDI: ffff8800327b73c0
Dec 14 13:09:18 nodez kernel: RBP: 6b6b6b6b6b6b6b6b R08: 0000000000000158 R09: 0000000000000000
Dec 14 13:09:18 nodez kernel: R10: ffff880033c82a98 R11: ffff880033c829c0 R12: ffff8800327b74c8
Dec 14 13:09:18 nodez kernel: R13: 6b6b6b6b6b6b6b6b R14: 0000000000000002 R15: ffff88003c5e7aa0
Dec 14 13:09:18 nodez kernel: FS:  00007fadb4325700(0000) GS:ffff880001e80000(0000) knlGS:0000000000000000
Dec 14 13:09:18 nodez kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
Dec 14 13:09:18 nodez kernel: CR2: 00007f7e81b12ea0 CR3: 000000002b85f000 CR4: 00000000000006e0
Dec 14 13:09:18 nodez kernel: DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
Dec 14 13:09:18 nodez kernel: DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Dec 14 13:09:18 nodez kernel: &lt;span class=&quot;code-object&quot;&gt;Process&lt;/span&gt; mount.lustre (pid: 8279, threadinfo ffff88003d34c000, task ffff88003e7547f0)
Dec 14 13:09:18 nodez kernel: Stack:
Dec 14 13:09:18 nodez kernel: ffff88003d6a2ed8 ffff880036490b78 ffff88003d6a2f80 ffff8800327b73c0
Dec 14 13:09:18 nodez kernel: &amp;lt;d&amp;gt; ffff88003d34d9d8 ffff8800327b74c8 0000000000000008 ffffffffa0e474a8
Dec 14 13:09:18 nodez kernel: &amp;lt;d&amp;gt; ffff8800327b7330 ffffffffa0660952 ffff88003d620000 ffff88003d34d9d8
Dec 14 13:09:18 nodez kernel: Call Trace:
Dec 14 13:09:18 nodez kernel: [&amp;lt;ffffffffa0e474a8&amp;gt;] ? lprocfs_remove+0x18/0x30 [obdclass]
Dec 14 13:09:18 nodez kernel: [&amp;lt;ffffffffa0660952&amp;gt;] ? qsd_fini+0x72/0x440 [lquota]
Dec 14 13:09:18 nodez kernel: [&amp;lt;ffffffffa0742152&amp;gt;] ? osd_shutdown+0x32/0xe0 [osd_ldiskfs]
Dec 14 13:09:18 nodez kernel: [&amp;lt;ffffffffa0742549&amp;gt;] ? osd_device_fini+0x119/0x180 [osd_ldiskfs]
Dec 14 13:09:18 nodez kernel: [&amp;lt;ffffffffa0e56784&amp;gt;] ? class_cleanup+0x804/0xd90 [obdclass]
Dec 14 13:09:18 nodez kernel: [&amp;lt;ffffffffa0e35ae0&amp;gt;] ? class_name2dev+0x70/0xd0 [obdclass]
Dec 14 13:09:18 nodez kernel: [&amp;lt;ffffffffa0e5b645&amp;gt;] ? class_process_config+0x1d45/0x2e50 [obdclass]
Dec 14 13:09:18 nodez kernel: [&amp;lt;ffffffffa0e5ca0a&amp;gt;] ? class_manual_cleanup+0x2ba/0xd60 [obdclass]
Dec 14 13:09:18 nodez kernel: [&amp;lt;ffffffff810e6f44&amp;gt;] ? cache_alloc_debugcheck_after+0x123/0x192
Dec 14 13:09:18 nodez kernel: [&amp;lt;ffffffff810e88bc&amp;gt;] ? __kmalloc+0x123/0x18e
Dec 14 13:09:18 nodez kernel: [&amp;lt;ffffffffa0e5cc8d&amp;gt;] ? class_manual_cleanup+0x53d/0xd60 [obdclass]
Dec 14 13:09:18 nodez kernel: [&amp;lt;ffffffffa074a6c4&amp;gt;] ? osd_obd_disconnect+0x164/0x1d0 [osd_ldiskfs]
Dec 14 13:09:18 nodez kernel: [&amp;lt;ffffffffa0e6243d&amp;gt;] ? lustre_put_lsi+0x19d/0xe90 [obdclass]
Dec 14 13:09:18 nodez kernel: [&amp;lt;ffffffffa0e641d8&amp;gt;] ? lustre_common_put_super+0x5b8/0xbe0 [obdclass]
Dec 14 13:09:18 nodez kernel: [&amp;lt;ffffffffa0e95802&amp;gt;] ? server_put_super+0x172/0x2190 [obdclass]
Dec 14 13:09:18 nodez kernel: [&amp;lt;ffffffffa0e97f8d&amp;gt;] ? server_fill_super+0x76d/0x15c0 [obdclass]
Dec 14 13:09:18 nodez kernel: [&amp;lt;ffffffffa0e673c0&amp;gt;] ? lustre_fill_super+0x0/0x520 [obdclass]
Dec 14 13:09:18 nodez kernel: [&amp;lt;ffffffffa0e67598&amp;gt;] ? lustre_fill_super+0x1d8/0x520 [obdclass]
Dec 14 13:09:18 nodez kernel: [&amp;lt;ffffffffa0e673c0&amp;gt;] ? lustre_fill_super+0x0/0x520 [obdclass]
Dec 14 13:09:18 nodez kernel: [&amp;lt;ffffffffa0e673c0&amp;gt;] ? lustre_fill_super+0x0/0x520 [obdclass]
Dec 14 13:09:18 nodez kernel: [&amp;lt;ffffffff810f863f&amp;gt;] ? get_sb_nodev+0x4e/0x84
Dec 14 13:09:18 nodez kernel: [&amp;lt;ffffffffa0e5f52c&amp;gt;] ? lustre_get_sb+0x1c/0x30 [obdclass]
Dec 14 13:09:18 nodez kernel: [&amp;lt;ffffffff810f838d&amp;gt;] ? vfs_kern_mount+0x96/0x15b
Dec 14 13:09:18 nodez kernel: [&amp;lt;ffffffff810f84b3&amp;gt;] ? do_kern_mount+0x49/0xe7
Dec 14 13:09:18 nodez kernel: [&amp;lt;ffffffff8110dcd5&amp;gt;] ? do_mount+0x7a1/0x824
Dec 14 13:09:18 nodez kernel: [&amp;lt;ffffffff8110dde0&amp;gt;] ? sys_mount+0x88/0xc4
Dec 14 13:09:18 nodez kernel: [&amp;lt;ffffffff81008a42&amp;gt;] ? system_call_fastpath+0x16/0x1b
Dec 14 13:09:18 nodez kernel: Code: ec 18 48 8b 1f 48 c7 07 00 00 00 00 48 85 db 74 4c 48 81 fb 00 f0 ff ff 77 43 4c 8b 6b 48 4d 85 ed 75 08 e9 90 00 00 00 48 89 eb &amp;lt;48&amp;gt; 8b 6b 50 48 85 ed 75 f4 4c 8b 63 08 48 8b 6b 48 4c 89 e7 e8 
Dec 14 13:09:18 nodez kernel: RIP  [&amp;lt;ffffffffa0e46f03&amp;gt;] lprocfs_remove_nolock+0x33/0x100 [obdclass]
Dec 14 13:09:18 nodez kernel: RSP &amp;lt;ffff88003d34d928&amp;gt;
Dec 14 13:09:18 nodez kernel: ---[ end trace 5f7830ce85deef31 ]---
Dec 14 13:09:18 nodez kernel: Kernel panic - not syncing: Fatal exception
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
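A minimal, self-contained Python model of the cleanup-ordering rule this oops points to (hypothetical structure; the real code is kernel C in osd_ldiskfs): quota teardown (qsd_fini) still reads the osd procfs entry, so lprocfs removal must run after osd_shutdown(), never before.

```python
class OsdDevice:
    """Models an osd device whose quota teardown still reads its procfs entry."""

    def __init__(self):
        # models the lprocfs entry registered at device init
        self.proc_entry = {"name": "osd-ldiskfs"}

    def qsd_fini(self):
        # quota teardown dereferences the osd proc entry; if lprocfs
        # removal already ran, this is the use-after-free from the trace
        if self.proc_entry is None:
            raise RuntimeError("use-after-free: proc entry already removed")
        return self.proc_entry["name"]

    def osd_shutdown(self):
        # osd_shutdown() drives quota cleanup, as in the call trace
        return self.qsd_fini()

    def lprocfs_remove(self):
        self.proc_entry = None

    def osd_device_fini(self):
        # correct order: shut down users of the proc entry first,
        # then remove the entry itself
        name = self.osd_shutdown()
        self.lprocfs_remove()
        return name
```

Swapping the two calls in osd_device_fini() makes qsd_fini() hit a freed entry, which in the kernel shows up as the general protection fault above (the 0x6b bytes in RBX/RBP are slab free-poison, i.e. a use-after-free).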

&lt;p&gt;I&apos;ve found that osd_device_fini() cleans things up in the wrong order: it should remove the procfs entries after osd_shutdown(), not before, because the quota code uses the osd procfs entries as well.&lt;/p&gt;</description>
                <environment></environment>
        <key id="22463">LU-4385</key>
            <summary>replay-single test 61d causes oops in osd_device_fini()</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="3" iconUrl="https://jira.whamcloud.com/images/icons/priorities/major.svg">Major</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="3">Duplicate</resolution>
                                        <assignee username="tappro">Mikhail Pershin</assignee>
                                    <reporter username="tappro">Mikhail Pershin</reporter>
                        <labels>
                    </labels>
                <created>Sat, 14 Dec 2013 09:52:26 +0000</created>
                <updated>Tue, 3 Jun 2014 14:53:35 +0000</updated>
                            <resolved>Wed, 18 Dec 2013 18:26:48 +0000</resolved>
                <due></due>
                            <votes>0</votes>
                                    <watches>3</watches>
                <comments>
                            <comment id="73535" author="tappro" created="Sat, 14 Dec 2013 10:08:33 +0000"  >&lt;p&gt;&lt;a href=&quot;http://review.whamcloud.com/#/c/8579/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/#/c/8579/&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="73541" author="di.wang" created="Sat, 14 Dec 2013 20:36:15 +0000"  >&lt;p&gt;duplicate with &lt;a href=&quot;https://jira.hpdd.intel.com/browse/LU-3857&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://jira.hpdd.intel.com/browse/LU-3857&lt;/a&gt;, same patch in &lt;a href=&quot;http://review.whamcloud.com/#/c/8506/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/#/c/8506/&lt;/a&gt;  &lt;img class=&quot;emoticon&quot; src=&quot;https://jira.whamcloud.com/images/icons/emoticons/wink.png&quot; height=&quot;16&quot; width=&quot;16&quot; align=&quot;absmiddle&quot; alt=&quot;&quot; border=&quot;0&quot;/&gt;&lt;/p&gt;</comment>
                            <comment id="73773" author="adilger" created="Wed, 18 Dec 2013 18:26:48 +0000"  >&lt;p&gt;Patch &lt;a href=&quot;http://review.whamcloud.com/8506&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/8506&lt;/a&gt; from &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-3857&quot; title=&quot;panic in  lprocfs_remove_nolock+0x3b/0x100&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-3857&quot;&gt;&lt;del&gt;LU-3857&lt;/del&gt;&lt;/a&gt; was landed to master.&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10010">
                    <name>Duplicate</name>
                                            <outwardlinks description="duplicates">
                                        <issuelink>
            <issuekey id="20706">LU-3857</issuekey>
        </issuelink>
                            </outwardlinks>
                                                        </issuelinktype>
                    </issuelinks>
                <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                    <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzwbdr:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>12020</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                </customfields>
    </item>
</channel>
</rss>