<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 01:23:19 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary, append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92">
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
<language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-2213] sanity-scrub.sh test_10b: osd_scrub_cleanup()) ASSERTION( dev-&gt;od_otable_it == ((void *)0) ) failed</title>
                <link>https://jira.whamcloud.com/browse/LU-2213</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;I recently hit this problem in running sanity-scrub.sh:&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;LustreError: 140-5: Server testfs-MDT0000 requested index 0, but that index is already in use. Use --writeconf to force
mgs_write_log_target()) Can&apos;t get index (-98)
mgs_handle_target_reg()) Failed to write testfs-MDT0000 log (-98)
server_register_target()) Cannot talk to the MGS: -98, not fatal
LustreError: 32638:0:(osd_scrub.c:1122:osd_scrub_cleanup()) ASSERTION( dev-&amp;gt;od_otable_it == ((void *)0) ) failed

Pid: 32638, comm: umount
Call Trace:
[&amp;lt;ffffffffa08fb905&amp;gt;] libcfs_debug_dumpstack+0x55/0x80 [libcfs]
[&amp;lt;ffffffffa08fbf17&amp;gt;] lbug_with_loc+0x47/0xb0 [libcfs]
[&amp;lt;ffffffffa0fc096f&amp;gt;] osd_scrub_cleanup+0xdf/0xe0 [osd_ldiskfs]
[&amp;lt;ffffffffa0f9d323&amp;gt;] osd_shutdown+0x33/0x110 [osd_ldiskfs]
[&amp;lt;ffffffffa0fa9ff5&amp;gt;] osd_process_config+0x165/0x1b0 [osd_ldiskfs]
[&amp;lt;ffffffffa0d97611&amp;gt;] lod_process_config+0x451/0xa70 [lod]
[&amp;lt;ffffffffa0ed9ac0&amp;gt;] mdd_process_config+0x210/0x7e0 [mdd]
[&amp;lt;ffffffffa1027272&amp;gt;] mdt_stack_fini+0x172/0xbf0 [mdt]
[&amp;lt;ffffffffa1027fb7&amp;gt;] mdt_device_fini+0x2c7/0x510 [mdt]
[&amp;lt;ffffffffa0a8d4c7&amp;gt;] class_cleanup+0x577/0xdc0 [obdclass]
[&amp;lt;ffffffffa0a8edb5&amp;gt;] class_process_config+0x10a5/0x1ca0 [obdclass]
[&amp;lt;ffffffffa0a8fb29&amp;gt;] class_manual_cleanup+0x179/0x6f0 [obdclass]
[&amp;lt;ffffffffa0a9d0ac&amp;gt;] server_put_super+0x61c/0x1300 [obdclass]
[&amp;lt;ffffffff8117d34b&amp;gt;] generic_shutdown_super+0x5b/0xe0
[&amp;lt;ffffffff8117d436&amp;gt;] kill_anon_super+0x16/0x60
[&amp;lt;ffffffffa0a919a6&amp;gt;] lustre_kill_super+0x36/0x60 [obdclass]
[&amp;lt;ffffffff8117e4b0&amp;gt;] deactivate_super+0x70/0x90
[&amp;lt;ffffffff8119a4ff&amp;gt;] mntput_no_expire+0xbf/0x110
[&amp;lt;ffffffff8119af9b&amp;gt;] sys_umount+0x7b/0x3a0
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Alex&apos;s patch in &lt;a href=&quot;http://review.whamcloud.com/4217&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/4217&lt;/a&gt;, which has yet to land, was created for &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-2033&quot; title=&quot;MDT cannot mount after restored from file-level backup if the mount option &amp;quot;noscrub&amp;quot; specified&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-2033&quot;&gt;&lt;del&gt;LU-2033&lt;/del&gt;&lt;/a&gt;, but since that bug was closed and actually related to a separate issue, I&apos;d rather file a new bug instead of re-opening that one.  That patch works around the duplicate index==0 issue by resetting the filesystem label after formatting (to clear the &quot;VIRGIN&quot; flag), though my preference would be for the MDT itself to detect that it had been restored from backup and reset the label internally.  At least the proposed solution will also work for older versions of Lustre, so a single restore procedure can be documented; for that reason I&apos;m not dead-set against this part of the patch.&lt;/p&gt;

&lt;p&gt;The osd_scrub_cleanup() assertion is also addressed by Alex&apos;s patch, but Fan Yong rightfully objected to that fix because it still implies that the scrub thread is running when the MDT is being stopped, so there is some other cleanup/serialization needed.&lt;/p&gt;</description>
                <environment>Single-node test configuration (dual-core x86_64, 1 MDT, 3 OST)</environment>
        <key id="16411">LU-2213</key>
            <summary>sanity-scrub.sh test_10b: osd_scrub_cleanup()) ASSERTION( dev-&gt;od_otable_it == ((void *)0) ) failed</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="1" iconUrl="https://jira.whamcloud.com/images/icons/priorities/blocker.svg">Blocker</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="yong.fan">nasf</assignee>
                                    <reporter username="adilger">Andreas Dilger</reporter>
                        <labels>
                    </labels>
                <created>Sat, 20 Oct 2012 18:03:14 +0000</created>
                <updated>Fri, 19 Apr 2013 20:38:23 +0000</updated>
                            <resolved>Mon, 29 Oct 2012 07:15:40 +0000</resolved>
                                    <version>Lustre 2.4.0</version>
                                    <fixVersion>Lustre 2.4.0</fixVersion>
                                        <due></due>
                            <votes>0</votes>
                                    <watches>3</watches>
                                                                            <comments>
                            <comment id="46818" author="bzzz" created="Sun, 21 Oct 2012 01:52:16 +0000"  >&lt;p&gt;right, so the problem should be fixed by a correct sequence of -&amp;gt;ldo_process_config(LCFG_CLEANUP) in MDD and OSD.&lt;/p&gt;</comment>
                            <comment id="46842" author="yong.fan" created="Mon, 22 Oct 2012 13:20:36 +0000"  >&lt;p&gt;The root cause is that the LFSCK should be stopped before osd_shutdown(), but currently it is not.&lt;/p&gt;

&lt;p&gt;This is the patch:&lt;br/&gt;
&lt;a href=&quot;http://review.whamcloud.com/#change,4217,set3&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/#change,4217,set3&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="47003" author="bzzz" created="Mon, 29 Oct 2012 07:15:40 +0000"  >&lt;p&gt;landed&lt;/p&gt;</comment>
                    </comments>
                    <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                    <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzvapr:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>5271</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                </customfields>
    </item>
</channel>
</rss>