<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 01:33:45 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-3420] OI scrubbing could not automatically engage after restoring a secondary MDT from a (file-level) backup</title>
                <link>https://jira.whamcloud.com/browse/LU-3420</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;When adapting sanity-scrub 4 to exercise not only MDT 0 but also the secondary MDTs, I found that, after restoring a secondary MDT from its file-level backup, looking up the corresponding &quot;remote&quot; directory would return ENOENT on clients:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;span class=&quot;error&quot;&gt;&amp;#91;root@linux tests&amp;#93;&lt;/span&gt;# ls /mnt/lustre/d0.sanity-scrub/d4/mdt1&lt;br/&gt;
ls: cannot access /mnt/lustre/d0.sanity-scrub/d4/mdt1: No such file or directory&lt;/p&gt;&lt;/blockquote&gt;

&lt;p&gt;&quot;mdt1&quot; was created by &quot;lfs mkdir -i 1&quot;, and OI scrubbing did not engage automatically:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;span class=&quot;error&quot;&gt;&amp;#91;root@linux tests&amp;#93;&lt;/span&gt;# cat /proc/fs/lustre/osd-ldiskfs/lustre-MDT0001/oi_scrub &lt;br/&gt;
name: OI_scrub&lt;br/&gt;
magic: 0x4c5fd252&lt;br/&gt;
oi_files: 64&lt;br/&gt;
status: init&lt;br/&gt;
flags: inconsistent&lt;br/&gt;
param:&lt;br/&gt;
time_since_last_completed: N/A&lt;br/&gt;
time_since_latest_start: N/A&lt;br/&gt;
time_since_last_checkpoint: N/A&lt;br/&gt;
latest_start_position: N/A&lt;br/&gt;
last_checkpoint_position: N/A&lt;br/&gt;
first_failure_position: N/A&lt;br/&gt;
checked: 0&lt;br/&gt;
updated: 0&lt;br/&gt;
failed: 0&lt;br/&gt;
prior_updated: 0&lt;br/&gt;
noscrub: 0&lt;br/&gt;
igif: 0&lt;br/&gt;
success_count: 0&lt;br/&gt;
run_time: 0 seconds&lt;br/&gt;
average_speed: 0 objects/sec&lt;br/&gt;
real-time_speed: N/A&lt;br/&gt;
current_position: N/A&lt;/p&gt;&lt;/blockquote&gt;

&lt;p&gt;The debug log shows that MDT 0 sent an UPDATE_OBJ OBJ_ATTR_GET RPC to MDT 1.  The FID was found in the OI but the ino was (naturally) stale:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;00000004:00000002:0.0:1369882480.737242:0:7229:0:(osd_handler.c:226:osd_iget()) unmatched inode: ino = 102, gen0 = 2698313523, gen1 = 294820613&lt;/p&gt;&lt;/blockquote&gt;

&lt;p&gt;According to osd_fid_lookup(), OI scrubbing is not triggered in this case.&lt;/p&gt;</description>
                <environment></environment>
        <key id="19226">LU-3420</key>
            <summary>OI scrubbing could not automatically engage after restoring a secondary MDT from a (file-level) backup</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="2" iconUrl="https://jira.whamcloud.com/images/icons/priorities/critical.svg">Critical</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="yong.fan">nasf</assignee>
                                    <reporter username="liwei">Li Wei</reporter>
                        <labels>
                            <label>mq313</label>
                    </labels>
                <created>Thu, 30 May 2013 14:20:55 +0000</created>
                <updated>Fri, 13 Sep 2013 03:43:39 +0000</updated>
                            <resolved>Wed, 10 Jul 2013 02:37:59 +0000</resolved>
                                    <version>Lustre 2.4.0</version>
                                    <fixVersion>Lustre 2.4.1</fixVersion>
                    <fixVersion>Lustre 2.5.0</fixVersion>
                                        <due></due>
                            <votes>0</votes>
                                    <watches>5</watches>
                <comments>
                            <comment id="59631" author="liwei" created="Thu, 30 May 2013 14:24:16 +0000"  >&lt;p&gt;Attached the debug log.  Note that this was a single-node setup.&lt;/p&gt;</comment>
                            <comment id="59632" author="liwei" created="Thu, 30 May 2013 14:25:22 +0000"  >&lt;p&gt;CC&apos;ed Wang Di and Fan Yong.&lt;/p&gt;</comment>
                            <comment id="59633" author="liwei" created="Thu, 30 May 2013 14:28:13 +0000"  >&lt;p&gt;This and &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-3332&quot; title=&quot;sanity-scrub and sanity-lfsck need to support DNE&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-3332&quot;&gt;&lt;del&gt;LU-3332&lt;/del&gt;&lt;/a&gt; depend on each other.&lt;/p&gt;</comment>
                            <comment id="59669" author="adilger" created="Thu, 30 May 2013 17:49:56 +0000"  >&lt;p&gt;Fan Yong, I understand that remote directory checking for DNE MDTs is part of LFSCK Phase III, but could you please investigate what work would be needed to fix the file-level backup/restore?&lt;/p&gt;

&lt;p&gt;Li Wei, do you know if this is a problem on mdt0 or mdt1?  Were both of them backed up and restored, or just mdt1?&lt;/p&gt;</comment>
                            <comment id="59715" author="liwei" created="Fri, 31 May 2013 01:12:12 +0000"  >&lt;p&gt;Andreas, all MDTs (MDSCOUNT=2, so both MDT 0 and 1) were backed up and restored during the test.  The problem, as Fan Yong and I discussed yesterday, was on MDT 1: the direct FID lookup (without a prior name lookup) does not trigger OI scrubbing.&lt;/p&gt;</comment>
                            <comment id="59835" author="yong.fan" created="Sat, 1 Jun 2013 07:40:42 +0000"  >&lt;p&gt;I have made a patch to fix it:&lt;br/&gt;
&lt;a href=&quot;http://review.whamcloud.com/#change,6515&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/#change,6515&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The reason is described in the patch commit message.&lt;/p&gt;</comment>
                            <comment id="61994" author="yong.fan" created="Wed, 10 Jul 2013 02:37:59 +0000"  >&lt;p&gt;The patch has landed on Lustre 2.5.&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                            <outwardlinks description="is related to ">
                                        <issuelink>
            <issuekey id="18991">LU-3332</issuekey>
        </issuelink>
                            </outwardlinks>
                                                        </issuelinktype>
                    </issuelinks>
                <attachments>
                            <attachment id="12961" name="ls-remote-dir-enoent.log" size="808506" author="liwei" created="Thu, 30 May 2013 14:24:16 +0000"/>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                    <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzvsaf:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>8481</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                </customfields>
    </item>
</channel>
</rss>