<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 03:18:15 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-15430] Index cannot be reused after permanently removing OST</title>
                <link>https://jira.whamcloud.com/browse/LU-15430</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;When I follow the manual:&lt;br/&gt;
1. mds: lctl set_param osp.lustre-OST0130*.max_create_count=0&lt;br/&gt;
2. client: lfs find ./ --ost 304 | lfs_migrate -y&lt;br/&gt;
3. mgs: lctl conf_param lustre-OST0130.osc.active=0&lt;br/&gt;
4. oss: umount /dev/ost304_dev&lt;br/&gt;
Then, executing lfs df /mnt/&amp;lt;mountpoint&amp;gt; on the client shows that OST0130 (index 304) has disappeared, but its record is still visible via lctl dl | grep OST0130.&lt;br/&gt;
Finally, on the MGS I executed lctl --device MGS llog_print lustre-client | egrep &quot;OST0130&quot; to obtain the llog indexes for OST0130, and then used lctl --device MGS llog_cancel lustre-client &amp;lt;index&amp;gt; to cancel all OST0130 records.&lt;/p&gt;

&lt;p&gt;&#160;&lt;/p&gt;
&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;
#lctl --device MGS llog_print muzitest-client | grep OST0130
- { index: 80, event: attach, device: muzitest-OST0130-osc, type: osc, UUID: muzitest-clilov_UUID }
- { index: 81, event: setup, device: muzitest-OST0130-osc, UUID: muzitest-OST0130_UUID, node: 10.0.0.48@tcp }
- { index: 83, event: add_conn, device: muzitest-OST0130-osc, node: 10.0.0.48@tcp }
- { index: 84, event: add_osc, device: muzitest-clilov, ost: muzitest-OST0130_UUID, index: 304, gen: 1 }
- { index: 185, event: conf_param, device: muzitest-OST0130-osc, parameter: osc.active=0 } 

#lctl --device MGS llog_cancel muzitest-client 185
index 185 was canceled.
#lctl --device MGS llog_cancel muzitest-client 84
index 84 was canceled.
#lctl --device MGS llog_cancel muzitest-client 83
index 83 was canceled.
#lctl --device MGS llog_cancel muzitest-client 81
index 81 was canceled.
#lctl --device MGS llog_cancel muzitest-client 80
index 80 was canceled.&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;After that, executing lctl dl | grep OST0130 on the client shows that the OST0130 record is gone.&lt;br/&gt;
Now suppose I want to restore OST0130. After the OSS executes mount.lustre -o max_sectors_kb=128 /dev/vdb /mnt/lustre_OST0130 and the MGS executes lctl conf_param muzitest-OST0130.osc.active=1, the client still does not see OST0130 come back, and if the client unmounts and remounts, the following error appears:&lt;/p&gt;
&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;
mount.lustre 10.0.0.32:/lustre /mnt/lustre/
mount.lustre: mount 10.0.0.32:/lustre at /mnt/lustre failed: Invalid argument
This may have multiple causes.
Is &lt;span class=&quot;code-quote&quot;&gt;&apos;lustre&apos;&lt;/span&gt; the correct filesystem name?
Are the mount options correct?
Check the syslog &lt;span class=&quot;code-keyword&quot;&gt;for&lt;/span&gt; more info. &lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;Checking with lctl --device MGS llog_print muzitest-client | grep OST0130 shows that new llog records have been generated. After cancelling those records the client can mount again, but OST0130 still cannot be restored. What should I do in this situation? Thank you very much.&lt;/p&gt;

&lt;p&gt;&#160;&lt;/p&gt;</description>
                <environment></environment>
        <key id="67907">LU-15430</key>
            <summary>Index cannot be reused after permanently removing OST</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="3" iconUrl="https://jira.whamcloud.com/images/icons/priorities/major.svg">Major</priority>
                        <status id="1" iconUrl="https://jira.whamcloud.com/images/icons/statuses/open.png" description="The issue is open and ready for the assignee to start work on it.">Open</status>
                    <statusCategory id="2" key="new" colorName="default"/>
                                    <resolution id="-1">Unresolved</resolution>
                                        <assignee username="wc-triage">WC Triage</assignee>
                                    <reporter username="jiahaoli">Jiahao Li</reporter>
                        <labels>
                    </labels>
                <created>Tue, 11 Jan 2022 06:45:53 +0000</created>
                <updated>Wed, 12 Jan 2022 16:06:28 +0000</updated>
                                            <version>Lustre 2.12.4</version>
                                                        <due></due>
                            <votes>0</votes>
                                    <watches>2</watches>
                                                                            <comments>
                            <comment id="322434" author="eaujames" created="Wed, 12 Jan 2022 12:43:42 +0000"  >&lt;p&gt;Hello,&lt;/p&gt;

&lt;p&gt;The procedure that you tried came from the &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-7668&quot; title=&quot;permanently remove deactivated OSTs from configuration log&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-7668&quot;&gt;&lt;del&gt;LU-7668&lt;/del&gt;&lt;/a&gt;. You could get additional information from there.&lt;/p&gt;

&lt;p&gt;It seems you removed OST0130 from the client configuration but not from the MDT configurations ($fsname-MDTxxxx) on the MGS.&lt;br/&gt;
You can verify that by executing the following on the MGS:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;lctl --device MGS llog_print muzitest-MDT0000 | egrep  &quot;OST0130|10.0.0.48@tcp&quot;
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;To list the MGS&apos;s configuration files you can use the following (note there is a bug in that command for 2.12.4: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-13609&quot; title=&quot;lctl --device MGS llog_catlist doesn&amp;#39;t list all config files.&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-13609&quot;&gt;&lt;del&gt;LU-13609&lt;/del&gt;&lt;/a&gt;):&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;lctl --device MGS llog_catlist
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;If you have a backup of &quot;muzitest-client&quot; (taken before removing the target), you can restore it by mounting your MGT target as ldiskfs and copying the file back into the CONFIGS directory.&lt;br/&gt;
If not, you can follow the &quot;--replace&quot; procedure: &lt;a href=&quot;https://build.whamcloud.com/job/lustre-manual/lastSuccessfulBuild/artifact/lustre_manual.xhtml#lustremaint.restore_ost&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://build.whamcloud.com/job/lustre-manual/lastSuccessfulBuild/artifact/lustre_manual.xhtml#lustremaint.restore_ost&lt;/a&gt;.&lt;br/&gt;
If your configuration is corrupted, you will have to &quot;--writeconf&quot;: &lt;a href=&quot;https://build.whamcloud.com/job/lustre-manual/lastSuccessfulBuild/artifact/lustre_manual.xhtml#lustremaint.regenerateConfigLogs&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://build.whamcloud.com/job/lustre-manual/lastSuccessfulBuild/artifact/lustre_manual.xhtml#lustremaint.regenerateConfigLogs&lt;/a&gt;&lt;/p&gt;
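
&lt;p&gt;As a rough sketch of the &quot;--replace&quot; path (the device path and NIDs below are examples from this ticket, not verified values; please check the manual section above before running anything):&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;# on the OSS: reformat the OST with the *same* index, telling the MGS this
# is a replacement target rather than a brand-new one (paths are examples)
mkfs.lustre --ost --fsname=muzitest --index=304 --replace \
    --mgsnode=10.0.0.32@tcp /dev/vdb

# then mount it back as a Lustre target
mount -t lustre /dev/vdb /mnt/lustre_OST0130
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;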

&lt;p&gt;I am not an expert on that subject, so please be careful and double check this information.&lt;/p&gt;</comment>
                    </comments>
                    <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10040" key="com.atlassian.jira.plugin.system.customfieldtypes:labels">
                        <customfieldname>Epic</customfieldname>
                        <customfieldvalues>
                                        <label>server</label>
    
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10030" key="com.atlassian.jira.plugin.system.customfieldtypes:labels">
                        <customfieldname>Epic/Theme</customfieldname>
                        <customfieldvalues>
                                        <label>mgs</label>
    
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|i02eaf:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>