<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 01:55:45 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
<language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-5931] Deactivated OST still contains data</title>
                <link>https://jira.whamcloud.com/browse/LU-5931</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;Not sure this is the appropriate location for this issue, but we have found little evidence of a similar report elsewhere.&lt;/p&gt;

&lt;p&gt;We have two ZFS-backed OSTs (1 and 2) which we would like to remove from our Lustre environment for maintenance purposes. So we ran &apos;lfs find /RSF1 --ost 1,2&apos; to locate any stripes, then copied the data to new files and removed the old ones. Running &quot;lfs getstripe&quot; confirms the new files reside on the remaining OSTs. The mystery is that the original OSTs still indicate that they house a significant amount of data.&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;UUID                   1K-blocks        Used   Available Use% Mounted on
RSF1-MDT0000_UUID       76477312     6747648    69727616   9% /RSF1[MDT:0]
RSF1-MDT0001_UUID       76416000     7168256    69245696   9% /RSF1[MDT:1]
* RSF1-OST0001_UUID     8053647744  3711864960  4341780736  46% /RSF1[OST:1]
* RSF1-OST0002_UUID     8053646848  3706844416  4346767616  46% /RSF1[OST:2]
RSF1-OST2776_UUID    12387717248  6162569728  6225144320  50% /RSF1[OST:10102]
RSF1-OST2840_UUID    12387719040  5993136000  6394579328  48% /RSF1[OST:10304]
RSF1-OST290a_UUID    12387720832  6174761856  6212955520  50% /RSF1[OST:10506]
RSF1-OST29d4_UUID    12387713408  6129103104  6258606848  49% /RSF1[OST:10708]
RSF1-OST2a9e_UUID    12387713536  5944379008  6443330944  48% /RSF1[OST:10910]
RSF1-OST2b68_UUID    12387710464  5959099904  6428606592  48% /RSF1[OST:11112]
RSF1-OST2c32_UUID    12387712384  6011423872  6376284800  49% /RSF1[OST:11314]
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
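For illustration, the per-target lines above can be checked programmatically. A minimal sketch in Python, assuming only that the lines are shaped like the listing (an optional leading &apos;*&apos; marking a deactivated target, then UUID, 1K-blocks, Used, Available, Use%, and mount point); the parser is our own, not a Lustre utility:

```python
# Hypothetical parser (not a Lustre tool): pull the space figures
# out of one 'lfs df' line like those shown above.
def parse_lfs_df_line(line):
    fields = line.split()
    deactivated = fields[0] == "*"   # leading '*' marks a deactivated target
    if deactivated:
        fields = fields[1:]
    return {
        "uuid": fields[0],
        "kblocks": int(fields[1]),
        "used": int(fields[2]),
        "available": int(fields[3]),
        "deactivated": deactivated,
    }

# One of the deactivated OST lines from the listing above.
sample = "* RSF1-OST0001_UUID     8053647744  3711864960  4341780736  46% /RSF1[OST:1]"
info = parse_lfs_df_line(sample)
```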

&lt;p&gt;On the OSS nodes mounting those ZFS-backed OSTs we run, for example, &apos;zdb -dd OST1 | grep &quot;ZFS plain file&quot;&apos; and use the zfsobj2fid utility to map the resultant list of ZFS OIDs to FIDs. Then on a Lustre client we run:&lt;/p&gt;

&lt;p&gt;lfs fid2path /RSF1 &lt;span class=&quot;error&quot;&gt;&amp;#91;0x280005221:0x11424:0x0&amp;#93;&lt;/span&gt;&lt;br/&gt;
fid2path error: No such file or directory&lt;/p&gt;

&lt;p&gt;on all the FIDs, but nothing is found.&lt;/p&gt;
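The bracketed argument to &apos;lfs fid2path&apos; is a Lustre FID written as sequence:object-id:version in hex. A minimal sketch, assuming only that format (the helper name is ours, not part of Lustre):

```python
# Hypothetical helper (not part of Lustre): format a (sequence, object id,
# version) triple the way 'lfs fid2path' expects it on the command line.
def format_fid(seq, oid, ver=0):
    return "[{:#x}:{:#x}:{:#x}]".format(seq, oid, ver)

fid = format_fid(0x280005221, 0x11424, 0)   # the FID quoted above
```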

&lt;p&gt;This situation concerns us because we will be permanently removing the two OSTs in question: is valid data still housed there?&lt;/p&gt;

&lt;p&gt;Does the data still reside on the two OSTs in question because they are deactivated and thus read-only?&lt;/p&gt;

&lt;p&gt;Sorry if this is a duplicate or an inappropriate location, but we have few avenues left to try, and this seems like a bug to us.&lt;/p&gt;</description>
                <environment>ZFS on Linux 0.6.2, Scientific Linux 6.4</environment>
        <key id="27637">LU-5931</key>
            <summary>Deactivated OST still contains data</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="3" iconUrl="https://jira.whamcloud.com/images/icons/priorities/major.svg">Major</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="3">Duplicate</resolution>
                                        <assignee username="wc-triage">WC Triage</assignee>
                                    <reporter username="ekolb">Eric Kolb</reporter>
                        <labels>
                            <label>zfs</label>
                    </labels>
                <created>Tue, 18 Nov 2014 17:48:46 +0000</created>
                <updated>Wed, 8 Jul 2015 19:12:31 +0000</updated>
                            <resolved>Wed, 8 Jul 2015 19:12:31 +0000</resolved>
                                    <version>Lustre 2.4.1</version>
                                                        <due></due>
                            <votes>0</votes>
                                    <watches>4</watches>
                                                                            <comments>
                            <comment id="99491" author="green" created="Tue, 18 Nov 2014 18:14:46 +0000"  >&lt;p&gt;Did you deactivate the OSTs on the MDS, by any chance? That would cause the exact same symptoms, because in that case the MDS is not able to clean up objects on the OSTs.&lt;/p&gt;</comment>
                            <comment id="99495" author="ekolb" created="Tue, 18 Nov 2014 18:22:13 +0000"  >&lt;p&gt;Thanks. Yes, we did deactivate the OSTs on the MDS, because the documentation instructed us to:&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://build.hpdd.intel.com/job/lustre-manual/lastSuccessfulBuild/artifact/lustre_manual.xhtml#section_k3l_4gt_tl&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://build.hpdd.intel.com/job/lustre-manual/lastSuccessfulBuild/artifact/lustre_manual.xhtml#section_k3l_4gt_tl&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Is this not the case?&lt;/p&gt;</comment>
                            <comment id="99508" author="adilger" created="Tue, 18 Nov 2014 20:03:30 +0000"  >&lt;p&gt;There is a bit of a disconnect between the documented process and the 2.4 releases and beyond. Deactivating the OST on the MDS stops new files from being created there, but since 2.4 it also prevents the MDS from deleting those files. This is indeed a bug that needs to be fixed.&lt;/p&gt;

&lt;p&gt;As for using &lt;tt&gt;lfs fid2path&lt;/tt&gt; to check these objects: the errors show that the MDS inodes that were previously referencing the OST objects are no longer there, because you deleted or migrated the files. That is as it should be.&lt;/p&gt;

&lt;p&gt;One option is to keep the OSTs around until your next outage and then reactivate the OSTs after user processes that create files have been stopped.  That would allow the OST objects to be deleted if you are concerned about the remaining objects.&lt;/p&gt;

&lt;p&gt;As for a long-term solution to this problem, I think there are two options. First, we could allow the OST to be marked inactive for object creation on the MDS without preventing the MDS from destroying OST objects. Second, we could add a setting on the OST to prevent the MDS from selecting it for new object allocation. There are already two hooks for this second option: marking the OST full/ENOSPC so the MDS ignores it completely, and marking the OST as being in a RAID rebuild so that the MDS avoids it. This could be enhanced to make it a hard stop on file creation.&lt;/p&gt;</comment>
                            <comment id="99601" author="ekolb" created="Wed, 19 Nov 2014 18:14:12 +0000"  >&lt;p&gt;Thanks for the information. With this we are more comfortable in&lt;br/&gt;
proceeding with our maintenance.&lt;/p&gt;</comment>
                            <comment id="119428" author="sean" created="Wed, 24 Jun 2015 08:00:10 +0000"  >&lt;p&gt;This is a duplicate of &lt;a href=&quot;https://jira.hpdd.intel.com/browse/LU-4825&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://jira.hpdd.intel.com/browse/LU-4825&lt;/a&gt;. &lt;/p&gt;</comment>
                            <comment id="120749" author="adilger" created="Wed, 8 Jul 2015 19:12:31 +0000"  >&lt;p&gt;Closing as a duplicate of &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-4825&quot; title=&quot;lfs migrate not freeing space on OST&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-4825&quot;&gt;&lt;del&gt;LU-4825&lt;/del&gt;&lt;/a&gt;.&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                                                <inwardlinks description="is related to">
                                        <issuelink>
            <issuekey id="23911">LU-4825</issuekey>
        </issuelink>
                            </inwardlinks>
                                    </issuelinktype>
                    </issuelinks>
                <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10040" key="com.atlassian.jira.plugin.system.customfieldtypes:labels">
                        <customfieldname>Epic</customfieldname>
                        <customfieldvalues>
                                        <label>zfs</label>
    
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10030" key="com.atlassian.jira.plugin.system.customfieldtypes:labels">
                        <customfieldname>Epic/Theme</customfieldname>
                        <customfieldvalues>
                                        <label>zfs</label>
    
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzx15r:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>16565</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>