<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 02:21:58 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-8953] ZFS-MDT 100% full. Request for verification of plan to fix</title>
                <link>https://jira.whamcloud.com/browse/LU-8953</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;The MDT for one of our filesystems is full, and it&apos;s not possible to delete any files, rendering the filesystem unusable from the users&apos; point of view.&lt;/p&gt;

&lt;p&gt;It&apos;s possible to manually trace files that could be deleted, via their FIDs, to the corresponding ZFS objects on disk. But we haven&apos;t found a way to delete objects via zdb. A recovery procedure along those lines would probably be good to have if more people run into this.&lt;/p&gt;

&lt;p&gt;It&apos;s almost Christmas vacation, so let&apos;s keep this simple and low-risk. I&apos;ve added some more disks to the MDS. The affected pool currently looks like this:&lt;/p&gt;

&lt;p&gt;        lustre-mdt0                 ONLINE       0     0     0&lt;br/&gt;
          mirror-0                  ONLINE       0     0     0&lt;br/&gt;
            mds9_sdm-mdt_fouo6_sdm  ONLINE       0     0     0&lt;br/&gt;
            mds9_sdn-mdt_fouo6_sdn  ONLINE       0     0     0&lt;br/&gt;
          mirror-1                  ONLINE       0     0     0&lt;br/&gt;
            mds9_sdo-mdt_fouo6_sdo  ONLINE       0     0     0&lt;br/&gt;
            mds9_sdp-mdt_fouo6_sdp  ONLINE       0     0     0&lt;/p&gt;

&lt;p&gt;Would it be safe (and would it fix the problem) to expand the pool by adding another mirror vdev?&lt;/p&gt;

&lt;p&gt;zpool add lustre-mdt0 mirror /dev/exp_sdq/mdt_fouo6exp_sdq /dev/exp_sdt/mdt_fouo6exp_sdt&lt;/p&gt;

&lt;p&gt;(This is probably the same issue as &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-8856&quot; title=&quot;ZFS-MDT 100% full. Cannot delete files.&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-8856&quot;&gt;&lt;del&gt;LU-8856&lt;/del&gt;&lt;/a&gt;, so feel free to merge them if it makes sense.)&lt;/p&gt;</description>
                <environment>Centos 6, Lustre from llnl chaos branch</environment>
        <key id="42499">LU-8953</key>
            <summary>ZFS-MDT 100% full. Request for verification of plan to fix</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="1" iconUrl="https://jira.whamcloud.com/images/icons/priorities/blocker.svg">Blocker</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="10000">Done</resolution>
                                        <assignee username="utopiabound">Nathaniel Clark</assignee>
                                    <reporter username="zino">Peter Bortas</reporter>
                        <labels>
                    </labels>
                <created>Mon, 19 Dec 2016 15:13:47 +0000</created>
                <updated>Tue, 20 Dec 2016 15:37:50 +0000</updated>
                            <resolved>Tue, 20 Dec 2016 15:37:50 +0000</resolved>
                                    <version>Lustre 2.5.3</version>
                                                        <due></due>
                            <votes>0</votes>
                                    <watches>4</watches>
                                                                            <comments>
                            <comment id="178356" author="zino" created="Mon, 19 Dec 2016 15:35:33 +0000"  >&lt;p&gt;Addendum: our intention is to expand the pool without shutting down Lustre. Either way &lt;em&gt;should&lt;/em&gt; be fine, but expanding it live, giving Lustre at least a chance to complete any outstanding operations, feels like the sounder approach. Please let us know if you disagree.&lt;/p&gt;</comment>
                            <comment id="178372" author="pjones" created="Mon, 19 Dec 2016 16:21:41 +0000"  >&lt;p&gt;Nathaniel&lt;/p&gt;

&lt;p&gt;Could you please advise?&lt;/p&gt;

&lt;p&gt;Thanks&lt;/p&gt;

&lt;p&gt;Peter&lt;/p&gt;</comment>
                            <comment id="178434" author="utopiabound" created="Mon, 19 Dec 2016 20:41:16 +0000"  >&lt;p&gt;&lt;a href=&quot;https://jira.whamcloud.com/secure/ViewProfile.jspa?name=zino&quot; class=&quot;user-hover&quot; rel=&quot;zino&quot;&gt;zino&lt;/a&gt;,&lt;/p&gt;

&lt;p&gt;I know that taking the FS down and growing the MDT will alleviate your issue.  I &lt;em&gt;think&lt;/em&gt; growing the MDT live will be okay, but I would want to double-check (run a test locally) before I could bless that course of action.&lt;/p&gt;

&lt;p&gt;&amp;#8211;&lt;br/&gt;
&lt;a href=&quot;https://jira.whamcloud.com/secure/ViewProfile.jspa?name=utopiabound&quot; class=&quot;user-hover&quot; rel=&quot;utopiabound&quot;&gt;utopiabound&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="178501" author="zino" created="Tue, 20 Dec 2016 08:57:31 +0000"  >&lt;p&gt;Nathaniel,&lt;/p&gt;

&lt;p&gt;When do you think you could run that test?&lt;/p&gt;</comment>
                            <comment id="178505" author="zino" created="Tue, 20 Dec 2016 11:56:38 +0000"  >&lt;p&gt;After talking through the failure scenarios of shutting down the FS in this state, we decided to do the expansion after unmounting, since it&apos;s the procedure you know works. It seems to have worked, with no failures detected so far. For the record, this is what we did:&lt;/p&gt;

&lt;p&gt;umount lustre-mdt0/fouo6&lt;br/&gt;
zpool add lustre-mdt0 mirror /dev/exp_sdq/mdt_fouo6exp_sdq /dev/exp_sdt/mdt_fouo6exp_sdt&lt;br/&gt;
mount -t lustre lustre-mdt0/fouo6 /mnt/lustre/local/fouo6&lt;/p&gt;</comment>
                            <comment id="178524" author="utopiabound" created="Tue, 20 Dec 2016 15:37:50 +0000"  >&lt;p&gt;&lt;a href=&quot;https://jira.whamcloud.com/secure/ViewProfile.jspa?name=zino&quot; class=&quot;user-hover&quot; rel=&quot;zino&quot;&gt;zino&lt;/a&gt;,&lt;/p&gt;

&lt;p&gt;I&apos;m glad that worked for you.  I&apos;ll close this bug, seeing as you&apos;ve completed your expansion of the MDT.  If I&apos;m mistaken and you need something else from this bug, please feel free to re-open.&lt;/p&gt;

&lt;p&gt;&amp;#8211;&lt;br/&gt;
&lt;a href=&quot;https://jira.whamcloud.com/secure/ViewProfile.jspa?name=utopiabound&quot; class=&quot;user-hover&quot; rel=&quot;utopiabound&quot;&gt;utopiabound&lt;/a&gt;&lt;/p&gt;</comment>
                    </comments>
                    <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzyypj:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>