<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 01:14:09 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-1168] changing allocation of ost</title>
                <link>https://jira.whamcloud.com/browse/LU-1168</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;We have added two new OSS servers and would like to redistribute the OSTs across them.&lt;br/&gt;
The old OSS nodes are:&lt;/p&gt;
&lt;ul class=&quot;alternate&quot; type=&quot;square&quot;&gt;
	&lt;li&gt;10.121.13.31@tcp&lt;/li&gt;
	&lt;li&gt;10.121.13.62@tcp&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The new OSS nodes are:&lt;/p&gt;
&lt;ul class=&quot;alternate&quot; type=&quot;square&quot;&gt;
	&lt;li&gt;10.121.13.59@tcp&lt;/li&gt;
	&lt;li&gt;10.121.13.28@tcp&lt;/li&gt;
&lt;/ul&gt;


&lt;p&gt;I have unmounted all the clients, the MGT, the MDT, and the OSTs. I want to change the location of ost06 from 10.121.13.62@tcp to 10.121.13.28@tcp. The original configuration of ost06 was:&lt;br/&gt;
Parameters: mgsnode=10.121.13.31@tcp failover.node=10.121.13.31@tcp ost.quota_type=ug&lt;br/&gt;
and ost06 was hosted on 10.121.13.62@tcp.&lt;/p&gt;

&lt;p&gt;I used tunefs.lustre to modify the OST configuration with this command line:&lt;br/&gt;
tunefs.lustre --erase-param --mgsnode=10.121.13.31@tcp --mgsnode=10.121.13.62@tcp --param=&quot;failover.node=10.121.13.59@tcp ost_quota=ug&quot; --writeconf /dev/mapper/ost06p1&lt;br/&gt;
I want to host ost06 on 10.121.13.28@tcp with failover to 10.121.13.59@tcp.&lt;/p&gt;


&lt;p&gt;When I start the servers up, I see in the logs that the MDS tries to connect to OST0006 on 10.121.13.62@tcp and reports OST connection errors.&lt;/p&gt;

&lt;p&gt;Could you provide the correct procedure to relocate ost06?&lt;/p&gt;

&lt;p&gt;Thanks in advance.&lt;/p&gt;</description>
                <environment>Red Hat Enterprise Linux 5.7 + lustre 1.8.7-wc1</environment>
        <key id="13420">LU-1168</key>
            <summary>changing allocation of ost</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="1" iconUrl="https://jira.whamcloud.com/images/icons/priorities/blocker.svg">Blocker</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="cliffw">Cliff White</assignee>
                                    <reporter username="lustre.support">Supporto Lustre Jnet2000</reporter>
                        <labels>
                            <label>ldiskfs</label>
                    </labels>
                <created>Sat, 3 Mar 2012 07:05:30 +0000</created>
                <updated>Tue, 6 Mar 2012 11:13:46 +0000</updated>
                            <resolved>Tue, 6 Mar 2012 11:13:46 +0000</resolved>
                                    <version>Lustre 1.8.7</version>
                                                        <due></due>
                            <votes>0</votes>
                                    <watches>1</watches>
                                                                            <comments>
                            <comment id="30421" author="cliffw" created="Sat, 3 Mar 2012 12:28:51 +0000"  >&lt;p&gt;You also need to run tunefs.lustre --writeconf on the MDS; have you done this? See Section 14.1.4 in the Lustre Manual, &quot;Changing a Server NID&quot;.&lt;/p&gt;</comment>
                            <comment id="30437" author="lustre.support" created="Sun, 4 Mar 2012 06:17:19 +0000"  >&lt;p&gt;Yes, thanks. We followed this procedure:&lt;/p&gt;

&lt;p&gt;1. Unmount all the clients and all the servers.&lt;br/&gt;
2. Run tunefs.lustre --writeconf on the MDT and on all the OSTs.&lt;br/&gt;
3. On ost06, run:&lt;br/&gt;
tunefs.lustre --erase-param --mgsnode=10.121.13.31@tcp --mgsnode=10.121.13.62@tcp --param=&quot;failover.node=10.121.13.59@tcp ost.quota_type=ug&quot; --writeconf /dev/mapper/ost06p1&lt;br/&gt;
4. Mount the MGS.&lt;br/&gt;
5. Mount the MDT and check with &quot;lctl dl&quot; that the OSCs are NOT present.&lt;br/&gt;
6. Mount all the OSTs.&lt;br/&gt;
7. Check with &quot;lctl dl&quot; that all the OSCs are present.&lt;br/&gt;
8. Mount the clients.&lt;/p&gt;

&lt;p&gt;Everything worked.&lt;/p&gt;

&lt;p&gt;Thanks, please close this issue.&lt;/p&gt;</comment>
                            <comment id="30612" author="cliffw" created="Tue, 6 Mar 2012 11:13:46 +0000"  >&lt;p&gt;Please reopen if you have further issues relating to this.&lt;/p&gt;</comment>
                    </comments>
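For reference, the writeconf-based relocation procedure worked out in the comments above can be sketched as a shell outline. The NIDs, the ost06 device path, and the tunefs.lustre parameters are taken from this ticket; the mount points (/mnt/mgs, /mnt/mdt, /mnt/ost06, /mnt/lustre), the MDT/OST device names, and the filesystem name "lustre" are assumptions for illustration. This is a sketch of the generic Lustre 1.8 "Changing a Server NID" procedure, not a verified runbook; each command must be run on the server that hosts the given target.

```shell
# 1. Unmount every client, then all targets (OSTs, then MDT, then MGS).
umount /mnt/lustre              # on each client
umount /mnt/ost06               # on each OSS, for every OST it serves
umount /mnt/mdt                 # on the MDS
umount /mnt/mgs                 # on the MGS

# 2. Regenerate the configuration logs on the MDT and every OST,
#    not only on the relocated one (the key point of the resolution).
tunefs.lustre --writeconf /dev/mapper/mdt       # assumed device name
tunefs.lustre --writeconf /dev/mapper/ostNN     # repeat for each OST

# 3. On the relocated OST, rewrite its parameters as in the ticket.
tunefs.lustre --erase-param \
    --mgsnode=10.121.13.31@tcp --mgsnode=10.121.13.62@tcp \
    --param="failover.node=10.121.13.59@tcp ost.quota_type=ug" \
    --writeconf /dev/mapper/ost06p1

# 4-8. Remount in order: MGS, MDT, OSTs, then clients,
#      checking 'lctl dl' after the MDT and again after the OSTs.
mount -t lustre /dev/mapper/mgs /mnt/mgs
mount -t lustre /dev/mapper/mdt /mnt/mdt
lctl dl                         # OSC devices should NOT be listed yet
mount -t lustre /dev/mapper/ost06p1 /mnt/ost06  # now on 10.121.13.28@tcp
lctl dl                         # all OSC devices should now be listed
mount -t lustre 10.121.13.31@tcp:/lustre /mnt/lustre   # on each client
```

The cause of the original MDS connection errors to 10.121.13.62@tcp was that --writeconf had been run only on the moved OST, so the MDT's configuration log still held the old NID; regenerating the logs on the MDT as well (step 2) resolves it.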
                    <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10040" key="com.atlassian.jira.plugin.system.customfieldtypes:labels">
                        <customfieldname>Epic</customfieldname>
                        <customfieldvalues>
                                        <label>server</label>
    
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzvh7r:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>6438</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10020"><![CDATA[1]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>