<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 01:51:31 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary, append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92">
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-5441] Client logs broken after update to 2.5.2</title>
                <link>https://jira.whamcloud.com/browse/LU-5441</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;I updated a Lustre installation from Lustre 2.1.6 to 2.5.2. Now something seems to be wrong with the llogs. After running writeconf on all targets I can mount a client &lt;b&gt;once&lt;/b&gt;, but after unmounting that client again, all subsequent attempts to mount a client fail with errors like this:&lt;/p&gt;

&lt;p&gt;2014-07-31T12:57:12.198389+02:00 l3mds1 &amp;lt;kern.err&amp;gt; kernel:LustreError: 68983:0:(obd_mount.c:1323:lustre_fill_super()) Unable to mount  (-5)&lt;br/&gt;
2014-07-31T12:57:13.198412+02:00 l3mds1 &amp;lt;kern.err&amp;gt; kernel:LustreError: 15c-8: MGC10.3.6.21@o2ib: The configuration from log &apos;lustre3-client&apos; failed (-5). This may be the result of communication errors between this node and the MGS, a bad configuration, or other errors. See the syslog for more information.&lt;br/&gt;
2014-07-31T12:57:13.198428+02:00 l3mds1 &amp;lt;kern.err&amp;gt; kernel:LustreError: 69606:0:(llite_lib.c:1046:ll_fill_super()) Unable to process log: -5&lt;/p&gt;

&lt;p&gt;Can the client logs be brought back into a consistent state? Is writeconf not supposed to fix any llog issues?&lt;/p&gt;</description>
                <environment></environment>
        <key id="25841">LU-5441</key>
            <summary>Client logs broken after update to 2.5.2</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                <priority id="3" iconUrl="https://jira.whamcloud.com/images/icons/priorities/major.svg">Major</priority>
                <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                <statusCategory id="3" key="done" colorName="success"/>
                <resolution id="6">Not a Bug</resolution>
                <assignee username="pjones">Peter Jones</assignee>
                <reporter username="omangold">Oliver Mangold</reporter>
                <labels>
                </labels>
                <created>Fri, 1 Aug 2014 06:55:52 +0000</created>
                <updated>Tue, 19 Aug 2014 20:58:05 +0000</updated>
                <resolved>Tue, 19 Aug 2014 20:58:04 +0000</resolved>
                <version>Lustre 2.5.2</version>
                <due></due>
                <votes>0</votes>
                <watches>4</watches>
                <comments>
                            <comment id="90624" author="pjones" created="Fri, 1 Aug 2014 12:33:47 +0000"  >&lt;p&gt;Sorry to hear about the troubles with your upgrade Oliver.&lt;/p&gt;</comment>
                            <comment id="90707" author="efocht" created="Mon, 4 Aug 2014 16:18:00 +0000"  >&lt;p&gt;It turned out that the llogs were incomplete: only 9 of the 48 OSTs actually had llogs on the MGS, and the client llog was incomplete as well. The procedure that led to the trouble was:&lt;/p&gt;
&lt;ul class=&quot;alternate&quot; type=&quot;square&quot;&gt;
	&lt;li&gt;delete llogs with tunefs.lustre --writeconf on MDT and OSTs&lt;/li&gt;
	&lt;li&gt;mount MDT&lt;/li&gt;
	&lt;li&gt;mount all OSTs at once (more or less simultaneously, as controlled by HA)&lt;/li&gt;
&lt;/ul&gt;


&lt;p&gt;Correct results were achieved when the third step was changed to mount each OST in order, one by one.&lt;/p&gt;
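&lt;p&gt;As a sketch of that working sequence (device and mount-point names below are illustrative placeholders, not taken from this system):&lt;/p&gt;

&lt;pre&gt;
# Erase the configuration llogs on every target first
tunefs.lustre --writeconf /dev/mdt_dev     # on the MDS
tunefs.lustre --writeconf /dev/ost_dev     # on each OSS, for every OST

# Mount the MDT first so the MGS can regenerate the logs
mount -t lustre /dev/mdt_dev /mnt/mdt

# Then mount the OSTs one by one, in index order,
# letting each register with the MGS before starting the next
mount -t lustre /dev/ost0_dev /mnt/ost0
mount -t lustre /dev/ost1_dev /mnt/ost1
# ... repeat for the remaining OSTs
&lt;/pre&gt;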

&lt;p&gt;While there is still the question: &quot;why does this happen?&quot;, I think we can close this ticket.&lt;/p&gt;</comment>
                            <comment id="91982" author="jfc" created="Tue, 19 Aug 2014 20:58:05 +0000"  >&lt;p&gt;Thanks Erich!&lt;/p&gt;

&lt;p&gt;Best regards,&lt;br/&gt;
~ jfc.&lt;/p&gt;</comment>
                    </comments>
                    <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                    <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzwssv:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>15151</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                </customfields>
    </item>
</channel>
</rss>