<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 03:13:38 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92">
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
<language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-14888] We are uncertain whether we have hit this bug</title>
                <link>https://jira.whamcloud.com/browse/LU-14888</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;We are seeing an issue in which OSTs are frequently being disconnected from the client&lt;/p&gt;

&lt;p&gt;Our Lustre servers are OSS1-4 and MDS1-2, running version 2.12.3 as provided by HPE&lt;/p&gt;

&lt;p&gt;Our clients are running Lustre client 2.12.2&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;On OSS1, we noticed many disconnections and reconnections of Lustre clients from various OSTs as shown below.&lt;/p&gt;

&lt;p&gt;&lt;span class=&quot;image-wrap&quot; style=&quot;&quot;&gt;&lt;img src=&quot;https://mail.escl.com.hk/owa/service.svc/s/GetFileAttachment?id=AAMkAGRjNDBjYTNiLTQ5ZTMtNDUzMi1hMjdhLTNlYjYyNDAxOTBkOQBGAAAAAABwRbFuLU4EQbGU%2F7qhZ3%2FyBwAyf5V%2Bs4llR65cwAnqZhGPAAAAAAEMAAAyf5V%2Bs4llR65cwAnqZhGPAACPrYjmAAABEgAQAABPGVaKajdOpyeS%2BVWcBFM%3D&amp;amp;X-OWA-CANARY=njoW7VbXh06XtqA8zk-UDO6Lx0cXUdkI3LLSDGmP8vIqmw-3NskF2BNhGIa5wLplHRV8EFxnZT4.&quot; height=&quot;570&quot; width=&quot;945&quot; style=&quot;border: 0px solid black&quot; /&gt;&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;In particular, a bulk IO read error was reported for the client at 192.168.3.182 (NFS2).&lt;/p&gt;


&lt;p&gt;&lt;span class=&quot;image-wrap&quot; style=&quot;&quot;&gt;&lt;img src=&quot;https://mail.escl.com.hk/owa/service.svc/s/GetFileAttachment?id=AAMkAGRjNDBjYTNiLTQ5ZTMtNDUzMi1hMjdhLTNlYjYyNDAxOTBkOQBGAAAAAABwRbFuLU4EQbGU%2F7qhZ3%2FyBwAyf5V%2Bs4llR65cwAnqZhGPAAAAAAEMAAAyf5V%2Bs4llR65cwAnqZhGPAACPrYjmAAABEgAQALdph2DCwaJAhp%2Bg3F%2BH5Fk%3D&amp;amp;X-OWA-CANARY=njoW7VbXh06XtqA8zk-UDO6Lx0cXUdkI3LLSDGmP8vIqmw-3NskF2BNhGIa5wLplHRV8EFxnZT4.&quot; height=&quot;525&quot; width=&quot;1590&quot; style=&quot;border: 0px solid black&quot; /&gt;&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;NFS2 also came under very high load on the morning of 15 Jun, and we rebooted it.&lt;/p&gt;

&lt;p&gt;Since then it has been unable to mount any Lustre file system; the error below appears in NFS2&#8217;s dmesg&lt;/p&gt;


&lt;p&gt;&lt;span class=&quot;image-wrap&quot; style=&quot;&quot;&gt;&lt;img src=&quot;https://mail.escl.com.hk/owa/service.svc/s/GetFileAttachment?id=AAMkAGRjNDBjYTNiLTQ5ZTMtNDUzMi1hMjdhLTNlYjYyNDAxOTBkOQBGAAAAAABwRbFuLU4EQbGU%2F7qhZ3%2FyBwAyf5V%2Bs4llR65cwAnqZhGPAAAAAAEMAAAyf5V%2Bs4llR65cwAnqZhGPAACPrYjmAAABEgAQAHD8S8Du9OxBpRzoHvpx154%3D&amp;amp;X-OWA-CANARY=njoW7VbXh06XtqA8zk-UDO6Lx0cXUdkI3LLSDGmP8vIqmw-3NskF2BNhGIa5wLplHRV8EFxnZT4.&quot; height=&quot;531&quot; width=&quot;1387&quot; style=&quot;border: 0px solid black&quot; /&gt;&lt;/span&gt;&lt;/p&gt;


&lt;p&gt;Upon investigation, we collected sosreports from MDS1-2, OSS1-2, and one of the compute nodes that is reporting OST disconnections; they are included in the link below&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://drive.google.com/open?id=1_tR7DiXCjzXWEd5ctPjPA5NFn_FweFvq&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://drive.google.com/open?id=1_tR7DiXCjzXWEd5ctPjPA5NFn_FweFvq&lt;/a&gt;&lt;/p&gt;


&lt;hr /&gt;

&lt;p&gt;We suspect that we have hit the bug below, &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-13719&quot; title=&quot;lov tgt 36 not cleaned! deathrow=0, lovrc=1&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-13719&quot;&gt;&lt;del&gt;LU-13719&lt;/del&gt;&lt;/a&gt;, and we would like to confirm whether that is the case.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://jira.whamcloud.com/browse/LU-13719&quot; class=&quot;external-link&quot; rel=&quot;nofollow&quot;&gt;https://jira.whamcloud.com/browse/LU-13719&lt;/a&gt;&lt;/p&gt;</description>
                <environment>Client: 2.12.2&lt;br/&gt;
Server: 2.12.3, from HPE Lustre</environment>
        <key id="65406">LU-14888</key>
            <summary>We are uncertain whether we have hit this bug</summary>
                <type id="9" iconUrl="https://jira.whamcloud.com/images/icons/issuetypes/undefined.png">Question/Request</type>
                                            <priority id="3" iconUrl="https://jira.whamcloud.com/images/icons/priorities/major.svg">Major</priority>
                        <status id="1" iconUrl="https://jira.whamcloud.com/images/icons/statuses/open.png" description="The issue is open and ready for the assignee to start work on it.">Open</status>
                    <statusCategory id="2" key="new" colorName="default"/>
                                    <resolution id="-1">Unresolved</resolution>
                                        <assignee username="pjones">Peter Jones</assignee>
                                    <reporter username="itsupport.cgs">Hong Kong University</reporter>
                        <labels>
                    </labels>
                <created>Tue, 27 Jul 2021 16:03:09 +0000</created>
                <updated>Thu, 29 Jul 2021 14:49:22 +0000</updated>
                                            <version>Lustre 2.12.3</version>
                                                        <due></due>
                            <votes>0</votes>
                                    <watches>2</watches>
                                                                            <comments>
                            <comment id="308660" author="pjones" created="Wed, 28 Jul 2021 16:08:45 +0000"  >&lt;p&gt;To be clear - do you mean that this is for the HPE Clusterstor distribution or just the unpatched 2.12.3 that HPE passed on to you? &lt;/p&gt;</comment>
                            <comment id="308728" author="itsupport.cgs" created="Thu, 29 Jul 2021 01:28:38 +0000"  >&lt;p&gt;Scalable_Storage_with_Lustre_2.12.3_for_Gen9_and_Gen10_systems_P9L65-10015.&lt;/p&gt;</comment>
                            <comment id="308785" author="pjones" created="Thu, 29 Jul 2021 14:49:22 +0000"  >&lt;p&gt;I am not familiar with that at all so cannot offer authoritative advice but, generally speaking, getting a more current version from HPE could be advantageous.&lt;/p&gt;</comment>
                    </comments>
                    <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                    <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|i020an:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                </customfields>
    </item>
</channel>
</rss>