<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 02:30:10 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
<language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-9884] lustre servers running rhel6.8 miss ldiskfs kernel patches</title>
                <link>https://jira.whamcloud.com/browse/LU-9884</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;In recent incidents (&lt;a href=&quot;https://jira.whamcloud.com/browse/LU-9410&quot; title=&quot;on-disk bitmap corrupted&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-9410&quot;&gt;&lt;del&gt;LU-9410&lt;/del&gt;&lt;/a&gt;) we found that our Lustre 2.7.3 servers on CentOS 6.8 were missing one ldiskfs kernel patch: &quot;ext4-corrupted-inode-block-bitmaps-handling-patches.patch&quot;.&lt;/p&gt;

&lt;p&gt;So, I compared ldiskfs-2.6-rhel6.7.series with ldiskfs-2.6-rhel6.8.series. In addition to the one above, two more ldiskfs kernel patches that landed in 6.7 did not make it into 6.8:&lt;br/&gt;
    rhel6.4/ext4-fix-mbgroups-access.patch&lt;br/&gt;
    rhel6.3/ext4-fix-ext4_mb_add_n_trim.patch&lt;/p&gt;

&lt;p&gt;Please advise whether I need these two patches.&lt;br/&gt;
Lustre 2.9.0 running on rhel6.8 would also be missing these three patches.&lt;/p&gt;</description>
                <environment>Lustre servers on RHEL/CentOS 6.8.</environment>
        <key id="47840">LU-9884</key>
            <summary>lustre servers running rhel6.8 miss ldiskfs kernel patches</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="3" iconUrl="https://jira.whamcloud.com/images/icons/priorities/major.svg">Major</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="ys">Yang Sheng</assignee>
                                    <reporter username="jaylan">Jay Lan</reporter>
                        <labels>
                    </labels>
                <created>Tue, 15 Aug 2017 23:17:04 +0000</created>
                <updated>Wed, 16 Aug 2017 19:13:31 +0000</updated>
                            <resolved>Wed, 16 Aug 2017 19:13:31 +0000</resolved>
                                    <version>Lustre 2.7.0</version>
                    <version>Lustre 2.9.0</version>
                                                        <due></due>
                            <votes>0</votes>
                                    <watches>3</watches>
                                                                            <comments>
                            <comment id="205478" author="pjones" created="Tue, 15 Aug 2017 23:31:15 +0000"  >&lt;p&gt;Yang Sheng&lt;/p&gt;

&lt;p&gt;Can you please advise as to whether the referenced patches are needed for RHEL 6.8?&lt;/p&gt;

&lt;p&gt;Jay&lt;/p&gt;

&lt;p&gt;RHEL 6.x servers are not officially supported with Lustre 2.9.&lt;/p&gt;

&lt;p&gt;Thanks&lt;/p&gt;

&lt;p&gt;Peter&lt;/p&gt;</comment>
                            <comment id="205479" author="jaylan" created="Tue, 15 Aug 2017 23:39:48 +0000"  >&lt;p&gt;Thank you, Peter.&lt;br/&gt;
So, forget about 2.9.0. Our planned server version is 2.10.0 running on CentOS 7.3.&lt;/p&gt;</comment>
                            <comment id="205491" author="ys" created="Wed, 16 Aug 2017 08:58:55 +0000"  >&lt;p&gt;Hi, Jay,&lt;/p&gt;

&lt;p&gt;Looks like &quot;ext4-corrupted-inode-block-bitmaps-handling-patches.patch&quot; has already landed in rhel6.8.&lt;br/&gt;
rhel6.8 has back-ported this change from upstream, so ext4-fix-mbgroups-access.patch is no longer needed.&lt;br/&gt;
As for ext4-fix-ext4_mb_add_n_trim.patch, rhel6.8 has fixed this problem with a different approach, so we removed it from the rhel6.8 series.&lt;/p&gt;

&lt;p&gt;Thanks,&lt;br/&gt;
YangSheng&lt;/p&gt;</comment>
                            <comment id="205539" author="jaylan" created="Wed, 16 Aug 2017 19:11:33 +0000"  >&lt;p&gt;Hi Yang,&lt;/p&gt;

&lt;p&gt;Thanks for your investigation!&lt;/p&gt;

&lt;p&gt;I confirmed that the rhel6.8 code affected by both ext4-fix-mbgroups-access.patch and ext4-fix-ext4_mb_add_n_trim.patch was implemented differently, so the patches are either no longer needed or no longer applicable.&lt;/p&gt;

&lt;p&gt;But for the record, ext4-corrupted-inode-block-bitmaps-handling-patches.patch is not in the rhel 6.8 kernel code nor in the b2_7_fe ldiskfs/kernel_patches/series/ldiskfs-2.6-rhel6.8.series. I added the patch to ldiskfs-2.6-rhel6.8.series for NASA only.&lt;/p&gt;

&lt;p&gt;You addressed my question regarding those two patches. Thanks, and please close the ticket.&lt;/p&gt;</comment>
                            <comment id="205540" author="pjones" created="Wed, 16 Aug 2017 19:13:31 +0000"  >&lt;p&gt;Thanks Jay!&lt;/p&gt;</comment>
                    </comments>
                    <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                    <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzzijr:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10021"><![CDATA[2]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                </customfields>
    </item>
</channel>
</rss>