<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 03:32:04 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92">
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-17034] memory corruption caused by bug in qmt_seed_glbe_all</title>
                <link>https://jira.whamcloud.com/browse/LU-17034</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;The code in qmt_seed_glbe_all doesn&apos;t handle the case where an OST index is larger than the number of OSTs. For example, consider a system with 4 OSTs with indexes 0001, 0002, 00c9 and 00ca. As can be seen from the code below, index 00c9 causes a write outside lqeg_arr, which has 64 elements by default.&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;void qmt_seed_glbe_all(const struct lu_env *env, struct lqe_glbl_data *lgd,
                       bool qunit, bool edquot)
{
...
                for (j = 0; j &amp;lt; slaves_cnt; j++) {
                        idx = qmt_sarr_get_idx(qpi, j);
                        LASSERT(idx &amp;gt;= 0);

                        if (edquot) {
                                int lge_edquot, new_edquot, edquot_nu;

                                lge_edquot = lgd-&amp;gt;lqeg_arr[idx].lge_edquot;
                                edquot_nu = lgd-&amp;gt;lqeg_arr[idx].lge_edquot_nu;
                                new_edquot = lqe-&amp;gt;lqe_edquot;

                                if (lge_edquot == new_edquot ||
                                    (edquot_nu &amp;amp;&amp;amp; lge_edquot == 1))
                                        goto qunit_lbl;
                                lgd-&amp;gt;lqeg_arr[idx].lge_edquot = new_edquot;&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;Three conditions are required to trigger this bug:&lt;/p&gt;
&lt;ul&gt;
	&lt;li&gt;quota enabled (quota_slave.enabled != 0) and quota limits set for at least one ID (user/group/project)&lt;/li&gt;
	&lt;li&gt;at least one OST pool in the system&lt;/li&gt;
	&lt;li&gt;at least one OST in the OST pool with index &amp;gt; 64 (QMT_INIT_SLV_CNT)&lt;/li&gt;
&lt;/ul&gt;


&lt;p&gt;This bug may cause different kinds of kernel panics; on the system where it occurred most often, it corrupted the UUID and NID rhashtables in 80% of all cases. All of these panics are described in &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-16930&quot; title=&quot;BUG: nid_keycmp+0x6&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-16930&quot;&gt;&lt;del&gt;LU-16930&lt;/del&gt;&lt;/a&gt;. By default the size of lqeg_arr is 64 elements * 16 bytes = 1024 bytes, so an out-of-bounds write would most likely corrupt the neighboring kmalloc-1024 region.&lt;/p&gt;</description>
                <environment></environment>
        <key id="77477">LU-17034</key>
            <summary>memory corruption caused by bug in qmt_seed_glbe_all</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="2" iconUrl="https://jira.whamcloud.com/images/icons/priorities/critical.svg">Critical</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="scherementsev">Sergey Cheremencev</assignee>
                                    <reporter username="scherementsev">Sergey Cheremencev</reporter>
                        <labels>
                    </labels>
                <created>Wed, 16 Aug 2023 13:30:29 +0000</created>
                <updated>Wed, 24 Jan 2024 17:05:18 +0000</updated>
                            <resolved>Sat, 18 Nov 2023 21:53:11 +0000</resolved>
                                                    <fixVersion>Lustre 2.16.0</fixVersion>
                                        <due></due>
                            <votes>0</votes>
                                    <watches>9</watches>
                                                                            <comments>
                            <comment id="383714" author="gerrit" created="Fri, 25 Aug 2023 12:23:49 +0000"  >&lt;p&gt;&quot;Sergey Cheremencev &amp;lt;scherementsev@ddn.com&amp;gt;&quot; uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/c/fs/lustre-release/+/52094&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/c/fs/lustre-release/+/52094&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-17034&quot; title=&quot;memory corruption caused by bug in qmt_seed_glbe_all&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-17034&quot;&gt;&lt;del&gt;LU-17034&lt;/del&gt;&lt;/a&gt; quota: lqeg_arr memmory corruption&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 3db2668fd0e161875ed20ac8b14184de1a8046b9&lt;/p&gt;</comment>
                            <comment id="384175" author="yujian" created="Wed, 30 Aug 2023 04:52:05 +0000"  >&lt;p&gt;With sparse OST indexes &quot;OST_INDEX_LIST=&lt;span class=&quot;error&quot;&gt;&amp;#91;0,10,20,40,55,60,80&amp;#93;&lt;/span&gt;&quot; (for OSTCOUNT=7) and &quot;ENABLE_QUOTA=yes&quot;, performance-sanity test 2 and sanity-benchmark test dbench crashed on master branch:&lt;br/&gt;
&lt;a href=&quot;https://testing.whamcloud.com/test_sets/aa85d42f-f125-48a0-9b9f-c001b6ec3349&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.whamcloud.com/test_sets/aa85d42f-f125-48a0-9b9f-c001b6ec3349&lt;/a&gt;&lt;br/&gt;
&lt;a href=&quot;https://testing.whamcloud.com/test_sets/2a8e95b6-fb76-40fe-bebc-809f9a5959df&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.whamcloud.com/test_sets/2a8e95b6-fb76-40fe-bebc-809f9a5959df&lt;/a&gt;&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[  265.154037] Lustre: DEBUG MARKER: == sanity-benchmark test dbench: dbench ================== 01:14:05 (1693358045)
[  265.448184] LustreError: 16616:0:(qmt_entry.c:865:qmt_adjust_edquot_qunit_notify()) ASSERTION( idx &amp;lt;= lgd-&amp;gt;lqeg_num_used ) failed: 
[  265.450565] LustreError: 16616:0:(qmt_entry.c:865:qmt_adjust_edquot_qunit_notify()) LBUG
[  265.452116] Pid: 16616, comm: mdt_rdpg00_003 4.18.0-477.15.1.el8_lustre.x86_64 #1 SMP Tue Aug 1 06:59:39 UTC 2023
[  265.454013] Call Trace TBD:
[  265.454761] [&amp;lt;0&amp;gt;] libcfs_call_trace+0x6f/0xa0 [libcfs]
[  265.455838] [&amp;lt;0&amp;gt;] lbug_with_loc+0x3f/0x70 [libcfs]
[  265.456807] [&amp;lt;0&amp;gt;] qmt_adjust_edquot_qunit_notify+0x4e1/0x4f0 [lquota]
[  265.458122] [&amp;lt;0&amp;gt;] qmt_dqacq0+0x1b00/0x2430 [lquota]
[  265.459108] [&amp;lt;0&amp;gt;] qmt_intent_policy+0x942/0xfe0 [lquota]
[  265.460151] [&amp;lt;0&amp;gt;] mdt_intent_opc+0xa66/0xc30 [mdt]
[  265.461270] [&amp;lt;0&amp;gt;] mdt_intent_policy+0xe8/0x460 [mdt]
[  265.462259] [&amp;lt;0&amp;gt;] ldlm_lock_enqueue+0x455/0xaf0 [ptlrpc]
[  265.463809] [&amp;lt;0&amp;gt;] ldlm_handle_enqueue+0x645/0x1870 [ptlrpc]
[  265.464983] [&amp;lt;0&amp;gt;] tgt_enqueue+0xa8/0x230 [ptlrpc]
[  265.466042] [&amp;lt;0&amp;gt;] tgt_request_handle+0xd20/0x19c0 [ptlrpc]
[  265.467193] [&amp;lt;0&amp;gt;] ptlrpc_server_handle_request+0x31d/0xbc0 [ptlrpc]
[  265.468460] [&amp;lt;0&amp;gt;] ptlrpc_main+0xc91/0x15a0 [ptlrpc]
[  265.469535] [&amp;lt;0&amp;gt;] kthread+0x134/0x150
[  265.470333] [&amp;lt;0&amp;gt;] ret_from_fork+0x35/0x40
[  265.471167] Kernel panic - not syncing: LBUG
[  265.472006] CPU: 0 PID: 16616 Comm: mdt_rdpg00_003 Kdump: loaded Tainted: G           OE    --------- -  - 4.18.0-477.15.1.el8_lustre.x86_64 #1
[  265.474318] Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011
[  265.475391] Call Trace:
[  265.475914]  dump_stack+0x41/0x60
[  265.476588]  panic+0xe7/0x2ac
[  265.477194]  ? ret_from_fork+0x35/0x40
[  265.477931]  lbug_with_loc.cold.8+0x18/0x18 [libcfs]
[  265.478883]  qmt_adjust_edquot_qunit_notify+0x4e1/0x4f0 [lquota]
[  265.480027]  qmt_dqacq0+0x1b00/0x2430 [lquota]
[  265.480909]  ? qmt_intent_policy+0x942/0xfe0 [lquota]
[  265.481906]  qmt_intent_policy+0x942/0xfe0 [lquota]
[  265.482863]  mdt_intent_opc+0xa66/0xc30 [mdt]
[  265.483752]  ? lprocfs_counter_add+0x12a/0x1a0 [obdclass]
[  265.485025]  mdt_intent_policy+0xe8/0x460 [mdt]
[  265.485920]  ldlm_lock_enqueue+0x455/0xaf0 [ptlrpc]
[  265.486933]  ? cfs_hash_bd_add_locked+0x1f/0x90 [libcfs]
[  265.487962]  ? cfs_hash_multi_bd_lock+0xa0/0xa0 [libcfs]
[  265.488978]  ldlm_handle_enqueue+0x645/0x1870 [ptlrpc]
[  265.490054]  tgt_enqueue+0xa8/0x230 [ptlrpc]
[  265.490977]  tgt_request_handle+0xd20/0x19c0 [ptlrpc]
[  265.492024]  ptlrpc_server_handle_request+0x31d/0xbc0 [ptlrpc]
[  265.493246]  ? lprocfs_counter_add+0x12a/0x1a0 [obdclass]
[  265.494312]  ptlrpc_main+0xc91/0x15a0 [ptlrpc]
[  265.495246]  ? __schedule+0x2d9/0x870
[  265.495972]  ? ptlrpc_wait_event+0x590/0x590 [ptlrpc]
[  265.497025]  kthread+0x134/0x150
[  265.497677]  ? set_kthread_struct+0x50/0x50
[  265.498474]  ret_from_fork+0x35/0x40
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
                            <comment id="384928" author="gerrit" created="Wed, 6 Sep 2023 09:35:38 +0000"  >&lt;p&gt;&quot;Sergey Cheremencev &amp;lt;scherementsev@ddn.com&amp;gt;&quot; uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/c/fs/lustre-release/+/52293&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/c/fs/lustre-release/+/52293&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-17034&quot; title=&quot;memory corruption caused by bug in qmt_seed_glbe_all&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-17034&quot;&gt;&lt;del&gt;LU-17034&lt;/del&gt;&lt;/a&gt; tests: memory corruption in PQ&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 3cf0ee70e918030f33f2efba4f7a9974afe96c9f&lt;/p&gt;</comment>
                            <comment id="393490" author="gerrit" created="Sat, 18 Nov 2023 21:40:41 +0000"  >&lt;p&gt;&quot;Oleg Drokin &amp;lt;green@whamcloud.com&amp;gt;&quot; merged in patch &lt;a href=&quot;https://review.whamcloud.com/c/fs/lustre-release/+/52094/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/c/fs/lustre-release/+/52094/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-17034&quot; title=&quot;memory corruption caused by bug in qmt_seed_glbe_all&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-17034&quot;&gt;&lt;del&gt;LU-17034&lt;/del&gt;&lt;/a&gt; quota: lqeg_arr memmory corruption&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: 67f90e42889ff22d574e82cc647f6076e48c65a5&lt;/p&gt;</comment>
                            <comment id="393522" author="pjones" created="Sat, 18 Nov 2023 21:53:11 +0000"  >&lt;p&gt;Landed for 2.16&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10010">
                    <name>Duplicate</name>
                                                                <inwardlinks description="is duplicated by">
                                        <issuelink>
            <issuekey id="76757">LU-16930</issuekey>
        </issuelink>
                            </inwardlinks>
                                    </issuelinktype>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                            <outwardlinks description="is related to ">
                                                        </outwardlinks>
                                                                <inwardlinks description="is related to">
                                        <issuelink>
            <issuekey id="77492">LU-17037</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="77465">LU-17033</issuekey>
        </issuelink>
                            </inwardlinks>
                                    </issuelinktype>
                    </issuelinks>
                <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|i03t4v:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>