<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 03:03:01 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-13645] Various data corruptions possible in lustre.</title>
                <link>https://jira.whamcloud.com/browse/LU-13645</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;Two groups of data corruption cases are possible in Lustre; both relate to a lock being cancelled without an osc object assigned to it.&lt;br/&gt;
This is possible for both the DoM and the Lock Ahead cases.&lt;br/&gt;
The Lock Ahead bug has a partial fix - &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-11670&quot; title=&quot;Incorrect size when using lockahead&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-11670&quot;&gt;&lt;del&gt;LU-11670&lt;/del&gt;&lt;/a&gt;/LUS-6747.&lt;/p&gt;

&lt;p&gt;1) The first bug concerns the situation where the check_and_discard function finds a lock without l_ast_data assigned; this blocks discarding the pages from the page cache, so they are left as-is.&lt;br/&gt;
The next lock cancel finds this lock and skips the page discard because no osc object is assigned. The pages can later be served from the page cache by ll_do_fast_read, which relies on page flags and provides data from the cache.&lt;/p&gt;
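The mechanism can be sketched as a tiny stand-alone model (plain Python, not Lustre kernel code; all names and structures here are illustrative only): a cancel that cannot reach the pages leaves the uptodate flag set, and the fast-read path then trusts that flag.

```python
# Toy model of the first bug: a cancelled lock without l_ast_data
# (no osc object) skips the page discard, so a later "fast read"
# trusts the cached uptodate flag and returns stale data.

class Page:
    def __init__(self, data):
        self.data = data
        self.uptodate = True        # fast read trusts this flag

class Lock:
    def __init__(self, ast_data=None):
        self.l_ast_data = ast_data  # osc object, or None

page_cache = {0: Page(b"stale")}

def check_and_discard(lock):
    # Mirrors the reported flaw: without l_ast_data there is no osc
    # object to discard pages through, so the cache is left as-is.
    if lock.l_ast_data is None:
        return                      # pages survive in the cache
    page_cache.clear()

def lock_cancel(lock):
    check_and_discard(lock)

def do_fast_read(index):
    # ll_do_fast_read analogue: serve from cache if the page is uptodate.
    page = page_cache.get(index)
    if page is not None and page.uptodate:
        return page.data
    return b"fresh"                 # would be real I/O under a new lock

lock_cancel(Lock(ast_data=None))    # cancel skips the discard...
print(do_fast_read(0))              # ...so the stale page is returned
```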

&lt;p&gt;For the Lock Ahead case there are no logs or other confirmation, but it looks possible.&lt;br/&gt;
For the DoM case this is confirmed.&lt;br/&gt;
Trace of the second lock cancel:&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;      ldlm_bl_13-35551 [034] 164201.591130: funcgraph_entry:                   |  ll_dom_lock_cancel() {
      ldlm_bl_13-35551 [034] 164201.591132: funcgraph_entry:                   |    cl_env_get() {
      ldlm_bl_13-35551 [034] 164201.591132: funcgraph_entry:        0.054 us   |      _raw_read_lock();
      ldlm_bl_13-35551 [034] 164201.591132: funcgraph_entry:        0.039 us   |      lu_env_refill();
      ldlm_bl_13-35551 [034] 164201.591133: funcgraph_entry:        0.046 us   |      cl_env_init0();
      ldlm_bl_13-35551 [034] 164201.591133: funcgraph_entry:        0.035 us   |      lu_context_enter();
      ldlm_bl_13-35551 [034] 164201.591133: funcgraph_entry:        0.034 us   |      lu_context_enter();
      ldlm_bl_13-35551 [034] 164201.591134: funcgraph_exit:         1.811 us   |    }
      ldlm_bl_13-35551 [034] 164201.591134: funcgraph_entry:                   |    cl_object_flush() {
      ldlm_bl_13-35551 [034] 164201.591134: funcgraph_entry:                   |      lov_object_flush() {
      ldlm_bl_13-35551 [034] 164201.591134: funcgraph_entry:        0.115 us   |        down_read();
      ldlm_bl_13-35551 [034] 164201.591135: funcgraph_entry:                   |        lov_flush_composite() {
      ldlm_bl_13-35551 [034] 164201.591135: funcgraph_entry:                   |          cl_object_flush() {
      ldlm_bl_13-35551 [034] 164201.591135: funcgraph_entry:                   |            mdc_object_flush() {
      ldlm_bl_13-35551 [034] 164201.591136: funcgraph_entry:                   |              mdc_dlm_blocking_ast0() {
      ldlm_bl_13-35551 [034] 164201.591136: funcgraph_entry:                   |                lock_res_and_lock() {
      ldlm_bl_13-35551 [034] 164201.591136: funcgraph_entry:        0.114 us   |                  _raw_spin_lock();
      ldlm_bl_13-35551 [034] 164201.591136: funcgraph_entry:        0.030 us   |                  _raw_spin_lock();
      ldlm_bl_13-35551 [034] 164201.591137: funcgraph_exit:         0.677 us   |                }
      ldlm_bl_13-35551 [034] 164201.591137: funcgraph_entry:        0.031 us   |                unlock_res_and_lock();
      ldlm_bl_13-35551 [034] 164201.591137: funcgraph_exit:         1.363 us   |              }
      ldlm_bl_13-35551 [034] 164201.591137: funcgraph_exit:         1.674 us   |            }
      ldlm_bl_13-35551 [034] 164201.591137: funcgraph_exit:         2.207 us   |          }
      ldlm_bl_13-35551 [034] 164201.591138: funcgraph_exit:         2.596 us   |        }
      ldlm_bl_13-35551 [034] 164201.591138: funcgraph_entry:        0.042 us   |        up_read();
      ldlm_bl_13-35551 [034] 164201.591138: funcgraph_exit:         3.714 us   |      }
      ldlm_bl_13-35551 [034] 164201.591138: funcgraph_exit:         4.279 us   |    }
      ldlm_bl_13-35551 [034] 164201.591138: funcgraph_entry:                   |    cl_env_put() {
      ldlm_bl_13-35551 [034] 164201.591138: funcgraph_entry:        0.034 us   |      lu_context_exit();
      ldlm_bl_13-35551 [034] 164201.591139: funcgraph_entry:        0.030 us   |      lu_context_exit();
      ldlm_bl_13-35551 [034] 164201.591139: funcgraph_entry:        0.030 us   |      _raw_read_lock();
      ldlm_bl_13-35551 [034] 164201.591139: funcgraph_exit:         0.990 us   |    }
      ldlm_bl_13-35551 [034] 164201.591140: funcgraph_exit:         8.253 us   |  }
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;It is easy to see that mdc_dlm_blocking_ast0 bailed out at the beginning, which means the lock either was not granted or had no l_ast_data (i.e. no osc object) assigned. The data was obtained from the page cache later.&lt;/p&gt;


&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;          &amp;lt;...&amp;gt;-40843 [000] 164229.430007: funcgraph_entry:                   |  ll_do_fast_read() {
           &amp;lt;...&amp;gt;-40843 [000] 164229.430009: funcgraph_entry:                   |    generic_file_read_iter() {
           &amp;lt;...&amp;gt;-40843 [000] 164229.430010: funcgraph_entry:        0.044 us   |      _cond_resched();
           &amp;lt;...&amp;gt;-40843 [000] 164229.430010: funcgraph_entry:                   |      pagecache_get_page() {
           &amp;lt;...&amp;gt;-40843 [000] 164229.430010: funcgraph_entry:        0.706 us   |        find_get_entry();
           &amp;lt;...&amp;gt;-40843 [000] 164229.430011: funcgraph_exit:         1.078 us   |      }
           &amp;lt;...&amp;gt;-40843 [000] 164229.430012: funcgraph_entry:                   |      mark_page_accessed() {
           &amp;lt;...&amp;gt;-40843 [000] 164229.430012: funcgraph_entry:        0.088 us   |        activate_page();
           &amp;lt;...&amp;gt;-40843 [000] 164229.430012: funcgraph_entry:        0.143 us   |        workingset_activation();
           &amp;lt;...&amp;gt;-40843 [000] 164229.430013: funcgraph_exit:         0.925 us   |      }
           &amp;lt;...&amp;gt;-40843 [000] 164229.430014: funcgraph_entry:        0.032 us   |      _cond_resched();
           &amp;lt;...&amp;gt;-40843 [000] 164229.430014: funcgraph_entry:                   |      pagecache_get_page() {
           &amp;lt;...&amp;gt;-40843 [000] 164229.430014: funcgraph_entry:        0.070 us   |        find_get_entry();
           &amp;lt;...&amp;gt;-40843 [000] 164229.430014: funcgraph_exit:         0.401 us   |      }
           &amp;lt;...&amp;gt;-40843 [000] 164229.430015: funcgraph_entry:                   |      mark_page_accessed() {
           &amp;lt;...&amp;gt;-40843 [000] 164229.430015: funcgraph_entry:        0.037 us   |        activate_page();
           &amp;lt;...&amp;gt;-40843 [000] 164229.430015: funcgraph_entry:        0.039 us   |        workingset_activation();
           &amp;lt;...&amp;gt;-40843 [000] 164229.430015: funcgraph_exit:         0.649 us   |      }
....
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;A short description of how it was hit:&lt;br/&gt;
getattr_by_name returns the &quot;DoM&quot; bit in its reply while the client already holds a DoM lock, but no I/O has been done under that lock.&lt;/p&gt;

&lt;p&gt;2) DoM read-on-open corruption.&lt;br/&gt;
The scenario is nearly the same as above. Open returns data that is moved into the page cache very early, with the Uptodate flag set, but no osc object is assigned to the lock.&lt;br/&gt;
The data is read with ll_do_fast_read with no real I/O, plus a lock match in mdc_enqueue_send().&lt;br/&gt;
The lock is then cancelled without a page flush, but the client continues to read stale data via ll_do_fast_read.&lt;/p&gt;
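The read-on-open variant can be modelled the same way (plain Python, not Lustre code; names are illustrative): open prefills the cache with uptodate pages, the lock never gets an object attached, so cancelling it cannot flush them and later fast reads keep serving the stale copy.

```python
# Toy model of the DoM read-on-open case.

server_file = b"v1"
page_cache = {}
lock = {"granted": True, "osc_object": None}   # read-on-open: no object set

def open_with_read_on_open():
    # The open reply carries the data; it lands in the cache as uptodate.
    page_cache[0] = {"data": server_file, "uptodate": True}

def cancel_lock():
    lock["granted"] = False
    if lock["osc_object"] is None:
        return                     # no way to reach the pages: no flush
    page_cache.clear()

def fast_read():
    page = page_cache.get(0)
    if page is not None and page["uptodate"]:
        return page["data"]        # cache hit, no lock check
    return server_file             # real I/O would fetch current data

open_with_read_on_open()
server_file = b"v2"                # another client rewrites the file
cancel_lock()                      # blocking AST: flush is skipped
print(fast_read())                 # stale b"v1" instead of b"v2"
```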

&lt;p&gt;... &lt;/p&gt;</description>
                <environment></environment>
        <key id="59478">LU-13645</key>
            <summary>Various data corruptions possible in lustre.</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="1" iconUrl="https://jira.whamcloud.com/images/icons/priorities/blocker.svg">Blocker</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="shadow">Alexey Lyashkov</assignee>
                                    <reporter username="shadow">Alexey Lyashkov</reporter>
                        <labels>
                    </labels>
                <created>Mon, 8 Jun 2020 12:24:17 +0000</created>
                <updated>Fri, 3 Mar 2023 17:14:52 +0000</updated>
                            <resolved>Fri, 30 Oct 2020 11:53:29 +0000</resolved>
                                    <version>Lustre 2.14.0</version>
                    <version>Lustre 2.12.5</version>
                                    <fixVersion>Lustre 2.14.0</fixVersion>
                                        <due></due>
                            <votes>0</votes>
                                    <watches>13</watches>
                                                                            <comments>
                            <comment id="272399" author="shadow" created="Tue, 9 Jun 2020 15:59:55 +0000"  >&lt;p&gt;Several other corruption cases are related to the &quot;lock without l_ast_data assigned&quot; situation, inspired by the review discussion of the KMS bug (&lt;a href=&quot;https://jira.whamcloud.com/browse/LU-12681&quot; title=&quot;Data corruption - due incorrect KMS with SEL files&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-12681&quot;&gt;&lt;del&gt;LU-12681&lt;/del&gt;&lt;/a&gt; osc: wrong cache of LVB attrs).&lt;/p&gt;

&lt;p&gt;1) Layout change vs. lock cancel. A layout change disconnects the locks from their object and expects them to be picked up again at lock enqueue time. A lock cancel run has no chance to flush pages in this case.&lt;/p&gt;

&lt;p&gt;2) Inode destroy case. An inode destroy also causes an ast disconnect, but after the inode is recreated, a check_and_discard run can find the old lock without l_ast_data assigned, so a page flush is not possible.&lt;/p&gt;

&lt;p&gt;3) Layout change vs. DoM lock cancel. An MD lock can be downgraded to lose all bits except DoM, so it goes through lov to flush data, but a situation where lov cannot find the DoM component is possible. In that case the pages stay in the page cache.&lt;/p&gt;

&lt;p&gt;4) It looks like tiny write (&lt;a href=&quot;https://jira.whamcloud.com/browse/LU-9409&quot; title=&quot;Lustre small IO write performance improvement&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-9409&quot;&gt;&lt;del&gt;LU-9409&lt;/del&gt;&lt;/a&gt; llite: Add tiny write support) is also affected. Since it has no synchronization with Lustre internals, we can lose a cache flush when a lock without l_ast_data exists.&lt;/p&gt;</comment>
                            <comment id="273271" author="shadow" created="Fri, 19 Jun 2020 11:54:40 +0000"  >&lt;p&gt;I can drop some of the cases after research with Vitaly.&lt;br/&gt;
A layout change drops the whole client cache for the object, which is good for data correctness but too bad for SEL: extending the layout then has to flush all of the object&apos;s dirty memory, which takes noticeable time on a loaded cluster.&lt;br/&gt;
A simple reproducer verifies this situation.&lt;/p&gt;

&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;
--- a/lustre/tests/sanity-pfl.sh
+++ b/lustre/tests/sanity-pfl.sh
@@ -855,8 +855,10 @@ test19_io_base() {
                        error &quot;Create $comp_file failed&quot;
        fi
+       dd if=/dev/zero of=$comp_file bs=100K count=1 conv=notrunc ||
+               error &quot;dd to extend failed&quot;
        # write past end of first component, so it is extended
-       dd if=/dev/zero of=$comp_file bs=1M count=1 seek=127 conv=notrunc ||
+       dd if=/dev/zero of=$comp_file bs=100K count=1 seek=1270 conv=notrunc ||
                error &quot;dd to extend failed&quot;
        local ost_idx1=$($LFS getstripe -I1 -i $comp_file)
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;result is&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;== sanity-pfl test 19a: Simple test of extension behavior ============================================ 18:07:26 (1592492846)
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0327618 s, 32.0 MB/s
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0535577 s, 19.6 MB/s
Pass!
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Layout change vs. DoM lock cancel is still possible but very hard to reach. I think adding an LASSERT in this place would be good to confirm the data isn&apos;t corrupted; once the assert hits, the DoM blocking callback for &quot;complex&quot; ibit locks will need to be reworked.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://jira.whamcloud.com/browse/LU-13128&quot; title=&quot;a race between glimpse and lock cancel is not handled correctly&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-13128&quot;&gt;&lt;del&gt;LU-13128&lt;/del&gt;&lt;/a&gt; &quot;osc: glimpse and lock cancel race&quot; has a side effect in the mdc changes. It fixes the lock conversion bug with the DoM bit, where the osc object was removed from the lock too early; that caused stale data in the cache because the DoM cancel was skipped once the osc object was lost.&lt;/p&gt;

&lt;p&gt;Vitaly&apos;s investigation of the group lock problem says it is hard to reproduce, and the logic is not 100% the same as expected for extent locks. An additional problem is group ID generation for layout swap: a random ID is used in this case, but it is not a unique value across a large cluster and should be avoided where possible.&lt;/p&gt;

&lt;p&gt;So currently we can focus on two confirmed bugs.&lt;/p&gt;

&lt;p&gt;1) The mdc check_and_discard function can skip an object discard because of a lock without an osc object assigned&lt;br/&gt;
(similar to &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-11670&quot; title=&quot;Incorrect size when using lockahead&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-11670&quot;&gt;&lt;del&gt;LU-11670&lt;/del&gt;&lt;/a&gt;/LUS-6747). Patch is ready - submitting soon.&lt;/p&gt;

&lt;p&gt;2) Fixing the DoM read-on-open, which puts up-to-date pages into the page cache while the ldlm lock has no osc object assigned and therefore no way to flush any data. Patch in progress.&lt;/p&gt;
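The direction both fixes take (as reflected in the landed patch subject &quot;ldlm: don&apos;t use a locks without l_ast_data&quot;) can be sketched as a plain-Python model, illustrative only and not Lustre code: a lock match that rejects locks with no object attached closes the stale-cache path and forces the reader down the real-I/O route.

```python
# Sketch of the fix direction: refuse to match a lock that cannot
# actually guard cached pages (no l_ast_data / osc object attached).

def lock_match(locks):
    # Only return a lock that can flush or discard its pages on cancel.
    for lk in locks:
        if lk.get("granted") and lk.get("l_ast_data") is not None:
            return lk
    return None                     # caller must enqueue and do real I/O

cached = [{"granted": True, "l_ast_data": None}]        # read-on-open lock
usable = [{"granted": True, "l_ast_data": "osc_obj"}]   # fully set-up lock

assert lock_match(cached) is None       # stale-cache path is closed
assert lock_match(usable) is not None   # normal locks still match
```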



</comment>
                            <comment id="274343" author="shadow" created="Fri, 3 Jul 2020 06:19:51 +0000"  >&lt;p&gt;It looks like the bugs affect any Lustre version that includes the DoM and Lock Ahead features. Initial testing says most of the bugs can be fixed with two low-risk patches. Some problems with group locks/unprotected layout changes are being investigated separately.&lt;/p&gt;

&lt;p&gt;Patch submission is blocked because the master branch build is broken with the Red Hat debug kernel, caused by James S.&apos;s backports for xarray.&lt;/p&gt;</comment>
                            <comment id="274507" author="spitzcor" created="Mon, 6 Jul 2020 13:05:49 +0000"  >&lt;p&gt;From Alexey in Linux Lustre Client slack:&lt;/p&gt;
&lt;blockquote&gt;&lt;p&gt;it&#8217;s EASY to replicate. The reproducer is IOR with rewrite - the bug hits within 1-5h of the start. Fix verification - no corruption in 24h under load. But in general it only takes a getattr after open.&lt;br/&gt;
Once getattr returns ibits with 0x40+0x1b - the bug hits.&lt;br/&gt;
As for the Lock Ahead part - this is just because LA locks are the same as DoM locks: neither has an osc object assigned before use, so the bugs are similar - I think the glimpse bug would be the same for DoM, with LA fixed earlier.&lt;br/&gt;
And you are wrong that this is an SEL-only bug - SEL is just PFL, so PFL and other layout modifications are under attack too. Currently Vitaly confirms only problems with deleting layout components.&lt;br/&gt;
For SEL the page cache is flushed - so very low risk for the bug.&lt;/p&gt;&lt;/blockquote&gt;</comment>
                            <comment id="274771" author="gerrit" created="Wed, 8 Jul 2020 16:48:06 +0000"  >&lt;p&gt;Alexey Lyashkov (alexey.lyashkov@hpe.com) uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/39319&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/39319&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-13645&quot; title=&quot;Various data corruptions possible in lustre.&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-13645&quot;&gt;&lt;del&gt;LU-13645&lt;/del&gt;&lt;/a&gt; llite: flush an read-on-open pages&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 10c3c4af36b716a6c8d8c683e6030ca5d070cefa&lt;/p&gt;</comment>
                            <comment id="275553" author="gerrit" created="Thu, 16 Jul 2020 13:58:47 +0000"  >&lt;p&gt;Vitaly Fertman (vitaly.fertman@hpe.com) uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/39405&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/39405&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-13645&quot; title=&quot;Various data corruptions possible in lustre.&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-13645&quot;&gt;&lt;del&gt;LU-13645&lt;/del&gt;&lt;/a&gt; ldlm: re-process ldlm lock cleanup&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 1e450528a954bfaa7fe4bc9d72e46f16c3f5efa4&lt;/p&gt;</comment>
                            <comment id="275554" author="gerrit" created="Thu, 16 Jul 2020 13:58:47 +0000"  >&lt;p&gt;Vitaly Fertman (vitaly.fertman@hpe.com) uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/39406&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/39406&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-13645&quot; title=&quot;Various data corruptions possible in lustre.&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-13645&quot;&gt;&lt;del&gt;LU-13645&lt;/del&gt;&lt;/a&gt; ldlm: group locks for DOM ibit lock&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: c7d44c3c1791b9d67912946c7454aa40402808a4&lt;/p&gt;</comment>
                            <comment id="277463" author="gerrit" created="Thu, 13 Aug 2020 14:50:06 +0000"  >&lt;p&gt;Oleg Drokin (green@whamcloud.com) merged in patch &lt;a href=&quot;https://review.whamcloud.com/39405/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/39405/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-13645&quot; title=&quot;Various data corruptions possible in lustre.&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-13645&quot;&gt;&lt;del&gt;LU-13645&lt;/del&gt;&lt;/a&gt; ldlm: re-process ldlm lock cleanup&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: d7e6b6d2ab8718b55271be56afc4ee5f2beae84b&lt;/p&gt;</comment>
                            <comment id="280046" author="gerrit" created="Sat, 19 Sep 2020 14:11:52 +0000"  >&lt;p&gt;Oleg Drokin (green@whamcloud.com) merged in patch &lt;a href=&quot;https://review.whamcloud.com/39318/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/39318/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-13645&quot; title=&quot;Various data corruptions possible in lustre.&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-13645&quot;&gt;&lt;del&gt;LU-13645&lt;/del&gt;&lt;/a&gt; ldlm: don&apos;t use a locks without l_ast_data&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: a6798c5806088dc1892dd752012a54f0ec8f1798&lt;/p&gt;</comment>
                            <comment id="283748" author="gerrit" created="Fri, 30 Oct 2020 06:19:59 +0000"  >&lt;p&gt;Oleg Drokin (green@whamcloud.com) merged in patch &lt;a href=&quot;https://review.whamcloud.com/39406/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/39406/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-13645&quot; title=&quot;Various data corruptions possible in lustre.&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-13645&quot;&gt;&lt;del&gt;LU-13645&lt;/del&gt;&lt;/a&gt; ldlm: group locks for DOM IBIT lock&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: 06740440363424bff6cfdb467fcc5544e42cabc1&lt;/p&gt;</comment>
                            <comment id="283749" author="gerrit" created="Fri, 30 Oct 2020 06:20:11 +0000"  >&lt;p&gt;Oleg Drokin (green@whamcloud.com) merged in patch &lt;a href=&quot;https://review.whamcloud.com/39878/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/39878/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-13645&quot; title=&quot;Various data corruptions possible in lustre.&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-13645&quot;&gt;&lt;del&gt;LU-13645&lt;/del&gt;&lt;/a&gt; ldlm: extra checks for DOM locks&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: 0a3c72f13045309573f74f2e02771035d734cc05&lt;/p&gt;</comment>
                            <comment id="283770" author="pjones" created="Fri, 30 Oct 2020 11:53:29 +0000"  >&lt;p&gt;All patches landed for 2.14&lt;/p&gt;</comment>
                            <comment id="364850" author="gerrit" created="Fri, 3 Mar 2023 17:14:51 +0000"  >&lt;p&gt;&quot;Etienne AUJAMES &amp;lt;eaujames@ddn.com&amp;gt;&quot; uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/c/fs/lustre-release/+/50199&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/c/fs/lustre-release/+/50199&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-13645&quot; title=&quot;Various data corruptions possible in lustre.&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-13645&quot;&gt;&lt;del&gt;LU-13645&lt;/del&gt;&lt;/a&gt; ldlm: re-process ldlm lock cleanup&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: b2_12&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 7d64d487ad53f365b900408155a340b9385cf54f&lt;/p&gt;</comment>
                            <comment id="364851" author="gerrit" created="Fri, 3 Mar 2023 17:14:52 +0000"  >&lt;p&gt;&quot;Etienne AUJAMES &amp;lt;eaujames@ddn.com&amp;gt;&quot; uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/c/fs/lustre-release/+/50200&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/c/fs/lustre-release/+/50200&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-13645&quot; title=&quot;Various data corruptions possible in lustre.&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-13645&quot;&gt;&lt;del&gt;LU-13645&lt;/del&gt;&lt;/a&gt; ldlm: don&apos;t use a locks without l_ast_dat&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: b2_12&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 722dda219811cae47816f9928aea9348fa1f2bd6&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10010">
                    <name>Duplicate</name>
                                                                <inwardlinks description="is duplicated by">
                                                        </inwardlinks>
                                    </issuelinktype>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                            <outwardlinks description="is related to ">
                                        <issuelink>
            <issuekey id="56698">LU-12681</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="54037">LU-11670</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="57783">LU-13128</issuekey>
        </issuelink>
                            </outwardlinks>
                                                                <inwardlinks description="is related to">
                                        <issuelink>
            <issuekey id="45980">LU-9479</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="59866">LU-13759</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="61410">LU-14084</issuekey>
        </issuelink>
                            </inwardlinks>
                                    </issuelinktype>
                    </issuelinks>
                <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|i01267:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>