<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 02:42:10 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-11239] sanity-lfsck test 36a fails with &apos;Fail to split mirror&apos;</title>
                <link>https://jira.whamcloud.com/browse/LU-11239</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;sanity-lfsck test_36a started failing on August 6, 2018 with Lustre version 2.11.53.52 build #3774. Note that sanity-lfsck test 36a landed to master with build #3774. &lt;/p&gt;

&lt;p&gt;In the test_log for these failures, for example at &lt;a href=&quot;https://testing.whamcloud.com/test_sets/0e30317a-9ad2-11e8-a9f7-52540065bddc&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.whamcloud.com/test_sets/0e30317a-9ad2-11e8-a9f7-52540065bddc&lt;/a&gt;, we see the following:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;&#8230;
/mnt/lustre/d36a.sanity-lfsck/f0
  lcm_layout_gen:    10
  lcm_mirror_count:  3
  lcm_entry_count:   6
    lcme_id:             65537
    lcme_mirror_id:      1
    lcme_flags:          init
    lcme_extent.e_start: 0
    lcme_extent.e_end:   1048576
      lmm_stripe_count:  2
      lmm_stripe_size:   1048576
      lmm_pattern:       raid0
      lmm_layout_gen:    0
      lmm_stripe_offset: 0
      lmm_objects:
      - 0: { l_ost_idx: 0, l_fid: [0x100000000:0x9b6:0x0] }
      - 1: { l_ost_idx: 1, l_fid: [0x100010000:0x768:0x0] }

    lcme_id:             65538
    lcme_mirror_id:      1
    lcme_flags:          init
    lcme_extent.e_start: 1048576
    lcme_extent.e_end:   EOF
      lmm_stripe_count:  1
      lmm_stripe_size:   1048576
      lmm_pattern:       raid0
      lmm_layout_gen:    0
      lmm_stripe_offset: 2
      lmm_objects:
      - 0: { l_ost_idx: 2, l_fid: [0x100020000:0x78e:0x0] }

    lcme_id:             131075
    lcme_mirror_id:      2
    lcme_flags:          init
    lcme_extent.e_start: 0
    lcme_extent.e_end:   2097152
      lmm_stripe_count:  2
      lmm_stripe_size:   1048576
      lmm_pattern:       raid0
      lmm_layout_gen:    0
      lmm_stripe_offset: 1
      lmm_objects:
      - 0: { l_ost_idx: 1, l_fid: [0x100010000:0x769:0x0] }
      - 1: { l_ost_idx: 2, l_fid: [0x100020000:0x788:0x0] }

    lcme_id:             131076
    lcme_mirror_id:      2
    lcme_flags:          init
    lcme_extent.e_start: 2097152
    lcme_extent.e_end:   EOF
      lmm_stripe_count:  1
      lmm_stripe_size:   1048576
      lmm_pattern:       raid0
      lmm_layout_gen:    0
      lmm_stripe_offset: 0
      lmm_objects:
      - 0: { l_ost_idx: 0, l_fid: [0x100000000:0x9bc:0x0] }

    lcme_id:             196613
    lcme_mirror_id:      3
    lcme_flags:          init,stale
    lcme_extent.e_start: 0
    lcme_extent.e_end:   3145728
      lmm_stripe_count:  2
      lmm_stripe_size:   1048576
      lmm_pattern:       raid0
      lmm_layout_gen:    0
      lmm_stripe_offset: 2
      lmm_objects:
      - 0: { l_ost_idx: 2, l_fid: [0x100020000:0x789:0x0] }
      - 1: { l_ost_idx: 0, l_fid: [0x100000000:0x9b7:0x0] }

    lcme_id:             196614
    lcme_mirror_id:      3
    lcme_flags:          init,stale
    lcme_extent.e_start: 3145728
    lcme_extent.e_end:   EOF
      lmm_stripe_count:  1
      lmm_stripe_size:   1048576
      lmm_pattern:       raid0
      lmm_layout_gen:    0
      lmm_stripe_offset: 1
      lmm_objects:
      - 0: { l_ost_idx: 1, l_fid: [0x100010000:0x76e:0x0] }
&#8230;
Inject failure, to simulate the case of missing one mirror in LOV
CMD: trevis-7vm4 /usr/sbin/lctl set_param fail_loc=0x1616
fail_loc=0x1616
error: lfs mirror split: setting &apos;stale&apos; is not supported
 sanity-lfsck test_36a: @@@@@@ FAIL: (12) Fail to split 1st mirror from /mnt/lustre/d36a.sanity-lfsck/f0 
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Comparing sanity-lfsck test 36a runs that pass with those that fail, the failing runs have components with &#8220;init,stale&#8221; flags, while the passing runs have no &#8220;stale&#8221; flags.&lt;/p&gt;

&lt;p&gt;More logs for this failure are at&lt;br/&gt;
&lt;a href=&quot;https://testing.whamcloud.com/test_sets/1131f45c-99d4-11e8-a9f7-52540065bddc&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.whamcloud.com/test_sets/1131f45c-99d4-11e8-a9f7-52540065bddc&lt;/a&gt;&lt;br/&gt;
&lt;a href=&quot;https://testing.whamcloud.com/test_sets/6481868c-9ab4-11e8-a9f7-52540065bddc&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.whamcloud.com/test_sets/6481868c-9ab4-11e8-a9f7-52540065bddc&lt;/a&gt;&lt;/p&gt;</description>
                <environment></environment>
        <key id="52955">LU-11239</key>
            <summary>sanity-lfsck test 36a fails with &apos;Fail to split mirror&apos;</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="3" iconUrl="https://jira.whamcloud.com/images/icons/priorities/major.svg">Major</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="bobijam">Zhenyu Xu</assignee>
                                    <reporter username="jamesanunez">James Nunez</reporter>
                        <labels>
                    </labels>
                <created>Mon, 13 Aug 2018 19:19:31 +0000</created>
                <updated>Thu, 21 Nov 2019 21:33:56 +0000</updated>
                            <resolved>Mon, 9 Sep 2019 18:06:20 +0000</resolved>
                                    <version>Lustre 2.12.0</version>
                    <version>Lustre 2.13.0</version>
                    <version>Lustre 2.12.1</version>
                    <version>Lustre 2.12.2</version>
                    <version>Lustre 2.12.3</version>
                                    <fixVersion>Lustre 2.13.0</fixVersion>
                    <fixVersion>Lustre 2.12.4</fixVersion>
                                        <due></due>
                            <votes>0</votes>
                                    <watches>9</watches>
                                                                            <comments>
                            <comment id="233966" author="pjones" created="Tue, 25 Sep 2018 10:20:37 +0000"  >&lt;p&gt;Lai&lt;/p&gt;

&lt;p&gt;This test was added with &lt;a href=&quot;https://git.whamcloud.com/?p=fs/lustre-release.git;a=commit;h=36ba989752c62cc76b06089373fcd6cec6da9008&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://git.whamcloud.com/?p=fs/lustre-release.git;a=commit;h=36ba989752c62cc76b06089373fcd6cec6da9008&lt;/a&gt;&#160;. Should we revert this change or is it just a faulty test that we should add to the ALWAYS_EXCEPT list until it is corrected?&lt;/p&gt;

&lt;p&gt;Peter&lt;/p&gt;</comment>
                            <comment id="234014" author="laisiyao" created="Wed, 26 Sep 2018 13:58:37 +0000"  >&lt;p&gt;IMO it&apos;s a test script problem. Bobi is more familiar with the related code; maybe he knows more.&lt;/p&gt;</comment>
                            <comment id="234540" author="pjones" created="Sat, 6 Oct 2018 14:14:24 +0000"  >&lt;p&gt;Bobi&lt;/p&gt;

&lt;p&gt;Any suggestions?&lt;/p&gt;

&lt;p&gt;Peter&lt;/p&gt;</comment>
                            <comment id="234695" author="gerrit" created="Wed, 10 Oct 2018 06:28:02 +0000"  >&lt;p&gt;Bobi Jam (bobijam@hotmail.com) uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/33330&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/33330&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-11239&quot; title=&quot;sanity-lfsck test 36a fails with &amp;#39;Fail to split mirror&amp;#39;&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-11239&quot;&gt;&lt;del&gt;LU-11239&lt;/del&gt;&lt;/a&gt; test: sanity-lfsck test_36a fix&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: c19a05a5c896ced8dfe962df409cb0892f9e3950&lt;/p&gt;</comment>
                            <comment id="235162" author="adilger" created="Fri, 19 Oct 2018 18:09:30 +0000"  >&lt;p&gt;Can you please explain why we need to call &lt;tt&gt;lfs mirror resync&lt;/tt&gt; multiple times on a file to make it correct?  If this is a problem for regular users, then it could lead to data loss if they &lt;em&gt;think&lt;/em&gt; the file is sync&apos;d to all of the mirrors (e.g. changelog watcher that is calling &quot;&lt;tt&gt;lfs mirror resync&lt;/tt&gt;&quot; on each file once), but in fact there are mirrors that are not uptodate.&lt;/p&gt;

&lt;p&gt;If there is a bug in how resync is working, then that should be fixed in lfs, and not in the test script.  If this is really a bug in the test script, please explain why, so that we are sure that there is not going to be data loss for the users (and we can land the existing patch and get it out of the way for 2.12).&lt;/p&gt;</comment>
                            <comment id="235164" author="bobijam" created="Fri, 19 Oct 2018 18:14:46 +0000"  >&lt;p&gt;I don&apos;t know why resync does not successfully sync some components in this case. lfs mirror resync was designed not to report failure if some components are not synced for some reason, but I don&apos;t see how test_36a() could fail the resync.&lt;/p&gt;</comment>
                            <comment id="235461" author="bobijam" created="Thu, 25 Oct 2018 02:19:05 +0000"  >&lt;p&gt;&lt;a href=&quot;https://testing.whamcloud.com/test_logs/85810774-d76b-11e8-82f2-52540065bddc/download&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.whamcloud.com/test_logs/85810774-d76b-11e8-82f2-52540065bddc/download&lt;/a&gt; (client log)&lt;br/&gt;
shows that -28 is from write&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;00000008:00000001:1.0:1540371611.978254:0:13766:0:(osc_request.c:1770:osc_brw_fini_request()) Process leaving via out (rc=18446744073709551588 : -28 : 0xffffffffffffffe4)
00000008:00000001:1.0:1540371611.978256:0:13766:0:(osc_request.c:1905:osc_brw_fini_request()) Process leaving (rc=18446744073709551588 : -28 : ffffffffffffffe4)
00000008:00000002:1.0:1540371611.978257:0:13766:0:(osc_request.c:2025:brw_interpret()) request ffff893865d41800 aa ffff893865d41970 rc -28
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;And the OST log&lt;br/&gt;
&lt;a href=&quot;https://testing.whamcloud.com/test_logs/8580f360-d76b-11e8-82f2-52540065bddc/download&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.whamcloud.com/test_logs/8580f360-d76b-11e8-82f2-52540065bddc/download&lt;/a&gt; (ost log)&lt;br/&gt;
shows&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;00000001:00000001:0.0:1540371612.134157:0:12237:0:(osd_io.c:1810:osd_declare_write()) Process leaving (rc=0 : 0 : 0)
00080000:00000001:0.0:1540371612.134160:0:12237:0:(osd_handler.c:1916:osd_trans_start()) Process leaving (rc=0 : 0 : 0)
00000001:00000002:0.0:1540371612.134161:0:12237:0:(osd_io.c:1415:osd_write_commit()) Skipping [0] == -28
00000001:00000002:0.0:1540371612.134164:0:12237:0:(osd_io.c:1415:osd_write_commit()) Skipping [1] == -28
...
00000001:00000002:0.0:1540371612.134355:0:12237:0:(osd_io.c:1415:osd_write_commit()) Skipping [254] == -28
00000001:00000002:0.0:1540371612.134356:0:12237:0:(osd_io.c:1415:osd_write_commit()) Skipping [255] == -28
00000001:00000001:0.0:1540371612.134357:0:12237:0:(osd_io.c:330:osd_do_bio()) Process entered
00000001:00000001:0.0:1540371612.134358:0:12237:0:(osd_io.c:462:osd_do_bio()) Process leaving (rc=0 : 0 : 0)
00000001:00000001:0.0:1540371612.134359:0:12237:0:(osd_io.c:1484:osd_write_commit()) Process leaving (rc=0 : 0 : 0)
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
                            <comment id="235472" author="adilger" created="Thu, 25 Oct 2018 05:58:03 +0000"  >&lt;p&gt;Running out of space is an acceptable reason to fail resync, so long as some error is printed for the user.&lt;/p&gt;

&lt;p&gt;However, this shouldn&apos;t happen during this particular test.  Is the test file too large, or is there some leak of space?&lt;/p&gt;</comment>
                            <comment id="235478" author="bobijam" created="Thu, 25 Oct 2018 07:52:05 +0000"  >&lt;p&gt;It has something to do with grant. The OST reports that it has 62,066,688 bytes free and 54,898,688 bytes available; minus the space already granted to the client, it has 897,024 bytes left for the incoming write, which is not enough for the requested write size (1,073,152)&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;00002000:00000020:0.0:1540371612.048313:0:12237:0:(tgt_grant.c:413:tgt_grant_statfs()) lustre-OST0002: cli 31207193-e3e7-03d0-da8d-e086caad4da3/ffff9f173946e000 free: 62066688 avail: 54898688
00002000:00000020:0.0:1540371612.048315:0:12237:0:(tgt_grant.c:477:tgt_grant_space_left()) lustre-OST0002: cli 31207193-e3e7-03d0-da8d-e086caad4da3/ffff9f173946e000 avail 54898688 left 897024 unstable 0 tot_grant 53999552 pending 0
00002000:00000001:0.0:1540371612.048316:0:12237:0:(tgt_grant.c:479:tgt_grant_space_left()) Process leaving (rc=897024 : 897024 : db000)
00002000:00000001:0.0:1540371612.048317:0:12237:0:(tgt_grant.c:504:tgt_grant_incoming()) Process entered
00002000:00000020:0.0:1540371612.048318:0:12237:0:(tgt_grant.c:520:tgt_grant_incoming()) lustre-OST0002: cli 31207193-e3e7-03d0-da8d-e086caad4da3/ffff9f173946e000 reports grant 43601920 dropped 0, local 46772224
00002000:00000001:0.0:1540371612.048319:0:12237:0:(tgt_grant.c:567:tgt_grant_incoming()) Process leaving
00002000:00000001:0.0:1540371612.048320:0:12237:0:(tgt_grant.c:705:tgt_grant_check()) Process entered
00002000:00000020:0.0:1540371612.048320:0:12237:0:(tgt_grant.c:812:tgt_grant_check()) lustre-OST0002: cli 31207193-e3e7-03d0-da8d-e086caad4da3/ffff9f173946e000 idx 0 no space for 1073152
00002000:00000020:0.0:1540371612.048322:0:12237:0:(tgt_grant.c:832:tgt_grant_check()) lustre-OST0002: cli 31207193-e3e7-03d0-da8d-e086caad4da3/ffff9f173946e000 granted: 0 ungranted: 0 grant: 46772224 dirty: 3170304
00002000:00000001:0.0:1540371612.048323:0:12237:0:(tgt_grant.c:855:tgt_grant_check()) Process leaving
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
</comment>
                            <comment id="235548" author="adilger" created="Fri, 26 Oct 2018 02:00:27 +0000"  >&lt;p&gt;Is that because the mirror resync writes do not use the grant already held by the client?  It doesn&apos;t make sense that this would fail on a test system, because there are only 2 clients that might even be holding grant.&lt;/p&gt;</comment>
                            <comment id="235551" author="bobijam" created="Fri, 26 Oct 2018 03:30:14 +0000"  >&lt;p&gt;mirror resync uses direct IO, which I think doesn&apos;t consider grant on the client, and the OST finds that (avail - grants_assigned_to_the_client) &amp;lt; requested_direct_write_size. &lt;/p&gt;</comment>
                            <comment id="235556" author="adilger" created="Fri, 26 Oct 2018 04:50:16 +0000"  >&lt;p&gt;I know we&apos;ve gone back and forth on &lt;tt&gt;O_DIRECT&lt;/tt&gt; writes consuming grant in the past, but IMHO we &lt;em&gt;should&lt;/em&gt; consume grant from the client &lt;b&gt;if it is available&lt;/b&gt;, and return grant from the server if available.  However, we shouldn&apos;t &lt;em&gt;need&lt;/em&gt; grant for &lt;tt&gt;O_DIRECT&lt;/tt&gt; writes if there is none on the client.&lt;/p&gt;

&lt;p&gt;This would allow the best of both worlds - clients that have grant would consume it during &lt;tt&gt;O_DIRECT&lt;/tt&gt; writes so that they do not run out of space on the OST.  Even if space is becoming short on an OST and grant is restricted then the client can still submit large O_DIRECT writes without grant if there is any free space on the OST.&lt;/p&gt;</comment>
                            <comment id="236064" author="adilger" created="Wed, 31 Oct 2018 18:20:30 +0000"  >&lt;p&gt;As a starting point, we should add error messages to &quot;&lt;tt&gt;lfs mirror resync&lt;/tt&gt;&quot; to print the &lt;tt&gt;-ENOSPC&lt;/tt&gt; errors during resync, and always return an error to the shell if any resync fails (though it shouldn&apos;t &lt;b&gt;stop&lt;/b&gt; the resync if an error is hit, just save it until the end). We shouldn&apos;t have to dig through Lustre debug logs to find that out, and users shouldn&apos;t have to do that either. &lt;/p&gt;</comment>
                            <comment id="236065" author="adilger" created="Wed, 31 Oct 2018 18:26:02 +0000"  >&lt;p&gt;Jian, can you please work on fixing the &lt;tt&gt;lfs mirror resync&lt;/tt&gt; error handling as described above?&lt;/p&gt;

&lt;p&gt;Bobijam, can you please take a look at fixing O_DIRECT writes to consume grant (if available)?&lt;/p&gt;</comment>
                            <comment id="236093" author="yujian" created="Wed, 31 Oct 2018 23:52:22 +0000"  >&lt;p&gt;Sure, Andreas, will do.&lt;/p&gt;</comment>
                            <comment id="236110" author="bobijam" created="Thu, 1 Nov 2018 05:03:44 +0000"  >&lt;p&gt;Yes, working on it; I will revive/revise a patch (the consume-grant-for-sync-write patch from &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-4664&quot; title=&quot;sync write should consume grant on client&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-4664&quot;&gt;&lt;del&gt;LU-4664&lt;/del&gt;&lt;/a&gt;) and test whether it fixes this issue.&lt;/p&gt;</comment>
                            <comment id="236114" author="gerrit" created="Thu, 1 Nov 2018 05:33:18 +0000"  >&lt;p&gt;Bobi Jam (bobijam@hotmail.com) uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/33537&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/33537&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-11239&quot; title=&quot;sanity-lfsck test 36a fails with &amp;#39;Fail to split mirror&amp;#39;&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-11239&quot;&gt;&lt;del&gt;LU-11239&lt;/del&gt;&lt;/a&gt; lfs: fix mirror resync error handling&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: d2ab5bfd9d3a5710177ea8f07c2182178dab1d2f&lt;/p&gt;</comment>
                            <comment id="236189" author="bobijam" created="Fri, 2 Nov 2018 02:59:20 +0000"  >&lt;p&gt;It seems that the debug patch on top of the &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-4664&quot; title=&quot;sync write should consume grant on client&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-4664&quot;&gt;&lt;del&gt;LU-4664&lt;/del&gt;&lt;/a&gt; patch shows that consuming grant for sync writes does fix the -ENOSPC error during resync in the sanity-lfsck test_36a case.&lt;/p&gt;</comment>
                            <comment id="253773" author="yujian" created="Wed, 28 Aug 2019 17:42:58 +0000"  >&lt;p&gt;+1 on Lustre b2_12 branch: &lt;a href=&quot;https://testing.whamcloud.com/test_sets/4f773b5e-c96d-11e9-97d5-52540065bddc&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.whamcloud.com/test_sets/4f773b5e-c96d-11e9-97d5-52540065bddc&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="254308" author="gerrit" created="Sat, 7 Sep 2019 01:34:39 +0000"  >&lt;p&gt;Oleg Drokin (green@whamcloud.com) merged in patch &lt;a href=&quot;https://review.whamcloud.com/33537/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/33537/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-11239&quot; title=&quot;sanity-lfsck test 36a fails with &amp;#39;Fail to split mirror&amp;#39;&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-11239&quot;&gt;&lt;del&gt;LU-11239&lt;/del&gt;&lt;/a&gt; lfs: fix mirror resync error handling&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: 0f670d1ca9dd5af697bfbf3b95a301c61a8b4447&lt;/p&gt;</comment>
                            <comment id="254402" author="pfarrell" created="Mon, 9 Sep 2019 18:06:20 +0000"  >&lt;p&gt;Patch landed.&lt;/p&gt;</comment>
                            <comment id="255739" author="gerrit" created="Tue, 1 Oct 2019 17:43:34 +0000"  >&lt;p&gt;Minh Diep (mdiep@whamcloud.com) uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/36341&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/36341&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-11239&quot; title=&quot;sanity-lfsck test 36a fails with &amp;#39;Fail to split mirror&amp;#39;&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-11239&quot;&gt;&lt;del&gt;LU-11239&lt;/del&gt;&lt;/a&gt; lfs: fix mirror resync error handling&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: b2_12&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 3d2705121a7bedb2ebbaefae752258806d868065&lt;/p&gt;</comment>
                            <comment id="258594" author="gerrit" created="Thu, 21 Nov 2019 07:31:41 +0000"  >&lt;p&gt;Oleg Drokin (green@whamcloud.com) merged in patch &lt;a href=&quot;https://review.whamcloud.com/36341/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/36341/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-11239&quot; title=&quot;sanity-lfsck test 36a fails with &amp;#39;Fail to split mirror&amp;#39;&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-11239&quot;&gt;&lt;del&gt;LU-11239&lt;/del&gt;&lt;/a&gt; lfs: fix mirror resync error handling&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: b2_12&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: 195ffb9e46709c74c34cbfc2a86378ad86d2062f&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                            <outwardlinks description="is related to ">
                                        <issuelink>
            <issuekey id="52628">LU-11111</issuekey>
        </issuelink>
                            </outwardlinks>
                                                                <inwardlinks description="is related to">
                                        <issuelink>
            <issuekey id="23274">LU-4664</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="56901">LU-12757</issuekey>
        </issuelink>
                            </inwardlinks>
                                    </issuelinktype>
                    </issuelinks>
                <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|i000mf:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>