<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 02:55:23 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92">
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-12757] sanity-lfsck test 36a fails with &apos;(N) Fail to resync /mnt/lustre/d36a.sanity-lfsck/f2&apos;</title>
                <link>https://jira.whamcloud.com/browse/LU-12757</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;We see sanity-lfsck test_36a fail in resync for the last two of the following three calls to &#8216;lfs mirror resync&#8217; from test 36a:&lt;/p&gt;
&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;
$LFS mirror resync $DIR/$tdir/f0 ||
        error &lt;span class=&quot;code-quote&quot;&gt;&quot;(6) Fail to resync $DIR/$tdir/f0&quot;&lt;/span&gt;
$LFS mirror resync $DIR/$tdir/f1 ||
        error &lt;span class=&quot;code-quote&quot;&gt;&quot;(7) Fail to resync $DIR/$tdir/f1&quot;&lt;/span&gt;
$LFS mirror resync $DIR/$tdir/f2 ||
        error &lt;span class=&quot;code-quote&quot;&gt;&quot;(8) Fail to resync $DIR/$tdir/f2&quot;&lt;/span&gt;
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;It looks like this test started failing with these two errors on 07-September-2019 with Lustre master version 2.12.57.54.&lt;/p&gt;

&lt;p&gt;Looking at the suite_log for &lt;a href=&quot;https://testing.whamcloud.com/test_sets/a5f2b938-d438-11e9-a2b6-52540065bddc&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.whamcloud.com/test_sets/a5f2b938-d438-11e9-a2b6-52540065bddc&lt;/a&gt;, we see&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;lfs mirror mirror: component 131075 not synced
: No space left on device (28)
lfs mirror mirror: component 131076 not synced
: No space left on device (28)
lfs mirror mirror: component 196613 not synced
: No space left on device (28)
lfs mirror: &apos;/mnt/lustre/d36a.sanity-lfsck/f1&apos; llapi_mirror_resync_many: No space left on device.
 sanity-lfsck test_36a: @@@@@@ FAIL: (7) Fail to resync /mnt/lustre/d36a.sanity-lfsck/f1 
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Similarly, looking at the suite_log for &lt;a href=&quot;https://testing.whamcloud.com/test_sets/42fbb9fe-d575-11e9-9fc9-52540065bddc&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.whamcloud.com/test_sets/42fbb9fe-d575-11e9-9fc9-52540065bddc&lt;/a&gt;, we see&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;lfs mirror mirror: component 131075 not synced
: No space left on device (28)
lfs mirror mirror: component 131076 not synced
: No space left on device (28)
lfs mirror mirror: component 196613 not synced
: No space left on device (28)
lfs mirror: &apos;/mnt/lustre/d36a.sanity-lfsck/f2&apos; llapi_mirror_resync_many: No space left on device.
 sanity-lfsck test_36a: @@@@@@ FAIL: (8) Fail to resync /mnt/lustre/d36a.sanity-lfsck/f2 
 &lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;It is possible that we are running out of disk space on an OST, but it seems strange that this just started earlier this month.&lt;/p&gt;

&lt;p&gt;Logs for other failures are at&lt;br/&gt;
&lt;a href=&quot;https://testing.whamcloud.com/test_sessions/279dd05c-e122-4f8f-bafe-b8299e8e0e61&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.whamcloud.com/test_sessions/279dd05c-e122-4f8f-bafe-b8299e8e0e61&lt;/a&gt;&lt;br/&gt;
&lt;a href=&quot;https://testing.whamcloud.com/test_sessions/fe936f3a-df7d-4d23-9d28-721da7ab8f76&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.whamcloud.com/test_sessions/fe936f3a-df7d-4d23-9d28-721da7ab8f76&lt;/a&gt;&lt;/p&gt;</description>
                <environment></environment>
        <key id="56901">LU-12757</key>
            <summary>sanity-lfsck test 36a fails with &apos;(N) Fail to resync /mnt/lustre/d36a.sanity-lfsck/f2&apos;</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="4" iconUrl="https://jira.whamcloud.com/images/icons/priorities/minor.svg">Minor</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="wc-triage">WC Triage</assignee>
                                    <reporter username="jamesanunez">James Nunez</reporter>
                        <labels>
                            <label>HPv2</label>
                    </labels>
                <created>Thu, 12 Sep 2019 20:11:56 +0000</created>
                <updated>Mon, 16 Jan 2023 22:30:45 +0000</updated>
                            <resolved>Wed, 12 Aug 2020 19:38:19 +0000</resolved>
                                    <version>Lustre 2.13.0</version>
                    <version>Lustre 2.12.3</version>
                    <version>Lustre 2.12.4</version>
                    <version>Lustre 2.12.5</version>
                                    <fixVersion>Lustre 2.14.0</fixVersion>
                                        <due></due>
                            <votes>0</votes>
                                    <watches>7</watches>
                                                                            <comments>
                            <comment id="254643" author="jamesanunez" created="Thu, 12 Sep 2019 23:00:29 +0000"  >&lt;p&gt;I added &apos;lfs df&apos; to a patch that hits this problem consistently, patch &lt;a href=&quot;https://review.whamcloud.com/33919/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/33919/&lt;/a&gt;, and it doesn&apos;t look like we are out of space.&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;== sanity-lfsck test 36a: rebuild LOV EA for mirrored file (1) ======================================= 22:37:15 (1568327835)
#####
The target MDT-object&apos;s LOV EA corrupted as to lose one of the 
mirrors information. The layout LFSCK should rebuild the LOV EA 
with the PFID EA of related OST-object(s) belong to the mirror.
#####
4+0 records in
4+0 records out
4194304 bytes (4.2 MB) copied, 0.126361 s, 33.2 MB/s
4+0 records in
4+0 records out
4194304 bytes (4.2 MB) copied, 0.0600958 s, 69.8 MB/s
4+0 records in
4+0 records out
4194304 bytes (4.2 MB) copied, 0.0601316 s, 69.8 MB/s
UUID                   1K-blocks        Used   Available Use% Mounted on
lustre-MDT0000_UUID        43584        2168       37428   6% /mnt/lustre[MDT:0]
lustre-OST0000_UUID        71100        3452       57504   6% /mnt/lustre[OST:0]
lustre-OST0001_UUID        71100        1272       62396   2% /mnt/lustre[OST:1]
lustre-OST0002_UUID        71100        1272       61948   3% /mnt/lustre[OST:2]
lustre-OST0003_UUID        71100        1880       61748   3% /mnt/lustre[OST:3]

filesystem_summary:       284400        7876      243596   4% /mnt/lustre

lfs mirror mirror: component 131075 not synced
: No space left on device (28)
lfs mirror mirror: component 131076 not synced
: No space left on device (28)
lfs mirror mirror: component 196613 not synced
: No space left on device (28)
lfs mirror: &apos;/mnt/lustre/d36a.sanity-lfsck/f2&apos; llapi_mirror_resync_many: No space left on device.
 sanity-lfsck test_36a: @@@@@@ FAIL: (8) Fail to resync /mnt/lustre/d36a.sanity-lfsck/f2 
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
                            <comment id="254645" author="adilger" created="Thu, 12 Sep 2019 23:14:46 +0000"  >&lt;p&gt;It is very likely that the culprit is:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;commit 0f670d1ca9dd5af697bfbf3b95a301c61a8b4447
Author:     Bobi Jam &amp;lt;bobijam@whamcloud.com&amp;gt;
AuthorDate: Wed Oct 10 14:23:55 2018 +0800

    LU-11239 lfs: fix mirror resync error handling
    
    This patch returns error for partially successful mirror resync.
    
    Signed-off-by: Bobi Jam &amp;lt;bobijam@whamcloud.com&amp;gt;
    Change-Id: I9d6c9ef5aca1674ceb7a9cbc6b790f3f7276ff5d
    Reviewed-on: https://review.whamcloud.com/33537
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;though this is just &lt;em&gt;returning&lt;/em&gt; the error, it isn&apos;t &lt;em&gt;causing&lt;/em&gt; the error, AFAICS.&lt;/p&gt;</comment>
                            <comment id="254647" author="gerrit" created="Fri, 13 Sep 2019 00:26:14 +0000"  >&lt;p&gt;Andreas Dilger (adilger@whamcloud.com) uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/36176&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/36176&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-12757&quot; title=&quot;sanity-lfsck test 36a fails with &amp;#39;(N) Fail to resync /mnt/lustre/d36a.sanity-lfsck/f2&amp;#39;&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-12757&quot;&gt;&lt;del&gt;LU-12757&lt;/del&gt;&lt;/a&gt; utils: avoid newline inside error message&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 4d39bed4489f3b38b388ec69e449fdc65afe6f19&lt;/p&gt;</comment>
                            <comment id="254648" author="adilger" created="Fri, 13 Sep 2019 00:33:57 +0000"  >&lt;p&gt;The above patch does not fix the test failure here, it is just cosmetic to fix the error message to not have a newline in the middle:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;lfs mirror mirror: component 131075 not synced
: No space left on device (28)
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;I also notice that &lt;tt&gt;progname&lt;/tt&gt; is &quot;&lt;tt&gt;lfs mirror mirror&lt;/tt&gt;&quot;, which is also not correct.  That is because one or more of &lt;tt&gt;lfs_setstripe_internal()&lt;/tt&gt; is appending &lt;tt&gt;argv&lt;span class=&quot;error&quot;&gt;&amp;#91;0&amp;#93;&lt;/span&gt;&lt;/tt&gt; to &lt;tt&gt;progname&lt;/tt&gt; internally (via the &lt;tt&gt;cmd[]&lt;/tt&gt; buffer), &lt;b&gt;and&lt;/b&gt; printing both &lt;tt&gt;progname&lt;/tt&gt; and &lt;tt&gt;argv&lt;span class=&quot;error&quot;&gt;&amp;#91;0&amp;#93;&lt;/span&gt;&lt;/tt&gt; explicitly in error messages, &lt;b&gt;and&lt;/b&gt; &lt;tt&gt;lfs_mirror()&lt;/tt&gt; is appending &lt;tt&gt;argv&lt;span class=&quot;error&quot;&gt;&amp;#91;0&amp;#93;&lt;/span&gt;&lt;/tt&gt; to progname, &lt;b&gt;and&lt;/b&gt; &lt;tt&gt;llapi_error()-&amp;gt;error_callback_default()&lt;/tt&gt; is appending &lt;tt&gt;liblustreapi_cmd&lt;/tt&gt; as well.  That is very confusing.  That should be fixed in a separate patch.&lt;/p&gt;</comment>
                            <comment id="255975" author="arshad512" created="Sun, 6 Oct 2019 02:31:48 +0000"  >&lt;p&gt;Detected again at &lt;a href=&quot;https://testing.whamcloud.com/test_sets/b004f920-e795-11e9-b62b-52540065bddc&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.whamcloud.com/test_sets/b004f920-e795-11e9-b62b-52540065bddc&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="256660" author="sarah" created="Fri, 18 Oct 2019 18:01:28 +0000"  >&lt;p&gt;Found similar error on PPC client 2.12.3&lt;br/&gt;
&lt;a href=&quot;https://testing.whamcloud.com/test_sets/d9ac1a0c-eb0e-11e9-b62b-52540065bddc&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.whamcloud.com/test_sets/d9ac1a0c-eb0e-11e9-b62b-52540065bddc&lt;/a&gt;&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;== sanity-lfsck test 36a: rebuild LOV EA for mirrored file (1) ======================================= 22:20:32 (1570573232)
#####
The target MDT-object&apos;s LOV EA corrupted as to lose one of the 
mirrors information. The layout LFSCK should rebuild the LOV EA 
with the PFID EA of related OST-object(s) belong to the mirror.
#####
4+0 records in
4+0 records out
4194304 bytes (4.2 MB) copied, 0.147001 s, 28.5 MB/s
4+0 records in
4+0 records out
4194304 bytes (4.2 MB) copied, 0.0381554 s, 110 MB/s
4+0 records in
4+0 records out
4194304 bytes (4.2 MB) copied, 0.0381345 s, 110 MB/s
lfs mirror mirror: cannot get WRITE lease, ext 1: Device or resource busy (16)
lfs mirror: &apos;/mnt/lustre/d36a.sanity-lfsck/f0&apos; llapi_lease_get_ext resync failed: Device or resource busy.
 sanity-lfsck test_36a: @@@@@@ FAIL: (6) Fail to resync /mnt/lustre/d36a.sanity-lfsck/f0 
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
                            <comment id="258954" author="adilger" created="Thu, 28 Nov 2019 09:56:40 +0000"  >&lt;p&gt;It looks like the small filesystems used by sanity-lfsck.sh are causing problems in this test: the client is consuming all of the free space as grant, since &quot;&lt;tt&gt;lfs mirror resync&lt;/tt&gt;&quot; uses &lt;tt&gt;O_DIRECT&lt;/tt&gt; for its writes, which currently does not consume grants.&lt;/p&gt;</comment>
                            <comment id="259293" author="gerrit" created="Fri, 6 Dec 2019 01:07:35 +0000"  >&lt;p&gt;Oleg Drokin (green@whamcloud.com) merged in patch &lt;a href=&quot;https://review.whamcloud.com/36176/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/36176/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-12757&quot; title=&quot;sanity-lfsck test 36a fails with &amp;#39;(N) Fail to resync /mnt/lustre/d36a.sanity-lfsck/f2&amp;#39;&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-12757&quot;&gt;&lt;del&gt;LU-12757&lt;/del&gt;&lt;/a&gt; utils: avoid newline inside error message&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: 3f3a241498be7e043df7e416da7fc8722a559498&lt;/p&gt;</comment>
                            <comment id="259334" author="pjones" created="Fri, 6 Dec 2019 02:42:51 +0000"  >&lt;p&gt;Landed for 2.14&lt;/p&gt;</comment>
                            <comment id="259369" author="jamesanunez" created="Fri, 6 Dec 2019 15:33:58 +0000"  >&lt;p&gt;Reopening this ticket because the patch that landed changes the error message formatting but does not address the problem described in this ticket.&lt;/p&gt;</comment>
                            <comment id="260806" author="adilger" created="Wed, 8 Jan 2020 23:31:52 +0000"  >&lt;p&gt;To fix this, we need one of the patches from &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-4664&quot; title=&quot;sync write should consume grant on client&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-4664&quot;&gt;&lt;del&gt;LU-4664&lt;/del&gt;&lt;/a&gt; and/or &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-12687&quot; title=&quot;Fast ENOSPC on direct I/O&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-12687&quot;&gt;&lt;del&gt;LU-12687&lt;/del&gt;&lt;/a&gt; to be landed, so that &lt;tt&gt;O_DIRECT&lt;/tt&gt; writes used by resync do not consume all of the grants.&lt;/p&gt;</comment>
                            <comment id="274847" author="adilger" created="Thu, 9 Jul 2020 07:38:42 +0000"  >&lt;p&gt;+1 on master:&lt;br/&gt;
&lt;a href=&quot;https://testing.whamcloud.com/test_sets/18633cde-dc95-4d48-add3-405591582c3f&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.whamcloud.com/test_sets/18633cde-dc95-4d48-add3-405591582c3f&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Patch &lt;a href=&quot;https://review.whamcloud.com/35896&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/35896&lt;/a&gt; &quot;&lt;tt&gt;&lt;a href=&quot;https://jira.whamcloud.com/browse/LU-12687&quot; title=&quot;Fast ENOSPC on direct I/O&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-12687&quot;&gt;&lt;del&gt;LU-12687&lt;/del&gt;&lt;/a&gt; osc: consume grants for direct I/O&lt;/tt&gt;&quot; is in master-next and should resolve this issue.  This ticket can be closed once that patch lands and this problem is no longer seen.&lt;/p&gt;</comment>
                            <comment id="274964" author="vilapa" created="Fri, 10 Jul 2020 12:50:36 +0000"  >&lt;p&gt;Additional details about the test; I hope they will be useful for reproducing the issue. In most cases, the failed tests ran with only a single MDT (the list from the link below was analyzed for sanity-lfsck.sh test_36a).&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://testing.whamcloud.com/search?horizon=2332800&amp;amp;test_set_script_id=4f25830c-64fe-11e2-bfb2-52540035b04c&amp;amp;sub_test_script_id=1bd8f58e-6f10-11e8-a55d-52540065bddc&amp;amp;source=sub_tests#redirect&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.whamcloud.com/search?horizon=2332800&amp;amp;test_set_script_id=4f25830c-64fe-11e2-bfb2-52540035b04c&amp;amp;sub_test_script_id=1bd8f58e-6f10-11e8-a55d-52540065bddc&amp;amp;source=sub_tests#redirect&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;However, tests with a single MDT sometimes pass as well.&lt;/p&gt;


&lt;p&gt;For example, output from a failed test:&lt;/p&gt;
&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;
UUID                   1K-blocks        Used   Available Use% Mounted on
lustre-MDT0000_UUID        43584        2444       37152   7% /mnt/lustre[MDT:0]
lustre-OST0000_UUID        71100        7556       56544  12% /mnt/lustre[OST:0]
lustre-OST0001_UUID        71100        5376       58724   9% /mnt/lustre[OST:1]
lustre-OST0002_UUID        71100       10496       50508  18% /mnt/lustre[OST:2]
lustre-OST0003_UUID        71100        1280       61948   3% /mnt/lustre[OST:3]&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;Output from a passing test where multiple MDTs were used:&lt;/p&gt;
&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;
UUID                   1K-blocks        Used   Available Use% Mounted on
lustre-MDT0000_UUID       283520        3968      277504   2% /mnt/lustre[MDT:0]
lustre-MDT0001_UUID       283520        3200      278272   2% /mnt/lustre[MDT:1]
lustre-MDT0002_UUID       283520        3200      278272   2% /mnt/lustre[MDT:2]
lustre-MDT0003_UUID       283520        3200      278272   2% /mnt/lustre[MDT:3]
lustre-OST0000_UUID       282624       16384      264192   6% /mnt/lustre[OST:0]
lustre-OST0001_UUID       282624       10240      270336   4% /mnt/lustre[OST:1]
lustre-OST0002_UUID       282624       22528      258048   9% /mnt/lustre[OST:2]
lustre-OST0003_UUID       282624        4096      276480   2% /mnt/lustre[OST:3]
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;When there are fewer than three OSTs, the test is skipped, so in those cases there is no failure.&lt;/p&gt;

</comment>
                            <comment id="277363" author="jamesanunez" created="Wed, 12 Aug 2020 19:38:19 +0000"  >&lt;p&gt;The patch that Andreas references that should fix this issue, &lt;a href=&quot;https://review.whamcloud.com/#/c/35896/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/#/c/35896/&lt;/a&gt;, landed to master on July 10 and we haven&#8217;t seen this issue since July 9. It looks like this issue is fixed and we can close this ticket.&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                            <outwardlinks description="is related to ">
                                        <issuelink>
            <issuekey id="23274">LU-4664</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="52955">LU-11239</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="56732">LU-12687</issuekey>
        </issuelink>
                            </outwardlinks>
                                                        </issuelinktype>
                    </issuelinks>
                <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|i00mpb:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>