<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 02:14:18 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92">
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-8062]  recovery-small test_115b: @@@@@@ FAIL: dd success </title>
                <link>https://jira.whamcloud.com/browse/LU-8062</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;== recovery-small test 115b: write: late REQ MDunlink and no bulk == 21:12:09 (1461384729)&lt;br/&gt;
Filesystem           1K-blocks   Used Available Use% Mounted on&lt;br/&gt;
onyx-38vm7@tcp:/lustre&lt;br/&gt;
                      74157152 309236  69890576   1% /mnt/lustre&lt;br/&gt;
fail_loc=0x8000051b&lt;br/&gt;
fail_val=4&lt;br/&gt;
Filesystem           1K-blocks   Used Available Use% Mounted on&lt;br/&gt;
onyx-38vm7@tcp:/lustre&lt;br/&gt;
                      74157152 309236  69890576   1% /mnt/lustre&lt;br/&gt;
CMD: onyx-38vm8 lctl set_param fail_val=0 fail_loc=0x80000215&lt;br/&gt;
fail_val=0&lt;br/&gt;
fail_loc=0x80000215&lt;br/&gt;
1+0 records in&lt;br/&gt;
1+0 records out&lt;br/&gt;
4096 bytes (4.1 kB) copied, 2.13538 s, 1.9 kB/s&lt;br/&gt;
 recovery-small test_115b: @@@@@@ FAIL: dd success &lt;br/&gt;
  Trace dump:&lt;br/&gt;
  = /usr/lib64/lustre/tests/test-framework.sh:4764:error()&lt;br/&gt;
  = /usr/lib64/lustre/tests/recovery-small.sh:2161:test_115_write()&lt;br/&gt;
  = /usr/lib64/lustre/tests/recovery-small.sh:2181:test_115b()&lt;br/&gt;
  = /usr/lib64/lustre/tests/test-framework.sh:5028:run_one()&lt;br/&gt;
  = /usr/lib64/lustre/tests/test-framework.sh:5067:run_one_logged()&lt;br/&gt;
  = /usr/lib64/lustre/tests/test-framework.sh:4914:run_test()&lt;br/&gt;
  = /usr/lib64/lustre/tests/recovery-small.sh:2183:main()&lt;br/&gt;
Dumping lctl log to /logdir/test_logs/2016-04-22/lustre-reviews-el6_7-x86_64-&lt;del&gt;review-dne-part-1&lt;/del&gt;-1_6_1_&lt;em&gt;38438&lt;/em&gt;_-70130481106820-100010/recovery-small.test_115b.*.1461384732.log&lt;/p&gt;</description>
                <environment></environment>
        <key id="36357">LU-8062</key>
            <summary> recovery-small test_115b: @@@@@@ FAIL: dd success </summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="3" iconUrl="https://jira.whamcloud.com/images/icons/priorities/major.svg">Major</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="wc-triage">WC Triage</assignee>
                                    <reporter username="529964">Bhagyesh Dudhediya</reporter>
                        <labels>
                    </labels>
                <created>Mon, 25 Apr 2016 05:04:39 +0000</created>
                <updated>Thu, 3 Aug 2017 23:04:04 +0000</updated>
                            <resolved>Wed, 19 Jul 2017 03:36:54 +0000</resolved>
                                    <version>Lustre 2.8.0</version>
                    <version>Lustre 2.9.0</version>
                    <version>Lustre 2.10.0</version>
                                    <fixVersion>Lustre 2.10.1</fixVersion>
                    <fixVersion>Lustre 2.11.0</fixVersion>
                                        <due></due>
                            <votes>0</votes>
                                    <watches>16</watches>
                                                                            <comments>
                            <comment id="149983" author="529964" created="Mon, 25 Apr 2016 05:08:19 +0000"  >&lt;p&gt;In recovery-small/test_115b :&lt;br/&gt;
An IO RPC is injected with a fail_loc value so that it waits for a while for some completion (req/reply LNet buffer unlink, or even bulk unlink).&lt;br/&gt;
A delay is also injected before the RPC is sent.&lt;br/&gt;
The delay gives time to inject a second fail_loc, this time on the server side, which makes the server return an error in this particular case. Whereas the 1st fail_loc is injected accurately (there are no other IO RPCs at that time), the problem with the test is that the 2nd fail_loc is too generic: whatever RPC happens to be handled, the server returns an error for it. This means that if some other RPC arrives just before our IO RPC, that RPC gets the error and our IO does not.&lt;br/&gt;
In the OST-side logs the following lines are seen, indicating that the fail_loc which should be caught in the IO path is actually caught by the statfs thread:&lt;/p&gt;
&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;00002000:00000020:1.0:1461384731.470636:0:7993:0:(ofd_obd.c:842:ofd_statfs()) 2317411 blocks: 2307535 free, 2183803 avail; 618944 objects: 618375 free; state 0
00002000:02000000:1.0:1461384731.470641:0:7993:0:(libcfs_fail.h:96:cfs_fail_check_set()) *** cfs_fail_loc=215, val=0***
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;i.e. a statfs is in progress in thread 7993 and it catches the 0x215 fail_loc.&lt;/p&gt;</comment>
                            <comment id="149985" author="adilger" created="Mon, 25 Apr 2016 05:58:07 +0000"  >&lt;p&gt;It looks like this failure has only been happening since 2016-04-21:&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/sub_tests/b09ba16c-07d9-11e6-9b34-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/sub_tests/b09ba16c-07d9-11e6-9b34-5254006e85c2&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;so it is likely related to some patch that landed shortly before that time.&lt;/p&gt;</comment>
                            <comment id="149986" author="adilger" created="Mon, 25 Apr 2016 06:13:23 +0000"  >&lt;p&gt;This appears to be the top failing test on master in the past week, so increasing the priority.&lt;/p&gt;

&lt;p&gt;I checked a couple of the failing patches, and there doesn&apos;t seem to be any common source of the failure (i.e. it isn&apos;t caused by a bad patch that is repeatedly being retested) and some of the failures are on master.  The few that I looked at more closely are based on commit 0f62ba942939c26edf07176d2eb082d38e95caec:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;CommitDate: Thu Apr 21 02:28:41 2016 +0000

    LU-8024 kernel: kernel update [SLES12 SP1 3.12.57-60.35]
    
    Update target and kernel_config files for new version
    
    Test-Parameters: clientdistro=sles12 testgroup=review-ldiskfs \
      mdsdistro=sles12 ossdistro=sles12 mdsfilesystemtype=ldiskfs \
      mdtfilesystemtype=ldiskfs ostfilesystemtype=ldiskfs
    
    Signed-off-by: Bob Glossman &amp;lt;bob.glossman@intel.com&amp;gt;
    Change-Id: Ic9f33c4877f93249b37726d8ad60c98ee624f719
    Reviewed-on: http://review.whamcloud.com/19593
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;This isn&apos;t to imply that this commit is the root cause of the problem (many of the test failures are on RHEL6.7, not SLES at all), but rather that this is the point at which the other patches are failing this test, so the problematic commit is somewhere before this point.&lt;/p&gt;</comment>
                            <comment id="149987" author="adilger" created="Mon, 25 Apr 2016 06:18:00 +0000"  >&lt;p&gt;It appears the root cause of these failures is the following patch, which added test_115b and landed on 2016-04-21:&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;commit 55f8520817a31dabf19fe0a8ac2492b85d039c38
Author:     Vitaly Fertman &amp;lt;vitaly.fertman@seagate.com&amp;gt;
CommitDate: Thu Apr 21 02:27:54 2016 +0000

    LU-7434 ptlrpc: lost bulk leads to a hang
    
    The reverse order of request_out_callback() and reply_in_callback()
    puts the RPC into UNREGISTERING state, which is waiting for RPC &amp;amp;
    bulk md unlink, whereas only RPC md unlink has been called so far.
    If bulk is lost, even expired_set does not check for UNREGISTERING
    state.
    
    The same for write if server returns an error.
    
    This phase is ambiguous, split to UNREG_RPC and UNREG_BULK.
    
    Signed-off-by: Vitaly Fertman &amp;lt;vitaly.fertman@seagate.com&amp;gt;
    Reviewed-on: http://review.whamcloud.com/17221
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
                            <comment id="149988" author="adilger" created="Mon, 25 Apr 2016 06:19:28 +0000"  >&lt;p&gt;Vitaly, could you please take a look at this?  Is there a simple fix, or should the patch be reverted to give you more time to look into it?&lt;/p&gt;</comment>
                            <comment id="149989" author="gerrit" created="Mon, 25 Apr 2016 06:40:10 +0000"  >&lt;p&gt;Bhagyesh Dudhediya (bhagyesh.dudhediya@seagate.com) uploaded a new patch: &lt;a href=&quot;http://review.whamcloud.com/19758&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/19758&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-8062&quot; title=&quot; recovery-small test_115b: @@@@@@ FAIL: dd success &quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-8062&quot;&gt;&lt;del&gt;LU-8062&lt;/del&gt;&lt;/a&gt; test: fix fail_val in recovery-small/115b&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: f5eb98a57ed6cfd36321cb2e7e6e2ac2eb309528&lt;/p&gt;</comment>
                            <comment id="150027" author="rhenwood" created="Mon, 25 Apr 2016 14:42:35 +0000"  >&lt;p&gt;Another occurrence of this failure, on master, in review-dne-part-1:&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://testing.hpdd.intel.com/sub_tests/637e1824-0a36-11e6-855a-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/sub_tests/637e1824-0a36-11e6-855a-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="150141" author="adilger" created="Tue, 26 Apr 2016 01:24:42 +0000"  >&lt;p&gt;Closing this bug, since the problematic patch was reverted, and &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-7434&quot; title=&quot;lost bulk leads to a hang&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-7434&quot;&gt;&lt;del&gt;LU-7434&lt;/del&gt;&lt;/a&gt; is still open to track the re-landing of the fixed patch.&lt;/p&gt;</comment>
                            <comment id="150152" author="parinay" created="Tue, 26 Apr 2016 04:25:10 +0000"  >&lt;p&gt; s/&lt;a href=&quot;https://jira.whamcloud.com/browse/LU-7343&quot; title=&quot;sanity test_129: iam_lfix_init_new+0x5/0x20 [osd_ldiskfs]&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-7343&quot;&gt;&lt;del&gt;LU-7343&lt;/del&gt;&lt;/a&gt;/&lt;a href=&quot;https://jira.whamcloud.com/browse/LU-7434&quot; title=&quot;lost bulk leads to a hang&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-7434&quot;&gt;&lt;del&gt;LU-7434&lt;/del&gt;&lt;/a&gt;/ &lt;br/&gt;
(correcting the typo)&lt;/p&gt;</comment>
                            <comment id="150173" author="vitaly_fertman" created="Tue, 26 Apr 2016 09:03:06 +0000"  >&lt;blockquote&gt;&lt;p&gt;Andreas Dilger added a comment (edited):&lt;br/&gt;
Closing this bug, since the problematic patch was reverted, and &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-7434&quot; title=&quot;lost bulk leads to a hang&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-7434&quot;&gt;&lt;del&gt;LU-7434&lt;/del&gt;&lt;/a&gt; is still open to track the re-landing of the fixed patch. &lt;/p&gt;&lt;/blockquote&gt; 
&lt;p&gt;Actually, the fix was already submitted above. Please re-land the original patch together with that fix.&lt;/p&gt;</comment>
                            <comment id="157641" author="niu" created="Tue, 5 Jul 2016 07:20:16 +0000"  >&lt;p&gt;&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/5766e584-41c7-11e6-bbf5-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/5766e584-41c7-11e6-bbf5-5254006e85c2&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The failure reoccurred in master review.&lt;/p&gt;</comment>
                            <comment id="161008" author="yujian" created="Fri, 5 Aug 2016 23:52:30 +0000"  >&lt;p&gt;One more failure instance on master branch: &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/11d1b22e-5b2b-11e6-b2e2-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/11d1b22e-5b2b-11e6-b2e2-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="172182" author="yong.fan" created="Thu, 3 Nov 2016 15:15:18 +0000"  >&lt;p&gt;+1 on master:&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/0409e532-a1d3-11e6-9ab0-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/0409e532-a1d3-11e6-9ab0-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="173041" author="niu" created="Thu, 10 Nov 2016 03:28:08 +0000"  >&lt;p&gt;Hit on master: &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/3c6bacec-a680-11e6-8859-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/3c6bacec-a680-11e6-8859-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="173748" author="yong.fan" created="Wed, 16 Nov 2016 00:29:02 +0000"  >&lt;p&gt;+1 on master:&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/0cf1f83e-ab7d-11e6-a726-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/0cf1f83e-ab7d-11e6-a726-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="175605" author="emoly.liu" created="Wed, 30 Nov 2016 02:00:29 +0000"  >&lt;p&gt;+1 on master:&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/552d00bc-b663-11e6-b603-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/552d00bc-b663-11e6-b603-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="176559" author="yujian" created="Mon, 5 Dec 2016 20:54:38 +0000"  >&lt;p&gt;+1 on master:&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/c78d6572-b396-11e6-85c4-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/c78d6572-b396-11e6-85c4-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="177385" author="yong.fan" created="Sun, 11 Dec 2016 01:06:19 +0000"  >&lt;p&gt;+1 on master:&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/c1bf78dc-be55-11e6-9f18-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/c1bf78dc-be55-11e6-9f18-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="178928" author="yong.fan" created="Fri, 23 Dec 2016 09:56:59 +0000"  >&lt;p&gt;+1 on master:&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/91f16c44-c8d4-11e6-8515-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/91f16c44-c8d4-11e6-8515-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="181681" author="bogl" created="Sat, 21 Jan 2017 22:23:55 +0000"  >&lt;p&gt;another on master:&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/7b0a1850-df81-11e6-be8a-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/7b0a1850-df81-11e6-be8a-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="182106" author="bogl" created="Wed, 25 Jan 2017 17:19:19 +0000"  >&lt;p&gt;another on master:&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/f3a0e4da-e321-11e6-981b-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/f3a0e4da-e321-11e6-981b-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="183480" author="bogl" created="Sat, 4 Feb 2017 21:28:49 +0000"  >&lt;p&gt;another on master:&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/8bb48412-eadd-11e6-9fd4-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/8bb48412-eadd-11e6-9fd4-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="183534" author="sbuisson" created="Mon, 6 Feb 2017 10:39:52 +0000"  >&lt;p&gt;+1 on master:&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/904c2bbc-ea9f-11e6-b844-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/904c2bbc-ea9f-11e6-b844-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="183615" author="adilger" created="Mon, 6 Feb 2017 19:27:59 +0000"  >&lt;p&gt;As &lt;a href=&quot;https://jira.whamcloud.com/secure/ViewProfile.jspa?name=529964&quot; class=&quot;user-hover&quot; rel=&quot;529964&quot;&gt;529964&lt;/a&gt; mentioned in his first comment, this problem looks to be a problem with the test itself, rather than a problem with the code.  There is a race condition because &lt;tt&gt;fail_loc=0x215&lt;/tt&gt; (&lt;tt&gt;OBD_FAIL_OST_ENOSPC&lt;/tt&gt;) is insufficiently specific to cause &lt;em&gt;only&lt;/em&gt; the write to fail, but it also fails for unrelated &lt;tt&gt;OST_STATFS&lt;/tt&gt; RPCs on that OST (e.g. from the MDS).&lt;/p&gt;</comment>
                            <comment id="183617" author="gerrit" created="Mon, 6 Feb 2017 19:43:44 +0000"  >&lt;p&gt;Andreas Dilger (andreas.dilger@intel.com) uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/25279&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/25279&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-8062&quot; title=&quot; recovery-small test_115b: @@@@@@ FAIL: dd success &quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-8062&quot;&gt;&lt;del&gt;LU-8062&lt;/del&gt;&lt;/a&gt; tests: fix recovery-small test_115b fail_loc&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 8b7d81d7c6437ce085870d0c434b7b44c3d6601f&lt;/p&gt;</comment>
                            <comment id="190179" author="casperjx" created="Thu, 30 Mar 2017 18:48:45 +0000"  >&lt;p&gt;Saw a very similar write-not-blocked issue with master b3541:&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;fail_loc=0x720
1+0 records in
1+0 records out
4096 bytes (4.1 kB) copied, 0.0340116 s, 120 kB/s
 sanity test_313: @@@@@@ FAIL: write should failed
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/1c10446a-0a05-11e7-9053-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/1c10446a-0a05-11e7-9053-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="192393" author="emoly.liu" created="Tue, 18 Apr 2017 03:05:53 +0000"  >&lt;p&gt;+1 on master:&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/41cb5880-23e0-11e7-b742-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/41cb5880-23e0-11e7-b742-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="193394" author="vitaly_fertman" created="Tue, 25 Apr 2017 16:38:27 +0000"  >&lt;blockquote&gt;
&lt;p&gt;As Bhagyesh Dudhediya mentioned in his first comment, this problem looks to be a problem with the test itself, rather than a problem with the code. There is a race condition because fail_loc=0x215 (OBD_FAIL_OST_ENOSPC) is insufficiently specific to cause only the write to fail, but it also fails for unrelated OST_STATFS RPCs on that OST (e.g. from the MDS).&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;Can you clarify how this happens? There is a protection based on $OSTCOUNT: the fail check is triggered only if the fail_val matches the OST id, so this is not supposed to happen.&lt;/p&gt;</comment>
                            <comment id="193449" author="gerrit" created="Tue, 25 Apr 2017 21:10:30 +0000"  >&lt;p&gt;Vitaly Fertman (vitaly.fertman@seagate.com) uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/26815&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/26815&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-8062&quot; title=&quot; recovery-small test_115b: @@@@@@ FAIL: dd success &quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-8062&quot;&gt;&lt;del&gt;LU-8062&lt;/del&gt;&lt;/a&gt; libcfs: schedule_timeout fix&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: fbc35bf832d17afa3aa7c7c9e4178986e7cc0458&lt;/p&gt;</comment>
                            <comment id="202574" author="gerrit" created="Wed, 19 Jul 2017 03:28:40 +0000"  >&lt;p&gt;Oleg Drokin (oleg.drokin@intel.com) merged in patch &lt;a href=&quot;https://review.whamcloud.com/26815/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/26815/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-8062&quot; title=&quot; recovery-small test_115b: @@@@@@ FAIL: dd success &quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-8062&quot;&gt;&lt;del&gt;LU-8062&lt;/del&gt;&lt;/a&gt; ptlrpc: increase sleep time in ptlrpc_request_bufs_pack()&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: e9e744ea7352ea0d1a5d9b2bd05e0e7c19f08596&lt;/p&gt;</comment>
                            <comment id="203126" author="gerrit" created="Fri, 21 Jul 2017 19:51:50 +0000"  >&lt;p&gt;James Simmons (uja.ornl@yahoo.com) uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/28181&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/28181&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-8062&quot; title=&quot; recovery-small test_115b: @@@@@@ FAIL: dd success &quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-8062&quot;&gt;&lt;del&gt;LU-8062&lt;/del&gt;&lt;/a&gt; ptlrpc: increase sleep time in ptlrpc_request_bufs_pack()&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: b2_10&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: b13e61f2e689efb8de1b558e9499c497921187e1&lt;/p&gt;</comment>
                            <comment id="204380" author="gerrit" created="Thu, 3 Aug 2017 21:33:27 +0000"  >&lt;p&gt;John L. Hammond (john.hammond@intel.com) merged in patch &lt;a href=&quot;https://review.whamcloud.com/28181/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/28181/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-8062&quot; title=&quot; recovery-small test_115b: @@@@@@ FAIL: dd success &quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-8062&quot;&gt;&lt;del&gt;LU-8062&lt;/del&gt;&lt;/a&gt; ptlrpc: increase sleep time in ptlrpc_request_bufs_pack()&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: b2_10&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: 39c090bdb9beacc0837cf921d87a451308364131&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10010">
                    <name>Duplicate</name>
                                                                <inwardlinks description="is duplicated by">
                                        <issuelink>
            <issuekey id="36356">LU-8061</issuekey>
        </issuelink>
                            </inwardlinks>
                                    </issuelinktype>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                            <outwardlinks description="is related to ">
                                        <issuelink>
            <issuekey id="33160">LU-7434</issuekey>
        </issuelink>
                            </outwardlinks>
                                                                <inwardlinks description="is related to">
                                        <issuelink>
            <issuekey id="36388">LU-8067</issuekey>
        </issuelink>
                            </inwardlinks>
                                    </issuelinktype>
                    </issuelinks>
                <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                    <customfield id="customfield_10030" key="com.atlassian.jira.plugin.system.customfieldtypes:labels">
                        <customfieldname>Epic/Theme</customfieldname>
                        <customfieldvalues>
                                        <label>test</label>
    
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzy94v:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>