<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 01:36:10 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-3700] sanity-hsm test_21 Error: &apos;wrong block number&apos; </title>
                <link>https://jira.whamcloud.com/browse/LU-3700</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;from &lt;a href=&quot;https://maloo.whamcloud.com/test_sets/0afc2c56-fc86-11e2-8ce2-52540035b04c&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://maloo.whamcloud.com/test_sets/0afc2c56-fc86-11e2-8ce2-52540035b04c&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This sanity-hsm test 21 seems to be failing a lot right now. &lt;br/&gt;
&apos;Wrong block number&apos; is one of the errors seen. &lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;test_21 	

    Error: &apos;wrong block number&apos;
    Failure Rate: 33.00% of last 100 executions [all branches] 
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;


&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;== sanity-hsm test 21: Simple release tests == 23:18:20 (1375510700)
2+0 records in
2+0 records out
2097152 bytes (2.1 MB) copied, 0.353933 s, 5.9 MB/s
 sanity-hsm test_21: @@@@@@ FAIL: wrong block number 
  Trace dump:
  = /usr/lib64/lustre/tests/test-framework.sh:4202:error_noexit()
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
</description>
                <environment>Patches submitted to autotest </environment>
        <key id="20190">LU-3700</key>
            <summary>sanity-hsm test_21 Error: &apos;wrong block number&apos; </summary>
                <type id="7" iconUrl="https://jira.whamcloud.com/images/icons/issuetypes/task_agile.png">Technical task</type>
                            <parent id="20020">LU-3647</parent>
                                    <priority id="1" iconUrl="https://jira.whamcloud.com/images/icons/priorities/blocker.svg">Blocker</priority>
                        <status id="6" iconUrl="https://jira.whamcloud.com/images/icons/statuses/closed.png" description="The issue is considered finished, the resolution is correct. Issues which are closed can be reopened.">Closed</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="3">Duplicate</resolution>
                                        <assignee username="bfaccini">Bruno Faccini</assignee>
                                    <reporter username="keith">Keith Mannthey</reporter>
                        <labels>
                            <label>HSM</label>
                            <label>zfs</label>
                    </labels>
                <created>Mon, 5 Aug 2013 20:40:51 +0000</created>
                <updated>Mon, 30 Dec 2013 20:27:24 +0000</updated>
                            <resolved>Mon, 30 Dec 2013 20:27:24 +0000</resolved>
                                    <version>Lustre 2.5.0</version>
                                    <fixVersion>Lustre 2.6.0</fixVersion>
                                        <due></due>
                            <votes>0</votes>
                                    <watches>9</watches>
                                                                            <comments>
                            <comment id="64594" author="bfaccini" created="Tue, 20 Aug 2013 12:59:41 +0000"  >&lt;p&gt;&quot;stat -c %b &amp;lt;file&amp;gt;&quot; failed to return 0 after &quot;lfs hsm_set --archived --exist &amp;lt;file&amp;gt;&quot;.&lt;/p&gt;

&lt;p&gt;I am currently investigating the test&apos;s lustre debug-logs, but they are missing HSM debug traces ...&lt;/p&gt;

&lt;p&gt;Maybe the sanity-hsm tests currently being modified to mimic future copytool behavior also need to use commands to wait for HSM actions to complete? For example, &quot;wait_request_state $fid ARCHIVE SUCCEED&quot; when &quot;lfs hsm_set --archived --exist &amp;lt;file&amp;gt;&quot; is used as a replacement for &quot;lfs hsm_archive &amp;lt;file&amp;gt;&quot;.&lt;/p&gt;

&lt;p&gt;Will try to set up a platform to reproduce the problem, with HSM debug traces enabled on the Client/MDS VMs, running sanity-hsm tests in a loop.&lt;/p&gt;
</comment>
                            <comment id="65434" author="bfaccini" created="Fri, 30 Aug 2013 13:11:23 +0000"  >&lt;p&gt;I am not able to reproduce the problem with current master, even by running sanity-hsm/test_21 in a loop.&lt;/p&gt;

&lt;p&gt;BTW, according to Maloo reports, test_21 failures for &apos;wrong block number&apos; stopped around Aug. 13th. This seems to match the landing of the patch for &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-3561&quot; title=&quot;Add a sanity test for HSM&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-3561&quot;&gt;&lt;del&gt;LU-3561&lt;/del&gt;&lt;/a&gt;, which brings &quot;real&quot; HSM features (copytool, lfs hsm-commands usage instead of hsm-flags setting) into the tests and the corresponding tools testing.&lt;/p&gt;

&lt;p&gt;So my strong assumption is that this ticket can be closed as no longer relevant.&lt;/p&gt;
</comment>
                            <comment id="65558" author="bfaccini" created="Mon, 2 Sep 2013 16:40:29 +0000"  >&lt;p&gt;To be re-opened in case of re-occurrence.&lt;/p&gt;</comment>
                            <comment id="72386" author="utopiabound" created="Wed, 27 Nov 2013 13:14:28 +0000"  >&lt;p&gt;This has been happening on ZFS pretty regularly:&lt;br/&gt;
&lt;a href=&quot;https://maloo.whamcloud.com/test_sets/5af74a0a-575e-11e3-8d5c-52540035b04c&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://maloo.whamcloud.com/test_sets/5af74a0a-575e-11e3-8d5c-52540035b04c&lt;/a&gt;&lt;br/&gt;
&lt;a href=&quot;https://maloo.whamcloud.com/test_sets/6b9c8f22-5741-11e3-a296-52540035b04c&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://maloo.whamcloud.com/test_sets/6b9c8f22-5741-11e3-a296-52540035b04c&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="72400" author="adilger" created="Wed, 27 Nov 2013 15:51:44 +0000"  >&lt;p&gt;Failed again on ZFS. &lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://maloo.whamcloud.com/test_sessions/58a20a92-5759-11e3-8d5c-52540035b04c&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://maloo.whamcloud.com/test_sessions/58a20a92-5759-11e3-8d5c-52540035b04c&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="72603" author="bfaccini" created="Mon, 2 Dec 2013 16:01:27 +0000"  >&lt;p&gt;Could it be that there is some timing delay, more likely with ZFS, between hsm_release and st_blocks becoming 1?&lt;br/&gt;
Will set up a ZFS platform and run sanity-hsm/test_21 in a loop to reproduce.&lt;/p&gt;</comment>
                            <comment id="72655" author="adilger" created="Mon, 2 Dec 2013 22:46:59 +0000"  >&lt;p&gt;It would make sense to me that &quot;&lt;tt&gt;lfs hsm_release&lt;/tt&gt;&quot; would cause all of the DLM locks to be revoked from the client, so any stat from the client would return st_blocks == 1.  This should be visible in the debug logs, if this test is run with at least &lt;tt&gt;+dlmtrace&lt;/tt&gt; enabled.&lt;/p&gt;

&lt;p&gt;I think in the short term it makes sense to fix sanity-hsm.sh test_21 to enable full debug for this test (using debugsave() and debugrestore()), and print the actual block number that is returned.  Maybe it is as simple as ZFS returning 2 with an external xattr or something, which might even happen with ldiskfs?  Probably it makes sense to also allow some small margin, like 5 blocks or so.  This test is also bad because there are two places that print &quot;wrong block number&quot;, and it isn&apos;t even clear which one is failing.&lt;/p&gt;

&lt;p&gt;I think it makes sense to submit a patch to change this immediately to the following:&lt;/p&gt;

&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;        local fid=$(make_small $f)
        local orig_size=$(stat -c &lt;span class=&quot;code-quote&quot;&gt;&quot;%s&quot;&lt;/span&gt; $f)
        local orig_blocks=$(stat -c &lt;span class=&quot;code-quote&quot;&gt;&quot;%b&quot;&lt;/span&gt; $f)

        check_hsm_flags $f &lt;span class=&quot;code-quote&quot;&gt;&quot;0x00000000&quot;&lt;/span&gt;
        $LFS hsm_archive $f || error &lt;span class=&quot;code-quote&quot;&gt;&quot;could not archive file&quot;&lt;/span&gt;
        wait_request_state $fid ARCHIVE SUCCEED

        local blocks=$(stat -c &lt;span class=&quot;code-quote&quot;&gt;&quot;%b&quot;&lt;/span&gt; $f)
        [ $blocks -eq $orig_blocks ] || error &lt;span class=&quot;code-quote&quot;&gt;&quot;$f: wrong blocks after archive: $blocks != $orig_blocks&quot;&lt;/span&gt;
        local size=$(stat -c &lt;span class=&quot;code-quote&quot;&gt;&quot;%s&quot;&lt;/span&gt; $f)
        [ $size -eq $orig_size ] || error &lt;span class=&quot;code-quote&quot;&gt;&quot;$f: wrong size after archive: $size != $orig_size&quot;&lt;/span&gt;

        # Release and check states
        $LFS hsm_release $f || error &lt;span class=&quot;code-quote&quot;&gt;&quot;$f: could not release file&quot;&lt;/span&gt;
        check_hsm_flags $f &lt;span class=&quot;code-quote&quot;&gt;&quot;0x0000000d&quot;&lt;/span&gt;

        blocks=$(stat -c &lt;span class=&quot;code-quote&quot;&gt;&quot;%b&quot;&lt;/span&gt; $f)
        [ $blocks -le 5 ] || error &lt;span class=&quot;code-quote&quot;&gt;&quot;$f: too many blocks after release: $blocks &amp;gt; 5&quot;&lt;/span&gt;
        size=$(stat -c &lt;span class=&quot;code-quote&quot;&gt;&quot;%s&quot;&lt;/span&gt; $f)
        [ $size -eq $orig_size ] || error &lt;span class=&quot;code-quote&quot;&gt;&quot;$f: wrong size after release: $size != $orig_size&quot;&lt;/span&gt;
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Maybe this will allow ZFS to pass, but even if it doesn&apos;t then we will have more information to debug the problem.&lt;/p&gt;</comment>
                            <comment id="72697" author="utopiabound" created="Tue, 3 Dec 2013 14:21:21 +0000"  >&lt;p&gt;I&apos;ll work up and test a patch per Andreas&apos;s comment.&lt;/p&gt;</comment>
                            <comment id="72698" author="bfaccini" created="Tue, 3 Dec 2013 14:29:50 +0000"  >&lt;p&gt;Patch is at &lt;a href=&quot;http://review.whamcloud.com/8467&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/8467&lt;/a&gt;.&lt;br/&gt;
But I wonder how, even for ZFS, a released file&apos;s st_blocks can be anything other than 1 after my patch #7776 to force it, as part of &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-3864&quot; title=&quot;stat() on HSM released file returns st_blocks = 0&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-3864&quot;&gt;&lt;del&gt;LU-3864&lt;/del&gt;&lt;/a&gt; &#8230;&lt;/p&gt;</comment>
                            <comment id="72850" author="adilger" created="Wed, 4 Dec 2013 20:43:56 +0000"  >&lt;p&gt;Nathaniel, can you please submit a separate patch to disable this test for ZFS only.  That can test and possibly land in parallel with 8467 if that does not give us any relief.&lt;/p&gt;</comment>
                            <comment id="73015" author="utopiabound" created="Fri, 6 Dec 2013 21:31:29 +0000"  >&lt;p&gt;In case 8467 doesn&apos;t fix 21 for ZFS, here is a patch to EXCEPT it: &lt;a href=&quot;http://review.whamcloud.com/8503&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/8503&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="73206" author="utopiabound" created="Tue, 10 Dec 2013 17:27:10 +0000"  >&lt;p&gt;There was a failure &lt;a href=&quot;https://maloo.whamcloud.com/test_sets/8d7a2b66-6134-11e3-bd66-52540035b04c&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://maloo.whamcloud.com/test_sets/8d7a2b66-6134-11e3-bd66-52540035b04c&lt;/a&gt; post #8467.  It is, so far, unique (but not seemingly patch related).  &lt;/p&gt;</comment>
                            <comment id="73245" author="adilger" created="Tue, 10 Dec 2013 21:51:51 +0000"  >&lt;p&gt;Hmm, looking at the test more closely, now that we have a test failure, it seems the test itself is defective.  The blocks count == 1 should not be true after ARCHIVE (as it was checked before the 8467 patch), only after RELEASE.  The failed test reported:&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;# error &quot;$f: wrong size after archive: $size != $orig_size&quot;

wrong block number after archive:  4103 != 1
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;so it is setting &quot;orig_blocks&quot; == 1.&lt;/p&gt;

&lt;p&gt;That the test passes on ldiskfs is likely just a race, because it is a bit faster and the client does not have the actual on-disk blocks count, so it will return &quot;st_blocks=1&quot; to userspace to avoid bugs in tar/rsync that think st_blocks=0 means the file has no data in it.  The zfs test is a bit slower to archive, and the client gets the actual st_blocks value back, and (incorrectly) fails this check.&lt;/p&gt;

&lt;p&gt;I suspect that even with make_small using conv=fsync, the client still has the original blocks count cached, so what is needed to fix this bug is to flush the locks on the client before getting orig_blocks:&lt;/p&gt;

&lt;p&gt;        cancel_lru_locks osc&lt;/p&gt;

&lt;p&gt;That way, it returns an accurate value for st_blocks for later testing.&lt;/p&gt;

&lt;p&gt;Nathaniel, can you please submit a patch ASAP so maybe we can land this by tomorrow.&lt;/p&gt;</comment>
                            <comment id="73250" author="bfaccini" created="Tue, 10 Dec 2013 23:25:13 +0000"  >&lt;p&gt;Andreas, I am not sure I understand you. Do you mean that on ZFS, to get the correct st_blocks from &quot;stat -c %b&quot; just after make_small(), we need to &quot;cancel_lru_locks osc&quot; in between?&lt;/p&gt;</comment>
                            <comment id="73258" author="utopiabound" created="Wed, 11 Dec 2013 01:17:47 +0000"  >&lt;p&gt;The OSS is definitely returning a funny (not the same) number of blocks after doing hsm_archive. I checked this on my local setup with wireshark, and OST_GETATTR is replying with a different number of blocks than before doing hsm_archive.&lt;/p&gt;

&lt;p&gt;It may be that the passing tests are the ones hitting the cached value on the client and the failure is the one getting the value from the OST.&lt;/p&gt;

&lt;p&gt;OST debug log:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;1386629604.515513:0:1344:0:(ldlm_lockd.c:1167:ldlm_handle_enqueue0()) ### server-side enqueue handler START
1386629604.515525:0:1344:0:(ldlm_lockd.c:1253:ldlm_handle_enqueue0()) ### server-side enqueue handler, new lock created ns: filter-lustre-OST0001_UUID lock: ffff880072cb7740/0xb2c4c9585d128715 lrc: 2/0,0 mode: --/PR res: [0x21aa:0x0:0x0].0 rrc: 2 type: EXT [0-&amp;gt;0] (req 0-&amp;gt;0) flags: 0x40000000000000 nid: local remote: 0x59a0a522abcc8a58 expref: -99 pid: 1344 timeout: 0 lvb_type: 0
1386629604.515559:0:1344:0:(client.c:1473:ptlrpc_send_new_req()) Sending RPC pname:cluuid:pid:xid:nid:opc ll_ost00_002:lustre-OST0001_UUID:1344:1453973284418480:10.10.16.236@tcp:106
1386629604.515582:0:1344:0:(client.c:2123:ptlrpc_set_wait()) set ffff880072921ec0 going to sleep for 6 seconds
1386629604.516784:0:1344:0:(ofd_lvb.c:195:ofd_lvbo_update()) res: [0x21aa:0x0:0x0] updating lvb size: 0 -&amp;gt; 2097152
1386629604.516789:0:1344:0:(ofd_lvb.c:207:ofd_lvbo_update()) res: [0x21aa:0x0:0x0] updating lvb atime: 0 -&amp;gt; 1386629600
1386629604.516799:0:1344:0:(ofd_lvb.c:265:ofd_lvbo_update()) res: [0x100000000:0x21aa:0x0] updating lvb blocks from disk: 1 -&amp;gt; 4103
1386629604.516813:0:1344:0:(client.c:1840:ptlrpc_check_set()) Completed RPC pname:cluuid:pid:xid:nid:opc ll_ost00_002:lustre-OST0001_UUID:1344:1453973284418480:10.10.16.236@tcp:106
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
</comment>
                            <comment id="73520" author="utopiabound" created="Fri, 13 Dec 2013 22:14:36 +0000"  >&lt;p&gt;ZFS reports the number of blocks already written to disk, not the number of blocks the file will use once fully written, so the number actually changes after the write completes in ZFS.  It&apos;s not a locking or consistency issue in Lustre.&lt;/p&gt;</comment>
                            <comment id="73522" author="utopiabound" created="Fri, 13 Dec 2013 22:24:44 +0000"  >&lt;p&gt;My previous patch actually skipped the wrong test.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;http://review.whamcloud.com/8575&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/8575&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="73536" author="adilger" created="Sat, 14 Dec 2013 10:18:06 +0000"  >&lt;p&gt;I didn&apos;t see this comment before I inspected the patch. However, this implies a different bug in ZFS. Namely, the &quot;small_file&quot; helper in sanity-hsm is creating files with conv=fsync, so the fsync on close should flush all the blocks from the client cache and onto disk on the OST. After that point the blocks count for the file should not change.  Likewise, if the locks are cancelled at the client, it should flush all the dirty blocks from cache to disk on the OST.&lt;/p&gt;

&lt;p&gt;This makes me wonder if ZFS is implementing fsync properly at the server. &lt;/p&gt;</comment>
                            <comment id="73567" author="bfaccini" created="Mon, 16 Dec 2013 13:32:34 +0000"  >&lt;p&gt;BTW, since my change #8467 also enables full debug logs and has landed already, we may be able to use such logs gathered during recent failures of sanity-hsm/test_20, now with the &quot;wrong block number after archive: &#8230;&quot; symptom, to troubleshoot.&lt;/p&gt;</comment>
                            <comment id="73570" author="utopiabound" created="Mon, 16 Dec 2013 14:06:33 +0000"  >&lt;p&gt;conv=fsync causes an MDS_SYNC to be sent to the MDT (but no syncs are sent to the OSTs); it does not cause the OST_BRW_ASYNC flag to be cleared in the OST_WRITE, so the OST does not think it needs to sync the data to disk.&lt;/p&gt;

&lt;p&gt;So even changing conv=fsync to oflag=sync (which causes OST_SYNCs to be sent), here is the wire traffic:&lt;br/&gt;
OST_WRITE sent (OST_BRW_ASYNC is set), returns 1 block&lt;br/&gt;
OST_SYNC sent, returns 2053 blocks&lt;br/&gt;
OST_WRITE sent (OST_BRW_ASYNC is set), returns 2053 blocks&lt;br/&gt;
OST_SYNC sent, returns 4101 blocks (the correct amount)&lt;/p&gt;

&lt;p&gt;A stat of the file at this point only shows 2053 (the last OST_WRITE amount).&lt;/p&gt;

&lt;p&gt;For ldiskfs the wire traffic is:&lt;br/&gt;
OST_WRITE sent (OST_BRW_ASYNC is set), returns 2048 blocks&lt;br/&gt;
OST_SYNC sent, returns 2048 blocks&lt;br/&gt;
OST_WRITE sent (OST_BRW_ASYNC is set), returns 4096 blocks (the correct amount)&lt;br/&gt;
OST_SYNC sent, returns 4096 blocks&lt;/p&gt;

&lt;p&gt;Looking at the client code, the oa is ignored when processing the OST_SYNC reply.&lt;/p&gt;</comment>
                            <comment id="73660" author="adilger" created="Tue, 17 Dec 2013 08:50:08 +0000"  >&lt;p&gt;Looks like two different bugs.&lt;/p&gt;

&lt;p&gt;The conv=fsync option should indeed result in a single OST_SYNC RPC sent at the end of the writes. I suspect that this was previously skipped because OST writes were always synchronous (so fsync() was a no-op), and the async journal commit feature was developed on b1_8 and this wasn&apos;t fixed in the CLIO code when it landed.  It should be noted that the Lustre OST_SYNC allows syncing a range of data on a single object, so the mapping of the VFS sync_page_range() method should map its range to the RPC, and extract that from the RPC on the server side (it might already do this).&lt;/p&gt;

&lt;p&gt;The second problem about the client not updating the blocks count based on reply values should also be investigated. I expect that the ZFS block count is not updated by the time the write is submitted, so it doesn&apos;t reply with the new block count to the client. However, the subsequent OST_SYNC should result in the right blocks count being returned to the client and then being cached under the DLM lock. &lt;/p&gt;</comment>
                            <comment id="73685" author="utopiabound" created="Tue, 17 Dec 2013 16:04:31 +0000"  >&lt;p&gt;No OST_SYNC from fsync() is &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-4388&quot; title=&quot;fsync on client does not cause OST_SYNCs to be issued&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-4388&quot;&gt;&lt;del&gt;LU-4388&lt;/del&gt;&lt;/a&gt;&lt;br/&gt;
New blockcount from OST_SYNC not being kept is &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-4389&quot; title=&quot;If OST_SYNC causes inode update, client does not reflect change&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-4389&quot;&gt;&lt;del&gt;LU-4389&lt;/del&gt;&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="74080" author="adilger" created="Wed, 25 Dec 2013 20:10:12 +0000"  >&lt;p&gt;Patch &lt;a href=&quot;http://review.whamcloud.com/8575&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/8575&lt;/a&gt; was landed, so hopefully this test will now pass. The related failures (IMHO even more serious) still need to be fixed. &lt;/p&gt;</comment>
                            <comment id="74107" author="yong.fan" created="Fri, 27 Dec 2013 09:41:48 +0000"  >&lt;p&gt;Another failure instance:&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://maloo.whamcloud.com/test_sets/459d8ae0-6e86-11e3-b713-52540035b04c&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://maloo.whamcloud.com/test_sets/459d8ae0-6e86-11e3-b713-52540035b04c&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="74172" author="jlevi" created="Mon, 30 Dec 2013 20:26:59 +0000"  >&lt;p&gt;I am closing this ticket per email comments from Andreas:&lt;br/&gt;
&lt;a href=&quot;https://jira.whamcloud.com/browse/LU-3700&quot; title=&quot;sanity-hsm test_21 Error: &amp;#39;wrong block number&amp;#39; &quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-3700&quot;&gt;&lt;del&gt;LU-3700&lt;/del&gt;&lt;/a&gt; has been worked around for now at the test level, so it can probably be closed. The other bugs (&lt;a href=&quot;https://jira.whamcloud.com/browse/LU-4388&quot; title=&quot;fsync on client does not cause OST_SYNCs to be issued&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-4388&quot;&gt;&lt;del&gt;LU-4388&lt;/del&gt;&lt;/a&gt; and LU-4389) are tracking the root cause of the &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-3700&quot; title=&quot;sanity-hsm test_21 Error: &amp;#39;wrong block number&amp;#39; &quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-3700&quot;&gt;&lt;del&gt;LU-3700&lt;/del&gt;&lt;/a&gt; failure.&lt;/p&gt;

&lt;p&gt;Cheers, Andreas&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                                                <inwardlinks description="is related to">
                                        <issuelink>
            <issuekey id="20195">LU-3704</issuekey>
        </issuelink>
                            </inwardlinks>
                                    </issuelinktype>
                    </issuelinks>
                <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                    <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzvx6v:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9548</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                </customfields>
    </item>
</channel>
</rss>