<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 01:53:13 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92">
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-5638] sanity-quota test_33 for ZFS-based backend: Used inodes for user 60000 isn&apos;t 0. 1</title>
                <link>https://jira.whamcloud.com/browse/LU-5638</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;This issue was created by maloo for nasf &amp;lt;fan.yong@intel.com&amp;gt;&lt;/p&gt;

&lt;p&gt;This issue relates to the following test suite run: &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/a26efad0-3e95-11e4-916a-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/a26efad0-3e95-11e4-916a-5254006e85c2&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The sub-test test_33 failed with the following error:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Used inodes for user 60000 isn&apos;t 0. 1&lt;/p&gt;&lt;/blockquote&gt;

&lt;p&gt;Please provide additional information about the failure here.&lt;/p&gt;

&lt;p&gt;Info required for matching: sanity-quota 33&lt;/p&gt;</description>
                <environment></environment>
        <key id="26634">LU-5638</key>
            <summary>sanity-quota test_33 for ZFS-based backend: Used inodes for user 60000 isn&apos;t 0. 1</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="3" iconUrl="https://jira.whamcloud.com/images/icons/priorities/major.svg">Major</priority>
                        <status id="6" iconUrl="https://jira.whamcloud.com/images/icons/statuses/closed.png" description="The issue is considered finished, the resolution is correct. Issues which are closed can be reopened.">Closed</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="hongchao.zhang">Hongchao Zhang</assignee>
                                    <reporter username="maloo">Maloo</reporter>
                        <labels>
                            <label>zfs</label>
                    </labels>
                <created>Thu, 18 Sep 2014 01:47:11 +0000</created>
                <updated>Thu, 2 Aug 2018 20:17:28 +0000</updated>
                            <resolved>Thu, 2 Aug 2018 20:17:28 +0000</resolved>
                                    <version>Lustre 2.7.0</version>
                                    <fixVersion>Lustre 2.12.0</fixVersion>
                                        <due></due>
                            <votes>0</votes>
                                    <watches>17</watches>
                                                                            <comments>
                            <comment id="94347" author="isaac" created="Thu, 18 Sep 2014 04:00:33 +0000"  >&lt;p&gt;Lots of errors like these in the debug logs:&lt;br/&gt;
00000001:04000000:0.0:1410962344.043221:0:30818:0:(osd_quota.c:98:osd_acct_index_lookup()) lustre-OST0001: id ea60 not found in DMU accounting ZAP&lt;br/&gt;
00000001:04000000:0.0:1410962344.043236:0:30818:0:(osd_quota.c:117:osd_acct_index_lookup()) lustre-OST0001: id ea60 not found in accounting ZAP&lt;/p&gt;

&lt;p&gt;And 60000=0xea60, so it looked like the user hadn&apos;t created anything yet so the zap_lookup() returned negative ENOENT for both the DMU ZAP and the OSD ZAP. But in this case osd_acct_index_lookup() already sets both rec-&amp;gt;bspace and rec-&amp;gt;ispace to 0. I&apos;m a bit confused by the return values of osd_acct_index_lookup(), though: it returns either +1 or -errno, but the lquota_disk_read() callers expect 0 for success, with -ENOENT and other -errno values for errors.&lt;/p&gt;

&lt;p&gt;Someone who knows the quota code should comment.&lt;/p&gt;</comment>
                            <comment id="94594" author="niu" created="Mon, 22 Sep 2014 02:28:14 +0000"  >&lt;blockquote&gt;
&lt;p&gt; I&apos;m a bit confused by the return values of osd_acct_index_lookup(), though: it returns either +1 or -errno, but the lquota_disk_read() callers expect 0 for success, with -ENOENT and other -errno values for errors.&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;dt_lookup() converted the return values.&lt;/p&gt;</comment>
                            <comment id="94597" author="niu" created="Mon, 22 Sep 2014 03:55:06 +0000"  >&lt;blockquote&gt;
&lt;p&gt;Lots of errors like these in the debug logs:&lt;br/&gt;
00000001:04000000:0.0:1410962344.043221:0:30818:0:(osd_quota.c:98:osd_acct_index_lookup()) lustre-OST0001: id ea60 not found in DMU accounting ZAP&lt;br/&gt;
00000001:04000000:0.0:1410962344.043236:0:30818:0:(osd_quota.c:117:osd_acct_index_lookup()) lustre-OST0001: id ea60 not found in accounting ZAP&lt;/p&gt;

&lt;p&gt;And 60000=0xea60, so it looked like the user hadn&apos;t created anything yet so the zap_lookup() returned negative ENOENT for both the DMU ZAP and the OSD ZAP. But in this case osd_acct_index_lookup() already sets both rec-&amp;gt;bspace and rec-&amp;gt;ispace to 0. &lt;/p&gt;&lt;/blockquote&gt;

&lt;p&gt;These messages were from OSTs, and the test failed for incorrect inode usage (which is the inode usage on the MDT), so I think those OST messages are irrelevant.&lt;/p&gt;

&lt;p&gt;I checked the MDT log but didn&apos;t find anything abnormal. I suspect this failure is caused by a race when updating the inode accounting ZAP: zap_increment_int() doesn&apos;t take a lock to make the &quot;lookup -&amp;gt; update&quot; sequence atomic. I believe the patch from &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-2600&quot; title=&quot;lustre metadata performance is very slow on zfs&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-2600&quot;&gt;&lt;del&gt;LU-2600&lt;/del&gt;&lt;/a&gt; could fix this problem; unfortunately, that patch was reverted because of a regression. I&apos;m not sure if Alex is still working on this.&lt;/p&gt;

&lt;p&gt;As a short-term solution, perhaps we could introduce a lock in the osd layer to serialize zap_increment_int()?&lt;/p&gt;</comment>
                            <comment id="96916" author="isaac" created="Tue, 21 Oct 2014 20:06:15 +0000"  >&lt;p&gt;I think it makes sense to fix zap_increment_int() instead - it needs exclusive access to do zap_update() anyway.&lt;/p&gt;</comment>
                            <comment id="96982" author="bzzz" created="Wed, 22 Oct 2014 05:44:01 +0000"  >&lt;p&gt;Doing so on every accounting change would be very expensive, IMO. Instead we should be doing this at commit time, where all user transactions are done and we have exclusive access by definition.&lt;/p&gt;</comment>
                            <comment id="96985" author="isaac" created="Wed, 22 Oct 2014 06:25:34 +0000"  >&lt;p&gt;Yes, of course, batching the updates at sync time would be the best solution. Actually, that&apos;s exactly how the DMU updates the DMU_USERUSED_OBJECT/DMU_GROUPUSED_OBJECT, in dsl_pool_sync()-&amp;gt;dmu_objset_do_userquota_updates()-&amp;gt;do_userquota_update()-&amp;gt;zap_increment_int().&lt;/p&gt;</comment>
                            <comment id="96986" author="bzzz" created="Wed, 22 Oct 2014 06:31:34 +0000"  >&lt;p&gt;right, this is what I was trying to implement in &lt;a href=&quot;http://review.whamcloud.com/#/c/10785/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/#/c/10785/&lt;/a&gt;, but failed.&lt;/p&gt;</comment>
                            <comment id="97660" author="isaac" created="Tue, 28 Oct 2014 03:56:20 +0000"  >&lt;p&gt;Johann has asked me to work on adding dnode accounting support to ZFS in &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-2435&quot; title=&quot;inode accounting in osd-zfs is racy&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-2435&quot;&gt;&lt;del&gt;LU-2435&lt;/del&gt;&lt;/a&gt;.&lt;/p&gt;</comment>
                            <comment id="120034" author="jamesanunez" created="Wed, 1 Jul 2015 15:03:29 +0000"  >&lt;p&gt;Another instance of this failure at &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/2caf1f82-1f45-11e5-a4d6-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/2caf1f82-1f45-11e5-a4d6-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="120395" author="fsaunier" created="Mon, 6 Jul 2015 13:50:56 +0000"  >&lt;p&gt;These tests seem to be hitting occurrences of the same issue (sanity-quota 33, 34, and 35):&lt;/p&gt;
&lt;ul&gt;
	&lt;li&gt;&lt;a href=&quot;https://testing.hpdd.intel.com/test_logs/63ba8d80-21ad-11e5-a979-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_logs/63ba8d80-21ad-11e5-a979-5254006e85c2&lt;/a&gt;&lt;/li&gt;
	&lt;li&gt;&lt;a href=&quot;https://testing.hpdd.intel.com/test_logs/630f5262-21ad-11e5-a979-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_logs/630f5262-21ad-11e5-a979-5254006e85c2&lt;/a&gt;&lt;/li&gt;
	&lt;li&gt;&lt;a href=&quot;https://testing.hpdd.intel.com/test_logs/63384fdc-21ad-11e5-a979-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_logs/63384fdc-21ad-11e5-a979-5254006e85c2&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
</comment>
                            <comment id="120530" author="pichong" created="Tue, 7 Jul 2015 07:31:11 +0000"  >&lt;p&gt;Two new occurrences on master:&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sessions/761c8276-2419-11e5-91e9-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sessions/761c8276-2419-11e5-91e9-5254006e85c2&lt;/a&gt;&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sessions/1b40755e-242a-11e5-87f6-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sessions/1b40755e-242a-11e5-87f6-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="120628" author="jamesanunez" created="Tue, 7 Jul 2015 19:33:43 +0000"  >&lt;p&gt;sanity-quota test 11 started failing less than a week ago with inode quota issues. The test is failing with:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;Used inodes(1) is less than 2
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;It looks like the test 11 failures might be the same as, or related to, this ticket because the MDS debug log contains the same messages as above:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;(osd_quota.c:120:osd_acct_index_lookup()) lustre-MDT0000: id ea60 not found in DMU accounting ZAP
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;In the cases below, sanity-quota tests 33, 34 and 35 all fail after test 11 fails:&lt;br/&gt;
2015-07-03 12:14:19 - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/5ee4d41e-21ad-11e5-a979-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/5ee4d41e-21ad-11e5-a979-5254006e85c2&lt;/a&gt;&lt;br/&gt;
2015-07-03 19:34:12 - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/41c9e6e6-21e7-11e5-b398-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/41c9e6e6-21e7-11e5-b398-5254006e85c2&lt;/a&gt;&lt;br/&gt;
2015-07-05 17:43:49 - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/ff5d7856-2365-11e5-a6ad-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/ff5d7856-2365-11e5-a6ad-5254006e85c2&lt;/a&gt;&lt;br/&gt;
2015-07-06 17:15:26 - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/7a443f68-242a-11e5-87f6-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/7a443f68-242a-11e5-87f6-5254006e85c2&lt;/a&gt;&lt;br/&gt;
2015-07-06 21:12:05 - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/080b7f58-244b-11e5-91e9-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/080b7f58-244b-11e5-91e9-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="121007" author="bfaccini" created="Fri, 10 Jul 2015 17:25:41 +0000"  >&lt;p&gt;Three new, consecutive occurrences for the same master patch review (&lt;a href=&quot;http://review.whamcloud.com/14384/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/14384/&lt;/a&gt;):&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sessions/da77e524-21e6-11e5-b398-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sessions/da77e524-21e6-11e5-b398-5254006e85c2&lt;/a&gt;&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sessions/0a6988f4-2585-11e5-a6b1-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sessions/0a6988f4-2585-11e5-a6b1-5254006e85c2&lt;/a&gt;&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sessions/72ee971e-2627-11e5-8b33-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sessions/72ee971e-2627-11e5-8b33-5254006e85c2&lt;/a&gt;&lt;br/&gt;
Is there any ongoing activity on this issue?&lt;/p&gt;</comment>
                            <comment id="121176" author="gerrit" created="Mon, 13 Jul 2015 18:02:27 +0000"  >&lt;p&gt;James Nunez (james.a.nunez@intel.com) uploaded a new patch: &lt;a href=&quot;http://review.whamcloud.com/15590&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/15590&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-5638&quot; title=&quot;sanity-quota test_33 for ZFS-based backend: Used inodes for user 60000 isn&amp;#39;t 0. 1&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-5638&quot;&gt;&lt;del&gt;LU-5638&lt;/del&gt;&lt;/a&gt; tests: Skip sanity-quota 11 and 33 for ZFS&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: ff39144cf538fc5cac88b1ad61d16640b7854b09&lt;/p&gt;</comment>
                            <comment id="121178" author="jamesanunez" created="Mon, 13 Jul 2015 18:07:05 +0000"  >&lt;p&gt;Temporarily skipping sanity-quota tests 11 and 33 for review-zfs-part-* until the patch for &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-2435&quot; title=&quot;inode accounting in osd-zfs is racy&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-2435&quot;&gt;&lt;del&gt;LU-2435&lt;/del&gt;&lt;/a&gt; lands.&lt;/p&gt;

&lt;p&gt;Patch at &lt;a href=&quot;http://review.whamcloud.com/#/c/15590&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/#/c/15590&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="121531" author="bogl" created="Fri, 17 Jul 2015 14:14:10 +0000"  >&lt;p&gt;another on master:&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/e8dfb4f0-2c15-11e5-8c67-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/e8dfb4f0-2c15-11e5-8c67-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="121597" author="gerrit" created="Sat, 18 Jul 2015 01:25:26 +0000"  >&lt;p&gt;Oleg Drokin (oleg.drokin@intel.com) merged in patch &lt;a href=&quot;http://review.whamcloud.com/15590/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/15590/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-5638&quot; title=&quot;sanity-quota test_33 for ZFS-based backend: Used inodes for user 60000 isn&amp;#39;t 0. 1&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-5638&quot;&gt;&lt;del&gt;LU-5638&lt;/del&gt;&lt;/a&gt; tests: Skip sanity-quota tests for ZFS&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: 87a28e4280e08947d852a317f49be174cc6f4cb6&lt;/p&gt;</comment>
                            <comment id="121603" author="pjones" created="Sat, 18 Jul 2015 13:56:48 +0000"  >&lt;p&gt;Landed for 2.8&lt;/p&gt;</comment>
                            <comment id="121612" author="jamesanunez" created="Sun, 19 Jul 2015 03:20:51 +0000"  >&lt;p&gt;This issue is not resolved. Only a patch to skip the tests was landed. The original problem causing sanity-quota 11, 33, 34, and 35 still exists.&lt;/p&gt;</comment>
                            <comment id="193060" author="adilger" created="Fri, 21 Apr 2017 17:43:35 +0000"  >&lt;p&gt;There is a belief that this was caused by slow ZFS metadata performance, which has been improved in Lustre 2.9.  It would be worthwhile to retest these skipped tests (with ZFS of course) to see if they now pass reliably.&lt;/p&gt;</comment>
                            <comment id="197884" author="bogl" created="Fri, 2 Jun 2017 12:51:31 +0000"  >&lt;p&gt;Being seen in non-ZFS tests too. Example:&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/834ddd66-472e-11e7-b3fe-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/834ddd66-472e-11e7-b3fe-5254006e85c2&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I note that test 33 is skipped with ALWAYS_EXCEPT for test runs on ZFS. Maybe it needs to be skipped all the time, on every backend.&lt;/p&gt;</comment>
                            <comment id="198076" author="bogl" created="Sat, 3 Jun 2017 19:59:11 +0000"  >&lt;p&gt;another on master:&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/1637dff4-4839-11e7-bc6c-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/1637dff4-4839-11e7-bc6c-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="198089" author="adilger" created="Sun, 4 Jun 2017 14:45:11 +0000"  >&lt;p&gt;I don&apos;t think skipping the test is the right way forward, except as a short-term workaround. Instead, someone needs to take the time to figure out what file is being left behind with this UID. &lt;/p&gt;</comment>
                            <comment id="198098" author="niu" created="Mon, 5 Jun 2017 01:52:48 +0000"  >&lt;p&gt;I think the old issue should have been fixed once &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-2435&quot; title=&quot;inode accounting in osd-zfs is racy&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-2435&quot;&gt;&lt;del&gt;LU-2435&lt;/del&gt;&lt;/a&gt; landed; we can re-enable test_33 for ZFS testing now. I&apos;ll cook up a patch to re-enable it.&lt;/p&gt;

&lt;p&gt;The new occurrences on ldiskfs are another issue; I believe it&apos;s a defect in project quota:&lt;/p&gt;
&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;sanity-quota test_33: @@@@@@ FAIL: Used space &lt;span class=&quot;code-keyword&quot;&gt;for&lt;/span&gt; project 1000:18432, expected:20480
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;I think we should open a new ticket for it.&lt;/p&gt;</comment>
                            <comment id="198099" author="niu" created="Mon, 5 Jun 2017 02:03:12 +0000"  >&lt;p&gt;The new issue is created at &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-9597&quot; title=&quot;sanity-quota test_33: &amp;#39;Used space for project 1000:18432, expected:20480&amp;#39;&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-9597&quot;&gt;&lt;del&gt;LU-9597&lt;/del&gt;&lt;/a&gt;.&lt;/p&gt;</comment>
                            <comment id="198101" author="gerrit" created="Mon, 5 Jun 2017 02:12:01 +0000"  >&lt;p&gt;Niu Yawei (yawei.niu@intel.com) uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/27423&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/27423&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-5638&quot; title=&quot;sanity-quota test_33 for ZFS-based backend: Used inodes for user 60000 isn&amp;#39;t 0. 1&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-5638&quot;&gt;&lt;del&gt;LU-5638&lt;/del&gt;&lt;/a&gt; tests: re-enable zfs quota tests&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: ec72f9a800214354e537b3f95c83c1ea509fa178&lt;/p&gt;</comment>
                            <comment id="203390" author="jamesanunez" created="Mon, 24 Jul 2017 17:34:35 +0000"  >&lt;p&gt;It looks like sanity-quota test 33 is still failing with ZFS servers. &lt;/p&gt;

&lt;p&gt;Logs for two recent failures are at:&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/506f3d2e-480d-11e7-91f4-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/506f3d2e-480d-11e7-91f4-5254006e85c2&lt;/a&gt;&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/ac775cfe-4a84-11e7-91f4-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/ac775cfe-4a84-11e7-91f4-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="204921" author="dilipkrx" created="Wed, 9 Aug 2017 16:51:34 +0000"  >&lt;p&gt;sanity-quota test 33 is failing. Maloo link with the needed information:&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/8442a52c-7bad-11e7-a168-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/8442a52c-7bad-11e7-a168-5254006e85c2&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Error: &apos;Used inode for user 60000 is 1, expected 10&apos; &lt;br/&gt;
Wait for setattr on objects finished...&lt;br/&gt;
sleep 5 for ZFS OSD&lt;br/&gt;
Waiting for local destroys to complete&lt;br/&gt;
CMD: onyx-45vm7,onyx-45vm8 lctl set_param -n osd*.*MDT*.force_sync=1&lt;br/&gt;
CMD: onyx-45vm10 lctl set_param -n osd*.*OS*.force_sync=1&lt;br/&gt;
Verify disk usage after write&lt;br/&gt;
Verify inode usage after write&lt;br/&gt;
 sanity-quota test_33: @@@@@@ FAIL: Used inode for user 60000 is 1, expected 10 &lt;br/&gt;
  Trace dump:&lt;br/&gt;
  = /usr/lib64/lustre/tests/test-framework.sh:5291:error()&lt;br/&gt;
  = /usr/lib64/lustre/tests/sanity-quota.sh:2423:test_33()&lt;br/&gt;
  = /usr/lib64/lustre/tests/test-framework.sh:5567:run_one()&lt;br/&gt;
  = /usr/lib64/lustre/tests/test-framework.sh:5606:run_one_logged()&lt;br/&gt;
  = /usr/lib64/lustre/tests/test-framework.sh:5453:run_test()&lt;br/&gt;
  = /usr/lib64/lustre/tests/sanity-quota.sh:2450:main()&lt;br/&gt;
Dumping lctl log to /test_logs/2017-08-07/lustre-reviews-el7-x86_64-custom-1_101_1_49353_-70097615897520-184409/sanity-quota.test_33.*.1502136383.log&lt;br/&gt;
CMD: onyx-45vm10,onyx-45vm1.onyx.hpdd.intel.com,onyx-45vm2,onyx-45vm7,onyx-45vm8 /usr/sbin/lctl dk &amp;gt; /test_logs/2017-08-07/lustre-reviews-el7-x86_64-custom-1_101_1_49353_-70097615897520-184409/sanity-quota.test_33.debug_log.\$(hostname -s).1502136383.log;&lt;br/&gt;
         dmesg &amp;gt; /test_logs/2017-08-07/lustre-reviews-el7-x86_64-custom-1_101_1_49353_-70097615897520-184409/sanity-quota.test_33.dmesg.\$(hostname -s).1502136383.log&lt;br/&gt;
Resetting fail_loc on all nodes...CMD: onyx-45vm10,onyx-45vm1.onyx.hpdd.intel.com,onyx-45vm2,onyx-45vm7,onyx-45vm8 lctl set_param -n fail_loc=0 	    fail_val=0 2&amp;gt;/dev/null&lt;br/&gt;
done.&lt;/p&gt;</comment>
                            <comment id="204933" author="pjones" created="Wed, 9 Aug 2017 18:22:48 +0000"  >&lt;p&gt;Hongchao&lt;/p&gt;

&lt;p&gt;Could you please advise on this one?&lt;/p&gt;

&lt;p&gt;Thanks&lt;/p&gt;

&lt;p&gt;Peter&lt;/p&gt;</comment>
                            <comment id="224513" author="hongchao.zhang" created="Mon, 26 Mar 2018 08:21:12 +0000"  >&lt;p&gt;There is no abnormal information in the logs, and it could still be related to ZFS performance.&lt;br/&gt;
This issue has not occurred since Dec 19, 2017.&lt;/p&gt;</comment>
                            <comment id="227953" author="adilger" created="Wed, 16 May 2018 07:44:32 +0000"  >&lt;p&gt;It appears that this was &quot;fixed&quot; by the landing of &lt;a href=&quot;https://review.whamcloud.com/27093&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/27093&lt;/a&gt; which changed the detection of ZFS project quotas but broke detection of ZFS dnode accounting.  That patch landed to b2_10 on Dec 20, 2017 (master landing on Nov 9, 2017).&lt;/p&gt;</comment>
                            <comment id="229414" author="gerrit" created="Mon, 11 Jun 2018 16:07:24 +0000"  >&lt;p&gt;James Nunez (james.a.nunez@intel.com) uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/32694&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/32694&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-5638&quot; title=&quot;sanity-quota test_33 for ZFS-based backend: Used inodes for user 60000 isn&amp;#39;t 0. 1&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-5638&quot;&gt;&lt;del&gt;LU-5638&lt;/del&gt;&lt;/a&gt; tests: resume running sanity-quota tests&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 78fee01723aece184b4e328335bab5e120667583&lt;/p&gt;</comment>
                            <comment id="230822" author="gerrit" created="Tue, 24 Jul 2018 15:59:27 +0000"  >&lt;p&gt;Oleg Drokin (green@whamcloud.com) merged in patch &lt;a href=&quot;https://review.whamcloud.com/32694/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/32694/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-5638&quot; title=&quot;sanity-quota test_33 for ZFS-based backend: Used inodes for user 60000 isn&amp;#39;t 0. 1&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-5638&quot;&gt;&lt;del&gt;LU-5638&lt;/del&gt;&lt;/a&gt; tests: resume running sanity-quota tests&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: 2fa651d7c9bbeb2ce87e149f25d753c7b66640ab&lt;/p&gt;</comment>
                            <comment id="231343" author="jamesanunez" created="Thu, 2 Aug 2018 20:17:28 +0000"  >&lt;p&gt;Patch landed to remove sanity-quota 33 from the ALWAYS_EXCEPT list for 2.11.54.&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                            <outwardlinks description="is related to ">
                                        <issuelink>
            <issuekey id="52074">LU-11024</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="14304">LU-2435</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="35896">LU-7991</issuekey>
        </issuelink>
                            </outwardlinks>
                                                                <inwardlinks description="is related to">
                                        <issuelink>
            <issuekey id="46437">LU-9592</issuekey>
        </issuelink>
                            </inwardlinks>
                                    </issuelinktype>
                    </issuelinks>
                <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzwwlr:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>15788</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>