<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 01:36:36 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary, append 'field=key&field=summary' to the URL of your request.
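
As an illustrative sketch only: the request URL with restricted fields can be built as below. The '/si/jira.issueviews:issue-xml/...' path is an assumed conventional JIRA issue-XML view location, not something stated in this export; verify it against your instance.

```shell
# Build a request URL that asks for only the issue key and summary.
# NOTE: the issue-XML view path is a hypothetical example based on JIRA's
# conventional URL layout; confirm it on your own JIRA instance.
BASE="https://jira.whamcloud.com/si/jira.issueviews:issue-xml/LU-3752/LU-3752.xml"
echo "${BASE}?field=key&field=summary"
```

The resulting URL can then be fetched with any HTTP client.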
-->
<rss version="0.92">
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-3752] sanity-quota test_18: expect 104857600, got 42991616. Verifying file failed!</title>
                <link>https://jira.whamcloud.com/browse/LU-3752</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;This issue was created by maloo for sarah &amp;lt;sarah@whamcloud.com&amp;gt;&lt;/p&gt;

&lt;p&gt;This issue relates to the following test suite run: &lt;a href=&quot;http://maloo.whamcloud.com/test_sets/73a518da-029e-11e3-b384-52540035b04c&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://maloo.whamcloud.com/test_sets/73a518da-029e-11e3-b384-52540035b04c&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The sub-test test_18 failed with the following error:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;expect 104857600, got 42991616. Verifying file failed!&lt;/p&gt;&lt;/blockquote&gt;

&lt;p&gt;Info required for matching: sanity-quota 18&lt;/p&gt;</description>
                <environment></environment>
        <key id="20336">LU-3752</key>
            <summary>sanity-quota test_18: expect 104857600, got 42991616. Verifying file failed!</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="4" iconUrl="https://jira.whamcloud.com/images/icons/priorities/minor.svg">Minor</priority>
                        <status id="1" iconUrl="https://jira.whamcloud.com/images/icons/statuses/open.png" description="The issue is open and ready for the assignee to start work on it.">Open</status>
                    <statusCategory id="2" key="new" colorName="default"/>
                                    <resolution id="-1">Unresolved</resolution>
                                        <assignee username="green">Oleg Drokin</assignee>
                                    <reporter username="maloo">Maloo</reporter>
                        <labels>
                            <label>yuc2</label>
                    </labels>
                <created>Tue, 13 Aug 2013 21:13:45 +0000</created>
                <updated>Mon, 17 Jul 2017 19:26:01 +0000</updated>
                                            <version>Lustre 2.4.1</version>
                    <version>Lustre 2.5.0</version>
                    <version>Lustre 2.6.0</version>
                    <version>Lustre 2.5.1</version>
                    <version>Lustre 2.8.0</version>
                                                        <due></due>
                            <votes>0</votes>
                                    <watches>7</watches>
                                                                            <comments>
                            <comment id="64222" author="yujian" created="Wed, 14 Aug 2013 04:44:56 +0000"  >&lt;p&gt;I just found that this is a regression introduced by the patch in build #28 on the Lustre b2_4 branch.&lt;/p&gt;

&lt;p&gt;Before build #28, sanity-quota test 18 always passed on the Lustre b2_4 branch. Since build #28, there have been 6 full test runs on build #29 against the RHEL6 and SLES11SP2 clients; 2 of those runs hit the sanity-quota test 18 failure:&lt;/p&gt;

&lt;p&gt;Failed test runs:&lt;br/&gt;
&lt;a href=&quot;https://maloo.whamcloud.com/test_sets/f6c41656-0421-11e3-90ba-52540035b04c&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://maloo.whamcloud.com/test_sets/f6c41656-0421-11e3-90ba-52540035b04c&lt;/a&gt; (RHEL6)&lt;br/&gt;
&lt;a href=&quot;https://maloo.whamcloud.com/test_sets/1f547576-0282-11e3-a4b4-52540035b04c&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://maloo.whamcloud.com/test_sets/1f547576-0282-11e3-a4b4-52540035b04c&lt;/a&gt; (RHEL6)&lt;/p&gt;

&lt;p&gt;Passed test runs:&lt;br/&gt;
&lt;a href=&quot;https://maloo.whamcloud.com/test_sets/8a5797c0-0248-11e3-a4b4-52540035b04c&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://maloo.whamcloud.com/test_sets/8a5797c0-0248-11e3-a4b4-52540035b04c&lt;/a&gt; (RHEL6)&lt;br/&gt;
&lt;a href=&quot;https://maloo.whamcloud.com/test_sets/31d735f2-02b0-11e3-a4b4-52540035b04c&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://maloo.whamcloud.com/test_sets/31d735f2-02b0-11e3-a4b4-52540035b04c&lt;/a&gt; (SLES11SP2)&lt;br/&gt;
&lt;a href=&quot;https://maloo.whamcloud.com/test_sets/b109a6b8-0259-11e3-b384-52540035b04c&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://maloo.whamcloud.com/test_sets/b109a6b8-0259-11e3-b384-52540035b04c&lt;/a&gt; (RHEL6)&lt;br/&gt;
&lt;a href=&quot;https://maloo.whamcloud.com/test_sets/e4b07626-039f-11e3-9824-52540035b04c&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://maloo.whamcloud.com/test_sets/e4b07626-039f-11e3-9824-52540035b04c&lt;/a&gt; (RHEL6)&lt;/p&gt;

&lt;p&gt;On the master branch, the test passed on build #1582 and failed on build #1591. The builds between them were not tested. Comparing the patches in these builds with those in b2_4 build #28, the following ones are in the intersection:&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://jira.whamcloud.com/browse/LU-3643&quot; title=&quot;hsm_restore caused OSS node crash&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-3643&quot;&gt;&lt;del&gt;LU-3643&lt;/del&gt;&lt;/a&gt; ofd: get data version only if file exists&lt;br/&gt;
&lt;a href=&quot;https://jira.whamcloud.com/browse/LU-3585&quot; title=&quot;Client panic during IOR single file per process:  Lnet out of Memory&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-3585&quot;&gt;&lt;del&gt;LU-3585&lt;/del&gt;&lt;/a&gt; ptlrpc: Fix a crash when dereferencing NULL pointer &lt;br/&gt;
&lt;a href=&quot;https://jira.whamcloud.com/browse/LU-3636&quot; title=&quot;HSM restore failed at copy end&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-3636&quot;&gt;&lt;del&gt;LU-3636&lt;/del&gt;&lt;/a&gt; llapi: llapi_hsm_copy_end() on correct FID on restore.&lt;/p&gt;</comment>
                            <comment id="64249" author="pjones" created="Wed, 14 Aug 2013 15:10:08 +0000"  >&lt;p&gt;Oleg&lt;/p&gt;

&lt;p&gt;Can you please try to further identify the cause of this regression?&lt;/p&gt;

&lt;p&gt;Thanks&lt;/p&gt;

&lt;p&gt;Peter&lt;/p&gt;</comment>
                            <comment id="64283" author="green" created="Wed, 14 Aug 2013 22:22:26 +0000"  >&lt;p&gt;A very suspicious common pattern is observed in those test results.&lt;/p&gt;

&lt;p&gt;All successful test runs have:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;running as uid/gid/euid/egid 60000/60000/60000/60000, groups:
 [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d0.sanity-quota/d18/f.sanity-quota.18] [count=100] [oflag=direct]
CMD: client-20-ib sync; sync; sync
Filesystem           1K-blocks      Used Available Use% Mounted on
client-20-ib@o2ib:/lustre
                      14222720     13440  14150272   1% /mnt/lustre
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;All failing runs have:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;Write 100M (directio) ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups:
 [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d0.sanity-quota/d18/f.sanity-quota.18] [count=100] [oflag=direct]
CMD: client-26vm7 sync; sync; sync
Filesystem               1K-blocks   Used Available Use% Mounted on
client-26vm7@tcp:/lustre   1464484 264460   1118928  20% /mnt/lustre
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;So, my question is: why do the newer test runs (the failing ones) have 10x less disk space? I bet this is why the test is now dying with an out-of-space error: since striping is also not used, with the previously present files there&apos;s just not enough space in the new scheme of things.&lt;/p&gt;</comment>
                            <comment id="64404" author="yujian" created="Fri, 16 Aug 2013 15:23:27 +0000"  >&lt;p&gt;For failed test runs:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;MDSSIZE=1939865
OSTSIZE=223196
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;For passed test runs:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;MDSSIZE=2097152
OSTSIZE=2097152
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The real failure was:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;dd: writing `/mnt/lustre/d0.sanity-quota/d18/f.sanity-quota.18&apos;: No space left on device
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;We need to improve the test script to check available space.&lt;/p&gt;</comment>
                            <comment id="64419" author="bogl" created="Fri, 16 Aug 2013 18:38:40 +0000"  >&lt;p&gt;space check added&lt;br/&gt;
&lt;a href=&quot;http://review.whamcloud.com/7366&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/7366&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Still leaves open the question of why we&apos;re running out of space in the first place.&lt;/p&gt;</comment>
                            <comment id="72771" author="sarah" created="Wed, 4 Dec 2013 01:33:39 +0000"  >&lt;p&gt;Hit this issue in lustre-master build #1784.&lt;br/&gt;
The client is running SLES11 SP3.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://maloo.whamcloud.com/test_sets/67469a2c-5bbe-11e3-8d79-52540035b04c&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://maloo.whamcloud.com/test_sets/67469a2c-5bbe-11e3-8d79-52540035b04c&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="75174" author="yujian" created="Fri, 17 Jan 2014 10:56:07 +0000"  >&lt;p&gt;Lustre client build: &lt;a href=&quot;http://build.whamcloud.com/job/lustre-b2_4/70/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://build.whamcloud.com/job/lustre-b2_4/70/&lt;/a&gt; (2.4.2)&lt;br/&gt;
Lustre server build: &lt;a href=&quot;http://build.whamcloud.com/job/lustre-b2_5/13/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://build.whamcloud.com/job/lustre-b2_5/13/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The same failure occurred:&lt;br/&gt;
&lt;a href=&quot;https://maloo.whamcloud.com/test_sets/56bce390-7e7e-11e3-925a-52540035b04c&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://maloo.whamcloud.com/test_sets/56bce390-7e7e-11e3-925a-52540035b04c&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="79887" author="sarah" created="Thu, 20 Mar 2014 17:12:29 +0000"  >&lt;p&gt;Hit this failure in lustre-master tag 2.5.57 (build #1945) while testing ZFS:&lt;br/&gt;
&lt;a href=&quot;https://maloo.whamcloud.com/test_sessions/e29960fc-b031-11e3-9bc4-52540035b04c&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://maloo.whamcloud.com/test_sessions/e29960fc-b031-11e3-9bc4-52540035b04c&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the previous builds 1944 and 1943, this test passed:&lt;br/&gt;
&lt;a href=&quot;https://maloo.whamcloud.com/test_sessions/28690920-af2e-11e3-bac7-52540035b04c&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://maloo.whamcloud.com/test_sessions/28690920-af2e-11e3-bac7-52540035b04c&lt;/a&gt;&lt;br/&gt;
&lt;a href=&quot;https://maloo.whamcloud.com/test_sessions/1af5c146-ae6d-11e3-a4ae-52540035b04c&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://maloo.whamcloud.com/test_sessions/1af5c146-ae6d-11e3-a4ae-52540035b04c&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="139406" author="sarah" created="Wed, 20 Jan 2016 04:21:58 +0000"  >&lt;p&gt;Hit this on current master build #3305, RHEL6.7, ZFS:&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/91fc1ebc-bc84-11e5-b3b7-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/91fc1ebc-bc84-11e5-b3b7-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="141227" author="standan" created="Thu, 4 Feb 2016 19:03:10 +0000"  >&lt;p&gt;Encountered another instance for FULL - EL6.7 Server/EL6.7 Client - ZFS, master, build #3314:&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/9e6de21c-cb47-11e5-a59a-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/9e6de21c-cb47-11e5-a59a-5254006e85c2&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Another instance on master for FULL - EL7.1 Server/EL7.1 Client - ZFS, build #3314:&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/e109a106-cb88-11e5-b49e-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/e109a106-cb88-11e5-b49e-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="141895" author="standan" created="Wed, 10 Feb 2016 22:58:22 +0000"  >&lt;p&gt;Another instance found for FULL tag 2.7.66 - EL6.7 Server/EL6.7 Client - ZFS, build #3314:&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/9e6de21c-cb47-11e5-a59a-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/9e6de21c-cb47-11e5-a59a-5254006e85c2&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Another instance found for FULL tag 2.7.66 - EL7.1 Server/EL7.1 Client - ZFS, build #3314:&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/e109a106-cb88-11e5-b49e-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/e109a106-cb88-11e5-b49e-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                    </comments>
                    <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                    <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzvxwn:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9677</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                </customfields>
    </item>
</channel>
</rss>