<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 03:30:59 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
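For example (illustrative URL, assuming the standard JIRA XML issue view path for this issue):
https://jira.whamcloud.com/si/jira.issueviews:issue-xml/LU-16905/LU-16905.xml?field=key&field=summary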
-->
<rss version="0.92">
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-16905] sanity-quota/18 Failure (possibly due to incorrect timeout)</title>
                <link>https://jira.whamcloud.com/browse/LU-16905</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>
&lt;p&gt;== sanity-quota test 18: MDS failover while writing, no watchdog triggered (b14840) ========================================================== 08:41:17 (1686832877)&lt;br/&gt;
sleep 5 for ZFS zfs&lt;br/&gt;
Waiting for MDT destroys to complete&lt;br/&gt;
Creating test directory&lt;br/&gt;
fail_val=0&lt;br/&gt;
fail_loc=0&lt;br/&gt;
Waiting 90s for &apos;u&apos;&lt;br/&gt;
Updated after 2s: want &apos;u&apos; got &apos;u&apos;&lt;br/&gt;
User quota (limit: 200)&lt;br/&gt;
Disk quotas for usr quota_usr (uid 60000):&lt;br/&gt;
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace&lt;br/&gt;
    /mnt/lustre       0       0  204800       -       0       0       0       -&lt;br/&gt;
lustre-MDT0000_UUID&lt;br/&gt;
                      0       -       0       -       0       -       0       -&lt;br/&gt;
lustre-OST0000_UUID&lt;br/&gt;
                      0       -       0       -       -       -       -       -&lt;br/&gt;
lustre-OST0001_UUID&lt;br/&gt;
                      0       -       0       -       -       -       -       -&lt;br/&gt;
Total allocated inode limit: 0, total allocated block limit: 0&lt;br/&gt;
sysctl: cannot stat /proc/sys/lustre/timeout: No such file or directory&lt;br/&gt;
Write 100M (buffered) ...&lt;br/&gt;
running as uid/gid/euid/egid 60000/60000/60000/60000, groups:&lt;br/&gt;
 [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d18.sanity-quota/f18.sanity-quota] [count=100]&lt;br/&gt;
UUID                   1K-blocks        Used   Available Use% Mounted on&lt;br/&gt;
lustre-MDT0000_UUID      2210688        4096     2204544   1% /mnt/lustre[MDT:0]&lt;br/&gt;
lustre-OST0000_UUID      3771392        3072     3748864   1% /mnt/lustre[OST:0]&lt;br/&gt;
lustre-OST0001_UUID      3771392        3072     3766272   1% /mnt/lustre[OST:1]&lt;/p&gt;

&lt;p&gt;filesystem_summary:      7542784        6144     7515136   1% /mnt/lustre&lt;/p&gt;

&lt;p&gt;Fail mds for 0 seconds&lt;br/&gt;
Failing mds1 on oleg365-server&lt;br/&gt;
Stopping /mnt/lustre-mds1 (opts:) on oleg365-server&lt;br/&gt;
08:41:31 (1686832891) shut down&lt;br/&gt;
Failover mds1 to oleg365-server&lt;br/&gt;
mount facets: mds1&lt;br/&gt;
Starting mds1: -o localrecov  lustre-mdt1/mdt1 /mnt/lustre-mds1&lt;br/&gt;
oleg365-server: oleg365-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8&lt;br/&gt;
pdsh@oleg365-client: oleg365-server: ssh exited with exit code 1&lt;br/&gt;
Started lustre-MDT0000&lt;br/&gt;
08:41:48 (1686832908) targets are mounted&lt;br/&gt;
08:41:48 (1686832908) facet_failover done&lt;br/&gt;
oleg365-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid&lt;br/&gt;
mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec&lt;br/&gt;
100+0 records in&lt;br/&gt;
100+0 records out&lt;br/&gt;
104857600 bytes (105 MB) copied, 48.3932 s, 2.2 MB/s&lt;br/&gt;
(dd_pid=1833, time=25, timeout=600)&lt;br/&gt;
Disk quotas for usr quota_usr (uid 60000):&lt;br/&gt;
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace&lt;br/&gt;
    /mnt/lustre   98310       0  204800       -       1       0       0       -&lt;br/&gt;
lustre-MDT0000_UUID&lt;br/&gt;
                      2*      -       2       -       1       -       0       -&lt;br/&gt;
lustre-OST0000_UUID&lt;br/&gt;
                  98309       -  114688       -       -       -       -       -&lt;br/&gt;
lustre-OST0001_UUID&lt;br/&gt;
                      0       -       0       -       -       -       -       -&lt;br/&gt;
Total allocated inode limit: 0, total allocated block limit: 114688&lt;br/&gt;
Delete files...&lt;br/&gt;
Wait for unlink objects finished...&lt;br/&gt;
sleep 5 for ZFS zfs&lt;br/&gt;
sleep 5 for ZFS zfs&lt;br/&gt;
Waiting for MDT destroys to complete&lt;br/&gt;
sleep 5 for ZFS zfs&lt;br/&gt;
Waiting for MDT destroys to complete&lt;br/&gt;
Creating test directory&lt;br/&gt;
fail_val=0&lt;br/&gt;
fail_loc=0&lt;br/&gt;
User quota (limit: 200)&lt;br/&gt;
Disk quotas for usr quota_usr (uid 60000):&lt;br/&gt;
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace&lt;br/&gt;
    /mnt/lustre       0       0  204800       -       0       0       0       -&lt;br/&gt;
lustre-MDT0000_UUID&lt;br/&gt;
                      0       -       0       -       0       -       0       -&lt;br/&gt;
lustre-OST0000_UUID&lt;br/&gt;
                      0       -       0       -       -       -       -       -&lt;br/&gt;
lustre-OST0001_UUID&lt;br/&gt;
                      0       -       0       -       -       -       -       -&lt;br/&gt;
Total allocated inode limit: 0, total allocated block limit: 0&lt;br/&gt;
sysctl: cannot stat /proc/sys/lustre/timeout: No such file or directory&lt;br/&gt;
Write 100M (directio) ...&lt;br/&gt;
running as uid/gid/euid/egid 60000/60000/60000/60000, groups:&lt;br/&gt;
 [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d18.sanity-quota/f18.sanity-quota] [count=100] [oflag=direct]&lt;br/&gt;
UUID                   1K-blocks        Used   Available Use% Mounted on&lt;br/&gt;
lustre-MDT0000_UUID      2210560        3840     2204672   1% /mnt/lustre[MDT:0]&lt;br/&gt;
lustre-OST0000_UUID      3771392        3072     3758080   1% /mnt/lustre[OST:0]&lt;br/&gt;
lustre-OST0001_UUID      3771392        3072     3766272   1% /mnt/lustre[OST:1]&lt;/p&gt;

&lt;p&gt;filesystem_summary:      7542784        6144     7524352   1% /mnt/lustre&lt;/p&gt;

&lt;p&gt;Fail mds for 0 seconds&lt;br/&gt;
Failing mds1 on oleg365-server&lt;br/&gt;
Stopping /mnt/lustre-mds1 (opts:) on oleg365-server&lt;br/&gt;
08:42:46 (1686832966) shut down&lt;br/&gt;
Failover mds1 to oleg365-server&lt;br/&gt;
mount facets: mds1&lt;br/&gt;
Starting mds1: -o localrecov  lustre-mdt1/mdt1 /mnt/lustre-mds1&lt;br/&gt;
oleg365-server: oleg365-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8&lt;br/&gt;
pdsh@oleg365-client: oleg365-server: ssh exited with exit code 1&lt;br/&gt;
Started lustre-MDT0000&lt;br/&gt;
08:43:02 (1686832982) targets are mounted&lt;br/&gt;
08:43:02 (1686832982) facet_failover done&lt;br/&gt;
oleg365-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid&lt;br/&gt;
mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec&lt;br/&gt;
100+0 records in&lt;br/&gt;
100+0 records out&lt;br/&gt;
104857600 bytes (105 MB) copied, 52.5656 s, 2.0 MB/s&lt;br/&gt;
(dd_pid=4187, time=30, timeout=600)&lt;br/&gt;
Disk quotas for usr quota_usr (uid 60000):&lt;br/&gt;
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace&lt;br/&gt;
    /mnt/lustre  102407       0  204800       -       1       0       0       -&lt;br/&gt;
lustre-MDT0000_UUID&lt;br/&gt;
                      2*      -       2       -       1       -       0       -&lt;br/&gt;
lustre-OST0000_UUID&lt;br/&gt;
                 102406       -  107525       -       -       -       -       -&lt;br/&gt;
lustre-OST0001_UUID&lt;br/&gt;
                      0       -       0       -       -       -       -       -&lt;br/&gt;
Total allocated inode limit: 0, total allocated block limit: 107525&lt;br/&gt;
Delete files...&lt;br/&gt;
Wait for unlink objects finished...&lt;br/&gt;
sleep 5 for ZFS zfs&lt;br/&gt;
sleep 5 for ZFS zfs&lt;br/&gt;
Waiting for MDT destroys to complete&lt;br/&gt;
 sanity-quota test_18: @@@@@@ FAIL: [ 2836.180747] Lustre: ll_ost_io00_004: service thread pid 27906 was inactive for 40.067 seconds. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes: &lt;br/&gt;
  Trace dump:&lt;br/&gt;
  = /home/green/git/lustre-release/lustre/tests/test-framework.sh:6566:error()&lt;br/&gt;
  = /home/green/git/lustre-release/lustre/tests/sanity-quota.sh:2945:test_18()&lt;br/&gt;
  = /home/green/git/lustre-release/lustre/tests/test-framework.sh:6906:run_one()&lt;br/&gt;
  = /home/green/git/lustre-release/lustre/tests/test-framework.sh:6955:run_one_logged()&lt;br/&gt;
  = /home/green/git/lustre-release/lustre/tests/test-framework.sh:6792:run_test()&lt;br/&gt;
  = /home/green/git/lustre-release/lustre/tests/sanity-quota.sh:2948:main()&lt;br/&gt;
Dumping lctl log to /tmp/testlogs//sanity-quota.test_18.*.1686833039.log&lt;br/&gt;
Delete files...&lt;br/&gt;
Wait for unlink objects finished...&lt;br/&gt;
rsync: chown &quot;/tmp/testlogs/.sanity-quota.test_18.debug_log.oleg365-server.1686833039.log.4knRXN&quot; failed: Operation not permitted (1)&lt;br/&gt;
rsync: chown &quot;/tmp/testlogs/.sanity-quota.test_18.dmesg.oleg365-server.1686833039.log.Nvxt9o&quot; failed: Operation not permitted (1)&lt;br/&gt;
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1651) [generator=3.1.2]&lt;br/&gt;
sleep 5 for ZFS zfs&lt;br/&gt;
Waiting for MDT destroys to complete&lt;br/&gt;
Delete files...&lt;br/&gt;
Wait for unlink objects finished...&lt;br/&gt;
sleep 5 for ZFS zfs&lt;br/&gt;
Waiting for MDT destroys to complete&lt;/p&gt;</description>
                <environment></environment>
        <key id="76592">LU-16905</key>
            <summary>sanity-quota/18 Failure (possibly due to incorrect timeout)</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="4" iconUrl="https://jira.whamcloud.com/images/icons/priorities/minor.svg">Minor</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="arshad512">Arshad Hussain</assignee>
                                    <reporter username="arshad512">Arshad Hussain</reporter>
                        <labels>
                    </labels>
                <created>Fri, 16 Jun 2023 03:36:22 +0000</created>
                <updated>Wed, 28 Jun 2023 23:19:44 +0000</updated>
                            <resolved>Wed, 28 Jun 2023 23:19:44 +0000</resolved>
                                                    <fixVersion>Lustre 2.16.0</fixVersion>
                                        <due></due>
                            <votes>0</votes>
                                    <watches>3</watches>
                                                                            <comments>
                            <comment id="375627" author="arshad512" created="Fri, 16 Jun 2023 03:39:08 +0000"  >&lt;p&gt;form logs... &lt;a href=&quot;https://testing.whamcloud.com/gerrit-janitor/32099/testresults/sanity-quota-zfs-centos7_x86_64-centos7_x86_64/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.whamcloud.com/gerrit-janitor/32099/testresults/sanity-quota-zfs-centos7_x86_64-centos7_x86_64/&lt;/a&gt;&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;timeout=$(sysctl -n lustre.timeout)
sysctl: cannot stat /proc/sys/lustre/timeout: No such file or directory
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;Possible fix: read the value directly from /sys/fs/lustre/timeout instead of going through sysctl; a sketch follows.&lt;/p&gt;
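&lt;p&gt;A minimal shell sketch of that direction, assuming the sysfs tunable exists on the test node (the landed patch may implement this differently):&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;# sketch only: prefer the sysfs file, fall back to sysctl where it still works
if [ -r /sys/fs/lustre/timeout ]; then
	timeout=$(cat /sys/fs/lustre/timeout)
else
	timeout=$(sysctl -n lustre.timeout)
fi
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>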
                            <comment id="375630" author="gerrit" created="Fri, 16 Jun 2023 04:24:59 +0000"  >&lt;p&gt;&quot;Arshad Hussain &amp;lt;arshad.hussain@aeoncomputing.com&amp;gt;&quot; uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/c/fs/lustre-release/+/51337&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/c/fs/lustre-release/+/51337&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-16905&quot; title=&quot;sanity-quota/18 Failure (possibly due to incorrect timeout)&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-16905&quot;&gt;&lt;del&gt;LU-16905&lt;/del&gt;&lt;/a&gt; tests: Fix &apos;timeout&apos; value under sanity-quota/18&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 0c07281fc89d6149e03f57b8cd2a35dcf62d392e&lt;/p&gt;</comment>
                            <comment id="375692" author="adilger" created="Fri, 16 Jun 2023 16:16:18 +0000"  >&lt;p&gt;Thanks for the patch. &lt;/p&gt;</comment>
                            <comment id="376809" author="gerrit" created="Wed, 28 Jun 2023 21:49:34 +0000"  >&lt;p&gt;&quot;Oleg Drokin &amp;lt;green@whamcloud.com&amp;gt;&quot; merged in patch &lt;a href=&quot;https://review.whamcloud.com/c/fs/lustre-release/+/51337/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/c/fs/lustre-release/+/51337/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-16905&quot; title=&quot;sanity-quota/18 Failure (possibly due to incorrect timeout)&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-16905&quot;&gt;&lt;del&gt;LU-16905&lt;/del&gt;&lt;/a&gt; tests: Fix &apos;timeout&apos; value under sanity-quota/18&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: b7fe927b0dd5b895bd1df607d75aa6069577e454&lt;/p&gt;</comment>
                            <comment id="376835" author="pjones" created="Wed, 28 Jun 2023 23:19:44 +0000"  >&lt;p&gt;Landed for 2.16&lt;/p&gt;</comment>
                    </comments>
                    <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|i03o6v:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                </customfields>
    </item>
</channel>
</rss>