<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 01:09:09 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92">
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
<language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
        <title>[LU-655] sanity test 27q hung</title>
        <link>https://jira.whamcloud.com/browse/LU-655</link>
        <project id="10000" key="LU">Lustre</project>
        <description>&lt;p&gt;While running sanity tests on a single test node, test 27q hung as follows:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;== sanity test 27q: append to truncated file with all OSTs full (should error) ===== 21:56:29 (1314852989)
fail_loc=0
/mnt/lustre/d0.sanity/d27/f27q has size 80000000 OK
OSTIDX=0 MDSIDX=1
osc.lustre-OST0000-osc-MDT0000.prealloc_last_id=65
osc.lustre-OST0000-osc-MDT0000.prealloc_next_id=65
osc.lustre-OST0001-osc-MDT0000.prealloc_last_id=513
osc.lustre-OST0001-osc-MDT0000.prealloc_next_id=292
fail_val=-1
fail_loc=0x215
Creating to objid 65 on ost lustre-OST0000...
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The console log showed:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;Lustre: DEBUG MARKER: == sanity test 27q: append to truncated file with all OSTs full (should error) ===== 21:56:29 (1314852989)
LustreError: 8011:0:(lov_request.c:569:lov_update_create_set()) error creating fid 0x2917 sub-object on OST idx 0/1: rc = -28
LustreError: 8011:0:(lov_request.c:569:lov_update_create_set()) Skipped 1 previous similar message
LustreError: 9113:0:(libcfs_fail.h:81:cfs_fail_check_set()) *** cfs_fail_loc=215 ***
LustreError: 4276:0:(libcfs_fail.h:81:cfs_fail_check_set()) *** cfs_fail_loc=215 ***
LustreError: 4276:0:(ldlm_lib.c:2128:target_send_reply_msg()) @@@ processing error (-28)  req@ffff88060eafe400 x1378722884187257/t0(0) o-1-&amp;gt;dda7060d-4cff-074b-9616-5a5558ff548b@NET_0x9000000000000_UUID:0/0 lens 456/0 e 0 to 0 dl 1314853018 ref 1 fl Interpret:/ffffffff/ffffffff rc 0/-1
LustreError: 11-0: an error occurred while communicating with 0@lo. The ost_write operation failed with -28
LustreError: 4151:0:(libcfs_fail.h:81:cfs_fail_check_set()) *** cfs_fail_loc=215 ***
LustreError: 4151:0:(libcfs_fail.h:81:cfs_fail_check_set()) Skipped 49013 previous similar messages
LustreError: 4151:0:(libcfs_fail.h:81:cfs_fail_check_set()) *** cfs_fail_loc=215 ***
LustreError: 4151:0:(libcfs_fail.h:81:cfs_fail_check_set()) Skipped 100123 previous similar messages
Lustre: Service thread pid 8011 was inactive for 62.00s. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes:
Pid: 8011, comm: mdt_04

Call Trace:
 [&amp;lt;ffffffffa03076fe&amp;gt;] cfs_waitq_wait+0xe/0x10 [libcfs]
 [&amp;lt;ffffffffa076f79e&amp;gt;] lov_create+0xbce/0x1580 [lov]
 [&amp;lt;ffffffff8105dc60&amp;gt;] ? default_wake_function+0x0/0x20
 [&amp;lt;ffffffffa09a64dd&amp;gt;] mdd_lov_create+0xadd/0x21f0 [mdd]
 [&amp;lt;ffffffffa09b79f2&amp;gt;] mdd_create+0xca2/0x1db0 [mdd]
 [&amp;lt;ffffffffa0995208&amp;gt;] ? mdd_version_get+0x68/0xa0 [mdd]
 [&amp;lt;ffffffffa0a792bc&amp;gt;] cml_create+0xbc/0x280 [cmm]
 [&amp;lt;ffffffffa0a1eb7a&amp;gt;] mdt_reint_open+0x1bca/0x2c80 [mdt]
 [&amp;lt;ffffffffa0a069df&amp;gt;] mdt_reint_rec+0x3f/0x100 [mdt]
 [&amp;lt;ffffffffa09fee84&amp;gt;] mdt_reint_internal+0x6d4/0x9f0 [mdt]
 [&amp;lt;ffffffffa09ff505&amp;gt;] mdt_intent_reint+0x245/0x600 [mdt]
 [&amp;lt;ffffffffa09f7410&amp;gt;] mdt_intent_policy+0x3c0/0x6b0 [mdt]
 [&amp;lt;ffffffffa0510afa&amp;gt;] ldlm_lock_enqueue+0x2da/0xa50 [ptlrpc]
 [&amp;lt;ffffffffa052f305&amp;gt;] ? ldlm_export_lock_get+0x15/0x20 [ptlrpc]
 [&amp;lt;ffffffffa03155e2&amp;gt;] ? cfs_hash_bd_add_locked+0x62/0x90 [libcfs]
 [&amp;lt;ffffffffa05371f7&amp;gt;] ldlm_handle_enqueue0+0x447/0x1090 [ptlrpc]
 [&amp;lt;ffffffffa09ea53f&amp;gt;] ? mdt_unpack_req_pack_rep+0xcf/0x5d0 [mdt]
 [&amp;lt;ffffffffa09f782a&amp;gt;] mdt_enqueue+0x4a/0x110 [mdt]
 [&amp;lt;ffffffffa09f2bb5&amp;gt;] mdt_handle_common+0x8d5/0x1810 [mdt]
 [&amp;lt;ffffffffa0555104&amp;gt;] ? lustre_msg_get_opc+0x94/0x100 [ptlrpc]
 [&amp;lt;ffffffffa09f3bc5&amp;gt;] mdt_regular_handle+0x15/0x20 [mdt]
 [&amp;lt;ffffffffa0565c7e&amp;gt;] ptlrpc_main+0xb8e/0x1900 [ptlrpc]
 [&amp;lt;ffffffffa05650f0&amp;gt;] ? ptlrpc_main+0x0/0x1900 [ptlrpc]
 [&amp;lt;ffffffff8100c1ca&amp;gt;] child_rip+0xa/0x20
 [&amp;lt;ffffffffa05650f0&amp;gt;] ? ptlrpc_main+0x0/0x1900 [ptlrpc]
 [&amp;lt;ffffffffa05650f0&amp;gt;] ? ptlrpc_main+0x0/0x1900 [ptlrpc]
 [&amp;lt;ffffffff8100c1c0&amp;gt;] ? child_rip+0x0/0x20

LustreError: dumping log to /tmp/lustre-log.1314853064.8011
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Maloo report: &lt;a href=&quot;https://maloo.whamcloud.com/test_sets/7ce0bc08-d45e-11e0-8d02-52540025f9af&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://maloo.whamcloud.com/test_sets/7ce0bc08-d45e-11e0-8d02-52540025f9af&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This can be reproduced easily by running &apos;REFORMAT=&quot;--reformat&quot; bash sanity.sh&apos; on a single node.&lt;/p&gt;
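&lt;p&gt;For context, fail_loc 0x215 appears to correspond to OBD_FAIL_OST_ENOSPC, the fail-injection point that makes the OST report ENOSPC (-28) as in the console log above. A minimal sketch of the pattern follows; the call site and the plain equality test are illustrative assumptions (the real cfs_fail_check_set() also handles masks and repeat counts):&lt;/p&gt;
&lt;pre&gt;
/* Minimal sketch of the fail_loc=0x215 injection pattern; the call site
 * and the simplified check below are assumptions for illustration. */
#include &amp;lt;errno.h&amp;gt;

#define OBD_FAIL_OST_ENOSPC 0x215

unsigned long cfs_fail_loc;   /* e.g. set with: lctl set_param fail_loc=0x215 */

static int ost_write_check(void)
{
        if (cfs_fail_loc == OBD_FAIL_OST_ENOSPC)
                return -ENOSPC;   /* -28, matching the ost_write failure above */
        return 0;
}
&lt;/pre&gt;</description>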
        <environment>
Lustre Branch: master&lt;br/&gt;
Lustre Build: &lt;a href=&quot;http://newbuild.whamcloud.com/job/lustre-master/275/&quot;&gt;http://newbuild.whamcloud.com/job/lustre-master/275/&lt;/a&gt;&lt;br/&gt;
Distro/Arch: RHEL6/x86_64&lt;br/&gt;
</environment>
        <key id="11638">LU-655</key>
            <summary>sanity test 27q hung</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="4" iconUrl="https://jira.whamcloud.com/images/icons/priorities/minor.svg">Minor</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="3">Duplicate</resolution>
                                        <assignee username="hongchao.zhang">Hongchao Zhang</assignee>
                                    <reporter username="yujian">Jian Yu</reporter>
                        <labels>
                    </labels>
                <created>Thu, 1 Sep 2011 02:16:55 +0000</created>
                <updated>Tue, 25 Oct 2011 12:01:16 +0000</updated>
                            <resolved>Tue, 25 Oct 2011 12:01:16 +0000</resolved>
                                    <version>Lustre 2.1.0</version>
                                                        <due></due>
                            <votes>0</votes>
                                    <watches>4</watches>
                                                                            <comments>
                            <comment id="19847" author="green" created="Thu, 1 Sep 2011 13:13:27 +0000"  >&lt;p&gt;This was introduced somewhere in the last 3 weeks.&lt;/p&gt;</comment>
                            <comment id="20400" author="jhammond" created="Wed, 21 Sep 2011 13:21:45 +0000"  >&lt;p&gt;filter_handle_precreate() is setting oa-&amp;gt;o_valid to OBD_MD_FLID|OBD_MD_FLGROUP, clobbering the setting of OBD_MD_FLFLAGS in filter_precreate().&lt;/p&gt;</comment>
                            <comment id="20506" author="shadow" created="Mon, 26 Sep 2011 05:43:28 +0000"  >&lt;p&gt;that is looks regression introduced by &lt;br/&gt;
commit 9f3f665577797660984bc1b6cbd443111dceef49&lt;br/&gt;
Author: hongchao.zhang &amp;lt;hongchao.zhang@whamcloud.com&amp;gt;&lt;br/&gt;
Date: &#160; Fri Aug 12 21:25:13 2011 +0800&lt;/p&gt;

&lt;p&gt; &#160; &#160;&lt;a href=&quot;https://jira.whamcloud.com/browse/LU-325&quot; title=&quot;performance-sanity test_8: @@@@@@ FAIL: test_8 failed with 1&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-325&quot;&gt;&lt;del&gt;LU-325&lt;/del&gt;&lt;/a&gt; using preallocated objects if OST has enough disk space&lt;br/&gt;
...&lt;/p&gt;

&lt;p&gt;OST reported ENOSPC to mdt, but MDT don&apos;t return a error to client and resend create request many times.&lt;/p&gt;</comment>
                            <comment id="20507" author="shadow" created="Mon, 26 Sep 2011 07:50:25 +0000"  >&lt;p&gt;I reverted that commit on my local tree and that bug go away.&lt;/p&gt;</comment>
                            <comment id="20508" author="shadow" created="Mon, 26 Sep 2011 08:07:21 +0000"  >&lt;p&gt;high cpu load produced by result of loop of OSC create requests caused &lt;br/&gt;
                if (handle_async_create(fake_req, rc)  == -EAGAIN) {&lt;br/&gt;
                        oscc_internal_create(oscc);&lt;br/&gt;
...&lt;br/&gt;
when OST returned ENOSPC to create request.&lt;br/&gt;
before that change handle_async_create return a error in same case&lt;br/&gt;
        if (oscc-&amp;gt;oscc_flags &amp;amp; OSCC_FLAG_NOSPC)&lt;br/&gt;
                GOTO(out_wake, rc = -ENOSPC);&lt;br/&gt;
...&lt;br/&gt;
but now EGAIN returned and it&apos;s produced a loop from requests.&lt;/p&gt;</comment>
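&lt;p&gt;A minimal sketch of the behavioural difference (the types and function bodies below are simplified assumptions, not the real osc code):&lt;/p&gt;
&lt;pre&gt;
/* Simplified stand-ins for the real osc create path. */
#include &amp;lt;errno.h&amp;gt;

#define OSCC_FLAG_NOSPC 0x1

struct osc_creator { int oscc_flags; };

/* Before the change: surface -ENOSPC to the caller and stop. */
static int handle_async_create_old(struct osc_creator *oscc)
{
        if (oscc-&amp;gt;oscc_flags &amp;amp; OSCC_FLAG_NOSPC)
                return -ENOSPC;
        return 0;
}

/* After the change: -EAGAIN makes the caller re-issue the create via
 * oscc_internal_create(), so an out-of-space OST becomes an endless loop. */
static int handle_async_create_new(struct osc_creator *oscc)
{
        if (oscc-&amp;gt;oscc_flags &amp;amp; OSCC_FLAG_NOSPC)
                return -EAGAIN;
        return 0;
}
&lt;/pre&gt;</comment>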
                            <comment id="21687" author="green" created="Sat, 22 Oct 2011 00:04:13 +0000"  >&lt;p&gt;Now that we are in 2.2 cycle it would be nice to have this fixed.&lt;br/&gt;
Can you please take a look?&lt;/p&gt;</comment>
                            <comment id="21841" author="pjones" created="Tue, 25 Oct 2011 12:01:16 +0000"  >&lt;p&gt;Being worked under LU791&lt;/p&gt;</comment>
        </comments>
        <attachments>
        </attachments>
        <subtasks>
        </subtasks>
        <customfields>
            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                <customfieldname>Development</customfieldname>
                <customfieldvalues>
                </customfieldvalues>
            </customfield>
            <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                <customfieldname>Rank</customfieldname>
                <customfieldvalues>
                    <customfieldvalue>1|hzvbs7:</customfieldvalue>
                </customfieldvalues>
            </customfield>
            <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                <customfieldname>Rank (Obsolete)</customfieldname>
                <customfieldvalues>
                    <customfieldvalue>5473</customfieldvalue>
                </customfieldvalues>
            </customfield>
            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                <customfieldname>Severity</customfieldname>
                <customfieldvalues>
                    <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>
                </customfieldvalues>
            </customfield>
        </customfields>
    </item>
</channel>
</rss>