<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 01:50:55 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary, append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
<language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-5371] Failure on test suite parallel-scale test_simul</title>
                <link>https://jira.whamcloud.com/browse/LU-5371</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;This issue was created by maloo for sarah &amp;lt;sarah@whamcloud.com&amp;gt;&lt;/p&gt;

&lt;p&gt;This issue relates to the following test suite run: &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/c8156066-0dc7-11e4-af8b-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/c8156066-0dc7-11e4-af8b-5254006e85c2&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The sub-test test_simul failed with the following error:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;simul failed! 1&lt;/p&gt;&lt;/blockquote&gt;

&lt;p&gt;test log:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;22:35:15: FAILED in simul_rmdir: too many operations succeeded (2).
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</description>
                <environment>client and server: lustre-b2_6-rc2 RHEL6 ldiskfs</environment>
        <key id="25652">LU-5371</key>
            <summary>Failure on test suite parallel-scale test_simul</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="1" iconUrl="https://jira.whamcloud.com/images/icons/priorities/blocker.svg">Blocker</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="di.wang">Di Wang</assignee>
                                    <reporter username="maloo">Maloo</reporter>
                        <labels>
                    </labels>
                <created>Fri, 18 Jul 2014 20:28:44 +0000</created>
                <updated>Fri, 17 Oct 2014 17:59:40 +0000</updated>
                            <resolved>Thu, 21 Aug 2014 16:48:17 +0000</resolved>
                                    <version>Lustre 2.6.0</version>
                                    <fixVersion>Lustre 2.7.0</fixVersion>
                                        <due></due>
                            <votes>0</votes>
                                    <watches>4</watches>
                                                                            <comments>
                            <comment id="89645" author="jlevi" created="Mon, 21 Jul 2014 17:35:01 +0000"  >&lt;p&gt;Di,&lt;br/&gt;
Could you please look into and comment on this one?&lt;br/&gt;
Thank you!&lt;/p&gt;</comment>
                            <comment id="89677" author="sarah" created="Mon, 21 Jul 2014 20:30:11 +0000"  >&lt;p&gt;Also hit this issue in interop testing between a 2.5.2 server and b2_6-rc2.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/3c3c2f74-0d82-11e4-b3f5-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/3c3c2f74-0d82-11e4-b3f5-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="89696" author="di.wang" created="Tue, 22 Jul 2014 00:05:05 +0000"  >&lt;p&gt;Hmm, I checked the logs; it seems the failures started to occur around July 3rd (build 2548).&lt;/p&gt;

&lt;p&gt;It seems the simul test tries to rmdir a single directory from multiple clients (multiple threads), and only one thread should succeed. But two threads succeeded in this test, which is why it reports a failure.&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;04:50:57: FAILED in simul_rmdir: too many operations succeeded (2).
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Hmm, I checked the debug logs on both clients and on the MDT side.&lt;/p&gt;

&lt;p&gt;Client1 (rmdir succeeds)&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;00000080:00200000:1.0:1405575315.343857:0:29632:0:(namei.c:1137:ll_rmdir_generic()) VFS Op:name=simul_rmdir.0, dir=[0x2000013a8:0x143db:0x0](ffff880060e42138)
00010000:00010000:1.0:1405575315.343863:0:29632:0:(ldlm_request.c:1113:ldlm_cli_cancel_local()) ### client-side cancel ns: lustre-MDT0000-mdc-ffff88005dc3e000 lock: ffff88001f3b3500/0x4c7dc53d5dce65cc lrc: 2/0,0 mode: PR/PR res: [0x2000013a7:0x1c43f:0x0].0 bits 0x13 rrc: 2 type: IBT flags: 0x8400000000 nid: local remote: 0xce57c426058b2c04 expref: -99 pid: 29632 timeout: 0 lvb_type: 0
00010000:00010000:1.0:1405575315.343874:0:29632:0:(ldlm_request.c:1172:ldlm_cancel_pack()) ### packing ns: lustre-MDT0000-mdc-ffff88005dc3e000 lock: ffff88001f3b3500/0x4c7dc53d5dce65cc lrc: 1/0,0 mode: --/PR res: [0x2000013a7:0x1c43f:0x0].0 bits 0x13 rrc: 1 type: IBT flags: 0x4809400000000 nid: local remote: 0xce57c426058b2c04 expref: -99 pid: 29632 timeout: 0 lvb_type: 0
00010000:00010000:1.0:1405575315.343877:0:29632:0:(ldlm_request.c:1176:ldlm_cancel_pack()) 1 locks packed
00010000:00010000:1.0:1405575315.343877:0:29632:0:(ldlm_lock.c:219:ldlm_lock_put()) ### final lock_put on destroyed lock, freeing it. ns: lustre-MDT0000-mdc-ffff88005dc3e000 lock: ffff88001f3b3500/0x4c7dc53d5dce65cc lrc: 0/0,0 mode: --/PR res: [0x2000013a7:0x1c43f:0x0].0 bits 0x13 rrc: 1 type: IBT flags: 0x4809400000000 nid: local remote: 0xce57c426058b2c04 expref: -99 pid: 29632 timeout: 0 lvb_type: 0
00000100:00100000:1.0:1405575315.343885:0:29632:0:(client.c:1480:ptlrpc_send_new_req()) Sending RPC pname:cluuid:pid:xid:nid:opc simul:49ab2d2b-0df9-c120-4bb6-1f0147383ab7:29632:1473843428301100:10.2.4.203@tcp:36
00000100:00100000:1.0:1405575315.343909:0:29632:0:(client.c:2146:ptlrpc_set_wait()) set ffff88001f1b0440 going to sleep for 43 seconds
00000100:00100000:1.0:1405575315.345000:0:29632:0:(client.c:1863:ptlrpc_check_set()) Completed RPC pname:cluuid:pid:xid:nid:opc simul:49ab2d2b-0df9-c120-4bb6-1f0147383ab7:29632:1473843428301100:10.2.4.203@tcp:36
00000080:00200000:1.0:1405575315.345007:0:29632:0:(llite_lib.c:1425:ll_clear_inode()) VFS Op:inode=[0x2000013a7:0x1c43f:0x0](ffff8800616f1178)
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;


&lt;p&gt;Client2 (rmdir also succeeds)&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;00000080:00200000:0.0:1405575315.345956:0:18310:0:(namei.c:1137:ll_rmdir_generic()) VFS Op:name=simul_rmdir.0, dir=[0x2000013a8:0x143db:0x0](ffff88005d0bebb8)
00010000:00010000:0.0:1405575315.345961:0:18310:0:(ldlm_request.c:1113:ldlm_cli_cancel_local()) ### client-side cancel ns: lustre-MDT0000-mdc-ffff88007dbfd000 lock: ffff88002e7eed40/0x366918de5153be5b lrc: 2/0,0 mode: PR/PR res: [0x2000013a7:0x1c43f:0x0].0 bits 0x13 rrc: 2 type: IBT flags: 0x8400000000 nid: local remote: 0xce57c426058b2c20 expref: -99 pid: 18310 timeout: 0 lvb_type: 0
00010000:00010000:0.0:1405575315.345970:0:18310:0:(ldlm_request.c:1172:ldlm_cancel_pack()) ### packing ns: lustre-MDT0000-mdc-ffff88007dbfd000 lock: ffff88002e7eed40/0x366918de5153be5b lrc: 1/0,0 mode: --/PR res: [0x2000013a7:0x1c43f:0x0].0 bits 0x13 rrc: 1 type: IBT flags: 0x4809400000000 nid: local remote: 0xce57c426058b2c20 expref: -99 pid: 18310 timeout: 0 lvb_type: 0
00010000:00010000:0.0:1405575315.345974:0:18310:0:(ldlm_request.c:1176:ldlm_cancel_pack()) 1 locks packed
00010000:00010000:0.0:1405575315.345975:0:18310:0:(ldlm_lock.c:219:ldlm_lock_put()) ### final lock_put on destroyed lock, freeing it. ns: lustre-MDT0000-mdc-ffff88007dbfd000 lock: ffff88002e7eed40/0x366918de5153be5b lrc: 0/0,0 mode: --/PR res: [0x2000013a7:0x1c43f:0x0].0 bits 0x13 rrc: 1 type: IBT flags: 0x4809400000000 nid: local remote: 0xce57c426058b2c20 expref: -99 pid: 18310 timeout: 0 lvb_type: 0
00000100:00100000:0.0:1405575315.345981:0:18310:0:(client.c:1480:ptlrpc_send_new_req()) Sending RPC pname:cluuid:pid:xid:nid:opc simul:992018ad-6c7b-3f3b-15cf-42922a66aa9a:18310:1473838828202428:10.2.4.203@tcp:36
00000100:00100000:0.0:1405575315.345989:0:18310:0:(client.c:2146:ptlrpc_set_wait()) set ffff88002c1cf200 going to sleep for 43 seconds
00000100:00100000:0.0:1405575315.347314:0:18310:0:(client.c:1863:ptlrpc_check_set()) Completed RPC pname:cluuid:pid:xid:nid:opc simul:992018ad-6c7b-3f3b-15cf-42922a66aa9a:18310:1473838828202428:10.2.4.203@tcp:36
00000080:00200000:0.0:1405575315.347322:0:18310:0:(llite_lib.c:1425:ll_clear_inode()) VFS Op:inode=[0x2000013a7:0x1c43f:0x0](ffff88003ab0c1b8)
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;It is interesting that when the MDT handled the request from client2, it already returned ENOENT, but it seems client2 ignored that:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;00000100:00100000:1.0:1405575315.353820:0:25427:0:(service.c:2144:ptlrpc_server_handle_request()) Handled RPC pname:cluuid+ref:pid:xid:nid:opc mdt00_003:992018ad-6c7b-3f3b-15cf-42922a66aa9a+675:18310:x1473838828202428:12345-10.2.4.205@tcp:36 Request procesed in 988us (1036us total) trans 0 rc -2/-2
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Sigh, there is not enough debug log to tell me what happened there; I will keep digging.&lt;/p&gt;</comment>
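The invariant di.wang describes is plain POSIX rmdir semantics: when several threads race to remove the same directory, exactly one call should succeed and the rest should fail with ENOENT. The following is a minimal local sketch of what simul_rmdir checks, not the actual simul test code; the thread count and temporary path are illustrative assumptions.

```python
import errno
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

def try_rmdir(path):
    """Attempt rmdir; return True on success, False on ENOENT."""
    try:
        os.rmdir(path)
        return True
    except OSError as e:
        if e.errno == errno.ENOENT:
            return False
        raise

# One shared directory, many racing "clients" (threads here stand in
# for the multiple Lustre clients in the real test).
target = tempfile.mkdtemp()
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(lambda _: try_rmdir(target), range(8)))

successes = sum(results)
# The invariant simul_rmdir asserts: exactly one rmdir may succeed.
assert successes == 1, f"too many operations succeeded ({successes})"
```

In the failing runs above, two Lustre clients each saw their rmdir RPC return success even though the MDT reported rc -2 (-ENOENT) for the second one, which is exactly this invariant being violated.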
                            <comment id="89698" author="di.wang" created="Tue, 22 Jul 2014 00:42:55 +0000"  >&lt;p&gt;&lt;a href=&quot;http://review.whamcloud.com/11170&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/11170&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;Sorry, this is brought in by &lt;br/&gt;
commit 3ea78dd02a57211ae9b55111323d14cfbc71bc43&lt;br/&gt;
Author: Wang Di &amp;lt;di.wang@intel.com&amp;gt;&lt;br/&gt;
Date:   Wed Jun 25 22:35:52 2014 -0700&lt;/p&gt;

&lt;p&gt;    &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-4921&quot; title=&quot;DNE clients should fall back to &amp;quot;try all stripes&amp;quot; for lookups in directories with unknown hash functions &quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-4921&quot;&gt;&lt;del&gt;LU-4921&lt;/del&gt;&lt;/a&gt; lmv: try all stripes for unknown hash functions&lt;/p&gt;

&lt;p&gt;    For unknown hash type, LMV should try all stripes to locate&lt;br/&gt;
    the name entry. But it will only for lookup and unlink, i.e.&lt;br/&gt;
    we can only list and unlink entries under striped dir with&lt;br/&gt;
    unknown hash type.&lt;/p&gt;

&lt;p&gt;    Signed-off-by: wang di &amp;lt;di.wang@intel.com&amp;gt;&lt;br/&gt;
    Change-Id: Ifeed7131c24e48277a6cc8fd4c09b7534e31079f&lt;br/&gt;
    Reviewed-on: &lt;a href=&quot;http://review.whamcloud.com/10041&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/10041&lt;/a&gt;&lt;br/&gt;
    Tested-by: Jenkins&lt;br/&gt;
    Reviewed-by: John L. Hammond &amp;lt;john.hammond@intel.com&amp;gt;&lt;br/&gt;
    Reviewed-by: Andreas Dilger &amp;lt;andreas.dilger@intel.com&amp;gt;&lt;br/&gt;
    Tested-by: Maloo &amp;lt;hpdd-maloo@intel.com&amp;gt;&lt;/p&gt;</comment>
                            <comment id="92147" author="jlevi" created="Thu, 21 Aug 2014 16:48:17 +0000"  >&lt;p&gt;Patch landed to Master.&lt;/p&gt;</comment>
                            <comment id="96610" author="sarah" created="Fri, 17 Oct 2014 17:59:09 +0000"  >&lt;p&gt;Hit this error in an interop test between a 2.6.0 client and a master server; the patch needs to be back-ported to b2_6 to fix the error.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/336e7e88-5558-11e4-a570-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/336e7e88-5558-11e4-a570-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                                                <inwardlinks description="is related to">
                                        <issuelink>
            <issuekey id="24267">LU-4921</issuekey>
        </issuelink>
                            </inwardlinks>
                                    </issuelinktype>
                    </issuelinks>
                <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzwrsn:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>14975</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>