<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 01:40:26 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary, append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92">
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-4184] ll_layout_refresh() may ignore layout returned by LDLM_ENQUEUE</title>
                <link>https://jira.whamcloud.com/browse/LU-4184</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;&lt;span class=&quot;error&quot;&gt;&amp;#91;15:32:02&amp;#93;&lt;/span&gt; John Hammond: Can I ask you something about LDLM_FL_LVB_READY and layout fetch?&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;15:33:31&amp;#93;&lt;/span&gt; Jinshan Xiong: yes&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;15:35:27&amp;#93;&lt;/span&gt; John Hammond: In ll_layout_refresh(), in the &quot;requeue layout lock for file ...&quot; case, I see that the layout is returned in the reply to LDLM_ENQUEUE but we still send a MDS_GETXATTR to fetch the layout.&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;15:36:14&amp;#93;&lt;/span&gt; Jinshan Xiong: actually not. This is the tricky part.&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;15:36:15&amp;#93;&lt;/span&gt; John Hammond: It seems because LDLM_FL_LVB_READY isn&apos;t set until ll_layout_conf() is called.&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;15:37:07&amp;#93;&lt;/span&gt; Jinshan Xiong: ah LDLM_ENQUEUE, let me see.&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;15:37:40&amp;#93;&lt;/span&gt; John Hammond: I&apos;m looking at the debug log and I see two RPCs.&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;15:38:26&amp;#93;&lt;/span&gt; Jinshan Xiong: right now, I believe the layout is returned by getxattr&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;15:39:05&amp;#93;&lt;/span&gt; John Hammond: But also by the reply to LDLM_ENQUEUE:&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;15:39:06&amp;#93;&lt;/span&gt; John Hammond: 00000002:00010000:3.0:1383074475.529239:0:4081:0:(mdc_locks.c:760:mdc_finish_enqueue()) ### layout lock returned by: layout, lvb_len: 56&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;15:41:12&amp;#93;&lt;/span&gt; Jinshan Xiong: yes, it can be returned by LDLM_ENQUEUE, but not by completion_ast().&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;15:42:10&amp;#93;&lt;/span&gt; Jinshan Xiong: did you see that the layout is returned but LVB_READY is not set&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;15:42:12&amp;#93;&lt;/span&gt; Jinshan Xiong: ?&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;15:42:23&amp;#93;&lt;/span&gt; John Hammond: Right.&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;15:43:03&amp;#93;&lt;/span&gt; Jinshan Xiong: is that a LAYOUT lock enqueue?&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;15:43:58&amp;#93;&lt;/span&gt; John Hammond: Yes, from ll_layout_refresh().&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;15:46:02&amp;#93;&lt;/span&gt; John Hammond: Outside of OSC, only ldlm_lock_allow_match() will set LDLM_FL_LVB_READY and only ll_layout_conf() calls ldlm_lock_allow_match().&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;15:46:36&amp;#93;&lt;/span&gt; Jinshan Xiong: This is not good.&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;15:46:51&amp;#93;&lt;/span&gt; Jinshan Xiong: we can only set LVB_READY after layout is applied.&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;15:47:54&amp;#93;&lt;/span&gt; Jinshan Xiong: seems that we need a new flag to mark that layout has been transferred by DLM ENQUEUE&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;15:48:25&amp;#93;&lt;/span&gt; Jinshan Xiong: instead of using ldlm_is_lvb_ready() in ll_layout_fetch()&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;15:55:58&amp;#93;&lt;/span&gt; John Hammond: Maybe I&apos;m missing something but why can&apos;t the &quot;layout lock returned by ...&quot; block of  mdc_finish_enqueue() set LDLM_FL_LVB_READY?&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;15:56:40&amp;#93;&lt;/span&gt; Jinshan Xiong: because having a valid layout lock on the client means the layout is correct.&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;15:57:02&amp;#93;&lt;/span&gt; Jinshan Xiong: So we have to apply the layout before setting the lvb ready flag&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;15:57:58&amp;#93;&lt;/span&gt; John Hammond: &amp;gt; because having a valid layout lock on the client means the layout is correct.&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;15:58:20&amp;#93;&lt;/span&gt; John Hammond: Which layout?&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;16:00:11&amp;#93;&lt;/span&gt; John Hammond: Do you mean that we need separate flags for &quot;LVB ready&quot; and &quot;allow match&quot;?&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;16:00:18&amp;#93;&lt;/span&gt; Jinshan Xiong: cl_conf_set().&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;16:00:42&amp;#93;&lt;/span&gt; Jinshan Xiong: LVB ready implies allow match.&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;16:00:58&amp;#93;&lt;/span&gt; Jinshan Xiong: I mean a new flag to mark layout is returned from DLM ENQ&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;16:01:13&amp;#93;&lt;/span&gt; Jinshan Xiong: actually in current code, just an extra RPC is needed.&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;16:01:30&amp;#93;&lt;/span&gt; Jinshan Xiong: based on the fact that layout is rarely changed, maybe not a big deal&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;16:02:10&amp;#93;&lt;/span&gt; John Hammond: Although it&apos;s rarely changed, the lock can be cancelled frequently.&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;16:02:47&amp;#93;&lt;/span&gt; John Hammond: I&apos;m not changing layouts here.&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;16:04:18&amp;#93;&lt;/span&gt; Jinshan Xiong: I see. In that case, we can define a new flag for this, or just use l_lvb_data to mark that it&apos;s already had a valid layout&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;16:05:07&amp;#93;&lt;/span&gt; Jinshan Xiong: ah no, we can&apos;t use l_lvb_data, because it may be an empty file w/o layout&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;00000080:00010000:3.0:1383074475.528860:0:4081:0:(file.c:3826:ll_layout_refresh()) ### lustre: requeue layout lock for file ffff880216661180/[0x200000400:0x9:0x0].
00010000:00010000:3.0:1383074475.528872:0:4081:0:(ldlm_lock.c:797:ldlm_lock_addref_internal_nolock()) ### ldlm_lock_addref(CR) ns: lustre-MDT0000-mdc-ffff8801f9de0c00 lock: ffff8802028618c0/0xb77a1fc2b7a52873 lrc: 3/1,0 mode: --/CR res: [0x200000400:0x9:0x0].0 bits 0x0 rrc: 1 type: IBT flags: 0x10000000000000 nid: local remote: 0x0 expref: -99 pid: 4081 timeout: 0 lvb_type: 3
00010000:00010000:3.0:1383074475.528877:0:4081:0:(ldlm_request.c:926:ldlm_cli_enqueue()) ### client-side enqueue START, flags 1000
00010000:00010000:3.0:1383074475.528881:0:4081:0:(ldlm_request.c:988:ldlm_cli_enqueue()) ### sending request ns: lustre-MDT0000-mdc-ffff8801f9de0c00 lock: ffff8802028618c0/0xb77a1fc2b7a52873 lrc: 3/1,0 mode: --/CR res: [0x200000400:0x9:0x0].0 bits 0x8 rrc: 1 type: IBT flags: 0x0 nid: local remote: 0x0 expref: -99 pid: 4081 timeout: 0 lvb_type: 3
00000100:00100000:3.0:1383074475.528887:0:4081:0:(client.c:1469:ptlrpc_send_new_req()) Sending RPC pname:cluuid:pid:xid:nid:opc sys_stat:6b228439-eb8c-afd9-ca76-05a18f32a6fe:4081:1450258649319240:0@lo:101
00000100:00100000:3.0:1383074475.528897:0:4081:0:(events.c:352:request_in_callback()) peer: 12345-0@lo
00000100:00100000:3.0:1383074475.528920:0:4081:0:(client.c:2117:ptlrpc_set_wait()) set ffff8801f5279cc0 going to sleep for 6 seconds
00000100:00100000:2.0:1383074475.528989:0:3435:0:(service.c:2011:ptlrpc_server_handle_request()) Handling RPC pname:cluuid+ref:pid:xid:nid:opc mdt01_002:6b228439-eb8c-afd9-ca76-05a18f32a6fe+30:4081:x1450258649319240:12345-0@lo:101
00000100:00100000:2.0:1383074475.529103:0:3435:0:(service.c:2055:ptlrpc_server_handle_request()) Handled RPC pname:cluuid+ref:pid:xid:nid:opc mdt01_002:6b228439-eb8c-afd9-ca76-05a18f32a6fe+32:4081:x1450258649319240:12345-0@lo:101 Request procesed in 116us (205us total) trans 0 rc 0/0
00010000:00080000:3.0:1383074475.529212:0:4081:0:(ldlm_request.c:1317:ldlm_cli_update_pool()) @@@ Zero SLV or Limit found (SLV: 14166000000, Limit: 0)  req@ffff8801f52d9800 x1450258649319240/t0(0) o101-&amp;gt;lustre-MDT0000-mdc-ffff8801f9de0c00@0@lo:12/10 lens 376/368 e 0 to 0 dl 1383074482 ref 2 fl Rpc:R/0/0 rc 0/0
00000100:00100000:3.0:1383074475.529218:0:4081:0:(client.c:1834:ptlrpc_check_set()) Completed RPC pname:cluuid:pid:xid:nid:opc sys_stat:6b228439-eb8c-afd9-ca76-05a18f32a6fe:4081:1450258649319240:0@lo:101
00010000:00010000:3.0:1383074475.529225:0:4081:0:(ldlm_lock.c:1091:ldlm_granted_list_add_lock()) ### About to add lock: ns: lustre-MDT0000-mdc-ffff8801f9de0c00 lock: ffff8802028618c0/0xb77a1fc2b7a52873 lrc: 4/1,0 mode: CR/CR res: [0x200000400:0x9:0x0].0 bits 0x8 rrc: 1 type: IBT flags: 0x10000000000000 nid: local remote: 0xb77a1fc2b7a52881 expref: -99 pid: 4081 timeout: 0 lvb_type: 3
00010000:00010000:3.0:1383074475.529230:0:4081:0:(ldlm_request.c:700:ldlm_cli_enqueue_fini()) ### client-side enqueue END ns: lustre-MDT0000-mdc-ffff8801f9de0c00 lock: ffff8802028618c0/0xb77a1fc2b7a52873 lrc: 4/1,0 mode: CR/CR res: [0x200000400:0x9:0x0].0 bits 0x8 rrc: 1 type: IBT flags: 0x0 nid: local remote: 0xb77a1fc2b7a52881 expref: -99 pid: 4081 timeout: 0 lvb_type: 3
00000002:00100000:3.0:1383074475.529235:0:4081:0:(mdc_locks.c:640:mdc_finish_enqueue()) @@@ op: 1024 disposition: 0, status: 0  req@ffff8801f52d9800 x1450258649319240/t0(0) o101-&amp;gt;lustre-MDT0000-mdc-ffff8801f9de0c00@0@lo:12/10 lens 376/368 e 0 to 0 dl 1383074482 ref 1 fl Complete:R/0/0 rc 0/0
00000002:00010000:3.0:1383074475.529239:0:4081:0:(mdc_locks.c:760:mdc_finish_enqueue()) ### layout lock returned by: layout, lvb_len: 56
00000080:00010000:3.0:1383074475.529247:0:4081:0:(llite_internal.h:1569:ll_set_lock_data()) setting l_data to inode ffff880216661180 (144115205255725065/33554436) for lock 0xb77a1fc2b7a52873
00000080:00010000:3.0:1383074475.529250:0:4081:0:(file.c:3659:ll_layout_lock_set()) ### File ffff880216661180/[0x200000400:0x9:0x0] being reconfigured: 1.
00000100:00100000:3.0:1383074475.529267:0:4081:0:(client.c:1469:ptlrpc_send_new_req()) Sending RPC pname:cluuid:pid:xid:nid:opc sys_stat:6b228439-eb8c-afd9-ca76-05a18f32a6fe:4081:1450258649319256:0@lo:49
00000100:00100000:3.0:1383074475.529289:0:4081:0:(events.c:352:request_in_callback()) peer: 12345-0@lo
00000100:00100000:3.0:1383074475.529295:0:4081:0:(client.c:2117:ptlrpc_set_wait()) set ffff8801f5279cc0 going to sleep for 6 seconds
00000100:00100000:3.0:1383074475.529329:0:3433:0:(service.c:2011:ptlrpc_server_handle_request()) Handling RPC pname:cluuid+ref:pid:xid:nid:opc mdt01_000:6b228439-eb8c-afd9-ca76-05a18f32a6fe+32:4081:x1450258649319256:12345-0@lo:49
00000100:00100000:3.0:1383074481.529269:0:4081:0:(client.c:2117:ptlrpc_set_wait()) set ffff8801f5279cc0 going to sleep for 37 seconds
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</description>
                <environment></environment>
        <key id="21716">LU-4184</key>
            <summary>ll_layout_refresh() may ignore layout returned by LDLM_ENQUEUE</summary>
                <type id="4" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11310&amp;avatarType=issuetype">Improvement</type>
                                            <priority id="4" iconUrl="https://jira.whamcloud.com/images/icons/priorities/minor.svg">Minor</priority>
                        <status id="6" iconUrl="https://jira.whamcloud.com/images/icons/statuses/closed.png" description="The issue is considered finished, the resolution is correct. Issues which are closed can be reopened.">Closed</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="jay">Jinshan Xiong</assignee>
                                    <reporter username="jhammond">John Hammond</reporter>
                        <labels>
                            <label>layout</label>
                    </labels>
                <created>Tue, 29 Oct 2013 21:22:32 +0000</created>
                <updated>Thu, 8 Feb 2018 18:20:17 +0000</updated>
                            <resolved>Thu, 8 Feb 2018 18:20:17 +0000</resolved>
                                    <version>Lustre 2.6.0</version>
                                                        <due></due>
                            <votes>0</votes>
                                    <watches>4</watches>
                                                                            <comments>
                            <comment id="70224" author="bzzz" created="Wed, 30 Oct 2013 04:02:09 +0000"  >&lt;p&gt;I think we should not be using LVB for this - the code on MDS side becomes too tricky (due to possible re-entrance into MDT via LDLM) and buys us nearly nothing because in the majority of cases the layout is brought by another means.&lt;/p&gt;</comment>
                            <comment id="220456" author="jay" created="Thu, 8 Feb 2018 18:20:17 +0000"  >&lt;p&gt;The fix is already in Lustre.&lt;/p&gt;</comment>
                    </comments>
                    <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                    <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzw79j:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>11321</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                </customfields>
    </item>
</channel>
</rss>