<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 01:50:27 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary, append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-5319] Support multiple slots per client in last_rcvd file</title>
                <link>https://jira.whamcloud.com/browse/LU-5319</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;While running the mdtest benchmark, I have observed that file creation and unlink operations from a single Lustre client quickly saturate at around 8000 IOPS: the maximum is reached with as few as 4 tasks in parallel.&lt;br/&gt;
When using several Lustre mount points on a single client node, the file creation and unlink rates do scale with the number of tasks, up to the 16 cores of my client node.&lt;/p&gt;

&lt;p&gt;Looking at the code, it appears that most metadata operations are serialized by a mutex in the MDC layer.&lt;br/&gt;
In the mdc_reint() routine, request posting is protected by mdc_get_rpc_lock() and mdc_put_rpc_lock(), where the lock is:&lt;br/&gt;
&lt;tt&gt;struct client_obd -&amp;gt; struct mdc_rpc_lock *cl_rpc_lock -&amp;gt; struct mutex rpcl_mutex&lt;/tt&gt;.&lt;/p&gt;

&lt;p&gt;After an email discussion with Andreas Dilger, it appears that the limitation is actually on the MDS, since it cannot handle more than a single filesystem-modifying RPC at one time. There is only one slot in the MDT last_rcvd file for each client to save the state for the reply in case it is lost.&lt;/p&gt;

&lt;p&gt;The aim of this ticket is to implement multiple slots per client in the last_rcvd file so that several filesystem-modifying RPCs can be handled in parallel.&lt;/p&gt;

&lt;p&gt;Single-client metadata performance should be significantly improved while still ensuring a safe recovery mechanism.&lt;/p&gt;</description>
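The serialization described above can be modeled in a few lines of user-space C. This is a toy sketch: the names mirror those quoted in the description (mdc_get_rpc_lock, mdc_put_rpc_lock, rpcl_mutex, mdc_reint), but it is not the actual Lustre kernel code.

```c
#include <pthread.h>

struct mdc_rpc_lock {
	pthread_mutex_t rpcl_mutex;	/* stands in for 'struct mutex rpcl_mutex' */
};

static struct mdc_rpc_lock cl_rpc_lock = {
	.rpcl_mutex = PTHREAD_MUTEX_INITIALIZER,
};

static void mdc_get_rpc_lock(struct mdc_rpc_lock *lck)
{
	pthread_mutex_lock(&lck->rpcl_mutex);
}

static void mdc_put_rpc_lock(struct mdc_rpc_lock *lck)
{
	pthread_mutex_unlock(&lck->rpcl_mutex);
}

static int nr_in_flight;	/* "RPCs" currently posted (protected by the lock) */
static int max_observed;	/* peak concurrency ever seen */

/* Model of mdc_reint(): every modifying request takes the same lock,
 * so at most one modifying RPC per client is in flight at a time,
 * no matter how many tasks run in parallel. */
static void *mdc_reint(void *arg)
{
	(void)arg;
	for (int i = 0; i < 1000; i++) {
		mdc_get_rpc_lock(&cl_rpc_lock);
		if (++nr_in_flight > max_observed)
			max_observed = nr_in_flight;
		/* ... post request, wait for reply ... */
		nr_in_flight--;
		mdc_put_rpc_lock(&cl_rpc_lock);
	}
	return NULL;
}
```

Running several threads through this model never observes more than one request "in flight", which is exactly the saturation effect reported in the benchmark.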
                <environment></environment>
        <key id="25521">LU-5319</key>
            <summary>Support multiple slots per client in last_rcvd file</summary>
                <type id="2" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11311&amp;avatarType=issuetype">New Feature</type>
                                            <priority id="4" iconUrl="https://jira.whamcloud.com/images/icons/priorities/minor.svg">Minor</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="bzzz">Alex Zhuravlev</assignee>
                                    <reporter username="pichong">Gregoire Pichon</reporter>
                        <labels>
                            <label>p4b</label>
                            <label>patch</label>
                            <label>performance</label>
                            <label>recovery</label>
                    </labels>
                <created>Thu, 10 Jul 2014 15:00:19 +0000</created>
                <updated>Wed, 19 Oct 2022 00:08:56 +0000</updated>
                            <resolved>Thu, 27 Aug 2015 13:58:41 +0000</resolved>
                                    <version>Lustre 2.8.0</version>
                                    <fixVersion>Lustre 2.8.0</fixVersion>
                                        <due></due>
                            <votes>0</votes>
                                    <watches>34</watches>
                                                                            <comments>
                            <comment id="88702" author="pjones" created="Thu, 10 Jul 2014 15:06:57 +0000"  >&lt;p&gt;thanks Gregoire!&lt;/p&gt;</comment>
                            <comment id="88732" author="pjones" created="Thu, 10 Jul 2014 17:20:33 +0000"  >&lt;p&gt;Alex&lt;/p&gt;

&lt;p&gt;I understand that you have done some initial experimentation in this area.&lt;/p&gt;

&lt;p&gt;Peter&lt;/p&gt;</comment>
                            <comment id="89339" author="pjones" created="Thu, 17 Jul 2014 13:57:17 +0000"  >&lt;p&gt;Alex&lt;/p&gt;

&lt;p&gt;Is this the prototype for this work? &lt;a href=&quot;http://review.whamcloud.com/#/c/9871/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/#/c/9871/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Peter&lt;/p&gt;</comment>
                            <comment id="89340" author="bzzz" created="Thu, 17 Jul 2014 14:08:44 +0000"  >&lt;p&gt;sorry, missed the first comment.. yes, that&apos;s it.&lt;/p&gt;</comment>
                            <comment id="90237" author="rread" created="Mon, 28 Jul 2014 21:41:01 +0000"  >&lt;p&gt;Alex, it might be a good idea to land the client part, and perhaps even the protocol changes (with backward compatibility), from that patch sooner rather than later (of course with maxslots = 1). This will reduce the size of the patch, and we&apos;ll also have a larger base of potentially compatible clients once we have servers that support multislot &amp;gt; 1.&lt;/p&gt;</comment>
                            <comment id="90264" author="adilger" created="Tue, 29 Jul 2014 06:17:35 +0000"  >&lt;p&gt;Robert, have you had any chance to test this patch out with workloads that might benefit?&lt;/p&gt;</comment>
                            <comment id="90301" author="rread" created="Tue, 29 Jul 2014 15:35:59 +0000"  >&lt;p&gt;No, I haven&apos;t. &lt;/p&gt;</comment>
                            <comment id="90430" author="pichong" created="Wed, 30 Jul 2014 13:48:56 +0000"  >&lt;p&gt;I have run the mdtest benchmark on a single client in the following configuration:&lt;/p&gt;
&lt;ul class=&quot;alternate&quot; type=&quot;square&quot;&gt;
	&lt;li&gt;lustre 2.5.60&lt;/li&gt;
	&lt;li&gt;lustre 2.5.60 with fail_loc=0x804 (it bypasses the mdc request serialization)&lt;/li&gt;
	&lt;li&gt;lustre 2.5.60 + patch #9871 from Alexey&lt;/li&gt;
&lt;/ul&gt;


&lt;p&gt;Results are given in attachment:&lt;br/&gt;
mdtest lustre 2.5.60 file creation.png&lt;br/&gt;
mdtest lustre 2.5.60 file removal.png&lt;/p&gt;</comment>
                            <comment id="90436" author="mjmac" created="Wed, 30 Jul 2014 14:18:34 +0000"  >&lt;p&gt;Gregoire,&lt;/p&gt;

&lt;p&gt;That&apos;s a pretty compelling result. Do you have a feel for how the patch performance compares to running with multiple mountpoints per client node? I would assume that the multi-mount performance would be similar (or even a bit worse), but I&apos;m curious as to whether you have any data for that case.&lt;/p&gt;</comment>
                            <comment id="90517" author="pichong" created="Thu, 31 Jul 2014 06:57:39 +0000"  >&lt;p&gt;Here are the results with one additional configuration: multiple mount points of the same fs on the single client node. It&apos;s slightly better for file creation and quite similar for file removal.&lt;/p&gt;

&lt;p&gt;see mdtest lustre file creation b.png&lt;br/&gt;
and mdtest lustre file removal b.png&lt;/p&gt;</comment>
                            <comment id="90541" author="rread" created="Thu, 31 Jul 2014 15:13:45 +0000"  >&lt;p&gt;&lt;a href=&quot;https://jira.whamcloud.com/secure/ViewProfile.jspa?name=pichong&quot; class=&quot;user-hover&quot; rel=&quot;pichong&quot;&gt;pichong&lt;/a&gt;, thanks for running that again, this is helpful.  How many mounts did you use,  by the way?&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://jira.whamcloud.com/secure/ViewProfile.jspa?name=bzzz&quot; class=&quot;user-hover&quot; rel=&quot;bzzz&quot;&gt;bzzz&lt;/a&gt;, can we make maxslots tunable or even better have the client request a desired value? For some workloads at least I think it should even be a multiple of cores on the client. &lt;/p&gt;</comment>
                            <comment id="90620" author="bzzz" created="Fri, 1 Aug 2014 10:34:48 +0000"  >&lt;p&gt;Robert, it was a prototype. I think if we want to productize the patch, then it makes sense to collect requirements like maxslots, etc.&lt;/p&gt;</comment>
                            <comment id="90634" author="pichong" created="Fri, 1 Aug 2014 14:28:50 +0000"  >&lt;p&gt;For multi-mount case there was one mount point per task (24 mount points at max).&lt;/p&gt;

&lt;p&gt;I am currently writing a solution architecture document, so we all agree on improvement requirements.&lt;/p&gt;

&lt;p&gt;Alexey, I have a question on your prototype...&lt;br/&gt;
Why is the reply data corresponding to the export&apos;s last transno never released? The code comment says &lt;cite&gt;otherwise we risk to lose last_committed&lt;/cite&gt;, but I don&apos;t see what the &lt;tt&gt;ted_last_reply&lt;/tt&gt; value is used for.&lt;/p&gt;</comment>
                            <comment id="90673" author="bzzz" created="Sun, 3 Aug 2014 06:06:32 +0000"  >&lt;p&gt;&amp;gt; Alexey, I have a question on your prototype... Why the reply data corresponding to export&apos;s last transno is never released ? The code comment says otherwise we risk to lose last_committed, but I don&apos;t see what ted_last_reply value is used for.&lt;/p&gt;

&lt;p&gt;it&apos;s used to retain corresponding slot in last_rcvd. see tgt_free_reply_data().&lt;/p&gt;</comment>
                            <comment id="90675" author="pichong" created="Mon, 4 Aug 2014 06:55:05 +0000"  >&lt;p&gt;Sorry, it&apos;s still not clear...&lt;br/&gt;
I see &lt;tt&gt;ted_last_reply&lt;/tt&gt; field updated to retain the reply data corresponding to last transno, but it is not used anywhere else.&lt;br/&gt;
And when the client area of the last_rcvd file is written, the last transno is taken from &lt;tt&gt;ted_transno&lt;/tt&gt;.&lt;/p&gt;</comment>
                            <comment id="90680" author="bzzz" created="Mon, 4 Aug 2014 08:17:56 +0000"  >&lt;p&gt;	if (ted-&amp;gt;ted_last_reply != NULL) {&lt;br/&gt;
		tgt_free_reply_data(lut, ted, ted-&amp;gt;ted_last_reply);&lt;br/&gt;
		ted-&amp;gt;ted_last_reply = NULL;&lt;br/&gt;
	}&lt;/p&gt;

&lt;p&gt;then in tgt_free_reply_data():&lt;br/&gt;
   tgt_bitmap_clear(lut, trd-&amp;gt;trd_index);&lt;/p&gt;

&lt;p&gt;after which the corresponding slot in the new last_rcvd file can be re-used.&lt;/p&gt;</comment>
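The slot-reuse mechanism Alex describes can be sketched as a small bitmap allocator. This is a simplified stand-in with invented field names (a single 64-bit word, a toy lu_target), not the real Lustre target code; the real tgt_bitmap_clear operates on a much larger bitmap.

```c
#include <stdint.h>

#define LUT_REPLY_SLOTS 64

struct lu_target {
	uint64_t lut_reply_bitmap;	/* bit i set => slot i holds a saved reply */
};

/* Find and claim a free reply slot; returns the slot index, or -1 if
 * every slot is pinned by a reply still awaiting acknowledgement. */
static int tgt_bitmap_find_free(struct lu_target *lut)
{
	for (int i = 0; i < LUT_REPLY_SLOTS; i++) {
		if (!(lut->lut_reply_bitmap & ((uint64_t)1 << i))) {
			lut->lut_reply_bitmap |= (uint64_t)1 << i;
			return i;
		}
	}
	return -1;
}

/* Called from the equivalent of tgt_free_reply_data(): once the bit is
 * cleared, the on-disk slot can be re-used by a later request. */
static void tgt_bitmap_clear(struct lu_target *lut, int index)
{
	lut->lut_reply_bitmap &= ~((uint64_t)1 << index);
}
```

The point of the exchange above is that ted_last_reply exists precisely so the last reply's bit is *not* cleared prematurely; freeing it through tgt_free_reply_data is what finally makes the slot reusable.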
                            <comment id="90859" author="pichong" created="Tue, 5 Aug 2014 14:41:44 +0000"  >&lt;p&gt;I have attached a solution architecture document (MDTReplyReconstructionImprovement.architecture.pdf).&lt;/p&gt;

&lt;p&gt;It&apos;s a proposal that describes the functional improvements to support multiple filesystem-modifying MDT requests per client and MDT reply reconstruction in that context.&lt;/p&gt;

&lt;p&gt;It would be great to have some feedback from anyone interested in this feature.&lt;/p&gt;</comment>
                            <comment id="92918" author="pichong" created="Mon, 1 Sep 2014 13:57:57 +0000"  >&lt;p&gt;I would like to start working on a high level design document for this feature.&lt;br/&gt;
Please send your feedback on the architecture document by the end of the week (2014/09/07), so I can take it into account.&lt;br/&gt;
thanks.&lt;/p&gt;</comment>
                            <comment id="93180" author="bzzz" created="Thu, 4 Sep 2014 11:17:01 +0000"  >&lt;p&gt;Gregoire, the document looks good. &lt;/p&gt;

&lt;p&gt;the close case is special because of the open-lock cache: an RPC being handled by the MDT (so the mdc semaphore/the slot is busy) may ask for an open handle to be released (a lock cancellation leading to a close RPC), which requires another slot.&lt;/p&gt;</comment>
                            <comment id="94302" author="adilger" created="Wed, 17 Sep 2014 19:29:28 +0000"  >&lt;p&gt;Alex, what is left to be done with your patch before it can move from prototype to production?&lt;/p&gt;</comment>
                            <comment id="94339" author="adilger" created="Thu, 18 Sep 2014 02:57:32 +0000"  >&lt;p&gt;Gregoire,&lt;br/&gt;
some comments on the design doc.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;The &lt;tt&gt;mount.lustre&lt;/tt&gt; command will support a new option &lt;tt&gt;max_modify_md_rpcs_in_flight=num&lt;/tt&gt; that specifies the desired maximum number  of modify metadata RPCs in flight for that client. This will allow clients with different level of metadata activity, or different number of cores, to be able to send more or less modify metadata RPCs in parallel.&lt;/p&gt;&lt;/blockquote&gt;

&lt;p&gt;Lustre typically does not set tunables via mount options, since that would mean a large number of different possible options.  This would typically be handled by a read-write tunable &lt;tt&gt;lctl set_param mdc.*.max_modify_rpcs_in_flight=num&lt;/tt&gt; to match the normal &lt;tt&gt;mdc.*.max_rpcs_in_flight=num&lt;/tt&gt; tunable.  It may be that there is no reason to limit this beyond the existing max_rpcs_in_flight=8 default value.  Since OSTs are fine to have 8 writes in flight at one time, there is no reason to expect MDTs to have a problem with this either.&lt;/p&gt;

&lt;p&gt;Typically, the MDS will return the maximum number for max_rpcs_in_flight in obd_connect_data, and the client is free to choose any number less than or equal to this.  The default is what the server replies in ocd or, if unsupported, 1 for MDC and 8 for OSC.  Setting max_rpcs_in_flight to a larger number via /proc should return an error.  It probably makes sense to handle this OBD_CONNECT flag on the OSS as well, so both MDS and OSS services can tune this default parameter permanently in a single place (with &lt;tt&gt;lctl conf_param&lt;/tt&gt; or &lt;tt&gt;lctl set_param &amp;#91;-P&amp;#93;&lt;/tt&gt;).&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The existing code handles the close requests (MDS_CLOSE and MDS_DONE_WRITING) differently from other metadata requests. They benefit from a separate serialization allowing a close request to be sent in parallel of other requests, probably to avoid some deadlock situations (what is the exact use case, Jira or Bugzilla id?).&lt;/p&gt;&lt;/blockquote&gt;

&lt;p&gt;This is &lt;a href=&quot;https://bugzilla.lustre.org/show_bug.cgi?id=3462&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://bugzilla.lustre.org/show_bug.cgi?id=3462&lt;/a&gt; (really b=3166, but that one is private).  The original problem was related to sending the close while there was another RPC in flight (keeping the main slot in the last_rcvd file busy). Without this extra RPC in flight (and an extra slot to hold the reply reconstruction), the close could deadlock.&lt;/p&gt;

&lt;p&gt;The document looks good otherwise.&lt;/p&gt;</comment>
                            <comment id="94354" author="pichong" created="Thu, 18 Sep 2014 07:50:15 +0000"  >&lt;p&gt;thanks for your comments.&lt;br/&gt;
I understand the use of an mdc configuration parameter rather than a mount option.&lt;/p&gt;

&lt;p&gt;There is however an issue with using max_rpcs_in_flight to control the maximum number of modify metadata RPCs in parallel, since the limit will also apply to the non-modify metadata RPCs (getattr, readdir, ...). Even if the MDT has a single slot, the mdc must be able to send up to max_rpcs_in_flight getattr requests in parallel. This is the current behavior.&lt;/p&gt;

&lt;p&gt;Therefore, I think it&apos;s necessary to introduce a specific parameter for the maximum number of RPCs that depend on slot allocation on the MDT, max_md_rpcs_in_flight for instance. Its value could not be higher than max_rpcs_in_flight.&lt;/p&gt;</comment>
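The constraint discussed here (a separate modify-RPC tunable that may exceed neither the client's own max_rpcs_in_flight nor the server-advertised limit that later comments call ocd_maxmodrpcs) could be validated roughly as below. This is a hypothetical helper with invented struct and function names, not the actual lprocfs handler.

```c
#include <errno.h>

struct client_obd_tunables {
	unsigned int max_rpcs_in_flight;	/* overall per-target RPC limit */
	unsigned int ocd_maxmodrpcs;		/* limit advertised by the server at connect */
	unsigned int max_mod_rpcs_in_flight;	/* the new modify-RPC limit */
};

/* Reject any value outside [1, min(max_rpcs_in_flight, ocd_maxmodrpcs)],
 * mirroring the rule that the modify limit may not exceed either bound. */
static int set_max_mod_rpcs_in_flight(struct client_obd_tunables *cli,
				      unsigned int val)
{
	if (val < 1 ||
	    val > cli->max_rpcs_in_flight ||
	    val > cli->ocd_maxmodrpcs)
		return -ERANGE;
	cli->max_mod_rpcs_in_flight = val;
	return 0;
}
```

Keeping the two limits separate preserves the current behavior for non-modify RPCs: getattr and readdir traffic still runs up to max_rpcs_in_flight regardless of how many reply slots the MDT grants.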
                            <comment id="94359" author="pichong" created="Thu, 18 Sep 2014 09:30:16 +0000"  >&lt;p&gt;Alexey,&lt;br/&gt;
Could you explain what the motivation was for using a tag for metadata requests?&lt;br/&gt;
What was the drawback of releasing the reply data structure and the slot in the reply bitmap only when the reply ack is received?&lt;/p&gt;

&lt;p&gt;thanks.&lt;/p&gt;</comment>
                            <comment id="94360" author="bzzz" created="Thu, 18 Sep 2014 10:22:46 +0000"  >&lt;p&gt;we do not resend REP-ACKs, and they can be lost, meaning the corresponding slot can&apos;t be re-used as long as the client is alive. I considered a few schemes, e.g. a ping can bring some information, but the scheme with tags seems the most trivial one to me. note it&apos;s not perfect - we don&apos;t use REP-ACKs with the OST, so if 8 slots were used and the client then uses at most 2 slots for a long time, the remaining 6 slots are &quot;lost&quot;, at least until the client&apos;s eviction. probably a ping can be used to tell the target the highest replied XID - given ping is used on an idling connection, this should be OK?&lt;/p&gt;</comment>
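The "highest replied XID" idea floated here, and refined later in the thread as "the highest RPC XID for which a reply has been received and which has no unreplied lower-numbered XID", reduces to a simple computation if the client tracks its unreplied XIDs in order. A minimal sketch under that assumption (function and parameter names are invented for illustration):

```c
#include <stdint.h>

/* 'unreplied' holds the XIDs of requests sent but not yet replied,
 * sorted ascending; n == 0 means every request sent so far has been
 * replied. 'highest_sent_xid' is the largest XID handed out.
 *
 * Everything strictly below the lowest unreplied XID is known to be
 * replied, so the server may safely release reply slots up to that
 * value for this client. */
static uint64_t known_replied_xid(const uint64_t *unreplied, int n,
				  uint64_t highest_sent_xid)
{
	if (n == 0)
		return highest_sent_xid;	/* no gaps at all */
	return unreplied[0] - 1;
}
```

Note this matches Gregoire's later clarification that only monotonicity of XIDs matters, not consecutiveness: with XIDs 5 and 11 still unreplied out of 1..12, the reported value is 4, even though 6..10 already have replies.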
                            <comment id="95020" author="pichong" created="Fri, 26 Sep 2014 07:42:46 +0000"  >&lt;p&gt;Here is a new version of the architecture document.&lt;br/&gt;
It has been updated with comments posted by Alexey Zhuravlev and Andreas Dilger.&lt;/p&gt;</comment>
                            <comment id="100125" author="pichong" created="Wed, 26 Nov 2014 14:40:00 +0000"  >&lt;p&gt;Here is the design document of the MDT reply reconstruction improvement (MDTReplyReconstructionImprovement.design.pdf).&lt;br/&gt;
Please review it and post comments in the JIRA ticket.&lt;br/&gt;
Thanks.&lt;/p&gt;</comment>
                            <comment id="102918" author="gerrit" created="Thu, 8 Jan 2015 20:42:44 +0000"  >&lt;p&gt;Andreas Dilger (andreas.dilger@intel.com) uploaded a new patch: &lt;a href=&quot;http://review.whamcloud.com/13297&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/13297&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-5319&quot; title=&quot;Support multiple slots per client in last_rcvd file&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-5319&quot;&gt;&lt;del&gt;LU-5319&lt;/del&gt;&lt;/a&gt; mdt: pass __u64 for storing opdata&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 1ab36efb03e47adc3a48d93bfad3b77d23b67cba&lt;/p&gt;</comment>
                            <comment id="102920" author="adilger" created="Thu, 8 Jan 2015 20:52:33 +0000"  >&lt;p&gt;Gregoire, thank you for the excellent design document.  I was reading through this and had a few comments:&lt;/p&gt;
&lt;ul class=&quot;alternate&quot; type=&quot;square&quot;&gt;
	&lt;li&gt;is there even a need or benefit to the server tightly restricting the number of modifying RPCs in flight at connect time?  It seems to me that it makes sense to limit this at the client by default (like max_rpcs_in_flight), but there is no technical reason why some clients (e.g. login nodes) couldn&apos;t have, say, 128 modifying RPCs in flight.  What functional purpose does &lt;tt&gt;ocd_max_mod_rpcs&lt;/tt&gt; serve?
&lt;blockquote&gt;&lt;p&gt;It might be the opportunity to record the per-operation data as a 64-bit field: lcd_last_data and lcd_last_close_data are 32-bit fields whereas intent disposition is 64bits. The lrd_data field could be defined as 64 bits.&lt;/p&gt;&lt;/blockquote&gt;&lt;/li&gt;
	&lt;li&gt;This seems possible, though it would add some complexity to the code if we ever started to use this as a 64-bit field.  I notice that &lt;tt&gt;mdt_get_disposition()&lt;/tt&gt; and friends are all silently converting the passed value to a signed int, so I pushed patch &lt;a href=&quot;http://review.whamcloud.com/13297&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/13297&lt;/a&gt; to change that code to pass the __u64 through unchanged.&lt;/li&gt;
	&lt;li&gt;&lt;tt&gt;struct reply_data&lt;/tt&gt; should be padded to a power-of-two size so that it fits into disk blocks evenly, and named something like &lt;tt&gt;tgt_reply_data&lt;/tt&gt; or &lt;tt&gt;lustre_reply_data&lt;/tt&gt;.  &lt;tt&gt;struct reply_header&lt;/tt&gt; should be the same size as &lt;tt&gt;struct reply_data&lt;/tt&gt; so that the structures are all aligned, and also renamed in a similar manner.&lt;/li&gt;
	&lt;li&gt;for the Reply Data Release section, you wrote that solution #3 (received XID) is the preferred one, where the client sends &quot;the highest consecutive XID&quot; to the server in each RPC.  Does this depend on the XID values actually being &lt;em&gt;consecutive&lt;/em&gt; values?  That would be a problem, since the XIDs are global on a client and shared by all servers that the client is sending RPCs to, so they will not be consecutive to a single MDT or OST.  I &lt;em&gt;think&lt;/em&gt; what you mean is the client will send to each server the highest RPC XID for which a reply has been received and does not have an unreplied lower-numbered RPC XID.&lt;/li&gt;
&lt;/ul&gt;
</comment>
                            <comment id="103192" author="pichong" created="Mon, 12 Jan 2015 15:56:41 +0000"  >&lt;p&gt;Andreas, thanks you for the design review.&lt;/p&gt;
&lt;ul class=&quot;alternate&quot; type=&quot;square&quot;&gt;
	&lt;li&gt;restricting the number of modifying RPCs in flight at connect time&lt;br/&gt;
I recognize there is no strong need to have that restriction. It was mentioned in the architecture document and seemed to be accepted by everyone.&lt;br/&gt;
More importantly, it seems to me a way to prevent a client from flooding the MDT with its requests and keeping other clients from using the service at the expected performance level. But it&apos;s true there is no real checking on the server side at the moment.&lt;br/&gt;
The MDT provides a service, so it seems natural that it controls and limits its usage, and the &lt;tt&gt;ocd_maxmodrpcs&lt;/tt&gt; field is how it informs clients of the limit value.&lt;/li&gt;
&lt;/ul&gt;


&lt;ul class=&quot;alternate&quot; type=&quot;square&quot;&gt;
	&lt;li&gt;64-bit lrd_data field&lt;br/&gt;
After adding the patch you have posted, I will try to make the &lt;tt&gt;lrd_data&lt;/tt&gt; field 64-bit.&lt;/li&gt;
&lt;/ul&gt;


&lt;ul class=&quot;alternate&quot; type=&quot;square&quot;&gt;
	&lt;li&gt;on-disk structures&lt;br/&gt;
The structures used in the reply_data file are actually named &lt;tt&gt;lsd_reply_header&lt;/tt&gt; and &lt;tt&gt;lsd_reply_data&lt;/tt&gt;, similarly to &lt;tt&gt;lsd_client_data&lt;/tt&gt;.&lt;br/&gt;
I agree about the power-of-two size rounding and the same size for both.&lt;/li&gt;
&lt;/ul&gt;


&lt;ul class=&quot;alternate&quot; type=&quot;square&quot;&gt;
	&lt;li&gt;received XID&lt;br/&gt;
Actually the term &quot;the highest consecutive XID received by a client&quot; is not exact. I had seen that the XIDs are global on a client and shared among the client obd devices. But the only important point is that the XIDs are always increasing. The server manages this information for each client separately and releases the reply data with a lower XID for that client only.&lt;br/&gt;
With &quot;highest consecutive&quot; I meant there is no lower XID for which a reply was not received for that client obd device. I agree with your rephrasing, &quot;the highest RPC XID for which a reply has been received and does not have an unreplied lower-numbered RPC XID&quot;.&lt;/li&gt;
&lt;/ul&gt;
</comment>
                            <comment id="103196" author="bzzz" created="Mon, 12 Jan 2015 16:19:09 +0000"  >&lt;p&gt;I think we shouldn&apos;t limit this to MDT. OST does need this functionality as well.&lt;/p&gt;</comment>
                            <comment id="103308" author="pichong" created="Tue, 13 Jan 2015 08:18:45 +0000"  >&lt;p&gt;I agree the feature could also apply to the OST, and I have tried to take it into account in the design.&lt;br/&gt;
But the initial goal was to improve metadata client performance (see the ticket&apos;s initial description), and I will keep that scope since this is already significant work for me.&lt;br/&gt;
This does not prevent someone else from later extending the architecture solution and design to provide this functionality to the OST.&lt;/p&gt;</comment>
                            <comment id="103310" author="bzzz" created="Tue, 13 Jan 2015 08:23:56 +0000"  >&lt;p&gt;given the code is shared on the server, it doesn&apos;t make sense to put the client&apos;s implementation into the MDC and limit the scope, IMHO. there should be no changes from the architecture/design point of view. the patch I mentioned before addresses this already.&lt;/p&gt;</comment>
                            <comment id="107075" author="bzzz" created="Mon, 16 Feb 2015 14:48:19 +0000"  >&lt;p&gt;from the document it&apos;s not clear how that highest consecutive XID is maintained. RPCs aren&apos;t sent in XID order: at least with the current code it&apos;s possible to have XID=5 assigned, then have a few RPCs with XID=6,7,8,9,10 sent and replied, and only then is the RPC with XID=5 sent. if RPC(XID=10) was sent with that &quot;mark&quot;=9 and it races with RPC(XID=5), then we can lose the slot for XID=5?&lt;/p&gt;</comment>
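The race Alex describes disappears if XID assignment and insertion on the sending list are a single atomic step, which is what the subject of patch 13862 ("atomically assign XID and put on the sending list") suggests. A user-space model with invented names, not the ptlrpc code itself:

```c
#include <pthread.h>
#include <stdint.h>

static pthread_mutex_t xid_lock = PTHREAD_MUTEX_INITIALIZER;
static uint64_t next_xid = 1;
static uint64_t sending[128];	/* toy "sending list" of live XIDs */
static int nsending;

/* Allocate the XID and publish the request on the list under one lock,
 * so no observer can ever see an allocated-but-unlisted XID. Without
 * this, XID=5 could be allocated, XIDs 6..10 sent and replied, and a
 * "highest replied XID" mark computed meanwhile would wrongly cover 5. */
static uint64_t assign_xid_and_list(void)
{
	pthread_mutex_lock(&xid_lock);
	uint64_t xid = next_xid++;
	sending[nsending++] = xid;
	pthread_mutex_unlock(&xid_lock);
	return xid;
}
```

With this invariant, any code scanning the sending list while holding the lock sees every XID that has been handed out, so the lowest-unreplied computation can never skip over a not-yet-sent request.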
                            <comment id="107913" author="gerrit" created="Wed, 25 Feb 2015 11:12:32 +0000"  >&lt;p&gt;Alex Zhuravlev (alexey.zhuravlev@intel.com) uploaded a new patch: &lt;a href=&quot;http://review.whamcloud.com/13862&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/13862&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-5319&quot; title=&quot;Support multiple slots per client in last_rcvd file&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-5319&quot;&gt;&lt;del&gt;LU-5319&lt;/del&gt;&lt;/a&gt; ptlrpc: atomically assign XID and put on the sending list&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: fb8b7fb24e83f90bbb2fa11ff9aaebcecd257e4e&lt;/p&gt;</comment>
                            <comment id="108071" author="gerrit" created="Thu, 26 Feb 2015 08:50:36 +0000"  >&lt;p&gt;Alex Zhuravlev (alexey.zhuravlev@intel.com) uploaded a new patch: &lt;a href=&quot;http://review.whamcloud.com/13895&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/13895&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-5319&quot; title=&quot;Support multiple slots per client in last_rcvd file&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-5319&quot;&gt;&lt;del&gt;LU-5319&lt;/del&gt;&lt;/a&gt; protocol: support for multislot feature&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 94b4151ec488ed6af9a4d996b5753c767a980ef3&lt;/p&gt;</comment>
                            <comment id="108103" author="gerrit" created="Thu, 26 Feb 2015 15:41:37 +0000"  >&lt;p&gt;Alex Zhuravlev (alexey.zhuravlev@intel.com) uploaded a new patch: &lt;a href=&quot;http://review.whamcloud.com/13900&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/13900&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-5319&quot; title=&quot;Support multiple slots per client in last_rcvd file&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-5319&quot;&gt;&lt;del&gt;LU-5319&lt;/del&gt;&lt;/a&gt; mdc: new wrappers to support multislot feature&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 6bf51ad2daead1017f34bc0c28f0462ee8df9c4e&lt;/p&gt;</comment>
                            <comment id="108380" author="gerrit" created="Mon, 2 Mar 2015 08:50:24 +0000"  >&lt;p&gt;Alex Zhuravlev (alexey.zhuravlev@intel.com) uploaded a new patch: &lt;a href=&quot;http://review.whamcloud.com/13931&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/13931&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-5319&quot; title=&quot;Support multiple slots per client in last_rcvd file&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-5319&quot;&gt;&lt;del&gt;LU-5319&lt;/del&gt;&lt;/a&gt; protocol: define reserved OBD_CONNECT bits&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: b2_5&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: a3c540c609f8a092eb0a1c63806e71707745f3d3&lt;/p&gt;</comment>
                            <comment id="108381" author="gerrit" created="Mon, 2 Mar 2015 08:57:44 +0000"  >&lt;p&gt;Alex Zhuravlev (alexey.zhuravlev@intel.com) uploaded a new patch: &lt;a href=&quot;http://review.whamcloud.com/13932&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/13932&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-5319&quot; title=&quot;Support multiple slots per client in last_rcvd file&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-5319&quot;&gt;&lt;del&gt;LU-5319&lt;/del&gt;&lt;/a&gt; protocol: define OBD_CONNECT_MULTISLOT bit&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: b2_7&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 8635c06ebb3f7543d7e83b12ce6213940b1cf115&lt;/p&gt;</comment>
                            <comment id="108554" author="pichong" created="Tue, 3 Mar 2015 15:26:50 +0000"  >&lt;p&gt;Here is a new version of the design document.&lt;br/&gt;
It has been updated with comments posted by Andreas Dilger.&lt;/p&gt;</comment>
                            <comment id="108567" author="pichong" created="Tue, 3 Mar 2015 16:43:56 +0000"  >&lt;p&gt;Alex,&lt;/p&gt;

&lt;p&gt;I don&apos;t understand the objective of the several patches you recently posted to gerrit, as their content does not implement what was written &lt;b&gt;and reviewed&lt;/b&gt; in the design document. For example, the more appropriate solution to release reply data on the server side was identified in the design as the one based on XID, not the one based on tag reuse.&lt;/p&gt;

&lt;p&gt;The patches &lt;a href=&quot;http://review.whamcloud.com/13895&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/13895&lt;/a&gt; and &lt;a href=&quot;http://review.whamcloud.com/13900&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/13900&lt;/a&gt; implement the tag based solution, don&apos;t they ?&lt;/p&gt;</comment>
                            <comment id="108582" author="bzzz" created="Tue, 3 Mar 2015 16:58:50 +0000"  >&lt;p&gt;I don&apos;t think we should rely on XID only, as the processing time and order of RPCs can vary significantly. Imagine there are 32 RPCs in flight (say, with XIDs 1-32). for some reason, XID=32 takes a long time, so we won&apos;t be able to re-use the potentially free slots 1-31. IMO, this isn&apos;t very nice because the reply_log file grows for no reason. I think the XID-based approach is needed to address a different case, when RPCs-in-flight decreases and some tags aren&apos;t used for a while.&lt;/p&gt;</comment>
                            <comment id="108705" author="pichong" created="Wed, 4 Mar 2015 08:11:35 +0000"  >&lt;p&gt;Alex,&lt;br/&gt;
This is the kind of information you should have shared at the time of the design review!&lt;/p&gt;

&lt;p&gt;If we want to coordinate our efforts, as we both agreed by email last week, I think we &lt;b&gt;need to fully agree on the design&lt;/b&gt;. So please review the design document attached to the ticket (current version is 0.3) and give your comments before continuing to work on patches.&lt;/p&gt;

&lt;p&gt;If you prefer, I am also open to setting up a call for a live discussion of the design.&lt;br/&gt;
Regards,&lt;br/&gt;
Gr&#233;goire.&lt;/p&gt;
</comment>
                            <comment id="108713" author="bzzz" created="Wed, 4 Mar 2015 10:27:27 +0000"  >&lt;p&gt;sure.. from the design I read last time (a few weeks ago) I came to think we&apos;ll be using both methods (tag + XID). also, I observed that the XID approach has the issue I described on Feb 16. no one replied &lt;img class=&quot;emoticon&quot; src=&quot;https://jira.whamcloud.com/images/icons/emoticons/wink.png&quot; height=&quot;16&quot; width=&quot;16&quot; align=&quot;absmiddle&quot; alt=&quot;&quot; border=&quot;0&quot;/&gt;&lt;/p&gt;</comment>
                            <comment id="108728" author="pichong" created="Wed, 4 Mar 2015 14:10:55 +0000"  >&lt;p&gt;I agree with your comment posted on Feb 16.&lt;br/&gt;
Hopefully, your gerrit patch #13862 &quot;atomically assign XID and put on the sending list&quot; addresses the issue.&lt;/p&gt;</comment>
                            <comment id="108733" author="gerrit" created="Wed, 4 Mar 2015 15:01:51 +0000"  >&lt;p&gt;Gr&#233;goire Pichon (gregoire.pichon@bull.net) uploaded a new patch: &lt;a href=&quot;http://review.whamcloud.com/13960&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/13960&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-5319&quot; title=&quot;Support multiple slots per client in last_rcvd file&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-5319&quot;&gt;&lt;del&gt;LU-5319&lt;/del&gt;&lt;/a&gt; ptlrpc: Add OBD_CONNECT_MULTIMODRPCS flag&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 115af1c4a7568fc84aaa6225302bc0970ea61aa2&lt;/p&gt;</comment>
                            <comment id="108739" author="bzzz" created="Wed, 4 Mar 2015 15:52:14 +0000"  >&lt;p&gt;so, we haven&apos;t agreed on the design yet, but the patches are still coming &lt;img class=&quot;emoticon&quot; src=&quot;https://jira.whamcloud.com/images/icons/emoticons/wink.png&quot; height=&quot;16&quot; width=&quot;16&quot; align=&quot;absmiddle&quot; alt=&quot;&quot; border=&quot;0&quot;/&gt;&lt;/p&gt;</comment>
                            <comment id="109192" author="pichong" created="Mon, 9 Mar 2015 10:09:11 +0000"  >&lt;p&gt;Here is a new version of the design document: 0.4.&lt;br/&gt;
It has been updated to take into account Alex&apos;s comments.&lt;/p&gt;</comment>
                            <comment id="109619" author="bzzz" created="Fri, 13 Mar 2015 11:37:45 +0000"  >&lt;p&gt;the document looks solid; a few comments:&lt;/p&gt;

&lt;p&gt;&amp;gt; This will ensure the export&apos;s last transno can be rebuilt from disk in case of recovery.&lt;/p&gt;

&lt;p&gt;better to say &quot;highest committed transno&quot;&lt;/p&gt;

&lt;p&gt;&amp;gt; When the MDT will receive an RPC with the MSG_RESENT flag&lt;/p&gt;

&lt;p&gt;iirc, we have to check for MSG_REPLAY as well&lt;/p&gt;

&lt;p&gt;&amp;gt; This is because, without the client received xid information, the server would not be able to release the reply data in memory&lt;/p&gt;

&lt;p&gt;probably we should stop to support this after some future version.&lt;/p&gt;

&lt;p&gt;&amp;gt; 3. received xid&lt;br/&gt;
probably you should mention that currently imp_send_list isn&apos;t strictly ordered and XIDs &quot;from the past&quot; can join the list.&lt;br/&gt;
also, looking at imp_send_list isn&apos;t enough - there is imp_delayed_list. &lt;/p&gt;

&lt;p&gt;as for downgrade, I&apos;d think we shouldn&apos;t allow this if some of reply_data is valid - otherwise recovery can break.&lt;br/&gt;
I&apos;d suggest introducing an incompatibility flag preventing old code from mounting a multislot-enabled fs. to enable downgrade,&lt;br/&gt;
a simple scheme can be used: for example, once all the clients are disconnected, the MDT can reset that incompatibility flag&lt;br/&gt;
on disk.&lt;/p&gt;

&lt;p&gt;probably it makes sense to describe shortly what happens when a client disconnects: we have to release all the slots.&lt;/p&gt;

&lt;p&gt;performance improvements: some optimizations for bitmap scanning?&lt;/p&gt;

&lt;p&gt;testing should be mentioned: how do we verify execute-once semantics is retained.&lt;/p&gt;</comment>
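On the bitmap-scanning point above, one common optimization is to skip fully-occupied words instead of testing slot bits one at a time. A minimal sketch in Python (hypothetical illustration only; the actual MDT code is C and would use kernel bitmap helpers such as find_first_zero_bit):

```python
# Word-at-a-time scan for the first free reply slot. A word whose 64 bits
# are all set is skipped in one comparison; otherwise the lowest clear bit
# is isolated with a two's-complement trick on the inverted word.
WORD = 64
FULL = (1 << WORD) - 1          # a word with every slot occupied

def find_free_slot(bitmap_words):
    """Return the index of the first clear bit, or -1 if all slots are used."""
    for wi, word in enumerate(bitmap_words):
        if word == FULL:
            continue             # 64 occupied slots skipped at once
        inverted = ~word & FULL
        bit = (inverted & -inverted).bit_length() - 1   # lowest clear bit
        return wi * WORD + bit
    return -1

# Two fully-occupied words, then a word with only bit 3 free:
print(find_free_slot([FULL, FULL, FULL ^ (1 << 3)]))   # slot 131 (2*64 + 3)
```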
                            <comment id="109822" author="pichong" created="Tue, 17 Mar 2015 10:33:59 +0000"  >&lt;p&gt;&amp;gt; better to say &quot;highest committed transno&quot;&lt;br/&gt;
ok&lt;/p&gt;

&lt;p&gt;&amp;gt; we have to check for MSG_REPLAY as well&lt;br/&gt;
ok&lt;/p&gt;

&lt;p&gt;&amp;gt; probably we should stop to support this after some future version.&lt;br/&gt;
I agree, but I don&apos;t think there is much to do at the moment.&lt;/p&gt;

&lt;p&gt;&amp;gt; probably you should mention that currently imp_send_list isn&apos;t strictly ordered and XIDs &quot;from the past&quot; can join the list. also, looking at imp_send_list isn&apos;t enough - there is imp_delayed_list. &lt;br/&gt;
ok&lt;/p&gt;

&lt;p&gt;&amp;gt; as for downgrade, I&apos;d think we shouldn&apos;t allow this if some of reply_data is valid&lt;br/&gt;
I agree&lt;/p&gt;

&lt;p&gt;&amp;gt; probably it makes sense to describe shortly what happens when a client disconnects: we have to release all the slots.&lt;br/&gt;
Are you talking about the old on-disk reply data of the client that disconnects?&lt;br/&gt;
That&apos;s a good idea. It would allow the &quot;reply data&quot; file to be completely cleared when all clients have disconnected; the file could then be truncated to avoid unnecessary parsing at the next MDT start. It would also avoid the use of the slot generation number and the reply data generation number &lt;tt&gt;lrd_generation&lt;/tt&gt;.&lt;br/&gt;
However, I am wondering if parsing the entire &quot;reply data&quot; file when a client disconnects is acceptable in terms of processing time.&lt;/p&gt;

&lt;p&gt;&amp;gt; testing should be mentioned: how do we verify execute-once semantics is retained.&lt;br/&gt;
What do you mean by &quot;execute-once&quot; semantics ? Is it the fact that any client operation is treated only once by the server, even in case of RPC resend, target recovery, etc... ?&lt;/p&gt;</comment>
                            <comment id="109856" author="gerrit" created="Tue, 17 Mar 2015 16:45:20 +0000"  >&lt;p&gt;Gr&#233;goire Pichon (gregoire.pichon@bull.net) uploaded a new patch: &lt;a href=&quot;http://review.whamcloud.com/14095&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/14095&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-5319&quot; title=&quot;Support multiple slots per client in last_rcvd file&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-5319&quot;&gt;&lt;del&gt;LU-5319&lt;/del&gt;&lt;/a&gt; ptlrpc: Add a tag field to ptlrpc messages&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 4456dfe1242602fbead299c64041cb6bdbfe2408&lt;/p&gt;</comment>
                            <comment id="109950" author="bzzz" created="Wed, 18 Mar 2015 07:03:11 +0000"  >&lt;p&gt;&amp;gt; Are you talking about the old on-disk reply data of the client that disconnects? That&apos;s a good idea. It would allow the &quot;reply data&quot; file to be completely cleared when all clients have disconnected; the file could then be truncated to avoid unnecessary parsing at the next MDT start. It would also avoid the use of the slot generation number and the reply data generation number lrd_generation. However, I am wondering if parsing the entire &quot;reply data&quot; file when a client disconnects is acceptable in terms of processing time.&lt;/p&gt;

&lt;p&gt;the primary goal is to support backward compatibility (even in some restricted form). for simplicity, the scheme can be as follows: when all the clients are disconnected (which is easy and cheap to detect), the whole content of reply_log can be discarded.&lt;/p&gt;

&lt;p&gt;&amp;gt; testing should be mentioned: how do we verify execute-once semantics is retained.&lt;br/&gt;
What do you mean by &quot;execute-once&quot; semantics ? Is it the fact that any client operation is treated only once by the server, even in case of RPC resend, target recovery, etc... ?&lt;/p&gt;

&lt;p&gt;correct. we have to verify that if RPCs A and B were sent concurrently (i.e. using different tags), the server reconstructs the corresponding replies.&lt;/p&gt;</comment>
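The execute-once property under discussion can be modeled with a toy reply cache (plain Python with invented names, not the actual MDT implementation): the server keeps one saved reply per (client, tag) slot, and a resent RPC whose (xid, tag) pair matches a saved slot gets the stored reply reconstructed instead of being executed a second time.

```python
# Toy per-(client, tag) reply cache illustrating execute-once semantics.
class ReplyCache:
    def __init__(self):
        self.slots = {}            # (client, tag) -> (xid, saved reply)
        self.executions = 0        # how many times ops actually ran

    def handle(self, client, tag, xid, op, resent=False):
        saved = self.slots.get((client, tag))
        if resent and saved and saved[0] == xid:
            return saved[1]        # reconstruct the reply, do NOT re-execute
        self.executions += 1
        reply = op()               # execute the filesystem-modifying op
        self.slots[(client, tag)] = (xid, reply)   # tag reuse overwrites slot
        return reply

cache = ReplyCache()
# Concurrent RPCs A and B use different tags, so each keeps its own slot.
a = cache.handle("cli1", tag=1, xid=10, op=lambda: "created f1")
b = cache.handle("cli1", tag=2, xid=11, op=lambda: "created f2")
# Client resends A after a lost reply: same answer, no second execution.
a2 = cache.handle("cli1", tag=1, xid=10, op=lambda: "created f1", resent=True)
assert a2 == a and cache.executions == 2
```

Testing along these lines checks exactly the scenario raised above: two in-flight RPCs with different tags must both be reconstructable after a resend.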
                            <comment id="110474" author="gerrit" created="Tue, 24 Mar 2015 14:36:30 +0000"  >&lt;p&gt;Gr&#233;goire Pichon (gregoire.pichon@bull.net) uploaded a new patch: &lt;a href=&quot;http://review.whamcloud.com/14153&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/14153&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-5319&quot; title=&quot;Support multiple slots per client in last_rcvd file&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-5319&quot;&gt;&lt;del&gt;LU-5319&lt;/del&gt;&lt;/a&gt; mdc: Add max modify RPCs in flight variable&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: bad683c33942fbc47cb9ae6c22187f95669ab3a4&lt;/p&gt;</comment>
                            <comment id="110847" author="gerrit" created="Fri, 27 Mar 2015 15:28:57 +0000"  >&lt;p&gt;Oleg Drokin (oleg.drokin@intel.com) merged in patch &lt;a href=&quot;http://review.whamcloud.com/13297/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/13297/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-5319&quot; title=&quot;Support multiple slots per client in last_rcvd file&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-5319&quot;&gt;&lt;del&gt;LU-5319&lt;/del&gt;&lt;/a&gt; mdt: pass __u64 for storing opdata&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: 6cad330ded9c6ff21b35229139b7f60fbbfe80c6&lt;/p&gt;</comment>
                            <comment id="111094" author="gerrit" created="Tue, 31 Mar 2015 13:43:00 +0000"  >&lt;p&gt;Oleg Drokin (oleg.drokin@intel.com) merged in patch &lt;a href=&quot;http://review.whamcloud.com/13960/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/13960/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-5319&quot; title=&quot;Support multiple slots per client in last_rcvd file&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-5319&quot;&gt;&lt;del&gt;LU-5319&lt;/del&gt;&lt;/a&gt; ptlrpc: Add OBD_CONNECT_MULTIMODRPCS flag&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: 807a40c5678109087e2c8759bd86253034804c8d&lt;/p&gt;</comment>
                            <comment id="111217" author="pichong" created="Wed, 1 Apr 2015 14:14:59 +0000"  >&lt;p&gt;Alex,&lt;br/&gt;
I have seen the patch for &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-6376&quot; title=&quot;Add RPC lock for OSP update RPC &quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-6376&quot;&gt;&lt;del&gt;LU-6376&lt;/del&gt;&lt;/a&gt; &quot;Add RPC lock for OSP update RPC&quot; has been landed last week.&lt;br/&gt;
What is the role of OSP component ?&lt;br/&gt;
What kind of metadata operations does it perform ?&lt;/p&gt;</comment>
                            <comment id="111218" author="bzzz" created="Wed, 1 Apr 2015 14:22:28 +0000"  >&lt;p&gt;OSP represents a remote OSD. say, there are MDT and OST nodes: the MDT can talk to the OST using the regular OSD API, and OSP is a proxy transferring the API&apos;s methods over the network, including the modifying methods. at the moment it has to serialize modifications due to the single-slot last_rcvd, and this is a big limitation given that OSP is used for distributed metadata and other features. this is why I asked for a generic enough implementation which can easily be used by other components, including OSP.&lt;/p&gt;</comment>
                            <comment id="111348" author="pichong" created="Thu, 2 Apr 2015 11:43:22 +0000"  >&lt;p&gt;Alex,&lt;br/&gt;
Could you give simple use cases/examples where OSP is used to transfer modifying methods?&lt;br/&gt;
This will help me write testcases.&lt;br/&gt;
Thanks.&lt;/p&gt;</comment>
                            <comment id="111349" author="bzzz" created="Thu, 2 Apr 2015 11:50:50 +0000"  >&lt;p&gt;there are two major features using that - LFSCK (check sanity-lfsck.sh) and DNE (in many scripts, look for &quot;lfs mkdir&quot;).&lt;/p&gt;</comment>
                            <comment id="111552" author="gerrit" created="Mon, 6 Apr 2015 01:05:34 +0000"  >&lt;p&gt;Oleg Drokin (oleg.drokin@intel.com) merged in patch &lt;a href=&quot;http://review.whamcloud.com/14095/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/14095/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-5319&quot; title=&quot;Support multiple slots per client in last_rcvd file&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-5319&quot;&gt;&lt;del&gt;LU-5319&lt;/del&gt;&lt;/a&gt; ptlrpc: Add a tag field to ptlrpc messages&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: 3998a8e474b58a5bb4bc47b620adc836c27ab70d&lt;/p&gt;</comment>
                            <comment id="111639" author="gerrit" created="Tue, 7 Apr 2015 11:17:18 +0000"  >&lt;p&gt;Gr&#233;goire Pichon (gregoire.pichon@bull.net) uploaded a new patch: &lt;a href=&quot;http://review.whamcloud.com/14374&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/14374&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-5319&quot; title=&quot;Support multiple slots per client in last_rcvd file&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-5319&quot;&gt;&lt;del&gt;LU-5319&lt;/del&gt;&lt;/a&gt; mdc: manage number of modify RPCs in flight&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: fc37353393a36f10420c6e6396dab1a9b5f42acb&lt;/p&gt;</comment>
                            <comment id="111640" author="gerrit" created="Tue, 7 Apr 2015 11:17:19 +0000"  >&lt;p&gt;Gr&#233;goire Pichon (gregoire.pichon@bull.net) uploaded a new patch: &lt;a href=&quot;http://review.whamcloud.com/14375&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/14375&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-5319&quot; title=&quot;Support multiple slots per client in last_rcvd file&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-5319&quot;&gt;&lt;del&gt;LU-5319&lt;/del&gt;&lt;/a&gt; osp: manage number of modify RPCs in flight&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 29a0d43597a328c10d6167c734876736637db187&lt;/p&gt;</comment>
                            <comment id="111641" author="gerrit" created="Tue, 7 Apr 2015 11:17:20 +0000"  >&lt;p&gt;Gr&#233;goire Pichon (gregoire.pichon@bull.net) uploaded a new patch: &lt;a href=&quot;http://review.whamcloud.com/14376&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/14376&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-5319&quot; title=&quot;Support multiple slots per client in last_rcvd file&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-5319&quot;&gt;&lt;del&gt;LU-5319&lt;/del&gt;&lt;/a&gt; mdc: remove deprecated metadata RPC serialization&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 38e9d11ef4e4a36a80f7bb66a203e1c1c3d4f537&lt;/p&gt;</comment>
                            <comment id="111805" author="pichong" created="Thu, 9 Apr 2015 11:51:58 +0000"  >&lt;p&gt;Here is version 0.5 of the design document.&lt;br/&gt;
It has been updated to address Alexey&apos;s last comments.&lt;/p&gt;</comment>
                            <comment id="114088" author="gerrit" created="Fri, 1 May 2015 22:11:28 +0000"  >&lt;p&gt;Gr&#233;goire Pichon (gregoire.pichon@bull.net) uploaded a new patch: &lt;a href=&quot;http://review.whamcloud.com/14655&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/14655&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-5319&quot; title=&quot;Support multiple slots per client in last_rcvd file&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-5319&quot;&gt;&lt;del&gt;LU-5319&lt;/del&gt;&lt;/a&gt; client: support multiple modify RPCs in parallel&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 5b122238c6f8f0d58ac74f2466e887098de01c2b&lt;/p&gt;</comment>
                            <comment id="115189" author="pichong" created="Wed, 13 May 2015 15:21:05 +0000"  >&lt;p&gt;Here is version 1 of the test plan document.&lt;br/&gt;
Please feel free to review it and give feedback.&lt;/p&gt;</comment>
                            <comment id="115190" author="gerrit" created="Wed, 13 May 2015 15:22:29 +0000"  >&lt;p&gt;Gr&#233;goire Pichon (gregoire.pichon@bull.net) uploaded a new patch: &lt;a href=&quot;http://review.whamcloud.com/14793&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/14793&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-5319&quot; title=&quot;Support multiple slots per client in last_rcvd file&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-5319&quot;&gt;&lt;del&gt;LU-5319&lt;/del&gt;&lt;/a&gt; ptlrpc: embed highest XID in each request&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 5f2bf2d0b79825bd2214809fee2c679a1f010fd9&lt;/p&gt;</comment>
                            <comment id="115809" author="gerrit" created="Tue, 19 May 2015 14:12:22 +0000"  >&lt;p&gt;Gr&#233;goire Pichon (gregoire.pichon@bull.net) uploaded a new patch: &lt;a href=&quot;http://review.whamcloud.com/14860&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/14860&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-5319&quot; title=&quot;Support multiple slots per client in last_rcvd file&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-5319&quot;&gt;&lt;del&gt;LU-5319&lt;/del&gt;&lt;/a&gt; mdt: support multiple modify RCPs in parallel&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 1d9870b493954d53d8cbb1ff994a99817df0b2d7&lt;/p&gt;</comment>
                            <comment id="115810" author="gerrit" created="Tue, 19 May 2015 14:12:23 +0000"  >&lt;p&gt;Gr&#233;goire Pichon (gregoire.pichon@bull.net) uploaded a new patch: &lt;a href=&quot;http://review.whamcloud.com/14861&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/14861&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-5319&quot; title=&quot;Support multiple slots per client in last_rcvd file&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-5319&quot;&gt;&lt;del&gt;LU-5319&lt;/del&gt;&lt;/a&gt; tests: testcases for multiple modify RPCs feature&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 1fcb722e9b0b22a0cd5ae4640a1abf146efd1877&lt;/p&gt;</comment>
                            <comment id="115811" author="gerrit" created="Tue, 19 May 2015 14:12:24 +0000"  >&lt;p&gt;Gr&#233;goire Pichon (gregoire.pichon@bull.net) uploaded a new patch: &lt;a href=&quot;http://review.whamcloud.com/14862&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/14862&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-5319&quot; title=&quot;Support multiple slots per client in last_rcvd file&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-5319&quot;&gt;&lt;del&gt;LU-5319&lt;/del&gt;&lt;/a&gt; utils: update lr_reader to display additional data&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 0738702663851f3d01f45b1725df2c623b507bb9&lt;/p&gt;</comment>
                            <comment id="116340" author="adilger" created="Mon, 25 May 2015 17:41:52 +0000"  >&lt;p&gt;Gr&#233;goire, it looks like your patches are regularly causing conf-sanity test_32&lt;span class=&quot;error&quot;&gt;&amp;#91;abd&amp;#93;&lt;/span&gt; to fail or time out.  This is not the case with other patches being tested recently, so it looks like there is a regression in your patches 14860 (test fail) and 14861 (test timeout).&lt;/p&gt;

&lt;p&gt;One failure in 14860 is:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;LustreError: 14561:0:(class_obd.c:684:cleanup_obdclass()) obd_memory max: 53634235, leaked: 152
shadow-10vm8: 
shadow-10vm8: Memory leaks detected
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Could you please investigate?&lt;/p&gt;</comment>
                            <comment id="116994" author="gerrit" created="Sun, 31 May 2015 17:03:15 +0000"  >&lt;p&gt;Oleg Drokin (oleg.drokin@intel.com) merged in patch &lt;a href=&quot;http://review.whamcloud.com/14153/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/14153/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-5319&quot; title=&quot;Support multiple slots per client in last_rcvd file&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-5319&quot;&gt;&lt;del&gt;LU-5319&lt;/del&gt;&lt;/a&gt; mdc: add max modify RPCs in flight variable&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: 60c05ea9f66f9bd3f5fd35942a12edb1e311c455&lt;/p&gt;</comment>
                            <comment id="119882" author="gerrit" created="Mon, 29 Jun 2015 22:06:18 +0000"  >&lt;p&gt;Oleg Drokin (oleg.drokin@intel.com) merged in patch &lt;a href=&quot;http://review.whamcloud.com/14793/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/14793/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-5319&quot; title=&quot;Support multiple slots per client in last_rcvd file&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-5319&quot;&gt;&lt;del&gt;LU-5319&lt;/del&gt;&lt;/a&gt; ptlrpc: embed highest XID in each request&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: bf3e7f67cb33f3b4e0590ef8af3843ac53d0a4e8&lt;/p&gt;</comment>
                            <comment id="119991" author="gerrit" created="Wed, 1 Jul 2015 01:44:11 +0000"  >&lt;p&gt;Oleg Drokin (oleg.drokin@intel.com) merged in patch &lt;a href=&quot;http://review.whamcloud.com/14374/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/14374/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-5319&quot; title=&quot;Support multiple slots per client in last_rcvd file&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-5319&quot;&gt;&lt;del&gt;LU-5319&lt;/del&gt;&lt;/a&gt; mdc: manage number of modify RPCs in flight&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: 1fc013f90175d1e50d7a22b404ad6abd31a43e38&lt;/p&gt;</comment>
                            <comment id="119994" author="gerrit" created="Wed, 1 Jul 2015 02:01:39 +0000"  >&lt;p&gt;Oleg Drokin (oleg.drokin@intel.com) merged in patch &lt;a href=&quot;http://review.whamcloud.com/14860/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/14860/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-5319&quot; title=&quot;Support multiple slots per client in last_rcvd file&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-5319&quot;&gt;&lt;del&gt;LU-5319&lt;/del&gt;&lt;/a&gt; mdt: support multiple modify RCPs in parallel&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: 5fc7aa3687daca5c14b0e479c58146e0987daf7f&lt;/p&gt;</comment>
                            <comment id="121522" author="pichong" created="Fri, 17 Jul 2015 12:05:49 +0000"  >&lt;p&gt;Here is version 2 of the test plan document, updated with tests results.&lt;/p&gt;</comment>
                            <comment id="123672" author="gerrit" created="Sun, 9 Aug 2015 23:43:07 +0000"  >&lt;p&gt;Oleg Drokin (oleg.drokin@intel.com) merged in patch &lt;a href=&quot;http://review.whamcloud.com/14862/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/14862/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-5319&quot; title=&quot;Support multiple slots per client in last_rcvd file&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-5319&quot;&gt;&lt;del&gt;LU-5319&lt;/del&gt;&lt;/a&gt; utils: update lr_reader to display additional data&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: 6460ae59bf6e5175797dc66ecbe560eebc8b6333&lt;/p&gt;</comment>
                            <comment id="125200" author="gerrit" created="Wed, 26 Aug 2015 15:49:02 +0000"  >&lt;p&gt;Oleg Drokin (oleg.drokin@intel.com) merged in patch &lt;a href=&quot;http://review.whamcloud.com/14861/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/14861/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-5319&quot; title=&quot;Support multiple slots per client in last_rcvd file&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-5319&quot;&gt;&lt;del&gt;LU-5319&lt;/del&gt;&lt;/a&gt; tests: testcases for multiple modify RPCs feature&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: c2d27a0f12688c0d029880919f8b002e557b540c&lt;/p&gt;</comment>
                            <comment id="125356" author="pjones" created="Thu, 27 Aug 2015 13:58:41 +0000"  >&lt;p&gt;Landed for 2.8&lt;/p&gt;</comment>
                            <comment id="135628" author="niu" created="Wed, 9 Dec 2015 04:35:28 +0000"  >&lt;p&gt;The multi-slots implementation introduced a regression, see &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-5951&quot; title=&quot;sanity test_39k: mtime is lost on close&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-5951&quot;&gt;&lt;del&gt;LU-5951&lt;/del&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;To get the unreplied requests by scanning the sending/delayed lists, the current multi-slots implementation moved the xid assignment from the request packing stage to the request sending stage. However, that broke the original mechanism used to coordinate the timestamp updates on OST objects (needed for out-of-order operations such as setattr, truncate and write).&lt;/p&gt;

&lt;p&gt;To fix this regression, &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-5951&quot; title=&quot;sanity test_39k: mtime is lost on close&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-5951&quot;&gt;&lt;del&gt;LU-5951&lt;/del&gt;&lt;/a&gt; moved the xid assignment back to request packing stage, and introduced an unreplied list to track all the unreplied requests. Following is a brief description of the &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-5951&quot; title=&quot;sanity test_39k: mtime is lost on close&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-5951&quot;&gt;&lt;del&gt;LU-5951&lt;/del&gt;&lt;/a&gt; patch:&lt;/p&gt;

&lt;p&gt;obd_import-&amp;gt;imp_unreplied_list is introduced to track all the unreplied requests; all requests in the list are sorted by xid, so that the client can get the known maximal replied xid by checking the first element in the list.&lt;/p&gt;

&lt;p&gt;obd_import-&amp;gt;imp_known_replied_xid is introduced for sanity-check purposes; it&apos;s updated along with imp_unreplied_list.&lt;/p&gt;

&lt;p&gt;Once a request is built, it&apos;ll be inserted into the unreplied list; when the reply is seen by the client, or the request is about to be freed, the request is removed from the list. Two tricky points are worth mentioning here:&lt;/p&gt;

&lt;p&gt;1. Replay requests need to be added back to the unreplied list before sending. Instead of adding them back one by one during replay, we choose to add them all back together before replay; that&apos;s easier for strict sanity checking and less bug-prone.&lt;/p&gt;

&lt;p&gt;2. The sanity check on the server side has been strengthened a lot. To satisfy the stricter check, connect &amp;amp; disconnect requests no longer carry the known replied xid; see the comments in ptlrpc_send_new_req() for details.&lt;/p&gt;</comment>
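The mechanism described in the comment above can be sketched as a toy model (plain Python, heavily simplified; the class and method names are illustrative, not the kernel symbols): xids are assigned and tracked at packing time in a list kept sorted by xid, and the known replied xid is derived from the head of that list.

```python
import bisect

# Toy model of the imp_unreplied_list idea: XID assignment at packing time,
# a sorted unreplied list, and the known-replied watermark at its head.
class Import:
    def __init__(self):
        self.unreplied = []        # sorted xids of requests without a reply
        self.next_xid = 1

    def pack_request(self):
        """Assign the xid at packing time and track it immediately."""
        xid = self.next_xid
        self.next_xid += 1
        bisect.insort(self.unreplied, xid)   # keep the list sorted by xid
        return xid

    def reply_seen(self, xid):
        self.unreplied.remove(xid)

    def known_replied_xid(self):
        """Every xid below the first unreplied one is known to be replied."""
        if self.unreplied:
            return self.unreplied[0] - 1
        return self.next_xid - 1

imp = Import()
xids = [imp.pack_request() for _ in range(4)]   # xids 1..4
imp.reply_seen(2); imp.reply_seen(3)
print(imp.known_replied_xid())   # still 0: xid 1 is outstanding
imp.reply_seen(1)
print(imp.known_replied_xid())   # 3: only xid 4 remains unreplied
```

Carrying this watermark in each outgoing request is what lets the server release the reply records of everything at or below it, which is the regression fix LU-5951 landed.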
                    </comments>
                <issuelinks>
                            <issuelinktype id="10010">
                    <name>Duplicate</name>
                                                                <inwardlinks description="is duplicated by">
                                                        </inwardlinks>
                                    </issuelinktype>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                            <outwardlinks description="is related to ">
                                        <issuelink>
            <issuekey id="29163">LU-6386</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="32210">LU-7185</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="33072">LU-7410</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="31120">LU-6864</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="31791">LUDOC-304</issuekey>
        </issuelink>
                            </outwardlinks>
                                                                <inwardlinks description="is related to">
                                        <issuelink>
            <issuekey id="31059">LU-6840</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="27699">LU-5951</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="31424">LU-6981</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="34384">LU-7729</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="31605">LU-7028</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="31845">LU-7082</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="33049">LU-7408</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="31062">LU-6841</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="18725">LU-3285</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="61736">LU-14144</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="12696">LU-933</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="30759">LU-6753</issuekey>
        </issuelink>
                            </inwardlinks>
                                    </issuelinktype>
                    </issuelinks>
                <attachments>
                            <attachment id="15847" name="MDTReplyReconstructionImprovement.architecture.pdf" size="466549" author="pichong" created="Fri, 26 Sep 2014 07:42:46 +0000"/>
                            <attachment id="17460" name="MDTReplyReconstructionImprovement.design.pdf" size="873874" author="pichong" created="Thu, 9 Apr 2015 11:51:58 +0000"/>
                            <attachment id="18436" name="MDTReplyReconstructionImprovement.testplan.pdf" size="629378" author="pichong" created="Fri, 17 Jul 2015 12:05:49 +0000"/>
                            <attachment id="15442" name="mdtest lustre 2.5.60 file creation b.png" size="10162" author="pichong" created="Thu, 31 Jul 2014 06:57:39 +0000"/>
                            <attachment id="15443" name="mdtest lustre 2.5.60 file removal b.png" size="10056" author="pichong" created="Thu, 31 Jul 2014 06:57:39 +0000"/>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzwr3b:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>14856</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                                                                                </customfields>
    </item>
</channel>
</rss>