<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 01:42:05 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-4367] unlink performance regression on lustre-2.5.52 client</title>
                <link>https://jira.whamcloud.com/browse/LU-4367</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;The lustre-2.5.52 client (and possibly older clients as well) causes a metadata performance regression (unlinking files in a single shared directory).&lt;br/&gt;
Here are test results on lustre-2.5.52 clients and lustre-2.4.1 clients; lustre-2.5.52 is running on all servers.&lt;/p&gt;

&lt;p&gt;1 x MDS, 4 x OSS (32 x OST) and 16 clients (64 processes, 20000 files per process)&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;lustre-2.4.1 client

4.1-take2.log
-- started at 12/09/2013 07:31:29 --

mdtest-1.9.1 was launched with 64 total task(s) on 16 node(s)
Command line used: /work/tools/bin/mdtest -d /lustre/dir.0 -n 20000 -F -i 3
Path: /lustre
FS: 1141.8 TiB   Used FS: 0.0%   Inodes: 50.0 Mi   Used Inodes: 0.0%

64 tasks, 1280000 files

SUMMARY: (of 3 iterations)
   Operation                      Max            Min           Mean        Std Dev
   ---------                      ---            ---           ----        -------
   File creation     :      58200.265      56783.559      57589.448        594.589
   File stat         :     123351.857     109571.584     114223.612       6455.043
   File read         :     109917.788      83891.903      99965.718      11472.968
   File removal      :      60825.889      59066.121      59782.774        754.599
   Tree creation     :       4048.556       1971.934       3082.293        853.878
   Tree removal      :         21.269         15.069         18.204          2.532

-- finished at 12/09/2013 07:34:53 --
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;lustre-2.5.52 client

-- started at 12/09/2013 07:13:42 --

mdtest-1.9.1 was launched with 64 total task(s) on 16 node(s)
Command line used: /work/tools/bin/mdtest -d /lustre/dir.0 -n 20000 -F -i 3
Path: /lustre
FS: 1141.8 TiB   Used FS: 0.0%   Inodes: 50.0 Mi   Used Inodes: 0.0%

64 tasks, 1280000 files

SUMMARY: (of 3 iterations)
   Operation                      Max            Min           Mean        Std Dev
   ---------                      ---            ---           ----        -------
   File creation     :      58286.631      56689.423      57298.286        705.112
   File stat         :     127671.818     116429.261     121610.854       4631.684
   File read         :     173527.817     158205.242     166676.568       6359.445
   File removal      :      46818.194      45638.851      46118.111        506.151
   Tree creation     :       3844.458       2576.354       3393.050        578.560
   Tree removal      :         21.383         18.329         19.844          1.247

-- finished at 12/09/2013 07:17:07 --
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;46K ops/sec (lustre-2.5.52) vs 60K ops/sec (lustre-2.4.1): a roughly 25% performance drop on lustre-2.5.52 compared to lustre-2.4.1.&lt;/p&gt;</description>
                <environment></environment>
        <key id="22390">LU-4367</key>
            <summary>unlink performance regression on lustre-2.5.52 client</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="2" iconUrl="https://jira.whamcloud.com/images/icons/priorities/critical.svg">Critical</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="laisiyao">Lai Siyao</assignee>
                                    <reporter username="ihara">Shuichi Ihara</reporter>
                        <labels>
                            <label>HB</label>
                    </labels>
                <created>Mon, 9 Dec 2013 15:52:17 +0000</created>
                <updated>Thu, 13 Oct 2016 18:10:07 +0000</updated>
                            <resolved>Wed, 12 Nov 2014 16:51:13 +0000</resolved>
                                    <version>Lustre 2.5.0</version>
                                    <fixVersion>Lustre 2.7.0</fixVersion>
                                        <due></due>
                            <votes>0</votes>
                                    <watches>12</watches>
                                                                            <comments>
                            <comment id="73149" author="green" created="Mon, 9 Dec 2013 23:36:31 +0000"  >&lt;p&gt;Did this happen only on 2.5.52, i.e. were 2.5.51 servers fine? Any chance you can arrive at the patch that introduced this with a bit of git bisect?&lt;/p&gt;</comment>
                            <comment id="73150" author="pjones" created="Mon, 9 Dec 2013 23:37:01 +0000"  >&lt;p&gt;Cliff&lt;/p&gt;

&lt;p&gt;Have you seen any performance drops like this on Hyperion?&lt;/p&gt;

&lt;p&gt;Peter&lt;/p&gt;</comment>
                            <comment id="73161" author="ihara" created="Tue, 10 Dec 2013 00:26:06 +0000"  >&lt;p&gt;At least 2.5.0 and 2.5.51 are also fine. It seems something changed between 2.5.51 and 2.5.52. I will try git bisect to find exactly which commit caused this performance difference.&lt;/p&gt;

&lt;p&gt;2.5.0 client&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;-- started at 12/09/2013 15:41:13 --

mdtest-1.9.1 was launched with 64 total task(s) on 16 node(s)
Command line used: /work/tools/bin/mdtest -d /lustre/dir.0 -n 20000 -F -i 3
Path: /lustre
FS: 1141.8 TiB   Used FS: 0.0%   Inodes: 50.0 Mi   Used Inodes: 0.0%

64 tasks, 1280000 files

SUMMARY: (of 3 iterations)
   Operation                      Max            Min           Mean        Std Dev
   ---------                      ---            ---           ----        -------
   File creation     :      56576.814      56173.806      56435.397        185.176
   File stat         :     122978.552     108868.929     115211.059       5847.741
   File read         :     108518.269      86626.909      94978.533       9660.755
   File removal      :      61474.088      59462.447      60343.718        839.925
   Tree creation     :       4253.858       2061.083       3124.005        896.447
   Tree removal      :         22.261         14.862         19.262          3.179

-- finished at 12/09/2013 15:44:39 --
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;2.5.51 client&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;-- started at 12/09/2013 16:10:46 --

mdtest-1.9.1 was launched with 64 total task(s) on 16 node(s)
Command line used: /work/tools/bin/mdtest -d /lustre/dir.0 -n 20000 -F -i 3
Path: /lustre
FS: 1141.8 TiB   Used FS: 0.0%   Inodes: 50.0 Mi   Used Inodes: 0.0%

64 tasks, 1280000 files

SUMMARY: (of 3 iterations)
   Operation                      Max            Min           Mean        Std Dev
   ---------                      ---            ---           ----        -------
   File creation     :      57207.432      56112.732      56627.502        449.278
   File stat         :     122587.505     110561.252     115014.601       5382.466
   File read         :     105060.899      90757.318      99241.371       6135.844
   File removal      :      61824.540      59560.836      60470.541        976.093
   Tree creation     :       4096.000       1602.715       3181.058       1120.772
   Tree removal      :         20.478         17.985         19.354          1.032

-- finished at 12/09/2013 16:14:10 --
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
                            <comment id="73173" author="ihara" created="Tue, 10 Dec 2013 05:36:51 +0000"  >&lt;p&gt;Here are the &quot;git bisect&quot; results.&lt;/p&gt;

&lt;p&gt;File removal operation to shared directory&lt;/p&gt;
&lt;div class=&apos;table-wrap&apos;&gt;
&lt;table class=&apos;confluenceTable&apos;&gt;&lt;tbody&gt;
&lt;tr&gt;
&lt;td class=&apos;confluenceTd&apos;&gt;commit&lt;/td&gt;
&lt;td class=&apos;confluenceTd&apos;&gt;Removal(ops/sec)&lt;/td&gt;
&lt;td class=&apos;confluenceTd&apos;&gt;result&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td class=&apos;confluenceTd&apos;&gt;98ac0fe3a45dde62759ecaa4c84e6250ac2067f8(HEAD)&lt;/td&gt;
&lt;td class=&apos;confluenceTd&apos;&gt;46818&lt;/td&gt;
&lt;td class=&apos;confluenceTd&apos;&gt;bad&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td class=&apos;confluenceTd&apos;&gt;2.5.51&lt;/td&gt;
&lt;td class=&apos;confluenceTd&apos;&gt;61824&lt;/td&gt;
&lt;td class=&apos;confluenceTd&apos;&gt;good&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td class=&apos;confluenceTd&apos;&gt;e9a1f308b5359c2de1fda67816ef662ce727d275&lt;/td&gt;
&lt;td class=&apos;confluenceTd&apos;&gt;45919&lt;/td&gt;
&lt;td class=&apos;confluenceTd&apos;&gt;bad&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td class=&apos;confluenceTd&apos;&gt;cbab0aa32ed2d21f59aae3a28285b49802b734f2&lt;/td&gt;
&lt;td class=&apos;confluenceTd&apos;&gt;46917&lt;/td&gt;
&lt;td class=&apos;confluenceTd&apos;&gt;bad&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td class=&apos;confluenceTd&apos;&gt;2b13169cd86b4868730f2c45432645b7d2cc0073&lt;/td&gt;
&lt;td class=&apos;confluenceTd&apos;&gt;62137&lt;/td&gt;
&lt;td class=&apos;confluenceTd&apos;&gt;good&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td class=&apos;confluenceTd&apos;&gt;a9ae2181f3efd811e17843ebf951b00fb9ea0366&lt;/td&gt;
&lt;td class=&apos;confluenceTd&apos;&gt;63721&lt;/td&gt;
&lt;td class=&apos;confluenceTd&apos;&gt;good&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td class=&apos;confluenceTd&apos;&gt;12d2b04f2204bc087f380cb214a29c126f50d709&lt;/td&gt;
&lt;td class=&apos;confluenceTd&apos;&gt;63157&lt;/td&gt;
&lt;td class=&apos;confluenceTd&apos;&gt;good&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td class=&apos;confluenceTd&apos;&gt;b17d23fd01557c0e23f5c3b4eeea237c08fe2bc5&lt;/td&gt;
&lt;td class=&apos;confluenceTd&apos;&gt;44786&lt;/td&gt;
&lt;td class=&apos;confluenceTd&apos;&gt;bad&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td class=&apos;confluenceTd&apos;&gt;&lt;font color=&quot;red&quot;&gt;55989b17c7391266740d68e3c62418e184364ed7&lt;/font&gt;&lt;/td&gt;
&lt;td class=&apos;confluenceTd&apos;&gt;&lt;font color=&quot;red&quot;&gt;46392&lt;/font&gt;&lt;/td&gt;
&lt;td class=&apos;confluenceTd&apos;&gt;&lt;font color=&quot;red&quot;&gt;bad&lt;/font&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;


&lt;p&gt;&lt;b&gt;55989b17c7391266740d68e3c62418e184364ed7 &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-3544&quot; title=&quot;Writing to new files under NFS export from Lustre will result in ENOENT (SLES11SP2)&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-3544&quot;&gt;&lt;del&gt;LU-3544&lt;/del&gt;&lt;/a&gt; llite: simplify dentry revalidate&lt;/b&gt;&lt;br/&gt;
This commit is exactly the point where the metadata performance regression was introduced.&lt;/p&gt;

&lt;p&gt;And, to double-check, I also tested the current HEAD of the master branch with the &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-3544&quot; title=&quot;Writing to new files under NFS export from Lustre will result in ENOENT (SLES11SP2)&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-3544&quot;&gt;&lt;del&gt;LU-3544&lt;/del&gt;&lt;/a&gt; patch reverted. Here is the result; the removal performance is back.&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;-- started at 12/09/2013 21:28:02 --

mdtest-1.9.1 was launched with 64 total task(s) on 16 node(s)
Command line used: /work/tools/bin/mdtest -d /lustre/dir.0 -n 20000 -F -i 3
Path: /lustre
FS: 1141.8 TiB   Used FS: 0.0%   Inodes: 50.0 Mi   Used Inodes: 0.0%

64 tasks, 1280000 files

SUMMARY: (of 3 iterations)
   Operation                      Max            Min           Mean        Std Dev
   ---------                      ---            ---           ----        -------
   File creation     :      59437.920      56476.490      58310.121       1307.967
   File stat         :     127083.044     115640.003     120232.454       4936.949
   File read         :     110833.651     100376.278     105721.983       4272.411
   File removal      :      64267.994      63221.494      63591.734        478.906
   Tree creation     :       3533.533       1503.874       2724.054        878.023
   Tree removal      :         21.026         18.468         20.149          1.189

-- finished at 12/09/2013 21:31:17 --
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
                            <comment id="73189" author="pjones" created="Tue, 10 Dec 2013 12:56:22 +0000"  >&lt;p&gt;Lai&lt;/p&gt;

&lt;p&gt;Are you able to comment?&lt;/p&gt;

&lt;p&gt;Thanks&lt;/p&gt;

&lt;p&gt;Peter&lt;/p&gt;</comment>
                            <comment id="73262" author="laisiyao" created="Wed, 11 Dec 2013 05:18:27 +0000"  >&lt;p&gt;Hi Ihara, could you test with createmany and unlinkmany? I&apos;m afraid it&apos;s not an unlink performance drop, but rather that mdtest causes file revalidation failure and relookup, because 55989b17c7391266740d68e3c62418e184364ed7 (&lt;a href=&quot;https://jira.whamcloud.com/browse/LU-3544&quot; title=&quot;Writing to new files under NFS export from Lustre will result in ENOENT (SLES11SP2)&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-3544&quot;&gt;&lt;del&gt;LU-3544&lt;/del&gt;&lt;/a&gt; &quot;llite: simplify dentry revalidate&quot;) only touches the code path of dentry revalidation. I&apos;ll run mdtest locally to reproduce this.&lt;/p&gt;</comment>
                            <comment id="74780" author="ihara" created="Sun, 12 Jan 2014 09:05:55 +0000"  >&lt;p&gt;Hi Lai,&lt;/p&gt;

&lt;p&gt;Sorry for the delayed response on this. I have tested with createmany and unlinkmany on 16 clients, 64 processes in total, simultaneously.&lt;br/&gt;
Here is a summary of the results. The numbers were not impacted whether or not the &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-3544&quot; title=&quot;Writing to new files under NFS export from Lustre will result in ENOENT (SLES11SP2)&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-3544&quot;&gt;&lt;del&gt;LU-3544&lt;/del&gt;&lt;/a&gt; patch was applied.&lt;/p&gt;
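
&lt;p&gt;For reference, a crude stand-in for the createmany/unlinkmany utilities using plain shell (illustrative only; the real test used the Lustre test binaries across 16 clients against a Lustre mount):&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;
```shell
#!/bin/sh
# Crude stand-in for createmany/unlinkmany: time N creates then N unlinks
# in one directory (a scratch dir here; a Lustre mount in the real test).
set -e
dir=$(mktemp -d); n=1000
t0=$(date +%s)
i=0
while [ "$i" -lt "$n" ]; do : > "$dir/f.$i"; i=$((i+1)); done
t1=$(date +%s)
i=0
while [ "$i" -lt "$n" ]; do rm "$dir/f.$i"; i=$((i+1)); done
t2=$(date +%s)
echo "create: $((t1-t0))s  unlink: $((t2-t1))s for $n files"
```
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;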

&lt;div class=&apos;table-wrap&apos;&gt;
&lt;table class=&apos;confluenceTable&apos;&gt;&lt;tbody&gt;
&lt;tr&gt;
&lt;td class=&apos;confluenceTd&apos;&gt;&amp;nbsp;&lt;/td&gt;
&lt;td class=&apos;confluenceTd&apos;&gt;iteration 1&lt;/td&gt;
&lt;td class=&apos;confluenceTd&apos;&gt;iteration 2&lt;/td&gt;
&lt;td class=&apos;confluenceTd&apos;&gt;iteration 3&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td class=&apos;confluenceTd&apos;&gt;2.5.52&lt;/td&gt;
&lt;td class=&apos;confluenceTd&apos;&gt;45491&lt;/td&gt;
&lt;td class=&apos;confluenceTd&apos;&gt;44416&lt;/td&gt;
&lt;td class=&apos;confluenceTd&apos;&gt;44200&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td class=&apos;confluenceTd&apos;&gt;2.5.52 w/o LU-3544 patch&lt;/td&gt;
&lt;td class=&apos;confluenceTd&apos;&gt;44157&lt;/td&gt;
&lt;td class=&apos;confluenceTd&apos;&gt;43648&lt;/td&gt;
&lt;td class=&apos;confluenceTd&apos;&gt;44182&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;
</comment>
                            <comment id="75734" author="ihara" created="Tue, 28 Jan 2014 00:31:18 +0000"  >&lt;p&gt;Hi Lai, any advice or updates on this?&lt;/p&gt;</comment>
                            <comment id="76958" author="laisiyao" created="Thu, 13 Feb 2014 13:23:05 +0000"  >&lt;p&gt;I haven&apos;t found any clue yet and will need more time for testing; I&apos;ll post an update on progress next week.&lt;/p&gt;</comment>
                            <comment id="77991" author="laisiyao" created="Thu, 27 Feb 2014 08:47:43 +0000"  >&lt;p&gt;I tested on a different setup, but I didn&apos;t see the unlink performance drop. If possible, could you use oprofile to find which function consumes more time on the 2.5.52 client?&lt;/p&gt;

&lt;p&gt;I noticed that you only tested with a small set of files (20000 per task) and iterated three times. Could you test with more files and only one iteration? And could you also test with a single client to see if unlink gets slow?&lt;/p&gt;</comment>
                            <comment id="78381" author="adilger" created="Tue, 4 Mar 2014 19:49:06 +0000"  >&lt;p&gt;Lai,&lt;br/&gt;
what kind of system did you test on?  I suspect that this slowdown is only visible with a fast MDS and IB network and enough OSTs so that unlinking the OST objects is not the bottleneck.  I don&apos;t know that changing the parameters of what is being tested is needed, since there is clearly a slowdown in this test which is significantly larger than the standard deviation between tests (-15000 unlinks/sec with stddev 1000).&lt;/p&gt;

&lt;p&gt;Also, I think it is important to note that this is only an issue during unlink, and in fact unlink is +3000 ops/sec faster than 2.5.0/2.5.51 once the &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-3544&quot; title=&quot;Writing to new files under NFS export from Lustre will result in ENOENT (SLES11SP2)&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-3544&quot;&gt;&lt;del&gt;LU-3544&lt;/del&gt;&lt;/a&gt; patch is reverted.  However, it does appear that the open-for-read (&quot;File read&quot; with 0 bytes read) performance is +60000 opens/sec faster with the &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-3544&quot; title=&quot;Writing to new files under NFS export from Lustre will result in ENOENT (SLES11SP2)&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-3544&quot;&gt;&lt;del&gt;LU-3544&lt;/del&gt;&lt;/a&gt; patch applied, which is also important not to lose.&lt;/p&gt;

&lt;p&gt;I suspect there is some subtle difference in the new ll_revalidate_dentry() code that is only triggering in the unlink case, possibly forcing an extra RPC to the MDS to revalidate the dentry just before it is being unlinked?  Rather than spending time trying to reproduce the performance loss, it might make more sense to just get a debug log of unlink with and without the 55989b17c73912 patch applied and see what the difference is in the callpath and RPCs sent.  Hopefully, there is just a minor change that can be done to fix the unlink path and not impact the other performance.&lt;/p&gt;</comment>
                            <comment id="78667" author="laisiyao" created="Fri, 7 Mar 2014 04:25:52 +0000"  >&lt;p&gt;I tested on three test nodes in Toro: one client, one MDS, and two OSTs on the same OSS.&lt;/p&gt;

&lt;p&gt;I suspected that with the &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-3544&quot; title=&quot;Writing to new files under NFS export from Lustre will result in ENOENT (SLES11SP2)&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-3544&quot;&gt;&lt;del&gt;LU-3544&lt;/del&gt;&lt;/a&gt; patch, .revalidate just returns 0 if the dentry is invalid and lets .lookup do the real lookup (instead of doing the lookup in .revalidate, as the old code did), and this may introduce a small overhead. I&apos;ll double-check the call trace of unlink to see whether there are extra lookups.&lt;/p&gt;</comment>
                            <comment id="79513" author="laisiyao" created="Mon, 17 Mar 2014 15:53:29 +0000"  >&lt;p&gt;A command like `mdtest -d /lustre/dir.0 -n 20000 -F -i 3` executes the following syscalls on each file:&lt;br/&gt;
1. creat&lt;br/&gt;
2. close&lt;br/&gt;
3. stat&lt;br/&gt;
4. open&lt;br/&gt;
5. I/O&lt;br/&gt;
6. close&lt;br/&gt;
7. unlink&lt;/p&gt;
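
&lt;p&gt;For reference, this per-file sequence can be replayed outside mdtest (for example under strace) to inspect the client call path. A minimal sketch on a scratch directory; the path is illustrative and would be a Lustre mount in the real test:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;
```shell
#!/bin/sh
# Replay mdtest's per-file syscall sequence on a single file so the
# open/close/unlink path can be traced (e.g. strace -f -e trace=file).
set -e
dir=$(mktemp -d)           # stand-in for /lustre/dir.0
f="$dir/file.0"
: > "$f"                   # 1-2: creat + close
stat "$f" > /dev/null      # 3: stat
cat "$f" > /dev/null       # 4-6: open, read (0 bytes), close
rm "$f"                    # 7: unlink
echo "sequence complete"
```
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;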

&lt;p&gt;In the old code, the open syscall in step 4 called .revalidate(IT_OPEN), which opened the file, and the close in step 6 called .release and actually closed the file.&lt;br/&gt;
In the new code, .revalidate doesn&apos;t execute the intent any more but returns 1 directly; later, ll_intent_file_open() opens the file with MDS_INODELOCK_OPEN, so the close in step 6 doesn&apos;t really close the file because the open lock is cached. The unlink in step 7 then needs to close the file before unlinking it, and this is the cause of the unlink performance drop.&lt;/p&gt;

&lt;p&gt;IMHO this is not a real bug, because no extra RPC is sent; it is only because mdtest opens the file twice that the new code fetches the open lock. A possible fix might be to add a timestamp so .open can know that .revalidate(IT_OPEN) was just called and skip fetching the open lock, but I&apos;m not sure this is necessary.&lt;/p&gt;</comment>
                            <comment id="79545" author="ihara" created="Tue, 18 Mar 2014 02:19:34 +0000"  >&lt;p&gt;Yes, even if this might not be a bug, we see a performance drop under the mdtest I/O scenario at least. mdtest is one of the major benchmark tools for metadata, and this is a common metadata scenario; we would like to keep (at least) the same performance with newer versions of Lustre. If you have an idea for a workaround, please share it with us. I would like to test it.&lt;/p&gt;</comment>
                            <comment id="79555" author="laisiyao" created="Tue, 18 Mar 2014 07:02:31 +0000"  >&lt;p&gt;During testing I saw other places that can be improved to increase file creation, stat, and possibly read performance, and I composed two patches:&lt;br/&gt;
&lt;a href=&quot;http://review.whamcloud.com/#/c/9696/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/#/c/9696/&lt;/a&gt;&lt;br/&gt;
&lt;a href=&quot;http://review.whamcloud.com/#/c/9697/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/#/c/9697/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Would you apply these two patches and get some results?&lt;/p&gt;</comment>
                            <comment id="79558" author="ihara" created="Tue, 18 Mar 2014 09:05:29 +0000"  >&lt;p&gt;Sure, I will test those patches very soon and keep you updated! Thanks a lot again!&lt;/p&gt;</comment>
                            <comment id="81929" author="ihara" created="Fri, 18 Apr 2014 09:14:13 +0000"  >&lt;p&gt;Lai, these patches are broken: I can&apos;t copy a file from the local filesystem to Lustre.&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[root@r21 tmp]# touch /tmp/a
[root@r21 tmp]# cp /tmp/a /lustre/
cp: cannot create regular file `/lustre/a&apos;: File exists
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;This worked.&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[root@r21 tmp]# touch /lustre/a
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
                            <comment id="81930" author="ihara" created="Fri, 18 Apr 2014 09:20:09 +0000"  >&lt;p&gt;This is how the debug file was captured when the problem happens:&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;echo &quot;+trace&quot; &amp;gt; /proc/sys/lnet/debug
lctl debug_daemon start /tmp/debuglog 100
touch /tmp/a
cp /tmp/a /lustre
lctl debug_daemon stop
echo &quot;-trace&quot; &amp;gt; /proc/sys/lnet/debug&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
                            <comment id="82036" author="laisiyao" created="Mon, 21 Apr 2014 06:56:05 +0000"  >&lt;p&gt;Thanks Ihara; the patches are updated. Previously I only tested with mdtest and didn&apos;t do a full test, because the patches are intended to gather mdtest performance data and may not be the final patches yet. Sorry for the trouble.&lt;/p&gt;</comment>
                            <comment id="82519" author="adilger" created="Fri, 25 Apr 2014 18:00:44 +0000"  >&lt;p&gt;Lai, it looks like the patches &lt;a href=&quot;http://review.whamcloud.com/9696&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/9696&lt;/a&gt; and &lt;a href=&quot;http://review.whamcloud.com/9697&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/9697&lt;/a&gt; are improving the open performance, but do not address the unlink performance.  Is there something that can be done to improve the unlink performance back to the 2.5.0 level so that 2.6.0 does not have a performance regression?&lt;/p&gt;</comment>
                            <comment id="82579" author="laisiyao" created="Mon, 28 Apr 2014 03:01:37 +0000"  >&lt;p&gt;The root cause is that revalidate(IT_OPEN) enqueues an open lock, so the close is deferred to unlink, which causes the unlink performance drop, but in total there is no extra RPC. I don&apos;t see a clean way to handle this, so I think that if we can improve open and stat performance a lot, it&apos;s worthwhile to keep the status quo.&lt;/p&gt;</comment>
                            <comment id="83272" author="adilger" created="Tue, 6 May 2014 08:06:09 +0000"  >&lt;p&gt;It might be possible to combine the close and unlink RPCs (unlink with close flag, or close with unlink flag?) so that the number of RPCs is actually reduced?  We already do something similar with early lock cancellation, so it might be possible to do something similar with the close.&lt;/p&gt;</comment>
                            <comment id="83378" author="laisiyao" created="Wed, 7 May 2014 03:36:16 +0000"  >&lt;p&gt;I&apos;ve thought of that, but considering the complications of open replay, and possibly SOM, I think it&apos;s not trivial work. I&apos;ll think about it more and do some tests later (maybe next week).&lt;/p&gt;</comment>
                            <comment id="84578" author="laisiyao" created="Wed, 21 May 2014 09:52:15 +0000"  >&lt;p&gt;Patch to combine the close into the unlink RPC: &lt;a href=&quot;http://review.whamcloud.com/#/c/10398/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/#/c/10398/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Ihara, could you apply only this patch and get results from mdtest?&lt;/p&gt;</comment>
                            <comment id="85223" author="ihara" created="Fri, 30 May 2014 03:05:43 +0000"  >&lt;p&gt;Hi Lai, &lt;br/&gt;
it seems there have been several updates since you posted the initial patches. Please advise which patches should be applied.&lt;/p&gt;</comment>
                            <comment id="85282" author="adilger" created="Fri, 30 May 2014 18:06:44 +0000"  >&lt;p&gt;Lai should confirm, but I think the most important patch for addressing the unlink regression is &lt;a href=&quot;http://review.whamcloud.com/10398&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/10398&lt;/a&gt; so that one should be tested first. &lt;/p&gt;

&lt;p&gt;There is also a potential improvement in &lt;a href=&quot;http://review.whamcloud.com/9696&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/9696&lt;/a&gt; that is next, but it doesn&apos;t affect unlink. I think the &lt;a href=&quot;http://review.whamcloud.com/9697&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/9697&lt;/a&gt; is too complex to land for 2.6.0 at this point, but if it gives a significant improvement then it could be landed for 2.7.0 and IEEL. &lt;/p&gt;</comment>
                            <comment id="85876" author="adilger" created="Thu, 5 Jun 2014 17:48:43 +0000"  >&lt;p&gt;Ihara, did you get a chance to test if 10398 fixes the unlink regression?  We are ready to land that patch.&lt;/p&gt;</comment>
                            <comment id="86086" author="ihara" created="Mon, 9 Jun 2014 13:01:03 +0000"  >&lt;p&gt;I&apos;m testing the patches and will post results shortly.&lt;/p&gt;</comment>
                            <comment id="86710" author="adilger" created="Mon, 16 Jun 2014 17:52:35 +0000"  >&lt;p&gt;Ihara, any chance to post the results from your tests?&lt;/p&gt;</comment>
                            <comment id="87298" author="adilger" created="Mon, 23 Jun 2014 18:11:42 +0000"  >&lt;p&gt;Hi Ihara, is there a chance for you to post the mdtest results for the testing you did on 06-09 for patch &lt;a href=&quot;http://review.whamcloud.com/10398&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/10398&lt;/a&gt; ?&lt;/p&gt;</comment>
                            <comment id="87374" author="ihara" created="Tue, 24 Jun 2014 15:32:15 +0000"  >&lt;p&gt;Andreas, sorry for the delay on this... Here are our recent test results.&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;Configuration
1 x MDS, 10 x SSD(RAID10) for MDT, 2 x OSS, 10 x OST(100 x NL-SAS)
32 clients, 64 mdtest threads and total 2.56M files creation/stats/removal
master branch(47cde804ddc9019ff0793229030211d536d0612f)
master branch(47cde804ddc9019ff0793229030211d536d0612f) + patch 10426 + patch 10398
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Unique Directory Operation&lt;br/&gt;
master branch(47cde804ddc9019ff0793229030211d536d0612f)&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;mdtest-1.9.3 was launched with 64 total task(s) on 32 node(s)
Command line used: ./mdtest -i 3 -n 40000 -u -d /lustre_test/mdtest.out
Path: /lustre_test
FS: 39.0 TiB   Used FS: 0.0%   Inodes: 50.0 Mi   Used Inodes: 0.0%

64 tasks, 2560000 files/directories

SUMMARY: (of 3 iterations)
   Operation                      Max            Min           Mean        Std Dev
   ---------                      ---            ---           ----        -------
   Directory creation:      48811.145      39252.347      42446.699       4500.354
   Directory stat    :     299207.829     290254.504     293619.032       3979.199
   Directory removal :      89250.695      86672.466      88049.098       1059.809
   File creation     :      80325.602      71720.354      76539.450       3588.203
   File stat         :     202533.695     202312.144     202430.663         91.108
   File read         :     224391.556     222667.559     223733.260        760.494
   File removal      :      93977.310      81732.593      89128.915       5313.644
   Tree creation     :        487.540        255.237        408.701        108.529
   Tree removal      :          7.483          7.376          7.416          0.048
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Unique Directory Operation&lt;br/&gt;
master branch(47cde804ddc9019ff0793229030211d536d0612f) + patch 10426 + patch 10398&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;mdtest-1.9.3 was launched with 64 total task(s) on 32 node(s)
Command line used: ./mdtest -i 3 -n 40000 -u -d /lustre_test/mdtest.out
Path: /lustre_test
FS: 39.0 TiB   Used FS: 0.0%   Inodes: 50.0 Mi   Used Inodes: 0.0%

64 tasks, 2560000 files/directories

SUMMARY: (of 3 iterations)
   Operation                      Max            Min           Mean        Std Dev
   ---------                      ---            ---           ----        -------
   Directory creation:      43529.024      38432.682      40492.505       2192.203
   Directory stat    :     295567.203     248965.236     278082.284      20727.046
   Directory removal :      99851.600      97510.819      98692.187        955.746
   File creation     :      76464.252      61260.049      69836.770       6358.281
   File stat         :     210322.996     203751.172     206953.520       2685.537
   File read         :     227658.211     225535.341     226317.238        952.564
   File removal      :      99144.730      98371.321      98765.310        315.911
   Tree creation     :        454.766        187.656        357.198        120.339
   Tree removal      :          7.494          7.383          7.438          0.045
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Shared Directory Operation&lt;br/&gt;
master branch(47cde804ddc9019ff0793229030211d536d0612f)&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;mdtest-1.9.3 was launched with 64 total task(s) on 32 node(s)
Command line used: ./mdtest -i 3 -n 40000 -d /lustre_test/mdtest.out
Path: /lustre_test
FS: 39.0 TiB   Used FS: 0.0%   Inodes: 50.0 Mi   Used Inodes: 0.0%

64 tasks, 2560000 files/directories

SUMMARY: (of 3 iterations)
   Operation                      Max            Min           Mean        Std Dev
   ---------                      ---            ---           ----        -------
   Directory creation:      28513.564      27700.587      28038.288        345.860
   Directory stat    :     142617.694     139431.318     141316.628       1364.858
   Directory removal :      60164.271      56562.712      58927.059       1672.450
   File creation     :      34568.359      34000.466      34304.269        233.536
   File stat         :     143387.629     140366.792     141459.265       1367.577
   File read         :     229820.877     222497.139     225426.481       3164.288
   File removal      :      66583.172      58133.175      61494.514       3659.539
   Tree creation     :       4132.319       3398.950       3773.387        299.598
   Tree removal      :         11.422          3.327          7.825          3.365
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Shared Directory Operation&lt;br/&gt;
master branch(47cde804ddc9019ff0793229030211d536d0612f) + patch 10426 + patch 10398&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;mdtest-1.9.3 was launched with 64 total task(s) on 32 node(s)
Command line used: ./mdtest -i 3 -n 40000 -d /lustre_test/mdtest.out
Path: /lustre_test
FS: 39.0 TiB   Used FS: 0.0%   Inodes: 50.0 Mi   Used Inodes: 0.0%

64 tasks, 2560000 files/directories

SUMMARY: (of 3 iterations)
   Operation                      Max            Min           Mean        Std Dev
   ---------                      ---            ---           ----        -------
   Directory creation:      28132.040      26630.642      27487.773        631.154
   Directory stat    :     136965.055     135597.500     136440.164        601.823
   Directory removal :      58149.733      55110.750      56638.405       1240.713
   File creation     :      33170.783      32710.907      32931.837        188.175
   File stat         :     138870.777     136286.854     137743.643       1080.330
   File read         :     234861.197     224503.115     228594.555       4499.710
   File removal      :      77518.626      69571.564      73940.211       3292.142
   Tree creation     :       4116.098       1102.314       2711.725       1238.885
   Tree removal      :          9.879          4.938          7.854          2.114
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;We see performance improvements with the patches for unlink operations in unique directories as well as in a shared directory. I also want to compare against lustre-2.5. BTW, file/directory creation in a shared directory is lower than I expected; I will check other Lustre versions (e.g. b2_5) later as well.&lt;/p&gt;</comment>
                            <comment id="87395" author="adilger" created="Tue, 24 Jun 2014 17:58:52 +0000"  >&lt;p&gt;It appears that the unlink performance has gone up, but the create and stat rates have gone down.   Can you please test those two patches separately?  If the 10398 patch fixes the unlink performance without hurting the other performance, it could land.  It might be that the 10426 patch is changing the other performance and needs to be reworked.&lt;/p&gt;</comment>
                            <comment id="87401" author="ihara" created="Tue, 24 Jun 2014 18:51:45 +0000"  >&lt;p&gt;First, I tried only the 10398 patch, but the build fails since OBD_CONNECT_UNLINK_CLOSE is defined in the 10426 patch. So I needed both patches at the same time to compile.&lt;/p&gt;

&lt;p&gt;BTW, here is the same mdtest benchmark on the same hardware, but with Lustre version 2.5.2RC2.&lt;/p&gt;

&lt;p&gt;Unique Directory Operation&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;mdtest-1.9.3 was launched with 64 total task(s) on 32 node(s)
Command line used: ./mdtest -i 3 -n 40000 -u -d /lustre_test/mdtest.out
Path: /lustre_test
FS: 39.0 TiB   Used FS: 0.0%   Inodes: 50.0 Mi   Used Inodes: 0.0%

64 tasks, 2560000 files/directories

SUMMARY: (of 3 iterations)
   Operation                      Max            Min           Mean        Std Dev
   ---------                      ---            ---           ----        -------
   Directory creation:      44031.310      41420.993      43125.128       1205.815
   Directory stat    :     346144.788     329854.059     335352.348       7631.863
   Directory removal :      87592.556      86416.906      87118.114        506.033
   File creation     :      82518.567      64962.637      76375.141       8077.749
   File stat         :     215570.997     209551.901     212205.919       2508.198
   File read         :     151377.930     144487.897     147463.085       2890.255
   File removal      :     105964.879      93215.798     101520.782       5877.335
   Tree creation     :        628.925        410.522        542.680         94.889
   Tree removal      :          8.583          8.013          8.284          0.233
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Shared Directory Operation&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;mdtest-1.9.3 was launched with 64 total task(s) on 32 node(s)
Command line used: ./mdtest -i 3 -n 40000 -d /lustre_test/mdtest.out
Path: /lustre_test
FS: 39.0 TiB   Used FS: 0.0%   Inodes: 50.0 Mi   Used Inodes: 0.0%

64 tasks, 2560000 files/directories

SUMMARY: (of 3 iterations)
   Operation                      Max            Min           Mean        Std Dev
   ---------                      ---            ---           ----        -------
   Directory creation:      39463.778      38496.147      38986.389        395.138
   Directory stat    :     143006.039     134919.914     138809.226       3308.300
   Directory removal :      78711.817      76206.632      77846.563       1160.196
   File creation     :      75154.225      70792.633      72674.025       1830.264
   File stat         :     142431.366     138650.545     140623.793       1547.953
   File read         :     134643.457     132249.733     133383.879        981.251
   File removal      :      94311.826      83231.516      89991.676       4841.388
   Tree creation     :       4048.556       3437.954       3743.808        249.278
   Tree removal      :          9.098          4.048          6.792          2.084
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;For unique directory metadata operations, overall, the results of master + 10398 + 10426 patches are close to the 2.5.2RC2 results, except for directory stats (for the stat operation, 2.5 is better than master).&lt;br/&gt;
However, for metadata operations in a shared directory, most of 2.5.2RC2&apos;s numbers are still much higher than the results of master or master + 10398 + 10426. That&apos;s the original issue in this ticket, and there is still a big performance gap there. For the read operation, the master branch is much improved compared to the 2.5 branch. &lt;/p&gt;</comment>
                            <comment id="87819" author="green" created="Mon, 30 Jun 2014 18:25:56 +0000"  >&lt;p&gt;So it looks like we have all of this extra file handle caching that should not really be happening at all.&lt;/p&gt;

&lt;p&gt;Originally, when opencache was implemented, it cached everything, and that resulted in a performance drop specifically due to slow lock cancellation.&lt;br/&gt;
That&apos;s when we decided to restrict this caching to NFS-originated requests and some races - by only setting the flag in ll_file_intent_open, which we could only reach via NFS.&lt;br/&gt;
Now it appears that this assumption is broken?&lt;/p&gt;

&lt;p&gt;I am planning to take a deeper look to understand what is happening with the cache now.&lt;/p&gt;</comment>
                            <comment id="87820" author="adilger" created="Mon, 30 Jun 2014 18:35:36 +0000"  >&lt;p&gt;Sorry about my earlier confusion with 10426 - I thought that was a different patch, but I see now that it is required for 10398 to work.&lt;/p&gt;

&lt;p&gt;It looks like the 10398 patch does improve the unlink performance, but at the expense of almost every other operation.  Since unlink is already faster than create, it doesn&apos;t make sense to speed it up and slow down create.  It looks like there is also some other change(s) that slowed down the create and stat operations on master compared to 2.5.2.&lt;/p&gt;

&lt;p&gt;It doesn&apos;t seem reasonable to land 10398 for 2.6.0 at this point.&lt;/p&gt;</comment>
                            <comment id="87934" author="laisiyao" created="Wed, 2 Jul 2014 01:48:19 +0000"  >&lt;p&gt;Oleg, the cause is the simplified revalidate (see 7475). Originally revalidate would execute IT_OPEN, but this code duplicated the lookup path, and the opened handle could be lost if another client canceled this lock. So 7475 simplified revalidate to just return 1 if the dentry is valid and let .open really open the file, but this case can&apos;t be differentiated from an NFS export open, so both an open after revalidate and an NFS export open take the open lock.&lt;/p&gt;</comment>
                            <comment id="88795" author="green" created="Fri, 11 Jul 2014 04:55:03 +0000"  >&lt;p&gt;So, it looks like we still can infer if the open originated from vfs or not.&lt;/p&gt;

&lt;p&gt;When we come from do_filp_open (this is the real open path), we go through filename_lookup with LOOKUP_OPEN set; when we go through dentry_open, LOOKUP_OPEN is not set.&lt;/p&gt;

&lt;p&gt;As such, the most brute-force way I see to address this is, in ll_revalidate_dentry, to always return 0 if LOOKUP_OPEN is set and LOOKUP_CONTINUE is NOT set (i.e. we are looking up the last component).&lt;br/&gt;
We already do a similar trick for LOOKUP_OPEN|LOOKUP_CONTINUE.&lt;/p&gt;

&lt;p&gt;BTW, while looking at the ll_revalidate_dentry logic, I think we can also improve it quite a bit in the area of intermediate path component lookup.&lt;/p&gt;

&lt;p&gt;All of this is in this patch: &lt;a href=&quot;http://review.whamcloud.com/11062&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/11062&lt;/a&gt;&lt;br/&gt;
Ihara-san, please give it a try to see if it helps with your workload?&lt;br/&gt;
This patch passes a medium level of my testing (which does not include any performance testing).&lt;/p&gt;</comment>
                            <comment id="88816" author="ihara" created="Fri, 11 Jul 2014 12:34:17 +0000"  >&lt;blockquote&gt;
&lt;p&gt;All of this is in this patch: &lt;a href=&quot;http://review.whamcloud.com/11062&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/11062&lt;/a&gt;&lt;br/&gt;
Ihara-san, please give it a try to see if it helps for your workload?&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;Sure, I will test those patches as soon as I can run the benchmark, maybe early next week. Thanks!&lt;/p&gt;</comment>
                            <comment id="89746" author="cliffw" created="Tue, 22 Jul 2014 16:49:39 +0000"  >&lt;p&gt;I ran the patch on Hyperion with 1, 32, 64, and 100 clients, using mdtest dir-per-process and single-shared-dir.&lt;br/&gt;
Spreadsheet with graphs attached.&lt;/p&gt;</comment>
                            <comment id="98973" author="jlevi" created="Wed, 12 Nov 2014 16:51:13 +0000"  >&lt;p&gt;Patches landed to Master. Please reopen ticket if more work is needed.&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                            <outwardlinks description="is related to ">
                                        <issuelink>
            <issuekey id="25787">LU-5426</issuekey>
        </issuelink>
                            </outwardlinks>
                                                                <inwardlinks description="is related to">
                                        <issuelink>
            <issuekey id="24209">LU-4906</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="25158">LU-5197</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="36132">LU-8019</issuekey>
        </issuelink>
                            </inwardlinks>
                                    </issuelinktype>
                    </issuelinks>
                <attachments>
                            <attachment id="15403" name="LU-4367.xlsx" size="101436" author="cliffw" created="Tue, 22 Jul 2014 16:49:38 +0000"/>
                            <attachment id="14736" name="debugfile" size="524391" author="ihara" created="Fri, 18 Apr 2014 09:20:09 +0000"/>
                            <attachment id="13972" name="unlinkmany-result.zip" size="3830" author="ihara" created="Sun, 12 Jan 2014 08:58:56 +0000"/>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                    <customfield id="customfield_10030" key="com.atlassian.jira.plugin.system.customfieldtypes:labels">
                        <customfieldname>Epic/Theme</customfieldname>
                        <customfieldvalues>
                                        <label>Performance</label>
    
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzwayn:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>11951</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10021"><![CDATA[2]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>