<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 01:57:57 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
<language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-6179] Lock ahead - Request extent locks from userspace</title>
                <link>https://jira.whamcloud.com/browse/LU-6179</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;At the recent developers conference, Jinshan proposed a different method of approaching the performance problems described in &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-6148&quot; title=&quot;Strided lock proposal - Feature proposal for 2.8&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-6148&quot;&gt;&lt;del&gt;LU-6148&lt;/del&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Instead of introducing a new type of LDLM lock matching, we&apos;d like to make it possible for user space to explicitly request LDLM locks asynchronously from the IO.&lt;/p&gt;

&lt;p&gt;I&apos;ve implemented a prototype version of the feature and will be uploading it for comments.  I&apos;ll explain the state of the current version in a comment momentarily.&lt;/p&gt;</description>
                <environment></environment>
        <key id="28460">LU-6179</key>
            <summary>Lock ahead - Request extent locks from userspace</summary>
                <type id="2" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11311&amp;avatarType=issuetype">New Feature</type>
                                            <priority id="3" iconUrl="https://jira.whamcloud.com/images/icons/priorities/major.svg">Major</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="paf">Patrick Farrell</assignee>
                                    <reporter username="paf">Patrick Farrell</reporter>
                        <labels>
                            <label>bgti</label>
                            <label>patch</label>
                    </labels>
                <created>Thu, 29 Jan 2015 22:13:47 +0000</created>
                <updated>Wed, 28 Apr 2021 02:10:15 +0000</updated>
                            <resolved>Thu, 21 Sep 2017 11:57:44 +0000</resolved>
                                                    <fixVersion>Lustre 2.11.0</fixVersion>
                                        <due></due>
                            <votes>0</votes>
                                    <watches>23</watches>
                                                                            <comments>
                            <comment id="105133" author="paf" created="Thu, 29 Jan 2015 22:16:18 +0000"  >&lt;p&gt;As suggested by Jinshan and Andreas, this shares the machinery currently used for glimpse locking.&lt;/p&gt;

&lt;p&gt;That machinery is renamed to cl_request_lock/cl_request_lock0.&lt;/p&gt;

&lt;p&gt;I am not sure about the naming of the functions, but couldn&apos;t think of anything better.&lt;br/&gt;
There are several questions in the code marked with &quot;FIXME&quot;, and I have not - yet - added any tests, since I&apos;d like to get initial feedback before adding those.&lt;/p&gt;

&lt;p&gt;Code has been lightly tested &amp;amp; basic functionality verified.  LDLM extent locks can be requested from userspace with the additional ioctl, and they remain in the client lock cache after the request.&lt;/p&gt;

&lt;p&gt;Further question - Given that a new LDLM flag has been added, does some sort of compatibility flag need to be added?  If so, where and how?&lt;/p&gt;</comment>
                            <comment id="105136" author="gerrit" created="Thu, 29 Jan 2015 22:22:55 +0000"  >&lt;p&gt;Patrick Farrell (paf@cray.com) uploaded a new patch: &lt;a href=&quot;http://review.whamcloud.com/13564&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/13564&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-6179&quot; title=&quot;Lock ahead - Request extent locks from userspace&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-6179&quot;&gt;&lt;del&gt;LU-6179&lt;/del&gt;&lt;/a&gt; llite: Implement lock ahead&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: ae44d1474073f6030d83372d83ae2a18047fcc68&lt;/p&gt;</comment>
                            <comment id="105730" author="paf" created="Wed, 4 Feb 2015 20:49:16 +0000"  >&lt;p&gt;Jinshan -&lt;/p&gt;

&lt;p&gt;Wondering about this:&lt;br/&gt;
if (oscl-&amp;gt;ols_agl) {&lt;br/&gt;
        cl_object_put(env, osc2cl(osc));&lt;br/&gt;
        result = 0;&lt;br/&gt;
}&lt;/p&gt;

&lt;p&gt;It would be nice to somehow tell user space whether or not their locks conflicted.&lt;/p&gt;

&lt;p&gt;It looks like the -EWOULDBLOCK is returned from the server back to the client.&lt;/p&gt;

&lt;p&gt;So how about something like:&lt;br/&gt;
if (oscl-&amp;gt;ols_agl) {&lt;br/&gt;
        cl_object_put(env, osc2cl(osc));&lt;br/&gt;
        if (rc != -EWOULDBLOCK)&lt;br/&gt;
                rc = 0;&lt;br/&gt;
}&lt;/p&gt;

&lt;p&gt;Then in cl_glimpse_lock:&lt;br/&gt;
/* -EWOULDBLOCK is ignored by agls, because the lock&lt;br/&gt;
 * will be re-requested as blocking if it is needed */&lt;br/&gt;
if (agl &amp;amp;&amp;amp; result == -EWOULDBLOCK)&lt;br/&gt;
        result = 0;&lt;/p&gt;

&lt;p&gt;Then, in the lock ahead code, write it to a &quot;result&quot; field in each of the user provided extents.&lt;/p&gt;

&lt;p&gt;Thoughts?&lt;/p&gt;</comment>
                            <comment id="105733" author="paf" created="Wed, 4 Feb 2015 21:16:47 +0000"  >&lt;p&gt;Ah, mistake on my part.  AGL locks do not have CEF_NONBLOCK set.&lt;/p&gt;

&lt;p&gt;So no need to have any extra handling in cl_glimpse_block for -EWOULDBLOCK.&lt;/p&gt;

&lt;p&gt;The only special handling is in ll_lock_ahead, which will continue on to the next extent if a particular lock request receives -EWOULDBLOCK.&lt;/p&gt;</comment>
                            <comment id="106545" author="paf" created="Tue, 10 Feb 2015 22:31:22 +0000"  >&lt;p&gt;A quick note to my previous:&lt;br/&gt;
-EWOULDBLOCK cannot be returned to userspace because the lock request is asynchronous.  -ECANCELED CAN be returned to userspace.&lt;/p&gt;</comment>
                            <comment id="106847" author="paf" created="Thu, 12 Feb 2015 19:51:58 +0000"  >&lt;p&gt;A question for anyone inclined to think about it...&lt;/p&gt;

&lt;p&gt;I&apos;m having trouble with the fact that there&apos;s still a reference on locks that have not been granted yet.  So if I try an unmount while an asynchronously requested DLM lock has not yet been granted, there&apos;s a dangling reference that prevents unmounting.&lt;/p&gt;

&lt;p&gt;Any thoughts?  There&apos;s a deadlock on the server that I&apos;m trying to sort out that caused me to see this, but resolving that doesn&apos;t solve the fundamental problem of unmounting with an as-yet-incomplete async lock request.  Do AGL locks have some way to handle this I&apos;ve missed or broken?&lt;/p&gt;</comment>
                            <comment id="119140" author="paf" created="Fri, 19 Jun 2015 20:09:59 +0000"  >&lt;p&gt;It&apos;s been a while here, but the code is ready.&lt;/p&gt;

&lt;p&gt;The current patch set at &lt;a href=&quot;http://review.whamcloud.com/#/c/13564/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/#/c/13564/&lt;/a&gt; is ready for review.  I&apos;ve fixed all of the bugs, etc, I&apos;m currently aware of.&lt;/p&gt;

&lt;p&gt;It&apos;d still be great to try to land this for 2.8, so I&apos;d like to ask for review soon if possible.&lt;/p&gt;</comment>
                            <comment id="119155" author="jay" created="Fri, 19 Jun 2015 21:37:12 +0000"  >&lt;p&gt;I will take a look at this patch. Thanks,&lt;/p&gt;</comment>
                            <comment id="128692" author="adilger" created="Mon, 28 Sep 2015 23:22:14 +0000"  >&lt;p&gt;Patrick, what do you think about using the &lt;tt&gt;llapi_ladvise()&lt;/tt&gt; interface from &lt;a href=&quot;http://review.whamcloud.com/10029&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/10029&lt;/a&gt; &quot;&lt;a href=&quot;https://jira.whamcloud.com/browse/LU-4931&quot; title=&quot;New feature of giving server/storage side advice of accessing file&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-4931&quot;&gt;&lt;del&gt;LU-4931&lt;/del&gt;&lt;/a&gt; ladvise: Add feature of giving file access advices&quot; instead of a separate &quot;lockahead&quot; interface?  This would essentially be a new &quot;WILLWRITE&quot; advice that fetches locks from the OSS.  One benefit of that interface is that it is possible to submit multiple advices for a single file at one time, and process them at the server.  This shouldn&apos;t be a cause of lock deadlocks since all of the advices are &quot;trylock&quot; and processed at the server, so any conflicting locks would never be granted.&lt;/p&gt;

&lt;p&gt;Having a single ladvise interface (WILLWRITE, WILLNEED, or maybe rename/alias this to WILLREAD; RANDOM) would avoid complexity for application writers.&lt;/p&gt;

&lt;p&gt;Thoughts?&lt;/p&gt;</comment>
                            <comment id="132152" author="paf" created="Fri, 30 Oct 2015 13:31:25 +0000"  >&lt;p&gt;Yeah.  Sorry for the delay in getting back to you...  I&apos;ve got a better write up of this that I left elsewhere, which I&apos;ll put in Gerrit later today as a reply to you and Vitaly.&lt;/p&gt;

&lt;p&gt;Briefly, that only gets us one of the two benefits of lock ahead.  That avoids the lock exchange situation, where clients take turns holding a large lock on the file.&lt;br/&gt;
That&apos;s one benefit of lock ahead, but the other is that the clients already have the locks when they go to write.  That means the client does not have to wait and ask the server to get the lock.  (The server will grant immediately, but the client still has to wait for the round trip to get the lock.)&lt;/p&gt;

&lt;p&gt;The total RPC traffic is the same, since the clients had to ask for the locks, but a client can ask for many locks fairly quickly and overlap those requests.  So when a write request actually comes in from user space, the client already has the relevant lock and can begin to write immediately.&lt;/p&gt;

&lt;p&gt;Not having that - which would be the case if we just used ladvise to inform the servers of our intentions - slows things down significantly for an individual client.  In that case, it might be possible to overcome that by adding more clients, since we&apos;re no longer undergoing lock exchange.  I still like the explicit lock ahead interface because it&apos;s more efficient, but I am trying some benchmarking at Cray to see if something like this - Which I can imitate fairly well just by using request only locking without lock ahead - is workable.  (I did benchmark it last year and got very poor results, but I&apos;m now wondering if there wasn&apos;t something wrong with that testing, since the results were much worse than I can easily explain.)&lt;/p&gt;</comment>
                            <comment id="135089" author="spitzcor" created="Thu, 3 Dec 2015 15:40:53 +0000"  >&lt;p&gt;We should open an LUDOC ticket to track doc updates for this policy.&lt;/p&gt;</comment>
                            <comment id="135181" author="adilger" created="Fri, 4 Dec 2015 00:36:42 +0000"  >&lt;p&gt;Your comments about the benefits of lockahead vs ladvise lead me to think that we may be talking past each other, since in my mind the end results of the two interfaces will be the same.&lt;/p&gt;

&lt;p&gt;I&apos;m &lt;b&gt;not&lt;/b&gt; suggesting that the use of &lt;tt&gt;ladvise&lt;/tt&gt; is intended to change the semantics of lockahead to &lt;b&gt;only&lt;/b&gt; advise the OSS of our intended file access pattern.  Rather, the &lt;tt&gt;LADVISE_WILLWRITE&lt;/tt&gt; (or maybe &lt;tt&gt;LADVISE_WILLWRLOCK&lt;/tt&gt;?) advice could prefetch one or more &lt;tt&gt;LCK_PW&lt;/tt&gt; locks to the client for each specified extent like your lockahead implementation does.  The main difference would be the userspace API for &quot;lockahead&quot; would change to be &quot;ladvise&quot; so that it is consistent with other types of advice that an application might give to the filesystem that isn&apos;t available via existing kernel APIs.  Since ladvise is a Lustre-specific interface we can specify the advice and semantics as we see fit, so that higher-level libraries can better convey their intentions to Lustre.&lt;/p&gt;

&lt;p&gt;My main goal in suggesting this is to avoid having multiple different ways of passing higher-level file access advice from userspace to the filesystem, and to instead give application/library developers a single interface that they can use for different IO patterns (both read and write, sequential and strided and random, synchronous and asynchronous), possibly at the same time on different parts of the same file.&lt;/p&gt;

&lt;p&gt;As I wrote previously, I&apos;m not dead set on this path, I just wanted to make sure that we are talking about the same thing before we disagree on the solution.&lt;/p&gt;</comment>
                            <comment id="136892" author="paf" created="Fri, 18 Dec 2015 19:41:49 +0000"  >&lt;p&gt;OK, I think I perhaps follow you better now.&lt;/p&gt;

&lt;p&gt;I can see fitting lock ahead into the ladvise model; much of what&apos;s needed is there.  However, it would require some notable changes to ladvise.  It currently sends a special type of request over to the OST, rather than making a lock request.  Would we split ladvise at a low level to make lock requests (when doing lock ahead or similar) instead of using the OST_LADVISE requests as implemented in osc_ladvise_base?  I don&apos;t see, at that level, how I could adapt the ladvise infrastructure to requesting locks without just splitting it.&lt;/p&gt;

&lt;p&gt;There&apos;s still a lot of value in having just one interface, but I like it a lot less if I&apos;m only partly using ladvise to make the requests.&lt;/p&gt;

&lt;p&gt;So I&apos;m still on the fence, partly depending on your thoughts on the above.&lt;/p&gt;</comment>
                            <comment id="136893" author="paf" created="Fri, 18 Dec 2015 19:42:13 +0000"  >&lt;p&gt;Ah, and one more thought, more pragmatically: ladvise is not landed yet.&lt;/p&gt;</comment>
                            <comment id="137256" author="adilger" created="Wed, 23 Dec 2015 06:50:57 +0000"  >&lt;p&gt;My interest in using the ladvise API is to be consistent in userspace for &lt;tt&gt;LADVISE_WILLREAD&lt;/tt&gt; and &lt;tt&gt;LADVISE_WILLWRITE&lt;/tt&gt;.  I&apos;m a bit unhappy that the RPC transport would be different between the two, but I don&apos;t think that is fatal.  I&apos;d rather use the existing LDLM infrastructure for lock-ahead, instead of shoe-horning it into &lt;tt&gt;OST_LADVISE&lt;/tt&gt;.&lt;/p&gt;

&lt;p&gt;I expect that the ladvise code will be landed early on with 2.9 so I don&apos;t think that would be an obstacle.&lt;/p&gt;</comment>
                            <comment id="137319" author="paf" created="Wed, 23 Dec 2015 20:23:07 +0000"  >&lt;p&gt;All right.  I&apos;m certainly game for rebasing on ladvise, though that will come a bit later as I&apos;ll have to adapt ladvise slightly to fit the new functionality.&lt;/p&gt;

&lt;p&gt;I&apos;m not sure I understand exactly what you&apos;re getting at for LADVISE_WILLREAD and LADVISE_WILLWRITE (though I agree with the larger goal of having only one interface as much as possible).  Are you talking about that just in terms of how the lock mode is specified?&lt;/p&gt;</comment>
                            <comment id="137394" author="adilger" created="Thu, 24 Dec 2015 08:32:12 +0000"  >&lt;p&gt;I&apos;m thinking lockahead would mostly be related to WILLWRITE. The current WILLREAD support for ladvise is prefetching the data into cache on the OSS side, and I don&apos;t know if that makes sense to integrate into lockahead or not? Since the DLM locking for reads isn&apos;t going to conflict like it does for writes, the only benefit I can see is that it makes sense to cancel potentially conflicting write locks on the region WILLREAD is covering, unless the lock holder is the requesting node. &lt;/p&gt;</comment>
                            <comment id="137449" author="vitaly_fertman" created="Fri, 25 Dec 2015 17:25:03 +0000"  >&lt;p&gt;afaics, the discussion in &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-4931&quot; title=&quot;New feature of giving server/storage side advice of accessing file&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-4931&quot;&gt;&lt;del&gt;LU-4931&lt;/del&gt;&lt;/a&gt; states that fadvise and ladvise are to be kept separate, in the sense that ladvise deals with the OST cache, i.e. the client giving the advice has cluster-wide knowledge about this file&apos;s IO patterns and timing (otherwise, advices like DONTNEED will not work if other clients have not finished).  Whereas fadvise deals with the client cache, which in its turn may need an fs callback to let the FS do some client cache pre-fetching, or even re-use ladvise functionality if needed.&lt;/p&gt;

&lt;p&gt;Having said that, lock ahead is relevant to fadvise, and it seems it should stay separate from ladvise until we want to re-implement fadvise functionality through ladvise as well; otherwise, the purpose of ladvise becomes very confusing.  However, it does not mean you cannot create 2 separate lfs commands sharing the code and the same ioctl between them.&lt;/p&gt;</comment>
                            <comment id="137451" author="adilger" created="Sat, 26 Dec 2015 02:09:05 +0000"  >&lt;p&gt;I don&apos;t have any confidence that the upstream fadvise() call will be changed in a way that is useful to Lustre any time soon. Using ladvise for lockahead makes sense to me, since it is essentially telling the server that the client will be writing the given ranges in the near future, and to optimize this the result is to pass write locks to the client if available.  Since the lockahead code is advisory, it isn&apos;t actually requiring the locks to be sent (e.g. if conflicting with other clients holding the lock) so it fits the current API reasonably well. &lt;/p&gt;</comment>
                            <comment id="155705" author="paf" created="Tue, 14 Jun 2016 21:59:00 +0000"  >&lt;p&gt;Andreas made an excellent comment in the review (which I can&apos;t reply to there because I&apos;m in the middle of replying to the other comments):&lt;/p&gt;

&lt;p&gt;&quot;Do you think &quot;lockahead&quot; is the right name for this advice, or &quot;willwrite&quot;? Not necessarily a request to change it, but an open question. Note: I&apos;d prefer &quot;lockahead&quot; over &quot;lock_ahead&quot; since the other advices are also a single word.&lt;br/&gt;
&quot;lockahead&quot; is very specific request for what action the application wants taken, while &quot;willwrite&quot; is saying what the application wants to do and leaves the &quot;what&quot; of optimization to Lustre. For example, &quot;willwrite&quot; might send both the lockahead request as well as (potentially) an fallocate() request to preallocate the file blocks more efficiently...&quot;&lt;/p&gt;

&lt;p&gt;That&apos;s a good point.  Lock ahead isn&apos;t a very good name for the feature; it&apos;s just a name for a particular use of the feature.&lt;/p&gt;

&lt;p&gt;I&apos;m leery about naming it WILLWRITE, though.  It&apos;s possible to request READ locks via this mechanism (yeah, there&apos;s no obvious application, but it&apos;s possible).  I also thought it could be used for requesting group locks, potentially.&lt;/p&gt;

&lt;p&gt;How about &quot;lockrequest/LU_LADVISE_LOCKREQUEST&quot;?  I will similarly rename the various other bits.&lt;/p&gt;</comment>
                            <comment id="155706" author="paf" created="Tue, 14 Jun 2016 22:00:31 +0000"  >&lt;p&gt;Note that perhaps a request to fallocate could be included as a flag which would generate an actual ladvise RPC, as opposed to the lock request RPC generated by lockahead.&lt;/p&gt;</comment>
                            <comment id="186140" author="paf" created="Fri, 24 Feb 2017 19:31:08 +0000"  >&lt;p&gt;The attached files are instructions for building, installing, and testing MPICH with lockahead, and two things you need for that process - A fully built and up-to-date package of autotools binaries, and the patch to ANL MPICH which enables lockahead via ladvise.  (This patch also includes support for the older ioctl based interface.)&lt;/p&gt;

&lt;p&gt;The patch is &lt;b&gt;NOT&lt;/b&gt; the final version that will be submitted to ANL (it needs cleaning up), but it has the same functionality, and does work correctly.&lt;/p&gt;</comment>
                            <comment id="189945" author="adilger" created="Wed, 29 Mar 2017 03:40:55 +0000"  >&lt;p&gt;Cong, would it be possible for you to test the attached MPICH patch to see what kind of performance improvements you can get.&lt;/p&gt;

&lt;p&gt;This would need to use a version of Lustre built with patch &lt;a href=&quot;https://review.whamcloud.com/13564&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/13564&lt;/a&gt; &quot;&lt;a href=&quot;https://jira.whamcloud.com/browse/LU-6179&quot; title=&quot;Lock ahead - Request extent locks from userspace&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-6179&quot;&gt;&lt;del&gt;LU-6179&lt;/del&gt;&lt;/a&gt; llite: Implement ladvise lockahead&quot; such as those available under &lt;a href=&quot;https://build.hpdd.intel.com/job/lustre-reviews/45851/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://build.hpdd.intel.com/job/lustre-reviews/45851/&lt;/a&gt; (e.g. &lt;a href=&quot;https://build.hpdd.intel.com/job/lustre-reviews/45851/arch=x86_64,build_type=server,distro=el7,ib_stack=inkernel/artifact/artifacts/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://build.hpdd.intel.com/job/lustre-reviews/45851/arch=x86_64,build_type=server,distro=el7,ib_stack=inkernel/artifact/artifacts/&lt;/a&gt; or similar).&lt;/p&gt;

&lt;p&gt;Patrick,&lt;br/&gt;
it probably makes sense to include the MPICH patch in the &lt;tt&gt;lustre/contrib&lt;/tt&gt; directory so that it is available for others to reference/patch as needed.  Ideally, that would be a closer-to-final version of the patch than what is attached here.&lt;/p&gt;

&lt;p&gt;One thing I note is that lockahead is only enabled when explicitly requested in the ADIO hints.  That potentially limits the value of this feature to a very small number of users that know about setting the lockahead hints, rather than helping many users by default.&lt;/p&gt;</comment>
                            <comment id="189986" author="paf" created="Wed, 29 Mar 2017 15:00:15 +0000"  >&lt;p&gt;Andreas,&lt;/p&gt;

&lt;p&gt;I&apos;ll float the idea of turning it on by default, but I&apos;m pretty sure the Cray libraries people will feel strongly that it should be limited to use with hints, particularly because in some of the simplest cases (single writer per stripe) it will reduce performance.  We could try to bake in enough smarts to avoid those cases, but at least so far, the preference of the libraries people has been to keep it with the hints.&lt;/p&gt;

&lt;p&gt;About the Lustre contrib patch: from looking at it as it is in the source currently, I believe that patch has been integrated into MPICH.  It&apos;s actually probably time to remove the existing one, but it would be reasonable to add a new one enabling lock ahead.&lt;/p&gt;

&lt;p&gt;I&apos;ll see about getting an updated version from our library developers.  Unless it comes in right away, I will probably make that a separate commit from the current one.&lt;/p&gt;</comment>
                            <comment id="190027" author="adilger" created="Wed, 29 Mar 2017 18:20:11 +0000"  >&lt;p&gt;Patrick, sorry I wasn&apos;t meaning that you should do anything with the old patches in &lt;tt&gt;lustre/contrib&lt;/tt&gt;, rather that you should add the new lockahead MPICH patch there. It would be great if you could delete the old MPICH patches at the same time, since they are long obsolete.  It looks like the &lt;tt&gt;lustre/contrib/README&lt;/tt&gt; file also needs an update.&lt;/p&gt;</comment>
                            <comment id="190032" author="czx0003" created="Wed, 29 Mar 2017 18:59:13 +0000"  >&lt;p&gt;Hi Andreas,&lt;/p&gt;

&lt;p&gt;Sure, I will run the test.&lt;/p&gt;

&lt;p&gt;Cong&lt;/p&gt;</comment>
                            <comment id="191964" author="paf" created="Thu, 13 Apr 2017 20:19:58 +0000"  >&lt;p&gt;@JamesNunez - The attached are the Cray produced documents on the lockahead specific testing we did internally.  It&apos;s also seen some testing with applications, load testing, and of course the sanity tests.&lt;/p&gt;</comment>
                            <comment id="195834" author="paf" created="Mon, 15 May 2017 15:05:50 +0000"  >&lt;p&gt;Cong,&lt;/p&gt;

&lt;p&gt;I&apos;m looking at your test results, and since the two ways of running gave almost identical results, I think we&apos;ve got a problem, possibly a bottleneck somewhere else.  (There could be a bug in the MPICH or Lustre side as well causing lockahead not to activate, but I did test both, so we&apos;ll assume no for the moment.)&lt;/p&gt;

&lt;p&gt;First: What happens if you try just 4 processes and 4 aggregators, no lockahead?  What does the result look like?  That &lt;b&gt;should&lt;/b&gt; avoid lock contention entirely and give better results...  But I bet we&apos;re still going to see that same 2.6 GB/s final number?&lt;/p&gt;

&lt;p&gt;What does 1 aggregator do with a 1 stripe file?  What about 2 aggregators with a 1 stripe file, with and without lockahead?&lt;/p&gt;

&lt;p&gt;And what about what should probably be the maximum performance case, 8 process FPP without collective I/O?&lt;/p&gt;</comment>
                            <comment id="197509" author="czx0003" created="Tue, 30 May 2017 05:07:01 +0000"  >&lt;p&gt;Hi Patrick,&lt;/p&gt;

&lt;p&gt;Thanks for the comments! We have conducted 3 tests: perfect scenario, varying number of processes and varying Lustre Stripe Size. &#8220;LockAheadResults.docx&#8221; documents the details. &lt;/p&gt;

&lt;p&gt;&amp;#91;Test 1&amp;#93; In the perfect scenario, we launch 4 Processes on 4 Lustre Clients (1 Process per Client), accessing 4 Lustre OSTs remotely. Both Original and Lock Ahead cases deliver 2700MB/s bandwidth. This is the maximum bandwidth of our Lustre file system. (Section 2.1 Perfect Scenario (Independent I/O))&lt;/p&gt;

&lt;p&gt;&amp;#91;Test 2&amp;#93; To conduct a test where the Lock Ahead code should deliver superior performance to the original code, we launch up to 512 processes to perform independent I/O to our Lustre file system. The bandwidth of both Original and Lock Ahead cases is 2000MB/s. (Section 2.2 Vary number of processes (Independent I/O))&lt;/p&gt;

&lt;p&gt;&amp;#91;Test 3&amp;#93; We have also investigated the effect of various Lustre Stripe Sizes on I/O performance. We keep the IOR Transfer Size constant (4MB) and increase the Lustre Stripe Size from 1MB to 64MB. Both Original and Lock Ahead cases deliver 2000MB/s bandwidth. (Section 2.3 Vary Lustre Stripe Size (Independent I/O))&lt;/p&gt;
                            <comment id="197537" author="paf" created="Tue, 30 May 2017 12:50:51 +0000"  >&lt;p&gt;If you&apos;re getting the maximum bandwidth already without lockahead, then it&apos;s definitely not going to help.  There&apos;s no help for it to give.&lt;/p&gt;

&lt;p&gt;I don&apos;t completely follow your description of the patterns, but that&apos;s OK.  Can we try simplifying?&lt;/p&gt;

&lt;p&gt;Let&apos;s try 1 stripe, 1 process from one node.  What&apos;s the bandwidth #?&lt;br/&gt;
Then try 2 processes, one per node (so, 2 nodes) to a single file (again on one OST).  What does that show?  (Without lockahead)&lt;br/&gt;
(Also, please share your IOR command lines for these, like you did before.)&lt;br/&gt;
Then, if there&apos;s a difference in those cases, try lockahead in the second case.&lt;/p&gt;

&lt;p&gt;If we&apos;ve got everything set up right and the OSTs are fast enough for this to matter (I think they may not be), then the second case should be slower than the first (and lockahead should help).  But it looks like each OST is capable of ~600-700 MB/s, that may not be enough to show this, depending on network latency, etc.  I would expect to see the effect, but it might not show up.  We make use of this primarily on much faster OSTs.  (3-6 GB/s, for example)  So if it doesn&apos;t show up, maybe you could try RAM backed OSTs?&lt;/p&gt;

&lt;p&gt;Thanks!&lt;/p&gt;</comment>
                            <comment id="197568" author="adilger" created="Tue, 30 May 2017 17:36:27 +0000"  >&lt;p&gt;Cong, the lockahead code will only show a benefit if there is a single shared file with all threads writing to that file.  Otherwise, Lustre will grant a single whole-file lock to each client at first write, and there is no lock contention.&lt;/p&gt;</comment>
                            <comment id="197573" author="czx0003" created="Tue, 30 May 2017 17:47:13 +0000"  >&lt;p&gt;Hi Andreas,&lt;/p&gt;

&lt;p&gt;Yes. In my second test, I launch 512 processes on 8 Lustre Clients (64 Processes/Client) to write a single shared file, there should be lock contentions in Lustre.&lt;/p&gt;</comment>
                            <comment id="197579" author="paf" created="Tue, 30 May 2017 18:23:21 +0000"  >&lt;p&gt;Cong,&lt;/p&gt;

&lt;p&gt;Yes, that&apos;s true, but with that many processes and so few (and relatively slow) OSTs, you may not see any difference.  For example, in this case, your OSTs are (best case) capable of 2700 MB/s total.  That means each process only needs to provide 42 MB/s of that, and each node only ~340 MB/s.  That means per OST, each node only needs to provide ~85 MB/s.  That&apos;s not much, so I&apos;m not surprised lockahead isn&apos;t giving any benefit.&lt;/p&gt;

&lt;p&gt;Lockahead is really for situations where a single OST is faster than one client can write to it.  One process on one client can generally write at 1-2 GB/s, depending on various network and CPU properties.  So these OSTs are quite slow for this testing.&lt;/p&gt;

&lt;p&gt;So, this testing is sensitive to scale and latency issues.  Are you able to do the small tests I requested?  They should shed some light.&lt;/p&gt;</comment>
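The bandwidth split discussed above can be sanity-checked with a quick back-of-envelope sketch. The 8-node and 64-process-per-node figures come from the thread; the OST count of 4 is an inference from the ~600-700 MB/s per-OST and 2700 MB/s aggregate estimates, so treat it as an assumption:

```python
# Back-of-envelope check of the per-process / per-node bandwidth split.
# Assumed setup: 8 client nodes, 64 processes each, 4 OSTs (inferred),
# ~2700 MB/s aggregate OST bandwidth (best case, from the thread).
nodes, procs_per_node, osts = 8, 64, 4
aggregate_mb_s = 2700

per_node = aggregate_mb_s / nodes            # ~337 MB/s per node
per_process = per_node / procs_per_node      # ~5.3 MB/s per process
per_node_per_ost = per_node / osts           # ~84 MB/s per node, per OST

print(per_node, per_process, per_node_per_ost)
```

At a few MB/s per process, no single writer is anywhere near saturating an OST, which is why lockahead shows no benefit at this scale.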
                            <comment id="197589" author="jay" created="Tue, 30 May 2017 19:40:20 +0000"  >&lt;p&gt;Let&apos;s reduce the number of processes per client and see how it goes. For example, let&apos;s do 1 process per client with 8 clients, and then 2 processes per client, etc.&lt;/p&gt;</comment>
                            <comment id="197604" author="czx0003" created="Tue, 30 May 2017 21:14:48 +0000"  >&lt;p&gt;Hi Jinshan,&lt;/p&gt;

&lt;p&gt;Thanks for the suggestions! Yes, in our second test (Section 2.2 Vary number of processes (Independent I/O)), we scaled from 1 process per client with 8 clients (8 processes total) up to 64 processes per client with 8 clients (512 processes total).&lt;/p&gt;

&lt;p&gt;Hi Patrick,&lt;/p&gt;

&lt;p&gt;Yes, we have tried the simple test you suggested. Please have a look at the results in section 2.4: Simple Test (1 process and 2 processes accessing a single shared file on one OST).&lt;/p&gt;</comment>
                            <comment id="198358" author="paf" created="Tue, 6 Jun 2017 19:01:08 +0000"  >&lt;p&gt;Cong,&lt;/p&gt;

&lt;p&gt;Sorry to take a bit to get back to you.&lt;/p&gt;

&lt;p&gt;Given the #s in section 2.4, you&apos;re &lt;b&gt;barely&lt;/b&gt; seeing the problem, and lockahead does have some overhead.  I wouldn&apos;t necessarily expect it to help in that case.  It would be much easier to see with faster OSTs, so I&apos;d like to request RAM-backed OSTs.&lt;/p&gt;

&lt;p&gt;It&apos;s also possible something is wrong with the library.  While I think we&apos;ll need RAM backed OSTs (or at least, much faster OSTs) to see benefit, we can explore this possibility as well.&lt;/p&gt;

&lt;p&gt;Let&apos;s take one of the very simple tests, like a 1 stripe file with 1 process per client on 2 clients.  I assume you&apos;re creating the file fresh before the test, but if not, please remove it and re-create it right before the test.  Then, let&apos;s look at lock count before and after running IOR (add the -k option so the file isn&apos;t deleted, otherwise the locks will be cleaned up).&lt;/p&gt;

&lt;p&gt;Specifically, on one of the clients, cat the lock count for the OST where the file is, before and after the test:&lt;br/&gt;
cat /sys/fs/lustre/ldlm/namespaces/&amp;#91;OST&amp;#93;/lock_count&lt;/p&gt;

&lt;p&gt;If the file is not deleted and the lock count hasn&apos;t gone up, lock ahead didn&apos;t work for some reason.&lt;/p&gt;

&lt;p&gt;Again, I think we&apos;ll need RAM backed OSTs regardless...  But this would be useful even without that.&lt;/p&gt;</comment>
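The before/after lock-count check above can be sketched as a small helper. This is illustrative only: the namespace directory name must be substituted with the actual client-side OST namespace, and the function names are invented for this sketch:

```python
# Sketch of the lock-count check described above: read the client-side LDLM
# lock count for one OST namespace before and after an 'IOR -k' run.
from pathlib import Path

def read_lock_count(ns_dir):
    """Read lock_count from an LDLM namespace directory (client side)."""
    return int(Path(ns_dir, "lock_count").read_text())

def lockahead_took_effect(before, after):
    """With the file kept (-k), extra extent locks should remain granted;
    if the count did not rise, lockahead did not work for some reason."""
    return after > before

# Usage (on one client, around the IOR run; path is a placeholder):
#   ns = "/sys/fs/lustre/ldlm/namespaces/<OST namespace>"
#   before = read_lock_count(ns)
#   ... run IOR with -k against a freshly created 1-stripe file ...
#   after = read_lock_count(ns)
#   lockahead_took_effect(before, after)
```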
                            <comment id="198359" author="paf" created="Tue, 6 Jun 2017 19:06:28 +0000"  >&lt;p&gt;Slides and paper from Cray User Group 2017 attached.  They contain real performance #s on real hardware, including from real applications.  Just for reference in case anyone is curious.&lt;/p&gt;</comment>
                            <comment id="198364" author="czx0003" created="Tue, 6 Jun 2017 19:46:42 +0000"  >&lt;p&gt;Hi Patrick,&lt;/p&gt;

&lt;p&gt;Thanks for the great suggestions! We conducted more tests recently and were able to demonstrate the power of Lock Ahead in test &quot;2.3 Vary Lustre Stripe Size (Independent I/O)&quot;.&lt;/p&gt;

&lt;p&gt;In this test, the transfer size of each process is configured to be 1MB and the stripe size grows from 256KB to 16MB. When the stripe size equals 16MB, 16 processes write to a single stripe simultaneously, leading to lock contention. In this test, the Lock Ahead code performs 21.5% better than the original code.&lt;/p&gt;</comment>
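The contention in the 16MB-stripe case can be illustrated with a small sketch mapping each rank's 1MB transfer to an OST object under standard round-robin striping. The stripe count of 4 here is illustrative, not taken from the report:

```python
# Why a 16MB stripe with 1MB transfers concentrates writers:
# under round-robin striping, byte offset -> stripe index -> OST object.
MB = 1 << 20

def ost_index(offset, stripe_size, stripe_count):
    """OST object a byte offset lands on under round-robin striping."""
    return (offset // stripe_size) % stripe_count

# 16 ranks, each writing one 1MB transfer at offset rank*1MB:
stripe_size, stripe_count = 16 * MB, 4   # stripe_count is an assumption
targets = {ost_index(rank * MB, stripe_size, stripe_count) for rank in range(16)}
print(targets)  # -> {0}: all 16 ranks hit the same OST object, so they
                # contend for the same extent lock
```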
                            <comment id="198365" author="paf" created="Tue, 6 Jun 2017 20:14:10 +0000"  >&lt;p&gt;Huh, OK!  That&apos;s a clever way to show it.&lt;/p&gt;

&lt;p&gt;A faster OST will show much larger benefits, of course. &lt;img class=&quot;emoticon&quot; src=&quot;https://jira.whamcloud.com/images/icons/emoticons/smile.png&quot; height=&quot;16&quot; width=&quot;16&quot; align=&quot;absmiddle&quot; alt=&quot;&quot; border=&quot;0&quot;/&gt;&lt;/p&gt;</comment>
                            <comment id="198370" author="czx0003" created="Tue, 6 Jun 2017 21:19:48 +0000"  >&lt;p&gt;Hi Patrick,&lt;/p&gt;

&lt;p&gt;In the very simple test you suggested, the lock count does increase from 0 to 4010, so lock ahead works well.&lt;br/&gt;
Yes, a faster OST should show more benefits. &lt;img class=&quot;emoticon&quot; src=&quot;https://jira.whamcloud.com/images/icons/emoticons/smile.png&quot; height=&quot;16&quot; width=&quot;16&quot; align=&quot;absmiddle&quot; alt=&quot;&quot; border=&quot;0&quot;/&gt;&lt;/p&gt;</comment>
                            <comment id="199706" author="tappro" created="Tue, 20 Jun 2017 13:32:06 +0000"  >&lt;p&gt;Patrick, the new test 255c in sanity.sh reports the following:&lt;/p&gt;
&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;== sanity test 255c: suite of ladvise lockahead tests ================================================ 04:54:04 (1495688044)
Starting test test10 at 1495688045
Finishing test test10 at 1495688045
Starting test test20 at 1495688045
cannot give advice: Invalid argument (22)
cannot give advice: Invalid argument (22)
cannot give advice: Invalid argument (22)
cannot give advice: Invalid argument (22)
cannot give advice: Invalid argument (22)
cannot give advice: Invalid argument (22)
Finishing test test20 at 1495688045

&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;Is that expected, or is the test not working properly?&lt;/p&gt;

&lt;p&gt;This is from the latest test results &lt;a href=&quot;https://testing.hpdd.intel.com/sub_tests/38e1f10a-4129-11e7-91f4-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/sub_tests/38e1f10a-4129-11e7-91f4-5254006e85c2&lt;/a&gt;&lt;/p&gt;

</comment>
                            <comment id="199714" author="paf" created="Tue, 20 Jun 2017 13:51:24 +0000"  >&lt;p&gt;Oh.  Hm.  No - It&apos;s skipping some of the tests.  Sorry about that, thanks for pointing it out.  Some development stuff I was doing escaped into what I pushed upstream...  I&apos;ll fix that when I rebase to merge.&lt;/p&gt;</comment>
                            <comment id="208996" author="gerrit" created="Thu, 21 Sep 2017 06:12:47 +0000"  >&lt;p&gt;Oleg Drokin (oleg.drokin@intel.com) merged in patch &lt;a href=&quot;https://review.whamcloud.com/13564/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/13564/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-6179&quot; title=&quot;Lock ahead - Request extent locks from userspace&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-6179&quot;&gt;&lt;del&gt;LU-6179&lt;/del&gt;&lt;/a&gt; llite: Implement ladvise lockahead&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: a8dcf372f430c308d3e96fb506563068d0a80c2d&lt;/p&gt;</comment>
                            <comment id="209026" author="pjones" created="Thu, 21 Sep 2017 11:57:44 +0000"  >&lt;p&gt;Landed for 2.11&lt;/p&gt;</comment>
                            <comment id="266492" author="gerrit" created="Wed, 1 Apr 2020 01:20:34 +0000"  >&lt;p&gt;Andreas Dilger (adilger@whamcloud.com) uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/38109&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/38109&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-6179&quot; title=&quot;Lock ahead - Request extent locks from userspace&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-6179&quot;&gt;&lt;del&gt;LU-6179&lt;/del&gt;&lt;/a&gt; llite: remove LOCKAHEAD_OLD compatibility&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: f6bc909bfda5521454631a4985648e07c63137ee&lt;/p&gt;</comment>
                            <comment id="266904" author="gerrit" created="Mon, 6 Apr 2020 14:28:50 +0000"  >&lt;p&gt;Oleg Drokin (green@whamcloud.com) merged in patch &lt;a href=&quot;https://review.whamcloud.com/38109/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/38109/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-6179&quot; title=&quot;Lock ahead - Request extent locks from userspace&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-6179&quot;&gt;&lt;del&gt;LU-6179&lt;/del&gt;&lt;/a&gt; llite: remove LOCKAHEAD_OLD compatibility&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: 5315db3f1066619d6effe4f778d2df3ad1ba738f&lt;/p&gt;</comment>
                            <comment id="267160" author="gerrit" created="Wed, 8 Apr 2020 14:13:07 +0000"  >&lt;p&gt;Patrick Farrell (farr0186@gmail.com) uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/38179&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/38179&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-6179&quot; title=&quot;Lock ahead - Request extent locks from userspace&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-6179&quot;&gt;&lt;del&gt;LU-6179&lt;/del&gt;&lt;/a&gt; llite: Remove last lockahead old compat&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: d23ed5713d41b5d8c69a989bb2600d83bf701c31&lt;/p&gt;</comment>
                            <comment id="299881" author="gerrit" created="Wed, 28 Apr 2021 02:10:15 +0000"  >&lt;p&gt;Oleg Drokin (green@whamcloud.com) merged in patch &lt;a href=&quot;https://review.whamcloud.com/38179/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/38179/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-6179&quot; title=&quot;Lock ahead - Request extent locks from userspace&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-6179&quot;&gt;&lt;del&gt;LU-6179&lt;/del&gt;&lt;/a&gt; llite: Remove last lockahead old compat&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: 12a0c7b5944d9e48e38416c7cac2cde153e3148b&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10120">
                    <name>Blocker</name>
                                            <outwardlinks description="is blocking">
                                        <issuelink>
            <issuekey id="48239">LU-9962</issuekey>
        </issuelink>
                            </outwardlinks>
                                                        </issuelinktype>
                            <issuelinktype id="10010">
                    <name>Duplicate</name>
                                                                <inwardlinks description="is duplicated by">
                                                        </inwardlinks>
                                    </issuelinktype>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                            <outwardlinks description="is related to ">
                                        <issuelink>
            <issuekey id="32367">LU-7225</issuekey>
        </issuelink>
                            </outwardlinks>
                                                                <inwardlinks description="is related to">
                                        <issuelink>
            <issuekey id="48801">LU-10136</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="46545">LUDOC-379</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="28463">LU-6181</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="53914">LU-11618</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="28312">LU-6148</issuekey>
        </issuelink>
                            </inwardlinks>
                                    </issuelinktype>
                    </issuelinks>
                <attachments>
                            <attachment id="26317" name="LUSTRE-LockAhead-140417-1056-170.pdf" size="65496" author="paf" created="Fri, 14 Apr 2017 16:09:57 +0000"/>
                            <attachment id="26299" name="LockAhead-TestReport.txt" size="1425376" author="paf" created="Thu, 13 Apr 2017 20:18:14 +0000"/>
                            <attachment id="26927" name="LockAheadResults.docx" size="528061" author="czx0003" created="Tue, 6 Jun 2017 19:29:53 +0000"/>
                            <attachment id="25570" name="anl_mpich_build_guide.txt" size="17923" author="paf" created="Fri, 24 Feb 2017 19:29:12 +0000"/>
                            <attachment id="26925" name="cug paper.pdf" size="730886" author="paf" created="Tue, 6 Jun 2017 19:05:32 +0000"/>
                            <attachment id="25571" name="lockahead_ladvise_mpich_patch" size="30864" author="paf" created="Fri, 24 Feb 2017 19:29:12 +0000"/>
                            <attachment id="26926" name="mmoore cug slides.pdf" size="1205365" author="paf" created="Tue, 6 Jun 2017 19:05:33 +0000"/>
                            <attachment id="25572" name="sle11_build_tools.tar.gz" size="2604413" author="paf" created="Fri, 24 Feb 2017 19:29:14 +0000"/>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzx58n:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>17290</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                                                                                </customfields>
    </item>
</channel>
</rss>