<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 01:48:02 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary, append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92">
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-5042] Recovery Lock Replay</title>
                <link>https://jira.whamcloud.com/browse/LU-5042</link>
                <project id="10000" key="LU">Lustre</project>
<description>&lt;p&gt;While performing load testing on one of our filesystems this week, we power cycled the OSSs to test recovery.  To my surprise it ended up taking the OSS several hours to complete recovery, and the vast majority of that time was spent in the lock replay stage.&lt;/p&gt;

&lt;p&gt;What I know for certain is that the OST had roughly 500,000 locks outstanding before it was power cycled.  When it came back up, all the clients properly reconnected to it and seem to have decided to replay &lt;em&gt;all&lt;/em&gt; their locks, used and unused.  I thought we had fixed this years ago, so I verified that the tunables were set such that we shouldn&apos;t replay unused locks.  They appeared to be set properly, but those 500,000 locks were still resent to the OST.&lt;/p&gt;

&lt;p&gt;After the recovery timer dropped to zero and I didn&apos;t quickly see the recovery-complete message, I dumped some stacks from the OST.  They showed that the tgt_recov thread was in stage two, sequentially replaying all of those 500,000 locks.  Because this was being done sequentially from a single thread, the disks were hardly working and the system looked idle.&lt;/p&gt;

&lt;p&gt;This exact behavior has been reported on our production machines, and I can easily understand why an administrator might think the system was hung/deadlocked and give up on it.  Basically, the recovery timer drops to zero and then recovery doesn&apos;t actually complete for several hours.&lt;/p&gt;

&lt;p&gt;You should be able to reproduce this fairly easily on any test system.  Just ensure your server has a large number of locks enqueued and then power cycle it.&lt;/p&gt;</description>
                <environment></environment>
        <key id="24643">LU-5042</key>
            <summary>Recovery Lock Replay</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="3" iconUrl="https://jira.whamcloud.com/images/icons/priorities/major.svg">Major</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="bfaccini">Bruno Faccini</assignee>
                                    <reporter username="behlendorf">Brian Behlendorf</reporter>
                        <labels>
                            <label>llnl</label>
                    </labels>
                <created>Fri, 9 May 2014 23:40:23 +0000</created>
                <updated>Thu, 14 Jun 2018 21:41:37 +0000</updated>
                            <resolved>Thu, 28 Aug 2014 14:32:54 +0000</resolved>
                                    <version>Lustre 2.4.3</version>
                                    <fixVersion>Lustre 2.7.0</fixVersion>
                                        <due></due>
                            <votes>0</votes>
                                    <watches>10</watches>
                <comments>
                            <comment id="83689" author="jay" created="Sat, 10 May 2014 00:53:40 +0000"  >&lt;p&gt;Do you have a rough idea what type the locks are? Right now in 2.4 implementation, only read locks w/o covering any read ahead pages will be canceled during recovery.&lt;/p&gt;

&lt;p&gt;We have a better implementation for cancel for recovery in master, but I want to clarify the situation before initiating a backport.&lt;/p&gt;</comment>
                            <comment id="83807" author="bzzz" created="Mon, 12 May 2014 06:14:25 +0000"  >&lt;p&gt;shouldn&apos;t we try to replay locks concurrently? given the locks do not conflict we could also skip regular processing on the server. we easily do 10K file creations a second from many client. with non-conflicting locks I&apos;d expect something like 100K enqueues/second.&lt;/p&gt;</comment>
                            <comment id="83860" author="jay" created="Mon, 12 May 2014 15:56:32 +0000"  >&lt;p&gt;That would be a good direction to improve. However, I think there might be a BUG on the client side to replay unnecessary locks.&lt;/p&gt;</comment>
                            <comment id="83863" author="bzzz" created="Mon, 12 May 2014 15:59:22 +0000"  >&lt;p&gt;from the lock itself it might be hard to decide whether the lock is &quot;useful&quot; (i.e. has some data behind).&lt;/p&gt;</comment>
                            <comment id="83875" author="jay" created="Mon, 12 May 2014 16:38:31 +0000"  >&lt;p&gt;ah right now in the implementation of 2.4, our policy is that if there is no readers for a LCK_PR, we think it&apos;s safe to cancel the lock. Of course we do trick to check the case if a page is being covered by multiple locks.&lt;/p&gt;

&lt;p&gt;In the latest implementation in master, we also do check for write lock as well - if canceling a write lock won&apos;t cause write back, it&apos;s also good to cancel it on the local side during recovery.&lt;/p&gt;</comment>
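
&lt;p&gt;In other words, the policy is roughly the following (just a sketch - lock_has_cached_reads() and lock_covers_dirty_pages() are made-up names for the real page checks):&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;/* sketch of the cancel-before-replay policy described above */
static int lock_worth_replaying(struct ldlm_lock *lock)
{
        /* 2.4: a PR lock protecting no cached reads can be
         * cancelled locally instead of being replayed */
        if (lock-&amp;gt;l_granted_mode == LCK_PR &amp;amp;&amp;amp;
            !lock_has_cached_reads(lock))        /* made-up helper */
                return 0;                        /* cancel, skip replay */

        /* master extends the same idea to write locks: if dropping
         * the lock forces no writeback, cancel it locally too */
        if (lock-&amp;gt;l_granted_mode == LCK_PW &amp;amp;&amp;amp;
            !lock_covers_dirty_pages(lock))      /* made-up helper */
                return 0;

        return 1;                                /* replay this lock */
}
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>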
                            <comment id="83876" author="bzzz" created="Mon, 12 May 2014 16:42:57 +0000"  >&lt;p&gt;&quot;no readers&quot; doesn&apos;t mean a lock is useless - there might be data cached and protected by the lock. now if you cancel that you lose data. for example, that could be an executable running on the cluster with 1000x nodes. and after OST failover all those would have to re-read the data back.&lt;/p&gt;</comment>
                            <comment id="83877" author="jay" created="Mon, 12 May 2014 17:01:45 +0000"  >&lt;p&gt;indeed, I agree what you said.&lt;/p&gt;

&lt;p&gt;This is a trade-off to make the system operable sooner by shortening the recovery time. Otherwise, it would take a lot of time to replay all locks and the system is unusable during that time.&lt;/p&gt;</comment>
                            <comment id="83878" author="bzzz" created="Mon, 12 May 2014 17:09:42 +0000"  >&lt;p&gt;like said before - doing one lock a time can&apos;t be very fast. many lock replays in flight could make it few times faster which could be enough. iirc, we are able to process ~100K getattr/sec, which is more expensive than just locks (especially when you don&apos;t need to check for conflicts).&lt;/p&gt;</comment>
                            <comment id="83881" author="jay" created="Mon, 12 May 2014 17:34:07 +0000"  >&lt;p&gt;It totally depends on the work load - if the locks are for many clients, the res lock contention on the server side would be high, but it&apos;s worth trying.&lt;/p&gt;</comment>
                            <comment id="83885" author="behlendorf" created="Mon, 12 May 2014 17:58:20 +0000"  >&lt;p&gt;Jinshan, I&apos;m not sure what the mix of locks being replayed is.  We&apos;ve been running our SWL stress tests which loads up the filesystem with a random assortment of concurrent IOR, simul, fdtree, mdtest, and various application codes.  Each client appears to be replaying roughly 15,000 locks which I have a hard time believing are all covering dirty data which needs to be replayed.&lt;/p&gt;

&lt;p&gt;Replaying the enqueue also appears to be much slower than you&apos;re expecting.  We&apos;re seeing a rate of roughly 100 enqueues per second on ZFS because all the FID lookups are being done sequentially.  Each FID lookup on ZFS is likely going to require a synchronous IO because the OIs won&apos;t be cached yet after the reboot.  And it appears this is all done sequentially from the target_recov thread in 2.4.  Here&apos;s a target_recov stack from an OST showing the issue.  During this replay &apos;iostat -x&apos; reports that our disks are only roughly 10% utilized, which makes sense given the single-threaded read workload.&lt;/p&gt;

&lt;p&gt;Doing this sequentially wouldn&apos;t be so bad if there were an interface that could be used to prefetch the FID lookup, as described in &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-5041&quot; title=&quot;FID Prefetching&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-5041&quot;&gt;LU-5041&lt;/a&gt;.&lt;/p&gt;
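
&lt;p&gt;For illustration, a prefetch pass over the queued replay requests might look roughly like this (just a sketch - extract_resource_fid() and osd_fid_prefetch() are hypothetical names for what LU-5041 would provide):&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;/* sketch: warm the OI cache before the single-threaded replay
 * pass so the FID lookups mostly hit cache instead of disk */
static void prefetch_replay_fids(struct list_head *replay_queue)
{
        struct ptlrpc_request *req;

        list_for_each_entry(req, replay_queue, rq_list) {
                struct lu_fid fid;

                if (extract_resource_fid(req, &amp;amp;fid) != 0)
                        continue;
                /* asynchronous: start the OI ZAP read, do not wait */
                osd_fid_prefetch(&amp;amp;fid);
        }
}
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;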

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;cat /proc/5512/stack 
[&amp;lt;ffffffffa01e78dc&amp;gt;] cv_wait_common+0x8c/0x100 [spl]
[&amp;lt;ffffffffa01e7968&amp;gt;] __cv_wait_io+0x18/0x20 [spl]
[&amp;lt;ffffffffa03b15bb&amp;gt;] zio_wait+0xfb/0x1b0 [zfs]
[&amp;lt;ffffffffa031f6bd&amp;gt;] dbuf_read+0x3fd/0x740 [zfs]
[&amp;lt;ffffffffa031fb89&amp;gt;] __dbuf_hold_impl+0x189/0x480 [zfs]
[&amp;lt;ffffffffa031ff06&amp;gt;] dbuf_hold_impl+0x86/0xc0 [zfs]
[&amp;lt;ffffffffa0320f80&amp;gt;] dbuf_hold+0x20/0x30 [zfs]
[&amp;lt;ffffffffa0327767&amp;gt;] dmu_buf_hold+0x97/0x1d0 [zfs]
[&amp;lt;ffffffffa037be7f&amp;gt;] zap_get_leaf_byblk+0x4f/0x2a0 [zfs]
[&amp;lt;ffffffffa037c13a&amp;gt;] zap_deref_leaf+0x6a/0x80 [zfs]
[&amp;lt;ffffffffa037c500&amp;gt;] fzap_lookup+0x60/0x120 [zfs]
[&amp;lt;ffffffffa0381f01&amp;gt;] zap_lookup_norm+0xe1/0x190 [zfs]
[&amp;lt;ffffffffa0382043&amp;gt;] zap_lookup+0x33/0x40 [zfs]
[&amp;lt;ffffffffa0cf86e0&amp;gt;] osd_fid_lookup+0xb0/0x2e0 [osd_zfs]
[&amp;lt;ffffffffa0cf2311&amp;gt;] osd_object_init+0x1a1/0x6d0 [osd_zfs]
[&amp;lt;ffffffffa070712d&amp;gt;] lu_object_alloc+0xcd/0x300 [obdclass]
[&amp;lt;ffffffffa0708571&amp;gt;] lu_object_find_at+0x211/0x370 [obdclass]
[&amp;lt;ffffffffa07086e6&amp;gt;] lu_object_find+0x16/0x20 [obdclass]
[&amp;lt;ffffffffa0d8f6c5&amp;gt;] ofd_object_find+0x35/0xf0 [ofd]
[&amp;lt;ffffffffa0d9eb0d&amp;gt;] ofd_lvbo_init+0x32d/0x950 [ofd]
[&amp;lt;ffffffffa084fa64&amp;gt;] ldlm_resource_get+0x374/0x820 [ptlrpc]
[&amp;lt;ffffffffa084a1b9&amp;gt;] ldlm_lock_create+0x59/0xcc0 [ptlrpc]
[&amp;lt;ffffffffa08716b6&amp;gt;] ldlm_handle_enqueue0+0x156/0x10a0 [ptlrpc]
[&amp;lt;ffffffffa0872666&amp;gt;] ldlm_handle_enqueue+0x66/0x70 [ptlrpc]
[&amp;lt;ffffffffa0d59378&amp;gt;] ost_handle+0x1db8/0x48e0 [ost]
[&amp;lt;ffffffffa0853011&amp;gt;] handle_recovery_req+0x181/0x2e0 [ptlrpc]
[&amp;lt;ffffffffa0859c82&amp;gt;] target_recovery_thread+0x912/0x1980 [ptlrpc]
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
                            <comment id="83892" author="prakash" created="Mon, 12 May 2014 18:14:14 +0000"  >&lt;p&gt;I think getting the filesystem back online should be the priority here. Yes, we want to try and preserve &quot;useful&quot; locks, but we don&apos;t want to do that at the cost of many minutes or hours of filesystem uptime.&lt;/p&gt;</comment>
                            <comment id="83894" author="behlendorf" created="Mon, 12 May 2014 18:18:44 +0000"  >&lt;p&gt;One other troublesome thing I noticed was that once recovery was complete and all the locks were enqueued virtually all of them were canceled.  The lock count on the server drops quickly from roughly 500,000 to 5,000.&lt;/p&gt;</comment>
                            <comment id="83946" author="jay" created="Mon, 12 May 2014 23:57:20 +0000"  >&lt;p&gt;Are the test cases running while on recovery?&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;One other troublesome thing I noticed was that once recovery was complete and all the locks were enqueued, virtually all of them were canceled. The lock count on the server dropped quickly from roughly 500,000 to 5,000.&lt;/p&gt;&lt;/blockquote&gt;

&lt;p&gt;This is weird; the only thing I can think of is that those locks are write locks and there were lots of glimpse requests, which accelerated their cancelation.&lt;/p&gt;

&lt;p&gt;We really need to determine what type of locks they are. Is it possible to run the test cases again and gather some logs?&lt;/p&gt;</comment>
                            <comment id="83947" author="behlendorf" created="Tue, 13 May 2014 00:00:37 +0000"  >&lt;p&gt;Sure, it&apos;s easy enough to run the stress tests.  Can you tell me what logs you want.  Are the proc files on the client/servers, what log level on the server wouldn&apos;t be completely overwhelming.&lt;/p&gt;</comment>
                            <comment id="83950" author="jay" created="Tue, 13 May 2014 00:24:58 +0000"  >&lt;p&gt;I think it would be enough to get some log on the client side. There is no proc interface to dump all lock from a namespace. So please apply the follow settings on the client node:&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;lctl set_param debug=-1
lctl set_param debug=-trace
lctl set_param debug_mb=200
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;These settings should be enough. Please dump the log once during recovery and again after recovery so that we capture all of it.&lt;/p&gt;

&lt;p&gt;Thanks in advance.&lt;/p&gt;</comment>
                            <comment id="84644" author="behlendorf" created="Wed, 21 May 2014 19:51:18 +0000"  >&lt;p&gt;I was able to grab an inspect a set of client logs for this problems.  For all the logs I looked at the client seems to have done the right thing and only replayed the locks it considered &apos;used&apos;.  Given our test workload and those log results I&apos;m inclined to believe there really were 500,000 &apos;used&apos; locks spread over all of the clients that needed to be replayed.  And upon further reflection this really isn&apos;t a particularly large number considering that&apos;s only 500 locks per-client  if you have 1000 clients.&lt;/p&gt;

&lt;p&gt;So I&apos;m thinking the issue here is simply that the servers must be able to handle lock replay significantly faster, particularly because none of this time is accounted for in the recovery timer, which is downright confusing.  It shows up as the recovery timer dropping to zero and then it taking minutes or hours for your filesystem to actually become available.&lt;/p&gt;

&lt;p&gt;From what I&apos;ve seen on our ZFS-based OSTs, it&apos;s largely because lock replay seems to require reading data from disk, and this is all done sequentially.  The fact that you have a cold cache certainly doesn&apos;t help.  Speeding that up somehow would improve things considerably.  I&apos;ve suggested prefetching as a fairly easy way to do so, but I&apos;m all for better ideas.&lt;/p&gt;</comment>
                            <comment id="84719" author="jay" created="Thu, 22 May 2014 17:06:52 +0000"  >&lt;p&gt;My guess is that the target is reading inode for LVB and pack it back to client side when replaying a lock. This can be avoided if this is really the case.&lt;/p&gt;</comment>
                            <comment id="84799" author="behlendorf" created="Fri, 23 May 2014 18:34:13 +0000"  >&lt;p&gt;If this IO can be avoided entirely that would be ideal.  Can you propose a patch because at the moment failover is exceptionally painful where there are a significant number of locks to replay.&lt;/p&gt;</comment>
                            <comment id="84826" author="pjones" created="Fri, 23 May 2014 23:18:49 +0000"  >&lt;p&gt;Bruno&lt;/p&gt;

&lt;p&gt;Could you please create a patch to implement this approach?&lt;/p&gt;

&lt;p&gt;Thanks&lt;/p&gt;

&lt;p&gt;Peter&lt;/p&gt;</comment>
                            <comment id="84905" author="bfaccini" created="Tue, 27 May 2014 09:26:51 +0000"  >&lt;p&gt;Ok, I currently try to implement Jinshan&apos;s idea. Will try to push a patch soon.&lt;/p&gt;
</comment>
                            <comment id="85985" author="bfaccini" created="Fri, 6 Jun 2014 08:04:14 +0000"  >&lt;p&gt;Sorry I am a bit late on this.&lt;/p&gt;

&lt;p&gt;Based on the earlier discussion/comments and after looking into the related source files, and since I am not yet fully aware of the whole replay mechanism, here is what I am planning to implement:&lt;/p&gt;

&lt;p&gt;           _ no longer fetch/fill/send the LVB to the Client for already granted and successfully replayed locks on the Server side. This seems to have to occur mainly in ldlm_handle_enqueue0().&lt;/p&gt;

&lt;p&gt;           _ on the Client, upon a successful reply from the Server for an already granted+replayed lock, keep going with what we already have.  This seems to have to occur mainly in ldlm_cli_enqueue_fini(), but other places like ldlm_handle_cp_callback()/replay_one_lock()/... may also need to be investigated.&lt;/p&gt;

&lt;p&gt;           _ I am not yet sure which flags/fields I will use to implement this, nor whether I need to focus only on Client/OST replays.&lt;/p&gt;

&lt;p&gt;           _ also, what is still unclear to me is how the Server is able to detect that a replayed lock was already granted to the Client (mainly during recovery after a Server crash/reboot), and how the Server will handle the situation where the LVB content was not fetched during replay but is really needed at some later point.&lt;/p&gt;

&lt;p&gt;Jinshan, Brian, any comments/add-ons/no-go on this?&lt;/p&gt;
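
&lt;p&gt;Concretely, on the Server side I imagine something of this shape in ldlm_handle_enqueue0() (a rough sketch only - lock_was_granted_before_failover() is a placeholder for whatever flag/field check is chosen):&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;/* sketch: skip the LVB work for successfully replayed locks */
if ((flags &amp;amp; LDLM_FL_REPLAY) &amp;amp;&amp;amp;
    lock_was_granted_before_failover(lock)) {   /* placeholder check */
        /* the Client already holds a valid LVB from before the
         * failover, so do not read the inode just to repack it */
        lvb_len = 0;
} else {
        rc = ldlm_lvbo_fill(lock, lvb, lvb_len);
}
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>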
                            <comment id="86130" author="jay" created="Mon, 9 Jun 2014 17:51:01 +0000"  >&lt;blockquote&gt;
&lt;p&gt;_ no longer fetch/fill/send the LVB to the Client for already granted and successfully replayed locks on the Server side. This seems to have to occur mainly in ldlm_handle_enqueue0().&lt;/p&gt;&lt;/blockquote&gt;

&lt;p&gt;I think we need to revise ldlm_resource_get() to delay the lvbo_init() call for a new resource, and instead call lvbo_init() at the time the LVB is really needed.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;_ I am not yet sure which flags/fields I will use to implement this, nor whether I need to focus only on Client/OST replays.&lt;/p&gt;&lt;/blockquote&gt;

&lt;p&gt;Clients pack a flag into the replay request to indicate whether the lock is granted or blocked. See the code snippet from ldlm_lock_enqueue() below:&lt;/p&gt;

&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;        } &lt;span class=&quot;code-keyword&quot;&gt;else&lt;/span&gt; &lt;span class=&quot;code-keyword&quot;&gt;if&lt;/span&gt; (*flags &amp;amp; LDLM_FL_REPLAY) {
                &lt;span class=&quot;code-keyword&quot;&gt;if&lt;/span&gt; (*flags &amp;amp; LDLM_FL_BLOCK_CONV) {
                        ldlm_resource_add_lock(res, &amp;amp;res-&amp;gt;lr_converting, lock);
                        GOTO(out, rc = ELDLM_OK);
                } &lt;span class=&quot;code-keyword&quot;&gt;else&lt;/span&gt; &lt;span class=&quot;code-keyword&quot;&gt;if&lt;/span&gt; (*flags &amp;amp; LDLM_FL_BLOCK_WAIT) {
                        ldlm_resource_add_lock(res, &amp;amp;res-&amp;gt;lr_waiting, lock);
                        GOTO(out, rc = ELDLM_OK);
                } &lt;span class=&quot;code-keyword&quot;&gt;else&lt;/span&gt; &lt;span class=&quot;code-keyword&quot;&gt;if&lt;/span&gt; (*flags &amp;amp; LDLM_FL_BLOCK_GRANTED) {
                        ldlm_grant_lock(lock, NULL);
                        GOTO(out, rc = ELDLM_OK);
                }
                &lt;span class=&quot;code-comment&quot;&gt;/* If no flags, fall through to normal enqueue path. */&lt;/span&gt;
        }
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;For replayed locks this skips the normal enqueue processing, and we can just as easily skip the LVB packing in ldlm_handle_enqueue0().&lt;/p&gt;
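
&lt;p&gt;The delayed-init part could look something like this (just a sketch - the lr_lvb_initialized flag, and serializing on an lr_lvb_mutex, are assumptions here):&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;/* sketch: ldlm_resource_get() stops calling lvbo_init() eagerly;
 * callers that really need the LVB go through this helper first */
static int ldlm_lvbo_init_delayed(struct ldlm_resource *res)
{
        int rc = 0;

        mutex_lock(&amp;amp;res-&amp;gt;lr_lvb_mutex);
        if (!res-&amp;gt;lr_lvb_initialized) {        /* assumed flag */
                /* this is the disk read we avoid during lock replay */
                rc = ldlm_lvbo_init(res);
                if (rc == 0)
                        res-&amp;gt;lr_lvb_initialized = 1;
        }
        mutex_unlock(&amp;amp;res-&amp;gt;lr_lvb_mutex);
        return rc;
}
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>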
                            <comment id="86138" author="bfaccini" created="Mon, 9 Jun 2014 19:04:07 +0000"  >&lt;p&gt;Jinshan, thanks for your comments and help, I was already hesitating to block lvb setup/fetch between lvbo_init() or lvbo_fill() calls, so ... Let&apos;s give i a try now!&lt;/p&gt;</comment>
                            <comment id="87548" author="bfaccini" created="Thu, 26 Jun 2014 09:53:48 +0000"  >&lt;p&gt;1st patch attempt, as testonly, is at &lt;a href=&quot;http://review.whamcloud.com/10845&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/10845&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;My local testing is OK but, as indicated in the commit message of the patch, it is still unclear to me whether the case is handled where a request that requires an LVB update is received by the Server between the end of recovery/replays and before the LVB is re-filled upon lock granting to a new Client ...&lt;/p&gt;
</comment>
                            <comment id="89224" author="bfaccini" created="Wed, 16 Jul 2014 15:08:39 +0000"  >&lt;p&gt;More testing of my patch found a flaw when a delayed LVB needs to be allocated+filled+sent back to a new Client as part of new/non-replayed and/but not immediately granted lock ...&lt;br/&gt;
So, patch-set #4/#5 now also handle the case of new/not-replayed lock requests that can&apos;t be granted immediately but where LVB has to be sent to Client. I wonder why LVB is sent back for non-granted locks ? Will this not be better/optimized to only send LVB for granted locks or upon completion-AST ?&lt;/p&gt;</comment>
                            <comment id="92711" author="pjones" created="Thu, 28 Aug 2014 14:32:54 +0000"  >&lt;p&gt;Landed for 2.7&lt;/p&gt;</comment>
                            <comment id="93624" author="morrone" created="Tue, 9 Sep 2014 22:50:33 +0000"  >&lt;p&gt;Version for 2.4/2.5?&lt;/p&gt;</comment>
                            <comment id="93869" author="bfaccini" created="Fri, 12 Sep 2014 17:42:16 +0000"  >&lt;p&gt;b2_5 back-port is at &lt;a href=&quot;http://review.whamcloud.com/11895/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/11895/&lt;/a&gt;.&lt;/p&gt;</comment>
                            <comment id="93964" author="pjones" created="Mon, 15 Sep 2014 11:42:33 +0000"  >&lt;p&gt;b2_4 version: &lt;a href=&quot;http://review.whamcloud.com/#/c/11920/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/#/c/11920/&lt;/a&gt;&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                    <issuelinktype id="10011">
                        <name>Related</name>
                        <outwardlinks description="is related to ">
                        </outwardlinks>
                    </issuelinktype>
                </issuelinks>
                <attachments>
                </attachments>
                <subtasks>
                </subtasks>
                <customfields>
                    <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzwm93:</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>13935</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                </customfields>
    </item>
</channel>
</rss>