<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 02:17:38 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary, append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
<language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-8450] replay-single test 70c: mount MDS hung</title>
                <link>https://jira.whamcloud.com/browse/LU-8450</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;replay-single test 70c hung while mounting MDS:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;Starting mds1:   /dev/lvm-Role_MDS/P1 /mnt/lustre-mds1
CMD: onyx-33vm7 mkdir -p /mnt/lustre-mds1; mount -t lustre   		                   /dev/lvm-Role_MDS/P1 /mnt/lustre-mds1
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Console log on MDS:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;Lustre: DEBUG MARKER: mkdir -p /mnt/lustre-mds1; mount -t lustre                                   /dev/lvm-Role_MDS/P1 /mnt/lustre-mds1
LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache
LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 10.2.4.127@tcp (no target). If you are running an HA pair check that the target is mounted on the other server.
LustreError: Skipped 683 previous similar messages
Lustre: 6963:0:(client.c:2113:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1469762588/real 1469762588]  req@ffff880051ebaa00 x1541153266605312/t0(0) o250-&amp;gt;MGC10.2.4.126@tcp@0@lo:26/25 lens 520/544 e 0 to 1 dl 1469762613 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1
Lustre: 6963:0:(client.c:2113:ptlrpc_expire_one_request()) Skipped 13 previous similar messages
Lustre: 29062:0:(service.c:1335:ptlrpc_at_send_early_reply()) @@@ Couldn&apos;t add any time (5/5), not sending early reply 
  req@ffff88004a8faa00 x1541153729978528/t0(0) o101-&amp;gt;6a772ed4-43ff-dc51-4d04-2c0278989dc2@10.2.4.120@tcp:-1/-1 lens 872/3512 e 24 to 0 dl 1469763017 ref 2 fl Interpret:/0/0 rc 0/0
Lustre: lustre-MDT0002: Client 6a772ed4-43ff-dc51-4d04-2c0278989dc2 (at 10.2.4.120@tcp) reconnecting
Lustre: Skipped 1 previous similar message
Lustre: lustre-MDT0002: Export ffff880057b24400 already connecting from 10.2.4.120@tcp
Lustre: lustre-MDT0002: Export ffff880057b24400 already connecting from 10.2.4.120@tcp
Lustre: lustre-MDT0002: Export ffff880057b24400 already connecting from 10.2.4.120@tcp
Lustre: lustre-MDT0002: Export ffff880057b24400 already connecting from 10.2.4.120@tcp
Lustre: Skipped 1 previous similar message
Lustre: lustre-MDT0002: Export ffff880057b24400 already connecting from 10.2.4.120@tcp
Lustre: Skipped 3 previous similar messages
Lustre: lustre-MDT0002: Export ffff880057b24400 already connecting from 10.2.4.120@tcp
Lustre: Skipped 6 previous similar messages
Lustre: lustre-MDT0002: Export ffff880057b24400 already connecting from 10.2.4.120@tcp
Lustre: Skipped 12 previous similar messages
LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 10.2.4.127@tcp (no target). If you are running an HA pair check that the target is mounted on the other server.
LustreError: Skipped 1909 previous similar messages
Lustre: 6963:0:(client.c:2113:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1469763188/real 1469763188]  req@ffff8800546a1200 x1541153266628608/t0(0) o250-&amp;gt;MGC10.2.4.126@tcp@0@lo:26/25 lens 520/544 e 0 to 1 dl 1469763213 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1
Lustre: 6963:0:(client.c:2113:ptlrpc_expire_one_request()) Skipped 19 previous similar messages
INFO: task mdt00_002:29063 blocked for more than 120 seconds.
&quot;echo 0 &amp;gt; /proc/sys/kernel/hung_task_timeout_secs&quot; disables this message.
mdt00_002       D ffffffffa0b1d108     0 29063      2 0x00000080 
 ffff88004f7b3aa0 0000000000000046 ffff88004bc15c00 ffff88004f7b3fd8
 ffff88004f7b3fd8 ffff88004f7b3fd8 ffff88004bc15c00 ffffffffa0b1d100
 ffffffffa0b1d104 ffff88004bc15c00 00000000ffffffff ffffffffa0b1d108
Call Trace:
 [&amp;lt;ffffffff8163cb09&amp;gt;] schedule_preempt_disabled+0x29/0x70
 [&amp;lt;ffffffff8163a805&amp;gt;] __mutex_lock_slowpath+0xc5/0x1c0
 [&amp;lt;ffffffff81639c6f&amp;gt;] mutex_lock+0x1f/0x2f
 [&amp;lt;ffffffffa0a8e024&amp;gt;] nodemap_add_member+0x34/0x1b0 [ptlrpc]
 [&amp;lt;ffffffffa0dbf161&amp;gt;] mdt_obd_reconnect+0x81/0x1d0 [mdt]
 [&amp;lt;ffffffffa09d1e6f&amp;gt;] target_handle_connect+0x1c4f/0x2e30 [ptlrpc]
 [&amp;lt;ffffffffa0a6f5f2&amp;gt;] tgt_request_handle+0x3f2/0x1320 [ptlrpc]
 [&amp;lt;ffffffffa0a1bccb&amp;gt;] ptlrpc_server_handle_request+0x21b/0xa90 [ptlrpc]
 [&amp;lt;ffffffffa0a19888&amp;gt;] ? ptlrpc_wait_event+0x98/0x340 [ptlrpc]
 [&amp;lt;ffffffff810b88d2&amp;gt;] ? default_wake_function+0x12/0x20
 [&amp;lt;ffffffff810af038&amp;gt;] ? __wake_up_common+0x58/0x90
 [&amp;lt;ffffffffa0a1fd80&amp;gt;] ptlrpc_main+0xaa0/0x1de0 [ptlrpc]
 [&amp;lt;ffffffffa0a1f2e0&amp;gt;] ? ptlrpc_register_service+0xe40/0xe40 [ptlrpc]
 [&amp;lt;ffffffff810a5aef&amp;gt;] kthread+0xcf/0xe0
 [&amp;lt;ffffffff810a5a20&amp;gt;] ? kthread_create_on_node+0x140/0x140
 [&amp;lt;ffffffff816469d8&amp;gt;] ret_from_fork+0x58/0x90 
 [&amp;lt;ffffffff810a5a20&amp;gt;] ? kthread_create_on_node+0x140/0x140
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Maloo reports:&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/3f6a9a0e-557a-11e6-906c-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/3f6a9a0e-557a-11e6-906c-5254006e85c2&lt;/a&gt;&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/cecb3c06-54af-11e6-a39e-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/cecb3c06-54af-11e6-a39e-5254006e85c2&lt;/a&gt;&lt;/p&gt;</description>
                <environment></environment>
        <key id="38506">LU-8450</key>
            <summary>replay-single test 70c: mount MDS hung</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="2" iconUrl="https://jira.whamcloud.com/images/icons/priorities/critical.svg">Critical</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="kit.westneat">Kit Westneat</assignee>
                                    <reporter username="yujian">Jian Yu</reporter>
                        <labels>
                    </labels>
                <created>Fri, 29 Jul 2016 23:26:59 +0000</created>
                <updated>Sat, 22 Oct 2016 01:09:15 +0000</updated>
                            <resolved>Sat, 22 Oct 2016 01:09:15 +0000</resolved>
                                    <version>Lustre 2.9.0</version>
                                    <fixVersion>Lustre 2.9.0</fixVersion>
                                        <due></due>
                            <votes>0</votes>
                                    <watches>6</watches>
                                                                            <comments>
                            <comment id="160371" author="yujian" created="Fri, 29 Jul 2016 23:27:41 +0000"  >&lt;p&gt;This is affecting patch review testing on master branch.&lt;/p&gt;</comment>
                            <comment id="160459" author="green" created="Mon, 1 Aug 2016 17:34:56 +0000"  >&lt;p&gt;Seems to be some sort of a deadlock in nodemap code&lt;/p&gt;</comment>
                            <comment id="160460" author="jgmitter" created="Mon, 1 Aug 2016 17:35:37 +0000"  >&lt;p&gt;Hi Kit,&lt;/p&gt;

&lt;p&gt;Could you please have a look at this issue? It appears to be occurring in the nodemap area.&lt;/p&gt;

&lt;p&gt;Thanks.&lt;br/&gt;
Joe&lt;/p&gt;</comment>
                            <comment id="160486" author="kit.westneat" created="Mon, 1 Aug 2016 21:45:44 +0000"  >&lt;p&gt;It looks like ldlm_revoke_export_locks is getting hung trying to empty the lock hash (exp-&amp;gt;exp_lock_hash), which causes the nodemap functions to lock waiting for that to finish.&lt;/p&gt;

&lt;p&gt;I was looking at the original code for ldlm_revoke_export_locks, and I noticed that that code does not use hash_for_each_empty; it just sends the ASTs and finishes. There was a change in 2008 to switch the lock data structure from a list to a hash, and it looks like the change to loop until the hash is empty (hash_for_each_empty) was made then. The commit message doesn&apos;t discuss this change at all, so I wonder if it was unintentional.&lt;/p&gt;

&lt;p&gt;Is there anyone who would know what the correct behavior is?&lt;/p&gt;

&lt;p&gt;Thanks,&lt;br/&gt;
Kit&lt;/p&gt;</comment>
                            <comment id="164625" author="green" created="Thu, 1 Sep 2016 15:46:18 +0000"  >&lt;p&gt;Do you mean the commit 8073f0e4bef2db551a4b4bcaeb72a9986571f1bd ?&lt;/p&gt;

&lt;p&gt;I do not see a big logic change.&lt;br/&gt;
Before, we traversed the list, collected entries, accumulated them into an AST list, and then sent them from ldlm_run_ast_work().&lt;/p&gt;

&lt;p&gt;Now we use hash_for_each_empty() to collect all entries in the hash, fill the list in the callback, and then send everything from ldlm_run_ast_work() as before.&lt;br/&gt;
I am not sure which loop you mean here. If you mean the one inside cfs_hash_for_each_empty, that one would leave very visible traces in the logs if it were looping, and it would be seen in the stack trace too, but that&apos;s not happening?&lt;br/&gt;
Also, the leading comment in cfs_hash_for_each_relax() describes two cases where it needs to loop, and I don&apos;t think they are happening in this case?&lt;/p&gt;</comment>
                            <comment id="164650" author="kit.westneat" created="Thu, 1 Sep 2016 17:11:41 +0000"  >&lt;p&gt;Hi Oleg,&lt;/p&gt;

&lt;p&gt;Yes, that&apos;s the commit I&apos;m talking about.&lt;/p&gt;

&lt;p&gt;In the older version, the list was only traversed once, right? In the newer version, it loops until the hash is empty, even if some locks are not processed due to the three checks at the beginning of the callback. This is the change I don&apos;t understand; I don&apos;t know enough about the locking system to know what those checks do with regard to lock eviction.&lt;/p&gt;

&lt;p&gt;When I&apos;ve run into similar deadlocks with D_INFO enabled, I see &quot;Try to empty hash:&quot; repeated essentially forever. It&apos;s possible that this is a different deadlock, but it seems similar.&lt;/p&gt;

&lt;p&gt;Thanks,&lt;br/&gt;
Kit&lt;/p&gt;</comment>
                            <comment id="164701" author="green" created="Thu, 1 Sep 2016 20:40:27 +0000"  >&lt;p&gt;Well, the loop is there, but for it to be going around all the time the conditions need to be met:&lt;/p&gt;

&lt;blockquote&gt;
&lt;ul&gt;
	&lt;li&gt;a. if rehash_key is enabled, an item can be moved from one bucket to another bucket&lt;/li&gt;
	&lt;li&gt;b. user can remove non-zero-ref item from hash-table, so the item can be removed from hash-table, even worse, it&apos;s possible that user changed key and insert to another hash bucket.&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;p&gt;So we need a constant stream of activity on the bucket, then; where does that come from? If it&apos;s expected to be there, then possibly cfs_hash_for_each_empty is the wrong API for you? Or you need to somehow fence the activity in the hash while you are working on it.&lt;/p&gt;</comment>
                            <comment id="164949" author="kit.westneat" created="Tue, 6 Sep 2016 13:12:49 +0000"  >&lt;p&gt;Hi Oleg,&lt;/p&gt;

&lt;p&gt;I think we may be talking about different things. I believe you are talking about the loops in cfs_hash_for_each_relax, but the loop I&apos;m talking about is in cfs_hash_for_each_empty:&lt;/p&gt;
&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;        &lt;span class=&quot;code-keyword&quot;&gt;while&lt;/span&gt; (cfs_hash_for_each_relax(hs, func, data, 0)) {
                CDEBUG(D_INFO, &lt;span class=&quot;code-quote&quot;&gt;&quot;Try to empty hash: %s, loop: %u\n&quot;&lt;/span&gt;,
                       hs-&amp;gt;hs_name, i++);
        }
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;cfs_hash_for_each_relax returns the number of iterations it&apos;s done in its loops, essentially the number of items remaining in the hash. So as you can see, cfs_hash_for_each_empty loops until the hash is empty.&lt;/p&gt;

&lt;p&gt;The conditions I&apos;m talking about are in the callback for ldlm_revoke_export_locks:&lt;/p&gt;
&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;        &lt;span class=&quot;code-keyword&quot;&gt;if&lt;/span&gt; (lock-&amp;gt;l_req_mode != lock-&amp;gt;l_granted_mode) {
                unlock_res_and_lock(lock);
                &lt;span class=&quot;code-keyword&quot;&gt;return&lt;/span&gt; 0;
        }
        &lt;span class=&quot;code-keyword&quot;&gt;if&lt;/span&gt; (lock-&amp;gt;l_resource-&amp;gt;lr_type != LDLM_IBITS &amp;amp;&amp;amp;
            lock-&amp;gt;l_resource-&amp;gt;lr_type != LDLM_PLAIN) {
                unlock_res_and_lock(lock);
                &lt;span class=&quot;code-keyword&quot;&gt;return&lt;/span&gt; 0;
        }
        &lt;span class=&quot;code-keyword&quot;&gt;if&lt;/span&gt; (ldlm_is_ast_sent(lock)) {
                unlock_res_and_lock(lock);
                &lt;span class=&quot;code-keyword&quot;&gt;return&lt;/span&gt; 0;
        }
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;So if there is a lock in the exp_lock_hash that meets one of those three conditions, cfs_hash_for_each_empty will loop until some external process modifies that lock. My theory is that in these deadlock cases, nothing modifies the lock and the hash is never emptied, so cfs_hash_for_each_empty loops forever.&lt;/p&gt;
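&lt;p&gt;Concretely, with a single lock stuck in one of those states, I would expect the passes to go like this (a sketch of the theory, not captured output):&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;pass 1: cfs_hash_for_each_relax() visits the stuck lock, callback skips it -&amp;gt; returns 1
pass 2: the lock is still in the hash                                      -&amp;gt; returns 1
...
cfs_hash_for_each_empty() never sees 0, so it spins, printing
&quot;Try to empty hash:&quot; on every pass when D_INFO is enabled
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;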

&lt;p&gt;In the original code, it iterated the lock list once and did not require that the list be emptied. I was wondering if there is a reason for using cfs_hash_for_each_empty in this case, or if it&apos;s possible to replace it with cfs_hash_for_each_nolock, which seems to be closer to what the original code does. &lt;/p&gt;

&lt;p&gt;Thanks,&lt;br/&gt;
Kit&lt;/p&gt;</comment>
                            <comment id="165031" author="green" created="Tue, 6 Sep 2016 23:37:41 +0000"  >&lt;p&gt;We are talking about the same loop.&lt;br/&gt;
The loop is external to cfs_hash_for_each_relax(), which stops on change (as mentioned in the comment), and the outside code needs to loop (the loop we are talking about).&lt;/p&gt;

&lt;p&gt;This code was used by the (now removed) remote client functionality, and you are the only remaining user, by the way, so you can mold this function to your taste.&lt;/p&gt;

&lt;p&gt;The conditions are a bit strange, but this was only used on the MDS anyway, so those could only be ibits (or flock) locks.&lt;br/&gt;
If your locks are on the MGS, then you are using plain locks, which is again fine for the conditions.&lt;br/&gt;
Ungranted locks are supposed to become granted soon, so they should not be a big concern.&lt;br/&gt;
An AST-sent lock is one for which there is a conflict, so it should disappear soon.&lt;/p&gt;

&lt;p&gt;If you can figure out which lock is failing those checks, the conditions could be amended, I imagine.&lt;/p&gt;</comment>
                            <comment id="169154" author="kit.westneat" created="Tue, 11 Oct 2016 15:53:13 +0000"  >&lt;p&gt;&amp;gt; The loop is external to cfs_hash_for_each_relax(), that stops on change (as is mentioned in the comment) and outside code needs to loop (the loop we are talking about).&lt;/p&gt;

&lt;p&gt;Hmm, but cfs_hash_for_each_relax also returns non-zero if it is successful, no? It returns the number of items iterated over (count), so cfs_hash_for_each_empty loops until the hash is empty. As far as I can tell, neither cfs_hash_for_each_nolock nor cfs_hash_for_each_empty actually checks the return value of cfs_hash_for_each_relax for the -EINTR condition described in the comment.&lt;/p&gt;
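&lt;p&gt;For comparison, the two wrappers differ only in the outer loop. Simplified from memory (the real cfs_hash_for_each_nolock also takes iteration references and rejects hash types it cannot handle, so this is a sketch, not the exact source):&lt;/p&gt;
&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;/* single pass over the hash; items the callback skips stay in place */
int cfs_hash_for_each_nolock(struct cfs_hash *hs,
                             cfs_hash_for_each_cb_t func, void *data)
{
        return cfs_hash_for_each_relax(hs, func, data, 0);
}

/* repeats full passes until one visits no items, i.e. the hash is empty */
void cfs_hash_for_each_empty(struct cfs_hash *hs,
                             cfs_hash_for_each_cb_t func, void *data)
{
        unsigned i = 0;

        while (cfs_hash_for_each_relax(hs, func, data, 0))
                CDEBUG(D_INFO, &quot;Try to empty hash: %s, loop: %u\n&quot;,
                       hs-&amp;gt;hs_name, i++);
}
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;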

&lt;p&gt;Using cfs_hash_for_each_nolock in ldlm_revoke_export_locks makes more sense to me; I&apos;ll upload a patch with that change. It seems like cfs_hash_for_each_nolock should also be modified to check for -EINTR, though I&apos;m not sure what the proper behavior is.&lt;/p&gt;</comment>
                            <comment id="169375" author="gerrit" created="Wed, 12 Oct 2016 22:19:41 +0000"  >&lt;p&gt;Kit Westneat (kit.westneat@gmail.com) uploaded a new patch: &lt;a href=&quot;http://review.whamcloud.com/23120&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/23120&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-8450&quot; title=&quot;replay-single test 70c: mount MDS hung&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-8450&quot;&gt;&lt;del&gt;LU-8450&lt;/del&gt;&lt;/a&gt; nodemap: modify ldlm_revoke_export_locks&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: a6884dde68e959915dcfac6197641aa3cc27aed6&lt;/p&gt;</comment>
                            <comment id="170590" author="gerrit" created="Fri, 21 Oct 2016 15:01:00 +0000"  >&lt;p&gt;Oleg Drokin (oleg.drokin@intel.com) merged in patch &lt;a href=&quot;http://review.whamcloud.com/23120/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/23120/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-8450&quot; title=&quot;replay-single test 70c: mount MDS hung&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-8450&quot;&gt;&lt;del&gt;LU-8450&lt;/del&gt;&lt;/a&gt; nodemap: modify ldlm_revoke_export_locks&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: 449fe54db666a3ad29503a55aa0f048b5f1d6543&lt;/p&gt;</comment>
                            <comment id="170668" author="pjones" created="Sat, 22 Oct 2016 01:09:15 +0000"  >&lt;p&gt;Landed for 2.9&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                                                <inwardlinks description="is related to">
                                        <issuelink>
            <issuekey id="18740">LU-3291</issuekey>
        </issuelink>
                            </inwardlinks>
                                    </issuelinktype>
                    </issuelinks>
                <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzyj0f:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>