<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 03:01:44 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
<language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-13492] lfs migrate -m returns Operation not permitted</title>
                <link>https://jira.whamcloud.com/browse/LU-13492</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;Hello!&lt;/p&gt;

&lt;p&gt;When using &lt;tt&gt;lfs migrate -m&lt;/tt&gt; to migrate directories across MDTs, we sometimes face &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-13298&quot; title=&quot;lfs migrate -m &amp;quot;migrate failed: Operation not supported (-95)&amp;quot; on DoM files&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-13298&quot;&gt;&lt;del&gt;LU-13298&lt;/del&gt;&lt;/a&gt; (lfs migrate does not work yet with DoM files) for which we do have a workaround (i.e. we restripe the files first without DoM). However, I think we are now hitting a different problem.&lt;/p&gt;

&lt;p&gt;We&apos;re trying to migrate files from MDT0003 to MDT0001. While running a migration of a full user directory as follows:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;lfs migrate -m 1 /fir/users/apatel6
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;we hit &quot;operation not permitted&quot; errors on multiple directories, and even retrying the migration is leading to the same error:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[root@fir-rbh01 storage]# lfs migrate -m 1 /fir/users/apatel6/data/10-scalingNEB/01-relaxwater/02-N
/fir/users/apatel6/data/10-scalingNEB/01-relaxwater/02-N migrate failed: Operation not permitted (-1)

[root@fir-rbh01 storage]# lfs getdirstripe /fir/users/apatel6/data/10-scalingNEB/01-relaxwater/02-N
lmv_stripe_count: 2 lmv_stripe_offset: 3 lmv_hash_type: fnv_1a_64,migrating
mdtidx           FID[seq:oid:ver]
     3           [0x2800394ad:0x3c7c:0x0]
     3           [0x280038894:0x124ee:0x0]
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;I also noticed while writing this ticket that something seems wrong here, as there are two entries with mdtidx = &quot;3&quot;. Usually, when a directory is migrating from MDT3 to MDT1, we see both mdtidx 1 and 3.&lt;/p&gt;

&lt;p&gt;Quick check of the FIDs above:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[root@fir-rbh01 storage]# lfs fid2path /fir 0x2800394ad:0x3c7c:0x0
/fir/users/apatel6/data/10-scalingNEB/01-relaxwater/02-N
[root@fir-rbh01 storage]# lfs fid2path /fir 0x280038894:0x124ee:0x0
/fir/users/apatel6/data/10-scalingNEB/01-relaxwater/02-N
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;MDT0001 (not MDT0003!) shows this log message when attempting the failed command:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;Apr 29 08:35:06 fir-md1-s2 kernel: LustreError: 22437:0:(mdd_dir.c:4496:mdd_migrate()) fir-MDD0001: &apos;02-N&apos; migration was interrupted, run &apos;lfs migrate -m 3 -c 1 -H 2 02-N&apos; to finish migration.
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;I don&apos;t see anything else, but perhaps there are debug flags that would be interesting?&lt;br/&gt;
 In any case, let me know how we could help troubleshoot this issue. We&apos;re using Lustre 2.12.4 here even on the client that performs the lfs migrate. Thanks!&lt;/p&gt;</description>
                <environment>CentOS 7.6 Kernel 3.10.0-957.27.2.el7_lustre.pl2.x86_64 </environment>
        <key id="58964">LU-13492</key>
            <summary>lfs migrate -m returns Operation not permitted</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="3" iconUrl="https://jira.whamcloud.com/images/icons/priorities/major.svg">Major</priority>
                        <status id="1" iconUrl="https://jira.whamcloud.com/images/icons/statuses/open.png" description="The issue is open and ready for the assignee to start work on it.">Open</status>
                    <statusCategory id="2" key="new" colorName="default"/>
                                    <resolution id="-1">Unresolved</resolution>
                                        <assignee username="hongchao.zhang">Hongchao Zhang</assignee>
                                    <reporter username="sthiell">Stephane Thiell</reporter>
                        <labels>
                    </labels>
                <created>Wed, 29 Apr 2020 15:51:26 +0000</created>
                <updated>Wed, 4 Oct 2023 18:27:07 +0000</updated>
                                            <version>Lustre 2.12.4</version>
                                                        <due></due>
                            <votes>0</votes>
                                    <watches>5</watches>
                                                                            <comments>
                            <comment id="268880" author="pjones" created="Wed, 29 Apr 2020 17:52:09 +0000"  >&lt;p&gt;Hongchao&lt;/p&gt;

&lt;p&gt;Could you please advise?&lt;/p&gt;

&lt;p&gt;Thanks&lt;/p&gt;

&lt;p&gt;Peter&lt;/p&gt;</comment>
                            <comment id="268884" author="adilger" created="Wed, 29 Apr 2020 17:57:04 +0000"  >&lt;p&gt;Stephane, are you able to collect debug logs from the client and MDS during the failed migration?  Ideally, full debug in the client and MDS, but if the MDS is busy this would overflow the debug log, so if needed we could start with &quot;&lt;tt&gt;debug=+dlmtrace+rpctrace&lt;/tt&gt;&quot;.&lt;/p&gt;</comment>
                            <comment id="268931" author="sthiell" created="Thu, 30 Apr 2020 00:20:53 +0000"  >&lt;p&gt;Thanks! Attached full debug (+ALL) from the client as&#160;&lt;span class=&quot;nobr&quot;&gt;&lt;a href=&quot;https://jira.whamcloud.com/secure/attachment/34797/34797_client-ALL.log&quot; title=&quot;client-ALL.log attached to LU-13492&quot;&gt;client-ALL.log&lt;sup&gt;&lt;img class=&quot;rendericon&quot; src=&quot;https://jira.whamcloud.com/images/icons/link_attachment_7.gif&quot; height=&quot;7&quot; width=&quot;7&quot; align=&quot;absmiddle&quot; alt=&quot;&quot; border=&quot;0&quot;/&gt;&lt;/sup&gt;&lt;/a&gt;&lt;/span&gt;&#160;(client NID is&#160;10.0.10.3@o2ib7) while running the following command (same as in the description above):&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;lfs migrate -m 1 /fir/users/apatel6/data/10-scalingNEB/01-relaxwater/02-N
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;and 2 seconds of debug logs from the two MDS in question:&lt;/p&gt;
&lt;ul&gt;
	&lt;li&gt;MDT0001 in&#160;&lt;span class=&quot;nobr&quot;&gt;&lt;a href=&quot;https://jira.whamcloud.com/secure/attachment/34798/34798_fir-md1-s2-MDT0001_dlmtrace%2Brpctrace.log.gz&quot; title=&quot;fir-md1-s2-MDT0001_dlmtrace+rpctrace.log.gz attached to LU-13492&quot;&gt;fir-md1-s2-MDT0001_dlmtrace+rpctrace.log.gz&lt;sup&gt;&lt;img class=&quot;rendericon&quot; src=&quot;https://jira.whamcloud.com/images/icons/link_attachment_7.gif&quot; height=&quot;7&quot; width=&quot;7&quot; align=&quot;absmiddle&quot; alt=&quot;&quot; border=&quot;0&quot;/&gt;&lt;/sup&gt;&lt;/a&gt;&lt;/span&gt;&lt;/li&gt;
	&lt;li&gt;MDT0003 in&#160;&lt;span class=&quot;nobr&quot;&gt;&lt;a href=&quot;https://jira.whamcloud.com/secure/attachment/34799/34799_fir-md1-s4-MDT0003_dlmtrace%2Brpctrace.log.gz&quot; title=&quot;fir-md1-s4-MDT0003_dlmtrace+rpctrace.log.gz attached to LU-13492&quot;&gt;fir-md1-s4-MDT0003_dlmtrace+rpctrace.log.gz&lt;sup&gt;&lt;img class=&quot;rendericon&quot; src=&quot;https://jira.whamcloud.com/images/icons/link_attachment_7.gif&quot; height=&quot;7&quot; width=&quot;7&quot; align=&quot;absmiddle&quot; alt=&quot;&quot; border=&quot;0&quot;/&gt;&lt;/sup&gt;&lt;/a&gt;&lt;/span&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;p&gt;In the logs of MDT0001, I can see:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;00000004:00020000:16.0:1588205266.457978:0:22469:0:(mdd_dir.c:4496:mdd_migrate()) fir-MDD0001: &apos;02-N&apos; migration was interrupted, run &apos;lfs migrate -m 3 -c 1 -H 2 02-N&apos; to finish migration.
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;so I think I got this part at least.&lt;/p&gt;

&lt;p&gt;Let me know if I should try full debug of the MDS. Perhaps I could increase the debug buffer size.&lt;/p&gt;</comment>
                            <comment id="269037" author="sthiell" created="Thu, 30 Apr 2020 20:49:16 +0000"  >&lt;p&gt;We also noticed something else, on another directory tree, that may be related to this ticket.&lt;/p&gt;

&lt;p&gt;We were not able to migrate some &quot;leaf&quot; directories, and we noticed that all of them are actually empty.&lt;/p&gt;

&lt;p&gt;But even an explicit lfs migrate on them doesn&apos;t work (tested from both 2.12.4 and 2.13 clients):&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[root@fir-rbh01 storage]# lfs getdirstripe /fir/groups/bgirod/action_recognition/frames/v_ApplyEyeMakeup_g17_c03
lmv_stripe_count: 0 lmv_stripe_offset: 3 lmv_hash_type: none
[root@fir-rbh01 storage]# lfs migrate -m 1 /fir/groups/bgirod/action_recognition/frames/v_ApplyEyeMakeup_g17_c03
[root@fir-rbh01 storage]# echo $?
0
[root@fir-rbh01 storage]# lfs getdirstripe /fir/groups/bgirod/action_recognition/frames/v_ApplyEyeMakeup_g17_c03
lmv_stripe_count: 0 lmv_stripe_offset: 3 lmv_hash_type: none
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;This directory is empty:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[root@fir-rbh01 storage]# stat /fir/groups/bgirod/action_recognition/frames/v_ApplyEyeMakeup_g17_c03
  File: &#8216;/fir/groups/bgirod/action_recognition/frames/v_ApplyEyeMakeup_g17_c03&#8217;
  Size: 4096            Blocks: 8          IO Block: 4096   directory
Device: e64e03a8h/3863872424d   Inode: 180148089774940567  Links: 2
Access: (2755/drwxr-sr-x)  Uid: (55081/  jbboin)   Gid: (24300/  bgirod)
Access: 2020-04-30 13:28:31.000000000 -0700
Modify: 2019-11-29 22:10:47.000000000 -0800
Change: 2019-11-29 22:10:47.000000000 -0800
 Birth: -
[root@fir-rbh01 storage]# ls -lisa /fir/groups/bgirod/action_recognition/frames/v_ApplyEyeMakeup_g17_c03
total 1840
180148089774940567    4 drwxr-sr-x     2 jbboin bgirod    4096 Nov 29 22:10 .
180148089774940559 1836 drwxr-sr-x 13322 jbboin bgirod 1871872 Apr 30 11:29 ..
[root@fir-rbh01 storage]# 
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Originally, we ran &lt;tt&gt;lfs migrate -m 1 /fir/groups/bgirod&lt;/tt&gt;, which is mostly done by now, apart from a few empty directories in &lt;tt&gt;/fir/groups/bgirod/action_recognition/frames/&lt;/tt&gt;.&lt;/p&gt;

&lt;p&gt;Now, if I try again, I get:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[root@fir-rbh01 storage]# lfs migrate -m 1 /fir/groups/bgirod
/fir/groups/bgirod/ migrate failed: Operation not permitted (-1)
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;And same error, on MDT0001:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;fir-md1-s2: Apr 30 13:46:47 fir-md1-s2 kernel: LustreError: 22427:0:(mdd_dir.c:4496:mdd_migrate()) fir-MDD0001: &apos;bgirod&apos; migration was interrupted, run &apos;lfs migrate -m 3 -c 1 -H 2 bgirod&apos; to finish migration.
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Current getdirstripe info of each path component:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[root@fir-rbh01 storage]# lfs getdirstripe /fir/groups/bgirod
lmv_stripe_count: 2 lmv_stripe_offset: 3 lmv_hash_type: fnv_1a_64,migrating
mdtidx           FID[seq:oid:ver]
     3           [0x28003bb05:0x135:0x0]
     1           [0x2400576a9:0x1abb:0x0]
[root@fir-rbh01 storage]# lfs getdirstripe /fir/groups/bgirod/action_recognition
lmv_stripe_count: 2 lmv_stripe_offset: 3 lmv_hash_type: fnv_1a_64,migrating
mdtidx           FID[seq:oid:ver]
     3           [0x28003bb05:0x136:0x0]
     1           [0x2400576a9:0x1af9:0x0]
[root@fir-rbh01 storage]# lfs getdirstripe /fir/groups/bgirod/action_recognition/frames
lmv_stripe_count: 2 lmv_stripe_offset: 3 lmv_hash_type: fnv_1a_64,migrating
mdtidx           FID[seq:oid:ver]
     3           [0x28003bb05:0x138:0x0]
     1           [0x2400576a9:0x1ce2:0x0]
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
                            <comment id="269523" author="hongchao.zhang" created="Thu, 7 May 2020 11:10:21 +0000"  >&lt;p&gt;As per the stripe information of &quot;/fir/groups/bgirod&quot;, &quot;/fir/users/apatel6/data/10-scalingNEB/01-relaxwater/02-N&quot;, etc.,&lt;br/&gt;
it should be a migration to MDT0003 (if the original directory was already on MDT0003, there will be two mdtidx 3 entries in the stripes):&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[root@zhanghc tests]# ../utils/lfs getdirstripe /mnt/lustre/pdir/cdir/
lmv_stripe_count: 2 lmv_stripe_offset: 3 lmv_hash_type: fnv_1a_64,migrating
mdtidx		 FID[seq:oid:ver]
     3		 [0x2c0000400:0xb:0x0]		
     3		 [0x2c0000400:0x9:0x0]
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;Was there ever a migration to MDT0003 prior to this one?&lt;/p&gt;

&lt;p&gt;The &quot;-EPERM&quot; is triggered in &quot;mdd_migrate&quot; because of the pending migration:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;static int mdd_migrate(const struct lu_env *env, struct md_object *md_pobj,
                       struct md_object *md_sobj, const struct lu_name *lname,
                       struct md_object *md_tobj, struct md_op_spec *spec,
                       struct md_attr *ma)
{
        if (S_ISDIR(attr-&amp;gt;la_mode)) {
                                ...
                                if (lmv-&amp;gt;lmv_migrate_offset !=
                                    lum_stripe_count ||
                                    lmv-&amp;gt;lmv_master_mdt_index !=
                                    lmu-&amp;gt;lum_stripe_offset ||
                                    (lmv_hash_type != 0 &amp;amp;&amp;amp;
                                     lmv_hash_type != lmu-&amp;gt;lum_hash_type)) {
                                        CERROR(&quot;%s: \&apos;&quot;DNAME&quot;\&apos; migration was &quot;
                                                &quot;interrupted, run \&apos;lfs migrate &quot;
                                                &quot;-m %d -c %d -H %d &quot;DNAME&quot;\&apos; to &quot;
                                                &quot;finish migration.\n&quot;,
                                                mdd2obd_dev(mdd)-&amp;gt;obd_name,
                                                PNAME(lname),
                                                le32_to_cpu(
                                                    lmv-&amp;gt;lmv_master_mdt_index),
                                                le32_to_cpu(
                                                    lmv-&amp;gt;lmv_migrate_offset),
                                                le32_to_cpu(lmv_hash_type),
                                                PNAME(lname));
                                        GOTO(out, rc = -EPERM);
                                }
                                ...
        }
        ...
}
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
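
&lt;p&gt;In other words, the check above only allows resuming an interrupted migration with exactly the parameters recorded in the striped directory&apos;s LMV (master MDT index, migrate offset, and hash type), which is what the CERROR hint reconstructs. A sketch of the suggested recovery, assuming the values printed in the log above (full-path form; not verified on this system):&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;lfs migrate -m 3 -c 1 -H 2 /fir/users/apatel6/data/10-scalingNEB/01-relaxwater/02-N
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;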

&lt;p&gt;The migration request is sent to the migration target MDT, which is why the above log was printed on MDT0001.&lt;/p&gt;

&lt;p&gt;For the empty-directory issue with &quot;/fir/groups/bgirod/action_recognition/frames/v_ApplyEyeMakeup_g17_c03&quot;,&lt;br/&gt;
could you please collect similar debug logs during the migration?&lt;br/&gt;
Thanks!&lt;/p&gt;</comment>
                            <comment id="275582" author="sthiell" created="Thu, 16 Jul 2020 21:11:52 +0000"  >&lt;p&gt;Hi Hongchao,&lt;/p&gt;

&lt;p&gt;Since my last message, we have upgraded to 2.12.5 and I cannot reproduce the problem with the empty directory. It has now been successfully migrated to MDT1.&lt;/p&gt;

&lt;p&gt;However, we still have issues with EPERM errors even in 2.12.5.&lt;/p&gt;

&lt;p&gt;For example, I tried again today, and it still doesn&apos;t work for this directory:&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[root@fir-rbh02 ~]# lfs getdirstripe /fir/groups/astraigh/kousik
lmv_stripe_count: 2 lmv_stripe_offset: 0 lmv_hash_type: fnv_1a_64,migrating,lost_lmv
mdtidx		 FID[seq:oid:ver]
     0		 [0x200042f8e:0x29:0x0]		
     3		 [0x2800393f0:0x417d:0x0]		
[root@fir-rbh02 ~]# lfs migrate -m 3 /fir/groups/astraigh/kousik
/fir/groups/astraigh/kousik migrate failed: Operation not permitted (-1)
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;It looks like you spotted the problem (a previous migration was interrupted and left pending). Is there a way to fix this so that we can migrate this directory to MDT3, for example?&lt;/p&gt;

&lt;p&gt;Thanks!&lt;br/&gt;
Stephane&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                            <outwardlinks description="is related to ">
                                        <issuelink>
            <issuekey id="58191">LU-13298</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="58680">LU-13425</issuekey>
        </issuelink>
                            </outwardlinks>
                                                                <inwardlinks description="is related to">
                                        <issuelink>
            <issuekey id="66010">LU-15001</issuekey>
        </issuelink>
                            </inwardlinks>
                                    </issuelinktype>
                    </issuelinks>
                <attachments>
                            <attachment id="34797" name="client-ALL.log" size="5339609" author="sthiell" created="Thu, 30 Apr 2020 00:14:36 +0000"/>
                            <attachment id="34798" name="fir-md1-s2-MDT0001_dlmtrace+rpctrace.log.gz" size="2449192" author="sthiell" created="Thu, 30 Apr 2020 00:14:49 +0000"/>
                            <attachment id="34799" name="fir-md1-s4-MDT0003_dlmtrace+rpctrace.log.gz" size="4775431" author="sthiell" created="Thu, 30 Apr 2020 00:15:02 +0000"/>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|i00z3r:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>