<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 02:39:07 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-10893] all conf-sanity tests failed: format mgs: mkfs.lustre FATAL: Unable to build fs</title>
                <link>https://jira.whamcloud.com/browse/LU-10893</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;After &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-684&quot; title=&quot;replace dev_rdonly kernel patch with dm-flakey&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-684&quot;&gt;&lt;del&gt;LU-684&lt;/del&gt;&lt;/a&gt;&#160;&#160;&lt;a href=&quot;https://review.whamcloud.com/#/c/7200/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/#/c/7200/&lt;/a&gt;&#160; where the dm-flakey layer was added to test-framework, conf-sanity did not pass with real devices.&lt;br/&gt;
Example configuration in local.sh:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;MDSCOUNT=1
OSTCOUNT=2
mds1_HOST=fre0101
MDSDEV1=/dev/vdb
mds_HOST=fre0101
MDSDEV=/dev/vdb
ost1_HOST=fre0102
OSTDEV1=/dev/vdb
ost2_HOST=fre0102
OSTDEV2=/dev/vdc
.....
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Errors:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;CMD: fre0205,fre0206,fre0208 PATH=/usr/lib64/lustre/tests/../tests:/usr/lib/lustre/tests:/usr/lib64/lustre/tests:/opt/iozone/bin:/usr/lib64/lustre/tests/../tests/mpi:/usr/lib64/lustre/tests/../tests/racer:/usr/lib64/lustre/tests/../../lustre-iokit/sgpdd-survey:/usr/lib64/lustre/tests/../tests:/usr/lib64/lustre/tests/../utils/gss:/root//usr/lib64/lustre/tests:/usr/lib64/lustre/tests:/usr/lib64/lustre/tests/../utils:/usr/lib64/mpich/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin::/sbin:/bin:/usr/sbin: NAME=ncli sh rpc.sh set_hostid 
fre0208: fre0208: executing set_hostid
fre0205: fre0205: executing set_hostid
fre0206: fre0206: executing set_hostid
CMD: fre0205 [ -e &quot;/dev/vdb&quot; ]
CMD: fre0205 grep -c /mnt/lustre-mgs&apos; &apos; /proc/mounts || true
CMD: fre0205 lsmod | grep lnet &amp;gt; /dev/null &amp;amp;&amp;amp;
lctl dl | grep &apos; ST &apos; || true
CMD: fre0205 e2label /dev/vdb
CMD: fre0205 mkfs.lustre --mgs --param=sys.timeout=20 --backfstype=ldiskfs --device-size=0 --mkfsoptions=\&quot;-E lazy_itable_init\&quot; --reformat /dev/vdb
fre0205: 
fre0205: mkfs.lustre FATAL: Unable to build fs /dev/vdb (256)
fre0205: 
fre0205: mkfs.lustre FATAL: mkfs failed 256
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;A quick look shows that reformat works fine in conf-sanity with the following change to test-framework:&lt;br/&gt;
formatall() {&lt;br/&gt;
        CLEANUP_DM_DEV=true stopall -f&lt;/p&gt;


&lt;p&gt;Since conf-sanity contains many stopall calls, they probably require a similar fix as well.&lt;/p&gt;
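The change quoted above can be sketched as a runnable fragment. Here stopall is only a stub (an assumption for illustration); in the real test-framework.sh it tears down all targets, and the idea is that CLEANUP_DM_DEV=true tells it to also remove the dm-flakey mappings:

```shell
# Sketch of the proposed test-framework change. stopall is a stub here;
# the real one in test-framework.sh unmounts all targets, and the
# CLEANUP_DM_DEV=true prefix asks it to also remove dm-flakey devices.
stopall() {
    echo "stopall $1 (CLEANUP_DM_DEV=${CLEANUP_DM_DEV:-false})"
}

formatall() {
    # pass CLEANUP_DM_DEV to stopall via an environment-assignment prefix
    CLEANUP_DM_DEV=true stopall -f
}

formatall
```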

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;== conf-sanity test 17: Verify failed mds_postsetup won&apos;t fail assertion (2936) (should return errs) ====================================================================================================== 15:36:46 (1522942606)
start mds service on fre0113
Starting mds1: -o rw,user_xattr  /dev/mapper/mds1_flakey /mnt/lustre-mds1
fre0113: fre0113: executing set_default_debug -1 all 4
pdsh@fre0115: fre0113: ssh exited with exit code 1
pdsh@fre0115: fre0113: ssh exited with exit code 1
Started lustre-MDT0000
start mds service on fre0113
Starting mds2: -o rw,user_xattr  /dev/mapper/mds2_flakey /mnt/lustre-mds2
fre0113: fre0113: executing set_default_debug -1 all 4
pdsh@fre0115: fre0113: ssh exited with exit code 1
pdsh@fre0115: fre0113: ssh exited with exit code 1
Started lustre-MDT0001
start ost1 service on fre0114
Starting ost1: -o user_xattr  /dev/mapper/ost1_flakey /mnt/lustre-ost1
fre0114: fre0114: executing set_default_debug -1 all 4
pdsh@fre0115: fre0114: ssh exited with exit code 1
pdsh@fre0115: fre0114: ssh exited with exit code 1
Started lustre-OST0000
mount lustre on /mnt/lustre.....
Starting client: fre0115:  -o user_xattr,flock fre0113@tcp:/lustre /mnt/lustre
setup single mount lustre success
umount lustre on /mnt/lustre.....
Stopping client fre0115 /mnt/lustre (opts:)
stop ost1 service on fre0114
Stopping /mnt/lustre-ost1 (opts:-f) on fre0114
stop mds service on fre0113
Stopping /mnt/lustre-mds1 (opts:-f) on fre0113
stop mds service on fre0113
Stopping /mnt/lustre-mds2 (opts:-f) on fre0113
modules unloaded.
Remove mds config log
Stopping /mnt/lustre-mgs (opts:) on fre0113
fre0113: debugfs 1.42.13.x6 (01-Mar-2018)
start mgs service on fre0113
Loading modules from /usr/lib64/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
../libcfs/libcfs/libcfs options: &apos;cpu_npartitions=2&apos;
../lnet/lnet/lnet options: &apos;accept=all&apos;
../lnet/klnds/socklnd/ksocklnd options: &apos;sock_timeout=10&apos;
gss/krb5 is not supported
Starting mgs:   /dev/mapper/mgs_flakey /mnt/lustre-mgs
fre0113: fre0113: executing set_default_debug -1 all 4
pdsh@fre0115: fre0113: ssh exited with exit code 1
pdsh@fre0115: fre0113: ssh exited with exit code 1
Started MGS
start ost1 service on fre0114
Starting ost1: -o user_xattr  /dev/mapper/ost1_flakey /mnt/lustre-ost1
fre0114: fre0114: executing set_default_debug -1 all 4
pdsh@fre0115: fre0114: ssh exited with exit code 1
pdsh@fre0115: fre0114: ssh exited with exit code 1
Started lustre-OST0000
start mds service on fre0113
Starting mds1: -o rw,user_xattr  /dev/mapper/mds1_flakey /mnt/lustre-mds1
fre0113: mount.lustre: mount /dev/mapper/mds1_flakey at /mnt/lustre-mds1 failed: No such file or directory
fre0113: Is the MGS specification correct?
fre0113: Is the filesystem name correct?
fre0113: If upgrading, is the copied client log valid? (see upgrade docs)
pdsh@fre0115: fre0113: ssh exited with exit code 2
Start of /dev/mapper/mds1_flakey on mds1 failed 2
Stopping clients: fre0115,fre0116 /mnt/lustre (opts:-f)
Stopping clients: fre0115,fre0116 /mnt/lustre2 (opts:-f)
Stopping /mnt/lustre-ost1 (opts:-f) on fre0114
pdsh@fre0115: fre0114: ssh exited with exit code 1
Stopping /mnt/lustre-mgs (opts:) on fre0113
fre0114: fre0114: executing set_hostid
fre0116: fre0116: executing set_hostid
fre0113: fre0113: executing set_hostid
Loading modules from /usr/lib64/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
gss/krb5 is not supported
Formatting mgs, mds, osts
Format mgs: /dev/mapper/mgs_flakey
pdsh@fre0115: fre0113: ssh exited with exit code 1
 conf-sanity test_17: @@@@@@ FAIL: mgs: device &apos;/dev/mapper/mgs_flakey&apos; does not exist 
  Trace dump:
  = /usr/lib64/lustre/tests/../tests/test-framework.sh:5734:error()
  = /usr/lib64/lustre/tests/../tests/test-framework.sh:4314:__touch_device()
  = /usr/lib64/lustre/tests/../tests/test-framework.sh:4331:format_mgs()
  = /usr/lib64/lustre/tests/../tests/test-framework.sh:4384:formatall()
  = /usr/lib64/lustre/tests/conf-sanity.sh:109:reformat()
  = /usr/lib64/lustre/tests/conf-sanity.sh:91:reformat_and_config()
  = /usr/lib64/lustre/tests/conf-sanity.sh:605:test_17()
  = /usr/lib64/lustre/tests/../tests/test-framework.sh:6010:run_one()
  = /usr/lib64/lustre/tests/../tests/test-framework.sh:6049:run_one_logged()
  = /usr/lib64/lustre/tests/../tests/test-framework.sh:5848:run_test()
  = /usr/lib64/lustre/tests/conf-sanity.sh:607:main()
Dumping lctl log to /tmp/test_logs/1522942566/conf-sanity.test_17.*.1522942656.log
fre0114: Warning: Permanently added &apos;fre0115,192.168.101.15&apos; (ECDSA) to the list of known hosts.

fre0116: Warning: Permanently added &apos;fre0115,192.168.101.15&apos; (ECDSA) to the list of known hosts.

fre0113: Warning: Permanently added &apos;fre0115,192.168.101.15&apos; (ECDSA) to the list of known hosts.

Resetting fail_loc on all nodes...done.
FAIL 17 (51s)
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</description>
                <environment></environment>
        <key id="51729">LU-10893</key>
            <summary>all conf-sanity tests failed: format mgs: mkfs.lustre FATAL: Unable to build fs</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="3" iconUrl="https://jira.whamcloud.com/images/icons/priorities/major.svg">Major</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="yujian">Jian Yu</assignee>
                                    <reporter username="aboyko">Alexander Boyko</reporter>
                        <labels>
                    </labels>
                <created>Tue, 10 Apr 2018 06:21:19 +0000</created>
                <updated>Wed, 18 Jul 2018 12:29:58 +0000</updated>
                            <resolved>Wed, 18 Jul 2018 12:29:58 +0000</resolved>
                                    <version>Lustre 2.11.0</version>
                                    <fixVersion>Lustre 2.12.0</fixVersion>
                                        <due></due>
                            <votes>0</votes>
                                    <watches>5</watches>
                                                                            <comments>
                            <comment id="225906" author="yujian" created="Thu, 12 Apr 2018 17:39:08 +0000"  >&lt;p&gt;Hi Alexander,&lt;/p&gt;

&lt;p&gt;The following error is not related to dm-flakey device:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;CMD: fre0205 mkfs.lustre --mgs --param=sys.timeout=20 --backfstype=ldiskfs --device-size=0 --mkfsoptions=\&quot;-E lazy_itable_init\&quot; --reformat /dev/vdb
fre0205: 
fre0205: mkfs.lustre FATAL: Unable to build fs /dev/vdb (256)
fre0205: 
fre0205: mkfs.lustre FATAL: mkfs failed 256
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The format command was run on /dev/vdb and failed. What error messages are in dmesg or syslog? Could you please manually run the mkfs.lustre command on /dev/vdb to see if it passes? &lt;/p&gt;</comment>
                            <comment id="227185" author="aboyko" created="Thu, 3 May 2018 12:23:21 +0000"  >&lt;p&gt;I&apos;ve played a bit with test-framework and found that the dm-flakey patch broke its typical usage. A simple testing workflow is:&lt;/p&gt;

&lt;p&gt;1) llmount.sh&lt;br/&gt;
2) ONLY=xxx sanity.sh&lt;br/&gt;
3) ONLY=xxx conf-sanity.sh&lt;br/&gt;
4) etc.&lt;br/&gt;
5) llmountcleanup.sh&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[test@devvm-centos-1 lustre-release]$ sudo MDSDEV=/dev/sdb MDSDEV1=/dev/sdb sh lustre/tests/llmount.sh
Stopping clients: devvm-centos-1 /mnt/lustre (opts:-f)
Stopping clients: devvm-centos-1 /mnt/lustre2 (opts:-f)
Loading modules from /home/test/lustre-release/lustre/tests/..
detected 4 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
../libcfs/libcfs/libcfs options: &apos;cpu_npartitions=2&apos;
../lnet/lnet/lnet options: &apos;networks=tcp0(eth0) accept=all&apos;
gss/krb5 is not supported
quota/lquota options: &apos;hash_lqs_cur_bits=3&apos;
Formatting mgs, mds, osts
Format mds1: /dev/sdb
Format ost1: /tmp/lustre-ost1
Format ost2: /tmp/lustre-ost2
Checking servers environments
Checking clients devvm-centos-1 environments
Loading modules from /home/test/lustre-release/lustre/tests/..
detected 4 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
gss/krb5 is not supported
Setup mgs, mdt, osts
Starting mds1:   /dev/mapper/mds1_flakey /mnt/lustre-mds1
Commit the device label on /dev/sdb
Started lustre-MDT0000
Starting ost1:   /dev/mapper/ost1_flakey /mnt/lustre-ost1
Commit the device label on /tmp/lustre-ost1
Started lustre-OST0000
Starting ost2:   /dev/mapper/ost2_flakey /mnt/lustre-ost2
Commit the device label on /tmp/lustre-ost2
Started lustre-OST0001
Starting client: devvm-centos-1:  -o user_xattr,flock devvm-centos-1@tcp:/lustre /mnt/lustre
UUID                   1K-blocks        Used   Available Use% Mounted on
lustre-MDT0000_UUID       125368        1904      112228   2% /mnt/lustre[MDT:0]
lustre-OST0000_UUID       325368       13508      284700   5% /mnt/lustre[OST:0]
lustre-OST0001_UUID       325368       13508      284700   5% /mnt/lustre[OST:1]

filesystem_summary:       650736       27016      569400   5% /mnt/lustre

Using TIMEOUT=20
seting jobstats to procname_uid
Setting lustre.sys.jobid_var from disable to procname_uid
Waiting 90 secs for update
Updated after 6s: wanted &apos;procname_uid&apos; got &apos;procname_uid&apos;
disable quota as required
[test@devvm-centos-1 lustre-release]$ sudo MDSDEV=/dev/sdb MDSDEV1=/dev/sdb ONLY=0 sh lustre/tests/conf-sanity.sh
devvm-centos-1: executing check_logdir /tmp/test_logs/1525349199
Logging to shared log directory: /tmp/test_logs/1525349199
devvm-centos-1: executing yml_node
Client: Lustre version: 2.11.51_20_g9ac477c
MDS: Lustre version: 2.11.51_20_g9ac477c
OSS: Lustre version: 2.11.51_20_g9ac477c
excepting tests: 32newtarball 101
skipping tests SLOW=no: 45 69
Stopping clients: devvm-centos-1 /mnt/lustre (opts:-f)
Stopping client devvm-centos-1 /mnt/lustre opts:-f
Stopping clients: devvm-centos-1 /mnt/lustre2 (opts:-f)
Stopping /mnt/lustre-mds1 (opts:-f) on devvm-centos-1
Stopping /mnt/lustre-ost1 (opts:-f) on devvm-centos-1
Stopping /mnt/lustre-ost2 (opts:-f) on devvm-centos-1
Loading modules from /home/test/lustre-release/lustre/tests/..
detected 4 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
gss/krb5 is not supported
Formatting mgs, mds, osts
Format mds1: /dev/sdb

mkfs.lustre FATAL: Unable to build fs /dev/sdb (256)

mkfs.lustre FATAL: mkfs failed 256

&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;&#160;The real problem is that setup/mount export the dm-flakey device variables, but the next shell invocation knows nothing about them. Before this patch, all of the configuration was located in a separate file and worked fine.&lt;/p&gt;</comment>
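The variable-scoping problem described in the comment above can be reproduced with a minimal sketch (the variable name is hypothetical, standing in for the dm-flakey settings that setup/mount export):

```shell
# A variable exported inside one shell invocation (as llmount.sh does for
# the dm-flakey device settings) is not visible to a later, separate
# invocation (such as a subsequent conf-sanity.sh run).
sh -c 'export MDS_FLAKEY_DEV=/dev/mapper/mds1_flakey; echo "first shell: $MDS_FLAKEY_DEV"'
sh -c 'echo "second shell: ${MDS_FLAKEY_DEV:-unset}"'
```

The second invocation sees the variable as unset, so it falls back to the raw device path.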
                            <comment id="227186" author="aboyko" created="Thu, 3 May 2018 12:29:33 +0000"  >&lt;blockquote&gt;&lt;p&gt;The following error is not related to dm-flakey device:&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;It is directly related, because /dev/sdb is still held by dm-flakey, and the MDS reformat uses /dev/sdb instead of /dev/mapper/mds1_flakey.&lt;/p&gt;</comment>
                            <comment id="227189" author="yujian" created="Thu, 3 May 2018 12:39:10 +0000"  >&lt;blockquote&gt;&lt;p&gt;&lt;span class=&quot;error&quot;&gt;&amp;#91;test@devvm-centos-1 lustre-release&amp;#93;&lt;/span&gt;$ sudo MDSDEV=/dev/sdb MDSDEV1=/dev/sdb ONLY=0 sh lustre/tests/conf-sanity.sh&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;What about specifying the dm-flakey devices to MDSDEV{n} and OSTDEV{n} here?&lt;/p&gt;</comment>
                            <comment id="227193" author="aboyko" created="Thu, 3 May 2018 13:29:43 +0000"  >&lt;p&gt;The test works fine with specifying flakey devices.&lt;/p&gt;</comment>
                            <comment id="227194" author="yujian" created="Thu, 3 May 2018 14:34:45 +0000"  >&lt;p&gt;Thank you Alexander for verifying this.&lt;/p&gt;</comment>
                            <comment id="227299" author="aboyko" created="Fri, 4 May 2018 06:52:11 +0000"  >&lt;p&gt;@Jian Yu, will you fix the test-framework issue?&lt;/p&gt;</comment>
                            <comment id="227309" author="zam" created="Fri, 4 May 2018 09:53:14 +0000"  >&lt;blockquote&gt;&lt;p&gt;What about specifying the dm-flakey devices to MDSDEV{n} and OSTDEV{n} here?&lt;/p&gt;&lt;/blockquote&gt;

&lt;p&gt;Should a user assume that the same OSTDEV / MDSDEV parameters work with both llmount.sh and individual test scripts (e.g. conf-sanity.sh)? I think that is the expected behavior.&lt;/p&gt;</comment>
                            <comment id="229288" author="gerrit" created="Thu, 7 Jun 2018 14:17:04 +0000"  >&lt;p&gt;Alexandr Boyko (c17825@cray.com) uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/32658&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/32658&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-10893&quot; title=&quot;all conf-sanity tests failed: format mgs: mkfs.lustre FATAL: Unable to build fs&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-10893&quot;&gt;&lt;del&gt;LU-10893&lt;/del&gt;&lt;/a&gt; tests: allow to disable dm-flakey layer&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 5b521f7a46a52cd19d0286ca33e50b63c4f435e6&lt;/p&gt;</comment>
                            <comment id="230422" author="gerrit" created="Wed, 18 Jul 2018 05:59:11 +0000"  >&lt;p&gt;Oleg Drokin (green@whamcloud.com) merged in patch &lt;a href=&quot;https://review.whamcloud.com/32658/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/32658/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-10893&quot; title=&quot;all conf-sanity tests failed: format mgs: mkfs.lustre FATAL: Unable to build fs&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-10893&quot;&gt;&lt;del&gt;LU-10893&lt;/del&gt;&lt;/a&gt; tests: allow to disable dm-flakey layer&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: f4618338643441970131f957f2a346ae3a455197&lt;/p&gt;</comment>
                            <comment id="230459" author="pjones" created="Wed, 18 Jul 2018 12:29:58 +0000"  >&lt;p&gt;Landed for 2.12&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                            <outwardlinks description="is related to ">
                                        <issuelink>
            <issuekey id="11774">LU-684</issuekey>
        </issuelink>
                            </outwardlinks>
                                                        </issuelinktype>
                    </issuelinks>
                <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                    <customfield id="customfield_10030" key="com.atlassian.jira.plugin.system.customfieldtypes:labels">
                        <customfieldname>Epic/Theme</customfieldname>
                        <customfieldvalues>
                                        <label>test</label>
    
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzzvjr:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>