<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 01:52:44 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-5583] clients receive IO error after MDT failover</title>
                <link>https://jira.whamcloud.com/browse/LU-5583</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;After our active MDS became completely unresponsive earlier, we attempted to fail over to the second MDS. This appeared to succeed: the MGS and MDT mounted successfully, as far as we can tell all clients reconnected, and recovery completed. However, at this stage any operation on the file system (for example &lt;tt&gt;ls&lt;/tt&gt;) on any client connected only via ethernet either hung or returned I/O errors, while all clients using IB were operating normally.&lt;/p&gt;

&lt;p&gt;We then discovered that there seemed to be a problem between the MDT and all OSTs, as &lt;tt&gt;lctl get_param lod.lustre03-MDT0000-mdtlov.target_obd&lt;/tt&gt; came back empty. Failing back to the (now rebooted) previous MDS worked, and the file system is now operating normally again.&lt;/p&gt;
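
&lt;p&gt;For reference, the check can be repeated on the MDS roughly as follows (a sketch; &lt;tt&gt;lctl dl&lt;/tt&gt; is an additional device listing we did not capture at the time):&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;# On the MDS: list the OST entries the MDT&apos;s LOV layer currently sees;
# an empty result means the MDT has no OST connections.
lctl get_param lod.lustre03-MDT0000-mdtlov.target_obd

# List all configured OBD devices; the MGC and MDT entries and the
# per-OST connections should appear here.
lctl dl
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;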

&lt;p&gt;Sample errors in syslog on one of the ethernet-only clients while &lt;tt&gt;ls /mnt/lustre03&lt;/tt&gt; was returning I/O errors:&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;Sep  4 09:56:18 cs04r-sc-serv-06 kernel: Lustre: MGC172.23.144.1@tcp: Connection restored to MGS (at 172.23.144.2@tcp)
Sep  4 09:57:58 cs04r-sc-serv-06 kernel: LustreError: 11-0: lustre03-MDT0000-mdc-ffff880073fec800: Communicating with 172.23.144.2@tcp, operation mds_connect failed with -16.
Sep  4 09:58:23 cs04r-sc-serv-06 kernel: LustreError: 11-0: lustre03-MDT0000-mdc-ffff880073fec800: Communicating with 172.23.144.2@tcp, operation mds_connect failed with -16.
Sep  4 09:58:48 cs04r-sc-serv-06 kernel: LustreError: 11-0: lustre03-MDT0000-mdc-ffff880073fec800: Communicating with 172.23.144.2@tcp, operation mds_connect failed with -16.
Sep  4 09:59:13 cs04r-sc-serv-06 kernel: LustreError: 11-0: lustre03-MDT0000-mdc-ffff880073fec800: Communicating with 172.23.144.2@tcp, operation mds_connect failed with -16.
Sep  4 09:59:38 cs04r-sc-serv-06 kernel: LustreError: 11-0: lustre03-MDT0000-mdc-ffff880073fec800: Communicating with 172.23.144.2@tcp, operation mds_connect failed with -16.
Sep  4 10:00:03 cs04r-sc-serv-06 kernel: LustreError: 11-0: lustre03-MDT0000-mdc-ffff880073fec800: Communicating with 172.23.144.2@tcp, operation mds_connect failed with -16.
Sep  4 10:00:28 cs04r-sc-serv-06 kernel: LustreError: 11-0: lustre03-MDT0000-mdc-ffff880073fec800: Communicating with 172.23.144.2@tcp, operation mds_connect failed with -16.
Sep  4 10:01:18 cs04r-sc-serv-06 kernel: LustreError: 11-0: lustre03-MDT0000-mdc-ffff880073fec800: Communicating with 172.23.144.2@tcp, operation mds_connect failed with -16.
Sep  4 10:01:18 cs04r-sc-serv-06 kernel: LustreError: Skipped 1 previous similar message
Sep  4 10:02:33 cs04r-sc-serv-06 kernel: LustreError: 11-0: lustre03-MDT0000-mdc-ffff880073fec800: Communicating with 172.23.144.2@tcp, operation mds_connect failed with -16.
Sep  4 10:02:33 cs04r-sc-serv-06 kernel: LustreError: Skipped 2 previous similar messages
Sep  4 10:05:03 cs04r-sc-serv-06 kernel: LustreError: 11-0: lustre03-MDT0000-mdc-ffff880073fec800: Communicating with 172.23.144.2@tcp, operation mds_connect failed with -16.
Sep  4 10:05:03 cs04r-sc-serv-06 kernel: LustreError: Skipped 5 previous similar messages
Sep  4 10:09:38 cs04r-sc-serv-06 kernel: LustreError: 11-0: lustre03-MDT0000-mdc-ffff880073fec800: Communicating with 172.23.144.2@tcp, operation mds_connect failed with -16.
Sep  4 10:09:38 cs04r-sc-serv-06 kernel: LustreError: Skipped 10 previous similar messages
Sep  4 10:33:15 cs04r-sc-serv-06 kernel: LustreError: 32662:0:(dir.c:422:ll_get_dir_page()) read cache page: [0xe900001:0x3b1189d1:0x0] at 0: rc -4
Sep  4 10:33:15 cs04r-sc-serv-06 kernel: LustreError: 32662:0:(dir.c:584:ll_dir_read()) error reading dir [0xe900001:0x3b1189d1:0x0] at 0: rc -4
Sep  4 10:34:00 cs04r-sc-serv-06 kernel: LustreError: 32717:0:(dir.c:398:ll_get_dir_page()) dir page locate: [0xe900001:0x3b1189d1:0x0] at 0: rc -5
Sep  4 10:34:00 cs04r-sc-serv-06 kernel: LustreError: 32717:0:(dir.c:584:ll_dir_read()) error reading dir [0xe900001:0x3b1189d1:0x0] at 0: rc -5
Sep  4 10:37:44 cs04r-sc-serv-06 kernel: LustreError: 487:0:(mdc_locks.c:918:mdc_enqueue()) ldlm_cli_enqueue: -4
Sep  4 10:37:44 cs04r-sc-serv-06 kernel: LustreError: 487:0:(mdc_locks.c:918:mdc_enqueue()) Skipped 879 previous similar messages
Sep  4 10:37:57 cs04r-sc-serv-06 kernel: LustreError: 508:0:(dir.c:398:ll_get_dir_page()) dir page locate: [0xe900001:0x3b1189d1:0x0] at 0: rc -5
Sep  4 10:37:57 cs04r-sc-serv-06 kernel: LustreError: 508:0:(dir.c:584:ll_dir_read()) error reading dir [0xe900001:0x3b1189d1:0x0] at 0: rc -5
Sep  4 10:37:58 cs04r-sc-serv-06 kernel: LustreError: 510:0:(dir.c:398:ll_get_dir_page()) dir page locate: [0xe900001:0x3b1189d1:0x0] at 0: rc -5
Sep  4 10:37:59 cs04r-sc-serv-06 kernel: LustreError: 512:0:(dir.c:584:ll_dir_read()) error reading dir [0xe900001:0x3b1189d1:0x0] at 0: rc -5
Sep  4 10:37:59 cs04r-sc-serv-06 kernel: LustreError: 512:0:(dir.c:584:ll_dir_read()) Skipped 1 previous similar message
Sep  4 10:43:34 cs04r-sc-serv-06 kernel: LustreError: 875:0:(dir.c:398:ll_get_dir_page()) dir page locate: [0xe900001:0x3b1189d1:0x0] at 0: rc -5
Sep  4 10:43:34 cs04r-sc-serv-06 kernel: LustreError: 875:0:(dir.c:398:ll_get_dir_page()) Skipped 2 previous similar messages
Sep  4 10:43:34 cs04r-sc-serv-06 kernel: LustreError: 875:0:(dir.c:584:ll_dir_read()) error reading dir [0xe900001:0x3b1189d1:0x0] at 0: rc -5
Sep  4 10:43:34 cs04r-sc-serv-06 kernel: LustreError: 875:0:(dir.c:584:ll_dir_read()) Skipped 1 previous similar message
Sep  4 10:47:19 cs04r-sc-serv-06 kernel: LustreError: 1122:0:(dir.c:398:ll_get_dir_page()) dir page locate: [0xe900001:0x3b1189d1:0x0] at 0: rc -5
Sep  4 10:47:19 cs04r-sc-serv-06 kernel: LustreError: 1122:0:(dir.c:584:ll_dir_read()) error reading dir [0xe900001:0x3b1189d1:0x0] at 0: rc -5
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;I&apos;ll attach the full syslog from the MDS, starting with the mount and ending when we unmounted again to fail back to the previous MDS.&lt;/p&gt;

&lt;p&gt;Note that IB and LNet over IB were added to this file system recently, following the instructions in the manual on changing server NIDs: unmounting everything, completely unloading the Lustre modules on the servers, running &lt;tt&gt;tunefs.lustre --writeconf --erase-params&lt;/tt&gt; with the new NIDs, and then mounting the MGS, MDT and OSTs, in this order. (Some ethernet-only clients might still have been up during this, but the client I used for testing while it wasn&apos;t working had certainly been unmounted at that point and rebooted a few times afterwards.)&lt;/p&gt;
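
&lt;p&gt;For completeness, a sketch of that procedure, using the device paths and NIDs from the &lt;tt&gt;tunefs.lustre&lt;/tt&gt; output further down (mount points are illustrative; the exact invocations were not recorded):&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;# With all clients and targets unmounted, unload the Lustre modules
# on each server:
lustre_rmmod

# Rewrite the configuration logs with the new NIDs (shown for the MDT;
# the equivalent is run for each OST, and the failover.node parameters
# are re-specified as well, since --erase-params clears them):
tunefs.lustre --erase-params \
    --mgsnode=10.144.144.1@o2ib,172.23.144.1@tcp \
    --mgsnode=10.144.144.2@o2ib,172.23.144.2@tcp \
    --writeconf /dev/vg_lustre03/mdt

# Then mount the MGS, MDT and OSTs, in this order:
mount -t lustre /dev/vg_lustre03/mgs /mnt/lustre/mgs
mount -t lustre /dev/vg_lustre03/mdt /mnt/lustre/mdt
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;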

&lt;p&gt;We are concerned that this will happen again if we have to do another failover of the MDT, so we want to solve this. Let us know what other information we should provide.&lt;/p&gt;</description>
                <environment>RHEL6 server, RHEL6 clients, servers connected to IB and ethernet, clients can be either connected to IB and ethernet or just ethernet</environment>
        <key id="26304">LU-5583</key>
            <summary>clients receive IO error after MDT failover</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="3" iconUrl="https://jira.whamcloud.com/images/icons/priorities/major.svg">Major</priority>
                        <status id="1" iconUrl="https://jira.whamcloud.com/images/icons/statuses/open.png" description="The issue is open and ready for the assignee to start work on it.">Open</status>
                    <statusCategory id="2" key="new" colorName="default"/>
                                    <resolution id="-1">Unresolved</resolution>
                                        <assignee username="bobijam">Zhenyu Xu</assignee>
                                    <reporter username="ferner">Frederik Ferner</reporter>
                        <labels>
                    </labels>
                <created>Thu, 4 Sep 2014 12:55:17 +0000</created>
                <updated>Tue, 7 Jun 2016 15:38:26 +0000</updated>
                                            <version>Lustre 2.5.2</version>
                                                        <due></due>
                            <votes>0</votes>
                                    <watches>3</watches>
                                                                            <comments>
                            <comment id="93298" author="pjones" created="Fri, 5 Sep 2014 04:58:23 +0000"  >&lt;p&gt;Bobijam&lt;/p&gt;

&lt;p&gt;Could you please advise on this issue?&lt;/p&gt;

&lt;p&gt;Thanks&lt;/p&gt;

&lt;p&gt;Peter&lt;/p&gt;</comment>
                            <comment id="93324" author="bobijam" created="Fri, 5 Sep 2014 15:27:55 +0000"  >&lt;p&gt;from the mds log, it shows that for some unknown reason, the MGS does not work correctly, &lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;Sep  4 09:55:21 cs04r-sc-mds03-02 kernel: Lustre: MGS: non-config logname received: params
Sep  4 09:55:22 cs04r-sc-mds03-02 kernel: Lustre: MGS: non-config logname received: params
...
Sep  4 09:56:35 cs04r-sc-mds03-02 kernel: LustreError: 43873:0:(obd_mount_server.c:1136:server_register_target()) lustre03-MDT0000: error registering with the MGS: rc = -5 (not fatal)
...
Sep  4 09:57:11 cs04r-sc-mds03-02 kernel: LustreError: 13a-8: Failed to get MGS log params and no local copy.
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Is the MGT a separate device, and is it mounted elsewhere while this MDS node is trying to mount it?&lt;/p&gt;</comment>
                            <comment id="93328" author="ferner" created="Fri, 5 Sep 2014 16:02:08 +0000"  >&lt;p&gt;Yes, the MGS is a separate partition, it would also be affected by any MDT fail over and our scripts will mount both of them on the same server. Though now that I think about it, I&apos;m not convinced any order is enforced in those scripts. It has always worked so far...&lt;/p&gt;</comment>
                            <comment id="93627" author="ferner" created="Tue, 9 Sep 2014 23:13:27 +0000"  >&lt;p&gt;so I&apos;ve just tried again, with the same result at least as far as the logs and the MDT are concerned, this time ignoring any scripts and doing all steps manually after a fresh reboot of the failover MDS. &lt;/p&gt;

&lt;p&gt;The following steps seem to reproduce it in this setup every time (a shell sketch follows the list):&lt;/p&gt;

&lt;ul&gt;
	&lt;li&gt;umount MDT and MGS on the current MDS&lt;/li&gt;
	&lt;li&gt;mount MGS on the failover MDS, wait a few seconds (tried up to about 1 minute)&lt;/li&gt;
	&lt;li&gt;mount the MDT on the same failover MDS; this appears to work.&lt;/li&gt;
&lt;/ul&gt;
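
&lt;p&gt;As shell commands, roughly (mount points illustrative):&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;# On the current MDS:
umount /mnt/lustre/mdt
umount /mnt/lustre/mgs

# On the failover MDS:
mount -t lustre /dev/vg_lustre03/mgs /mnt/lustre/mgs
sleep 60    # waited from a few seconds up to about a minute
mount -t lustre /dev/vg_lustre03/mdt /mnt/lustre/mdt
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;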


&lt;p&gt;The same errors as above appear in the logs, though this time all clients that I&apos;ve tried seem to work and new clients can mount the file system, so the MGS appears to work, at least for them.&lt;/p&gt;

&lt;p&gt;I&apos;ve then (on the same failover MDS) attempted to unmount and remount the MDT; this fails with the following log messages:&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;Sep  9 23:46:26 cs04r-sc-mds03-02 kernel: Lustre: server umount lustre03-MDT0000 complete
Sep  9 23:46:37 cs04r-sc-mds03-02 kernel: LDISKFS-fs (dm-6): mounted filesystem with ordered data mode. quota=off. Opts:
Sep  9 23:46:38 cs04r-sc-mds03-02 kernel: LustreError: 40807:0:(genops.c:320:class_newdev()) Device MGC10.144.144.1@o2ib already exists at 4, won&apos;t add
Sep  9 23:46:38 cs04r-sc-mds03-02 kernel: LustreError: 40807:0:(obd_config.c:374:class_attach()) Cannot create device MGC10.144.144.1@o2ib of type mgc : -17
Sep  9 23:46:38 cs04r-sc-mds03-02 kernel: LustreError: 40807:0:(obd_mount.c:195:lustre_start_simple()) MGC10.144.144.1@o2ib attach error -17
Sep  9 23:46:38 cs04r-sc-mds03-02 kernel: LustreError: 40807:0:(obd_mount_server.c:861:lustre_disconnect_lwp()) lustre03-MDT0000-lwp-MDT0000: Can&apos;t end config log lustre03-client.
Sep  9 23:46:38 cs04r-sc-mds03-02 kernel: LustreError: 40807:0:(obd_mount_server.c:1436:server_put_super()) lustre03-MDT0000: failed to disconnect lwp. (rc=-2)
Sep  9 23:46:38 cs04r-sc-mds03-02 kernel: LustreError: 40807:0:(obd_mount_server.c:1466:server_put_super()) no obd lustre03-MDT0000
Sep  9 23:46:38 cs04r-sc-mds03-02 kernel: LustreError: 40807:0:(obd_mount_server.c:135:server_deregister_mount()) lustre03-MDT0000 not registered
Sep  9 23:46:38 cs04r-sc-mds03-02 kernel: Lustre: server umount lustre03-MDT0000 complete
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
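
&lt;p&gt;The &lt;tt&gt;-17&lt;/tt&gt; (EEXIST) on the MGC attach suggests the MGC device from the previous mount was never cleaned up; if so, a leftover entry should be visible in the device list even with nothing mounted (a sketch, not captured at the time):&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;# On the failover MDS: list OBD devices and look for a stale MGC entry
lctl dl | grep MGC
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;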

&lt;p&gt;Repeating the steps above on the initially active MDS does not generate the last two log entries you highlighted (only the first two), and cycling through umount/mount for just the MDT works as expected, succeeding in mounting the MDT every time.&lt;/p&gt;

&lt;p&gt;Looking at the &lt;tt&gt;tunefs.lustre&lt;/tt&gt; output (below), I don&apos;t see any typo in the IP addresses for the MGS node, but maybe there&apos;s another problem, so I&apos;ll include it here in case it&apos;s relevant and/or helps:&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[bnh65367@cs04r-sc-mds03-01 ~]$ sudo tunefs.lustre --print /dev/vg_lustre03/mdt
checking for existing Lustre data: found
Reading CONFIGS/mountdata

   Read previous values:
Target:     lustre03-MDT0000
Index:      0
Lustre FS:  lustre03
Mount type: ldiskfs
Flags:      0x1401
              (MDT no_primnode )
Persistent mount opts: iopen_nopriv,user_xattr,errors=remount-ro,acl
Parameters: mgsnode=10.144.144.1@o2ib,172.23.144.1@tcp mgsnode=10.144.144.2@o2ib,172.23.144.2@tcp failover.node=10.144.144.1@o2ib,172.23.144.1@tcp failover.node=10.144.144.2@o2ib,172.23.144.2@tcp mdt.quota_type=ug mdt.group_upcall=/usr/sbin/l_getgroups


   Permanent disk data:
Target:     lustre03-MDT0000
Index:      0
Lustre FS:  lustre03
Mount type: ldiskfs
Flags:      0x1401
              (MDT no_primnode )
Persistent mount opts: iopen_nopriv,user_xattr,errors=remount-ro,acl
Parameters: mgsnode=10.144.144.1@o2ib,172.23.144.1@tcp mgsnode=10.144.144.2@o2ib,172.23.144.2@tcp failover.node=10.144.144.1@o2ib,172.23.144.1@tcp failover.node=10.144.144.2@o2ib,172.23.144.2@tcp mdt.quota_type=ug mdt.group_upcall=/usr/sbin/l_getgroups

exiting before disk write.
[bnh65367@cs04r-sc-mds03-01 ~]$ sudo tunefs.lustre --print /dev/vg_lustre03/mgs
checking for existing Lustre data: found
Reading CONFIGS/mountdata

   Read previous values:
Target:     MGS
Index:      unassigned
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x4
              (MGS )
Persistent mount opts: iopen_nopriv,user_xattr,errors=remount-ro
Parameters:


   Permanent disk data:
Target:     MGS
Index:      unassigned
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x4
              (MGS )
Persistent mount opts: iopen_nopriv,user_xattr,errors=remount-ro
Parameters:

exiting before disk write.
[bnh65367@cs04r-sc-mds03-01 ~]$ 
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Additional help in debugging this would be appreciated.&lt;/p&gt;</comment>
                            <comment id="93636" author="bobijam" created="Wed, 10 Sep 2014 01:44:47 +0000"  >&lt;p&gt;this looks like &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-4943&quot; title=&quot;Client Failes to mount filesystem&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-4943&quot;&gt;&lt;del&gt;LU-4943&lt;/del&gt;&lt;/a&gt; issue (MGC device does not clean up before another mount), would you mind trying patch &lt;a href=&quot;http://review.whamcloud.com/#/c/11765/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/#/c/11765/&lt;/a&gt; on the MDS?&lt;/p&gt;</comment>
                            <comment id="93657" author="ferner" created="Wed, 10 Sep 2014 06:15:42 +0000"  >&lt;p&gt;This patch has fixed the umount/remount issue on the failover MDS. &lt;/p&gt;

&lt;p&gt;It didn&apos;t fix the issue with the entries below, but I don&apos;t think you expected the patch to fix this; I&apos;m just stating it for clarity:&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;kernel: LustreError: 13a-8: Failed to get MGS log params and no local copy.
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
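
&lt;p&gt;One way to look at this from the MGS side might be to ask it which configuration state it is serving (a sketch, assuming the &lt;tt&gt;mgs.MGS.live&lt;/tt&gt; parameter is available in this version):&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;# On the MGS node: show the live configuration state for the filesystem
lctl get_param mgs.MGS.live.lustre03
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;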

&lt;p&gt;On the other hand, I&apos;m no longer able to fully reproduce the initial issue; it looks like all clients can talk to the MDT, and &lt;tt&gt;lctl get_param lod.lustre03-MDT0000-mdtlov.target_obd&lt;/tt&gt; returns all OSTs as active.&lt;/p&gt;</comment>
                            <comment id="93658" author="bobijam" created="Wed, 10 Sep 2014 06:32:02 +0000"  >&lt;p&gt;yes, you are right, #11765 is just for umount/remount issue. The failure to get MGS log I think could be related to the multiple mount of MGT device at the same time.&lt;/p&gt;</comment>
                            <comment id="93660" author="ferner" created="Wed, 10 Sep 2014 06:52:48 +0000"  >&lt;p&gt;This is how we had been running before the upgrade without any problem and as far as I can see it&apos;s only been an issue after the upgrade. Is the recommendation these days to have a separate MGS on a separate machine?&lt;/p&gt;

&lt;p&gt;Anyway, is the failure to get the MGS log a problem we need to worry about, or is it mainly cosmetic?&lt;/p&gt;
</comment>
                            <comment id="93661" author="bobijam" created="Wed, 10 Sep 2014 07:28:54 +0000"  >&lt;p&gt;you can use one MGS node for different filesystem, but need  separate MGT device for different filesystem. &lt;/p&gt;</comment>
                            <comment id="93668" author="ferner" created="Wed, 10 Sep 2014 09:04:12 +0000"  >&lt;p&gt;I&apos;ve done a few more tests on this file system while I can (planned maintenance, nearly over now).&lt;/p&gt;

&lt;p&gt;I&apos;ll try to summarise the results here; hopefully they&apos;ll be useful for something (at least they&apos;ll help us remember what has been tested if we get back to this later...).&lt;/p&gt;


&lt;p&gt;In this file system we have one MGT and one MDT; both share the same disk backend and are on the same LVM VG, in separate LVs. We have two MDS servers able to access this storage (cs04r-sc-mds03-01 and cs04r-sc-mds03-02); both have LNet configured to use tcp and o2ib. The MDT is configured to access the MGS on either of the servers via two mgsnode parameters, each listing o2ib and tcp IP addresses.&lt;/p&gt;

&lt;p&gt;The four mount-order combinations behave as follows (commands for the key cases are sketched after the list):&lt;/p&gt;

&lt;ul&gt;
	&lt;li&gt;MGT and MDT mounted, in this order, on cs04r-sc-mds03-01: all seems to be well, no messages in syslog about failing to get MGS log params or anything else.&lt;/li&gt;
	&lt;li&gt;MGT and then MDT mounted, in this order, on cs04r-sc-mds03-02: we get the messages about failing to get MGS log params, but other than the first time, the MDT appears to work fine.&lt;/li&gt;
	&lt;li&gt;MGT mounted on cs04r-sc-mds03-01 and the MDT later mounted on cs04r-sc-mds03-02: works fine, no errors in syslog.&lt;/li&gt;
	&lt;li&gt;MGT mounted on cs04r-sc-mds03-02 and the MDT later mounted on cs04r-sc-mds03-01: generates the messages about failing to get MGS log params on cs04r-sc-mds03-01.&lt;/li&gt;
&lt;/ul&gt;
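
&lt;p&gt;The key failing and working cases, as commands (mount points illustrative):&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;# Fails: &apos;Failed to get MGS log params&apos; appears whenever the MGT
# is mounted on cs04r-sc-mds03-02:
cs04r-sc-mds03-02# mount -t lustre /dev/vg_lustre03/mgs /mnt/lustre/mgs
cs04r-sc-mds03-02# mount -t lustre /dev/vg_lustre03/mdt /mnt/lustre/mdt

# Works: MGT on cs04r-sc-mds03-01, MDT on either server:
cs04r-sc-mds03-01# mount -t lustre /dev/vg_lustre03/mgs /mnt/lustre/mgs
cs04r-sc-mds03-02# mount -t lustre /dev/vg_lustre03/mdt /mnt/lustre/mdt
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;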

&lt;p&gt;So, it seems the MGT works on cs04r-sc-mds03-01 but not on cs04r-sc-mds03-02.&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                                                <inwardlinks description="is related to">
                                        <issuelink>
            <issuekey id="26311">LU-5585</issuekey>
        </issuelink>
                            </inwardlinks>
                                    </issuelinktype>
                    </issuelinks>
                <attachments>
                            <attachment id="15631" name="cs04r-sc-mds03-02-messages.txt" size="46721" author="ferner" created="Thu, 4 Sep 2014 12:55:17 +0000"/>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10490" key="com.atlassian.jira.plugin.system.customfieldtypes:datepicker">
                        <customfieldname>End date</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>Wed, 10 Sep 2014 12:55:17 +0000</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                            <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzwvbb:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>15574</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                        <customfield id="customfield_10493" key="com.atlassian.jira.plugin.system.customfieldtypes:datepicker">
                        <customfieldname>Start date</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>Thu, 4 Sep 2014 12:55:17 +0000</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                    </customfields>
    </item>
</channel>
</rss>