<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 01:30:03 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-2994] mmap IO performance problem</title>
                <link>https://jira.whamcloud.com/browse/LU-2994</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;A customer of ours has an interesting Lustre configuration.&lt;/p&gt;

&lt;p&gt;They run a VM environment with KVM (Kernel-based Virtual Machine). The VM host node runs RHEL 6.2, acts as a Lustre client, and mounts the Lustre filesystem. The guest OS images are stored on Lustre.&lt;/p&gt;

&lt;p&gt;Hadoop runs on these guest OSes, and HDFS is created on the VM images. &lt;br/&gt;
When we ran the Hadoop example jobs (teragen), we saw many error messages like the following on the Lustre client (VM host node).&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;Mar 21 04:01:59 s08 kernel: LustreError: 132-0: BAD WRITE CHECKSUM: changed in transit AND doesn&apos;t match the original - likely false positive due to mmap IO (bug 11742): from 192.168.100.95@o2ib inum 22/1194173787 object 7/0 extent [18041946112-18041950207]
Mar 21 04:01:59 s08 kernel: LustreError: 3308:0:(osc_request.c:1423:check_write_checksum()) original client csum 9f200f04 (type 2), server csum cb180f07 (type 2), client csum now ce430f5f
Mar 21 04:01:59 s08 kernel: LustreError: 3308:0:(osc_request.c:1652:osc_brw_redo_request()) @@@ redo for recoverable error -11  req@ffff88086754e400 x1430178466264362/t4304523663 o4-&amp;gt;lustre-OST0001_UUID@192.168.100.95@o2ib:6/4 lens 448/608 e 0 to 1 dl 1363806126 ref 1 fl Interpret:R/0/0 rc 0/0
Mar 21 04:02:34 s08 kernel: LustreError: 132-0: BAD WRITE CHECKSUM: changed on the client after we checksummed it - likely false positive due to mmap IO (bug 11742): from 192.168.100.95@o2ib inum 22/1194173787 object 7/0 extent [18041978880-18041991167]
Mar 21 04:02:34 s08 kernel: LustreError: Skipped 4 previous similar messages
Mar 21 04:02:34 s08 kernel: LustreError: 3308:0:(osc_request.c:1423:check_write_checksum()) original client csum a32dae6e (type 2), server csum 991aae8f (type 2), client csum now 991aae8f
Mar 21 04:02:34 s08 kernel: LustreError: 3308:0:(osc_request.c:1423:check_write_checksum()) Skipped 4 previous similar messages
Mar 21 04:02:34 s08 kernel: LustreError: 3308:0:(osc_request.c:1652:osc_brw_redo_request()) @@@ redo for recoverable error -11  req@ffff88086754e400 x1430178466359938/t4304619111 o4-&amp;gt;lustre-OST0001_UUID@192.168.100.95@o2ib:6/4 lens 448/608 e 0 to 1 dl 1363806161 ref 1 fl Interpret:R/0/0 rc 0/0
Mar 21 04:02:34 s08 kernel: LustreError: 3308:0:(osc_request.c:1652:osc_brw_redo_request()) Skipped 4 previous similar messages
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;We also see many timeout error messages for the local disk (the VM image). This is reproducible, and I&apos;ve demonstrated the same problem in our lab. &lt;br/&gt;
This is similar to &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-2001&quot; title=&quot;read operation is slow when mmap is enabled&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-2001&quot;&gt;&lt;del&gt;LU-2001&lt;/del&gt;&lt;/a&gt;, and we do not see the performance regression when Lustre is accessed through NFS.&lt;/p&gt;

&lt;p&gt;I&apos;m going to collect debug logs and attach them here.&lt;/p&gt;</description>
                <environment></environment>
        <key id="18019">LU-2994</key>
            <summary>mmap IO performance problem</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="3" iconUrl="https://jira.whamcloud.com/images/icons/priorities/major.svg">Major</priority>
                        <status id="6" iconUrl="https://jira.whamcloud.com/images/icons/statuses/closed.png" description="The issue is considered finished, the resolution is correct. Issues which are closed can be reopened.">Closed</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="2">Won&apos;t Fix</resolution>
                                        <assignee username="jay">Jinshan Xiong</assignee>
                                    <reporter username="ihara">Shuichi Ihara</reporter>
                        <labels>
                    </labels>
                <created>Wed, 20 Mar 2013 19:15:36 +0000</created>
                <updated>Thu, 8 Feb 2018 18:30:18 +0000</updated>
                            <resolved>Thu, 8 Feb 2018 18:30:18 +0000</resolved>
                                    <version>Lustre 1.8.9</version>
                                                        <due></due>
                            <votes>0</votes>
                                    <watches>8</watches>
                                                                            <comments>
                            <comment id="54499" author="pjones" created="Wed, 20 Mar 2013 19:24:41 +0000"  >&lt;p&gt;Ihara&lt;/p&gt;

&lt;p&gt;Which version of Lustre are they using?&lt;/p&gt;

&lt;p&gt;Peter&lt;/p&gt;</comment>
                            <comment id="54501" author="jay" created="Wed, 20 Mar 2013 19:35:02 +0000"  >&lt;p&gt;mmap writes with checksums should have been fixed by the introduction of the ll_page_mkwrite() method. Yes, a debug log is the first step for this problem.&lt;/p&gt;</comment>
                            <comment id="54504" author="green" created="Wed, 20 Mar 2013 19:47:25 +0000"  >&lt;p&gt;For the error messages, we have a silencing patch in 2.x, I believe: commit f8995c83720e999b13f057739f8217822a3951fa.&lt;/p&gt;

&lt;p&gt;The sad reality is that Lustre mmap is on the heavy side, and mmap performance is not great given all the single-page IO it usually encounters, especially the forced read-modify-write cycles.&lt;br/&gt;
NFS hides that because it does not really care about data consistency, so it can be laxer about page tracking and such.&lt;/p&gt;

&lt;p&gt;I am not totally sure kvm itself is using mmap IO for disk access; I just checked a running copy of my kvm and there&apos;s no disk in /proc/fd/map. It might also depend on which disk drivers are used, so try playing with that.&lt;/p&gt;

&lt;p&gt;The other thing I was sure kvm uses mmap for is the system RAM image, which I believed it stores in an unlinked file somewhere in /tmp, but I don&apos;t see any evidence of that either. If this does happen, converting /tmp to a tmpfs-based filesystem might be prudent.&lt;/p&gt;

&lt;p&gt;So, sorry, I am coming at this from a totally different angle, but I think it should be entirely possible to reduce kvm&apos;s usage of mmap and sidestep the problem that way.&lt;/p&gt;</comment>
                            <comment id="54505" author="green" created="Wed, 20 Mar 2013 19:48:12 +0000"  >&lt;p&gt;Jinshan: I suspect this is some sort of 1.8 deployment, so page_mkwrite might not be there.&lt;/p&gt;</comment>
                            <comment id="54530" author="ihara" created="Thu, 21 Mar 2013 03:54:08 +0000"  >&lt;p&gt;This is lustre-1.8.9.wc1.&lt;/p&gt;</comment>
                            <comment id="54531" author="ihara" created="Thu, 21 Mar 2013 03:55:22 +0000"  >&lt;p&gt;debug files attached.&lt;/p&gt;</comment>
                            <comment id="54584" author="ihara" created="Thu, 21 Mar 2013 17:59:13 +0000"  >&lt;p&gt;Collected a stack trace of the qemu-kvm process before the VM node hang. No mmap IO.&lt;/p&gt;</comment>
                            <comment id="54700" author="ihara" created="Fri, 22 Mar 2013 19:21:41 +0000"  >&lt;p&gt;Sorry, qemu-kvm did call mmap. I will post the correct syslog and debug log.&lt;/p&gt;</comment>
                            <comment id="54702" author="ihara" created="Fri, 22 Mar 2013 19:26:48 +0000"  >&lt;p&gt;Here are my system and test configurations. Sorry for the confusion, but it seems to be related to mmap calls on the Lustre client.&lt;/p&gt;

&lt;p&gt;oss, mds - Lustre OSS/MDS, running lustre-1.8.9&lt;br/&gt;
s08 - Lustre client (Lustre mounted on /lustre) and KVM host node&lt;br/&gt;
hadoop1-3 - VMs running on s08; their images are located on /lustre/images/&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;hadoop sample job started on master node of hadoop (hadoop1). writing the 10GB file to HDFS.
# hadoop jar /usr/lib/hadoop/hadoop-test.jar TestDFSIO -write -nrFiles 1 -fileSize 10GB
13/03/23 03:56:18 INFO fs.TestDFSIO: TestDFSIO.0.0.6
13/03/23 03:56:18 INFO fs.TestDFSIO: nrFiles = 1
13/03/23 03:56:18 INFO fs.TestDFSIO: fileSize (MB) = 10240.0
13/03/23 03:56:18 INFO fs.TestDFSIO: bufferSize = 1000000
13/03/23 03:56:18 INFO fs.TestDFSIO: baseDir = /benchmarks/TestDFSIO
13/03/23 03:56:18 INFO fs.TestDFSIO: creating control file: 10737418240 bytes, 1 files
13/03/23 03:56:18 INFO fs.TestDFSIO: created control files for: 1 files
13/03/23 03:56:19 INFO mapred.FileInputFormat: Total input paths to process : 1
13/03/23 03:56:20 INFO mapred.JobClient: Running job: job_201303240831_0029
13/03/23 03:56:21 INFO mapred.JobClient:  map 0% reduce 0%
13/03/23 03:56:36 INFO mapred.JobClient:  map 100% reduce 0%
...
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;I monitored the kvm process on the KVM host node and it called mmap.&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;# grep -i mmap strace-log.txt 
03:56:34 mmap(NULL, 10489856, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_STACK, -1, 0) = 0x7f4f309d1000
03:57:32 mmap(NULL, 10489856, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_STACK, -1, 0) = 0x7f500dccc000
03:57:32 mmap(NULL, 10489856, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_STACK, -1, 0) = 0x7f500c2f1000
03:57:32 mmap(NULL, 10489856, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_STACK, -1, 0) = 0x7f50075ff000
03:57:32 mmap(NULL, 10489856, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_STACK, -1, 0) = 0x7f4f3f3fe000
03:57:32 mmap(NULL, 10489856, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_STACK, -1, 0) = 0x7f4f3e0f4000
03:57:32 mmap(NULL, 10489856, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_STACK, -1, 0) = 0x7f4f3c8ee000
03:57:32 mmap(NULL, 10489856, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_STACK, -1, 0) = 0x7f4f3beed000
03:57:32 mmap(NULL, 10489856, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_STACK, -1, 0) = 0x7f4f3b4ec000
03:57:32 mmap(NULL, 10489856, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_STACK, -1, 0) = 0x7f4f3aaeb000
03:57:32 mmap(NULL, 10489856, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_STACK, -1, 0) = 0x7f4f3a0ea000
03:57:32 mmap(NULL, 10489856, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_STACK, -1, 0) = 0x7f4f36ddb000
03:57:32 mmap(NULL, 10489856, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_STACK, -1, 0) = 0x7f4f363da000
03:58:48 mmap(NULL, 10489856, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_STACK, -1, 0) = 0x7f500dccc000
03:58:57 mmap(NULL, 10489856, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_STACK, -1, 0) = 0x7f500c2f1000
03:58:57 mmap(NULL, 10489856, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_STACK, -1, 0) = 0x7f50075ff000
03:58:57 mmap(NULL, 10489856, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_STACK, -1, 0) = 0x7f5005ffb000
04:02:49 mmap(NULL, 10489856, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_STACK, -1, 0) = 0x7f5004af8000
04:02:50 mmap(NULL, 10489856, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_STACK, -1, 0) = 0x7f4f3e0f4000
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;From Lustre client&apos;s debug log.&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;00000008:02020000:6:1363978813.043771:0:8798:0:(osc_request.c:1420:check_write_checksum()) 132-0: BAD WRITE CHECKSUM: changed in transit AND doesn&apos;t match the original - likely false positive due to mmap IO (bug 11742): from 192.168.100.95@o2ib inum 43/1177415599 object 738/0 extent [4158808064-4158820351]
00000008:00020000:6:1363978813.043772:0:8798:0:(osc_request.c:1423:check_write_checksum()) original client csum 7335b83c (type 2), server csum d105b83e (type 2), client csum now 2ee4b840
00000008:00000001:6:1363978813.043773:0:8798:0:(osc_request.c:1478:osc_brw_fini_request()) Process leaving (rc=18446744073709551605 : -11 : fffffffffffffff5)
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;On the VM side (hadoop3), there are many device IO errors.&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;Mar 23 03:57:31 hadoop3 kernel: end_request: I/O error, dev vda, sector 5689384
Mar 23 03:57:31 hadoop3 kernel: Buffer I/O error on device dm-0, logical block 582661
Mar 23 03:57:31 hadoop3 kernel: lost page write due to I/O error on dm-0
Mar 23 03:57:31 hadoop3 kernel: Buffer I/O error on device dm-0, logical block 582662
Mar 23 03:57:31 hadoop3 kernel: lost page write due to I/O error on dm-0
Mar 23 03:57:31 hadoop3 kernel: Buffer I/O error on device dm-0, logical block 582663
Mar 23 03:57:31 hadoop3 kernel: lost page write due to I/O error on dm-0
Mar 23 03:57:31 hadoop3 kernel: Buffer I/O error on device dm-0, logical block 582664
Mar 23 03:57:31 hadoop3 kernel: lost page write due to I/O error on dm-0
Mar 23 03:57:31 hadoop3 kernel: end_request: I/O error, dev vda, sector 54641960
Mar 23 03:57:31 hadoop3 kernel: Buffer I/O error on device dm-0, logical block 6701733
Mar 23 03:57:31 hadoop3 kernel: lost page write due to I/O error on dm-0
Mar 23 03:57:31 hadoop3 kernel: Buffer I/O error on device dm-0, logical block 6701734
Mar 23 03:57:31 hadoop3 kernel: lost page write due to I/O error on dm-0
Mar 23 03:57:31 hadoop3 kernel: Buffer I/O error on device dm-0, logical block 6701735
...
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
                            <comment id="54703" author="ihara" created="Fri, 22 Mar 2013 19:34:08 +0000"  >&lt;p&gt;All syslog and debug files are uploaded to /uploads/&lt;a href=&quot;https://jira.whamcloud.com/browse/LU-2994&quot; title=&quot;mmap IO performance problem&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-2994&quot;&gt;&lt;del&gt;LU-2994&lt;/del&gt;&lt;/a&gt;/20130323 on the ftp site.&lt;/p&gt;

&lt;p&gt;Please have a look at them and let me know if you need further information.&lt;/p&gt;</comment>
                            <comment id="54704" author="green" created="Fri, 22 Mar 2013 19:36:33 +0000"  >&lt;p&gt;The mmap call you quote is an &quot;anonymous&quot; mmap, meaning it does not attach itself to any filesystem; it&apos;s just a fancy way of allocating memory.&lt;/p&gt;

&lt;p&gt;I wonder whether the bad checksum is valid and you really have a network problem. Do you get a similar bad checksum message on the server?&lt;/p&gt;</comment>
                            <comment id="54717" author="ihara" created="Sat, 23 Mar 2013 01:37:15 +0000"  >&lt;blockquote&gt;&lt;p&gt;I wonder if the bad checksum is valid and you really have a network problem? do you get a similar bad checksum message on the server?&lt;/p&gt;&lt;/blockquote&gt;

&lt;p&gt;No, this is exactly what happened at the customer site, and we demonstrated the same problem in our lab here.&lt;/p&gt;</comment>
                            <comment id="54723" author="ihara" created="Sun, 24 Mar 2013 06:50:54 +0000"  >&lt;p&gt;A few updates.&lt;/p&gt;

&lt;p&gt;As far as we have found out by examining qemu in detail, if cache=none is set in the disk option of a VM&apos;s virtual disk, the host opens the image with O_DIRECT, which means no page cache on the Lustre client.&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;&amp;lt;driver name=&apos;qemu&apos; type=&apos;qcow2&apos; cache=&apos;none&apos;/&amp;gt;
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;However, &quot;none&quot; is NOT no caching on the VM side. The VM still has a small cache to emulate the &quot;write back&quot; mode of the disk. So, from the host&apos;s perspective, this is not true O_DIRECT mode.&lt;/p&gt;

&lt;p&gt;We changed the cache mode to writeback and writethrough to disable O_DIRECT on the host and use async IO instead. So far we haven&apos;t seen the problem...&lt;/p&gt;</comment>
                            <comment id="54727" author="ihara" created="Sun, 24 Mar 2013 15:37:17 +0000"  >&lt;p&gt;With lustre-2.x, if the VM images are located on Lustre, starting the VMs doesn&apos;t work (they can&apos;t even start) because qemu-kvm gets -EINVAL from ll_direct_IO_26() here:&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;       /* FIXME: io smaller than PAGE_SIZE is broken on ia64 ??? */
       if ((file_offset &amp;amp; ~CFS_PAGE_MASK) || (count &amp;amp; ~CFS_PAGE_MASK))
               RETURN(-EINVAL);
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
                            <comment id="54731" author="green" created="Sun, 24 Mar 2013 19:00:40 +0000"  >&lt;p&gt;Ah! Now this makes total sense!&lt;br/&gt;
It seems we always assumed that mmap was the only way to change pages while IO was in-progress, but in reality O_DIRECT is very similar - the pages are under user control so parallel threads might still change them while the IO thread is blocked.&lt;/p&gt;

&lt;p&gt;As far as the EINVAL goes, can you confirm the count IS in fact a multiple of 4k, at an offset that is a multiple of 4k?&lt;/p&gt;</comment>
                            <comment id="220471" author="jay" created="Thu, 8 Feb 2018 18:30:18 +0000"  >&lt;p&gt;close old tickets&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                                                <inwardlinks description="is related to">
                                                        </inwardlinks>
                                    </issuelinktype>
                    </issuelinks>
                <attachments>
                            <attachment id="12412" name="debug.txt.gz" size="3319489" author="ihara" created="Thu, 21 Mar 2013 03:55:22 +0000"/>
                            <attachment id="12414" name="strace-qemu-kvm.log" size="786416" author="ihara" created="Thu, 21 Mar 2013 17:59:13 +0000"/>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10490" key="com.atlassian.jira.plugin.system.customfieldtypes:datepicker">
                        <customfieldname>End date</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>Mon, 11 Apr 2016 19:15:36 +0000</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                            <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzvlrj:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>7301</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10021"><![CDATA[2]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                        <customfield id="customfield_10493" key="com.atlassian.jira.plugin.system.customfieldtypes:datepicker">
                        <customfieldname>Start date</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>Wed, 20 Mar 2013 19:15:36 +0000</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                    </customfields>
    </item>
</channel>
</rss>