<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 01:17:57 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
<language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-1586] no free catalog slots for log</title>
                <link>https://jira.whamcloud.com/browse/LU-1586</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;It seems the MDT catalog file may be damaged on our test filesystem.  We were doing recovery testing with the patch for &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-1352&quot; title=&quot;spurious recovery timer resets&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-1352&quot;&gt;&lt;del&gt;LU-1352&lt;/del&gt;&lt;/a&gt;.  Sometime after power-cycling the MDS and letting it go through recovery, clients started getting EFAULT writing to lustre.  These failures are accompanied by the following console errors on the MDS.&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;Jun 28 12:08:45 zwicky-mds2 kernel: LustreError: 11841:0:(llog_cat.c:81:llog_cat_new_log()) no free catalog slots for log...
Jun 28 12:08:45 zwicky-mds2 kernel: LustreError: 11841:0:(llog_cat.c:81:llog_cat_new_log()) Skipped 3 previous similar messages
Jun 28 12:08:45 zwicky-mds2 kernel: LustreError: 11841:0:(llog_obd.c:454:llog_obd_origin_add()) write one catalog record failed: -28
Jun 28 12:08:45 zwicky-mds2 kernel: LustreError: 11841:0:(llog_obd.c:454:llog_obd_origin_add()) Skipped 3 previous similar messages
Jun 28 12:08:45 zwicky-mds2 kernel: LustreError: 11841:0:(mdd_object.c:1330:mdd_changelog_data_store()) changelog failed: rc=-28 op17 t[0x200de60af:0x17913:0x0]
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;I mentioned this in &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-1570&quot; title=&quot;llog_cat.c:428:llog_cat_process_flags() catlog 0x27500007 crosses index zero&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-1570&quot;&gt;&lt;del&gt;LU-1570&lt;/del&gt;&lt;/a&gt;, but I figured a new ticket was needed.&lt;/p&gt;</description>
                <environment>&lt;a href=&quot;https://github.com/chaos/lustre/commits/2.1.1-15chaos&quot;&gt;https://github.com/chaos/lustre/commits/2.1.1-15chaos&lt;/a&gt; </environment>
        <key id="15097">LU-1586</key>
            <summary>no free catalog slots for log</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="2" iconUrl="https://jira.whamcloud.com/images/icons/priorities/critical.svg">Critical</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="3">Duplicate</resolution>
                                        <assignee username="bogl">Bob Glossman</assignee>
                                    <reporter username="nedbass">Ned Bass</reporter>
                        <labels>
                            <label>llnl</label>
                    </labels>
                <created>Fri, 29 Jun 2012 18:50:26 +0000</created>
                <updated>Wed, 1 Mar 2017 12:02:54 +0000</updated>
                            <resolved>Wed, 1 Mar 2017 12:02:54 +0000</resolved>
                                    <version>Lustre 2.1.1</version>
                                                        <due></due>
                            <votes>0</votes>
                                    <watches>17</watches>
                                                                            <comments>
                            <comment id="41335" author="bogl" created="Fri, 29 Jun 2012 19:14:46 +0000"  >&lt;p&gt;This problem nearly certainly has a common underlying cause with &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-1570&quot; title=&quot;llog_cat.c:428:llog_cat_process_flags() catlog 0x27500007 crosses index zero&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-1570&quot;&gt;&lt;del&gt;LU-1570&lt;/del&gt;&lt;/a&gt;.  Both come down to damage in the catalog.  Don&apos;t plan to close this as a dup until we complete our analysis.&lt;/p&gt;

&lt;p&gt;Ned,&lt;br/&gt;
  You say this happened while you were testing a fix from &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-1352&quot; title=&quot;spurious recovery timer resets&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-1352&quot;&gt;&lt;del&gt;LU-1352&lt;/del&gt;&lt;/a&gt;.  As far as I know that back port to b2_1 has not yet been fully tested and accepted.  Kind of suspicious.&lt;/p&gt;</comment>
                            <comment id="41336" author="nedbass" created="Fri, 29 Jun 2012 19:18:30 +0000"  >&lt;p&gt;Indeed, these are the types of issues we&apos;re hoping to shake out before we even consider putting that patch in production.  I should mention that the &quot;catlog 0x27500007 crosses index zero&quot; error message from &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-1570&quot; title=&quot;llog_cat.c:428:llog_cat_process_flags() catlog 0x27500007 crosses index zero&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-1570&quot;&gt;&lt;del&gt;LU-1570&lt;/del&gt;&lt;/a&gt; began appearing in the MDS logs before we installed the &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-1352&quot; title=&quot;spurious recovery timer resets&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-1352&quot;&gt;&lt;del&gt;LU-1352&lt;/del&gt;&lt;/a&gt; patch.&lt;/p&gt;</comment>
                            <comment id="41343" author="adilger" created="Sat, 30 Jun 2012 10:44:48 +0000"  >&lt;p&gt;If this is a production system, it would be possible to delete the CATALOGS file on the MDS.&lt;/p&gt;

&lt;p&gt;That said, is there any reason you are aware of that would cause a lot of llog records to be produced but not deleted?  Creating and deleting a lot of files, but having OSTs offline could do this.&lt;/p&gt;

&lt;p&gt;It would be useful to get a listing of the OBJECTS directory on the MDS, just to see how many llog files there actually are.  The catalog should be able to reference ~64000 log files.&lt;/p&gt;</comment>
                            <comment id="41344" author="nedbass" created="Sat, 30 Jun 2012 13:13:33 +0000"  >&lt;blockquote&gt;&lt;p&gt;is there any reason you are aware of that would cause a lot of llog records to be produced but not deleted? Creating and deleting a lot of files, but having OSTs offline could do this.&lt;/p&gt;&lt;/blockquote&gt;

&lt;p&gt;It&apos;s very possible that&apos;s what happened.  The OSTs were down overnight while our I/O SWL tests were running on the clients.  That may have included mdtest which does create lots of files.  I thought the MDS was down too, but I&apos;d have to go back over the logs to reconstruct what really happened.&lt;/p&gt;

&lt;blockquote&gt;&lt;p&gt;It would be useful to get a listing of the OBJECTS directory on the MDS, just to see how many llog files there actually are. The catalog should be able to reference ~64000 log files.&lt;/p&gt;&lt;/blockquote&gt;

&lt;p&gt;&amp;#35; zwicky-mds2 &amp;gt; ls OBJECTS/ | wc -l&lt;br/&gt;
65428&lt;/p&gt;</comment>
                            <comment id="41759" author="nedbass" created="Thu, 12 Jul 2012 14:29:24 +0000"  >&lt;blockquote&gt;&lt;p&gt;Creating and deleting a lot of files, but having OSTs offline could do this.&lt;/p&gt;&lt;/blockquote&gt;

&lt;p&gt;I still don&apos;t really understand this failure mode.  Shouldn&apos;t the llogs be replayed and removed when the OSTs reconnect to the MDS?&lt;/p&gt;

&lt;p&gt;Also, we found that removing the CATALOGS file didn&apos;t resolve this.  We even went so far as to clear the OBJECTS and CONFIGS directories and issue a writeconf, but were unable to resuscitate the MDS.&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;Lustre: MGS: Logs for fs lc2 were removed by user request.  All servers must be restarted in order to regenerate the logs.
Lustre: Setting parameter lc2-MDT0000-mdtlov.lov.stripecount in log lc2-MDT0000
Lustre: Setting parameter lc2-clilov.lov.stripecount in log lc2-client
LustreError: 5569:0:(genops.c:304:class_newdev()) Device lc2-MDT0000-mdtlov already exists, won&apos;t add
LustreError: 5569:0:(obd_config.c:327:class_attach()) Cannot create device lc2-MDT0000-mdtlov of type lov : -17
LustreError: 5569:0:(obd_config.c:1363:class_config_llog_handler()) Err -17 on cfg command:
Lustre:    cmd=cf001 0:lc2-MDT0000-mdtlov  1:lov  2:lc2-MDT0000-mdtlov_UUID  
LustreError: 15c-8: MGC10.1.1.212@o2ib9: The configuration from log &apos;lc2-MDT0000&apos; failed (-17). This may be the result of communication errors between this node and the MGS, a bad configuration, or other errors. See the syslog for more information.
LustreError: 5513:0:(obd_mount.c:1192:server_start_targets()) failed to start server lc2-MDT0000: -17
LustreError: 5513:0:(obd_mount.c:1723:server_fill_super()) Unable to start targets: -17
LustreError: 5513:0:(obd_mount.c:1512:server_put_super()) no obd lc2-MDT0000
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;We needed the filesystem for other testing, so we gave up and reformatted it.  However, we&apos;re concerned that this failure may occur in production and we don&apos;t have a recovery process.  So we&apos;d really like to understand better what happened and how to fix it.&lt;/p&gt;</comment>
                            <comment id="41763" author="adilger" created="Thu, 12 Jul 2012 17:07:46 +0000"  >&lt;p&gt;-17 = EEXIST, so I would suspect it is complaining about a file in CONFIGS, but you reported that was cleared out as well.&lt;/p&gt;

&lt;p&gt;You are correct that bringing the OSTs back online should cause the OST recovery logs to be cleaned up.  Along with the message in &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-1570&quot; title=&quot;llog_cat.c:428:llog_cat_process_flags() catlog 0x27500007 crosses index zero&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-1570&quot;&gt;&lt;del&gt;LU-1570&lt;/del&gt;&lt;/a&gt;, it seems there is something at your site that is consuming more llogs than normal.  Each llog should allow up to 64k unlinks to be stored for recovery, and up to 64k llogs in a catalog PER OST, though new llog files are started for each boot.  That means 4B unlinks or 64k reboots, or combinations thereof, per OST before the catalog wraps back to zero (per &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-1570&quot; title=&quot;llog_cat.c:428:llog_cat_process_flags() catlog 0x27500007 crosses index zero&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-1570&quot;&gt;&lt;del&gt;LU-1570&lt;/del&gt;&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;The logs &lt;em&gt;should&lt;/em&gt; be deleted sequentially after the MDT-&amp;gt;OST orphan recovery is completed when the OST reconnects, freeing up their slot in the catalog file.  It is possible that something was broken in this code in 2.x and it hasn&apos;t been noticed until now, since it would take a long time to see the symptoms.&lt;/p&gt;

&lt;p&gt;A simple test to reproduce this would be to create &amp;amp; delete files in a loop (~1M) on a specific OST (using &quot;mkdir $LUSTRE/test_ostN; lfs setstripe -i N $LUSTRE/test_ostN; createmany -o $LUSTRE/test_ostN/f 1000000&quot;, where &quot;N&quot; is some OST number) and see if the number of llog files in the MDT OBJECTS/ directory is increasing steadily over time (beyond 2 or 3 files per OST).  I don&apos;t recall specifically, but it may need an unmount and remount of the OST for the llog files to be cleaned up.&lt;/p&gt;

&lt;p&gt;Failing that test, try creating a large number of files (~1M) in $LUSTRE/test_ostN, and then unmount OST N and delete all the files.  This should succeed without error, but there will be many llog entries stored in the llog file.  The llog files should be cleaned when this OST is mounted again.&lt;/p&gt;</comment>
                            <comment id="41767" author="nedbass" created="Thu, 12 Jul 2012 17:38:39 +0000"  >&lt;blockquote&gt;&lt;p&gt;-17 = EEXIST, so I would suspect it is complaining about a file in CONFIGS, but you reported that was cleared out as well.&lt;/p&gt;&lt;/blockquote&gt;

&lt;p&gt;Even more strangely, the EEXIST errors persisted after reformatting the MDT.  To test whether something on disk was to blame, we formatted a loopback device with the same filesystem name, and &lt;b&gt;still&lt;/b&gt; got EEXIST mounting the MDT.  Formatting the loopback device with a different filesystem name worked as expected.  Rebooting the MDS node cleared the problem (I suspect reloading the module stack would have as well).  There must have been some in-memory state left over from the original filesystem (we double-checked in /proc/mounts that the old MDT was not mounted).&lt;/p&gt;

&lt;p&gt;In any case, this seems like something to track in a separate (but possibly related) issue.&lt;/p&gt;

&lt;blockquote&gt;&lt;p&gt;it seems there is something at your site that is consuming more llogs than normal.&lt;/p&gt;&lt;/blockquote&gt;

&lt;p&gt;By any chance, do you suppose running Robinhood could be a factor?  I just learned one of our admins was evaluating Robinhood on this filesystem, but my understanding is that it just reads the Changelogs so I don&apos;t suspect a connection.&lt;/p&gt;

&lt;p&gt;Thanks for the test suggestions.  We&apos;ll give them a try when the filesystem is free again.&lt;/p&gt;</comment>
                            <comment id="41851" author="adilger" created="Fri, 13 Jul 2012 23:37:51 +0000"  >
&lt;p&gt;Yes, the changelogs could definitely be a factor.  Once there is a registered changelog user, the changelogs are kept on disk until they are consumed.  That ensures that if e.g. Robinhood crashes, or has some other problem for a day or four, it won&apos;t have to do a full scan just to recover the state again.&lt;/p&gt;

&lt;p&gt;However, if the ChangeLog user is not unregistered, the changelogs will be kept until they run out of space.  I suspect that is the root cause here, and should be investigated further.  This bug should be CC&apos;d to Jinshan and Aurelien Degremont, who are working on HSM these days.&lt;/p&gt;

&lt;p&gt;Cheers, Andreas&lt;/p&gt;



</comment>
                            <comment id="42133" author="pjones" created="Mon, 23 Jul 2012 11:31:25 +0000"  >&lt;p&gt;Adding those involved with HSM for comment&lt;/p&gt;</comment>
<comment id="52778" author="nedbass" created="Wed, 20 Feb 2013 21:16:11 +0000"  >&lt;p&gt;It seems like lots of bad things can happen if the changelog catalog is allowed to become full: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-2843&quot; title=&quot;ASSERTION( last_rec-&amp;gt;lrh_index == tail-&amp;gt;lrt_index )&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-2843&quot;&gt;&lt;del&gt;LU-2843&lt;/del&gt;&lt;/a&gt; &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-2844&quot; title=&quot;NULL pointer deref on unmount&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-2844&quot;&gt;&lt;del&gt;LU-2844&lt;/del&gt;&lt;/a&gt; &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-2845&quot; title=&quot;NULL pointer deref in osp_precreate_thread()&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-2845&quot;&gt;&lt;del&gt;LU-2845&lt;/del&gt;&lt;/a&gt;.  Besides these crashes, the MDS service fails to start due to EINVAL errors from mdd_changelog_llog_init(), and the only way I&apos;ve found to recover is manually deleting the changelog_catalog file.&lt;/p&gt;

&lt;p&gt;I&apos;m interested in adding safety mechanisms to prevent this situation.  Perhaps the MDS could automatically unregister changelog users or set the changelog mask to zero based on a tunable threshold of unprocessed records.  Does anyone have other ideas for how to handle this more gracefully?&lt;/p&gt;</comment>
                            <comment id="52787" author="adilger" created="Thu, 21 Feb 2013 01:58:36 +0000"  >&lt;p&gt;Ned, I agree this should be handled more gracefully. I think it is preferable to unregister the oldest consumer as the catalog approaches full, which should cause old records to be released (need to check this).  That is IMHO better than setting the mask to zero and no longer recording new events.&lt;/p&gt;

&lt;p&gt;In both cases the consumer will have to do some scanning to find new changes. However, in the first case, it is more likely that the old consumer is no longer in use and no harm is done, while in the second case even a well-behaved consumer is punished.&lt;/p&gt;

&lt;p&gt;On a related note, do you know how many files were created before the catalog was full?  In theory about 4B Changelog entries should be possible (approx 64000^2), but this might be reduced by some small factor if there are multiple records per file (e.g. create + setattr). &lt;/p&gt;</comment>
                            <comment id="52789" author="nedbass" created="Thu, 21 Feb 2013 03:26:59 +0000"  >&lt;p&gt;It only took about 1.3 million changelog entries to fill the catalog.  My test case was something like&lt;/p&gt;

&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;MDSDEV1=/dev/sda llmount.sh
lctl --device lustre-MDT0000 changelog_register
&lt;span class=&quot;code-keyword&quot;&gt;while&lt;/span&gt; createmany -m /mnt/lustre/%d 1000 ; &lt;span class=&quot;code-keyword&quot;&gt;do&lt;/span&gt;
    unlinkmany /mnt/lustre/%d 1000
done
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;and it made it through about 670 iterations before failing.&lt;/p&gt;
</comment>
<comment id="52824" author="nedbass" created="Thu, 21 Feb 2013 14:14:10 +0000"  >&lt;p&gt;Sorry, I was filling the device, not the changelog catalog.  I specified MDSDEV1=/dev/sda thinking it would use the whole device, but I also need to set MDSSIZE.  So it will take days, not minutes, to hit this limit, making it less worrisome but still something that should be addressed.&lt;/p&gt;

&lt;p&gt;The reason I&apos;m now picking this thread up again is that we have plans to enable changelogs on our production systems for use by Robinhood.  We&apos;re concerned about being exposed to the problems under discussion here if Robinhood goes down for an extended period.&lt;/p&gt;</comment>
<comment id="52860" author="adegremont" created="Fri, 22 Feb 2013 03:19:05 +0000"  >&lt;p&gt;FYI we had Robinhood set up on a filesystem with 100 million inodes, and an MDS RPC rate between 1k/s and 30k/s peak. We had Robinhood stopped for days and had millions of changelog records to be consumed. It also took days to close the gap, but the MDS was &lt;b&gt;very&lt;/b&gt;, &lt;b&gt;very&lt;/b&gt; far from being filled (MDS size was 2 TB). I think we did not consume even 1% of this device.&lt;br/&gt;
Do not worry &lt;img class=&quot;emoticon&quot; src=&quot;https://jira.whamcloud.com/images/icons/emoticons/smile.png&quot; height=&quot;16&quot; width=&quot;16&quot; align=&quot;absmiddle&quot; alt=&quot;&quot; border=&quot;0&quot;/&gt; &lt;/p&gt;</comment>
<comment id="52877" author="nedbass" created="Fri, 22 Feb 2013 11:22:56 +0000"  >&lt;p&gt;Aurelien, we&apos;re concerned about filling the changelog catalog, not the device.  We actually had that happen on our test system when Robinhood was down and I was testing metadata performance (hence this Jira issue).  It&apos;s far less likely on a production system with non-pathological workloads, but not outside the realm of possibility.&lt;/p&gt;</comment>
                            <comment id="103022" author="kilian" created="Fri, 9 Jan 2015 17:09:21 +0000"  >&lt;p&gt;As a matter of fact, it happened to us on a production filesystem. I wouldn&apos;t say the workload is non-pathological, though. &lt;img class=&quot;emoticon&quot; src=&quot;https://jira.whamcloud.com/images/icons/emoticons/smile.png&quot; height=&quot;16&quot; width=&quot;16&quot; align=&quot;absmiddle&quot; alt=&quot;&quot; border=&quot;0&quot;/&gt;&lt;/p&gt;

&lt;p&gt;Anyway, we noticed at some point that an MD operation such as &quot;chown&quot; could lead to ENOSPC:&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;# chown userA /scratch/users/userA
chown: changing ownership of `/scratch/users/userA/&apos;: No space left on device
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The related MDS messages are:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;LustreError: 8130:0:(llog_cat.c:82:llog_cat_new_log()) no free catalog slots for log...
LustreError: 8130:0:(mdd_dir.c:783:mdd_changelog_ns_store()) changelog failed: rc=-28, op1 test c[0x20000b197:0x108d0:0x0] p[0x200002efb:0x155d5:0x0]
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Any tip on how to solve this? Would consuming (or clearing) the changelogs be sufficient?&lt;/p&gt;</comment>
                            <comment id="186584" author="adilger" created="Wed, 1 Mar 2017 12:02:54 +0000"  >&lt;p&gt;Close as a duplicate of &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-7340&quot; title=&quot;ChangeLogs catalog full condition should be handled more gracefully&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-7340&quot;&gt;&lt;del&gt;LU-7340&lt;/del&gt;&lt;/a&gt;.&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10010">
                    <name>Duplicate</name>
                                            <outwardlinks description="duplicates">
                                        <issuelink>
            <issuekey id="32827">LU-7340</issuekey>
        </issuelink>
                            </outwardlinks>
                                                        </issuelinktype>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                                                <inwardlinks description="is related to">
                                        <issuelink>
            <issuekey id="43359">LU-9055</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="15051">LU-1570</issuekey>
        </issuelink>
                            </inwardlinks>
                                    </issuelinktype>
                    </issuelinks>
                <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10490" key="com.atlassian.jira.plugin.system.customfieldtypes:datepicker">
                        <customfieldname>End date</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>Fri, 9 Jan 2015 18:50:26 +0000</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                            <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzv33z:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>4003</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                        <customfield id="customfield_10493" key="com.atlassian.jira.plugin.system.customfieldtypes:datepicker">
                        <customfieldname>Start date</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>Fri, 29 Jun 2012 18:50:26 +0000</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                    </customfields>
    </item>
</channel>
</rss>