<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 02:05:14 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-7012] files not being deleted from OST after being re-activated</title>
                <link>https://jira.whamcloud.com/browse/LU-7012</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;We had 4 OSTs that we deactivated because of an imbalance in utilization that was causing ENOSPC messages to our users. We identified a file that was consuming a significant amount of space that we deleted while the OSTs were deactivated. The file is no longer seen in the directory structure (the MDS processed the request), but the objects on the OSTs were not marked as free. After re-activating the OSTs, it doesn&apos;t appear that the llog was flushed, which should free up those objects. &lt;/p&gt;

&lt;p&gt;At this time, some users are not able to run jobs because they cannot allocate any space. &lt;/p&gt;

&lt;p&gt;We understand how this is supposed to work, but as the user in &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-4295&quot; title=&quot;removing files on deactivated OST doesn&amp;#39;t free up space&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-4295&quot;&gt;&lt;del&gt;LU-4295&lt;/del&gt;&lt;/a&gt; pointed out, it is not. &lt;/p&gt;

&lt;p&gt;Please advise. &lt;/p&gt;</description>
                <environment>RHEL-6.6, lustre-2.5.4</environment>
        <key id="31500">LU-7012</key>
            <summary>files not being deleted from OST after being re-activated</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="2" iconUrl="https://jira.whamcloud.com/images/icons/priorities/critical.svg">Critical</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="tappro">Mikhail Pershin</assignee>
                                    <reporter username="dustb100">Dustin Leverman</reporter>
                        <labels>
                    </labels>
                <created>Mon, 17 Aug 2015 15:19:21 +0000</created>
                <updated>Thu, 15 Sep 2016 15:10:25 +0000</updated>
                            <resolved>Wed, 11 Nov 2015 18:07:49 +0000</resolved>
                                    <version>Lustre 2.5.4</version>
                                    <fixVersion>Lustre 2.8.0</fixVersion>
                                        <due></due>
                            <votes>0</votes>
                                    <watches>15</watches>
                                                                            <comments>
                            <comment id="124285" author="green" created="Mon, 17 Aug 2015 15:33:32 +0000"  >&lt;p&gt;Does your OST show as active on the MDT (i.e. did the MDT reconnect)?&lt;/p&gt;</comment>
                            <comment id="124287" author="ezell" created="Mon, 17 Aug 2015 15:35:47 +0000"  >&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[root@atlas-mds1 ~]# cat /proc/fs/lustre/osc/*/active|sort|uniq -c
   1008 1
[root@atlas-mds1 ~]# cat /proc/fs/lustre/osp/*/active|sort|uniq -c
   1008 1
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;Aug 17 09:40:25 atlas-mgs1.ccs.ornl.gov kernel: [7161385.311368] Lustre: Permanently reactivating atlas1-OST02ce
Aug 17 09:40:25 atlas-mgs1.ccs.ornl.gov kernel: [7161385.321383] Lustre: Setting parameter atlas1-OST02ce-osc.osc.active in log atlas1-client
Aug 17 09:40:40 atlas-mgs1.ccs.ornl.gov kernel: [7161400.916159] Lustre: Permanently reactivating atlas1-OST039b
Aug 17 09:40:40 atlas-mgs1.ccs.ornl.gov kernel: [7161400.926057] Lustre: Setting parameter atlas1-OST039b-osc.osc.active in log atlas1-client
Aug 17 09:40:51 atlas-mgs1.ccs.ornl.gov kernel: [7161411.936736] Lustre: Permanently reactivating atlas1-OST02c1
Aug 17 09:40:51 atlas-mgs1.ccs.ornl.gov kernel: [7161411.946798] Lustre: Setting parameter atlas1-OST02c1-osc.osc.active in log atlas1-client
Aug 17 09:41:00 atlas-mgs1.ccs.ornl.gov kernel: [7161420.990618] Lustre: Permanently reactivating atlas1-OST02fb
Aug 17 09:41:00 atlas-mgs1.ccs.ornl.gov kernel: [7161421.000097] Lustre: Setting parameter atlas1-OST02fb-osc.osc.active in log atlas1-client
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
                            <comment id="124291" author="green" created="Mon, 17 Aug 2015 15:43:03 +0000"  >&lt;p&gt;&quot;Permanently reactivating&quot; is just a message from the MGS.&lt;br/&gt;
How about the MDS logs showing a reconnect to the OST, and the OST showing that the MDT connected to it?&lt;/p&gt;</comment>
                            <comment id="124295" author="green" created="Mon, 17 Aug 2015 15:52:48 +0000"  >&lt;p&gt;Essentially the footprint I am looking for (on the MDS) would be:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[ 7384.128329] Lustre: setting import lustre-OST0001_UUID INACTIVE by administrator request
[ 7403.759510] Lustre: lustre-OST0001-osc-ffff8800b96a1800: Connection to lustre-OST0001 (at 192.168.10.227@tcp) was lost; in progress operations using this service will wait for recovery to complete
[ 7403.764253] LustreError: 167-0: lustre-OST0001-osc-ffff8800b96a1800: This client was evicted by lustre-OST0001; in progress operations using this service will fail.
[ 7403.765235] Lustre: lustre-OST0001-osc-ffff8800b96a1800: Connection restored to lustre-OST0001 (at 192.168.10.227@tcp)
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;Where the first INACTIVE would come from lctl deactivate and the connection restored would come from lctl activate.&lt;/p&gt;</comment>
                            <comment id="124296" author="dustb100" created="Mon, 17 Aug 2015 15:56:03 +0000"  >&lt;p&gt;Oleg, &lt;br/&gt;
     Below are the log messages for the reactivation of atlas1-OST039b, atlas1-OST02c1, atlas1-OST02fb, and atlas1-OST02ce:&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;Aug 17 08:11:19 atlas-mds1.ccs.ornl.gov kernel: [2973887.632969] Lustre: setting import atlas1-OST02c1_UUID INACTIVE by administrator request
Aug 17 08:11:25 atlas-mds1.ccs.ornl.gov kernel: [2973893.078469] Lustre: setting import atlas1-OST02fb_UUID INACTIVE by administrator request
Aug 17 08:11:30 atlas-mds1.ccs.ornl.gov kernel: [2973898.379605] Lustre: setting import atlas1-OST039b_UUID INACTIVE by administrator request
Aug 17 08:42:11 atlas-mds1.ccs.ornl.gov kernel: [2975741.381423] Lustre: atlas1-OST039b-osc-MDT0000: Connection to atlas1-OST039b (at 10.36.225.89@o2ib) was lost; in progress operations using this service will wait for recovery to complete
Aug 17 08:42:11 atlas-mds1.ccs.ornl.gov kernel: [2975741.400737] LustreError: 167-0: atlas1-OST039b-osc-MDT0000: This client was evicted by atlas1-OST039b; in progress operations using this service will fail.
Aug 17 08:42:11 atlas-mds1.ccs.ornl.gov kernel: [2975741.416837] Lustre: atlas1-OST039b-osc-MDT0000: Connection restored to atlas1-OST039b (at 10.36.225.89@o2ib)
Aug 17 08:42:18 atlas-mds1.ccs.ornl.gov kernel: [2975747.822971] Lustre: atlas1-OST02fb-osc-MDT0000: Connection to atlas1-OST02fb (at 10.36.225.73@o2ib) was lost; in progress operations using this service will wait for recovery to complete
Aug 17 08:42:18 atlas-mds1.ccs.ornl.gov kernel: [2975747.842235] LustreError: 167-0: atlas1-OST02fb-osc-MDT0000: This client was evicted by atlas1-OST02fb; in progress operations using this service will fail.
Aug 17 08:42:18 atlas-mds1.ccs.ornl.gov kernel: [2975747.858294] Lustre: atlas1-OST02fb-osc-MDT0000: Connection restored to atlas1-OST02fb (at 10.36.225.73@o2ib)
Aug 17 08:42:26 atlas-mds1.ccs.ornl.gov kernel: [2975756.287935] Lustre: atlas1-OST02c1-osc-MDT0000: Connection to atlas1-OST02c1 (at 10.36.225.159@o2ib) was lost; in progress operations using this service will wait for recovery to complete
Aug 17 08:42:26 atlas-mds1.ccs.ornl.gov kernel: [2975756.307394] LustreError: 167-0: atlas1-OST02c1-osc-MDT0000: This client was evicted by atlas1-OST02c1; in progress operations using this service will fail.
Aug 17 08:42:26 atlas-mds1.ccs.ornl.gov kernel: [2975756.323480] Lustre: atlas1-OST02c1-osc-MDT0000: Connection restored to atlas1-OST02c1 (at 10.36.225.159@o2ib)
Aug 17 11:53:44 atlas-mds1.ccs.ornl.gov kernel: [2987244.922580] Lustre: setting import atlas1-OST02c7_UUID INACTIVE by administrator request
Aug 17 11:53:47 atlas-mds1.ccs.ornl.gov kernel: [2987248.220947] Lustre: atlas1-OST02c7-osc-MDT0000: Connection to atlas1-OST02c7 (at 10.36.225.165@o2ib) was lost; in progress operations using this service will wait for recovery to complete
Aug 17 11:53:47 atlas-oss2h8.ccs.ornl.gov kernel: [7165636.459725] Lustre: atlas1-OST02c7: Client atlas1-MDT0000-mdtlov_UUID (at 10.36.226.72@o2ib) reconnecting
Aug 17 11:53:47 atlas-mds1.ccs.ornl.gov kernel: [2987248.265826] LustreError: 167-0: atlas1-OST02c7-osc-MDT0000: This client was evicted by atlas1-OST02c7; in progress operations using this service will fail.
Aug 17 11:53:47 atlas-mds1.ccs.ornl.gov kernel: [2987248.281892] Lustre: atlas1-OST02c7-osc-MDT0000: Connection restored to atlas1-OST02c7 (at 10.36.225.165@o2ib)
Aug 17 11:53:47 atlas-oss2h8.ccs.ornl.gov kernel: [7165636.501432] Lustre: atlas1-OST02c7: deleting orphan objects from 0x0:11321511 to 0x0:11321537
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
                            <comment id="124298" author="ezell" created="Mon, 17 Aug 2015 16:05:02 +0000"  >&lt;p&gt;Oleg-&lt;/p&gt;

&lt;p&gt;We have some OST object IDs of large files that should be deleted.  I just checked with debugfs, and the objects are still there.  If we unmount, mount as ldiskfs, remove the objects, unmount, and remount as lustre, will this cause a problem later (if the MDS delete request ever makes it through)?  We&apos;d also prefer a solution that doesn&apos;t require taking OSTs offline, but we&apos;ll do what we have to.  And we have an unknown number of other orphan objects out there.&lt;/p&gt;

&lt;p&gt;We also dumped the llog on the MDS, and the latest entry was from October 2013.&lt;/p&gt;</comment>
                            <comment id="124321" author="green" created="Mon, 17 Aug 2015 16:52:35 +0000"  >&lt;p&gt;Removing objects is not going to be a problem later.&lt;br/&gt;
In fact I imagine you can even mount the OST in parallel as ldiskfs and remove the objects in the object dir (just make sure not to delete anything that is actually referenced).&lt;br/&gt;
The kernel will moderate access, so Lustre and a parallel ldiskfs mount can coexist (just make sure to mount it on the same node).&lt;/p&gt;

&lt;p&gt;Though it&apos;s still strange that objects are not deleted by log replay.&lt;br/&gt;
An interesting experiment would be an MDS restart/failover, though I guess you would rather not try it.&lt;/p&gt;</comment>
                            <comment id="124336" author="adilger" created="Mon, 17 Aug 2015 18:35:19 +0000"  >&lt;p&gt;While this is related to &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-4825&quot; title=&quot;lfs migrate not freeing space on OST&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-4825&quot;&gt;&lt;del&gt;LU-4825&lt;/del&gt;&lt;/a&gt;, I think that there are two separate issues here:&lt;/p&gt;
&lt;ul class=&quot;alternate&quot; type=&quot;square&quot;&gt;
	&lt;li&gt;files are not deleted while the import is deactivated.  I think that issue should be handled by &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-4825&quot; title=&quot;lfs migrate not freeing space on OST&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-4825&quot;&gt;&lt;del&gt;LU-4825&lt;/del&gt;&lt;/a&gt;.&lt;/li&gt;
	&lt;li&gt;orphans are not cleaned up when the import is reactivated.  I think that issue should be handled by this ticket.&lt;/li&gt;
&lt;/ul&gt;


&lt;p&gt;I&apos;m not sure why the OSP doesn&apos;t restart orphan cleanup when it is reactivated, but currently this needs an MDS restart.  That issue should be fixed to allow orphan cleanup to resume once the import is reactivated.&lt;/p&gt;</comment>
                            <comment id="124347" author="ezell" created="Mon, 17 Aug 2015 20:03:34 +0000"  >&lt;p&gt;We chose the &quot;safer&quot; route and unmounted the OST before mounting as ldiskfs.  We removed the files and usage went back down.&lt;/p&gt;</comment>
                            <comment id="126061" author="tappro" created="Wed, 2 Sep 2015 17:19:01 +0000"  >&lt;p&gt;Andreas, what is the difference between the two cases in your comment? As I can see, &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-4825&quot; title=&quot;lfs migrate not freeing space on OST&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-4825&quot;&gt;&lt;del&gt;LU-4825&lt;/del&gt;&lt;/a&gt; is about orphans as well. If a file was deleted while the OST is deactivated, then its objects on the OST are orphans and are never deleted. This is what &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-4825&quot; title=&quot;lfs migrate not freeing space on OST&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-4825&quot;&gt;&lt;del&gt;LU-4825&lt;/del&gt;&lt;/a&gt; is going to solve, isn&apos;t it?&lt;/p&gt;</comment>
                            <comment id="126129" author="adilger" created="Thu, 3 Sep 2015 05:09:54 +0000"  >&lt;p&gt;Mike, there are two separate problems:&lt;br/&gt;
1) the current method for doing OST space balancing is to deactivate the OSP and then migrate files (or let users do this gradually), so the deactivated OST will not be used for new objects. However, deactivating the OSP also prevents the MDS from destroying the objects of unlinked files (since 2.4) so space is never released on the OST, which confuses users. This issue will be addressed by &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-4825&quot; title=&quot;lfs migrate not freeing space on OST&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-4825&quot;&gt;&lt;del&gt;LU-4825&lt;/del&gt;&lt;/a&gt; by adding a new method for disabling object allocation on an OST without fully deactivating the OSP, so that the MDS can still process object destroys. &lt;/p&gt;

&lt;p&gt;2) when the deactivated OSP is reactivated again, even after restarting the OST, it does not process the unlink llogs (and presumably setattr llogs, but that is harder to check) until the MDS is stopped and restarted. The MDS should begin processing the recovery llogs after the OSP has been reactivated. That is what this bug is for. &lt;/p&gt;

&lt;p&gt;Even though &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-4825&quot; title=&quot;lfs migrate not freeing space on OST&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-4825&quot;&gt;&lt;del&gt;LU-4825&lt;/del&gt;&lt;/a&gt; will reduce the times when an OSP needs to be deactivated (i.e. not for space balancing anymore), there are other times when this still needs to be done (e.g. OST offline for maintenance or similar), so recovery llog processing still needs to work. &lt;/p&gt;</comment>
                            <comment id="126515" author="tappro" created="Sun, 6 Sep 2015 06:55:29 +0000"  >&lt;p&gt;This seems to be an OSP problem: it doesn&apos;t restart llog processing from the point where the OST was de-activated. I am testing it locally now.&lt;/p&gt;</comment>
                            <comment id="126587" author="tappro" created="Mon, 7 Sep 2015 15:06:12 +0000"  >&lt;p&gt;Well, I tried to reproduce this locally: objects are not deleted while the OSP is deactivated, but they are deleted immediately when I re-activate the OSP. I used the &apos;lctl --device &amp;lt;osp device&amp;gt; deactivate&apos; command to deactivate an OSP, then destroyed a big file that had previously been created on that OST. &apos;df&apos; showed that space on the related OST was not freed; after I re-activated the OSP, &apos;df&apos; showed the space returned. Any thoughts on what else may affect this?&lt;/p&gt;</comment>
                            <comment id="127422" author="yujian" created="Tue, 15 Sep 2015 23:49:36 +0000"  >&lt;p&gt;Hi Mike,&lt;/p&gt;

&lt;p&gt;What Lustre version did you test on? Is it Lustre 2.5.x or master branch?&lt;/p&gt;</comment>
                            <comment id="127694" author="tappro" created="Thu, 17 Sep 2015 19:44:44 +0000"  >&lt;p&gt;Yu Jian, you are right, I had used the master branch instead of b2_5, my mistake. I am repeating the local tests with 2.5 now.&lt;/p&gt;</comment>
                            <comment id="127763" author="tappro" created="Fri, 18 Sep 2015 08:19:03 +0000"  >&lt;p&gt;I can reproduce this with 2.5. Since master is working fine, I am going to find the related changes in it and port them to 2.5.&lt;/p&gt;</comment>
                            <comment id="128283" author="gerrit" created="Wed, 23 Sep 2015 18:22:20 +0000"  >&lt;p&gt;Mike Pershin (mike.pershin@intel.com) uploaded a new patch: &lt;a href=&quot;http://review.whamcloud.com/16612&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/16612&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-7012&quot; title=&quot;files not being deleted from OST after being re-activated&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-7012&quot;&gt;&lt;del&gt;LU-7012&lt;/del&gt;&lt;/a&gt; osp: don&apos;t use OSP when import is deactivated&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: b2_5&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: b7daf6b218a34c18330ff6f5d8e023e48bee1e0b&lt;/p&gt;</comment>
                            <comment id="128285" author="tappro" created="Wed, 23 Sep 2015 18:25:07 +0000"  >&lt;p&gt;It is interesting that there are no obvious changes between master and b2_5 related to this behavior. Meanwhile I&apos;ve made a simple fix for this issue, and it works for me. Please check it.&lt;/p&gt;</comment>
                            <comment id="129760" author="simmonsja" created="Wed, 7 Oct 2015 21:55:23 +0000"  >&lt;p&gt;We finished testing your patch and it appears to have resolved our issues.&lt;/p&gt;</comment>
                            <comment id="129795" author="adilger" created="Thu, 8 Oct 2015 07:19:55 +0000"  >&lt;p&gt;Mike, is this needed for 2.7.x or only 2.5.x?  It would be great to link this to a specific patch/bug that fixed the problem for master if possible.&lt;/p&gt;</comment>
                            <comment id="131494" author="tappro" created="Mon, 26 Oct 2015 08:01:52 +0000"  >&lt;p&gt;Andreas, I see the same problem in 2.7 but it works somehow; I suppose the DNE changes fixed it indirectly by adding more synchronization mechanisms in OSP. Meanwhile, I&apos;d add this patch to 2.7 just as a direct fix for that particular problem.&lt;/p&gt;</comment>
                            <comment id="131498" author="gerrit" created="Mon, 26 Oct 2015 08:13:12 +0000"  >&lt;p&gt;Mike Pershin (mike.pershin@intel.com) uploaded a new patch: &lt;a href=&quot;http://review.whamcloud.com/16937&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/16937&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-7012&quot; title=&quot;files not being deleted from OST after being re-activated&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-7012&quot;&gt;&lt;del&gt;LU-7012&lt;/del&gt;&lt;/a&gt; osp: don&apos;t use OSP when import is deactivated&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 9a0a96518ab32908d381d22a4eccbfaa28cafd1d&lt;/p&gt;</comment>
                            <comment id="133244" author="gerrit" created="Wed, 11 Nov 2015 15:37:10 +0000"  >&lt;p&gt;Oleg Drokin (oleg.drokin@intel.com) merged in patch &lt;a href=&quot;http://review.whamcloud.com/16937/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/16937/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-7012&quot; title=&quot;files not being deleted from OST after being re-activated&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-7012&quot;&gt;&lt;del&gt;LU-7012&lt;/del&gt;&lt;/a&gt; osp: don&apos;t use OSP when import is deactivated&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: 82cbfd77f33bc33ea047407dfaecf4b04d44930a&lt;/p&gt;</comment>
                            <comment id="133272" author="jgmitter" created="Wed, 11 Nov 2015 18:07:49 +0000"  >&lt;p&gt;Landed for 2.8&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10010">
                    <name>Duplicate</name>
                                                                <inwardlinks description="is duplicated by">
                                        <issuelink>
            <issuekey id="22210">LU-4295</issuekey>
        </issuelink>
                            </inwardlinks>
                                    </issuelinktype>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                            <outwardlinks description="is related to ">
                                                        </outwardlinks>
                                                                <inwardlinks description="is related to">
                                        <issuelink>
            <issuekey id="23911">LU-4825</issuekey>
        </issuelink>
                            </inwardlinks>
                                    </issuelinktype>
                    </issuelinks>
                <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzxklz:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10021"><![CDATA[2]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>