<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 01:46:10 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-4825] lfs migrate not freeing space on OST</title>
                <link>https://jira.whamcloud.com/browse/LU-4825</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;We have some OSTs that we let get out of hand and have reached 100% capacity.  We have offlined them using &quot;lctl --device &amp;lt;device_num&amp;gt; deactivate&quot; along with others that are approaching capacity.  Despite having users delete multi-terabyte files and using the lfs_migrate script (with two patches from &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-4293&quot; title=&quot;lfs_migrate is failing with a volatile file Operation not permitted error&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-4293&quot;&gt;&lt;del&gt;LU-4293&lt;/del&gt;&lt;/a&gt; included to allow it to use &quot;lfs migrate&quot; as root instead of rsync) to migrate over 100 TB of data (with the full OSTs deactivated), we are not freeing up any space on the OSTs.&lt;/p&gt;

&lt;p&gt;Our initial guess was that after the layout swap of the &quot;lfs migrate&quot;, the old objects were not being deleted from disk because those OSTs were deactivated on the MDS.  Therefore, on one OST, I re-activated it on the MDS, unmounted it from the OSS, and ran an &quot;e2fsck -v -f -p /dev/...&quot;, which seemed to free about 300 GB on the OST.  I tried the same procedure on another OST and it did not change anything.  The e2fsck output indicates that nothing &quot;happened&quot; in either case.&lt;/p&gt;

&lt;p&gt;This is a live, production file system so after yanking two OSTs offline I thought I&apos;d stop testing theories before too many users called &lt;img class=&quot;emoticon&quot; src=&quot;https://jira.whamcloud.com/images/icons/emoticons/wink.png&quot; height=&quot;16&quot; width=&quot;16&quot; align=&quot;absmiddle&quot; alt=&quot;&quot; border=&quot;0&quot;/&gt;&lt;/p&gt;</description>
                <environment>SLES 11 SP2 clients, CentOS 6.4 servers (DDN packaged)</environment>
        <key id="23911">LU-4825</key>
            <summary>lfs migrate not freeing space on OST</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="3" iconUrl="https://jira.whamcloud.com/images/icons/priorities/major.svg">Major</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="laisiyao">Lai Siyao</assignee>
                                    <reporter username="hall0l">Shawn Hall</reporter>
                        <labels>
                            <label>ldiskfs</label>
                    </labels>
                <created>Wed, 26 Mar 2014 17:59:49 +0000</created>
                <updated>Thu, 17 Jan 2019 21:42:51 +0000</updated>
                            <resolved>Fri, 12 Aug 2016 12:47:36 +0000</resolved>
                                    <version>Lustre 2.4.1</version>
                                    <fixVersion>Lustre 2.9.0</fixVersion>
                    <fixVersion>Lustre 2.10.0</fixVersion>
                                        <due></due>
                            <votes>0</votes>
                                    <watches>18</watches>
                                                                            <comments>
                            <comment id="80345" author="green" created="Wed, 26 Mar 2014 23:19:09 +0000"  >&lt;p&gt;2.4+ versions absolutely need OSTs to be connected for objects to be freed. Unlike earlier versions, clients don&apos;t even try to free unlinked objects anymore.&lt;/p&gt;

&lt;p&gt;Just e2fsck should not free anything because it works on a local device only. So those 300G freed were probably related to earlier issues, or it&apos;s llog replay that took care of the objects for you once the MDS really reconnected to the OST and did a log replay (does 300G roughly match the expected freed space?)&lt;br/&gt;
Could it be that the other OST did not have anything to unlink?&lt;/p&gt;

&lt;p&gt;I expect that just reactivating the OSTs and letting them reconnect should free the space soon after successful reconnection, once the sync is complete.&lt;/p&gt;</comment>
                            <comment id="81846" author="hall0l" created="Thu, 17 Apr 2014 16:24:24 +0000"  >&lt;p&gt;Thanks Oleg.  After re-enabling the OSTs with lctl, they eventually did gain free space.  We didn&apos;t get fully in the clear though until we moved some data to another file system.&lt;/p&gt;

&lt;p&gt;My recommendation from this would be to add some information in the Lustre manual, probably under the &quot;Handling Full OSTs&quot; subchapter.  The procedure described says to deactivate, lfs_migrate, and re-activate.  Intuition would say that you&apos;d see space freeing up as you lfs_migrate, not after you re-enable.  You don&apos;t want to re-enable an OST if it&apos;s still full.  Having a note in there about exactly when space will be freed on OSTs would help clear up any confusion.&lt;/p&gt;</comment>
                            <comment id="95620" author="sean" created="Fri, 3 Oct 2014 12:44:43 +0000"  >&lt;p&gt;Dear All,&lt;/p&gt;

&lt;p&gt;I have one OST that I am trying to decommission and I have a couple of possibly related issues that have come up only now that I have created a new file-system at version 2.5.2.  In my case, no objects are freed even after re-enabling all OSTs and waiting several hours.&lt;/p&gt;

&lt;p&gt;1) Files are not being unlinked/deleted from OSTs after references to them are removed if the file is on an inactive OST&lt;br/&gt;
2) lfs find by uuid does not work for some OSTs - apparently only those that have been deactivated and reactivated on the MDT. Find by index works, and so I can work around this latter issue.&lt;/p&gt;

&lt;p&gt;I deactivate the OST on the MDT to prevent further object allocation&lt;/p&gt;
&lt;ol&gt;
	&lt;li&gt;lctl --device 17 deactivate&lt;/li&gt;
	&lt;li&gt;grep atlas25-OST0079_UUID /proc/fs/lustre/lov/atlas25-MDT0000-mdtlov/target_obd&lt;br/&gt;
121: atlas25-OST0079_UUID INACTIVE&lt;/li&gt;
&lt;/ol&gt;


&lt;p&gt;On a 2.1 client, I run lfs_migrate, which uses rsync and in-place creation. I don&apos;t see the space usage on the inactive OST decrease or change at all, even by 1 byte.  I also don&apos;t see the OSTs that are receiving the data get an increase in space usage.  If I stop the migration process on the client and re-activate the OST on the MDT node, the space usage of the destination OSTs increases but the files are still not deleted from the OST*.  When I realized that this was happening, I stopped the migration process and have not restarted it.  Therefore I should have an OST still with some files on it, but not full.  I actually have an OST with the same space usage as when I started.&lt;/p&gt;

&lt;p&gt;atlas25-OST0079_UUID 14593315264 13507785912   355458364  97% /lustre/atlas25&lt;span class=&quot;error&quot;&gt;&amp;#91;OST:121&amp;#93;&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;I wonder if I am missing something, and &quot;lctl --device N deactivate&quot; is not the way to prevent new stripes being created on the OST (the manual v2.X still recommends deactivate-before-migrate)?&lt;/p&gt;

&lt;p&gt;2) Here is the lfs find output for atlas25-OST0079 (index 121) on a Lustre 2.5 client, in a directory apparently containing files from the OST.  When this output was generated the OST was active on all Lustre servers and clients, and had been so for at least 30 minutes.&lt;/p&gt;

&lt;ol&gt;
	&lt;li&gt;lfs find  -ost atlas25-OST0079 .&lt;/li&gt;
&lt;/ol&gt;


&lt;ol&gt;
	&lt;li&gt;lfs find  -ost 121 .&lt;br/&gt;
./mcatnl_herpp_ggH.17.2.11.3.root.220&lt;br/&gt;
./mcatnl_herpp_ggH.17.2.11.3.root.203&lt;/li&gt;
&lt;/ol&gt;


&lt;ol&gt;
	&lt;li&gt;lfs getstripe ./mcatnl_herpp_ggH.17.2.11.3.root.220&lt;br/&gt;
./mcatnl_herpp_ggH.17.2.11.3.root.220&lt;br/&gt;
lmm_stripe_count:   1&lt;br/&gt;
lmm_stripe_size:    1048576&lt;br/&gt;
lmm_pattern:        1&lt;br/&gt;
lmm_layout_gen:     0&lt;br/&gt;
lmm_stripe_offset:  121&lt;br/&gt;
    obdidx         objid         objid         group&lt;br/&gt;
       121            542840          0x84878                 0&lt;/li&gt;
&lt;/ol&gt;



&lt;p&gt;Cheers,&lt;br/&gt;
Sean&lt;/p&gt;

&lt;p&gt;*This diagnostic may help with explaining the above problem.&lt;/p&gt;

&lt;p&gt;When deactivating an OST, I see the following in the MDS logs:&lt;/p&gt;

&lt;p&gt;I see that the MDT node has been evicted by the OST from the MDT logs.&lt;/p&gt;

&lt;p&gt;Oct  2 21:19:33 pplxlustre25mds4 kernel: LustreError: 167-0: atlas25-OST0079-osc-MDT0000: This client was evicted by atlas25-OST0079; in progress operations using this service will fail.&lt;/p&gt;

&lt;p&gt;With other messages such as:&lt;br/&gt;
Oct  3 06:04:57 pplxlustre25mds4 kernel: LustreError: 2246:0:(osp_precreate.c:464:osp_precreate_send()) atlas25-OST0047-osc-MDT0000: can&apos;t precreate: rc = -28&lt;br/&gt;
Oct  3 06:24:57 pplxlustre25mds4 kernel: LustreError: 2252:0:(osp_precreate.c:464:osp_precreate_send()) Skipped 239 previous similar messages&lt;/p&gt;

&lt;p&gt;(I&apos;m assuming here that the messages for the OST0079 that I focus on in this email are being skipped)&lt;/p&gt;</comment>
                            <comment id="120784" author="adilger" created="Thu, 9 Jul 2015 00:26:20 +0000"  >&lt;p&gt;One problem here is that the documented procedure for migrating objects off of an OST is to use &quot;lctl --device XXX deactivate&quot; on the MDS for the OST(s), but this disconnects the MDS from the OST entirely and disables RPC sending at a low level in the code (RPC layer) so it isn&apos;t necessarily practical to special-case that code to allow only &lt;tt&gt;OST_DESTROY&lt;/tt&gt; RPCs through from the MDS, since the MDS doesn&apos;t even know whether the OST is alive or dead at that point.&lt;/p&gt;

&lt;p&gt;It seems we need to have a different method to disable &lt;em&gt;only&lt;/em&gt; MDS object creation on the specified OST(s) (ideally one that would also work on older versions of Lustre like possibly &lt;tt&gt;osp.&amp;#42;.max_precreated=0&lt;/tt&gt; or &lt;tt&gt;osp.&amp;#42;.max_create_count=0&lt;/tt&gt; or similar), and then update the documentation to reflect this new command for newer versions of Lustre, and possibly backport this to older releases that are affected (2.5/2.7).  The other option, which is less preferable, is to change the meaning of &quot;active=0&quot; so that it just quiesces an OSP connection, but doesn&apos;t disconnect it completely, and then conditionally allows &lt;tt&gt;OST_DESTROY&lt;/tt&gt; RPCs through if the OST is connected but just marked active=0, but that may cause other problems.&lt;/p&gt;</comment>
                            <comment id="120980" author="kjstrosahl" created="Fri, 10 Jul 2015 14:41:39 +0000"  >&lt;p&gt;Hello,&lt;/p&gt;

&lt;p&gt;   I&apos;m observing a similar occurrence on some of my systems as well.  Earlier in the week three of my OSTs reached 97%, so I set them to read-only using lctl --device &amp;lt;device no&amp;gt; deactivate.  Yesterday I was able to add some new OSTs to the system, so I started an lfs_migrate on one of the full OSTs.  I was aware that the system wouldn&apos;t update the space usage on the OST while it remained read-only, so this morning I set it back to active using lctl --device &amp;lt;device number&amp;gt; activate.  The OSS reported that it was deleting orphan objects, but the space usage didn&apos;t go down... after an hour the OST had more data on it than when it was in read-only mode, so I deactivated it again.&lt;/p&gt;</comment>
                            <comment id="124337" author="adilger" created="Mon, 17 Aug 2015 18:40:11 +0000"  >&lt;p&gt;One option that works on a variety of different Lustre versions is to mark an OST as degraded:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;lctl set_param obdfilter.{OST_name}.degraded=1
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;This means that the MDS will skip the degraded OST(s) during most allocations, but will not skip them if someone requests a widely striped file and there are not enough non-degraded OSTs to fill the request.&lt;/p&gt;

&lt;p&gt;I think we need to allow setting &lt;tt&gt;osp.&amp;#42;.max_create_count=0&lt;/tt&gt; to inform the MDS to skip object precreation on the OST(s), instead of using the old &lt;tt&gt;lctl --device &amp;#42; deactivate&lt;/tt&gt; method, so that the MDS can still destroy OST objects for unlinked files.  While it appears &lt;em&gt;possible&lt;/em&gt; to set &lt;tt&gt;max_create_count=0&lt;/tt&gt; today, the MDS still tries to create objects on that OST if specified via &lt;tt&gt;lfs setstripe -i &amp;lt;idx&amp;gt;&lt;/tt&gt; and it waits for a timeout (100s) trying to create files there before moving to the next OST (at &amp;lt;idx + 1&amp;gt;).&lt;/p&gt;

&lt;p&gt;If max_create_count==0 then the LOD/OSP should skip this OSP immediately instead of waiting for a full timeout.&lt;/p&gt;</comment>
                            <comment id="124338" author="jgmitter" created="Mon, 17 Aug 2015 18:42:36 +0000"  >&lt;p&gt;Hi Lai,&lt;br/&gt;
Can you take a look at this?  Please see Andreas&apos; last comment.&lt;br/&gt;
Thanks.&lt;br/&gt;
Joe&lt;/p&gt;</comment>
                            <comment id="124679" author="gerrit" created="Thu, 20 Aug 2015 06:16:24 +0000"  >&lt;p&gt;Andreas Dilger (andreas.dilger@intel.com) uploaded a new patch: &lt;a href=&quot;http://review.whamcloud.com/16032&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/16032&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-4825&quot; title=&quot;lfs migrate not freeing space on OST&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-4825&quot;&gt;&lt;del&gt;LU-4825&lt;/del&gt;&lt;/a&gt; osp: rename variables to match /proc entry&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: d237665954c2cd3dde39f58b3171f3293676d5a3&lt;/p&gt;</comment>
                            <comment id="125353" author="gerrit" created="Thu, 27 Aug 2015 13:27:48 +0000"  >&lt;p&gt;Lai Siyao (lai.siyao@intel.com) uploaded a new patch: &lt;a href=&quot;http://review.whamcloud.com/16105&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/16105&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-4825&quot; title=&quot;lfs migrate not freeing space on OST&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-4825&quot;&gt;&lt;del&gt;LU-4825&lt;/del&gt;&lt;/a&gt; osp: check max_create_count before use OSP&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 07e2c5d77cd33d3c24c283714b71ea6b7426cac7&lt;/p&gt;</comment>
                            <comment id="126122" author="yujian" created="Thu, 3 Sep 2015 01:11:59 +0000"  >&lt;p&gt;I created &lt;a href=&quot;https://jira.whamcloud.com/browse/LUDOC-305&quot; title=&quot;&amp;quot;lctl deactivate/activate&amp;quot; does not work as expected in 19.1. Handling Full OSTs&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LUDOC-305&quot;&gt;&lt;del&gt;LUDOC-305&lt;/del&gt;&lt;/a&gt; to track the Lustre documentation change.&lt;/p&gt;</comment>
                            <comment id="133332" author="adilger" created="Thu, 12 Nov 2015 05:35:12 +0000"  >&lt;p&gt;As a temporary workaround on older Lustre versions before &lt;a href=&quot;http://review.whamcloud.com/16105&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/16105&lt;/a&gt; is landed, it is also possible to use:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;oss# lctl set_param fail_loc=0x229 fail_val=&amp;lt;ost_index&amp;gt;
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;on the OSS where the OST to be deactivated is located.  This will block all creates on the specified OST index.&lt;/p&gt;

&lt;p&gt;This only allows blocking a single OST from creates per OSS at one time (by simulating running out of inodes in the OST_STATFS RPC sent to the MDS), but it avoids the drawbacks of completely deactivating the OST on the MDS (namely that OST objects are not destroyed on deactivated OSTs).  This will generate some console spew (&quot;&lt;tt&gt;&amp;#42;&amp;#42;&amp;#42; cfs_fail_loc=0x229, val=&amp;lt;ost_index&amp;gt;&amp;#42;&amp;#42;&amp;#42;&lt;/tt&gt;&quot; every few seconds), and makes the &quot;lfs df -i&quot; output for this OST to be incorrect (it will report all inodes in use), but it is a workaround after all.&lt;/p&gt;</comment>
                            <comment id="138620" author="gerrit" created="Tue, 12 Jan 2016 02:44:56 +0000"  >&lt;p&gt;Oleg Drokin (oleg.drokin@intel.com) merged in patch &lt;a href=&quot;http://review.whamcloud.com/16032/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/16032/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-4825&quot; title=&quot;lfs migrate not freeing space on OST&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-4825&quot;&gt;&lt;del&gt;LU-4825&lt;/del&gt;&lt;/a&gt; osp: rename variables to match /proc entry&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: 05bf10903eba13db3d152f2725de56243123e7c5&lt;/p&gt;</comment>
                            <comment id="152147" author="gerrit" created="Fri, 13 May 2016 02:18:34 +0000"  >&lt;p&gt;Andreas Dilger (andreas.dilger@intel.com) uploaded a new patch: &lt;a href=&quot;http://review.whamcloud.com/20163&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/20163&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-4825&quot; title=&quot;lfs migrate not freeing space on OST&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-4825&quot;&gt;&lt;del&gt;LU-4825&lt;/del&gt;&lt;/a&gt; ofd: fix OBD_FAIL_OST_ENOINO/ENOSPC behaviour&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: c1e339c769e4b6fc26aefc8e7ffc7f8421dc047d&lt;/p&gt;</comment>
                            <comment id="159999" author="gerrit" created="Wed, 27 Jul 2016 03:01:24 +0000"  >&lt;p&gt;Oleg Drokin (oleg.drokin@intel.com) merged in patch &lt;a href=&quot;http://review.whamcloud.com/16105/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/16105/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-4825&quot; title=&quot;lfs migrate not freeing space on OST&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-4825&quot;&gt;&lt;del&gt;LU-4825&lt;/del&gt;&lt;/a&gt; osp: check max_create_count before use OSP&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: aa1a240338d18201f1047db62b31603e2cffcfe3&lt;/p&gt;</comment>
                            <comment id="161726" author="pjones" created="Fri, 12 Aug 2016 12:47:36 +0000"  >&lt;p&gt;The main fix has landed for 2.9. I suggest moving Andreas&apos;s cleanup patch to be tracked under a different JIRA ticket reference.&lt;/p&gt;</comment>
                            <comment id="186395" author="gerrit" created="Tue, 28 Feb 2017 04:09:21 +0000"  >&lt;p&gt;Andreas Dilger (andreas.dilger@intel.com) uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/25661&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/25661&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-4825&quot; title=&quot;lfs migrate not freeing space on OST&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-4825&quot;&gt;&lt;del&gt;LU-4825&lt;/del&gt;&lt;/a&gt; utils: improve lfs_migrate usage message&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 081995843296f51829a9cd2bf7ae4eb9442df679&lt;/p&gt;</comment>
                            <comment id="188187" author="gerrit" created="Tue, 14 Mar 2017 02:58:04 +0000"  >&lt;p&gt;Oleg Drokin (oleg.drokin@intel.com) merged in patch &lt;a href=&quot;https://review.whamcloud.com/20163/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/20163/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-4825&quot; title=&quot;lfs migrate not freeing space on OST&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-4825&quot;&gt;&lt;del&gt;LU-4825&lt;/del&gt;&lt;/a&gt; ofd: fix OBD_FAIL_OST_ENOINO/ENOSPC behaviour&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: 659c81ca4bfbbc536260ff15bb31da84d9366791&lt;/p&gt;</comment>
                            <comment id="199073" author="gerrit" created="Tue, 13 Jun 2017 16:54:31 +0000"  >&lt;p&gt;Oleg Drokin (oleg.drokin@intel.com) merged in patch &lt;a href=&quot;https://review.whamcloud.com/25661/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/25661/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-4825&quot; title=&quot;lfs migrate not freeing space on OST&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-4825&quot;&gt;&lt;del&gt;LU-4825&lt;/del&gt;&lt;/a&gt; utils: improve lfs_migrate usage message&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: ed8a63c9b83ce9f64df19a15ec362e1edb04a6f4&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10010">
                    <name>Duplicate</name>
                                                                <inwardlinks description="is duplicated by">
                                                        </inwardlinks>
                                    </issuelinktype>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                            <outwardlinks description="is related to ">
                                        <issuelink>
            <issuekey id="31500">LU-7012</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="27637">LU-5931</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="31891">LUDOC-305</issuekey>
        </issuelink>
                            </outwardlinks>
                                                                <inwardlinks description="is related to">
                                        <issuelink>
            <issuekey id="52636">LU-11115</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="22210">LU-4295</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="53898">LU-11605</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="39018">LU-8523</issuekey>
        </issuelink>
                            </inwardlinks>
                                    </issuelinktype>
                    </issuelinks>
                <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10490" key="com.atlassian.jira.plugin.system.customfieldtypes:datepicker">
                        <customfieldname>End date</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>Fri, 20 May 2016 17:59:49 +0000</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                            <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzwihj:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>13268</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                        <customfield id="customfield_10493" key="com.atlassian.jira.plugin.system.customfieldtypes:datepicker">
                        <customfieldname>Start date</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>Wed, 26 Mar 2014 17:59:49 +0000</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                    </customfields>
    </item>
</channel>
</rss>