<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 01:26:41 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-2613] opening and closing file can generate &apos;unreclaimable slab&apos; space</title>
                <link>https://jira.whamcloud.com/browse/LU-2613</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;We have a lot of nodes with a large amount of unreclaimable memory (over 4GB). Whatever we try to do (manually shrinking the cache, clearing lru locks, ...) the memory can&apos;t be recovered. The only way to get the memory back is to umount the lustre filesystem.&lt;/p&gt;

&lt;p&gt;After some troubleshooting, I was able to write a small reproducer where I just open(2) then close(2) files in O_RDWR (my reproducer opens thousands of files to emphasize the issue).&lt;/p&gt;

&lt;p&gt;Two programs are attached:&lt;/p&gt;
&lt;ul class=&quot;alternate&quot; type=&quot;square&quot;&gt;
	&lt;li&gt;gentree.c (cc -o gentree gentree.c -lpthread) to generate a tree of known files (no need to use readdir in reproducer.c)&lt;/li&gt;
	&lt;li&gt;reproducer.c (cc -o reproducer reproducer.c -lpthread) to reproduce the issue.&lt;br/&gt;
The macro BASE_DIR has to be adjusted according to the local cluster configuration (you should provide the name of a directory located on a Lustre filesystem).&lt;/li&gt;
&lt;/ul&gt;


&lt;p&gt;There is no link between the two phases, as rebooting the client between gentree &amp;amp; reproducer doesn&apos;t avoid the problem. Running gentree (which opens as many files as reproducer) doesn&apos;t show the issue.&lt;/p&gt;</description>
                <environment></environment>
        <key id="17161">LU-2613</key>
            <summary>opening and closing file can generate &apos;unreclaimable slab&apos; space</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="3" iconUrl="https://jira.whamcloud.com/images/icons/priorities/major.svg">Major</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="hongchao.zhang">Hongchao Zhang</assignee>
                                    <reporter username="louveta">Alexandre Louvet</reporter>
                        <labels>
                            <label>JL</label>
                            <label>mn4</label>
                    </labels>
                <created>Mon, 14 Jan 2013 08:38:11 +0000</created>
                <updated>Wed, 12 Feb 2014 23:10:12 +0000</updated>
                            <resolved>Wed, 12 Feb 2014 23:10:00 +0000</resolved>
                                    <version>Lustre 2.1.3</version>
                    <version>Lustre 2.1.4</version>
                                    <fixVersion>Lustre 2.6.0</fixVersion>
                    <fixVersion>Lustre 2.5.1</fixVersion>
                                        <due></due>
                            <votes>1</votes>
                                    <watches>31</watches>
                                                                            <comments>
                            <comment id="50440" author="pjones" created="Mon, 14 Jan 2013 16:45:09 +0000"  >&lt;p&gt;Niu&lt;/p&gt;

&lt;p&gt;Could you please look into this one?&lt;/p&gt;

&lt;p&gt;Thanks&lt;/p&gt;

&lt;p&gt;Peter&lt;/p&gt;</comment>
                            <comment id="50463" author="niu" created="Mon, 14 Jan 2013 22:19:14 +0000"  >&lt;p&gt;Hi, Alexandre&lt;/p&gt;

&lt;p&gt;I suppose you are referring to the Lustre client, right? I guess the memory could be used by page/inode/dentry caches; did you try whether &quot;echo 3 &amp;gt; /proc/sys/vm/drop_caches&quot; works? If it doesn&apos;t, could you provide /proc/meminfo &amp;amp; /proc/slabinfo before running the reproducer, after running it, and after running the drop_caches command above? Thanks in advance.&lt;/p&gt;</comment>
                            <comment id="50469" author="louveta" created="Tue, 15 Jan 2013 06:08:31 +0000"  >&lt;p&gt;&amp;gt; I suppose you are referring lustre client, right?&lt;br/&gt;
Yes&lt;/p&gt;

&lt;p&gt;&amp;gt; did you try if &quot;echo 3 &amp;gt; /proc/sys/vm/drop_caches&quot; works? &lt;br/&gt;
No, it doesn&apos;t work.&lt;br/&gt;
It is surprising, as the two programs I provided do the same number of open/close calls. Gentree works (i.e. it is able to complete, and drop_caches is able to free all memory), while reproducer doesn&apos;t (drop_caches does not free memory; if I tune reproducer to open enough files, the client will crash &lt;span class=&quot;error&quot;&gt;&amp;#91;the kernel&amp;#93;&lt;/span&gt; due to &apos;not enough memory&apos;). Gentree does a &apos;mkfile&apos; and one write per open, while reproducer only opens/closes files, without doing any IO.&lt;/p&gt;

&lt;p&gt;I modified the reproducer a little (just limiting the number of opened files) to keep my client from crashing.&lt;br/&gt;
Archive logs_01.tar.gz contains:&lt;/p&gt;
&lt;ul class=&quot;alternate&quot; type=&quot;square&quot;&gt;
	&lt;li&gt;before_mount.txt : free, cat /proc/meminfo, cat /proc/slabinfo just after client boot&lt;/li&gt;
	&lt;li&gt;after_mount.txt : same, just after issuing the &apos;mount -t lustre ...&apos; cmd&lt;/li&gt;
	&lt;li&gt;after_reproducer.txt : same, just after ./reproducer was run on 65536 files&lt;/li&gt;
	&lt;li&gt;after_drop_caches.txt : same, just after echo 3 &amp;gt; /proc/sys/vm/drop_caches was issued.&lt;/li&gt;
&lt;/ul&gt;
</comment>
                            <comment id="50471" author="niu" created="Tue, 15 Jan 2013 08:54:49 +0000"  >&lt;p&gt;I see, thanks Alexandre.&lt;/p&gt;

&lt;p&gt;The memory was used by the open replay requests. There wasn&apos;t any update operation in the reproducer, so the &apos;last committed transno&apos; is never updated, and the open replay requests queued on the client will never be dropped (since they all have a transno greater than the last committed one). You can try making an update operation (touching a file, for example) after running the reproducer, then do &quot;echo 3 &amp;gt; /proc/sys/vm/drop_caches&quot;, and you should see the memory being reclaimed.&lt;/p&gt;

&lt;p&gt;I think such a use case (opening a huge number of existing files and closing them, without any update operations) should be very unlikely in the real world, so it seems it&apos;s not a serious problem.&lt;/p&gt;</comment>
                            <comment id="50780" author="louveta" created="Fri, 18 Jan 2013 04:55:57 +0000"  >&lt;p&gt;Niu&lt;/p&gt;

&lt;p&gt;You were right, writing to the filesystem release the memory. Thanks.&lt;/p&gt;

&lt;p&gt;But I disagree with your assertion that this is very unlikely in the real world. This issue was found because of a user job running on thousands of nodes and using MPI-IO. The application opens hundreds of thousands of small output files (&amp;lt; 2 MB). Since the application runs on thousands of nodes, and since MPI-IO is an easy way to handle gather operations on a large number of nodes, the developer used it. Now, MPI-IO gathers data and concentrates it on a few nodes to make bigger IOs, and in this particular case everything was concentrated on only 2 MPI ranks (so no more than 2 physical nodes), the remaining nodes just playing the open(O_RDWR)/close() twist. The rest of the story is that if the next job running on such a node doesn&apos;t write to the filesystem, it will not be able to use all the memory because of ENOMEM (in reality, the batch scheduler checks the amount of usable memory before running a new job, detects such a situation, and removes the node from production).&lt;br/&gt;
This issue has been responsible for removing thousands of nodes from production in the past month, so it was a serious problem.&lt;/p&gt;</comment>
                            <comment id="50792" author="niu" created="Fri, 18 Jan 2013 08:59:20 +0000"  >&lt;p&gt;I&apos;ll try to compose a patch to fix this, thank you.&lt;/p&gt;</comment>
                            <comment id="50966" author="niu" created="Tue, 22 Jan 2013 06:06:59 +0000"  >&lt;p&gt;patch for b2_1: &lt;a href=&quot;http://review.whamcloud.com/5143&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/5143&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="51989" author="tappro" created="Thu, 7 Feb 2013 15:15:04 +0000"  >&lt;p&gt;Niu, am I right that this problem exists in master too?&lt;/p&gt;</comment>
                            <comment id="52019" author="niu" created="Thu, 7 Feb 2013 21:36:50 +0000"  >&lt;p&gt;Hi, Mike. Yes, it&apos;s same for master.&lt;/p&gt;</comment>
                            <comment id="52097" author="tappro" created="Sun, 10 Feb 2013 12:04:25 +0000"  >&lt;p&gt;Niu, I&apos;d try to fix that on the client side in master. E.g. we know the problem is that all open requests have a transno, even those that don&apos;t update anything on disk. The reason for that is just to keep all opens in the replay queue, so that all opens can be issued again in case of recovery. Obviously that is not needed after close, and mdc_close() drops the rq_replay flag to 0 so the request can be deleted from the replay queue - but it isn&apos;t, because the last_committed value is very old due to the current bug. Meanwhile we know that the transno is fake for such opens and that check is not needed - the request can be deleted from the replay queue. I&apos;d try to drop the request transno to 0 along with dropping the rq_replay flag in mdc_close() (we need to do this only for non-create opens; can we check the disposition flags?). ptlrpc_free_committed() currently checks that with an LBUG(), so we need to rework it too, noting that this can be a closed open request, and allow transno 0 with a goto to the free_req label. At least we would be solving a client problem on the client side, avoiding any work on the server.&lt;/p&gt;</comment>
                            <comment id="58997" author="jaylan" created="Tue, 21 May 2013 18:02:50 +0000"  >&lt;p&gt;We at NASA Ames may have hit this problem yesterday. Some 48.2G of memory were stuck in  unreclaimable slab:&lt;/p&gt;

&lt;p&gt;  OBJS      ACTIVE  USE OBJ SIZE  SLABS  OBJ/SLAB CACHE SIZE NAME&lt;/p&gt;

&lt;p&gt;109590860  1252200  1%  0.19K    5479543     20  21918172K   cl_page_kmem&lt;br/&gt;
 33631020   618522  1%  0.26K    2242068     15   8968272K   osc_page_kmem&lt;br/&gt;
 78005904   620016  0%  0.08K    1625123     48   6500492K   vvp_page_kmem&lt;br/&gt;
114452800    20267  0%  0.05K    1486400     77   5945600K   lov_page_kmem&lt;br/&gt;
134702628   620280  0%  0.04K    1464159     92   5856636K   lovsub_page_kmem&lt;/p&gt;

&lt;p&gt;It was a lustre client node. Both servers and clients run lustre 2.1.5.&lt;br/&gt;
Umounting the lustre filesystems did not release the unreclaimable slab memory.&lt;/p&gt;
</comment>
                            <comment id="59026" author="niu" created="Wed, 22 May 2013 01:38:09 +0000"  >&lt;p&gt;Jay, your case isn&apos;t the same problem (umount didn&apos;t help), could you open another ticket and provide more detailed information? Thanks.&lt;/p&gt;</comment>
                            <comment id="59027" author="jaylan" created="Wed, 22 May 2013 01:59:54 +0000"  >&lt;p&gt;The system I reported may have a different problem; however, we do have&lt;br/&gt;
a system-wide occurrence of the issue in this ticket. We need to scan thousands of&lt;br/&gt;
clients for nodes with big slab build-ups and reboot them regularly.&lt;/p&gt;

&lt;p&gt;Today we tried the write technique mentioned in this ticket and it indeed&lt;br/&gt;
released the stuck slab memory of those compute nodes.&lt;/p&gt;

&lt;p&gt;That system that had 48.2G of memory stuck in unreclaimable slab was a&lt;br/&gt;
bridge node, so the problem could be different. We will monitor that system.&lt;br/&gt;
If it starts to build up again and does not respond to the write technique,&lt;br/&gt;
I will open a new ticket.&lt;/p&gt;</comment>
                            <comment id="59344" author="shadow" created="Mon, 27 May 2013 04:07:48 +0000"  >&lt;p&gt;I think the fix is too complex; sending the global transno as part of every request (instead of per-export) is enough.&lt;br/&gt;
(Look at &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-3399&quot; title=&quot;MDT don&amp;#39;t update client last commited correctly so produce OOM on client&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-3399&quot;&gt;LU-3399&lt;/a&gt;.)&lt;/p&gt;</comment>
                            <comment id="59354" author="niu" created="Mon, 27 May 2013 06:13:37 +0000"  >&lt;p&gt;Alexey, I don&apos;t see why your proposal can fix the problem.&lt;/p&gt;

&lt;p&gt;For a short-term release, we can fix it by triggering a disk update for every 1000 fake transactions (patchset 2 of &lt;a href=&quot;http://review.whamcloud.com/5143&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/5143&lt;/a&gt;). For a longer-term release, I think we have to change the protocol a little: the server replies not only with the last committed transno (including fake transactions), but also with the last committed on-disk transno (real updating transactions only); then the client has enough knowledge to release open requests according to the last committed transno.&lt;/p&gt;</comment>
                            <comment id="59357" author="shadow" created="Mon, 27 May 2013 07:22:55 +0000"  >&lt;p&gt;Niu,&lt;/p&gt;

&lt;p&gt;Currently the MDT sends transno updates just from the export info, so it&apos;s the last committed transaction for THAT client.&lt;br/&gt;
On the client side we have the check request_transno &amp;lt; server_last_committed, and we commit/free all requests matching it.&lt;/p&gt;

&lt;p&gt;The root cause of &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-2613&quot; title=&quot;opening and closing file can generate &amp;#39;unreclaimable slab&amp;#39; space&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-2613&quot;&gt;&lt;del&gt;LU-2613&lt;/del&gt;&lt;/a&gt; and &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-3399&quot; title=&quot;MDT don&amp;#39;t update client last commited correctly so produce OOM on client&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-3399&quot;&gt;LU-3399&lt;/a&gt; is that the client doesn&apos;t see any server_last_committed updates, as that client doesn&apos;t have any transactions of its own. My patch solves the issue by using a global last_committed variable as server_last_committed for every client; in that case it makes no difference which client generates a transaction - that one or any other in the cluster.&lt;/p&gt;

&lt;p&gt;Your patch looks fine in the situation where we have a single client doing read-only open/close with no other transactions - but it still doesn&apos;t update the client&apos;s export transaction and doesn&apos;t flush the replay queue on the client side.&lt;/p&gt;</comment>
                            <comment id="59358" author="niu" created="Mon, 27 May 2013 08:12:04 +0000"  >&lt;p&gt;Alexey, if a client doesn&apos;t have its own transactions, then it won&apos;t have any cached replay requests; what&apos;s the point of returning the global last_committed to that client?&lt;/p&gt;</comment>
                            <comment id="59362" author="shadow" created="Mon, 27 May 2013 09:07:30 +0000"  >&lt;p&gt;No &lt;img class=&quot;emoticon&quot; src=&quot;https://jira.whamcloud.com/images/icons/emoticons/smile.png&quot; height=&quot;16&quot; width=&quot;16&quot; align=&quot;absmiddle&quot; alt=&quot;&quot; border=&quot;0&quot;/&gt; - the client does have the open and close in the replay cache, in order to do a correct open replay, but neither request creates its own &lt;em&gt;real&lt;/em&gt; transaction with a callback on the MDT, so the obd_transno_commit_cb() function is not called.&lt;/p&gt;</comment>
                            <comment id="59369" author="tappro" created="Mon, 27 May 2013 12:29:31 +0000"  >&lt;p&gt;Niu is right: if a client has no disk updates of its own, it shouldn&apos;t care about anyone else&apos;s. It is simply wrong to rely on &apos;some other server update&apos; to decide that we can drop a closed open request on the client. There can be no updates at all, and we will have the same problem. Each client should solve its own problem with only its own information; that way we will have a full solution no matter how many clients we have and what they are doing on the server.&lt;/p&gt;

&lt;p&gt;For this particular problem the issue is on the client side, because it holds closed open requests in the queue even though that is not needed, and we know it is not needed, yet we continue to do more tricks on the server to make the client happy instead of solving it on the client itself.&lt;/p&gt;</comment>
                            <comment id="59370" author="shadow" created="Mon, 27 May 2013 12:52:25 +0000"  >&lt;p&gt;tappro, why? Before VBR landed we always sent the last server update to the client. With VBR we started using a per-export committed transno - which is wrong.&lt;br/&gt;
The last committed information is cluster-wide info, not private per client, as each client doesn&apos;t have its own commit queue but shares the single server commit queue.&lt;/p&gt;

&lt;p&gt;As for open requests - they may be needed in the open + unlink case to open an orphan, as we don&apos;t know whether it&apos;s committed or not.&lt;/p&gt;</comment>
                            <comment id="59371" author="bzzz" created="Mon, 27 May 2013 12:55:33 +0000"  >&lt;p&gt;This is not wrong at all. last_committed can be tracked per export, as that is what the client is interested in - its own requests.&lt;br/&gt;
With VBR and late recovery, per-export tracking becomes a requirement.&lt;/p&gt;

&lt;p&gt;and this improves SMP scalability, by definition.&lt;/p&gt;</comment>
                            <comment id="59374" author="tappro" created="Mon, 27 May 2013 13:14:57 +0000"  >&lt;p&gt;Per-export committed is right; it opens the way to many recovery features and is more flexible. It is not a bug as you think, but was done deliberately by design, and I see no single reason to go back to obd_last_committed, especially just to fix a single issue. Each client actually has exactly its OWN commit queue; it knows nothing about any other client&apos;s requests. The open-unlink case was solved along with VBR years ago; we have no problems with that.&lt;/p&gt;

&lt;p&gt;If we have an issue related to this, then it is better to think about why it exists and how to fix it. It exists not because last_committed is not cluster-wide: there can be no commits from others, or there can be a single client, and the problem appears again. It exists because the client is not able to drop closed open requests if there is no more server activity (or if it is not seeing that activity). IMHO this is the fundamental issue; the client shouldn&apos;t rely on server activity. I&apos;d start from that point.&lt;/p&gt;</comment>
                            <comment id="59376" author="vitaly_fertman" created="Mon, 27 May 2013 13:25:42 +0000"  >&lt;p&gt;I do not think this is a client-side issue. If the server gives a transno out, it is supposed to finally say this transno is committed; that&apos;s all. The client should not have to decide whether to drop an RPC from the replay queue or not.&lt;/p&gt;

&lt;p&gt;As for how to inform the client this transno is committed - that is up to the server. Informing a client about another client&apos;s committed transno is excused just because it is the MDT&apos;s logic that creates such fake transactions. However, there are other options: a fake transaction commit, or a direct exp_last_committed update - which happens from time to time...&lt;/p&gt;</comment>
                            <comment id="59379" author="shadow" created="Mon, 27 May 2013 13:33:20 +0000"  >&lt;p&gt;Tappro, I&apos;m not saying it&apos;s wrong to track on a per-client basis; I&apos;m just saying the client should know about any committed transno, as it has no implementation cost and was done in previous versions.&lt;/p&gt;

&lt;p&gt;I agree it leaves a window open when there is a single client - but that&apos;s a very, very rare case, and we may use a cluster-wide transno as a short/middle-term solution, to fix 2.1 and 2.4.&lt;/p&gt;

&lt;p&gt;But I will be happy if you are able to deliver a better fix in 2.5 without losing compatibility with older clients.&lt;/p&gt;</comment>
                            <comment id="59491" author="adilger" created="Tue, 28 May 2013 23:17:07 +0000"  >&lt;p&gt;I&apos;ve reduced the severity of this bug from Blocker to Major.  Clearly, if the problem has existed since before 1.8.0 it cannot be affecting a huge number of users, though it definitely appears to be a problem with using RobinHood (or other tool) to open and close a large number of files.  Also, there is a simple workaround - periodically touch any file on the client to force a transaction so that the last_committed value is updated, and the saved RPCs will be flushed.&lt;/p&gt;

&lt;p&gt;Presumably, Robin Hood cannot be modified to get the information it needs without the open/close (e.g. stat() instead of fstat() and similar)?  That would be even less work on the part of the client.  In the short term, to work around the client bug, it could also be made to modify some temporary file in the filesystem (e.g. mknod(), then periodic utimes() to generate transactions) until this problem is resolved.&lt;/p&gt;

&lt;p&gt;The correct long-term solution to this problem is as Mike suggested early on in this bug - to decouple the open-replay handling from the RPC replay mechanism, since it isn&apos;t really the RPC layer&apos;s job to re-open files that are no longer involved in the transactions.  The RPC replay is of course correct for open(O_CREAT) that did not yet commit and/or close, but it doesn&apos;t make sense to keep non-creating open/close RPCs around after that time.  We had previously discussed moving the file open-replay handling up to the llite layer in the context of &quot;Simplified Interoperability&quot; (&lt;a href=&quot;http://wiki.lustre.org/images/0/0b/Simplified_InteropRecovery.pdf&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://wiki.lustre.org/images/0/0b/Simplified_InteropRecovery.pdf&lt;/a&gt; and &lt;a href=&quot;https://projectlava.xyratex.com/show_bug.cgi?id=18496&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://projectlava.xyratex.com/show_bug.cgi?id=18496&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;In the short term (2.1 and 2.4) there are a few compromise solutions possible.  If the server is doing other transactions, then it might make sense to return a last_committed value &amp;lt; the most recent &quot;fake&quot; transaction number.  One possibility is to return last_committed = min(last_committed, inode_version - 1) in the RPC replies, so that the clients don&apos;t get any state that they do not need, but at least depend on the recovery state of any file they have accessed.&lt;/p&gt;

&lt;p&gt;Alternately, at one time we discussed returning duplicate (old) transaction numbers for opens that do not create the file.  This allows the files to be replayed in the proper order after recovery, but they do not change the state on disk.&lt;/p&gt;</comment>
                            <comment id="59496" author="adilger" created="Wed, 29 May 2013 01:25:48 +0000"  >&lt;p&gt;Thinking about this further, I suspect the following will work correctly for new and old clients.&lt;/p&gt;

&lt;p&gt;The actual RPC open transno could be inode_version, but will have a later XID from the client, so it will sort correctly in the client replay list.  The close RPC transno can be max(inode_version of inode, max inode_version accessed by this export - 1).  The client RPC replay ordering should be in &lt;span class=&quot;error&quot;&gt;&amp;#91;transno, XID&amp;#93;&lt;/span&gt; order, so if this client also created the file, then the later open/close RPCs would still be replayed after it.  When the close gets a reply, this will also naturally ensure that both the open/close RPC transnos are &amp;lt; last_committed, if the object create itself was committed. &lt;/p&gt;

&lt;p&gt;For both open and close, it should be enough to return:&lt;/p&gt;
&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;        last_committed value = min(max(inode_version accessed by &lt;span class=&quot;code-keyword&quot;&gt;this&lt;/span&gt; export),
                                   last_committed)
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;so that the client never &quot;sees&quot; a last_committed value newer than the inode_version (i.e. last change transaction) of any object it has accessed, and the actual last_committed value in case the inode was changed since the most recent transaction.&lt;/p&gt;

&lt;p&gt;I don&apos;t think there is any problem with the client or server having duplicate transaction numbers, per comment in &lt;tt&gt;ptlrpc_retain_replayable_request()&lt;/tt&gt;:&lt;/p&gt;

&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;                /* We may have duplicate transnos &lt;span class=&quot;code-keyword&quot;&gt;if&lt;/span&gt; we create and then
                 * open a file, or &lt;span class=&quot;code-keyword&quot;&gt;for&lt;/span&gt; closes retained &lt;span class=&quot;code-keyword&quot;&gt;if&lt;/span&gt; to match creating
                 * opens, so use req-&amp;gt;rq_xid as a secondary key.
                 * (See bugs 684, 685, and 428.)
                 * XXX no longer needed, but all opens need transnos!
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;This avoids the ever-growing transaction number for open+close that do not actually affect the on-disk state (except for keeping objects open for open-but-unlinked files).  So long as the client has the open after the object was first created, and the close before the current commit (which can&apos;t affect any open-unlinked state if some other client still has not sent a close RPC).&lt;/p&gt;</comment>
                            <comment id="59500" author="niu" created="Wed, 29 May 2013 03:05:20 +0000"  >&lt;p&gt;Andreas, I think your proposal is based on the assumption that there are other clients doing disk updates in the cluster, right? What if there isn&apos;t any client doing disk updates, or the disk update frequency isn&apos;t high enough? I think that&apos;s the situation described in this ticket.&lt;/p&gt;</comment>
                            <comment id="59504" author="shadow" created="Wed, 29 May 2013 04:48:41 +0000"  >&lt;p&gt;Niu,&lt;/p&gt;

&lt;p&gt;The window is open only in the case where no updates exist in the cluster. Any other case solves the problem, but that is unlikely in a cluster where any client is active. If the affected client goes idle after the open/close series, it will get last_committed updates via ping.&lt;/p&gt;</comment>
                            <comment id="59613" author="adilger" created="Thu, 30 May 2013 09:42:48 +0000"  >&lt;p&gt;Niu, I think my proposal should work even if there are no changes being made to the filesystem at all.  The open transno would be &lt;tt&gt;inode_version&lt;/tt&gt; and the close transno would be &lt;tt&gt;max(inode_version, export last_committed)&lt;/tt&gt;, so it doesn&apos;t expose any newer transno to the client.  This also ensures that when the files are closed, the open/close RPCs are &amp;lt; last_committed, and will be dropped from the replay list on the client.&lt;/p&gt;</comment>
                            <comment id="59622" author="shadow" created="Thu, 30 May 2013 11:59:30 +0000"  >&lt;p&gt;Andreas,&lt;/p&gt;

&lt;p&gt;That will not work, because if a client mounts Lustre with noatime and executes just an open(O_RDONLY)+close, it has zero in peer_last_committed, because no updates will be sent to it.&lt;/p&gt;</comment>
                            <comment id="59654" author="tappro" created="Thu, 30 May 2013 16:56:16 +0000"  >&lt;p&gt;Andreas, that may work, yes. Besides, it would be really good to get rid of the mdt_empty_transno() code.&lt;/p&gt;

&lt;p&gt;Alexey, in your example the imp_peer_committed_transno will still be updated, because last_committed is to be returned from the server as:&lt;/p&gt;

&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;last_committed value = min(max(inode_version accessed by &lt;span class=&quot;code-keyword&quot;&gt;this&lt;/span&gt; export),
                                   last_committed)
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;&apos;accessed&apos; means not just updates but open/close too.&lt;/p&gt;</comment>
                            <comment id="59717" author="niu" created="Fri, 31 May 2013 02:23:13 +0000"  >&lt;p&gt;I see, but I&apos;m not sure whether duplicated transnos will bring us trouble. I&apos;ll try to cook up a patch to find out later. Thanks, Andreas.&lt;/p&gt;</comment>
                            <comment id="59723" author="adilger" created="Fri, 31 May 2013 09:00:10 +0000"  >&lt;p&gt;See my earlier comment in this bug.  We used to have duplicate transnos for open/close at one time in the past, or at least it was something we strongly thought about, and the client code is ready for this to happen.&lt;/p&gt;</comment>
                            <comment id="59853" author="niu" created="Mon, 3 Jun 2013 07:35:58 +0000"  >&lt;p&gt;I&apos;m not sure if there is any good way to handle duplicated transnos in replay:&lt;/p&gt;
&lt;ul class=&quot;alternate&quot; type=&quot;square&quot;&gt;
	&lt;li&gt;Client A open_creates a file with transno T1, and the child version is set to T1 (T1 is not committed);&lt;/li&gt;
	&lt;li&gt;Client B open_rdonly&apos;s the same file, so the returned transno should be max(versions, exp_last_committed) = T1;&lt;/li&gt;
&lt;/ul&gt;


&lt;p&gt;Then how would the server replay the two open requests with the same transno during recovery?&lt;/p&gt;</comment>
                            <comment id="59890" author="adilger" created="Mon, 3 Jun 2013 14:57:57 +0000"  >&lt;p&gt;Three options would be:&lt;/p&gt;
&lt;ul class=&quot;alternate&quot; type=&quot;square&quot;&gt;
	&lt;li&gt;client B gets back same data to do replay as client A, so it doesn&apos;t matter which one is replayed. However, this might get more complex to handle, but has the benefit of increasing robustness of the replay.&lt;/li&gt;
	&lt;li&gt;client B gets back a fake transno if openA is uncommitted, like it does today. This means there is still a commit to be done, do last_committed will be increased to either cover this fake transno, or at worst the number of fake transno is limited to the few seconds until commit.&lt;/li&gt;
	&lt;li&gt;commit-on-share of the open file. I don&apos;t like this much, because it slows down the common code path.&lt;/li&gt;
&lt;/ul&gt;
</comment>
                            <comment id="60143" author="niu" created="Fri, 7 Jun 2013 04:04:48 +0000"  >&lt;p&gt;Thank you, Andreas.&lt;/p&gt;

&lt;p&gt;It looks like the first option requires changing the client (or even the protocol). I think if we have to change the client &amp;amp; protocol, we&apos;d better choose some cleaner &amp;amp; easier way: reply back two last_committed transnos (last_committed &amp;amp; on-disk last_committed, similar to my patchset 1), or totally change the open replay mechanism as you mentioned.&lt;/p&gt;

&lt;p&gt;I tried the second way today and discovered another problem: for performance reasons, the client only scans the replay list to free committed/closed requests when last_committed is bumped, which means that without client code changes the open requests won&apos;t be freed promptly, even if their transnos are smaller than last_committed. Even if we change the client code, we&apos;d need to work out some better way than scanning the replay list on every reply.&lt;/p&gt;

&lt;p&gt;So, as a short term solution, I think we&apos;d better go back to using my patchset 2 (update the disk on the server periodically), or ask users to do this themselves in userspace? What do you think?&lt;/p&gt;</comment>
                            <comment id="60216" author="adilger" created="Sat, 8 Jun 2013 16:27:36 +0000"  >&lt;p&gt;Niu, for the second option I thought the client would not even try to put the RPC into the replay list if transno &amp;lt;= last_committed?  Is it not checked in the close RPC callback to drop both requests from this list if both transnos are &amp;lt;= last_committed?  If a fake transno is given to the client to avoid a duplicate open+create transno, it should be inode_version+1, and upon close, if the real last_committed is &amp;gt;= inode_version+1, then it should send inode_version+1 back to the client for last_committed. &lt;/p&gt;

&lt;p&gt;In any case, I don&apos;t mind also fixing this on the client as long as it doesn&apos;t break the interoperability. The client should be smart enough to cancel matching pairs of open+close if they are &amp;lt;= last_committed. That won&apos;t solve the problem by itself, but in conjunction with the server changes not to invent fake transnos &amp;gt; last_committed it should work. &lt;/p&gt;

&lt;p&gt;I think this is a combination of two bugs - server giving out transnos that are &amp;gt; last_committed and never committing them, and also that the client does not drop RPCs if the last_committed doesn&apos;t change. This should be possible to handle without scanning the list each time I think. &lt;/p&gt;</comment>
                            <comment id="60222" author="niu" created="Sun, 9 Jun 2013 08:20:04 +0000"  >&lt;blockquote&gt;
&lt;p&gt;Niu, for second option I thought the client would not even try to put the RPC into the replay list if transno &amp;lt;= last_committed?&lt;/p&gt;&lt;/blockquote&gt;

&lt;p&gt;That&apos;s true for the close request but not for the open request; the open request has to be retained regardless of its transno.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Is it not checked in the close RPC callback to drop both requests from this list if both transnos are &amp;lt;= last_committed?&lt;/p&gt;&lt;/blockquote&gt;

&lt;p&gt;No, we didn&apos;t do that, but it&apos;s easy to fix if we change the client code.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt; Iff a fake transno is given to the client to avoid a duplicate open+create transno, it should be inode_version+1, and upon close if the real last_committed is &amp;gt;= inode_version+1 then it should send inode_version+1 back to the client for last_committed. &lt;/p&gt;&lt;/blockquote&gt;

&lt;p&gt;One thing that can&apos;t be handled well in this manner: if a client never does an update operation, it&apos;ll always have a zero exp_last_committed, which means no replied transno can be larger than 0... We may fix it by generating an update operation upon each client connection (connect becomes an update operation? that looks quite dirty to me), or should we just ignore this rare case (no updates at all from a client) for the moment? Any suggestions? Thanks.&lt;/p&gt;</comment>
                            <comment id="60225" author="adilger" created="Sun, 9 Jun 2013 18:37:02 +0000"  >&lt;p&gt;Per previous comments, the client exp_last_committed should be min(last_committed, max(inode_version of all inodes accessed)). That ensures that if the client is accessing committed inodes, they can be discarded on close (assuming it isn&apos;t the very last file created).&lt;/p&gt;</comment>
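As a hedged illustration of the rule in the preceding comment (a hypothetical helper with illustrative names, not the actual Lustre server code):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical sketch of the rule above: the last_committed value
 * reported to a client is the smaller of the global last_committed and
 * the highest inode version among the inodes that client has accessed.
 * Illustrative only; not the real Lustre implementation. */
static uint64_t client_last_committed(uint64_t last_committed,
                                      uint64_t max_inode_version_seen)
{
        return max_inode_version_seen < last_committed ?
               max_inode_version_seen : last_committed;
}
```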
                            <comment id="60427" author="shadow" created="Wed, 12 Jun 2013 10:54:13 +0000"  >&lt;p&gt;It looks like I found one more issue in the same area.&lt;br/&gt;
The MDT saves locks in difficult replies via the mdt_object_unlock function, and the OST via the oti_to_request function, but ptlrpc will free a difficult reply only when its transno is &amp;lt;= the export&apos;s exp_last_committed.&lt;/p&gt;
&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;void ptlrpc_commit_replies(struct obd_export *exp)
{
        struct ptlrpc_reply_state *rs, *nxt;
        DECLARE_RS_BATCH(batch);
        ENTRY;

        rs_batch_init(&amp;amp;batch);
        /* Find any replies that have been committed and get their service
         * to attend to complete them. */
        
        &lt;span class=&quot;code-comment&quot;&gt;/* CAVEAT EMPTOR: spinlock ordering!!! */&lt;/span&gt;
        spin_lock(&amp;amp;exp-&amp;gt;exp_uncommitted_replies_lock);
        cfs_list_for_each_entry_safe(rs, nxt, &amp;amp;exp-&amp;gt;exp_uncommitted_replies,
                                     rs_obd_list) {
                LASSERT (rs-&amp;gt;rs_difficult);
                &lt;span class=&quot;code-comment&quot;&gt;/* VBR: per-export last_committed */&lt;/span&gt;
                LASSERT(rs-&amp;gt;rs_export);
                &lt;span class=&quot;code-keyword&quot;&gt;if&lt;/span&gt; (rs-&amp;gt;rs_transno &amp;lt;= exp-&amp;gt;exp_last_committed) {
                        cfs_list_del_init(&amp;amp;rs-&amp;gt;rs_obd_list);
                        rs_batch_add(&amp;amp;batch, rs);
                }
        }
        spin_unlock(&amp;amp;exp-&amp;gt;exp_uncommitted_replies_lock);
        rs_batch_fini(&amp;amp;batch);
        EXIT;
}
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;But exp_last_committed is zero until a modification request arrives, while we have already sent locks via getattr requests without any transaction being opened.&lt;/p&gt;</comment>
                            <comment id="60507" author="niu" created="Thu, 13 Jun 2013 02:08:33 +0000"  >&lt;blockquote&gt;
&lt;p&gt;Per previous comments, the client exp_last_committed should be min(last_committed, max(inode_version of all inodes accessed). That ensures that I&apos;d the client is accessing committed inodes that they can be discarded on close (assuming it isn&apos;t the very last file created).&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;Andreas, my point is that the last_committed returned to the client can&apos;t be larger than the exp_last_committed in the on-disk slot; otherwise, the client would think the server lost data on the next recovery (see recovery-small.sh test_54). So, if a client doesn&apos;t update the disk, its exp_last_committed on disk will always be zero, and we can&apos;t return a last_committed larger than 0 in that case. But I think we may force a disk update in this case.&lt;/p&gt;

&lt;p&gt;Another problem is that the client code assumes transnos are unique in some places, such as ptlrpc_replay_next():&lt;/p&gt;
&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;        cfs_list_for_each_safe(tmp, pos, &amp;amp;imp-&amp;gt;imp_replay_list) {
                req = cfs_list_entry(tmp, struct ptlrpc_request,
                                     rq_replay_list);

                /* If need to resend the last sent transno (because a
                   reconnect has occurred), then stop on the matching
                   req and send it again. If, however, the last sent
                   transno has been committed then we &lt;span class=&quot;code-keyword&quot;&gt;continue&lt;/span&gt; replay
                   from the next request. */
                &lt;span class=&quot;code-keyword&quot;&gt;if&lt;/span&gt; (req-&amp;gt;rq_transno &amp;gt; last_transno) {
                        &lt;span class=&quot;code-keyword&quot;&gt;if&lt;/span&gt; (imp-&amp;gt;imp_resend_replay)
                                lustre_msg_add_flags(req-&amp;gt;rq_reqmsg,
                                                     MSG_RESENT);
                        &lt;span class=&quot;code-keyword&quot;&gt;break&lt;/span&gt;;
                }
                req = NULL;
        }
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;If there are duplicate transno requests, some requests will be skipped during replay, so old clients will have trouble with this approach.&lt;/p&gt;</comment>
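The skip problem described above can be sketched by reducing the quoted selection loop to an array walk (illustrative code, not the actual ptlrpc_replay_next()):

```c
#include <assert.h>
#include <stddef.h>

/* Reduced model of the replay selection in ptlrpc_replay_next() quoted
 * above: pick the first request whose transno is strictly greater than
 * last_transno. With duplicate transnos, replaying the first of a pair
 * advances last_transno past its twin, so the twin is never selected. */
static int next_replay(const unsigned long *transnos, size_t n,
                       unsigned long last_transno)
{
        for (size_t i = 0; i < n; i++)
                if (transnos[i] > last_transno)
                        return (int)i;  /* index of next request to replay */
        return -1;                      /* nothing left to replay */
}
```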
                            <comment id="60530" author="adilger" created="Thu, 13 Jun 2013 10:09:41 +0000"  >&lt;p&gt;Does this affect the case if the duplicate transnos are &amp;lt; last_committed (i.e. the open/close RPCs)?  I thought those ones are just replayed on the server?  Ah, you are referencing the client code that replays the RPCs...&lt;/p&gt;

&lt;p&gt;If it would make the implementation easier, it would be possible to negotiate between the client and server with MSG_* flags in the ptlrpc_body whether they can handle duplicate transno or not.  If not, then the server would have to do occasional commits to bump the transno, otherwise this could be avoided for newer clients &amp;amp; servers.&lt;/p&gt;</comment>
                            <comment id="60549" author="niu" created="Thu, 13 Jun 2013 13:45:11 +0000"  >&lt;p&gt;Andreas, given that we&apos;re going to add a MSG flag, I&apos;m wondering whether it would be better to pack another transno in the reply. It looks like pb_last_seen in ptlrpc_body is not used; I think we can use it to carry the last committed on-disk transno (while pb_last_committed stores the last committed on-disk/fake transno), so that we can resolve the issue without introducing duplicate transnos. I&apos;ll update the patch in this way if it&apos;s fine with you. Thanks.&lt;/p&gt;</comment>
                            <comment id="60746" author="niu" created="Mon, 17 Jun 2013 07:22:58 +0000"  >&lt;p&gt;patch for master: &lt;a href=&quot;http://review.whamcloud.com/6665&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/6665&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="61561" author="tappro" created="Mon, 1 Jul 2013 09:46:06 +0000"  >&lt;p&gt;Niu, Andreas, this is becoming more and more complex as far as I can see. The problem we are trying to solve is that the client keeps closed open requests in the replay queue, right? Meanwhile the client itself wants them to be dropped from that queue, see mdc_close():&lt;/p&gt;
&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;                /* We no longer want to preserve &lt;span class=&quot;code-keyword&quot;&gt;this&lt;/span&gt; open &lt;span class=&quot;code-keyword&quot;&gt;for&lt;/span&gt; replay even
                 * though the open was committed. b=3632, b=3633 */
		spin_lock(&amp;amp;mod-&amp;gt;mod_open_req-&amp;gt;rq_lock);
		mod-&amp;gt;mod_open_req-&amp;gt;rq_replay = 0;
		spin_unlock(&amp;amp;mod-&amp;gt;mod_open_req-&amp;gt;rq_lock);
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;So the problem is that the request is not dropped when the client asks for that. That is because of the last_committed check, which is the only mechanism for dropping requests, and that means to me we just need to add another one, to drop a request from the replay queue regardless of its transno. E.g. an rq_closed_open flag could be added and checked to drop the request. That would be much simpler. Did I miss something, or are there other cases we are trying to solve?&lt;/p&gt;</comment>
                            <comment id="61568" author="niu" created="Mon, 1 Jul 2013 11:43:27 +0000"  >&lt;p&gt;Mike, right. I mentioned this (the open request can only be freed when the last_committed on the client is bumped) in a previous comment, and it&apos;s the same for the close request. Adding another flag for checking open/close requests might work; what about my solution in &lt;a href=&quot;http://review.whamcloud.com/6665&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/6665&lt;/a&gt;? Could you review it? Thanks.&lt;/p&gt;</comment>
                            <comment id="61590" author="adilger" created="Mon, 1 Jul 2013 16:15:28 +0000"  >&lt;p&gt;If the request is not doing a create, couldn&apos;t both the open and close RPC be dropped at this time, regardless of the transno?&lt;/p&gt;</comment>
                            <comment id="61616" author="niu" created="Tue, 2 Jul 2013 02:52:15 +0000"  >&lt;p&gt;Andreas, the existing code can only drop an open/close request when the last_committed seen by the client is bumped, no matter whether it&apos;s an open_create or not.&lt;/p&gt;</comment>
                            <comment id="62257" author="tappro" created="Mon, 15 Jul 2013 05:30:34 +0000"  >&lt;p&gt;Niu, exactly, and I propose to make that &apos;existing code&apos; able to drop a closed open regardless of its transno, because it doesn&apos;t make sense after close. The current solution is still based on hacking the server side in various ways. In fact this can be solved on the client side, just by letting closed OPENs be dropped regardless of their transno.&lt;/p&gt;</comment>
                            <comment id="62258" author="bzzz" created="Mon, 15 Jul 2013 07:04:42 +0000"  >&lt;p&gt;I guess it still makes some sense if the open created a file?&lt;/p&gt;</comment>
                            <comment id="62261" author="niu" created="Mon, 15 Jul 2013 07:49:51 +0000"  >&lt;blockquote&gt;
&lt;p&gt;Niu, exactly, and I propose to make that &apos;existing code&apos; able to drop a closed open regardless of its transno, because it doesn&apos;t make sense after close. The current solution is still based on hacking the server side in various ways. In fact this can be solved on the client side, just by letting closed OPENs be dropped regardless of their transno.&lt;/p&gt;&lt;/blockquote&gt;

&lt;p&gt;Mike, I think there is no way to achieve this without server-side changes. I can think of two ways so far:&lt;/p&gt;

&lt;p&gt;1. The server treats open/close as committed transactions and returns to the client both the last committed transno &amp;amp; the last real transno (on-disk transno); the client drops committed open &amp;amp; close requests immediately after close. That&apos;s what I did in my patch.&lt;/p&gt;

&lt;p&gt;2. The server assigns no transno for open/close, and the client open-replay mechanism must be adapted to this change (as Siyao mentioned in the review comment: track the open handle in the fs layer and rebuild the request when replaying the open; some other changes to the open, close, and open lock code could be required).&lt;/p&gt;

&lt;p&gt;The second solution looks cleaner to me, but it requires more code changes, and it&apos;ll be a little tricky to handle open-create &amp;amp; open differently on the client side.&lt;/p&gt;</comment>
                            <comment id="62262" author="bzzz" created="Mon, 15 Jul 2013 07:56:12 +0000"  >&lt;p&gt;Yes, I also remember we discussed a way to implement the openhandle as an LDLM lock and let LDLM re-enqueue locks at recovery.&lt;/p&gt;</comment>
                            <comment id="62420" author="tappro" created="Tue, 16 Jul 2013 20:02:50 +0000"  >&lt;p&gt;Niu, in fact we don&apos;t need to wait for a commit in the case of a closed open (no create), and exactly that case causes this bug with unreclaimable space. And I don&apos;t see why server help is needed here - the client knows there was a close and knows this is a non-create open - that is enough to make the decision to drop the request from the replay queue. I am not sure though how easy it is to distinguish the non-create case from OPEN-CREATE; at first sight we need to check the disposition flag for the DISP_OPEN_CREATE bit. So a possible solution could be:&lt;br/&gt;
1) after the open reply, check the disposition for the DISP_OPEN_CREATE bit and save that information in md_open_data, OR just take the disposition from the already saved mod_open_req during mdc_close()&lt;br/&gt;
2) in mdc_close(), where mod-&amp;gt;mod_open_req-&amp;gt;rq_replay is already set to 0, also set mod_open_req-&amp;gt;rq_commit_nowait or any other new flag for non-create opens&lt;br/&gt;
3) in ptlrpc_free_committed(), check that rq_commit_nowait flag and free such a request immediately, no matter what transno it has.&lt;/p&gt;

&lt;p&gt;Will that work? Am I missing something?&lt;/p&gt;</comment>
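The three steps above amount to one extra flag check at free time; a minimal sketch (the struct and flag names follow the comment, not the real ptlrpc_request definition):

```c
#include <assert.h>
#include <stdbool.h>

/* Minimal model of the proposed scheme: rq_commit_nowait is set at
 * close time for non-create opens, and ptlrpc_free_committed() would
 * then free such requests regardless of their transno. Illustrative
 * only; not the actual Lustre data structures. */
struct fake_request {
        unsigned long rq_transno;
        bool rq_replay;         /* still needed for replay? */
        bool rq_commit_nowait;  /* closed non-create open */
};

/* would ptlrpc_free_committed() free this request? */
static bool can_free(const struct fake_request *req,
                     unsigned long last_committed)
{
        if (req->rq_commit_nowait)
                return true;    /* drop regardless of transno */
        return !req->rq_replay && req->rq_transno <= last_committed;
}
```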
                            <comment id="62457" author="niu" created="Wed, 17 Jul 2013 06:54:57 +0000"  >&lt;p&gt;Mike, your solution looks fine to me, I&apos;ll update the patch in this way soon. Thanks.&lt;/p&gt;</comment>
                            <comment id="62472" author="tappro" created="Wed, 17 Jul 2013 10:32:31 +0000"  >&lt;p&gt;Niu, I am not so sure it will be easy to implement; this is just a possible way to go, but if it works, that would be good.&lt;/p&gt;</comment>
                            <comment id="62528" author="niu" created="Thu, 18 Jul 2013 05:59:46 +0000"  >&lt;p&gt;Mike, I realized that not only does an open which creates an object (with DISP_OPEN_CREATE) need to be replayed; an open which creates stripe data needs to be replayed as well (see mdt_create_data()), and I don&apos;t see how to identify such opens on the client. Any good ideas?&lt;/p&gt;</comment>
                            <comment id="62530" author="niu" created="Thu, 18 Jul 2013 07:40:06 +0000"  >&lt;p&gt;It seems the server code has to be changed. Anyway, I introduced a new DISP bit (DISP_OPEN_STRIPE) to identify an open which creates stripes; in this manner, the server/protocol changes are smaller than in the former patch (the server returning the on-disk transno). Mike, could you take a look at the patch? Thanks.&lt;/p&gt;</comment>
                            <comment id="68190" author="pjones" created="Wed, 2 Oct 2013 19:52:49 +0000"  >&lt;p&gt;Pushing to 2.5.1 because it seems that the patch needs more work&lt;/p&gt;</comment>
                            <comment id="71565" author="bogl" created="Thu, 14 Nov 2013 20:24:38 +0000"  >&lt;p&gt;backport to b2_4&lt;br/&gt;
&lt;a href=&quot;http://review.whamcloud.com/8277&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/8277&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="76913" author="pjones" created="Wed, 12 Feb 2014 23:10:00 +0000"  >&lt;p&gt;Landed for 2.5.1 and 2.6&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10010">
                    <name>Duplicate</name>
                                                                <inwardlinks description="is duplicated by">
                                        <issuelink>
            <issuekey id="19158">LU-3399</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="19114">LU-3381</issuekey>
        </issuelink>
                            </inwardlinks>
                                    </issuelinktype>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                                                <inwardlinks description="is related to">
                                        <issuelink>
            <issuekey id="22164">LU-4272</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="22157">LU-4270</issuekey>
        </issuelink>
                            </inwardlinks>
                                    </issuelinktype>
                    </issuelinks>
                <attachments>
                            <attachment id="12162" name="gentree.c" size="2622" author="louveta" created="Mon, 14 Jan 2013 08:38:39 +0000"/>
                            <attachment id="12164" name="logs_01.tar.gz" size="7408" author="louveta" created="Tue, 15 Jan 2013 06:08:42 +0000"/>
                            <attachment id="12163" name="reproducer.c" size="2193" author="louveta" created="Mon, 14 Jan 2013 08:38:53 +0000"/>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzvf93:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>6116</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>