<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 01:32:29 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92">
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-3277] LU-2139 may cause a performance regression</title>
                <link>https://jira.whamcloud.com/browse/LU-3277</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;There is a performance regression on the current master (c864582b5d4541c7830d628457e55cd859aee005) when we have multiple IOR threads per client. As far as I can tell from testing, &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-2576&quot; title=&quot;Hangs in osc_enter_cache due to dirty pages not being flushed&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-2576&quot;&gt;&lt;del&gt;LU-2576&lt;/del&gt;&lt;/a&gt; might be the cause of this regression. Here are quick test results for each commit.&lt;/p&gt;

&lt;p&gt;client : commit ac37e7b4d101761bbff401ed12fcf671d6b68f9c&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;# mpirun -np 8 /lustre/IOR -w -b 8g -t 1m -e -C -F -vv -o /lustre/ior.out/file
IOR-2.10.3: MPI Coordinated Test of Parallel I/O

Run began: Sun May  5 12:24:09 2013
Command line used: /lustre/IOR -w -b 8g -t 1m -e -C -F -vv -o /lustre/ior.out/file
Machine: Linux s08 2.6.32-279.19.1.el6_lustre.x86_64 #1 SMP Sat Feb 9 21:55:32 PST 2013 x86_64
Using synchronized MPI timer
Start time skew across all tasks: 0.00 sec
Path: /lustre/ior.out
FS: 683.5 TiB   Used FS: 0.0%   Inodes: 5.0 Mi   Used Inodes: 0.0%
Participating tasks: 8
Using reorderTasks &apos;-C&apos; (expecting block, not cyclic, task assignment)
task 0 on s08
task 1 on s08
task 2 on s08
task 3 on s08
task 4 on s08
task 5 on s08
task 6 on s08
task 7 on s08

Summary:
	api                = POSIX
	test filename      = /lustre/ior.out/file
	access             = file-per-process
	pattern            = segmented (1 segment)
	ordering in a file = sequential offsets
	ordering inter file=constant task offsets = 1
	clients            = 8 (8 per node)
	repetitions        = 1
	xfersize           = 1 MiB
	blocksize          = 8 GiB
	aggregate filesize = 64 GiB

Using Time Stamp 1367781849 (0x5186b1d9) for Data Signature
Commencing write performance test.
Sun May  5 12:24:09 2013

access    bw(MiB/s)  block(KiB) xfer(KiB)  open(s)    wr/rd(s)   close(s) total(s)  iter
------    ---------  ---------- ---------  --------   --------   --------  --------   ----
write     3228.38    8388608    1024.00    0.001871   20.30      1.34       20.30      0    XXCEL
Operation  Max (MiB)  Min (MiB)  Mean (MiB)   Std Dev  Max (OPs)  Min (OPs)  Mean (OPs)   Std Dev  Mean (s)  Op grep #Tasks tPN reps  fPP reord reordoff reordrand seed segcnt blksiz xsize aggsize

---------  ---------  ---------  ----------   -------  ---------  ---------  ----------   -------  --------
write        3228.38    3228.38     3228.38      0.00    3228.38    3228.38     3228.38      0.00  20.29996   8 8 1 1 1 1 0 0 1 8589934592 1048576 68719476736 -1 POSIX EXCEL

Max Write: 3228.38 MiB/sec (3385.20 MB/sec)

Run finished: Sun May  5 12:24:30 2013
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;client : commit 5661651b2cc6414686e7da581589c2ea0e1f1969&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;# mpirun -np 8 /lustre/IOR -w -b 8g -t 1m -e -C -F -vv -o /lustre/ior.out/file
IOR-2.10.3: MPI Coordinated Test of Parallel I/O

Run began: Sun May  5 12:16:35 2013
Command line used: /lustre/IOR -w -b 8g -t 1m -e -C -F -vv -o /lustre/ior.out/file
Machine: Linux s08 2.6.32-279.19.1.el6_lustre.x86_64 #1 SMP Sat Feb 9 21:55:32 PST 2013 x86_64
Using synchronized MPI timer
Start time skew across all tasks: 0.00 sec
Path: /lustre/ior.out
FS: 683.5 TiB   Used FS: 0.0%   Inodes: 5.0 Mi   Used Inodes: 0.0%
Participating tasks: 8
Using reorderTasks &apos;-C&apos; (expecting block, not cyclic, task assignment)
task 0 on s08
task 1 on s08
task 2 on s08
task 3 on s08
task 4 on s08
task 5 on s08
task 6 on s08
task 7 on s08

Summary:
	api                = POSIX
	test filename      = /lustre/ior.out/file
	access             = file-per-process
	pattern            = segmented (1 segment)
	ordering in a file = sequential offsets
	ordering inter file=constant task offsets = 1
	clients            = 8 (8 per node)
	repetitions        = 1
	xfersize           = 1 MiB
	blocksize          = 8 GiB
	aggregate filesize = 64 GiB

Using Time Stamp 1367781395 (0x5186b013) for Data Signature
Commencing write performance test.
Sun May  5 12:16:35 2013

access    bw(MiB/s)  block(KiB) xfer(KiB)  open(s)    wr/rd(s)   close(s) total(s)  iter
------    ---------  ---------- ---------  --------   --------   --------  --------   ----
write     550.28     8388608    1024.00    0.001730   119.10     2.76       119.10     0    XXCEL
Operation  Max (MiB)  Min (MiB)  Mean (MiB)   Std Dev  Max (OPs)  Min (OPs)  Mean (OPs)   Std Dev  Mean (s)  Op grep #Tasks tPN reps  fPP reord reordoff reordrand seed segcnt blksiz xsize aggsize

---------  ---------  ---------  ----------   -------  ---------  ---------  ----------   -------  --------
write         550.28     550.28      550.28      0.00     550.28     550.28      550.28      0.00 119.09623   8 8 1 1 1 1 0 0 1 8589934592 1048576 68719476736 -1 POSIX EXCEL

Max Write: 550.28 MiB/sec (577.01 MB/sec)

Run finished: Sun May  5 12:18:34 2013
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;In both tests, the servers are running the current master (c864582b5d4541c7830d628457e55cd859aee005).&lt;/p&gt;</description>
                <environment>RHEL6.3 and current master</environment>
        <key id="18712">LU-3277</key>
            <summary>LU-2139 may cause a performance regression</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="3" iconUrl="https://jira.whamcloud.com/images/icons/priorities/major.svg">Major</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="prakash">Prakash Surya</assignee>
                                    <reporter username="ihara">Shuichi Ihara</reporter>
                        <labels>
                            <label>HB</label>
                    </labels>
                <created>Sun, 5 May 2013 19:30:38 +0000</created>
                <updated>Wed, 11 Sep 2013 22:22:53 +0000</updated>
                            <resolved>Wed, 11 Sep 2013 22:22:53 +0000</resolved>
                                    <version>Lustre 2.4.0</version>
                                    <fixVersion>Lustre 2.6.0</fixVersion>
                                        <due></due>
                            <votes>0</votes>
                                    <watches>9</watches>
                                                                            <comments>
                            <comment id="57693" author="ihara" created="Sun, 5 May 2013 19:47:53 +0000"  >&lt;p&gt;I reverted commit 5661651b2cc6414686e7da581589c2ea0e1f1969 from the current master (c864582b5d4541c7830d628457e55cd859aee005), ran IOR again, and saw results similar to those with commit ac37e7b4d101761bbff401ed12fcf671d6b68f9c.&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;IOR-2.10.3: MPI Coordinated Test of Parallel I/O

Run began: Sun May  5 12:41:11 2013
Command line used: /lustre/IOR -w -b 8g -t 1m -e -C -F -vv -o /lustre/ior.out/file
Machine: Linux s08 2.6.32-279.19.1.el6_lustre.x86_64 #1 SMP Sat Feb 9 21:55:32 PST 2013 x86_64
Using synchronized MPI timer
Start time skew across all tasks: 0.00 sec
Path: /lustre/ior.out
FS: 683.5 TiB   Used FS: 0.0%   Inodes: 5.0 Mi   Used Inodes: 0.0%
Participating tasks: 8
Using reorderTasks &apos;-C&apos; (expecting block, not cyclic, task assignment)
task 0 on s08
task 1 on s08
task 2 on s08
task 3 on s08
task 4 on s08
task 5 on s08
task 6 on s08
task 7 on s08

Summary:
	api                = POSIX
	test filename      = /lustre/ior.out/file
	access             = file-per-process
	pattern            = segmented (1 segment)
	ordering in a file = sequential offsets
	ordering inter file=constant task offsets = 1
	clients            = 8 (8 per node)
	repetitions        = 1
	xfersize           = 1 MiB
	blocksize          = 8 GiB
	aggregate filesize = 64 GiB

Using Time Stamp 1367782871 (0x5186b5d7) for Data Signature
Commencing write performance test.
Sun May  5 12:41:11 2013

access    bw(MiB/s)  block(KiB) xfer(KiB)  open(s)    wr/rd(s)   close(s) total(s)  iter
------    ---------  ---------- ---------  --------   --------   --------  --------   ----
write     3143.64    8388608    1024.00    0.001566   20.85      1.97       20.85      0    XXCEL
Operation  Max (MiB)  Min (MiB)  Mean (MiB)   Std Dev  Max (OPs)  Min (OPs)  Mean (OPs)   Std Dev  Mean (s)  Op grep #Tasks tPN reps  fPP reord reordoff reordrand seed segcnt blksiz xsize aggsize

---------  ---------  ---------  ----------   -------  ---------  ---------  ----------   -------  --------
write        3143.64    3143.64     3143.64      0.00    3143.64    3143.64     3143.64      0.00  20.84715   8 8 1 1 1 1 0 0 1 8589934592 1048576 68719476736 -1 POSIX EXCEL

Max Write: 3143.64 MiB/sec (3296.35 MB/sec)

Run finished: Sun May  5 12:41:32 2013
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
                            <comment id="57700" author="niu" created="Mon, 6 May 2013 02:03:47 +0000"  >&lt;p&gt;Commit 5661651b2cc6414686e7da581589c2ea0e1f1969 is from &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-2139&quot; title=&quot;Tracking unstable pages&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-2139&quot;&gt;&lt;del&gt;LU-2139&lt;/del&gt;&lt;/a&gt;, which added unstable page accounting to Lustre; the following code changes could cause many more synchronous writes.&lt;/p&gt;

&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;@@ -1463,7 +1465,8 @@ &lt;span class=&quot;code-keyword&quot;&gt;static&lt;/span&gt; &lt;span class=&quot;code-object&quot;&gt;int&lt;/span&gt; osc_enter_cache_try(struct client_obd *cli,
                &lt;span class=&quot;code-keyword&quot;&gt;return&lt;/span&gt; 0;

        &lt;span class=&quot;code-keyword&quot;&gt;if&lt;/span&gt; (cli-&amp;gt;cl_dirty + CFS_PAGE_SIZE &amp;lt;= cli-&amp;gt;cl_dirty_max &amp;amp;&amp;amp;
-           cfs_atomic_read(&amp;amp;obd_dirty_pages) + 1 &amp;lt;= obd_max_dirty_pages) {
+           cfs_atomic_read(&amp;amp;obd_unstable_pages) + 1 +
+           cfs_atomic_read(&amp;amp;obd_dirty_pages) &amp;lt;= obd_max_dirty_pages) {
                osc_consume_write_grant(cli, &amp;amp;oap-&amp;gt;oap_brw_page);
                &lt;span class=&quot;code-keyword&quot;&gt;if&lt;/span&gt; (&lt;span class=&quot;code-keyword&quot;&gt;transient&lt;/span&gt;) {
                        cli-&amp;gt;cl_dirty_transit += CFS_PAGE_SIZE;
@@ -1576,9 +1579,9 @@ void osc_wake_cache_waiters(struct client_obd *cli)

                ocw-&amp;gt;ocw_rc = -EDQUOT;
                &lt;span class=&quot;code-comment&quot;&gt;/* we can&apos;t dirty more */&lt;/span&gt;
-               &lt;span class=&quot;code-keyword&quot;&gt;if&lt;/span&gt; ((cli-&amp;gt;cl_dirty + CFS_PAGE_SIZE &amp;gt; cli-&amp;gt;cl_dirty_max) ||
-                   (cfs_atomic_read(&amp;amp;obd_dirty_pages) + 1 &amp;gt;
-                    obd_max_dirty_pages)) {
+               &lt;span class=&quot;code-keyword&quot;&gt;if&lt;/span&gt; (cli-&amp;gt;cl_dirty + CFS_PAGE_SIZE &amp;gt; cli-&amp;gt;cl_dirty_max ||
+                   cfs_atomic_read(&amp;amp;obd_unstable_pages) + 1 +
+                   cfs_atomic_read(&amp;amp;obd_dirty_pages) &amp;gt; obd_max_dirty_pages) {
                        CDEBUG(D_CACHE, &lt;span class=&quot;code-quote&quot;&gt;&quot;no dirty room: dirty: %ld &quot;&lt;/span&gt;
                               &lt;span class=&quot;code-quote&quot;&gt;&quot;osc max %ld, sys max %d\n&quot;&lt;/span&gt;, cli-&amp;gt;cl_dirty,
                               cli-&amp;gt;cl_dirty_max, obd_max_dirty_pages);
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
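
&lt;p&gt;In effect, the admission check in osc_enter_cache_try (and the matching check in osc_wake_cache_waiters) changes from counting only dirty pages to counting dirty plus unstable pages against the same limit, roughly:&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;before: obd_dirty_pages + 1                      &amp;lt;= obd_max_dirty_pages
after:  obd_unstable_pages + obd_dirty_pages + 1 &amp;lt;= obd_max_dirty_pages
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;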

&lt;p&gt;I think the title of this ticket should be changed to &quot;&lt;a href=&quot;https://jira.whamcloud.com/browse/LU-2139&quot; title=&quot;Tracking unstable pages&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-2139&quot;&gt;&lt;del&gt;LU-2139&lt;/del&gt;&lt;/a&gt; may cause a performance regression&quot;.&lt;/p&gt;</comment>
                            <comment id="57752" author="jlevi" created="Mon, 6 May 2013 18:48:54 +0000"  >&lt;p&gt;Oleg will revert the patch in &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-2139&quot; title=&quot;Tracking unstable pages&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-2139&quot;&gt;&lt;del&gt;LU-2139&lt;/del&gt;&lt;/a&gt; that caused this regression.&lt;/p&gt;</comment>
                            <comment id="57781" author="pjones" created="Mon, 6 May 2013 22:39:17 +0000"  >&lt;p&gt;The patch has now been reverted. Ihara, can you confirm whether the problem has now disappeared?&lt;/p&gt;</comment>
                            <comment id="57785" author="prakash" created="Mon, 6 May 2013 23:26:47 +0000"  >&lt;p&gt;How much memory is there on the client? And what is the commit frequency on the servers? I would expect the performance to be worse with the patch Niu points to &lt;em&gt;if&lt;/em&gt; the client has sufficient bandwidth (combined with async server commits) to fill its available dirty page space with unstable pages. So this performance regression &lt;em&gt;might&lt;/em&gt; be &quot;working as intended&quot;, but that all depends on how many unstable pages the client is exhausting during the test.&lt;/p&gt;
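
&lt;p&gt;One way to check is to sample the unstable page counters while the test runs, e.g. with a simple loop along these lines (the 5-second interval is arbitrary):&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;# sample unstable page counters every 5 seconds while IOR runs
while :; do
    date
    lctl get_param &apos;llite.*.unstable_stats&apos;
    grep NFS_Unstable /proc/meminfo
    sleep 5
done
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;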

&lt;p&gt;Can you sample &lt;tt&gt;lctl get_param &apos;llite.*.unstable_stats&apos;&lt;/tt&gt; and &lt;tt&gt;grep NFS_Unstable /proc/meminfo&lt;/tt&gt; a few times while the test is running to give me an idea of how many unstable pages are being consumed? If this value is anywhere near the limit set in &lt;tt&gt;/proc/sys/lustre/max_dirty_mb&lt;/tt&gt;, then maybe we need to rethink the default value of &lt;tt&gt;max_dirty_mb&lt;/tt&gt; and set it to something larger.&lt;/p&gt;</comment>
                            <comment id="57911" author="ihara" created="Wed, 8 May 2013 16:11:10 +0000"  >&lt;p&gt;Peter, yes, the performance is back with the latest commit (2.3.65). However, I&apos;m hitting another issue with IOR; I will investigate, and if it&apos;s a different problem I will open a new ticket.&lt;/p&gt;</comment>
                            <comment id="57912" author="ihara" created="Wed, 8 May 2013 16:20:21 +0000"  >&lt;p&gt;Prakash, attached are the llite.*.unstable_stats and grep NFS_Unstable /proc/meminfo output during IOR. I also collected a collectl log and found that the client was not writing continuously; it alternates: writing, idle, writing, idle, ...&lt;/p&gt;

&lt;p&gt;Our client&apos;s memory is 64GB, and I tested a larger max_dirty_mb (more than 3/4 of memory, up from the default 1/2), but it didn&apos;t help either.&lt;/p&gt;</comment>
                            <comment id="57928" author="prakash" created="Wed, 8 May 2013 18:06:19 +0000"  >&lt;blockquote&gt;
&lt;p&gt;Prakash, attached are llite.*.unstable_stats&apos; and grep NFS_Unstable /proc/meminfo output during IOR.&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;Thanks, I&apos;ll give these a look.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;it&apos;s doing like this writing; idle, writing; idle;...&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;OK, that sounds like exactly what I saw during my testing when I was hitting the kernel&apos;s dirty page limits. Allow me to explain.&lt;/p&gt;

&lt;p&gt;So, with that patch in place, we&apos;re now properly informing the kernel of our unstable pages by incrementing and decrementing the NFS_Unstable zone page counter. You can see this by watching the NFS_Unstable field in /proc/meminfo (before the patch it will always be zero; after the patch it will fluctuate with Lustre IO). So that&apos;s all well and good, but how does it relate to the idle time seen during IO? What I think is happening is that the newly accounted-for unstable pages are being factored in when the kernel calls &lt;tt&gt;balance_dirty_pages&lt;/tt&gt;; &lt;tt&gt;balance_dirty_pages&lt;/tt&gt; then determines the system is out of dirty pages and sleeps, waiting for writeback to flush the dirty pages to disk.&lt;/p&gt;

&lt;p&gt;You can verify this theory by dumping the stacks of all processes while the IO is stalled and checking whether any of the write threads are stuck sleeping in &lt;tt&gt;balance_dirty_pages&lt;/tt&gt;. What do these files show on your system: &lt;tt&gt;/proc/sys/vm/dirty_background_bytes&lt;/tt&gt;, &lt;tt&gt;/proc/sys/vm/dirty_background_ratio&lt;/tt&gt;, &lt;tt&gt;/proc/sys/vm/dirty_bytes&lt;/tt&gt;, &lt;tt&gt;/proc/sys/vm/dirty_ratio&lt;/tt&gt;? I &lt;em&gt;think&lt;/em&gt; the dirty limit on the system is calculated as &lt;tt&gt;dirty_ratio * available_memory&lt;/tt&gt; when &lt;tt&gt;dirty_bytes&lt;/tt&gt; is 0. So in your case the limit is about 12.8 GB (assuming a &lt;tt&gt;dirty_ratio&lt;/tt&gt; of 20, which is the default on my system, and a &lt;tt&gt;dirty_bytes&lt;/tt&gt; of 0). Seeing as the value of NFS_Unstable in your logs hovers around 11556864 kB, it&apos;s plausible that dirty + unstable is hitting the limit.&lt;/p&gt;
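
&lt;p&gt;A rough sketch of that arithmetic (assuming the full 64 GB counts as available_memory):&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;dirty limit = dirty_ratio / 100 * available_memory
            = 20 / 100 * 64 GiB
            = 12.8 GiB
NFS_Unstable ~ 11556864 kB ~ 11.0 GiB, i.e. close to the limit
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;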

&lt;p&gt;If my above hypothesis is correct, the behavior you were seeing was expected and working as designed. The same problem would occur if you could push NFS at the same rates. If you had the full &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-2139&quot; title=&quot;Tracking unstable pages&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-2139&quot;&gt;&lt;del&gt;LU-2139&lt;/del&gt;&lt;/a&gt; patch stack applied to the client and servers (&lt;a href=&quot;http://review.whamcloud.com/4245&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/4245&lt;/a&gt;, &lt;a href=&quot;http://review.whamcloud.com/4374&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/4374&lt;/a&gt;, &lt;a href=&quot;http://review.whamcloud.com/4375&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/4375&lt;/a&gt;, &lt;a href=&quot;http://review.whamcloud.com/5935&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/5935&lt;/a&gt;), I&apos;d expect this effect to go away.&lt;/p&gt;

&lt;p&gt;If you can, try setting &lt;tt&gt;dirty_ratio&lt;/tt&gt; and &lt;tt&gt;max_dirty_mb&lt;/tt&gt; to a large fraction of memory and rerun the test.&lt;/p&gt;</comment>
                            <comment id="57939" author="jlevi" created="Wed, 8 May 2013 19:24:16 +0000"  >&lt;p&gt;Lowering priority as &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-2139&quot; title=&quot;Tracking unstable pages&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-2139&quot;&gt;&lt;del&gt;LU-2139&lt;/del&gt;&lt;/a&gt; was reverted.&lt;/p&gt;</comment>
                            <comment id="57960" author="prakash" created="Wed, 8 May 2013 23:49:33 +0000"  >&lt;p&gt;In case it proves useful, here&apos;s an example stack for a thread waiting in the state I described in my previous comment:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;    fio           S 00000fffae72633c     0 59338  59283 0x00000000
    Call Trace:
    [c0000003e0deed20] [c0000003e0deede0] 0xc0000003e0deede0 (unreliable)
    [c0000003e0deeef0] [c000000000008e10] .__switch_to+0xc4/0x100
    [c0000003e0deef80] [c00000000042b0e0] .schedule+0x858/0x9c0
    [c0000003e0def230] [c00000000042b7c8] .schedule_timeout+0x1f8/0x240
    [c0000003e0def310] [c00000000042a444] .io_schedule_timeout+0x54/0x98
    [c0000003e0def3a0] [c00000000009ddfc] .balance_dirty_pages+0x294/0x390
    [c0000003e0def520] [c000000000095a2c] .generic_file_buffered_write+0x268/0x354
    [c0000003e0def660] [c000000000096074] .__generic_file_aio_write+0x374/0x3d8
    [c0000003e0def760] [c000000000096150] .generic_file_aio_write+0x78/0xe8
    [c0000003e0def810] [8000000006a7062c] .vvp_io_write_start+0xfc/0x3e0 [lustre]
    [c0000003e0def8e0] [800000000249a81c] .cl_io_start+0xcc/0x220 [obdclass]
    [c0000003e0def980] [80000000024a2634] .cl_io_loop+0x194/0x2c0 [obdclass]
    [c0000003e0defa30] [80000000069ea208] .ll_file_io_generic+0x498/0x670 [lustre]
    [c0000003e0defb30] [80000000069ea864] .ll_file_aio_write+0x1d4/0x3a0 [lustre]
    [c0000003e0defc00] [80000000069eab80] .ll_file_write+0x150/0x320 [lustre]
    [c0000003e0defce0] [c0000000000d1e9c] .vfs_write+0xd0/0x1c4
    [c0000003e0defd80] [c0000000000d208c] .SyS_write+0x54/0x98
    [c0000003e0defe30] [c000000000000580] syscall_exit+0x0/0x2c
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
                            <comment id="66431" author="prakash" created="Wed, 11 Sep 2013 22:22:53 +0000"  >&lt;p&gt;Since I&apos;ve been assigned to this, I&apos;m marking it resolved. The &quot;bad patch&quot; was reverted, and there have been no reports of this since, which leads me to believe it is no longer an issue. Feel free to reopen if there is a compelling case to do so.&lt;/p&gt;

&lt;p&gt;General discussion of the &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-2139&quot; title=&quot;Tracking unstable pages&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-2139&quot;&gt;&lt;del&gt;LU-2139&lt;/del&gt;&lt;/a&gt; issue and pending patch stack is better suited to the &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-2139&quot; title=&quot;Tracking unstable pages&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-2139&quot;&gt;&lt;del&gt;LU-2139&lt;/del&gt;&lt;/a&gt; ticket.&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10120">
                    <name>Blocker</name>
                                            <outwardlinks description="is blocking">
                                        <issuelink>
            <issuekey id="15971">LU-2139</issuekey>
        </issuelink>
                            </outwardlinks>
                                                        </issuelinktype>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                            <outwardlinks description="is related to ">
                                        <issuelink>
            <issuekey id="17090">LU-2576</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="15971">LU-2139</issuekey>
        </issuelink>
                            </outwardlinks>
                                                        </issuelinktype>
                    </issuelinks>
                <attachments>
                            <attachment id="12639" name="collectl.log" size="13369" author="ihara" created="Wed, 8 May 2013 16:20:21 +0000"/>
                            <attachment id="12640" name="stat.log" size="16779" author="ihara" created="Wed, 8 May 2013 16:20:21 +0000"/>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                    <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzvq6f:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>8114</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10021"><![CDATA[2]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                </customfields>
    </item>
</channel>
</rss>