<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 01:40:33 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
<language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-4198] Improve IO performance when using DIRECT IO using libaio</title>
                <link>https://jira.whamcloud.com/browse/LU-4198</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;Attached to this Jira are some numbers from the direct IO tests. Write operations only.&lt;/p&gt;

&lt;p&gt;It was noticed that setting RPCs in flight to 256 in these tests gives poorer performance; the max RPCs in flight here is set to 32.&lt;/p&gt;

&lt;ul&gt;
	&lt;li&gt;A sample FIO output:
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;fio.4k.write.1.23499: (g=0): rw=write, bs=4K-4K/4K-4K/4K-4K, ioengine=sync, iodepth=1
fio-2.1.2
Starting 1 process
fio.4k.write.1.23499: Laying out IO file(s) (1 file(s) / 10MB)

fio.4k.write.1.23499: (groupid=0, jobs=1): err= 0: pid=10709: Fri Nov  1 11:47:29 2013
  write: io=10240KB, bw=2619.7KB/s, iops=654, runt=  3909msec
    clat (usec): min=579, max=5283, avg=1520.43, stdev=1216.20
     lat (usec): min=580, max=5299, avg=1521.37, stdev=1216.22
    clat percentiles (usec):
     |  1.00th=[  604],  5.00th=[  652], 10.00th=[  668], 20.00th=[  708],
     | 30.00th=[  732], 40.00th=[  756], 50.00th=[  796], 60.00th=[  844],
     | 70.00th=[ 1320], 80.00th=[ 3440], 90.00th=[ 3568], 95.00th=[ 3632],
     | 99.00th=[ 3824], 99.50th=[ 5024], 99.90th=[ 5216], 99.95th=[ 5280],
     | 99.99th=[ 5280]
    bw (KB  /s): min= 1224, max= 4366, per=97.64%, avg=2557.14, stdev=1375.64
    lat (usec) : 750=37.50%, 1000=30.12%
    lat (msec) : 2=5.00%, 4=26.76%, 10=0.62%
  cpu          : usr=0.92%, sys=8.70%, ctx=2562, majf=0, minf=25
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, &amp;gt;=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, &amp;gt;=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, &amp;gt;=64=0.0%
     issued    : total=r=0/w=2560/d=0, short=r=0/w=0/d=0

Run status group 0 (all jobs):
  WRITE: io=10240KB, aggrb=2619KB/s, minb=2619KB/s, maxb=2619KB/s, mint=3909msec, maxt=3909msec
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;&lt;/li&gt;
&lt;/ul&gt;
</description>
                <environment>Seen in two environments.  AWS cloud (Robert R.) and a dual-OSS setup (3 SSD per OST) over 2x10 GbE.</environment>
        <key id="21786">LU-4198</key>
            <summary>Improve IO performance when using DIRECT IO using libaio</summary>
                <type id="4" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11310&amp;avatarType=issuetype">Improvement</type>
                                            <priority id="4" iconUrl="https://jira.whamcloud.com/images/icons/priorities/minor.svg">Minor</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="bobijam">Zhenyu Xu</assignee>
                                    <reporter username="brett">Brett Lee</reporter>
                        <labels>
                            <label>clio</label>
                    </labels>
                <created>Fri, 1 Nov 2013 20:19:29 +0000</created>
                <updated>Wed, 26 Aug 2020 13:15:57 +0000</updated>
                            <resolved>Wed, 10 Jun 2020 19:40:35 +0000</resolved>
                                    <version>Lustre 2.4.1</version>
                                    <fixVersion>Lustre 2.14.0</fixVersion>
                                        <due></due>
                            <votes>0</votes>
                                    <watches>25</watches>
                                                                            <comments>
                            <comment id="70528" author="keith" created="Fri, 1 Nov 2013 20:28:41 +0000"  >&lt;p&gt;I am not quite sure how to read this output to know if it is good or bad.&lt;/p&gt;

&lt;p&gt;In general I expect Direct I/O to hurt performance. It gets the filesystem read/write caches out of the way of the app.  It is commonly used for databases to minimize the risk of data loss (some turn off hardware write caches as well).&lt;/p&gt;

&lt;p&gt;For 4k I/O I would not use wide striping.&lt;/p&gt;</comment>
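A quick sanity check on the fio output quoted in the description (an editor's illustrative snippet, not part of the original report): for 4 KiB writes, IOPS is simply bandwidth divided by block size, and the two numbers fio reports agree.

```python
# Sanity-check the fio summary quoted in the description: for 4 KiB writes,
# IOPS is bandwidth over block size, so "bw=2619.7KB/s" and "iops=654"
# should agree (fio truncates rather than rounds).
bw_kb_s = 2619.7   # from the "write:" line of the fio output
block_kb = 4.0     # bs=4K
iops = bw_kb_s / block_kb
print(int(iops))   # 654, matching the reported iops=654
```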
                            <comment id="70591" author="johann" created="Mon, 4 Nov 2013 07:54:50 +0000"  >&lt;blockquote&gt;
&lt;p&gt;Additional stripes on a file does not increase IO performance when using DIRECT IO&lt;/p&gt;&lt;/blockquote&gt;

&lt;p&gt;This is unfortunately expected since we have to wait for I/O completion on the first stripe before firing RPCs to the next one (i.e. &lt;/p&gt;
&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;foreach(stripe) { lock(stripe); do_sync_io(stripe); unlock(stripe); }&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;) to work around the cascading abort issue. On 1.8, some customers were using a patch to use lockless direct I/O by default.&lt;/p&gt;</comment>
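The serialization Johann sketches above can be modeled with a toy latency calculation (an illustrative sketch only, not Lustre code; the per-stripe latencies are made-up numbers): because each stripe's sync I/O must complete before the next begins, total time is the sum of per-stripe latencies rather than the maximum, so adding stripes cannot help.

```python
# Toy model of the loop "foreach(stripe) { lock; do_sync_io; unlock; }":
# sync direct IO pays the SUM of per-stripe latencies, while a fully
# parallel dispatch would pay only the MAX (the slowest stripe).

def serialized_io_ms(stripe_latencies_ms):
    # one stripe at a time: latencies accumulate
    return sum(stripe_latencies_ms)

def parallel_io_ms(stripe_latencies_ms):
    # all RPCs in flight at once: bounded by the slowest stripe
    return max(stripe_latencies_ms)

latencies = [1.5, 1.5, 1.5, 1.5]    # 4 stripes, 1.5 ms each (hypothetical)
print(serialized_io_ms(latencies))  # 6.0 - adding stripes adds time
print(parallel_io_ms(latencies))    # 1.5 - adding stripes is free
```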
                            <comment id="70624" author="rread" created="Mon, 4 Nov 2013 16:29:17 +0000"  >&lt;p&gt;&lt;a href=&quot;https://jira.whamcloud.com/secure/ViewProfile.jspa?name=johann&quot; class=&quot;user-hover&quot; rel=&quot;johann&quot;&gt;johann&lt;/a&gt; Ah, that is what I was afraid of. Is there a lockless direct IO patch for 2.x? That would probably be very helpful in this use case. &lt;/p&gt;</comment>
                            <comment id="70625" author="rread" created="Mon, 4 Nov 2013 16:35:11 +0000"  >&lt;p&gt;I see in &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-238&quot; title=&quot;add procfs tunable to enable/disable lockless direct I/O&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-238&quot;&gt;&lt;del&gt;LU-238&lt;/del&gt;&lt;/a&gt; that there is a nolock mount option that we can use to enable lockless direct IO.&lt;/p&gt;</comment>
                            <comment id="70635" author="keith" created="Mon, 4 Nov 2013 18:03:15 +0000"  >&lt;p&gt;What is the cascading abort issue? &lt;/p&gt;</comment>
                            <comment id="70681" author="rread" created="Tue, 5 Nov 2013 01:43:21 +0000"  >&lt;p&gt;I tried mounting the client with &quot;nolock&quot; and performance actually got about 4x worse than before. &lt;/p&gt;</comment>
                            <comment id="70701" author="johann" created="Tue, 5 Nov 2013 09:59:55 +0000"  >&lt;blockquote&gt;&lt;p&gt; I see in &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-238&quot; title=&quot;add procfs tunable to enable/disable lockless direct I/O&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-238&quot;&gt;&lt;del&gt;LU-238&lt;/del&gt;&lt;/a&gt; that there is a nolock mount option that we can use to enable lockless direct IO. &lt;/p&gt;&lt;/blockquote&gt;

&lt;p&gt;It seems that this patch enables lockless I/O not only for direct I/O, but also for buffered I/O, which is quite bad.&lt;/p&gt;

&lt;blockquote&gt;&lt;p&gt; What is the cascading abort issue? &lt;/p&gt;&lt;/blockquote&gt;

&lt;p&gt;Client holds a lock on resource from server A and waits for RPC completion on server B. This introduces an implicit dependency between servers. If server B is not responsive (e.g. doing failover or just slow because it is overloaded) and server A issues a blocking AST, the client will get evicted from server A since it cannot release the lock in a timely manner.&lt;/p&gt;

&lt;blockquote&gt;&lt;p&gt; I tried mounting the client with &quot;nolock&quot; and performance actually got about 4x worse than before. &lt;/p&gt;&lt;/blockquote&gt;

&lt;p&gt;Strange ... we definitely got better results with 1.8. There is probably something wrong with CLIO.&lt;br/&gt;
BTW, to be clear, you should only see a benefit if your direct writes cover multiple stripes, otherwise there won&apos;t be any parallelism.&lt;/p&gt;

&lt;p&gt;HTH&lt;/p&gt;</comment>
                            <comment id="70757" author="jay" created="Tue, 5 Nov 2013 18:07:21 +0000"  >&lt;p&gt;In 2.x, the only difference between direct IO and cache IO is whether dirty data is cached on the client. They actually share the same IO framework.&lt;/p&gt;

&lt;p&gt;Still, it&apos;s really strange that the no-lock version was 4x worse; the server takes the lock for no-lock IO. Was anybody else operating on this file meanwhile?&lt;/p&gt;</comment>
                            <comment id="70758" author="rread" created="Tue, 5 Nov 2013 18:28:03 +0000"  >&lt;p&gt;I was running a single threaded benchmark (FIO) and there was only a single client on the filesystem. So definitely not shared. &lt;img class=&quot;emoticon&quot; src=&quot;https://jira.whamcloud.com/images/icons/emoticons/smile.png&quot; height=&quot;16&quot; width=&quot;16&quot; align=&quot;absmiddle&quot; alt=&quot;&quot; border=&quot;0&quot;/&gt;&lt;/p&gt;

&lt;p&gt;It seems there are other differences between direct and buffered IO, such as direct IO being synchronous. I&apos;ve noticed while testing AIO with various IO depths that AIO appears to make no difference with direct IO.&lt;/p&gt;</comment>
                            <comment id="70789" author="jay" created="Tue, 5 Nov 2013 22:49:53 +0000"  >&lt;p&gt;AIO used to work with Direct IO only. I don&apos;t know what the state in the current kernel is; I&apos;ll check it out.&lt;/p&gt;

&lt;p&gt;If we want to use direct IO, two problems have to be addressed:&lt;br/&gt;
1. lock: if the file is being read or written with direct IO, it&apos;s unnecessary to take a lock from the server. Can we make the assumption that all direct IO should be lockless?&lt;/p&gt;

&lt;p&gt;2. universal direct IO support: in the current implementation, the address of the user buffer has to be page-aligned. Niu has a patch to address this problem but it uses obsolete interfaces.&lt;/p&gt;

&lt;p&gt;Both problems should not be difficult to solve.&lt;/p&gt;

&lt;p&gt;Robert, will you briefly describe the use case scenarios for direct IO?&lt;/p&gt;</comment>
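On the alignment problem in point 2: a minimal helper (hypothetical; the function name and values are the editor's, not from Lustre) expressing the usual O_DIRECT rule that buffer address, file offset, and transfer length must all be multiples of the alignment unit, typically the page size.

```python
def dio_aligned(buf_addr, offset, length, alignment=4096):
    """Check the usual O_DIRECT constraint: buffer address, file offset,
    and transfer length must all be multiples of the alignment unit
    (the page size here; some filesystems accept 512 bytes)."""
    return all(v % alignment == 0 for v in (buf_addr, offset, length))

print(dio_aligned(0x7f0000000000, 0, 4096))  # True: fully aligned request
print(dio_aligned(0x7f0000000123, 0, 4096))  # False: misaligned user buffer
```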
                            <comment id="71888" author="jose_e_valerio" created="Tue, 19 Nov 2013 15:02:37 +0000"  >&lt;p&gt;Hello, all.&lt;/p&gt;

&lt;p&gt;I have performed tests in one of the two environments where Brett worked (the dual-OSS setup - 3 SSD per OST - over 2x10 GbE).&lt;/p&gt;

&lt;p&gt;I ran tests writing directly to a local SSD and with another network block storage tool (NBD - Network Block Device), playing with the O_DIRECT and O_SYNC flags:&lt;/p&gt;

&lt;ul class=&quot;alternate&quot; type=&quot;square&quot;&gt;
	&lt;li&gt;no flags + native Async IO (libaio), 8 writes in flight&lt;/li&gt;
	&lt;li&gt;O_DIRECT + native Async IO (libaio), 8 writes in flight&lt;/li&gt;
	&lt;li&gt;O_DIRECT + O_SYNC&lt;/li&gt;
&lt;/ul&gt;


&lt;p&gt;The results show that performance in both setups (local SSD and NBD) follows this pattern:&lt;/p&gt;

&lt;p&gt;only libaio &amp;gt;&amp;gt; faster than &amp;gt;&amp;gt; O_DIRECT + libaio &amp;gt;&amp;gt; faster than &amp;gt;&amp;gt; O_DIRECT + O_SYNC&lt;/p&gt;

&lt;p&gt;whereas, with Lustre, O_DIRECT + libaio and O_DIRECT + O_SYNC show the same performance.&lt;/p&gt;

&lt;p&gt;I exchanged a couple of emails with Brett and he confirmed that in Lustre, setting O_DIRECT always implies also setting O_SYNC.&lt;/p&gt;

&lt;p&gt;Also, in theory:&lt;/p&gt;

&lt;p&gt;O_DIRECT (Since Linux 2.4.10)&lt;br/&gt;
              Try to minimize cache effects of the I/O to and from this&lt;br/&gt;
              file.  In general this will degrade performance, but it is&lt;br/&gt;
              useful in special situations, such as when applications do&lt;br/&gt;
              their own caching.  File I/O is done directly to/from user-&lt;br/&gt;
              space buffers.  The O_DIRECT flag on its own makes an effort&lt;br/&gt;
              to transfer data synchronously, but does not give the&lt;br/&gt;
              guarantees of the O_SYNC flag that data and necessary metadata&lt;br/&gt;
              are transferred.  To guarantee synchronous I/O, O_SYNC must be&lt;br/&gt;
              used in addition to O_DIRECT.  See NOTES below for further&lt;br/&gt;
              discussion.&lt;/p&gt;

&lt;p&gt;              A semantically similar (but deprecated) interface for block&lt;br/&gt;
              devices is described in raw(8).&lt;/p&gt;

&lt;p&gt;This comes from open(2) man page.&lt;/p&gt;

&lt;p&gt;So, to my understanding, O_DIRECT &quot;tries&quot; to write synchronously, but does not offer any guarantee. This is especially important when using O_DIRECT + libaio, a library that allows non-blocking parallel writes from single-threaded user-space applications.&lt;/p&gt;

&lt;p&gt;According to the theory (man pages) and my tests with local SSD and NBD, I would personally say that Lustre deviates from standard POSIX filesystem behavior.&lt;/p&gt;

&lt;p&gt;Not only that, this behavior slows down Lustre, apparently for no reason.&lt;/p&gt;

&lt;p&gt;If you agree with me on that, I would like to request a change in the code to correct this behavior, or at least some guidance on where to change it myself, so I can test again and maybe see a bump in performance.&lt;/p&gt;

&lt;p&gt;Thanks in advance&lt;/p&gt;</comment>
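For reference, the three configurations compared above differ only in their open(2) flag sets; a Linux-oriented editorial sketch (illustrative only; libaio has no Python binding in the standard library, so only the flag composition is shown):

```python
import os

# Linux-specific sketch: the three write configurations from the tests above
# differ only in their open(2) flags. O_DIRECT bypasses the page cache;
# O_SYNC additionally makes every write wait for data and metadata to reach
# stable storage. (os.O_DIRECT is Linux-only, hence the getattr guard.)
O_DIRECT = getattr(os, "O_DIRECT", 0)
base = os.O_WRONLY | os.O_CREAT
configs = {
    "libaio only":       base,                     # buffered, async submit
    "O_DIRECT + libaio": base | O_DIRECT,          # uncached, async submit
    "O_DIRECT + O_SYNC": base | O_DIRECT | os.O_SYNC,  # uncached and durable
}
for name, flags in configs.items():
    print(name, hex(flags))
```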
                            <comment id="75889" author="brett" created="Wed, 29 Jan 2014 19:06:10 +0000"  >&lt;p&gt;Using packages built for the &quot;SLES 11 SP2&quot; OS, I am seeing an LBUG when mounting a newly created storage target (an MGT in this case).  This event is repeatable.  Using packages from:&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://build.whamcloud.com/job/lustre-reviews/21279/arch=x86_64,build_type=server,distro=sles11sp2,ib_stack=inkernel/artifact/artifacts/RPMS/x86_64/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://build.whamcloud.com/job/lustre-reviews/21279/arch=x86_64,build_type=server,distro=sles11sp2,ib_stack=inkernel/artifact/artifacts/RPMS/x86_64/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Stack trace seen on the console is:&lt;/p&gt;

&lt;p&gt;sles11sp2-2:~/work # mount -t lustre /dev/vdb /sap/mgs&lt;br/&gt;
[  159.338678] LustreError: 3532:0:(sec_ctx.c:80:pop_ctxt()) ASSERTION( segment_eq(get_fs(), get_ds()) ) failed: popping non-kernel context!&lt;br/&gt;
[  159.340266] LustreError: 3532:0:(sec_ctx.c:80:pop_ctxt()) LBUG&lt;br/&gt;
[  159.342723] Kernel panic - not syncing: LBUG&lt;br/&gt;
[  159.343356] Pid: 3532, comm: mount.lustre Tainted: G           N  3.0.93-0.5_lustre.ge80a1ca-default #1&lt;br/&gt;
[  159.344239] Call Trace:&lt;br/&gt;
[  159.344384]  &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffff810048b5&amp;gt;&amp;#93;&lt;/span&gt; dump_trace+0x75/0x310&lt;br/&gt;
[  159.344692]  &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffff814473a3&amp;gt;&amp;#93;&lt;/span&gt; dump_stack+0x69/0x6f&lt;br/&gt;
[  159.344983]  &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffff8144743c&amp;gt;&amp;#93;&lt;/span&gt; panic+0x93/0x201&lt;br/&gt;
[  159.345255]  &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffffa02f0dc3&amp;gt;&amp;#93;&lt;/span&gt; lbug_with_loc+0xa3/0xb0 &lt;span class=&quot;error&quot;&gt;&amp;#91;libcfs&amp;#93;&lt;/span&gt;&lt;br/&gt;
[  159.345621]  &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffffa076087c&amp;gt;&amp;#93;&lt;/span&gt; pop_ctxt+0x19c/0x1a0 &lt;span class=&quot;error&quot;&gt;&amp;#91;ptlrpc&amp;#93;&lt;/span&gt;&lt;br/&gt;
[  159.345984]  &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffffa0b0667d&amp;gt;&amp;#93;&lt;/span&gt; osd_ost_init+0x23d/0x8d0 &lt;span class=&quot;error&quot;&gt;&amp;#91;osd_ldiskfs&amp;#93;&lt;/span&gt;&lt;br/&gt;
[  159.346370]  &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffffa0b06d39&amp;gt;&amp;#93;&lt;/span&gt; osd_obj_map_init+0x29/0x120 &lt;span class=&quot;error&quot;&gt;&amp;#91;osd_ldiskfs&amp;#93;&lt;/span&gt;&lt;br/&gt;
[  159.346767]  &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffffa0ae4151&amp;gt;&amp;#93;&lt;/span&gt; osd_device_init0+0x281/0x5c0 &lt;span class=&quot;error&quot;&gt;&amp;#91;osd_ldiskfs&amp;#93;&lt;/span&gt;&lt;br/&gt;
[  159.347166]  &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffffa0ae47c6&amp;gt;&amp;#93;&lt;/span&gt; osd_device_alloc+0x166/0x2c0 &lt;span class=&quot;error&quot;&gt;&amp;#91;osd_ldiskfs&amp;#93;&lt;/span&gt;&lt;br/&gt;
[  159.347574]  &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffffa04d642b&amp;gt;&amp;#93;&lt;/span&gt; class_setup+0x61b/0xad0 &lt;span class=&quot;error&quot;&gt;&amp;#91;obdclass&amp;#93;&lt;/span&gt;&lt;br/&gt;
[  159.347957]  &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffffa04de5f5&amp;gt;&amp;#93;&lt;/span&gt; class_process_config+0xc95/0x18f0 &lt;span class=&quot;error&quot;&gt;&amp;#91;obdclass&amp;#93;&lt;/span&gt;&lt;br/&gt;
[  159.348393]  &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffffa04e3652&amp;gt;&amp;#93;&lt;/span&gt; do_lcfg+0x142/0x460 &lt;span class=&quot;error&quot;&gt;&amp;#91;obdclass&amp;#93;&lt;/span&gt;&lt;br/&gt;
[  159.348752]  &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffffa04e3a04&amp;gt;&amp;#93;&lt;/span&gt; lustre_start_simple+0x94/0x210 &lt;span class=&quot;error&quot;&gt;&amp;#91;obdclass&amp;#93;&lt;/span&gt;&lt;br/&gt;
[  159.349168]  &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffffa051171a&amp;gt;&amp;#93;&lt;/span&gt; osd_start+0x4fa/0x7c0 &lt;span class=&quot;error&quot;&gt;&amp;#91;obdclass&amp;#93;&lt;/span&gt;&lt;br/&gt;
[  159.349549]  &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffffa051b41d&amp;gt;&amp;#93;&lt;/span&gt; server_fill_super+0xfd/0xce0 &lt;span class=&quot;error&quot;&gt;&amp;#91;obdclass&amp;#93;&lt;/span&gt;&lt;br/&gt;
[  159.349965]  &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffffa04e91e8&amp;gt;&amp;#93;&lt;/span&gt; lustre_fill_super+0x178/0x530 &lt;span class=&quot;error&quot;&gt;&amp;#91;obdclass&amp;#93;&lt;/span&gt;&lt;br/&gt;
[  159.350362]  &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffff811556e3&amp;gt;&amp;#93;&lt;/span&gt; mount_nodev+0x83/0xc0&lt;br/&gt;
[  159.350668]  &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffffa04e1080&amp;gt;&amp;#93;&lt;/span&gt; lustre_mount+0x20/0x30 &lt;span class=&quot;error&quot;&gt;&amp;#91;obdclass&amp;#93;&lt;/span&gt;&lt;br/&gt;
[  159.351035]  &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffff811551ee&amp;gt;&amp;#93;&lt;/span&gt; mount_fs+0x4e/0x1a0&lt;br/&gt;
[  159.351318]  &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffff811703f5&amp;gt;&amp;#93;&lt;/span&gt; vfs_kern_mount+0x65/0xd0&lt;br/&gt;
[  159.351623]  &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffff811704e3&amp;gt;&amp;#93;&lt;/span&gt; do_kern_mount+0x53/0x110&lt;br/&gt;
[  159.351930]  &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffff81171e2d&amp;gt;&amp;#93;&lt;/span&gt; do_mount+0x21d/0x260&lt;br/&gt;
[  159.352246]  &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffff81171f30&amp;gt;&amp;#93;&lt;/span&gt; sys_mount+0xc0/0xf0&lt;br/&gt;
[  159.352529]  &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffff81452112&amp;gt;&amp;#93;&lt;/span&gt; system_call_fastpath+0x16/0x1b&lt;br/&gt;
[  159.352867]  &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;00007f99b62bd1ea&amp;gt;&amp;#93;&lt;/span&gt; 0x7f99b62bd1e9&lt;/p&gt;

&lt;p&gt;Have confirmed that the same installed OS functions properly using SLES packages from:&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;http://build.whamcloud.com/job/lustre-b2_5/arch=x86_64,build_type=server,distro=sles11sp2,ib_stack=inkernel/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://build.whamcloud.com/job/lustre-b2_5/arch=x86_64,build_type=server,distro=sles11sp2,ib_stack=inkernel/&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="75915" author="jay" created="Thu, 30 Jan 2014 03:37:32 +0000"  >&lt;p&gt;Hi Brett, it seems not related, please file a new ticket for the problem.&lt;/p&gt;</comment>
                            <comment id="76118" author="brett" created="Mon, 3 Feb 2014 18:52:25 +0000"  >&lt;p&gt;Thanks Jinshan - have opened a different ticket for that issue.&lt;/p&gt;

&lt;p&gt;In testing the build from 21279 on CentOS 6.4, I saw two similar issues.  The configuration is a single node running an MGS, 1 MDT, 2 OSTs, and 1 client mount.  I also have an identical &quot;control&quot; system.  Both worked well (no issues seen; all tests completed) with RHEL server bits from:&lt;br/&gt;
&lt;a href=&quot;http://downloads.whamcloud.com/public/lustre/latest-feature-release/el6/server/RPMS/x86_64/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://downloads.whamcloud.com/public/lustre/latest-feature-release/el6/server/RPMS/x86_64/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After reconfiguring same system with the new bits:&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;http://build.whamcloud.com/job/lustre-reviews/21279/arch=x86_64,build_type=server,distro=el6,ib_stack=inkernel/artifact/artifacts/RPMS/x86_64/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://build.whamcloud.com/job/lustre-reviews/21279/arch=x86_64,build_type=server,distro=el6,ib_stack=inkernel/artifact/artifacts/RPMS/x86_64/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;and running IO, there were two issues seen.  First was a hung system that resulted in a /tmp/debug log ~ 6MB.  Second was a system that got corrupted - showed 61% capacity utilization on both OSTs, though no files were present from the client perspective - also produced a debug log ~300K.  Both debug logs are available for further review but not uploaded.&lt;/p&gt;

&lt;p&gt;The system hang failure appeared immediately upon the first of 32 IO tests, using synchronous IO.  The second failure occurred on only 2 of the 32 test cases, all of them using AIO.  The two tests that failed were 1 GB write and random write, in 64MB bursts.  In those two cases, the IO hung but I was able to ctrl-c out of the IO job.&lt;/p&gt;

&lt;p&gt;16 tests were run using 1 OST, 16 tests using 2 OSTs.  Note that in several of the test cases the performance benefit using these patches (vs. the control node) was very pronounced.  Will be working to get more samples to increase the reliability of these data, and to further check/troubleshoot any issues with stability.&lt;/p&gt;</comment>
                            <comment id="76265" author="brett" created="Wed, 5 Feb 2014 15:21:40 +0000"  >&lt;p&gt;Update:&lt;br/&gt;
Continuing to run benchmarks against this build.&lt;/p&gt;

&lt;p&gt;No further hung system issues.  Oddly, the hang was on the initial IO, and has never been seen since.&lt;/p&gt;

&lt;p&gt;The &quot;corrupted&quot; event is reproducible, though I would no longer call it corrupted.  Rather, it has to do with stalled fio kernel threads.  After killing off the fio user processes, two kernel threads remained.  After rebooting to end those threads, the 61% was cleared.&lt;/p&gt;

&lt;p&gt;Note that all fio writes using block size 64M are not completing (though they are on the 2.5 release, as well as on the root &lt;img class=&quot;emoticon&quot; src=&quot;https://jira.whamcloud.com/images/icons/emoticons/check.png&quot; height=&quot;16&quot; width=&quot;16&quot; align=&quot;absmiddle&quot; alt=&quot;&quot; border=&quot;0&quot;/&gt; ext4 file system).&lt;/p&gt;

&lt;p&gt;All other reads/writes (sequential and random) are completing successfully and without incident.  Performance data comparisons upcoming.&lt;/p&gt;</comment>
                            <comment id="76642" author="brett" created="Mon, 10 Feb 2014 20:08:23 +0000"  >&lt;p&gt;Data in the attached spreadsheet seems to make a good case for including the performance improvements.  Also, I&#8217;ve not seen any further stability issues since the beginning of the test period.&lt;/p&gt;</comment>
                            <comment id="76646" author="jay" created="Mon, 10 Feb 2014 20:24:43 +0000"  >&lt;p&gt;Will you please increase iodepth to at least 32 and see if we can get any better results?&lt;/p&gt;</comment>
                            <comment id="76834" author="brett" created="Wed, 12 Feb 2014 15:22:43 +0000"  >&lt;p&gt;Better?  I thought those results were pretty good already.  Will give it a try.&lt;/p&gt;</comment>
                            <comment id="77155" author="brett" created="Sat, 15 Feb 2014 22:05:47 +0000"  >&lt;p&gt;Jinshan - an OST failed on me (each OST is one SATA-II or III disk) and have no other suitable disks.  Have ordered a pair of WD 10K RPM Velociraptors (200 MB/s) that will support queue depth up to 32 (NCQ).  On hold till then.&lt;/p&gt;</comment>
                            <comment id="98112" author="rhenwood" created="Fri, 31 Oct 2014 22:34:02 +0000"  >&lt;p&gt;Jinshan, please update this ticket description to include the reason that this ticket is a dependency for &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-3259&quot; title=&quot;cl_lock refactoring&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-3259&quot;&gt;&lt;del&gt;LU-3259&lt;/del&gt;&lt;/a&gt;.&lt;/p&gt;</comment>
                            <comment id="101504" author="rhenwood" created="Fri, 12 Dec 2014 20:09:46 +0000"  >&lt;p&gt;This ticket isn&apos;t directly related to CLIO Simplification work. The ticket relationships on Jira have been updated to reflect this.&lt;/p&gt;</comment>
                            <comment id="152886" author="adilger" created="Thu, 19 May 2016 19:41:35 +0000"  >&lt;p&gt;Patches in Gerrit for this issue:&lt;br/&gt;
&lt;a href=&quot;http://review.whamcloud.com/8201&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/8201&lt;/a&gt;&lt;br/&gt;
&lt;a href=&quot;http://review.whamcloud.com/8612&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/8612&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="165797" author="jay" created="Tue, 13 Sep 2016 06:03:02 +0000"  >&lt;p&gt;Let&apos;s reopen this ticket after we have a more convincing solution for this issue.&lt;/p&gt;</comment>
                            <comment id="165846" author="rread" created="Tue, 13 Sep 2016 15:29:47 +0000"  >&lt;p&gt;&lt;img class=&quot;emoticon&quot; src=&quot;https://jira.whamcloud.com/images/icons/emoticons/sad.png&quot; height=&quot;16&quot; width=&quot;16&quot; align=&quot;absmiddle&quot; alt=&quot;&quot; border=&quot;0&quot;/&gt;&lt;/p&gt;</comment>
                            <comment id="197500" author="paf" created="Tue, 30 May 2017 00:12:27 +0000"  >&lt;p&gt;Patch is still in flight.  (Hope this is OK.)&lt;/p&gt;</comment>
                            <comment id="197501" author="paf" created="Tue, 30 May 2017 00:14:25 +0000"  >&lt;p&gt;&lt;a href=&quot;https://jira.whamcloud.com/browse/LU-247&quot; title=&quot;Lustre client slow performance on BG/P IONs: unaligned DIRECT_IO&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-247&quot;&gt;&lt;del&gt;LU-247&lt;/del&gt;&lt;/a&gt; is pretty old and probably no one has time to update it...  But it could be very useful (with this) for improving sub-page-size write performance.&lt;/p&gt;</comment>
                            <comment id="200083" author="paf" created="Fri, 23 Jun 2017 15:35:22 +0000"  >&lt;p&gt;Jinshan,&lt;/p&gt;

&lt;p&gt;The attached patch is a suggestion for removing the need for size glimpsing for DIO reads.  Not 100% sure it&apos;s safe, but some local testing suggests it&apos;s OK.  (The diff was a little too big to drop into gerrit.)&lt;/p&gt;</comment>
                            <comment id="220478" author="jay" created="Thu, 8 Feb 2018 18:35:08 +0000"  >&lt;p&gt;This work is still useful so probably we should keep this ticket open.&lt;/p&gt;</comment>
                            <comment id="227943" author="gerrit" created="Wed, 16 May 2018 03:46:27 +0000"  >&lt;p&gt;Jinshan Xiong (jinshan.xiong@gmail.com) uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/32415&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/32415&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-4198&quot; title=&quot;Improve IO performance when using DIRECT IO using libaio&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-4198&quot;&gt;&lt;del&gt;LU-4198&lt;/del&gt;&lt;/a&gt; clio: turn on parallel mode for some kind of IO&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 5afe155acdeaf5fefc230d43d23936f03e0e447b&lt;/p&gt;</comment>
                            <comment id="227944" author="gerrit" created="Wed, 16 May 2018 03:46:29 +0000"  >&lt;p&gt;Jinshan Xiong (jinshan.xiong@gmail.com) uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/32416&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/32416&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-4198&quot; title=&quot;Improve IO performance when using DIRECT IO using libaio&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-4198&quot;&gt;&lt;del&gt;LU-4198&lt;/del&gt;&lt;/a&gt; clio: AIO support for direct IO&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 8c480ce9300ad4e4f23f5fac4e5a3c2d038017c7&lt;/p&gt;</comment>
                            <comment id="227945" author="gerrit" created="Wed, 16 May 2018 03:46:30 +0000"  >&lt;p&gt;Jinshan Xiong (jinshan.xiong@gmail.com) uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/32417&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/32417&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-4198&quot; title=&quot;Improve IO performance when using DIRECT IO using libaio&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-4198&quot;&gt;&lt;del&gt;LU-4198&lt;/del&gt;&lt;/a&gt; llite: no lock match for lockess I/O&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: e1c1366b1bd0129cd5276693da8cb1f895db3594&lt;/p&gt;</comment>
                            <comment id="237430" author="sihara" created="Mon, 26 Nov 2018 03:30:00 +0000"  >&lt;p&gt;Here are the test results of patch &lt;a href=&quot;https://review.whamcloud.com/32416&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/32416&lt;/a&gt;&lt;br/&gt;
 &lt;span class=&quot;nobr&quot;&gt;&lt;a href=&quot;https://jira.whamcloud.com/secure/attachment/31517/31517_LU-4198.png&quot; title=&quot;LU-4198.png attached to LU-4198&quot;&gt;LU-4198.png&lt;sup&gt;&lt;img class=&quot;rendericon&quot; src=&quot;https://jira.whamcloud.com/images/icons/link_attachment_7.gif&quot; height=&quot;7&quot; width=&quot;7&quot; align=&quot;absmiddle&quot; alt=&quot;&quot; border=&quot;0&quot;/&gt;&lt;/sup&gt;&lt;/a&gt;&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;Client &lt;br/&gt;
 2 x E5-2650v4@2.20GHz, 128GB memory, 1 x EDR&lt;br/&gt;
 OSS/MDS&lt;br/&gt;
 DDN AI200(2xOSS/MDS, 20 x NVMe, 1 x EDR, master branch)&lt;/p&gt;

&lt;p&gt;Without the patch, we only get 80K IOPS at 4k random read with DIO, even with an increased number of threads. Here are the fio parameters.&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[randread]
ioengine=sync
;ioengine=libaio
rw=randread
blocksize=4096
iodepth=32
direct=1
size=1g
runtime=120
numjobs=128
group_reporting
directory=/cache0/fio.out
filename_format=f.$jobnum.$filenum
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;With the aio patch &lt;a href=&quot;https://review.whamcloud.com/32416&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/32416&lt;/a&gt;, it could reach more than 600K IOPS per client.&lt;br/&gt;
 The patch not only adds libaio support to Lustre, but also helps benchmarking (e.g. only a very small number of clients is needed to saturate storage IOPS) and libaio-based applications (e.g. databases, virtual machine environments).&lt;br/&gt;
 One thing to note: the patch only worked well with a small max_rpcs_in_flight. For instance, max_rpcs_in_flight=1 scaled very well, but max_rpcs_in_flight=256 was problematic and didn&apos;t scale at all.&lt;/p&gt;</comment>
                            <comment id="246452" author="gerrit" created="Mon, 29 Apr 2019 08:50:16 +0000"  >&lt;p&gt;Wang Shilong (wshilong@ddn.com) uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/34774&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/34774&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-4198&quot; title=&quot;Improve IO performance when using DIRECT IO using libaio&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-4198&quot;&gt;&lt;del&gt;LU-4198&lt;/del&gt;&lt;/a&gt; llite: transient page simplification&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 704390928ce55173d2b2fca0e0fe244907d750b2&lt;/p&gt;</comment>
                            <comment id="262908" author="gerrit" created="Sat, 8 Feb 2020 04:07:30 +0000"  >&lt;p&gt;Oleg Drokin (green@whamcloud.com) merged in patch &lt;a href=&quot;https://review.whamcloud.com/8201/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/8201/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-4198&quot; title=&quot;Improve IO performance when using DIRECT IO using libaio&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-4198&quot;&gt;&lt;del&gt;LU-4198&lt;/del&gt;&lt;/a&gt; clio: turn on lockless for some kind of IO&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: 6bce536725efd166d2772f13fe954f271f9c53b8&lt;/p&gt;</comment>
                            <comment id="262909" author="gerrit" created="Sat, 8 Feb 2020 04:07:54 +0000"  >&lt;p&gt;Oleg Drokin (green@whamcloud.com) merged in patch &lt;a href=&quot;https://review.whamcloud.com/32416/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/32416/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-4198&quot; title=&quot;Improve IO performance when using DIRECT IO using libaio&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-4198&quot;&gt;&lt;del&gt;LU-4198&lt;/del&gt;&lt;/a&gt; clio: AIO support for direct IO&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: d1dded6e28473d889a9b24b47cbc804f90dd2956&lt;/p&gt;</comment>
                            <comment id="263557" author="gerrit" created="Wed, 19 Feb 2020 12:02:16 +0000"  >&lt;p&gt;Wang Shilong (wshilong@ddn.com) uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/37621&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/37621&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-4198&quot; title=&quot;Improve IO performance when using DIRECT IO using libaio&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-4198&quot;&gt;&lt;del&gt;LU-4198&lt;/del&gt;&lt;/a&gt; clio: return error for short direct IO&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 9913a5411bb305d617ed3bba6bd5d000ffc11121&lt;/p&gt;</comment>
                            <comment id="265411" author="gerrit" created="Tue, 17 Mar 2020 03:40:57 +0000"  >&lt;p&gt;Oleg Drokin (green@whamcloud.com) merged in patch &lt;a href=&quot;https://review.whamcloud.com/37824/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/37824/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-4198&quot; title=&quot;Improve IO performance when using DIRECT IO using libaio&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-4198&quot;&gt;&lt;del&gt;LU-4198&lt;/del&gt;&lt;/a&gt; clio: Remove pl_owner&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: e880a6fccbc57ae335949c1dd20335359b1cb220&lt;/p&gt;</comment>
                            <comment id="267841" author="nrutman" created="Thu, 16 Apr 2020 18:30:55 +0000"  >&lt;p&gt;Can someone please summarize what the state of this ticket is? The subject seems to have wandered from &quot;Additional stripes on a file does not increase IO performance when using DIRECT IO&quot; to lockless DIO to AIO. Johann&apos;s and Jinshan&apos;s comments seem to be at odds as to whether DIO stripes are parallelized or not. &lt;/p&gt;

&lt;p&gt;Rough testing (DIO, not AIO) seems to indicate they are not. &lt;/p&gt;</comment>
                            <comment id="272513" author="adilger" created="Wed, 10 Jun 2020 19:40:35 +0000"  >&lt;p&gt;Nathan, I think regardless of how this ticket started, it ended up being used to land the AIO/DIO support for 2.14.  If there are still issues that need to be addressed, they should be done in the context of a new ticket.&lt;/p&gt;</comment>
                            <comment id="277394" author="wshilong" created="Thu, 13 Aug 2020 00:03:39 +0000"  >&lt;p&gt;&lt;a href=&quot;https://jira.whamcloud.com/secure/ViewProfile.jspa?name=nrutman&quot; class=&quot;user-hover&quot; rel=&quot;nrutman&quot;&gt;nrutman&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I think &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-13798&quot; title=&quot;Improve direct i/o performance with multiple stripes: Submit all stripes of a DIO and then wait&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-13798&quot;&gt;&lt;del&gt;LU-13798&lt;/del&gt;&lt;/a&gt; will parallelize DIO across stripes, and we see big improvements with that, so I suggest you take a look there.&lt;/p&gt;</comment>
                            <comment id="278114" author="gerrit" created="Wed, 26 Aug 2020 13:15:57 +0000"  >&lt;p&gt;Mike Pershin (mpershin@whamcloud.com) uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/39733&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/39733&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-4198&quot; title=&quot;Improve IO performance when using DIRECT IO using libaio&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-4198&quot;&gt;&lt;del&gt;LU-4198&lt;/del&gt;&lt;/a&gt; clio: turn on lockless for some kind of IO&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: b2_12&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 2acd73b09c86ca7ee436152274f9d1beab1ad571&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10010">
                    <name>Duplicate</name>
                                                                <inwardlinks description="is duplicated by">
                                        <issuelink>
            <issuekey id="59962">LU-13786</issuekey>
        </issuelink>
                            </inwardlinks>
                                    </issuelinktype>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                            <outwardlinks description="is related to ">
                                        <issuelink>
            <issuekey id="56732">LU-12687</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="60009">LU-13798</issuekey>
        </issuelink>
                            </outwardlinks>
                                                                <inwardlinks description="is related to">
                                        <issuelink>
            <issuekey id="60375">LU-13900</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="10666">LU-247</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="59643">LU-13697</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="45743">LU-9409</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="49437">LU-10278</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="60016">LU-13801</issuekey>
        </issuelink>
                            </inwardlinks>
                                    </issuelinktype>
                    </issuelinks>
                <attachments>
                            <attachment id="14079" name="JinshanPatchesTesting.xlsx" size="144263" author="brett" created="Mon, 10 Feb 2014 20:08:23 +0000"/>
                            <attachment id="31517" name="LU-4198.png" size="131629" author="sihara" created="Mon, 26 Nov 2018 03:10:47 +0000"/>
                            <attachment id="13729" name="fio.direct.xls" size="13312" author="brett" created="Fri, 1 Nov 2013 20:34:34 +0000"/>
                            <attachment id="27109" name="vvp_io.c.dio_i_size.patch" size="1954" author="paf" created="Fri, 23 Jun 2017 15:34:13 +0000"/>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzw7nb:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>11385</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>