<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 02:26:07 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
<language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-9429] parallel-scale test_parallel_grouplock: test failed to respond and timed out</title>
                <link>https://jira.whamcloud.com/browse/LU-9429</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;Seen everywhere test_parallel_grouplock was tested in tag 56 testing (2.9.56):&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sessions/732dde3a-7e28-437b-8865-c350e9438ee4&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sessions/732dde3a-7e28-437b-8865-c350e9438ee4&lt;/a&gt;&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sessions/f14b71e9-9eda-4053-814d-fdf644925d29&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sessions/f14b71e9-9eda-4053-814d-fdf644925d29&lt;/a&gt;&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sessions/cb12c60c-613a-44b3-bfef-03c0651d2607&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sessions/cb12c60c-613a-44b3-bfef-03c0651d2607&lt;/a&gt;&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sessions/30cc75b6-594f-4255-accf-24fe11bdd565&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sessions/30cc75b6-594f-4255-accf-24fe11bdd565&lt;/a&gt;&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sessions/20ddc92f-b9fe-482d-ac1b-1602a513c824&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sessions/20ddc92f-b9fe-482d-ac1b-1602a513c824&lt;/a&gt;&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sessions/4f7e260b-bce2-4834-b77c-a1b47527d05a&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sessions/4f7e260b-bce2-4834-b77c-a1b47527d05a&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With tag 56 testing, parallel-scale had two subtests (test_cascading_rw &amp;amp; test_parallel_grouplock) that failed 6 of 6 times.&lt;/p&gt;

&lt;p&gt;With tag 52-55 testing, some instances of test_cascading_rw failing were seen, but test_parallel_grouplock passed 100% of the time.&lt;/p&gt;

&lt;p&gt;With all 6 failures, we saw this sequence:&lt;/p&gt;

&lt;p&gt;test_cascading_rw: cascading_rw failed! 1 (covered by &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-9367&quot; title=&quot;parallel-scale test_cascading_rw: cascading_rw failed! 1 &quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-9367&quot;&gt;&lt;del&gt;LU-9367&lt;/del&gt;&lt;/a&gt;):&lt;/p&gt;

&lt;p&gt;From test_log:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;/usr/lib64/lustre/tests/cascading_rw is running with 4 process(es) in DEBUG mode
22:55:22: Running test #/usr/lib64/lustre/tests/cascading_rw(iter 0)
[onyx-48vm1:12185] *** Process received signal ***
[onyx-48vm1:12185] Signal: Floating point exception (8)
[onyx-48vm1:12185] Signal code: Integer divide-by-zero (1)
[onyx-48vm1:12185] Failing at address: 0x4024c8
[onyx-48vm1:12185] [ 0] /lib64/libpthread.so.0(+0xf370) [0x7f6060bb8370]
[onyx-48vm1:12185] [ 1] /usr/lib64/lustre/tests/cascading_rw() [0x4024c8]
[onyx-48vm1:12185] [ 2] /usr/lib64/lustre/tests/cascading_rw() [0x402be0]
[onyx-48vm1:12185] [ 3] /usr/lib64/lustre/tests/cascading_rw() [0x40158e]
[onyx-48vm1:12185] [ 4] /lib64/libc.so.6(__libc_start_main+0xf5) [0x7f6060809b35]
[onyx-48vm1:12185] [ 5] /usr/lib64/lustre/tests/cascading_rw() [0x40169d]
[onyx-48vm1:12185] *** End of error message ***
[onyx-48vm2.onyx.hpdd.intel.com][[59688,1],1][btl_tcp_frag.c:215:mca_btl_tcp_frag_recv] mca_btl_tcp_frag_recv: readv failed: Connection reset by peer (104)
--------------------------------------------------------------------------
mpirun noticed that process rank 0 with PID 12185 on node onyx-48vm1.onyx.hpdd.intel.com exited on signal 8 (Floating point exception).
--------------------------------------------------------------------------
 parallel-scale test_cascading_rw: @@@@@@ FAIL: cascading_rw failed! 1 
  Trace dump:
  = /usr/lib64/lustre/tests/test-framework.sh:4931:error()
  = /usr/lib64/lustre/tests/functions.sh:740:run_cascading_rw()
  = /usr/lib64/lustre/tests/parallel-scale.sh:130:test_cascading_rw()
  = /usr/lib64/lustre/tests/test-framework.sh:5207:run_one()
  = /usr/lib64/lustre/tests/test-framework.sh:5246:run_one_logged()
  = /usr/lib64/lustre/tests/test-framework.sh:5093:run_test()
  = /usr/lib64/lustre/tests/parallel-scale.sh:132:main()
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The test_cascading_rw failure was then followed by:&lt;/p&gt;

&lt;p&gt;test_parallel_grouplock: test failed to respond and timed out &lt;/p&gt;

&lt;p&gt;From test_log:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;parallel_grouplock subtests -t 11 PASS
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Note: Subtest 11 passed only on non-DNE configs.&lt;/p&gt;

&lt;p&gt;Also from test_log:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;CMD: trevis-52vm1.trevis.hpdd.intel.com,trevis-52vm2,trevis-52vm7,trevis-52vm8 lctl clear
+ /usr/lib64/lustre/tests/parallel_grouplock -g -v -d /mnt/lustre/d0.parallel_grouplock -t 12
+ chmod 0777 /mnt/lustre
drwxrwxrwx 5 root root 4096 Apr 25 13:00 /mnt/lustre
+ su mpiuser sh -c &quot;/usr/lib64/compat-openmpi16/bin/mpirun --mca btl tcp,self --mca btl_tcp_if_include eth0 -mca boot ssh -machinefile /tmp/parallel-scale.machines -np 5 /usr/lib64/lustre/tests/parallel_grouplock -g -v -d /mnt/lustre/d0.parallel_grouplock -t 12 &quot;
/usr/lib64/lustre/tests/parallel_grouplock is running with 5 task(es) in DEBUG mode
23:38:55: Running test #/usr/lib64/lustre/tests/parallel_grouplock(iter 0)
23:38:55:	Beginning subtest 12
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;This was the last activity seen before the test_parallel_grouplock timeout.  Nothing obvious was found in any of the console or dmesg logs.&lt;/p&gt;</description>
                <environment>all full tests:&lt;br/&gt;
clients: EL7 &amp;amp; SLES12, master branch, v2.9.56.11, b3565&lt;br/&gt;
&amp;nbsp;&amp;nbsp;(servers: ldiskfs &amp;amp; zfs, DNE &amp;amp; non-DNE)&lt;br/&gt;
&lt;br/&gt;
one interop test: &lt;br/&gt;
clients: EL7, master branch, v2.9.56.11, b3565 &lt;br/&gt;
&amp;nbsp;&amp;nbsp;(servers: ldiskfs, b2_9 branch, v2.9.0, b22)&lt;br/&gt;
</environment>
        <key id="45812">LU-9429</key>
            <summary>parallel-scale test_parallel_grouplock: test failed to respond and timed out</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="3" iconUrl="https://jira.whamcloud.com/images/icons/priorities/major.svg">Major</priority>
                        <status id="1" iconUrl="https://jira.whamcloud.com/images/icons/statuses/open.png" description="The issue is open and ready for the assignee to start work on it.">Open</status>
                    <statusCategory id="2" key="new" colorName="default"/>
                                    <resolution id="-1">Unresolved</resolution>
                                        <assignee username="bobijam">Zhenyu Xu</assignee>
                                    <reporter username="jcasper">James Casper</reporter>
                        <labels>
                            <label>always_except</label>
                    </labels>
                <created>Tue, 2 May 2017 16:55:53 +0000</created>
                <updated>Wed, 4 Jan 2023 19:55:38 +0000</updated>
                                            <version>Lustre 2.10.0</version>
                    <version>Lustre 2.11.0</version>
                                                        <due></due>
                            <votes>0</votes>
                                    <watches>9</watches>
                                                                            <comments>
                            <comment id="194356" author="pjones" created="Wed, 3 May 2017 18:18:27 +0000"  >&lt;p&gt;Bobijam&lt;/p&gt;

&lt;p&gt;Could you please advise on this one?&lt;/p&gt;

&lt;p&gt;Thanks&lt;/p&gt;

&lt;p&gt;Peter&lt;/p&gt;</comment>
                            <comment id="194430" author="gerrit" created="Thu, 4 May 2017 12:43:08 +0000"  >&lt;p&gt;Bobi Jam (bobijam@hotmail.com) uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/26943&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/26943&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-9429&quot; title=&quot;parallel-scale test_parallel_grouplock: test failed to respond and timed out&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-9429&quot;&gt;LU-9429&lt;/a&gt; mpi: parallel_grouplock.c group_test4 hung&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: cd96ac0012419916adb473cb9e2a7e21c6f8c963&lt;/p&gt;</comment>
                            <comment id="194457" author="casperjx" created="Thu, 4 May 2017 15:06:09 +0000"  >&lt;p&gt;parallel-scale test_parallel_grouplock TIMEOUTs in last 16 months (in master):&lt;br/&gt;
&amp;gt;       2016-01-01 to 2017-02-02:   0 occurrences&lt;br/&gt;
&amp;gt;       2017-02-03 to 2017-04-05: 18 occurrences&lt;br/&gt;
&amp;gt;                     since 2017-04-05: 87 occurrences&lt;br/&gt;
&amp;gt;  &lt;br/&gt;
&amp;gt; tag 55 test (b3550): 2017-04-05 (parallel_grouplock 100% passing)&lt;br/&gt;
&amp;gt;  &lt;br/&gt;
&amp;gt; landed 2017-04-08:&lt;br/&gt;
&amp;gt; &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-8998&quot; title=&quot;Progressive File Layout (PFL)&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-8998&quot;&gt;&lt;del&gt;LU-8998&lt;/del&gt;&lt;/a&gt; pfl: Basic data structures for composite layout &#8212; jinshan.xiong / gitweb&lt;br/&gt;
&amp;gt; &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-8998&quot; title=&quot;Progressive File Layout (PFL)&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-8998&quot;&gt;&lt;del&gt;LU-8998&lt;/del&gt;&lt;/a&gt; pfl: enhance PFID EA for PFL &#8212; jinshan.xiong / gitweb&lt;br/&gt;
&amp;gt; &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-8998&quot; title=&quot;Progressive File Layout (PFL)&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-8998&quot;&gt;&lt;del&gt;LU-8998&lt;/del&gt;&lt;/a&gt; pfl: layout LFSCK handles PFL file &#8212; jinshan.xiong / gitweb&lt;br/&gt;
&amp;gt; &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-8998&quot; title=&quot;Progressive File Layout (PFL)&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-8998&quot;&gt;&lt;del&gt;LU-8998&lt;/del&gt;&lt;/a&gt; pfl: test cases for lfsck on PFL &#8212; jinshan.xiong / gitweb&lt;br/&gt;
&amp;gt; &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-9008&quot; title=&quot;Dynamic layout modification during writes&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-9008&quot;&gt;&lt;del&gt;LU-9008&lt;/del&gt;&lt;/a&gt; pfl: dynamic layout modification with write/truncate &#8212; jinshan.xiong / gitweb&lt;br/&gt;
&amp;gt; &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-9165&quot; title=&quot;MDS handling of PFL layout initialization&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-9165&quot;&gt;&lt;del&gt;LU-9165&lt;/del&gt;&lt;/a&gt; pfl: MDS handling of write intent IT_LAYOUT RPC &#8212; jinshan.xiong / gitweb&lt;br/&gt;
&amp;gt;  &lt;br/&gt;
&amp;gt; &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-8998&quot; title=&quot;Progressive File Layout (PFL)&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-8998&quot;&gt;&lt;del&gt;LU-8998&lt;/del&gt;&lt;/a&gt; clio: Client side implementation for PFL &#8212; jinshan.xiong / gitweb&lt;br/&gt;
&amp;gt; &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-8998&quot; title=&quot;Progressive File Layout (PFL)&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-8998&quot;&gt;&lt;del&gt;LU-8998&lt;/del&gt;&lt;/a&gt; lfs: user space tools for PFL &#8212; jinshan.xiong / gitweb&lt;br/&gt;
&amp;gt; &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-8998&quot; title=&quot;Progressive File Layout (PFL)&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-8998&quot;&gt;&lt;del&gt;LU-8998&lt;/del&gt;&lt;/a&gt; docs: man pages for tools of PFL &#8212; jinshan.xiong / gitweb&lt;br/&gt;
&amp;gt; &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-8998&quot; title=&quot;Progressive File Layout (PFL)&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-8998&quot;&gt;&lt;del&gt;LU-8998&lt;/del&gt;&lt;/a&gt; tests: test scripts for PFL &#8212; jinshan.xiong / gitweb&lt;br/&gt;
&amp;gt;  &lt;br/&gt;
&amp;gt; tag 56 test (b3565): 2017-04-23 (parallel_grouplock 100% failing)&lt;/p&gt;</comment>
                            <comment id="196780" author="adilger" created="Tue, 23 May 2017 18:04:20 +0000"  >&lt;p&gt;Will this MPI application failure be fixed by &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-9490&quot; title=&quot;MPI-IO Lustre ADIO driver gets Lustre layout parameters incorrectly &quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-9490&quot;&gt;&lt;del&gt;LU-9490&lt;/del&gt;&lt;/a&gt; patch  &lt;a href=&quot;https://review.whamcloud.com/27183&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/27183&lt;/a&gt;&lt;br/&gt;
&quot;&lt;tt&gt;&lt;a href=&quot;https://jira.whamcloud.com/browse/LU-9490&quot; title=&quot;MPI-IO Lustre ADIO driver gets Lustre layout parameters incorrectly &quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-9490&quot;&gt;&lt;del&gt;LU-9490&lt;/del&gt;&lt;/a&gt; llite: return v1/v3 layout for legacy app&lt;/tt&gt;&quot; ?  We really shouldn&apos;t be breaking MPI applications due to PFL, and I think a workaround in the Lustre code is warranted.&lt;/p&gt;</comment>
                            <comment id="196861" author="bobijam" created="Wed, 24 May 2017 08:52:41 +0000"  >&lt;p&gt;I think the group lock test hang won&apos;t be fixed by patch #27183.&lt;/p&gt;

&lt;p&gt;Partial OST object instantiation triggers a layout change when an uninitialized component extent is written, and the layout is fetched and refreshed so the IO can continue. Meanwhile, taking the group lock beforehand increases lov_io::lo_active_ios (decreased in cl_put_grouplock), so the layout-refreshing IO encounters this lo_active_ios in lov_conf_set() and waits for lo_active_ios to drop to 0. This implies that there should be no race between taking the group lock and IO that cares about layout change (vvp_io_init() will call ll_layout_refresh() if !io-&amp;gt;ci_ignore_layout).&lt;/p&gt;</comment>
                            <comment id="198346" author="adilger" created="Tue, 6 Jun 2017 17:44:49 +0000"  >&lt;p&gt;Will this test program be fixed by patch &lt;a href=&quot;https://review.whamcloud.com/26646&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/26646&lt;/a&gt; &quot;&lt;a href=&quot;https://jira.whamcloud.com/browse/LU-9344&quot; title=&quot;sanity test_244: sendfile_grouplock test12() test hung&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-9344&quot;&gt;&lt;del&gt;LU-9344&lt;/del&gt;&lt;/a&gt; test: hung with sendfile_grouplock test12()&quot; because group locking always instantiates the PFL components?&lt;/p&gt;</comment>
                            <comment id="198601" author="bobijam" created="Thu, 8 Jun 2017 08:25:34 +0000"  >&lt;p&gt;I tried it on my VM; it shows that the hang is not caused by the PFL issue. It looks like those write threads are waiting for extent locks which are blocked by the group lock.&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;parallel_grou S 0000000000000001     0 21418  21416 0x00000080
 ffff8800296cf788 0000000000000086 ccf27c28fd06e26f 000006b4ffffff9d
 593905fb000053aa 0000000000000000 ffff880000000001 00000000fffffffc
 ffff8800296cf798 ffff88001b92b0c4 ffff88001cf59060 ffff8800296cffd8
Call Trace:
 [&amp;lt;ffffffffa064e38d&amp;gt;] ldlm_completion_ast+0x67d/0x9a0 [ptlrpc]
 [&amp;lt;ffffffff810640e0&amp;gt;] ? default_wake_function+0x0/0x20
 [&amp;lt;ffffffffa0648506&amp;gt;] ldlm_cli_enqueue_fini+0x936/0xe30 [ptlrpc]
 [&amp;lt;ffffffffa06699d1&amp;gt;] ? ptlrpc_set_destroy+0x2d1/0x450 [ptlrpc]
 [&amp;lt;ffffffffa064c88d&amp;gt;] ldlm_cli_enqueue+0x3ad/0x7d0 [ptlrpc]
 [&amp;lt;ffffffffa064dd10&amp;gt;] ? ldlm_completion_ast+0x0/0x9a0 [ptlrpc]
 [&amp;lt;ffffffffa098fab0&amp;gt;] ? osc_ldlm_blocking_ast+0x0/0x3c0 [osc]
 [&amp;lt;ffffffffa098f510&amp;gt;] ? osc_ldlm_glimpse_ast+0x0/0x340 [osc]
 [&amp;lt;ffffffffa097edff&amp;gt;] osc_enqueue_base+0x1ff/0x630 [osc]
 [&amp;lt;ffffffffa099056d&amp;gt;] osc_lock_enqueue+0x2bd/0xa00 [osc]
 [&amp;lt;ffffffffa0991f90&amp;gt;] ? osc_lock_upcall+0x0/0x530 [osc]
 [&amp;lt;ffffffffa04d8b9b&amp;gt;] cl_lock_enqueue+0x6b/0x120 [obdclass]
 [&amp;lt;ffffffffa0407e17&amp;gt;] lov_lock_enqueue+0x97/0x140 [lov]
 [&amp;lt;ffffffffa04d8b9b&amp;gt;] cl_lock_enqueue+0x6b/0x120 [obdclass]
 [&amp;lt;ffffffffa04d958b&amp;gt;] cl_lock_request+0x7b/0x200 [obdclass]
 [&amp;lt;ffffffffa04dd301&amp;gt;] cl_io_lock+0x381/0x3d0 [obdclass]
 [&amp;lt;ffffffffa04dd466&amp;gt;] cl_io_loop+0x116/0xb20 [obdclass]
 [&amp;lt;ffffffffa0663066&amp;gt;] ? interval_insert+0x296/0x410 [ptlrpc]
 [&amp;lt;ffffffffa0e7d9e1&amp;gt;] ll_file_io_generic+0x231/0xaa0 [lustre]
 [&amp;lt;ffffffffa0e8021d&amp;gt;] ll_file_aio_write+0x13d/0x280 [lustre]
 [&amp;lt;ffffffffa0e8049a&amp;gt;] ll_file_write+0x13a/0x270 [lustre]
 [&amp;lt;ffffffff81189ef8&amp;gt;] vfs_write+0xb8/0x1a0
 [&amp;lt;ffffffff8118b40f&amp;gt;] ? fget_light_pos+0x3f/0x50
 [&amp;lt;ffffffff8118aa31&amp;gt;] sys_write+0x51/0xb0
 [&amp;lt;ffffffff8100b0d2&amp;gt;] system_call_fastpath+0x16/0x1b
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
                            <comment id="206187" author="adilger" created="Wed, 23 Aug 2017 18:39:26 +0000"  >&lt;p&gt;Bobijam, could you please provide a brief description of what is needed to fix this problem, and how much work that will be to implement? &lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10010">
                    <name>Duplicate</name>
                                                                <inwardlinks description="is duplicated by">
                                        <issuelink>
            <issuekey id="46112">LU-9511</issuekey>
        </issuelink>
                            </inwardlinks>
                                    </issuelinktype>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                            <outwardlinks description="is related to ">
                                        <issuelink>
            <issuekey id="45980">LU-9479</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="45590">LU-9367</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="45506">LU-9344</issuekey>
        </issuelink>
                            </outwardlinks>
                                                                <inwardlinks description="is related to">
                                        <issuelink>
            <issuekey id="47484">LU-9793</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="48240">LU-9963</issuekey>
        </issuelink>
                            </inwardlinks>
                                    </issuelinktype>
                    </issuelinks>
                <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzzbnj:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>