<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 02:58:08 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-13073] Multiple MDS deadlocks (in lod_qos_prep_create) after OSS crash</title>
                <link>https://jira.whamcloud.com/browse/LU-13073</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;After hitting a first issue on an OSS last night, reported in &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-13072&quot; title=&quot;High OSS load due to possible deadlock w/ ofd_create_hdl and ofd_quotactl backtraces&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-13072&quot;&gt;&lt;del&gt;LU-13072&lt;/del&gt;&lt;/a&gt;, several of our MDS started to exhibit blocked threads and locking issues today. Three of our four MDS had to be rebooted, and only tonight do things seem to have stabilized for us. A quick search of this Jira shows that this is not a new problem:&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://jira.whamcloud.com/browse/LU-9688&quot; title=&quot;Stuck MDT in lod_qos_prep_create&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-9688&quot;&gt;&lt;del&gt;LU-9688&lt;/del&gt;&lt;/a&gt; Stuck MDT in lod_qos_prep_create&lt;br/&gt;
 &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-10697&quot; title=&quot;MDT locking issues after failing over OSTs from hung OSS&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-10697&quot;&gt;LU-10697&lt;/a&gt; MDT locking issues after failing over OSTs from hung OSS&lt;br/&gt;
 &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-11091&quot; title=&quot;MDS threads stuck in lod_qos_prep_create after OSS crash&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-11091&quot;&gt;&lt;del&gt;LU-11091&lt;/del&gt;&lt;/a&gt; MDS threads stuck in lod_qos_prep_create after OSS crash&lt;br/&gt;
 &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-12360&quot; title=&quot;Can&amp;#39;t restart filesystem (2.12) even with abort_recov&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-12360&quot;&gt;LU-12360&lt;/a&gt; Can&apos;t restart filesystem (2.12) even with abort_recov&lt;/p&gt;

&lt;p&gt;So it looks like this is not fixed in 2.12.3, and as mentioned by NASA in &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-11091&quot; title=&quot;MDS threads stuck in lod_qos_prep_create after OSS crash&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-11091&quot;&gt;&lt;del&gt;LU-11091&lt;/del&gt;&lt;/a&gt;, the MDS deadlock can happen hours after the OSS crash, which is exactly what happened to us today on Fir.&lt;/p&gt;

&lt;p&gt;Note that &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-12360&quot; title=&quot;Can&amp;#39;t restart filesystem (2.12) even with abort_recov&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-12360&quot;&gt;LU-12360&lt;/a&gt; is a bit different (and we don&apos;t see that specific problem anymore), but Patrick discussed a possible issue there that might still be related to our remaining problem here.&lt;/p&gt;

&lt;p&gt;In our case, shortly after the reboot of the OSS &lt;tt&gt;fir-io8-s1&lt;/tt&gt; this morning, a first MDS, &lt;tt&gt;fir-md1-s4&lt;/tt&gt;, got stuck. Later, around noon, a second MDS, &lt;tt&gt;fir-md1-s2&lt;/tt&gt;, had the same issue. Finally, this evening, MDT0000 on &lt;tt&gt;fir-md1-s1&lt;/tt&gt; was impacted too, leaving most of the filesystem inaccessible.&lt;/p&gt;

&lt;p&gt;The typical errors/backtraces we get on an MDS in this situation are:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[190125.381344] LustreError: 22258:0:(ldlm_request.c:129:ldlm_expired_completion_wait()) ### lock timed out (enqueued at 1576181573, 300s ago); not entering recovery in server code, just going back to sleep ns: mdt-fir-MDT0001_UUID lock: ffff9e5d81e0e780/0x91615908336938c8 lrc: 3/0,1 mode: --/CW res: [0x240038caa:0x288e:0x0].0x0 bits 0x2/0x0 rrc: 39 type: IBT flags: 0x40210400000020 nid: local remote: 0x0 expref: -99 pid: 22258 timeout: 0 lvb_type: 0
[190125.381346] LustreError: 92260:0:(ldlm_request.c:129:ldlm_expired_completion_wait()) ...
[190568.849438] LNet: Service thread pid 22622 was inactive for 601.55s. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes:
[190568.866461] Pid: 22622, comm: mdt03_009 3.10.0-957.27.2.el7_lustre.pl2.x86_64 #1 SMP Thu Nov 7 15:26:16 PST 2019
[190568.876717] Call Trace:
[190568.879275]  [&amp;lt;ffffffff9dd88c47&amp;gt;] call_rwsem_down_write_failed+0x17/0x30
[190568.886100]  [&amp;lt;ffffffffc15ce537&amp;gt;] lod_qos_statfs_update+0x97/0x2b0 [lod]
[190568.892946]  [&amp;lt;ffffffffc15d06da&amp;gt;] lod_qos_prep_create+0x16a/0x1890 [lod]
[190568.899768]  [&amp;lt;ffffffffc15d2015&amp;gt;] lod_prepare_create+0x215/0x2e0 [lod]
[190568.906432]  [&amp;lt;ffffffffc15c1e1e&amp;gt;] lod_declare_striped_create+0x1ee/0x980 [lod]
[190568.913774]  [&amp;lt;ffffffffc15c66f4&amp;gt;] lod_declare_create+0x204/0x590 [lod]
[190568.920435]  [&amp;lt;ffffffffc163cca2&amp;gt;] mdd_declare_create_object_internal+0xe2/0x2f0 [mdd]
[190568.928419]  [&amp;lt;ffffffffc162c6dc&amp;gt;] mdd_declare_create+0x4c/0xcb0 [mdd]
[190568.934997]  [&amp;lt;ffffffffc1630067&amp;gt;] mdd_create+0x847/0x14e0 [mdd]
[190568.941041]  [&amp;lt;ffffffffc14cd5ff&amp;gt;] mdt_reint_open+0x224f/0x3240 [mdt]
[190568.947539]  [&amp;lt;ffffffffc14c0693&amp;gt;] mdt_reint_rec+0x83/0x210 [mdt]
[190568.953669]  [&amp;lt;ffffffffc149d1b3&amp;gt;] mdt_reint_internal+0x6e3/0xaf0 [mdt]
[190568.960331]  [&amp;lt;ffffffffc14a9a92&amp;gt;] mdt_intent_open+0x82/0x3a0 [mdt]
[190568.966642]  [&amp;lt;ffffffffc14a7bb5&amp;gt;] mdt_intent_policy+0x435/0xd80 [mdt]
[190568.973204]  [&amp;lt;ffffffffc0f54d46&amp;gt;] ldlm_lock_enqueue+0x356/0xa20 [ptlrpc]
[190568.980077]  [&amp;lt;ffffffffc0f7d336&amp;gt;] ldlm_handle_enqueue0+0xa56/0x15f0 [ptlrpc]
[190568.987271]  [&amp;lt;ffffffffc1005a12&amp;gt;] tgt_enqueue+0x62/0x210 [ptlrpc]
[190568.993547]  [&amp;lt;ffffffffc100a36a&amp;gt;] tgt_request_handle+0xaea/0x1580 [ptlrpc]
[190569.000581]  [&amp;lt;ffffffffc0fb124b&amp;gt;] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc]
[190569.008382]  [&amp;lt;ffffffffc0fb4bac&amp;gt;] ptlrpc_main+0xb2c/0x1460 [ptlrpc]
[190569.014810]  [&amp;lt;ffffffff9dac2e81&amp;gt;] kthread+0xd1/0xe0
[190569.019812]  [&amp;lt;ffffffff9e177c24&amp;gt;] ret_from_fork_nospec_begin+0xe/0x21
[190569.026374]  [&amp;lt;ffffffffffffffff&amp;gt;] 0xffffffffffffffff
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
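&lt;p&gt;For what it&apos;s worth, my reading of this trace: one service thread takes the QoS statfs rwsem for write inside &lt;tt&gt;lod_qos_statfs_update()&lt;/tt&gt; and then blocks indefinitely waiting on the dead OSS, so every other create thread queues up behind it in &lt;tt&gt;call_rwsem_down_write_failed&lt;/tt&gt;. If that reading is right, a fix presumably has to avoid blocking forever while that lock is held. Below is a minimal userspace sketch of the pattern with plain pthreads (an illustration, not Lustre code; &lt;tt&gt;wait_for_dead_oss()&lt;/tt&gt; is a hypothetical stand-in for the statfs RPC that never completes):&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;/* Analogue of the hang, not Lustre code: one writer holds the rwlock
 * while blocked on a dead peer, and every other &quot;create&quot; thread piles
 * up behind it, like the mdt threads in call_rwsem_down_write_failed.
 * Build: gcc -pthread sketch.c -o sketch (it hangs forever by design). */
#include &amp;lt;pthread.h&amp;gt;
#include &amp;lt;stddef.h&amp;gt;
#include &amp;lt;unistd.h&amp;gt;

static pthread_rwlock_t qos_lock = PTHREAD_RWLOCK_INITIALIZER;

/* hypothetical stand-in for the statfs RPC to the crashed OSS */
static void wait_for_dead_oss(void)
{
        for (;;)
                sleep(60);                /* never returns: OSS is down */
}

/* plays the role of lod_qos_statfs_update() */
static void *statfs_updater(void *arg)
{
        pthread_rwlock_wrlock(&amp;amp;qos_lock);
        wait_for_dead_oss();              /* blocks forever, lock held */
        pthread_rwlock_unlock(&amp;amp;qos_lock); /* never reached */
        return NULL;
}

/* plays the role of an mdt thread in lod_qos_prep_create() */
static void *create_thread(void *arg)
{
        pthread_rwlock_wrlock(&amp;amp;qos_lock); /* queues behind the stuck writer */
        pthread_rwlock_unlock(&amp;amp;qos_lock);
        return NULL;
}

int main(void)
{
        pthread_t t[4];
        int i;

        pthread_create(&amp;amp;t[0], NULL, statfs_updater, NULL);
        sleep(1);                         /* let the updater take the lock */
        for (i = 1; i &amp;lt; 4; i++)           /* these all hang; load climbs */
                pthread_create(&amp;amp;t[i], NULL, create_thread, NULL);
        for (i = 0; i &amp;lt; 4; i++)
                pthread_join(t[i], NULL);
        return 0;
}
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;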
&lt;p&gt;Sometimes, threads do eventually complete after a long time:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[191025.465281] LNet: Service thread pid 41905 completed after 687.75s. This indicates the system was overloaded (too many service threads, or there were not enough hardware resources).
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;The load on the server increases tremendously even though it is doing almost nothing. Only very rarely does the MDS recover by itself, and when it does, in my experience there is a good chance it will hang again a few hours later.&lt;/p&gt;

&lt;p&gt;Our current workaround is to stop/start the impacted MDT (see the sketch below). This is usually fast because in 2.12.3 recovery works better and no longer hangs at the end; the load then drops and things work normally again.&lt;/p&gt;
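&lt;p&gt;For the record, the stop/start itself is just a umount/mount of the MDT on the affected MDS, along these lines (the device path and mount point below are only placeholders for our setup):&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;# on the affected MDS, e.g. fir-md1-s4 for MDT0003
umount /mnt/mdt3                             # stop the stuck MDT
mount -t lustre /dev/mapper/mdt3 /mnt/mdt3   # start it again; recovery runs, then the load drops
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;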

&lt;p&gt;I&apos;m opening this ticket with our logs from this incident, in the hope that we can find the root cause of this recurring MDS issue following an OSS problem.&lt;/p&gt;

&lt;p&gt;&lt;b&gt;First stuck MDT: MDT0003 on MDS &lt;tt&gt;fir-md1-s4&lt;/tt&gt; at 2019-12-12-08:20:42:&lt;/b&gt;&lt;/p&gt;
&lt;ul&gt;
	&lt;li&gt;vmcore uploaded to the FTP as &lt;tt&gt;vmcore_fir-md1-s4_2019-12-12-08-20-42&lt;/tt&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;      KERNEL: /usr/lib/debug/lib/modules/3.10.0-957.27.2.el7_lustre.pl2.x86_64/vmlinux
    DUMPFILE: vmcore_fir-md1-s4_2019-12-12-08-20-42  [PARTIAL DUMP]
        CPUS: 48
        DATE: Thu Dec 12 08:20:33 2019
      UPTIME: 2 days, 01:36:01
LOAD AVERAGE: 112.81, 86.62, 53.87
       TASKS: 1658
    NODENAME: fir-md1-s4
     RELEASE: 3.10.0-957.27.2.el7_lustre.pl2.x86_64
     VERSION: #1 SMP Thu Nov 7 15:26:16 PST 2019
     MACHINE: x86_64  (1996 Mhz)
      MEMORY: 255.6 GB
       PANIC: &quot;SysRq : Trigger a crash&quot;
         PID: 92886
     COMMAND: &quot;bash&quot;
        TASK: ffff9521056fe180  [THREAD_INFO: ffff952212ae0000]
         CPU: 18
       STATE: TASK_RUNNING (SYSRQ)
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;ul&gt;
	&lt;li&gt;output of &quot;foreach bt&quot; (generated as shown below) as &lt;span class=&quot;nobr&quot;&gt;&lt;a href=&quot;https://jira.whamcloud.com/secure/attachment/34024/34024_foreach-bt_fir-md1-s4_2019-12-12-08-20-42.txt&quot; title=&quot;foreach-bt_fir-md1-s4_2019-12-12-08-20-42.txt attached to LU-13073&quot;&gt;foreach-bt_fir-md1-s4_2019-12-12-08-20-42.txt&lt;/a&gt;&lt;/span&gt;&lt;/li&gt;
	&lt;li&gt;vmcore-dmesg.txt as &lt;span class=&quot;nobr&quot;&gt;&lt;a href=&quot;https://jira.whamcloud.com/secure/attachment/34023/34023_vmcore-dmesg_fir-md1-s4_2019-12-12-08-20-42.txt&quot; title=&quot;vmcore-dmesg_fir-md1-s4_2019-12-12-08-20-42.txt attached to LU-13073&quot;&gt;vmcore-dmesg_fir-md1-s4_2019-12-12-08-20-42.txt&lt;/a&gt;&lt;/span&gt;&lt;/li&gt;
&lt;/ul&gt;
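&lt;p&gt;For reference, the summary above and the attached &quot;foreach bt&quot; listing come straight from the crash utility, roughly as follows (debuginfo path as on our systems):&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;# crash /usr/lib/debug/lib/modules/3.10.0-957.27.2.el7_lustre.pl2.x86_64/vmlinux \
        vmcore_fir-md1-s4_2019-12-12-08-20-42
crash&amp;gt; sys                  # system summary shown above
crash&amp;gt; foreach bt &amp;gt; foreach-bt_fir-md1-s4_2019-12-12-08-20-42.txt
crash&amp;gt; quit
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;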


&lt;p&gt;&lt;b&gt;Second stuck MDT: MDT0001 on &lt;tt&gt;fir-md1-s2&lt;/tt&gt; at 2019-12-12-12:36:21:&lt;/b&gt;&lt;/p&gt;
&lt;ul&gt;
	&lt;li&gt;vmcore uploaded to the FTP as &lt;tt&gt;vmcore_fir-md1-s2-2019-12-12-12-36-21&lt;/tt&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;      KERNEL: /usr/lib/debug/lib/modules/3.10.0-957.27.2.el7_lustre.pl2.x86_64/vmlinux
    DUMPFILE: vmcore_fir-md1-s2-2019-12-12-12-36-21  [PARTIAL DUMP]
        CPUS: 48
        DATE: Thu Dec 12 12:36:12 2019
      UPTIME: 2 days, 05:06:59
LOAD AVERAGE: 59.50, 57.71, 52.86
       TASKS: 1267
    NODENAME: fir-md1-s2
     RELEASE: 3.10.0-957.27.2.el7_lustre.pl2.x86_64
     VERSION: #1 SMP Thu Nov 7 15:26:16 PST 2019
     MACHINE: x86_64  (1996 Mhz)
      MEMORY: 255.6 GB
       PANIC: &quot;SysRq : Trigger a crash&quot;
         PID: 97428
     COMMAND: &quot;bash&quot;
        TASK: ffff9e7da2444100  [THREAD_INFO: ffff9e779e61c000]
         CPU: 17
       STATE: TASK_RUNNING (SYSRQ)
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;ul&gt;
	&lt;li&gt;output of &quot;foreach bt&quot; as &lt;span class=&quot;nobr&quot;&gt;&lt;a href=&quot;https://jira.whamcloud.com/secure/attachment/34021/34021_foreach-bt_fir-md1-s2-2019-12-12-12-36-21.txt&quot; title=&quot;foreach-bt_fir-md1-s2-2019-12-12-12-36-21.txt attached to LU-13073&quot;&gt;foreach-bt_fir-md1-s2-2019-12-12-12-36-21.txt&lt;/a&gt;&lt;/span&gt;&lt;/li&gt;
	&lt;li&gt;vmcore-dmesg.txt as &lt;span class=&quot;nobr&quot;&gt;&lt;a href=&quot;https://jira.whamcloud.com/secure/attachment/34020/34020_vmcore-dmesg_fir-md1-s2-2019-12-12-12-36-21.txt&quot; title=&quot;vmcore-dmesg_fir-md1-s2-2019-12-12-12-36-21.txt attached to LU-13073&quot;&gt;vmcore-dmesg_fir-md1-s2-2019-12-12-12-36-21.txt&lt;/a&gt;&lt;/span&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;p&gt;&lt;b&gt;Third stuck MDT: MDT0000 on &lt;tt&gt;fir-md1-s1&lt;/tt&gt;:&lt;/b&gt;&lt;/p&gt;
&lt;ul&gt;
	&lt;li&gt;in this case, we just did a umount/mount of MDT0000 at Dec 12 17:29:39 and it worked. Kernel logs attached as &lt;span class=&quot;nobr&quot;&gt;&lt;a href=&quot;https://jira.whamcloud.com/secure/attachment/34019/34019_fir-md1-s1_2019-12-12_kernel.log&quot; title=&quot;fir-md1-s1_2019-12-12_kernel.log attached to LU-13073&quot;&gt;fir-md1-s1_2019-12-12_kernel.log&lt;/a&gt;&lt;/span&gt;&lt;/li&gt;
	&lt;li&gt;since the last MDT restart, it has been OK&lt;/li&gt;
&lt;/ul&gt;


&lt;p&gt;&lt;b&gt;OSS logs&lt;/b&gt;&lt;/p&gt;
&lt;ul&gt;
	&lt;li&gt;console log for &lt;tt&gt;fir-io8-s1&lt;/tt&gt;, the OSS that originally crashed (from last boot, so you may have to scroll a bit) as &lt;span class=&quot;nobr&quot;&gt;&lt;a href=&quot;https://jira.whamcloud.com/secure/attachment/34022/34022_fir-io8-s1_console.log&quot; title=&quot;fir-io8-s1_console.log attached to LU-13073&quot;&gt;fir-io8-s1_console.log&lt;/a&gt;&lt;/span&gt; (problem reported in &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-13072&quot; title=&quot;High OSS load due to possible deadlock w/ ofd_create_hdl and ofd_quotactl backtraces&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-13072&quot;&gt;&lt;del&gt;LU-13072&lt;/del&gt;&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;


&lt;p&gt;NIDs:&lt;/p&gt;
&lt;ul&gt;
	&lt;li&gt;MDS &lt;tt&gt;fir-md1-s1&lt;/tt&gt;: 10.0.10.51@o2ib7&lt;/li&gt;
	&lt;li&gt;MDS &lt;tt&gt;fir-md1-s2&lt;/tt&gt;: 10.0.10.52@o2ib7&lt;/li&gt;
	&lt;li&gt;MDS &lt;tt&gt;fir-md1-s3&lt;/tt&gt;: 10.0.10.53@o2ib7&lt;/li&gt;
	&lt;li&gt;MDS &lt;tt&gt;fir-md1-s4&lt;/tt&gt;: 10.0.10.54@o2ib7&lt;/li&gt;
	&lt;li&gt;OSS &lt;tt&gt;fir-io8-s1&lt;/tt&gt;: 10.0.10.115@o2ib7&lt;/li&gt;
&lt;/ul&gt;


&lt;p&gt;Let me know if you need anything else.&lt;/p&gt;</description>
                <environment>lustre-2.12.3_4_g142b4d4-1.el7.x86_64, kernel-3.10.0-957.27.2.el7_lustre.pl2.x86_64</environment>
        <key id="57632">LU-13073</key>
            <summary>Multiple MDS deadlocks (in lod_qos_prep_create) after OSS crash</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="2" iconUrl="https://jira.whamcloud.com/images/icons/priorities/critical.svg">Critical</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="bzzz">Alex Zhuravlev</assignee>
                                    <reporter username="sthiell">Stephane Thiell</reporter>
                        <labels>
                    </labels>
                <created>Fri, 13 Dec 2019 07:11:02 +0000</created>
                <updated>Tue, 2 Aug 2022 14:19:06 +0000</updated>
                            <resolved>Thu, 11 Mar 2021 05:13:02 +0000</resolved>
                                    <version>Lustre 2.12.3</version>
                                    <fixVersion>Lustre 2.12.7</fixVersion>
                    <fixVersion>Lustre 2.15.0</fixVersion>
                                        <due></due>
                            <votes>0</votes>
                                    <watches>9</watches>
                                                                            <comments>
                            <comment id="259819" author="pjones" created="Fri, 13 Dec 2019 18:34:33 +0000"  >&lt;p&gt;Hongchao&lt;/p&gt;

&lt;p&gt;Can you please advise?&lt;/p&gt;

&lt;p&gt;Thanks&lt;/p&gt;

&lt;p&gt;Peter&lt;/p&gt;</comment>
                            <comment id="282455" author="pjones" created="Fri, 16 Oct 2020 16:49:41 +0000"  >&lt;p&gt;Alex Zhuravlev (bzzz@whamcloud.com) uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/40274&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/40274&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-13073&quot; title=&quot;Multiple MDS deadlocks (in lod_qos_prep_create) after OSS crash&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-13073&quot;&gt;&lt;del&gt;LU-13073&lt;/del&gt;&lt;/a&gt; osp: don&apos;t block waiting for new objects&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: d618b1491ec898b835d09f9992fc55fd8f3a962f&lt;/p&gt;</comment>
                            <comment id="294489" author="gerrit" created="Wed, 10 Mar 2021 08:03:54 +0000"  >&lt;p&gt;Oleg Drokin (green@whamcloud.com) merged in patch &lt;a href=&quot;https://review.whamcloud.com/40274/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/40274/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-13073&quot; title=&quot;Multiple MDS deadlocks (in lod_qos_prep_create) after OSS crash&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-13073&quot;&gt;&lt;del&gt;LU-13073&lt;/del&gt;&lt;/a&gt; osp: don&apos;t block waiting for new objects&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: 2112ccb3c48ccf86aaf2a61c9f040571a6323f9c&lt;/p&gt;</comment>
                            <comment id="294625" author="pjones" created="Thu, 11 Mar 2021 05:13:02 +0000"  >&lt;p&gt;Landed for 2.15&lt;/p&gt;</comment>
                            <comment id="297717" author="gerrit" created="Fri, 2 Apr 2021 17:35:33 +0000"  >&lt;p&gt;Etienne AUJAMES (eaujames@ddn.com) uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/43202&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/43202&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-13073&quot; title=&quot;Multiple MDS deadlocks (in lod_qos_prep_create) after OSS crash&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-13073&quot;&gt;&lt;del&gt;LU-13073&lt;/del&gt;&lt;/a&gt; osp: don&apos;t block waiting for new objects&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: b2_12&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: b8c4a50a7b71c14cbca941be552e41024c2a3835&lt;/p&gt;</comment>
                            <comment id="300616" author="gerrit" created="Wed, 5 May 2021 21:23:21 +0000"  >&lt;p&gt;Oleg Drokin (green@whamcloud.com) merged in patch &lt;a href=&quot;https://review.whamcloud.com/43202/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/43202/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-13073&quot; title=&quot;Multiple MDS deadlocks (in lod_qos_prep_create) after OSS crash&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-13073&quot;&gt;&lt;del&gt;LU-13073&lt;/del&gt;&lt;/a&gt; osp: don&apos;t block waiting for new objects&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: b2_12&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: 6a023a8d772b052a70927dc5c8b481072bfe164e&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10010">
                    <name>Duplicate</name>
                                            <outwardlinks description="duplicates">
                                                        </outwardlinks>
                                                                <inwardlinks description="is duplicated by">
                                        <issuelink>
            <issuekey id="62153">LU-14277</issuekey>
        </issuelink>
                            </inwardlinks>
                                    </issuelinktype>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                            <outwardlinks description="is related to ">
                                                        </outwardlinks>
                                                                <inwardlinks description="is related to">
                                        <issuelink>
            <issuekey id="58816">LU-13462</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="67722">LU-15393</issuekey>
        </issuelink>
                            </inwardlinks>
                                    </issuelinktype>
                    </issuelinks>
                <attachments>
                            <attachment id="34022" name="fir-io8-s1_console.log" size="2149926" author="sthiell" created="Fri, 13 Dec 2019 06:47:31 +0000"/>
                            <attachment id="34019" name="fir-md1-s1_2019-12-12_kernel.log" size="683831" author="sthiell" created="Fri, 13 Dec 2019 06:58:49 +0000"/>
                            <attachment id="34021" name="foreach-bt_fir-md1-s2-2019-12-12-12-36-21.txt" size="727323" author="sthiell" created="Fri, 13 Dec 2019 06:55:04 +0000"/>
                            <attachment id="34024" name="foreach-bt_fir-md1-s4_2019-12-12-08-20-42.txt" size="1190117" author="sthiell" created="Fri, 13 Dec 2019 06:41:13 +0000"/>
                            <attachment id="36358" name="reproducer" size="1270" author="bzzz" created="Fri, 16 Oct 2020 08:05:02 +0000"/>
                            <attachment id="34020" name="vmcore-dmesg_fir-md1-s2-2019-12-12-12-36-21.txt" size="401982" author="sthiell" created="Fri, 13 Dec 2019 06:55:13 +0000"/>
                            <attachment id="34023" name="vmcore-dmesg_fir-md1-s4_2019-12-12-08-20-42.txt" size="687012" author="sthiell" created="Fri, 13 Dec 2019 06:41:32 +0000"/>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|i00qzj:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>