<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 03:09:14 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-14379] ls hangs on client and console log message LNet: Service thread pid 25241 was inactive performing mdt_reint_rename</title>
                <link>https://jira.whamcloud.com/browse/LU-14379</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;At some time on the morning of Jan 27th, we got reports of directory listings (&quot;ls&quot;) hanging, where the directories were on MDT5 and MDT7.&lt;/p&gt;

&lt;p&gt;The console logs of MDT5 and MDT7 both reported repeated watchdog dumps, all with very similar stacks. The first one on MDT7 appeared on Thu Jan 21 12:01:33 2021&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;# From zinc8 dmesg log
LNet: Service thread pid 25241 was inactive for 200.13s. The thread might be hung, or it might only be slow and will resume later. D
Pid: 25241, comm: mdt01_049 3.10.0-1160.4.1.1chaos.ch6.x86_64 #1 SMP Fri Oct 9 17:56:20 PDT 2020
Call Trace:
 [&amp;lt;ffffffffc141a460&amp;gt;] ldlm_completion_ast+0x440/0x870 [ptlrpc]
 [&amp;lt;ffffffffc141be2f&amp;gt;] ldlm_cli_enqueue_fini+0x96f/0xdf0 [ptlrpc]
 [&amp;lt;ffffffffc141ef3e&amp;gt;] ldlm_cli_enqueue+0x40e/0x920 [ptlrpc]
 [&amp;lt;ffffffffc1982342&amp;gt;] osp_md_object_lock+0x162/0x2d0 [osp]
 [&amp;lt;ffffffffc1895194&amp;gt;] lod_object_lock+0xf4/0x780 [lod]
 [&amp;lt;ffffffffc1916ace&amp;gt;] mdd_object_lock+0x3e/0xe0 [mdd]
 [&amp;lt;ffffffffc17ae681&amp;gt;] mdt_remote_object_lock_try+0x1e1/0x750 [mdt]
 [&amp;lt;ffffffffc17aec1a&amp;gt;] mdt_remote_object_lock+0x2a/0x30 [mdt]
 [&amp;lt;ffffffffc17c407e&amp;gt;] mdt_rename_lock+0xbe/0x4b0 [mdt]
 [&amp;lt;ffffffffc17c6400&amp;gt;] mdt_reint_rename+0x2c0/0x2900 [mdt]
 [&amp;lt;ffffffffc17cf113&amp;gt;] mdt_reint_rec+0x83/0x210 [mdt]
 [&amp;lt;ffffffffc17ab303&amp;gt;] mdt_reint_internal+0x6e3/0xaf0 [mdt]
 [&amp;lt;ffffffffc17b6b37&amp;gt;] mdt_reint+0x67/0x140 [mdt] 
 [&amp;lt;ffffffffc14b8b1a&amp;gt;] tgt_request_handle+0xada/0x1570 [ptlrpc]
 [&amp;lt;ffffffffc145d80b&amp;gt;] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc]
 [&amp;lt;ffffffffc1461bfd&amp;gt;] ptlrpc_main+0xc4d/0x2280 [ptlrpc]
 [&amp;lt;ffffffffadecafc1&amp;gt;] kthread+0xd1/0xe0
 [&amp;lt;ffffffffae5c1ff7&amp;gt;] ret_from_fork_nospec_end+0x0/0x39
 [&amp;lt;ffffffffffffffff&amp;gt;] 0xffffffffffffffff
LustreError: dumping log to /tmp/lustre-log.1611259694.25241 &lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;The remaining dumps took a different path within ldlm_cli_enqueue:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;ptlrpc_set_wait+0x4d8/0x800 [ptlrpc]
ptlrpc_queue_wait+0x83/0x230 [ptlrpc]
ldlm_cli_enqueue+0x3d2/0x920 [ptlrpc]
osp_md_object_lock+0x162/0x2d0 [osp]
lod_object_lock+0xf4/0x780 [lod]
mdd_object_lock+0x3e/0xe0 [mdd]
mdt_remote_object_lock_try+0x1e1/0x750 [mdt]
mdt_remote_object_lock+0x2a/0x30 [mdt]
mdt_rename_lock+0xbe/0x4b0 [mdt]
mdt_reint_rename+0x2c0/0x2900 [mdt]
mdt_reint_rec+0x83/0x210 [mdt]
mdt_reint_internal+0x6e3/0xaf0 [mdt]
mdt_reint+0x67/0x140 [mdt]
tgt_request_handle+0xada/0x1570 [ptlrpc]
ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc]
ptlrpc_main+0xc4d/0x2280 [ptlrpc]
kthread+0xd1/0xe0
ret_from_fork_nospec_end+0x0/0x39
0xffffffffffffffff
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;I can provide MDT7 debug logs and an ldlm namespace dump, a core dump from MDT5, and dmesg logs for both servers.&lt;/p&gt;</description>
                <environment>zfs-0.7&lt;br/&gt;
kernel-3.10.0-1160.4.1.1chaos.ch6.x86_64&lt;br/&gt;
lustre-2.12.5_10.llnl-3.ch6.x86_64</environment>
        <key id="62546">LU-14379</key>
            <summary>ls hangs on client and console log message LNet: Service thread pid 25241 was inactive performing mdt_reint_rename</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="3" iconUrl="https://jira.whamcloud.com/images/icons/priorities/major.svg">Major</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="laisiyao">Lai Siyao</assignee>
                                    <reporter username="ofaaland">Olaf Faaland</reporter>
                        <labels>
                            <label>llnl</label>
                    </labels>
                <created>Thu, 28 Jan 2021 07:20:26 +0000</created>
                <updated>Sat, 6 Feb 2021 16:18:17 +0000</updated>
                            <resolved>Fri, 5 Feb 2021 00:21:16 +0000</resolved>
                                                                        <due></due>
                            <votes>0</votes>
                                    <watches>3</watches>
                                                                            <comments>
                            <comment id="290559" author="ofaaland" created="Thu, 28 Jan 2021 07:20:53 +0000"  >&lt;p&gt;For my records, my local ticket is TOSS5037&lt;/p&gt;</comment>
                            <comment id="290560" author="ofaaland" created="Thu, 28 Jan 2021 07:22:09 +0000"  >&lt;p&gt;Possibly related to &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-14378&quot; title=&quot;ldlm_resource_complain()) MGC172.19.3.1@o2ib600: namespace resource [0x68736c:0x2:0x0].0x0 (ffff972b9abea0c0) refcount nonzero (1) after lock cleanup; forcing cleanup.&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-14378&quot;&gt;&lt;del&gt;LU-14378&lt;/del&gt;&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="290561" author="ofaaland" created="Thu, 28 Jan 2021 07:22:54 +0000"  >&lt;p&gt;Code listings for the point where the stacks are different:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;(gdb) l *(ldlm_cli_enqueue+0x3d2)
0x25f32 is in ldlm_cli_enqueue (/usr/src/debug/lustre-2.12.5_10.llnl/lustre/ldlm/ldlm_request.c:1036).
1031	
1032		LDLM_DEBUG(lock, &quot;sending request&quot;);
1033	
1034		rc = ptlrpc_queue_wait(req);
1035	
1036		err = ldlm_cli_enqueue_fini(exp, req, einfo-&amp;gt;ei_type, policy ? 1 : 0,
1037					&#160; &#160; einfo-&amp;gt;ei_mode, flags, lvb, lvb_len,
1038					&#160; &#160; lockh, rc);
1039	
1040		/* If ldlm_cli_enqueue_fini did not find the lock, we need to free


(gdb) l *(ldlm_cli_enqueue+0x40e)
0x25f6e is in ldlm_cli_enqueue (/usr/src/debug/lustre-2.12.5_10.llnl/lustre/ldlm/ldlm_request.c:1042).
1037					&#160; &#160; einfo-&amp;gt;ei_mode, flags, lvb, lvb_len,
1038					&#160; &#160; lockh, rc);
1039	
1040		/* If ldlm_cli_enqueue_fini did not find the lock, we need to free
1041		 * one reference that we took */
1042		if (err == -ENOLCK)
1043			LDLM_LOCK_RELEASE(lock);
1044		else
1045			rc = err;
1046	 &lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
                            <comment id="290562" author="ofaaland" created="Thu, 28 Jan 2021 07:26:48 +0000"  >&lt;p&gt;SysRq&#160;&#160;show-backtrace-all-active-cpus(l) dumped no stacks other than the core processing the SysRq.&#160; All other cores were idling:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;# From [Wed Jan 27 13:13:19 2021] 
SysRq : Show backtrace of all active CPUs
sending NMI to all CPUs:
NMI backtrace for cpu 0 skipped: idling at pc 0xffffffffae5b6aa1
NMI backtrace for cpu 1 skipped: idling at pc 0xffffffffae5b6aa1
NMI backtrace for cpu 2 skipped: idling at pc 0xffffffffae5b6aa1
NMI backtrace for cpu 3 skipped: idling at pc 0xffffffffae5b6aa1
NMI backtrace for cpu 4 skipped: idling at pc 0xffffffffae5b6aa1
NMI backtrace for cpu 5 skipped: idling at pc 0xffffffffae5b6aa1
NMI backtrace for cpu 6 skipped: idling at pc 0xffffffffae5b6aa1
NMI backtrace for cpu 7 skipped: idling at pc 0xffffffffae5b6aa1
NMI backtrace for cpu 8
CPU: 8 PID: 0 Comm: swapper/8 Kdump: loaded Tainted: P &#160; &#160; &#160; &#160; &#160; OE&#160; ------------ T 3.10.0-1160.4.1.1chaos.ch6.x86_64 #1
Hardware name: Intel Corporation S2600WTTR/S2600WTTR, BIOS SE5C610.86B.01.01.0016.033120161139 03/31/2016
task: ffff98d5ca74e300 ti: ffff98d5cab50000 task.ti: ffff98d5cab50000
RIP: 0010:[&amp;lt;ffffffffade6102b&amp;gt;]&#160; [&amp;lt;ffffffffade6102b&amp;gt;] default_send_IPI_mask_sequence_phys+0x7b/0x100
RSP: 0018:ffff9914bf203ce8&#160; EFLAGS: 00000046
RAX: ffff9914bf240000 RBX: 000000000000e02e RCX: 0000000000000020
RDX: 0000000000000009 RSI: 0000000000000012 RDI: 0000000000000000
RBP: ffff9914bf203d20 R08: ffffffffaeb5b5a0 R09: 0000000000008032
R10: 0000000000000000 R11: ffff9914bf203a2e R12: ffffffffaeb5b5a0
R13: 0000000000000400 R14: 0000000000000086 R15: 0000000000000002
FS:&#160; 0000000000000000(0000) GS:ffff9914bf200000(0000) knlGS:0000000000000000
CS:&#160; 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007ffff7ff8000 CR3: 0000007bf1ed8000 CR4: 00000000001607e0
Call Trace:
 &amp;lt;IRQ&amp;gt; [&amp;lt;ffffffffade66d4e&amp;gt;] physflat_send_IPI_mask+0xe/0x10
 [&amp;lt;ffffffffade6153a&amp;gt;] arch_trigger_all_cpu_backtrace+0x2ea/0x2f0
 [&amp;lt;ffffffffae2932f3&amp;gt;] sysrq_handle_showallcpus+0x13/0x20
 [&amp;lt;ffffffffae2939ed&amp;gt;] __handle_sysrq+0x11d/0x180
 [&amp;lt;ffffffffae293a76&amp;gt;] handle_sysrq+0x26/0x30
 ...
 [&amp;lt;ffffffffae5c6b2d&amp;gt;] do_IRQ+0x4d/0xf0
 [&amp;lt;ffffffffae5b836a&amp;gt;] common_interrupt+0x16a/0x16a
 &amp;lt;EOI&amp;gt; [&amp;lt;ffffffffae3eab67&amp;gt;] ? cpuidle_enter_state+0x57/0xd0
 [&amp;lt;ffffffffae3eacbe&amp;gt;] cpuidle_idle_call+0xde/0x270
 [&amp;lt;ffffffffade3919e&amp;gt;] arch_cpu_idle+0xe/0xc0
 [&amp;lt;ffffffffadf0835a&amp;gt;] cpu_startup_entry+0x14a/0x1e0
 [&amp;lt;ffffffffade5cb87&amp;gt;] start_secondary+0x207/0x280
 [&amp;lt;ffffffffade000d5&amp;gt;] start_cpu+0x5/0x14
Code: c2 01 4c 89 e7 48 63 d2 e8 93 8f 35 00 89 c2 48 63 05 ae e6 cf 00 48 39 c2 73 53 48 8b 04 d5 a0 15 b5 ae 41 83 ff 02 0f b7 34 03 &amp;lt;74&amp;gt; 5a 8b 04 25 00 93 5a ff f6 c4 10 74 15 0f 1f 80 00 00 00 00 
NMI backtrace for cpu 9 skipped: idling at pc 0xffffffffae5b6aa1
NMI backtrace for cpu 10 skipped: idling at pc 0xffffffffae5b6aa1
NMI backtrace for cpu 11 skipped: idling at pc 0xffffffffae5b6aa1
NMI backtrace for cpu 12 skipped: idling at pc 0xffffffffae5b6aa1
NMI backtrace for cpu 13 skipped: idling at pc 0xffffffffae5b6aa1
NMI backtrace for cpu 14 skipped: idling at pc 0xffffffffae5b6aa1
NMI backtrace for cpu 15 skipped: idling at pc 0xffffffffae5b6aa1
NMI backtrace for cpu 16 skipped: idling at pc 0xffffffffae5b6aa1
NMI backtrace for cpu 17 skipped: idling at pc 0xffffffffae5b6aa1
NMI backtrace for cpu 18 skipped: idling at pc 0xffffffffae5b6aa1
NMI backtrace for cpu 19 skipped: idling at pc 0xffffffffae5b6aa1
NMI backtrace for cpu 20 skipped: idling at pc 0xffffffffae5b6aa1
NMI backtrace for cpu 21 skipped: idling at pc 0xffffffffae5b6aa1
NMI backtrace for cpu 22 skipped: idling at pc 0xffffffffae5b6aa1
NMI backtrace for cpu 23 skipped: idling at pc 0xffffffffae5b6aa1
NMI backtrace for cpu 24 skipped: idling at pc 0xffffffffae5b6aa1
NMI backtrace for cpu 25 skipped: idling at pc 0xffffffffae5b6aa1
NMI backtrace for cpu 26 skipped: idling at pc 0xffffffffae5b6aa1
NMI backtrace for cpu 27 skipped: idling at pc 0xffffffffae5b6aa1
NMI backtrace for cpu 28 skipped: idling at pc 0xffffffffae5b6aa1
NMI backtrace for cpu 29 skipped: idling at pc 0xffffffffae5b6aa1
NMI backtrace for cpu 30 skipped: idling at pc 0xffffffffae5b6aa1
NMI backtrace for cpu 31 skipped: idling at pc 0xffffffffae5b6aa1&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
                            <comment id="290563" author="ofaaland" created="Thu, 28 Jan 2021 07:29:33 +0000"  >&lt;p&gt;PID 25241, given in the watchdog warning above, appears in console log messages regarding changelog handling:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;# [Sat Jan 23 12:21:13 2021] Lustre: 25161:0:(mdd_device.c:1811:mdd_changelog_clear()) lsh-MDD0007: Failure to clear the changelog for user 6: -22
&amp;gt;&amp;gt; [Sat Jan 23 12:21:53 2021] Lustre: 25241:0:(mdd_device.c:1811:mdd_changelog_clear()) lsh-MDD0007: Failure to clear the changelog for user 6: -22
# [Sat Jan 23 12:24:08 2021] Lustre: 25188:0:(mdd_device.c:1811:mdd_changelog_clear()) lsh-MDD0007: Failure to clear the changelog for user 6: -22 
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;There are many like that - some of the others are:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[Sat Jan 23 13:38:06 2021] Lustre: 25241:0:(mdd_device.c:1811:mdd_changelog_clear()) lsh-MDD0007: Failure to clear the changelog for user 6: -22
[Sat Jan 23 13:38:06 2021] Lustre: 25241:0:(mdd_device.c:1811:mdd_changelog_clear()) Skipped 12 previous similar messages
[Sat Jan 23 15:25:52 2021] Lustre: 25241:0:(mdd_device.c:1811:mdd_changelog_clear()) lsh-MDD0007: Failure to clear the changelog for user 6: -22
[Sat Jan 23 15:25:52 2021] Lustre: 25241:0:(mdd_device.c:1811:mdd_changelog_clear()) Skipped 9 previous similar messages
[Sat Jan 23 19:39:04 2021] Lustre: 25241:0:(mdd_device.c:1811:mdd_changelog_clear()) lsh-MDD0007: Failure to clear the changelog for user 6: -22
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The changelog clear messages are from Starfish, a policy manager similar to Robinhood, which frequently clears changelog entries that have already been cleared.&lt;/p&gt;</comment>
                            <comment id="290569" author="laisiyao" created="Thu, 28 Jan 2021 08:51:22 +0000"  >&lt;p&gt;This looks to be a duplicate of &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-13437&quot; title=&quot;rename may miss revoking LOOKUP lock to cause stale dentry on client&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-13437&quot;&gt;&lt;del&gt;LU-13437&lt;/del&gt;&lt;/a&gt;; the fix landed in the latest 2.12, so you can cherry-pick it from there.&lt;/p&gt;</comment>
                            <comment id="290572" author="ofaaland" created="Thu, 28 Jan 2021 13:40:07 +0000"  >&lt;p&gt;Ah.&#160; Thank you.&lt;/p&gt;

&lt;p&gt;We are preparing to upgrade to Lustre 2.12.6; we just don&apos;t have it on the machines yet.&lt;/p&gt;

&lt;p&gt;A little later in the day,&#160;users started reporting bad dirents, e.g.&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[root@oslic20:test]# ll
ls: cannot access scratch-2: No such file or directory
total 1798843
-rw-rw---- 1 varley2 varley2&#160; &#160; &#160; &#160; &#160; 0 Jan 27 12:14 CHG
-rw-rw---- 1 varley2 varley2&#160; &#160; &#160; &#160; &#160; 0 Jan 27 12:14 CHGCAR
-rw-rw---- 1 varley2 varley2&#160; &#160; &#160; 27295 Jan 27 17:53 CONTCAR
-rw-rw---- 1 varley2 varley2&#160; &#160; 5621531 Jan 27 17:54 DOSCAR
-rw-rw---- 1 varley2 varley2&#160; &#160; &#160; 83839 Jan 27 17:54 EIGENVAL
-rw-rw---- 1 varley2 varley2&#160; &#160; &#160; &#160; 207 Jan 27 12:14 IBZKPT
lrwxrwxrwx 1 varley2 varley2&#160; &#160; &#160; &#160; &#160; 8 Jan 27 11:05 INCAR -&amp;gt; ../INCAR
lrwxrwxrwx 1 varley2 varley2 &#160; &#160; &#160; &#160; 10 Jan 27 11:05 KPOINTS -&amp;gt; ../KPOINTS
-rw-rw---- 1 varley2 varley2&#160; 149105972 Jan 27 17:54 LOCPOT
-rw-rw---- 1 varley2 varley2 &#160; &#160; &#160; 2121 Jan 27 17:53 OSZICAR
-rw-rw---- 1 varley2 varley2 &#160; &#160; 612859 Jan 27 17:54 OUTCAR
-rw-rw---- 1 varley2 varley2&#160; &#160; &#160; &#160; 234 Jan 27 12:14 PCDAT
-rw------- 1 varley2 varley2&#160; &#160; &#160; 27295 Jan 27 11:04 POSCAR
lrwxrwxrwx 1 varley2 varley2&#160; &#160; &#160; &#160; &#160; 9 Jan 27 11:05 POTCAR -&amp;gt; ../POTCAR
-rw-rw---- 1 varley2 varley2 &#160; 21995032 Jan 27 17:54 PROCAR
-rw-rw---- 1 varley2 varley2&#160; &#160; &#160; &#160; &#160; 0 Jan 27 12:14 REPORT
-rw------- 1 varley2 varley2 1820073216 Jan 27 17:53 WAVECAR
-rw-rw---- 1 varley2 varley2&#160; &#160; &#160; 25475 Jan 27 17:53 XDATCAR
lrwxrwxrwx 1 varley2 varley2&#160; &#160; &#160; &#160; &#160; 9 Jan 27 11:05 msub.vasp -&amp;gt; psub.vasp
-rw-rw---- 1 varley2 varley2 &#160; &#160; &#160; 9560 Jan 27 17:53 psub.out
-rw-rw---- 1 varley2 varley2&#160; &#160; &#160; &#160; 709 Jan 27 11:05 psub.vasp
-????????? ? ? &#160; &#160; &#160; ?&#160; &#160; &#160; &#160; &#160; &#160; &#160; &#160; ?&#160; &#160; &#160; &#160; &#160; &#160; ? scratch-2
-rw-rw---- 1 varley2 varley2 &#160; 34802010 Jan 27 17:54 vasprun.xml &lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;and&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[root@oslic20:2MP]# ls -lR
.:
total 25
drwx------ 2 varley2 varley2 25600 Jan 27 16:31 2ndshell-broken


./2ndshell-broken:
ls: cannot access ./2ndshell-broken/scratch-2: No such file or directory
total 0
-????????? ? ? ? ?&#160; &#160; &#160; &#160; &#160; &#160; ? scratch-2&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Is this a likely side-effect of &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-13437&quot; title=&quot;rename may miss revoking LOOKUP lock to cause stale dentry on client&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-13437&quot;&gt;&lt;del&gt;LU-13437&lt;/del&gt;&lt;/a&gt;?  We have some of those patches but not all of them in our 2.12.5 branch; I don&apos;t know if that&apos;s better or worse than having none:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;$ git lg -n 200 --grep LU-13437 2.12.6_3.llnl | sort -k3
* 3314727 LU-13437 llite: pack parent FID in getattr
* 1857880 LU-13437 llite: pass name in getattr by FID
* f1712b3 LU-13437 lmv: check stripe FID sanity
* 23c05e8 LU-13437 mdc: remote object support getattr from cache
* ae9fc81 LU-13437 mdt: don&apos;t fetch LOOKUP lock for remote object
* 23fa920 LU-13437 mdt: rename misses remote LOOKUP lock revoke
* daa9148 LU-13437 uapi: add OBD_CONNECT2_GETATTR_PFID

$ git lg -n 200 --grep LU-13437  2.12.5_10.llnl | sort -k3
* d06cc90 LU-13437 llite: pack parent FID in getattr
* 338d34e LU-13437 lmv: check stripe FID sanity
* 8d212e8 LU-13437 mdt: don&apos;t fetch LOOKUP lock for remote object
* 6dc8f51 LU-13437 mdt: rename misses remote LOOKUP lock revoke
* ddec375 LU-13437 uapi: add OBD_CONNECT2_GETATTR_PFID
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
                            <comment id="290670" author="laisiyao" created="Fri, 29 Jan 2021 01:51:19 +0000"  >&lt;p&gt;Mmm, it might be because commit &quot;1857880 &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-13437&quot; title=&quot;rename may miss revoking LOOKUP lock to cause stale dentry on client&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-13437&quot;&gt;&lt;del&gt;LU-13437&lt;/del&gt;&lt;/a&gt; llite: pass name in getattr by FID&quot; is missing. &lt;/p&gt;</comment>
                            <comment id="291286" author="ofaaland" created="Fri, 5 Feb 2021 00:22:24 +0000"  >&lt;p&gt;Closing (left &quot;Fixed&quot;, forgot to change to &quot;Duplicate&quot;).&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                            <outwardlinks description="is related to ">
                                        <issuelink>
            <issuekey id="62545">LU-14378</issuekey>
        </issuelink>
                            </outwardlinks>
                                                        </issuelinktype>
                    </issuelinks>
                <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|i01kkn:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>