[LU-12935] MDT deadlock on 2.12.3 with DoM; is it missing async_discard feature? Created: 05/Nov/19  Updated: 24/Aug/20  Resolved: 12/Dec/19

Status: Resolved
Project: Lustre
Component/s: None
Affects Version/s: Lustre 2.12.3
Fix Version/s: Lustre 2.12.4

Type: Bug Priority: Critical
Reporter: Stephane Thiell Assignee: Mikhail Pershin
Resolution: Fixed Votes: 0
Labels: None
Environment:

CentOS 7.6


Attachments: Text File crash-sysrq-fir-md1-s1-foreach-bt.log     Text File fir-md1-s1-MDT0.log     File fir-md1-s1-dk.log.gz     File fir-md1-s1-dk2.log.gz     Text File fir-md1-s1_20191105.log     Text File fir-md1-s1_lfsck-results.log     Text File fir-rbh01.dk.log    
Issue Links:
Related
is related to LU-11421 DoM: manual migration OST-MDT, MDT-MDT Resolved
is related to LU-10664 DoM: make DoM lock enqueue non-blocking Resolved
Severity: 1
Rank (Obsolete): 9223372036854775807

 Description   

Currently we cannot make MDT0 work again on Fir (2.12.3) due to these backtraces and lock timeouts:

Nov 04 18:30:17 fir-md1-s1 kernel: Pid: 32408, comm: mdt01_024 3.10.0-957.27.2.el7_lustre.pl1.x86_64 #1 SMP Mon Aug 5 15:28:37 PDT 2019
Nov 04 18:30:17 fir-md1-s1 kernel: Call Trace:
Nov 04 18:30:17 fir-md1-s1 kernel:  [<ffffffffc10ccac0>] ldlm_completion_ast+0x430/0x860 [ptlrpc]
Nov 04 18:30:17 fir-md1-s1 kernel:  [<ffffffffc10cd5e1>] ldlm_cli_enqueue_local+0x231/0x830 [ptlrpc]
Nov 04 18:30:17 fir-md1-s1 kernel:  [<ffffffffc15d850b>] mdt_object_local_lock+0x50b/0xb20 [mdt]
Nov 04 18:30:17 fir-md1-s1 kernel:  [<ffffffffc15d8b90>] mdt_object_lock_internal+0x70/0x360 [mdt]
Nov 04 18:30:17 fir-md1-s1 kernel:  [<ffffffffc15d8ea0>] mdt_object_lock+0x20/0x30 [mdt]
Nov 04 18:30:17 fir-md1-s1 kernel:  [<ffffffffc1617c4b>] mdt_brw_enqueue+0x44b/0x760 [mdt]
Nov 04 18:30:17 fir-md1-s1 kernel:  [<ffffffffc15c64bf>] mdt_intent_brw+0x1f/0x30 [mdt]
Nov 04 18:30:17 fir-md1-s1 kernel:  [<ffffffffc15debb5>] mdt_intent_policy+0x435/0xd80 [mdt]
Nov 04 18:30:17 fir-md1-s1 kernel:  [<ffffffffc10b3d46>] ldlm_lock_enqueue+0x356/0xa20 [ptlrpc]
Nov 04 18:30:17 fir-md1-s1 kernel:  [<ffffffffc10dc336>] ldlm_handle_enqueue0+0xa56/0x15f0 [ptlrpc]
Nov 04 18:30:17 fir-md1-s1 kernel:  [<ffffffffc1164a12>] tgt_enqueue+0x62/0x210 [ptlrpc]
Nov 04 18:30:17 fir-md1-s1 kernel:  [<ffffffffc116936a>] tgt_request_handle+0xaea/0x1580 [ptlrpc]
Nov 04 18:30:17 fir-md1-s1 kernel:  [<ffffffffc111024b>] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc]
Nov 04 18:30:17 fir-md1-s1 kernel:  [<ffffffffc1113bac>] ptlrpc_main+0xb2c/0x1460 [ptlrpc]
Nov 04 18:30:17 fir-md1-s1 kernel:  [<ffffffffbe8c2e81>] kthread+0xd1/0xe0
Nov 04 18:30:17 fir-md1-s1 kernel:  [<ffffffffbef77c24>] ret_from_fork_nospec_begin+0xe/0x21
Nov 04 18:30:17 fir-md1-s1 kernel:  [<ffffffffffffffff>] 0xffffffffffffffff
Nov 04 18:30:17 fir-md1-s1 kernel: LNet: Service thread pid 32415 was inactive for 201.19s. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one.
Nov 04 18:30:17 fir-md1-s1 kernel: LNet: Skipped 1 previous similar message
Nov 04 18:31:56 fir-md1-s1 kernel: LustreError: 32601:0:(ldlm_request.c:129:ldlm_expired_completion_wait()) ### lock timed out (enqueued at 1572920815, 300s ago); not entering recovery in server code, just going back to sleep ns: mdt-fir-MDT0000_UUID lock: ffffa10e0b407bc0/0x675682d854098c0 lrc: 3/0,1 mode: --/PW res: [0x200034eb7:0x1:0x0].0x0 bits 0x40/0x0 rrc: 912 type: IBT flags: 0x40210400000020 nid: local remote: 0x0 expref: -99 pid: 32601 timeout: 0 lvb_type: 0
Nov 04 18:31:56 fir-md1-s1 kernel: LustreError: 32601:0:(ldlm_request.c:129:ldlm_expired_completion_wait()) Skipped 224 previous similar messages
Nov 04 18:34:25 fir-md1-s1 kernel: LustreError: 32404:0:(ldlm_request.c:129:ldlm_expired_completion_wait()) ### lock timed out (enqueued at 1572920965, 300s ago); not entering recovery in server code, just going back to sleep ns: mdt-fir-MDT0000_UUID lock: ffffa12da9254ec0/0x675682d8540d5dd lrc: 3/0,1 mode: --/PW res: [0x200034eb7:0x1:0x0].0x0 bits 0x40/0x0 rrc: 913 type: IBT flags: 0x40210400000020 nid: local remote: 0x0 expref: -99 pid: 32404 timeout: 0 lvb_type: 0
Nov 04 18:34:25 fir-md1-s1 kernel: LustreError: 32404:0:(ldlm_request.c:129:ldlm_expired_completion_wait()) Skipped 161 previous similar messages

This looks similar to LU-11358.

I also just noticed another thing: on our new 2.12.3 clients, I can't find the async_discard import flag:

[root@sh-117-13 ~]# lctl get_param mdc.fir-*.import | grep connect_flags
    connect_flags: [ write_grant, server_lock, version, acl, xattr, create_on_write, truncate_lock, inode_bit_locks, getattr_by_fid, no_oh_for_devices, max_byte_per_rpc, early_lock_cancel, adaptive_timeouts, lru_resize, alt_checksum_algorithm, fid_is_enabled, version_recovery, pools, large_ea, full20, layout_lock, 64bithash, jobstats, umask, einprogress, grant_param, lvb_type, short_io, flock_deadlock, disp_stripe, open_by_fid, lfsck, multi_mod_rpcs, dir_stripe, subtree, bulk_mbits, second_flags, file_secctx, dir_migrate, unknown, flr, lock_convert, archive_id_array, selinux_policy, lsom, unknown2_0x4000 ]

Is it unknown2_0x4000? Or is it missing?

Can you please confirm whether the async_discard patch is missing from 2.12.3? If that's the case, we'll need to perform a full downgrade or emergency patching of the cluster and the Lustre servers.

Attaching logs from MDS fir-md1-s1, which serves MDT0 and which we cannot make operational again at the moment. We even tried abort_recovery, with no luck.



 Comments   
Comment by Stephane Thiell [ 05/Nov/19 ]

Apparently the patch is included in 2.12.3 as

commit e5810126b3fb488a3fed37e085e3ca4ae585324c
Author: Mikhail Pershin <mpershin@whamcloud.com>
Date:   Wed Oct 31 16:28:29 2018 +0300

    LU-11359 mdt: fix mdt_dom_discard_data() timeouts

Which includes:

diff --git a/lustre/obdclass/lprocfs_status.c b/lustre/obdclass/lprocfs_status.c
index 8894a38..13eaca4 100644
--- a/lustre/obdclass/lprocfs_status.c
+++ b/lustre/obdclass/lprocfs_status.c
@@ -785,6 +785,7 @@ static const char *obd_connect_names[] = {
        "unknown",              /* 0x200 */
        "selinux_policy",       /* 0x400 */
        "lsom",                 /* 0x800 */
+       "async_discard",        /* 0x4000 */
        NULL
 }; 

So why is the Lustre client version available on the Whamcloud repository not recognizing the async_discard import flag?

Comment by Stephane Thiell [ 05/Nov/19 ]

Note: the 2.12.3 version I built is not able to recognize the connect flag either; it shows as unknown2_0x4000. Prior to 2.12.3, with our patched 2.12.0, the flag was recognized as "async_discard". If the clients aren't using the new feature, that would explain the MDT deadlock after we finally upgraded 1,000+ clients.

Comment by Peter Jones [ 05/Nov/19 ]

Mike should look at this when he comes online, but we will need to see what can be done to restore production in the meantime.

Comment by Stephane Thiell [ 05/Nov/19 ]

Thanks! I've tried an abort_recovery on MDT0 (we have four MDTs), but this didn't work. I've restarted all MDTs without luck either. MDTs 1 to 3 are still working and some I/O is apparently still going on the cluster, but the namespace is not accessible due to MDT0 being down/deadlocked. I think the next step will be to try an abort_recovery on all four MDTs, but that will likely fail all jobs.

Comment by Andreas Dilger [ 05/Nov/19 ]

As you posted in your recent comment, 0x4000 is the unknown2_0x4000 flag, which should be interpreted as async_discard. The fact that it is being printed in the imports file shows that both the client and server understand the OBD_CONNECT2_ASYNC_DISCARD flag, though it is not being printed correctly. It appears that this is because the obd_connect_names[] array is "positional" and missing the lower-numbered flags "pcc" and "plain_layout" because those features were not backported to b2_12:

        "archive_id_array",     /* 0x100 */
        "unknown",              /* 0x200 */
        "selinux_policy",       /* 0x400 */
        "lsom",                 /* 0x800 */
        "async_discard",        /* 0x4000 */
        NULL

I will make a patch so that "async_discard" is printed properly, but that doesn't seem to be the root cause of this issue.
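
As a quick sanity check on a client, something like the following would show whether the 0x4000 bit was negotiated, under either spelling (a sketch; the grep simply matches both names for the same bit):

# check whether the 0x4000 (async_discard) bit was negotiated on each import,
# even if this build prints it as "unknown2_0x4000"
lctl get_param -n mdc.fir-*.import | grep -oE 'async_discard|unknown2_0x4000'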

Comment by Stephane Thiell [ 05/Nov/19 ]

Thanks Andreas! And prior to the async_discard patch, we were still able to put the system back online.

So I'm attaching a debug log from the MDS as fir-md1-s1-dk.log.gz. It looks like there are some lfsck errors; perhaps something to look at, like this one:

00100000:10000000:15.0:1572924337.144468:0:34815:0:(lfsck_layout.c:374:lfsck_layout_verify_header_v1v3()) Unsupported LOV EA pattern 256 for the file [0x200029888:0x36c5:0x0] in the component 1

We started lfsck in dry-run mode this morning; this might be related, perhaps. I know that the OI scrub had completed successfully on all MDTs.
Let me know what else would be helpful.

Comment by Andreas Dilger [ 05/Nov/19 ]

The lfsck_layout_verify_header_v1v3() message relates to LOV_PATTERN_MDT = 0x100. Handling for that pattern is not yet implemented in LFSCK (LU-11081), but this should only prevent DoM files from being repaired by LFSCK.

I'm just looking at the debug log to see if there is anything I can see that might avoid this issue.

Comment by Stephane Thiell [ 05/Nov/19 ]

Thanks Andreas, I will follow LU-11081.

Earlier today we took a crash dump (sysrq-triggered) and I'm attaching the output of foreach bt as crash-sysrq-fir-md1-s1-foreach-bt.log. We can see many mdt threads blocked in mdt_intent_brw(). The lfsck thread is also doing things in ldiskfs, but I doubt this is related to lfsck, as we started mounting MDT0 using -o skip_lfsck and it was the same.

Comment by Stephane Thiell [ 05/Nov/19 ]

Attaching debug logs for a local client, fir-rbh01, as fir-rbh01.dk.log; we restarted it too, so the log is complete. Two kinds of errors I noticed while gathering this:

00000100:00020000:36.0:1572920752.390569:0:84001:0:(layout.c:2113:__req_capsule_get()) @@@ Wrong buffer for field `obd_quotactl' (1 of 1) in format `MDS_QUOTACTL': 0 vs. 112 (server)

And:

00000100:00000400:3.0:1572926475.221568:0:111431:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1572925719/real 1572925719]  req@ffff8b4664fdb600 x1649317797341696/t0(0) o48->fir-MDT0000-mdc-ffff8b66b2630000@10.0.10.51@o2ib7:12/10 lens 336/336 e 0 to 1 dl 1572926475 ref 2 fl Rpc:X/2/ffffffff rc -11/-1
00000100:00000400:69.0:1572926475.221569:0:111377:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1572925719/real 1572925719]  req@ffff8b569d4a2400 x1649317797312576/t0(0) o41->fir-MDT0000-mdc-ffff8b66b2630000@10.0.10.51@o2ib7:12/10 lens 224/368 e 0 to 1 dl 1572926475 ref 2 fl Rpc:X/2/ffffffff rc -11/-1

Comment by Andreas Dilger [ 05/Nov/19 ]

It looks like the problem is related to the file "[0x200034eb7:0x1:0x0]", which you may be able to use with "lfs fid2path" to determine what its pathname is. The lock reference count is going up and down on this file, but stays around 870 or so, likely indicating that either there is a lock refcount leak, or the clients continue getting references on the file and never actually release them.
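
For example, assuming the filesystem is mounted at /fir on a client (the mount point here is illustrative):

# resolve the contended resource FID from the lock dumps to a pathname
lfs fid2path /fir "[0x200034eb7:0x1:0x0]"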

It seems like the MDS threads are eventually completing their operations on this file:

00010000:00020000:6.0F:1572922849.748921:0:32760:0:(ldlm_lockd.c:1348:ldlm_handle_enqueue0()) ### lock on destroyed export ffffa11d012fc400 ns: mdt-fir-MDT0000_UUID lock: ffffa12d9633d340/0x675682d8540993e lrc: 3/0,0 mode: PW/PW res: [0x200034eb7:0x1:0x0].0x0 bits 0x40/0x0 rrc: 899 type: IBT flags: 0x50200400000020 nid: 10.9.117.25@o2ib4 remote: 0x9240a829e5985bb6 expref: 283 pid: 32760 timeout: 0 lvb_type: 0
00000400:02000400:6.0:1572922849.748967:0:32760:0:(watchdog.c:397:lcw_update_time()) Service thread pid 32760 completed after 2033.77s. This indicates the system was overloaded (too many service threads, or there were not enough hardware resources).
:
:
00010000:00020000:38.0F:1572922849.932606:0:32409:0:(ldlm_lockd.c:1348:ldlm_handle_enqueue0()) ### lock on destroyed export ffffa12df7e7f800 ns: mdt-fir-MDT0000_UUID lock: ffffa12d7b6cd580/0x675682d854099d1 lrc: 3/0,0 mode: PW/PW res: [0x200034eb7:0x1:0x0].0x0 bits 0x40/0x0 rrc: 882 type: IBT flags: 0x50200400000020 nid: 10.9.117.39@o2ib4 remote: 0x6ac7aeec8dbc3b60 expref: 39 pid: 32409 timeout: 0 lvb_type: 0
00000400:02000400:38.0:1572922849.932678:0:32409:0:(watchdog.c:397:lcw_update_time()) Service thread pid 32409 completed after 2033.95s. This indicates the system was overloaded (too many service threads, or there were not enough hardware resources).

If LFSCK is still running, you could consider stopping it (via "lctl lfsck_stop"), as that may be driving up the load on the MDS.
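
For example (a sketch, using the MDT device name from this filesystem):

# stop a running LFSCK instance on MDT0000
lctl lfsck_stop -M fir-MDT0000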

Comment by Stephane Thiell [ 05/Nov/19 ]

Thanks! We are looking into it. I've got the path after a stop/start of MDT0, but I still can't stat it or get its striping info. The corresponding user runs a custom code (aeronautics) and has 4 jobs of 240 tasks each. Not sure if all of them are accessing this file or how big the file is at this point, but it is almost certain it is using DoM.

Re: LFSCK, it has completed, and just for information I'm attaching the results as fir-md1-s1_lfsck-results.log

Comment by Stephane Thiell [ 05/Nov/19 ]

Andreas, the file is empty!

stat:

  File: ‘/fir/..../DEFAULT.PKG’
  Size: 0         	Blocks: 0          IO Block: 4194304 regular empty file
Device: e64e03a8h/3863872424d	Inode: 144118891059085313  Links: 1
Access: (0644/-rw-r--r--)  Uid: (297171/    jbho)   Gid: (28669/ cfarhat)
Access: 2019-11-04 20:36:57.000000000 -0800
Modify: 2019-11-04 20:36:57.000000000 -0800
Change: 2019-11-04 20:36:57.000000000 -0800
 Birth: -

Striping info:

.../DEFAULT.PKG
  lcm_layout_gen:    4
  lcm_mirror_count:  1
  lcm_entry_count:   4
    lcme_id:             1
    lcme_mirror_id:      0
    lcme_flags:          init
    lcme_extent.e_start: 0
    lcme_extent.e_end:   131072
      lmm_stripe_count:  0
      lmm_stripe_size:   131072
      lmm_pattern:       mdt
      lmm_layout_gen:    0
      lmm_stripe_offset: 0

    lcme_id:             2
    lcme_mirror_id:      0
    lcme_flags:          0
    lcme_extent.e_start: 131072
    lcme_extent.e_end:   134217728
      lmm_stripe_count:  1
      lmm_stripe_size:   4194304
      lmm_pattern:       raid0
      lmm_layout_gen:    0
      lmm_stripe_offset: -1

    lcme_id:             3
    lcme_mirror_id:      0
    lcme_flags:          0
    lcme_extent.e_start: 134217728
    lcme_extent.e_end:   137438953472
      lmm_stripe_count:  2
      lmm_stripe_size:   4194304
      lmm_pattern:       raid0
      lmm_layout_gen:    0
      lmm_stripe_offset: -1

    lcme_id:             4
    lcme_mirror_id:      0
    lcme_flags:          0
    lcme_extent.e_start: 137438953472
    lcme_extent.e_end:   EOF
      lmm_stripe_count:  4
      lmm_stripe_size:   4194304
      lmm_pattern:       raid0
      lmm_layout_gen:    0
      lmm_stripe_offset: -1

Comment by Stephane Thiell [ 05/Nov/19 ]

Andreas, at this point, the filesystem seems to be back online again. I did several stop/starts of MDT0 while I had stat and lfs getstripe running, but it took several attempts to get the result. But then the file was not empty anymore; it contained 186 bytes of text, and after a few minutes it was deleted.

MDS logs:

Nov 04 20:53:24 fir-md1-s1 kernel: Lustre: fir-MDT0000: recovery is timed out, evict stale exports
Nov 04 20:53:24 fir-md1-s1 kernel: Lustre: fir-MDT0000: disconnecting 1 stale clients
Nov 04 20:53:24 fir-md1-s1 kernel: Lustre: 40926:0:(ldlm_lib.c:1765:extend_recovery_timer()) fir-MDT0000: extended recovery timer reaching hard limit: 900, extend: 1
Nov 04 20:53:24 fir-md1-s1 kernel: Lustre: 40926:0:(ldlm_lib.c:1765:extend_recovery_timer()) Skipped 77 previous similar messages
Nov 04 20:53:24 fir-md1-s1 kernel: Lustre: fir-MDT0000: Recovery over after 2:30, of 1280 clients 1279 recovered and 1 was evicted.
Nov 04 20:55:54 fir-md1-s1 kernel: LustreError: 21591:0:(ldlm_lockd.c:256:expired_lock_main()) ### lock callback timer expired after 150s: evicting client at 10.9.117.37@o2ib4  ns: mdt-fir-MDT0000_UUID lock: ffffa13e2631e300/0x675682d85feceb8 lrc: 3/0,0 mode: PW/PW res: [0x200035e2b:0x1:0x0].0x0 bits 0x40/0x0 rrc: 888 type: IBT flags: 0x60000400000020 nid: 10.9.117.37@o2ib4 remote: 0x4d1afbc8d54ee1c expref: 21 pid: 40926 timeout: 19031 lvb_type: 0
Nov 04 20:55:55 fir-md1-s1 kernel: LustreError: 41055:0:(service.c:2128:ptlrpc_server_handle_request()) @@@ Dropping timed-out request from 12345-10.9.108.53@o2ib4: deadline 100:15s ago
                                     req@ffffa13e32f00000 x1649335780509632/t0(0) o38-><?>@<?>:0/0 lens 520/0 e 0 to 0 dl 1572929740 ref 1 fl Interpret:/0/ffffffff rc 0/-1
Nov 04 20:55:55 fir-md1-s1 kernel: Lustre: 41128:0:(service.c:2165:ptlrpc_server_handle_request()) @@@ Request took longer than estimated (100:7s); client may timeout.  req@ffffa13e285d1b00 x1649334750879840/t0(0) o38-><?>@<?>:0/0 lens 520/0 e 0 to 0 dl 1572929748 ref 1 fl Interpret:/0/ffffffff rc 0/-1
Nov 04 20:55:55 fir-md1-s1 kernel: Lustre: 41128:0:(service.c:2165:ptlrpc_server_handle_request()) Skipped 104 previous similar messages
Nov 04 20:55:55 fir-md1-s1 kernel: LustreError: 41055:0:(service.c:2128:ptlrpc_server_handle_request()) Skipped 110 previous similar messages
Nov 04 20:56:58 fir-md1-s1 kernel: Lustre: fir-MDT0000: Client 9b6df508-4fd4-62e6-a19b-42f88c25e71f (at 10.8.26.4@o2ib6) reconnecting
Nov 04 20:56:58 fir-md1-s1 kernel: Lustre: Skipped 39 previous similar messages
Nov 04 21:00:35 fir-md1-s1 kernel: LustreError: 137-5: fir-MDT0003_UUID: not available for connect from 10.8.11.28@o2ib6 (no target). If you are running an HA pair check that the target is mounted on the other server.
Nov 04 21:00:35 fir-md1-s1 kernel: LustreError: Skipped 667 previous similar messages
Comment by Stephane Thiell [ 05/Nov/19 ]

Updated MDS dk logs attached as fir-md1-s1-dk2.log.gz and filesystem is still running.

Comment by Stephane Thiell [ 05/Nov/19 ]

The file in question, DEFAULT.PKG, is actually used by the AERO-F software to restart a simulation (https://frg.bitbucket.io/aero-f/index.html#Restart)

Comment by Mikhail Pershin [ 05/Nov/19 ]

Stephane, a similar stack trace was seen several times in a couple of bugs like the LU-11358 you've mentioned, but in those cases there was also another thread acting as the blocker, and I don't see any other thread in the logs which would block all these mdt_intent_brw() threads.
In fact, DoM is vulnerable to access patterns like the one we have here (multiple writers to a single file), because the DoM lock covers the whole DoM region rather than individual stripes, so only one writer at a time is allowed. That can become a bottleneck and may cause lock timeouts under high load. So the general recommendation in that case is to use an ordinary file with OST stripes instead.
Meanwhile, that shouldn't cause such a server hang as you've experienced, so I will investigate this more.
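
For the contended files, one way to follow that recommendation would be a plain PFL layout with no DoM component (the path, extent boundaries, and stripe counts below are only illustrative):

# create the file with OST stripes only (no "mdt" pattern component)
lfs setstripe -E 4M -c 1 -E 128G -c 2 -E -1 -c 4 /fir/path/to/output_file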

Comment by Stephane Thiell [ 05/Nov/19 ]

Hi Mike,

Thanks for looking at this!

You say the whole DoM region, but in our case the DoM size is only 128 KB. It's less than the size of an OST stripe (4 MB), so I'm not sure I understand why DoM wouldn't be as efficient. For larger DoM sizes, I understand.

And we do have arrays of SSDs for the MDT storage. So disk I/Os for DoM should not be a problem.

Our main problem here is that it blocked MDT0 and thus the filesystem/namespace. I was hoping you could find a blocking thread, maybe from crash-sysrq-fir-md1-s1-foreach-bt.log. Do you want the associated crash dump? Let me know, I can upload it to your FTP.

Comment by Stephane Thiell [ 06/Nov/19 ]

A similar situation happened again after the user relaunched his jobs, but the traces on the MDS are a bit different I think, at least the first ones. It looks like the filesystem is blocked again.

One of these traces:

Nov 05 20:58:18 fir-md1-s1 kernel: NMI watchdog: BUG: soft lockup - CPU#38 stuck for 22s! [mdt_io02_034:41734]
...
Nov 05 20:58:18 fir-md1-s1 kernel: CPU: 38 PID: 41734 Comm: mdt_io02_034 Kdump: loaded Tainted: G           OEL ------------   3.10.0-957.27.2.el7_lustre.pl1.x86_64 #1
Nov 05 20:58:18 fir-md1-s1 kernel: Hardware name: Dell Inc. PowerEdge R6415/065PKD, BIOS 1.10.6 08/15/2019
Nov 05 20:58:18 fir-md1-s1 kernel: task: ffffa11e2a1e4100 ti: ffffa13cd3a70000 task.ti: ffffa13cd3a70000
Nov 05 20:58:18 fir-md1-s1 kernel: RIP: 0010:[<ffffffffbe913536>]  [<ffffffffbe913536>] native_queued_spin_lock_slowpath+0x126/0x200
Nov 05 20:58:18 fir-md1-s1 kernel: RSP: 0018:ffffa13cd3a73800  EFLAGS: 00000246
Nov 05 20:58:18 fir-md1-s1 kernel: RAX: 0000000000000000 RBX: ffffa130a9500be0 RCX: 0000000001310000
Nov 05 20:58:18 fir-md1-s1 kernel: RDX: ffffa12e3f8db780 RSI: 0000000001710101 RDI: ffffa13e3710f480
Nov 05 20:58:18 fir-md1-s1 kernel: RBP: ffffa13cd3a73800 R08: ffffa12e3f85b780 R09: 0000000000000000
Nov 05 20:58:18 fir-md1-s1 kernel: R10: ffffa12e3f85f140 R11: ffffda91d59da200 R12: 0000000000000000
Nov 05 20:58:18 fir-md1-s1 kernel: R13: ffffa13cd3a737a0 R14: ffffa130a9500948 R15: 0000000000000000
Nov 05 20:58:18 fir-md1-s1 kernel: FS:  00007f38ccb1d700(0000) GS:ffffa12e3f840000(0000) knlGS:0000000000000000
Nov 05 20:58:18 fir-md1-s1 kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Nov 05 20:58:18 fir-md1-s1 kernel: CR2: 000000000124f178 CR3: 000000364fa10000 CR4: 00000000003407e0
Nov 05 20:58:18 fir-md1-s1 kernel: Call Trace:
Nov 05 20:58:18 fir-md1-s1 kernel:  [<ffffffffbef5f2cb>] queued_spin_lock_slowpath+0xb/0xf
Nov 05 20:58:18 fir-md1-s1 kernel:  [<ffffffffbef6d7a0>] _raw_spin_lock+0x20/0x30
Nov 05 20:58:18 fir-md1-s1 kernel:  [<ffffffffc13e2c07>] ldiskfs_es_lru_add+0x57/0x90 [ldiskfs]
Nov 05 20:58:18 fir-md1-s1 kernel:  [<ffffffffc13ad6a5>] ldiskfs_ext_map_blocks+0x7b5/0xf60 [ldiskfs]
Nov 05 20:58:18 fir-md1-s1 kernel:  [<ffffffffbe902372>] ? ktime_get_ts64+0x52/0xf0
Nov 05 20:58:18 fir-md1-s1 kernel:  [<ffffffffbe903612>] ? ktime_get+0x52/0xe0
Nov 05 20:58:18 fir-md1-s1 kernel:  [<ffffffffc0bab14b>] ? kiblnd_post_tx_locked+0x7bb/0xa50 [ko2iblnd]
Nov 05 20:58:18 fir-md1-s1 kernel:  [<ffffffffc13e9728>] ldiskfs_map_blocks+0x98/0x700 [ldiskfs]
Nov 05 20:58:18 fir-md1-s1 kernel:  [<ffffffffc0b40203>] ? cfs_hash_bd_lookup_intent+0x63/0x170 [libcfs]
Nov 05 20:58:18 fir-md1-s1 kernel:  [<ffffffffbe902372>] ? ktime_get_ts64+0x52/0xf0
Nov 05 20:58:18 fir-md1-s1 kernel:  [<ffffffffc14bab63>] osd_ldiskfs_map_inode_pages+0x143/0x420 [osd_ldiskfs]
Nov 05 20:58:18 fir-md1-s1 kernel:  [<ffffffffc14bc996>] osd_write_prep+0x2b6/0x360 [osd_ldiskfs]
Nov 05 20:58:18 fir-md1-s1 kernel:  [<ffffffffc1614c3b>] mdt_obd_preprw+0x65b/0x10a0 [mdt]
Nov 05 20:58:18 fir-md1-s1 kernel:  [<ffffffffc116d1bc>] tgt_brw_write+0xc7c/0x1cf0 [ptlrpc]
Nov 05 20:58:18 fir-md1-s1 kernel:  [<ffffffffbe8e59c8>] ? load_balance+0x178/0x9a0
Nov 05 20:58:18 fir-md1-s1 kernel:  [<ffffffffbe8e143c>] ? update_curr+0x14c/0x1e0
Nov 05 20:58:18 fir-md1-s1 kernel:  [<ffffffffbe8dca58>] ? __enqueue_entity+0x78/0x80
Nov 05 20:58:18 fir-md1-s1 kernel:  [<ffffffffbe8e367f>] ? enqueue_entity+0x2ef/0xbe0
Nov 05 20:58:18 fir-md1-s1 kernel:  [<ffffffffc1159a7d>] ? tgt_lookup_reply+0x2d/0x190 [ptlrpc]
Nov 05 20:58:18 fir-md1-s1 kernel:  [<ffffffffc116936a>] tgt_request_handle+0xaea/0x1580 [ptlrpc]
Nov 05 20:58:18 fir-md1-s1 kernel:  [<ffffffffc1144da1>] ? ptlrpc_nrs_req_get_nolock0+0xd1/0x170 [ptlrpc]
Nov 05 20:58:18 fir-md1-s1 kernel:  [<ffffffffc0b34bde>] ? ktime_get_real_seconds+0xe/0x10 [libcfs]
Nov 05 20:58:18 fir-md1-s1 kernel:  [<ffffffffc111024b>] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc]
Nov 05 20:58:18 fir-md1-s1 kernel:  [<ffffffffc110b805>] ? ptlrpc_wait_event+0xa5/0x360 [ptlrpc]
Nov 05 20:58:18 fir-md1-s1 kernel:  [<ffffffffbe8cfeb4>] ? __wake_up+0x44/0x50
Nov 05 20:58:18 fir-md1-s1 kernel:  [<ffffffffc1113bac>] ptlrpc_main+0xb2c/0x1460 [ptlrpc]
Nov 05 20:58:18 fir-md1-s1 kernel:  [<ffffffffc1113080>] ? ptlrpc_register_service+0xf80/0xf80 [ptlrpc]
Nov 05 20:58:18 fir-md1-s1 kernel:  [<ffffffffbe8c2e81>] kthread+0xd1/0xe0
Nov 05 20:58:18 fir-md1-s1 kernel:  [<ffffffffbe8c2db0>] ? insert_kthread_work+0x40/0x40
Nov 05 20:58:18 fir-md1-s1 kernel:  [<ffffffffbef77c24>] ret_from_fork_nospec_begin+0xe/0x21
Nov 05 20:58:18 fir-md1-s1 kernel:  [<ffffffffbe8c2db0>] ? insert_kthread_work+0x40/0x40

Attaching MDS logs as fir-md1-s1_20191105.log

Comment by Mikhail Pershin [ 06/Nov/19 ]

Stephane, yes, I was checking exactly that sysrq file but found no good candidates. As for DoM file efficiency, it is not about the file size but about how many processes are accessing it at the same time. I think with the file DEFAULT.PKG many processes are trying to write to the beginning of the file, so the DoM region becomes a bottleneck and the OST stripes are not so useful in that case; each process needs access to the DoM region first each time.

Comment by Stephane Thiell [ 06/Nov/19 ]

Ah yes, I see, thanks. And I just found 250+ files like these that may be used by these aero-f jobs:

-rw-r--r-- 1 jbho cfarhat  145736 Nov  5 21:27 /scratch/users/jbho/aerof_simulations/maewing/DEFAULT.D2W115
-rw-r--r-- 1 jbho cfarhat  149248 Nov  5 21:27 /scratch/users/jbho/aerof_simulations/maewing/DEFAULT.D2W116

So they are small files that use the full 128KB DoM region, but they are all on a single MDT and accessed by 4 x 240 tasks (potentially).

I'll make changes so that these won't be using DoM anymore.

Comment by Mikhail Pershin [ 06/Nov/19 ]

Stephane, there is one ticket for DoM improvements which can improve such access patterns, LU-10664, but it has no patch at the moment.

Comment by Stephane Thiell [ 06/Nov/19 ]

Thanks Mike. We're currently discussing changing our default striping to avoid further issues like these, and perhaps only using DoM in specific cases.

FYI, yesterday, after I killed the suspected jobs, I still had to do a stop/start of MDT0 to resume filesystem operations (it didn't recover by itself).

Comment by Stephane Thiell [ 08/Nov/19 ]

To avoid further issues for now, we have removed the default DoM striping from all directories on this filesystem (keeping only a PFL striping). New files won't use DoM anymore. We'll see if that helps.
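
For the record, the change was along these lines (directory and extent values are illustrative; our actual default layout may differ slightly):

# replace the directory default (DoM + PFL) layout with a plain PFL default
lfs setstripe -E 4M -c 1 -E -1 -c 4 /fir/scratch
# verify that no "mdt" pattern component remains in the default layout
lfs getstripe -d /fir/scratch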

Comment by Andreas Dilger [ 26/Nov/19 ]

Stephane, it would make sense to get an strace (or the equivalent Lustre debugging via "lctl set_param debug=+vfstrace +dlmtrace") from these jobs to see just how many times they write to the same file.
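
On one of the clients running these jobs, something along these lines would capture it (log path and pid are placeholders):

# add VFS and DLM tracing to the Lustre debug mask on the client
lctl set_param debug=+vfstrace
lctl set_param debug=+dlmtrace
# ... let the job run for a while, then dump the kernel debug buffer to a file
lctl dk /tmp/fir-client-dk.log
# alternatively, attach strace to one job rank to count writes to DEFAULT.PKG
strace -f -e trace=write,pwrite64 -p <rank_pid>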

Mike, since it is possible to migrate DoM components to OSTs (either with full-file copy in 2.12 or via FLR mirror in 2.13 patch https://review.whamcloud.com/35359 "LU-11421 dom: manual OST-to-DOM migration via mirroring"), have you thought about automatically migrating files with high write lock contention from DoM to a regular OST object? Since the amount of data to be moved is very small (under 150KB in this case), the migration should be very fast, and it would allow extent locks to be used on the file.

That said, I have no idea how hard this would be, and it only makes sense if there are multiple writers repeatedly contending on the same DoM file component (which I suspect is rare in most cases). Even here, if the clients are only writing to the same file a handful of times, the extra migration step may make performance worse rather than better. If they write to the same file hundreds of times, then it might be worthwhile to implement.

Even in IO-500 ior-hard-write the chunk size is 47008 bytes, so at most 2-3 ranks would be contending on a 64KB or 128KB DoM component, and we never had problems with this in our testing.
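
For the manual 2.12 route, a restripe of one of these contended files could look something like this (path and target layout illustrative):

# rewrite the file with an OST-only layout; in 2.12 this is a full-file data copy
lfs migrate -E 4M -c 1 -E -1 -c 4 /fir/path/to/DEFAULT.PKG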

Comment by Gerrit Updater [ 27/Nov/19 ]

Andreas Dilger (adilger@whamcloud.com) uploaded a new patch: https://review.whamcloud.com/36881
Subject: LU-12935 obdclass: fix import connect flag printing
Project: fs/lustre-release
Branch: b2_12
Current Patch Set: 1
Commit: b2afbb33175af02608d9d4127370dc874542148b

Comment by Gerrit Updater [ 12/Dec/19 ]

Oleg Drokin (green@whamcloud.com) merged in patch https://review.whamcloud.com/36881/
Subject: LU-12935 obdclass: fix import connect flag printing
Project: fs/lustre-release
Branch: b2_12
Current Patch Set:
Commit: b66efa7d500f94d755f318d43804f0f3eb883835

Comment by Peter Jones [ 12/Dec/19 ]

Landed for 2.12.4 - does not affect master
