Lustre / LU-12935

MDT deadlock on 2.12.3 with DoM; is it missing async_discard feature?

Details

    • Type: Bug
    • Resolution: Fixed
    • Priority: Critical
    • Fix Version: Lustre 2.12.4
    • Affects Version: Lustre 2.12.3
    • Environment: CentOS 7.6
    Description

      Currently we cannot make MDT0 work again on Fir (2.12.3) due to these backtraces and lock timeouts:

      Nov 04 18:30:17 fir-md1-s1 kernel: Pid: 32408, comm: mdt01_024 3.10.0-957.27.2.el7_lustre.pl1.x86_64 #1 SMP Mon Aug 5 15:28:37 PDT 2019
      Nov 04 18:30:17 fir-md1-s1 kernel: Call Trace:
      Nov 04 18:30:17 fir-md1-s1 kernel:  [<ffffffffc10ccac0>] ldlm_completion_ast+0x430/0x860 [ptlrpc]
      Nov 04 18:30:17 fir-md1-s1 kernel:  [<ffffffffc10cd5e1>] ldlm_cli_enqueue_local+0x231/0x830 [ptlrpc]
      Nov 04 18:30:17 fir-md1-s1 kernel:  [<ffffffffc15d850b>] mdt_object_local_lock+0x50b/0xb20 [mdt]
      Nov 04 18:30:17 fir-md1-s1 kernel:  [<ffffffffc15d8b90>] mdt_object_lock_internal+0x70/0x360 [mdt]
      Nov 04 18:30:17 fir-md1-s1 kernel:  [<ffffffffc15d8ea0>] mdt_object_lock+0x20/0x30 [mdt]
      Nov 04 18:30:17 fir-md1-s1 kernel:  [<ffffffffc1617c4b>] mdt_brw_enqueue+0x44b/0x760 [mdt]
      Nov 04 18:30:17 fir-md1-s1 kernel:  [<ffffffffc15c64bf>] mdt_intent_brw+0x1f/0x30 [mdt]
      Nov 04 18:30:17 fir-md1-s1 kernel:  [<ffffffffc15debb5>] mdt_intent_policy+0x435/0xd80 [mdt]
      Nov 04 18:30:17 fir-md1-s1 kernel:  [<ffffffffc10b3d46>] ldlm_lock_enqueue+0x356/0xa20 [ptlrpc]
      Nov 04 18:30:17 fir-md1-s1 kernel:  [<ffffffffc10dc336>] ldlm_handle_enqueue0+0xa56/0x15f0 [ptlrpc]
      Nov 04 18:30:17 fir-md1-s1 kernel:  [<ffffffffc1164a12>] tgt_enqueue+0x62/0x210 [ptlrpc]
      Nov 04 18:30:17 fir-md1-s1 kernel:  [<ffffffffc116936a>] tgt_request_handle+0xaea/0x1580 [ptlrpc]
      Nov 04 18:30:17 fir-md1-s1 kernel:  [<ffffffffc111024b>] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc]
      Nov 04 18:30:17 fir-md1-s1 kernel:  [<ffffffffc1113bac>] ptlrpc_main+0xb2c/0x1460 [ptlrpc]
      Nov 04 18:30:17 fir-md1-s1 kernel:  [<ffffffffbe8c2e81>] kthread+0xd1/0xe0
      Nov 04 18:30:17 fir-md1-s1 kernel:  [<ffffffffbef77c24>] ret_from_fork_nospec_begin+0xe/0x21
      Nov 04 18:30:17 fir-md1-s1 kernel:  [<ffffffffffffffff>] 0xffffffffffffffff
      Nov 04 18:30:17 fir-md1-s1 kernel: LNet: Service thread pid 32415 was inactive for 201.19s. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one.
      Nov 04 18:30:17 fir-md1-s1 kernel: LNet: Skipped 1 previous similar message
      Nov 04 18:31:56 fir-md1-s1 kernel: LustreError: 32601:0:(ldlm_request.c:129:ldlm_expired_completion_wait()) ### lock timed out (enqueued at 1572920815, 300s ago); not entering recovery in server code, just going back to sleep ns: mdt-fir-MDT0000_UUID lock: ffffa10e0b407bc0/0x675682d854098c0 lrc: 3/0,1 mode: --/PW res: [0x200034eb7:0x1:0x0].0x0 bits 0x40/0x0 rrc: 912 type: IBT flags: 0x40210400000020 nid: local remote: 0x0 expref: -99 pid: 32601 timeout: 0 lvb_type: 0
      Nov 04 18:31:56 fir-md1-s1 kernel: LustreError: 32601:0:(ldlm_request.c:129:ldlm_expired_completion_wait()) Skipped 224 previous similar messages
      Nov 04 18:34:25 fir-md1-s1 kernel: LustreError: 32404:0:(ldlm_request.c:129:ldlm_expired_completion_wait()) ### lock timed out (enqueued at 1572920965, 300s ago); not entering recovery in server code, just going back to sleep ns: mdt-fir-MDT0000_UUID lock: ffffa12da9254ec0/0x675682d8540d5dd lrc: 3/0,1 mode: --/PW res: [0x200034eb7:0x1:0x0].0x0 bits 0x40/0x0 rrc: 913 type: IBT flags: 0x40210400000020 nid: local remote: 0x0 expref: -99 pid: 32404 timeout: 0 lvb_type: 0
      Nov 04 18:34:25 fir-md1-s1 kernel: LustreError: 32404:0:(ldlm_request.c:129:ldlm_expired_completion_wait()) Skipped 161 previous similar messages
      

      This looks similar to LU-11358.

      I also just noticed another thing: on our new 2.12.3 clients, I can't find the async_discard import flag:

      [root@sh-117-13 ~]# lctl get_param mdc.fir-*.import | grep connect_flags
          connect_flags: [ write_grant, server_lock, version, acl, xattr, create_on_write, truncate_lock, inode_bit_locks, getattr_by_fid, no_oh_for_devices, max_byte_per_rpc, early_lock_cancel, adaptive_timeouts, lru_resize, alt_checksum_algorithm, fid_is_enabled, version_recovery, pools, large_ea, full20, layout_lock, 64bithash, jobstats, umask, einprogress, grant_param, lvb_type, short_io, flock_deadlock, disp_stripe, open_by_fid, lfsck, multi_mod_rpcs, dir_stripe, subtree, bulk_mbits, second_flags, file_secctx, dir_migrate, unknown, flr, lock_convert, archive_id_array, selinux_policy, lsom, unknown2_0x4000 ]
      

      Is it unknown2_0x4000, or is it missing?

      Can you confirm whether the async_discard patch is missing from 2.12.3? If that's the case, we'll need to perform a full downgrade or emergency patching of the cluster and Lustre servers.
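      This can be checked mechanically rather than by eye. A minimal sketch, run here against a shortened copy of the flag list above (on a live client the commented lctl command would be used instead; the truncated flags string below is illustrative, not the full output):

```shell
#!/bin/sh
# On a live client this would be roughly:
#   lctl get_param -n mdc.fir-*.import | grep -c async_discard
# Offline sketch: search a captured connect_flags value instead.
flags='write_grant, server_lock, version, acl, lsom, unknown2_0x4000'

if printf '%s\n' "$flags" | grep -qw async_discard; then
    echo "async_discard negotiated"
else
    # unknown*_0x... entries are connect bits this client build cannot name
    echo "async_discard not listed; unnamed bits: $(printf '%s\n' "$flags" | grep -o 'unknown[^,]*')"
fi
```

A flag the client build cannot decode shows up as an unknown*_0x... placeholder, which is why the question above about unknown2_0x4000 is reasonable.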

      Attaching logs from MDS fir-md1-s1, which serves MDT0 and which we cannot make operational again at the moment. We even tried abort_recovery, with no luck.

      Attachments

        1. crash-sysrq-fir-md1-s1-foreach-bt.log
          1.35 MB
        2. fir-md1-s1_20191105.log
          758 kB
        3. fir-md1-s1_lfsck-results.log
          3 kB
        4. fir-md1-s1-dk.log.gz
          3.30 MB
        5. fir-md1-s1-dk2.log.gz
          395 kB
        6. fir-md1-s1-MDT0.log
          433 kB
        7. fir-rbh01.dk.log
          651 kB

        Issue Links

          Activity

            [LU-12935] MDT deadlock on 2.12.3 with DoM; is it missing async_discard feature?

            Stephane, yes, I was checking exactly that sysrq file but found no good candidates. As for DoM file efficiency, it is not about the file's size but about how many processes are accessing it at the same time. I think that with the file DEFAULT.PKG many processes are trying to write to the beginning of the file, so the DoM region becomes a bottleneck and the OST stripes are not very useful in that case; each process needs to access the DoM region first every time.

            tappro Mikhail Pershin added a comment

            A similar situation happened again after the user relaunched his jobs, but the traces on the MDS are a bit different I think, at least the first ones. Looks like the filesystem is blocked again.

            One of these traces:

            Nov 05 20:58:18 fir-md1-s1 kernel: NMI watchdog: BUG: soft lockup - CPU#38 stuck for 22s! [mdt_io02_034:41734]
            ...
            Nov 05 20:58:18 fir-md1-s1 kernel: CPU: 38 PID: 41734 Comm: mdt_io02_034 Kdump: loaded Tainted: G           OEL ------------   3.10.0-957.27.2.el7_lustre.pl1.x86_64 #1
            Nov 05 20:58:18 fir-md1-s1 kernel: Hardware name: Dell Inc. PowerEdge R6415/065PKD, BIOS 1.10.6 08/15/2019
            Nov 05 20:58:18 fir-md1-s1 kernel: task: ffffa11e2a1e4100 ti: ffffa13cd3a70000 task.ti: ffffa13cd3a70000
            Nov 05 20:58:18 fir-md1-s1 kernel: RIP: 0010:[<ffffffffbe913536>]  [<ffffffffbe913536>] native_queued_spin_lock_slowpath+0x126/0x200
            Nov 05 20:58:18 fir-md1-s1 kernel: RSP: 0018:ffffa13cd3a73800  EFLAGS: 00000246
            Nov 05 20:58:18 fir-md1-s1 kernel: RAX: 0000000000000000 RBX: ffffa130a9500be0 RCX: 0000000001310000
            Nov 05 20:58:18 fir-md1-s1 kernel: RDX: ffffa12e3f8db780 RSI: 0000000001710101 RDI: ffffa13e3710f480
            Nov 05 20:58:18 fir-md1-s1 kernel: RBP: ffffa13cd3a73800 R08: ffffa12e3f85b780 R09: 0000000000000000
            Nov 05 20:58:18 fir-md1-s1 kernel: R10: ffffa12e3f85f140 R11: ffffda91d59da200 R12: 0000000000000000
            Nov 05 20:58:18 fir-md1-s1 kernel: R13: ffffa13cd3a737a0 R14: ffffa130a9500948 R15: 0000000000000000
            Nov 05 20:58:18 fir-md1-s1 kernel: FS:  00007f38ccb1d700(0000) GS:ffffa12e3f840000(0000) knlGS:0000000000000000
            Nov 05 20:58:18 fir-md1-s1 kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
            Nov 05 20:58:18 fir-md1-s1 kernel: CR2: 000000000124f178 CR3: 000000364fa10000 CR4: 00000000003407e0
            Nov 05 20:58:18 fir-md1-s1 kernel: Call Trace:
            Nov 05 20:58:18 fir-md1-s1 kernel:  [<ffffffffbef5f2cb>] queued_spin_lock_slowpath+0xb/0xf
            Nov 05 20:58:18 fir-md1-s1 kernel:  [<ffffffffbef6d7a0>] _raw_spin_lock+0x20/0x30
            Nov 05 20:58:18 fir-md1-s1 kernel:  [<ffffffffc13e2c07>] ldiskfs_es_lru_add+0x57/0x90 [ldiskfs]
            Nov 05 20:58:18 fir-md1-s1 kernel:  [<ffffffffc13ad6a5>] ldiskfs_ext_map_blocks+0x7b5/0xf60 [ldiskfs]
            Nov 05 20:58:18 fir-md1-s1 kernel:  [<ffffffffbe902372>] ? ktime_get_ts64+0x52/0xf0
            Nov 05 20:58:18 fir-md1-s1 kernel:  [<ffffffffbe903612>] ? ktime_get+0x52/0xe0
            Nov 05 20:58:18 fir-md1-s1 kernel:  [<ffffffffc0bab14b>] ? kiblnd_post_tx_locked+0x7bb/0xa50 [ko2iblnd]
            Nov 05 20:58:18 fir-md1-s1 kernel:  [<ffffffffc13e9728>] ldiskfs_map_blocks+0x98/0x700 [ldiskfs]
            Nov 05 20:58:18 fir-md1-s1 kernel:  [<ffffffffc0b40203>] ? cfs_hash_bd_lookup_intent+0x63/0x170 [libcfs]
            Nov 05 20:58:18 fir-md1-s1 kernel:  [<ffffffffbe902372>] ? ktime_get_ts64+0x52/0xf0
            Nov 05 20:58:18 fir-md1-s1 kernel:  [<ffffffffc14bab63>] osd_ldiskfs_map_inode_pages+0x143/0x420 [osd_ldiskfs]
            Nov 05 20:58:18 fir-md1-s1 kernel:  [<ffffffffc14bc996>] osd_write_prep+0x2b6/0x360 [osd_ldiskfs]
            Nov 05 20:58:18 fir-md1-s1 kernel:  [<ffffffffc1614c3b>] mdt_obd_preprw+0x65b/0x10a0 [mdt]
            Nov 05 20:58:18 fir-md1-s1 kernel:  [<ffffffffc116d1bc>] tgt_brw_write+0xc7c/0x1cf0 [ptlrpc]
            Nov 05 20:58:18 fir-md1-s1 kernel:  [<ffffffffbe8e59c8>] ? load_balance+0x178/0x9a0
            Nov 05 20:58:18 fir-md1-s1 kernel:  [<ffffffffbe8e143c>] ? update_curr+0x14c/0x1e0
            Nov 05 20:58:18 fir-md1-s1 kernel:  [<ffffffffbe8dca58>] ? __enqueue_entity+0x78/0x80
            Nov 05 20:58:18 fir-md1-s1 kernel:  [<ffffffffbe8e367f>] ? enqueue_entity+0x2ef/0xbe0
            Nov 05 20:58:18 fir-md1-s1 kernel:  [<ffffffffc1159a7d>] ? tgt_lookup_reply+0x2d/0x190 [ptlrpc]
            Nov 05 20:58:18 fir-md1-s1 kernel:  [<ffffffffc116936a>] tgt_request_handle+0xaea/0x1580 [ptlrpc]
            Nov 05 20:58:18 fir-md1-s1 kernel:  [<ffffffffc1144da1>] ? ptlrpc_nrs_req_get_nolock0+0xd1/0x170 [ptlrpc]
            Nov 05 20:58:18 fir-md1-s1 kernel:  [<ffffffffc0b34bde>] ? ktime_get_real_seconds+0xe/0x10 [libcfs]
            Nov 05 20:58:18 fir-md1-s1 kernel:  [<ffffffffc111024b>] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc]
            Nov 05 20:58:18 fir-md1-s1 kernel:  [<ffffffffc110b805>] ? ptlrpc_wait_event+0xa5/0x360 [ptlrpc]
            Nov 05 20:58:18 fir-md1-s1 kernel:  [<ffffffffbe8cfeb4>] ? __wake_up+0x44/0x50
            Nov 05 20:58:18 fir-md1-s1 kernel:  [<ffffffffc1113bac>] ptlrpc_main+0xb2c/0x1460 [ptlrpc]
            Nov 05 20:58:18 fir-md1-s1 kernel:  [<ffffffffc1113080>] ? ptlrpc_register_service+0xf80/0xf80 [ptlrpc]
            Nov 05 20:58:18 fir-md1-s1 kernel:  [<ffffffffbe8c2e81>] kthread+0xd1/0xe0
            Nov 05 20:58:18 fir-md1-s1 kernel:  [<ffffffffbe8c2db0>] ? insert_kthread_work+0x40/0x40
            Nov 05 20:58:18 fir-md1-s1 kernel:  [<ffffffffbef77c24>] ret_from_fork_nospec_begin+0xe/0x21
            Nov 05 20:58:18 fir-md1-s1 kernel:  [<ffffffffbe8c2db0>] ? insert_kthread_work+0x40/0x40
            

            Attaching MDS logs as fir-md1-s1_20191105.log

            sthiell Stephane Thiell added a comment

            Hi Mike,

            Thanks for looking at this!

            You say the whole DoM region, but in our case the DoM size is only 128 KB. That's less than the size of an OST stripe (4 MB), so I'm not sure I understand why DoM wouldn't be as efficient. For larger DoM sizes, I understand.

            And we do have arrays of SSDs for the MDT storage. So disk I/Os for DoM should not be a problem.

            Our main problem here is that it blocked MDT0 and thus the filesystem/namespace. I was hoping you could maybe find a blocking thread in crash-sysrq-fir-md1-s1-foreach-bt.log. Do you want the associated crash dump? Let me know and I can upload it to your FTP.

            sthiell Stephane Thiell added a comment

            Stephane, a similar stack trace was seen several times in a couple of bugs, like the LU-11358 you mentioned, but in those cases there was also another thread acting as the blocker, and I don't see any other thread in the logs that would block all these mdt_brw_intent() threads.
            In fact DoM is vulnerable to access patterns like the one we have here - multiple writers to a single file - because the DoM lock covers the whole DoM region rather than individual stripes, so only one writer at a time is allowed. That can become a bottleneck and may cause lock timeouts under high load. So the general recommendation in that case is to use an ordinary file with OST stripes instead.
            Meanwhile, that shouldn't cause a server hang like the one you experienced, so I will investigate further.
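            To illustrate that recommendation, here is a sketch comparing the two layouts (the paths, stripe counts and sizes are hypothetical; DRY_RUN=1, the default, only prints the commands instead of running them):

```shell
#!/bin/sh
# Sketch only: the paths and stripe parameters below are hypothetical.
# DRY_RUN=1 (default) prints each command instead of executing it.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

# DoM-style layout: the first 128K lives on the MDT, so every process
# writing to the file head serializes on the single DoM lock.
run lfs setstripe -E 128K -L mdt -E eof -c 4 -S 4M /fir/somedir/restart_dom

# Plain OST striping: concurrent writers contend on per-stripe OST
# extent locks instead of one DoM lock on the MDT.
run lfs setstripe -c 4 -S 4M /fir/somedir/restart_striped
```

The dry-run wrapper is only there so the sketch can be reviewed safely before anything is applied to a production filesystem.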

            tappro Mikhail Pershin added a comment

            The file in question, DEFAULT.PKG, is actually used by the AERO-F software to restart a simulation (https://frg.bitbucket.io/aero-f/index.html#Restart)

            sthiell Stephane Thiell added a comment - edited

            Updated MDS dk logs attached as fir-md1-s1-dk2.log.gz and filesystem is still running.

            sthiell Stephane Thiell added a comment

            Andreas, at this point the filesystem seems to be back online. I did several stop/starts of MDT0 while I had a stat and lfs getstripe running, but it took several attempts to get a result. By then the file was no longer empty and contained 186 bytes of text; a few minutes later it was deleted.

            MDS logs:

            Nov 04 20:53:24 fir-md1-s1 kernel: Lustre: fir-MDT0000: recovery is timed out, evict stale exports
            Nov 04 20:53:24 fir-md1-s1 kernel: Lustre: fir-MDT0000: disconnecting 1 stale clients
            Nov 04 20:53:24 fir-md1-s1 kernel: Lustre: 40926:0:(ldlm_lib.c:1765:extend_recovery_timer()) fir-MDT0000: extended recovery timer reaching hard limit: 900, extend: 1
            Nov 04 20:53:24 fir-md1-s1 kernel: Lustre: 40926:0:(ldlm_lib.c:1765:extend_recovery_timer()) Skipped 77 previous similar messages
            Nov 04 20:53:24 fir-md1-s1 kernel: Lustre: fir-MDT0000: Recovery over after 2:30, of 1280 clients 1279 recovered and 1 was evicted.
            Nov 04 20:55:54 fir-md1-s1 kernel: LustreError: 21591:0:(ldlm_lockd.c:256:expired_lock_main()) ### lock callback timer expired after 150s: evicting client at 10.9.117.37@o2ib4  ns: mdt-fir-MDT0000_UUID lock: ffffa13e2631e300/0x675682d85feceb8 lrc: 3/0,0 mode: PW/PW res: [0x200035e2b:0x1:0x0].0x0 bits 0x40/0x0 rrc: 888 type: IBT flags: 0x60000400000020 nid: 10.9.117.37@o2ib4 remote: 0x4d1afbc8d54ee1c expref: 21 pid: 40926 timeout: 19031 lvb_type: 0
            Nov 04 20:55:55 fir-md1-s1 kernel: LustreError: 41055:0:(service.c:2128:ptlrpc_server_handle_request()) @@@ Dropping timed-out request from 12345-10.9.108.53@o2ib4: deadline 100:15s ago
                                                 req@ffffa13e32f00000 x1649335780509632/t0(0) o38-><?>@<?>:0/0 lens 520/0 e 0 to 0 dl 1572929740 ref 1 fl Interpret:/0/ffffffff rc 0/-1
            Nov 04 20:55:55 fir-md1-s1 kernel: Lustre: 41128:0:(service.c:2165:ptlrpc_server_handle_request()) @@@ Request took longer than estimated (100:7s); client may timeout.  req@ffffa13e285d1b00 x1649334750879840/t0(0) o38-><?>@<?>:0/0 lens 520/0 e 0 to 0 dl 1572929748 ref 1 fl Interpret:/0/ffffffff rc 0/-1
            Nov 04 20:55:55 fir-md1-s1 kernel: Lustre: 41128:0:(service.c:2165:ptlrpc_server_handle_request()) Skipped 104 previous similar messages
            Nov 04 20:55:55 fir-md1-s1 kernel: LustreError: 41055:0:(service.c:2128:ptlrpc_server_handle_request()) Skipped 110 previous similar messages
            Nov 04 20:56:58 fir-md1-s1 kernel: Lustre: fir-MDT0000: Client 9b6df508-4fd4-62e6-a19b-42f88c25e71f (at 10.8.26.4@o2ib6) reconnecting
            Nov 04 20:56:58 fir-md1-s1 kernel: Lustre: Skipped 39 previous similar messages
            Nov 04 21:00:35 fir-md1-s1 kernel: LustreError: 137-5: fir-MDT0003_UUID: not available for connect from 10.8.11.28@o2ib6 (no target). If you are running an HA pair check that the target is mounted on the other server.
            Nov 04 21:00:35 fir-md1-s1 kernel: LustreError: Skipped 667 previous similar messages
            
            sthiell Stephane Thiell added a comment

            Andreas, the file is empty!

            stat:

              File: ‘/fir/..../DEFAULT.PKG’
              Size: 0         	Blocks: 0          IO Block: 4194304 regular empty file
            Device: e64e03a8h/3863872424d	Inode: 144118891059085313  Links: 1
            Access: (0644/-rw-r--r--)  Uid: (297171/    jbho)   Gid: (28669/ cfarhat)
            Access: 2019-11-04 20:36:57.000000000 -0800
            Modify: 2019-11-04 20:36:57.000000000 -0800
            Change: 2019-11-04 20:36:57.000000000 -0800
             Birth: -
            

            Striping info:

            .../DEFAULT.PKG
              lcm_layout_gen:    4
              lcm_mirror_count:  1
              lcm_entry_count:   4
                lcme_id:             1
                lcme_mirror_id:      0
                lcme_flags:          init
                lcme_extent.e_start: 0
                lcme_extent.e_end:   131072
                  lmm_stripe_count:  0
                  lmm_stripe_size:   131072
                  lmm_pattern:       mdt
                  lmm_layout_gen:    0
                  lmm_stripe_offset: 0
            
                lcme_id:             2
                lcme_mirror_id:      0
                lcme_flags:          0
                lcme_extent.e_start: 131072
                lcme_extent.e_end:   134217728
                  lmm_stripe_count:  1
                  lmm_stripe_size:   4194304
                  lmm_pattern:       raid0
                  lmm_layout_gen:    0
                  lmm_stripe_offset: -1
            
                lcme_id:             3
                lcme_mirror_id:      0
                lcme_flags:          0
                lcme_extent.e_start: 134217728
                lcme_extent.e_end:   137438953472
                  lmm_stripe_count:  2
                  lmm_stripe_size:   4194304
                  lmm_pattern:       raid0
                  lmm_layout_gen:    0
                  lmm_stripe_offset: -1
            
                lcme_id:             4
                lcme_mirror_id:      0
                lcme_flags:          0
                lcme_extent.e_start: 137438953472
                lcme_extent.e_end:   EOF
                  lmm_stripe_count:  4
                  lmm_stripe_size:   4194304
                  lmm_pattern:       raid0
                  lmm_layout_gen:    0
                  lmm_stripe_offset: -1
            
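            For the record, a PFL layout matching the component dump above would be created with a single command of roughly this shape (sketch only; the placeholder path stands in for the elided one):

```shell
# 0-128K on the MDT (DoM component), then 4M-size OST stripes:
# 1 stripe up to 128M, 2 stripes up to 128G, 4 stripes to EOF.
lfs setstripe \
  -E 128K -L mdt \
  -E 128M -c 1 -S 4M \
  -E 128G -c 2 -S 4M \
  -E eof  -c 4 -S 4M \
  /path/to/DEFAULT.PKG
```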
            sthiell Stephane Thiell added a comment

            Thanks! We are looking into it. I got the path after a stop/start of MDT0, but I still can't stat it or get its striping info. The corresponding user runs a custom code (aeronautics) and has 4 jobs of 240 tasks each. I'm not sure whether all of them are accessing this file or how big the file is at this point, but it is almost certainly using DoM.

            Re: LFSCK, it has completed; just for information, I'm attaching the results as fir-md1-s1_lfsck-results.log

            sthiell Stephane Thiell added a comment

            It looks like the problem is related to the file "[0x200034eb7:0x1:0x0]", which you may be able to use with "lfs fid2path" to determine what its pathname is. The lock reference count is going up and down on this file, but stays around 870 or so, likely indicating that either there is a lock refcount leak, or the clients continue getting references on the file and never actually release them.

            It seems like the MDS threads are eventually completing their operations on this file:

            00010000:00020000:6.0F:1572922849.748921:0:32760:0:(ldlm_lockd.c:1348:ldlm_handle_enqueue0()) ### lock on destroyed export ffffa11d012fc400 ns: mdt-fir-MDT0000_UUID lock: ffffa12d9633d340/0x675682d8540993e lrc: 3/0,0 mode: PW/PW res: [0x200034eb7:0x1:0x0].0x0 bits 0x40/0x0 rrc: 899 type: IBT flags: 0x50200400000020 nid: 10.9.117.25@o2ib4 remote: 0x9240a829e5985bb6 expref: 283 pid: 32760 timeout: 0 lvb_type: 0
            00000400:02000400:6.0:1572922849.748967:0:32760:0:(watchdog.c:397:lcw_update_time()) Service thread pid 32760 completed after 2033.77s. This indicates the system was overloaded (too many service threads, or there were not enough hardware resources).
            :
            :
            00010000:00020000:38.0F:1572922849.932606:0:32409:0:(ldlm_lockd.c:1348:ldlm_handle_enqueue0()) ### lock on destroyed export ffffa12df7e7f800 ns: mdt-fir-MDT0000_UUID lock: ffffa12d7b6cd580/0x675682d854099d1 lrc: 3/0,0 mode: PW/PW res: [0x200034eb7:0x1:0x0].0x0 bits 0x40/0x0 rrc: 882 type: IBT flags: 0x50200400000020 nid: 10.9.117.39@o2ib4 remote: 0x6ac7aeec8dbc3b60 expref: 39 pid: 32409 timeout: 0 lvb_type: 0
            00000400:02000400:38.0:1572922849.932678:0:32409:0:(watchdog.c:397:lcw_update_time()) Service thread pid 32409 completed after 2033.95s. This indicates the system was overloaded (too many service threads, or there were not enough hardware resources).
            

            If LFSCK is still running, you could consider stopping it (via "lctl lfsck_stop"), as that may be driving up the load on the MDS.
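
            For reference, the two suggestions above could be run roughly as follows (the mount point /fir is an assumption; the FID comes from the lock dumps above):

            ```shell
            # On a client: resolve the FID to a pathname, then inspect its layout.
            lfs fid2path /fir '[0x200034eb7:0x1:0x0]'
            lfs getstripe -v "$(lfs fid2path /fir '[0x200034eb7:0x1:0x0]')"

            # On the MDS: stop any running LFSCK instance for this target.
            lctl lfsck_stop -M fir-MDT0000
            ```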

            adilger Andreas Dilger added a comment

            Attaching debug logs for the local client fir-rbh01 as fir-rbh01.dk.log; we restarted it too, so the log is complete. Two kinds of errors I noticed while gathering this:

            00000100:00020000:36.0:1572920752.390569:0:84001:0:(layout.c:2113:__req_capsule_get()) @@@ Wrong buffer for field `obd_quotactl' (1 of 1) in format `MDS_QUOTACTL': 0 vs. 112 (server)
            

            And:

            00000100:00000400:3.0:1572926475.221568:0:111431:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1572925719/real 1572925719]  req@ffff8b4664fdb600 x1649317797341696/t0(0) o48->fir-MDT0000-mdc-ffff8b66b2630000@10.0.10.51@o2ib7:12/10 lens 336/336 e 0 to 1 dl 1572926475 ref 2 fl Rpc:X/2/ffffffff rc -11/-1
            00000100:00000400:69.0:1572926475.221569:0:111377:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1572925719/real 1572925719]  req@ffff8b569d4a2400 x1649317797312576/t0(0) o41->fir-MDT0000-mdc-ffff8b66b2630000@10.0.10.51@o2ib7:12/10 lens 224/368 e 0 to 1 dl 1572926475 ref 2 fl Rpc:X/2/ffffffff rc -11/-1
            
            sthiell Stephane Thiell added a comment

            People

              tappro Mikhail Pershin
              sthiell Stephane Thiell