[LU-15517] MDT stuck in recovery if one other MDT is failed over to partner node Created: 03/Feb/22  Updated: 22/Jul/22

Status: Open
Project: Lustre
Component/s: None
Affects Version/s: Lustre 2.12.8
Fix Version/s: None

Type: Bug Priority: Critical
Reporter: Olaf Faaland Assignee: Mikhail Pershin
Resolution: Unresolved Votes: 0
Labels: llnl
Environment:

3.10.0-1160.53.1.1chaos.ch6.x86_64
zfs-0.7.11-9.8llnl.ch6.x86_64
lustre-2.12.8_6.llnl-1.ch6.x86_64


Attachments: File dk.1.jet16.txt.gz     File dk.3.txt.gz     File dmesg.jet16.txt.gz     File dmesg.txt.gz     File lu-15517.llog_reader.out    
Severity: 3
Rank (Obsolete): 9223372036854775807

 Description   

An MDT fails to enter recovery when one other MDT is running on its partner MDS. The recovery_status file indicates "WAITING" and reports the MDT running on its partner as a non-ready MDT.

The console log of the MDT unable to enter recovery repeatedly shows messages like:

[Thu Feb  3 12:15:18 2022] Lustre: 4102:0:(ldlm_lib.c:1827:extend_recovery_timer()) lquake-MDT0009: extended recovery timer reached hard limit: 900, extend: 1
[Thu Feb  3 12:15:18 2022] Lustre: 4102:0:(ldlm_lib.c:1827:extend_recovery_timer()) Skipped 29 previous similar messages
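
For reference, the state described above can be read directly on the stuck MDS with lctl; a minimal sketch (the target name is taken from the messages above):

# Dump the recovery status of the stuck target; while it is WAITING the output
# lists the non-ready MDTs it is still waiting for.
lctl get_param mdt.lquake-MDT0009.recovery_status

# Or watch all MDTs hosted on this MDS.
watch -n 10 'lctl get_param mdt.*.recovery_status'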

My steps to reproduce (a command sketch follows the list):
1. Start all MDTs on their primary MDS (in my case, that's jet1...jet16 => MDT0000...MDT000f). Allow them to complete recovery.
2. umount MDT000e on jet15, and mount it on jet16. Allow it to complete recovery.
3. umount MDT0009 on jet10, and then mount it again (on the same node, jet10, where it was happily running moments ago).
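
A minimal command sketch of the steps above, assuming ZFS-backed targets; the pool/dataset names and mount points are hypothetical:

# Step 1 is just the normal mount of every MDT on its primary MDS.

# Step 2: fail MDT000e over from jet15 to jet16.
umount /mnt/lustre/mdt000e                                 # on jet15
mount -t lustre jet15-pool/mdt000e /mnt/lustre/mdt000e     # on jet16; let recovery finish

# Step 3: restart MDT0009 in place; this is the mount that then sticks in WAITING.
umount /mnt/lustre/mdt0009                                 # on jet10
mount -t lustre jet10-pool/mdt0009 /mnt/lustre/mdt0009     # on jet10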

Debug log shows that the update log for MDT000e was not received. There are no messages in the console log regarding MDT000e after MDT0009 starts up.

I determined the PID of the thread lod_sub_recovery_thread() for MDT000e. The debug log shows the thread starts up, follows roughly this sequence of calls, and never returns from ptlrpc_set_wait().

lod_sub_prep_llog -> llog_osd_get_cat_list -> dt_locate_at -> lu_object_find_at -> ? ->
osp_attr_get -> osp_remote_sync -> ptlrpc_queue_wait -> ptlrpc_set_wait
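
For anyone retracing this, a sketch of one way to confirm where the thread is blocked, assuming the kernel exposes /proc/<pid>/stack (<pid> is the PID determined above):

# The kernel-side stack of the recovery thread; with this hang it is expected
# to end in ptlrpc_set_wait().
cat /proc/<pid>/comm
cat /proc/<pid>/stack

# Alternatively, dump all task stacks to the console log.
echo t > /proc/sysrq-trigger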

Apparently the RPC never times out, so upper layers never get to retry; there are no error messages reporting the problem, and the MDT never enters recovery.

Our patch stack is:

82ea54e (tag: 2.12.8_6.llnl) LU-13356 client: don't use OBD_CONNECT_MNE_SWAB
d06f5b2 LU-15357 mdd: fix changelog context leak
543b60b LU-9964 llite: prevent mulitple group locks
d776b67 LU-15234 lnet: Race on discovery queue
77040da Revert "LU-15234 lnet: Race on discovery queue"
bba827c LU-15234 lnet: Race on discovery queue
7faa872 LU-14865 utils: llog_reader.c printf type mismatch
5dc104e LU-13946 build: OpenZFS 2.0 compatibility
0fef268 TOSS-4917 grant: chatty warning in tgt_grant_incoming
5b70822 TOSS-4917 grant: improve debug for grant calcs
8af02e8 log lfs setstripe paths to syslog
391be81 Don't install lustre init script on systemd systems
c613926 LLNL build customizations
a4f71cd TOSS-4431 build: build ldiskfs only for x86_64
067cb55 (tag: v2_12_8, tag: 2.12.8) New release 2.12.8

See https://github.com/LLNL/lustre/tree/2.12.8-llnl for the details.



 Comments   
Comment by Olaf Faaland [ 03/Feb/22 ]

This appears to be new to us, which makes me a bit suspicious of the top commit (see LU-15453 / LU-13356), but I haven't yet proven that by reverting the patch and re-testing.

Comment by Olaf Faaland [ 03/Feb/22 ]

Debug log and console log attached.

Comment by Olaf Faaland [ 03/Feb/22 ]

Note there is no network or LNet issue within the cluster. As I mentioned, all the targets mount fine on their primary MDS and, one at a time, on their secondary MDS. lnetctl ping works correctly in both directions.
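
For completeness, a sketch of that connectivity check; the NIDs are the ones listed later in this ticket:

# Run from jet10 toward jet16 and from jet16 toward jet10; both directions succeed here.
lnetctl ping 172.19.1.126@o2ib100     # jet16
lnetctl ping 172.19.1.120@o2ib100     # jet10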

Comment by Olaf Faaland [ 03/Feb/22 ]

I've also attached the debug log and dmesg output from jet16, where both MDT000e and MDT000f are running.

Comment by Olaf Faaland [ 03/Feb/22 ]

For my reference, my local ticket is TOSS5536

Comment by Olaf Faaland [ 03/Feb/22 ]

Once the thread has entered ptlrpc_set_wait(), the debug log reports this over and over again:

00000100:00100000:13.0:1643913970.009932:0:4100:0:(client.c:2390:ptlrpc_set_wait()) set ffffa1897c264e00 going to sleep for 0 seconds
00000100:00000001:13.0:1643913970.009933:0:4100:0:(client.c:1726:ptlrpc_check_set()) Process entered
00000100:00000001:13.0:1643913970.009933:0:4100:0:(client.c:2647:ptlrpc_unregister_reply()) Process leaving (rc=1 : 1 : 1)
00000100:00000001:13.0:1643913970.009934:0:4100:0:(client.c:1195:ptlrpc_import_delay_req()) Process entered
00000100:00000001:13.0:1643913970.009934:0:4100:0:(client.c:1250:ptlrpc_import_delay_req()) Process leaving (rc=1 : 1 : 1)
00000100:00000001:13.0:1643913970.009935:0:4100:0:(client.c:2140:ptlrpc_check_set()) Process leaving (rc=0 : 0 : 0)
00000100:00000001:13.0:1643913970.009936:0:4100:0:(client.c:1726:ptlrpc_check_set()) Process entered
00000100:00000001:13.0:1643913970.009936:0:4100:0:(client.c:2647:ptlrpc_unregister_reply()) Process leaving (rc=1 : 1 : 1)
00000100:00000001:13.0:1643913970.009936:0:4100:0:(client.c:1195:ptlrpc_import_delay_req()) Process entered
00000100:00000001:13.0:1643913970.009937:0:4100:0:(client.c:1250:ptlrpc_import_delay_req()) Process leaving (rc=1 : 1 : 1)
00000100:00000001:13.0:1643913970.009937:0:4100:0:(client.c:2140:ptlrpc_check_set()) Process leaving (rc=0 : 0 : 0)  
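
For reference, a sketch of one way a trace like the one above can be captured and narrowed to the stuck thread (the debug flags are an assumption; the attached dk.* files may have been produced differently):

# Enable RPC and function-entry tracing, clear the buffer, let it run, then dump.
lctl set_param debug=+rpctrace+trace
lctl clear
sleep 60
lctl dk /tmp/dk.$(hostname).txt

# Restrict the dump to the recovery thread's PID (4100 in the excerpt above).
grep ':4100:' /tmp/dk.$(hostname).txt | less
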
Comment by Peter Jones [ 04/Feb/22 ]

Mike

Could you please advise?

Thanks

Peter

Comment by Olaf Faaland [ 04/Feb/22 ]

I re-tested with lustre-2.12.8_5.llnl (without the OBD_CONNECT_MNE_SWAB patch) and saw the same issue. I'll have to do a lot more testing to determine how far back this behavior was introduced, if that becomes necessary.

Comment by Olaf Faaland [ 07/Feb/22 ]

I tested with lustre-2.12.7_2.llnl-2.ch6.x86_64 and reproduced the issue, so it's not new.  See https://github.com/LLNL/lustre/releases/tag/2.12.7_2.llnl for the code.  Patch stack is:

* f372cfb (tag: 2.12.7_2.llnl) LU-14865 utils: llog_reader.c printf type mismatch
* 5b7709c LU-14733 o2iblnd: Avoid double posting invalidate
* cdde64a LU-14733 o2iblnd: Move racy NULL assignment
* 38fade1 LU-12836 osd-zfs: Catch all ZFS pool change events
* c7d6ecf LU-13946 build: OpenZFS 2.0 compatibility
* ec7dd98 TOSS-4917 grant: chatty warning in tgt_grant_incoming
* eb6d90f TOSS-4917 grant: improve debug for grant calcs
* 4f202a3 log lfs setstripe paths to syslog
* 939ba8a Don't install lustre init script on systemd systems
* 3cfca06 LLNL build customizations
* ce7af9a TOSS-4431 build: build ldiskfs only for x86_64
* 6030d0c (tag: v2_12_7, tag: 2.12.7) New release 2.12.7

In this test I waited 20 minutes; I don't recall how long I waited in my earlier experiments.

Comment by Olaf Faaland [ 07/Feb/22 ]

I reproduced with lustre-2.12.5_10.llnl

Comment by Olaf Faaland [ 21/Mar/22 ]

Ping

Comment by Mikhail Pershin [ 22/Mar/22 ]

Olaf, are you able to reproduce this on a clean system, or does it only reproduce on the working cluster? In the latter case, could you collect update llog info from the MDT with the stuck recovery (jet16 in your last example)? You can use debugfs for that:

# debugfs -c <device> -R "ls -l update_log_dir/" > jet16_update_log_dir.txt

 

Thanks

Comment by Olaf Faaland [ 07/Apr/22 ]

Hi Mikhail,
Sorry for the delay. Good question - I was not able to reproduce on a clean system (I thought I'd tried that, but apparently not).

I reproduced on jet again. In this instance, lquake-MDT000e is failed over and running on jet16, when it normally would be on jet15. lquake-MDT0009 was umounted and remounted on jet10, and is stuck in WAITING.

NIDs are as follows, in case it helps interpret what you see:

ejet10: 172.19.1.120@o2ib100
ejet15: 172.19.1.125@o2ib100
ejet16: 172.19.1.126@o2ib100

The file containing the contents of update_log_dir/ is 2.1GB compressed. Do you still want me to send it to you? If not, do you want me to run llog_reader on each file and collect the results instead? The ones I spot-checked reported 0 records, even when they were large. Here's an example:

[root@jet10:update_log_dir]# stat [0x40002d2c5:0x2:0x0]
  File: '[0x40002d2c5:0x2:0x0]'
  Size: 938979232       Blocks: 14913761   IO Block: 512    regular file
Device: 31h/49d Inode: 28181515    Links: 1
Access: (0644/-rw-r--r--)  Uid: (56288/ UNKNOWN)   Gid: (56288/ UNKNOWN)
Access: 2018-02-23 08:14:08.000000000 -0800
Modify: 2018-02-23 08:14:08.000000000 -0800
Change: 2018-02-23 08:14:08.000000000 -0800
 Birth: -

[root@jet10:update_log_dir]# llog_reader [0x40002d2c5:0x2:0x0]
Header size : 32768      llh_size : 472
Time : Fri Feb 23 08:14:08 2018
Number of records: 0    cat_idx: 13     last_idx: 161679
Target uuid : 
-----------------------

thanks,
Olaf

Comment by Olaf Faaland [ 07/Apr/22 ]

I ran llog_reader on each of the files in update_log_dir and attached the output in the file named lu-15517.llog_reader.out.

The files with Number of records > 0 are:
[0x400000401:0x1:0x0]
[0x400000402:0x1:0x0]
[0x400000403:0x1:0x0]
[0x400000404:0x1:0x0]
[0x400000405:0x1:0x0]
[0x400000406:0x1:0x0]
[0x400000407:0x1:0x0]
[0x400000408:0x1:0x0]
[0x400000409:0x1:0x0]
[0x40000040a:0x1:0x0]
[0x40000040b:0x1:0x0]
[0x40000040c:0x1:0x0]
[0x40000040d:0x1:0x0]
[0x40000040e:0x1:0x0]
[0x40000040f:0x1:0x0]

Do you want me to do something with the files in the OI they refer to?
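
For reference, a sketch of one way such a sweep can be scripted, assuming the backend's update_log_dir is reachable as in the stat/llog_reader example above and that llog_reader is in PATH:

# Run llog_reader over every file in update_log_dir and keep per-file output.
cd update_log_dir
for f in *; do
    echo "==== $f"
    llog_reader "$f"
done > /tmp/lu-15517.llog_reader.out 2>&1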

Comment by Mikhail Pershin [ 07/Apr/22 ]

Olaf, please check these files with llog_reader. I wonder if any of them contains 'holes' in the records, but llog_reader in your version of Lustre might not be able to translate update records correctly and may report them as an 'Unknown' type; you can check. I'd also ask you to upload the files listed above for analysis here, is that possible?

Comment by Mikhail Pershin [ 07/Apr/22 ]

Ah, I missed that you've uploaded the llog_reader output. It looks like it knows about the update log; let me check that first.

Comment by Mikhail Pershin [ 07/Apr/22 ]

Olaf, no need to upload all the llogs right now; let me think a bit first.

Comment by Mikhail Pershin [ 09/Apr/22 ]

Olaf, the reason I asked about reproducing this on a clean system is that there have been several issues with update log processing that cause MDT-MDT recovery to fail. Could you check whether any MDT reported errors from lod_sub_recovery_thread() or llog_process_thread(), or any other llog-processing complaints, while the MDT is stuck in the WAITING state?
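
A sketch of the sort of search this asks for, run on each MDS while the target sits in WAITING (the log locations are an assumption):

# Look for recovery/llog processing complaints in the console log and syslog.
dmesg -T | grep -Ei 'lod_sub_recovery_thread|llog_process_thread|llog.*error'
grep -E 'lod_sub_recovery_thread|llog_process_thread' /var/log/messages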

Comment by Olaf Faaland [ 11/Apr/22 ]

Hi Mikhail,
I found several errors reported by lod_sub_process_config(). They were not encountered during my experiment, but earlier today when I stopped the targets prior to a software update. Note that they appeared on several nodes, all MDTs.

Apr 11 14:46:58 jet2 kernel: LustreError: 27555:0:(lod_dev.c:267:lod_sub_process_config()) lquake-MDT0001-mdtlov: error cleaning up LOD index 3: cmd 0xcf031 : rc = -19
Apr 11 14:46:58 jet3 kernel: LustreError: 15820:0:(lod_dev.c:267:lod_sub_process_config()) lquake-MDT0002-mdtlov: error cleaning up LOD index 3: cmd 0xcf031 : rc = -19
Apr 11 14:46:58 jet5 kernel: LustreError: 5267:0:(lod_dev.c:267:lod_sub_process_config()) lquake-MDT0004-mdtlov: error cleaning up LOD index 3: cmd 0xcf031 : rc = -19
Apr 11 14:46:58 jet7 kernel: LustreError: 22882:0:(lod_dev.c:267:lod_sub_process_config()) lquake-MDT0006-mdtlov: error cleaning up LOD index 3: cmd 0xcf031 : rc = -19
Apr 11 14:46:58 jet8 kernel: LustreError: 21185:0:(lod_dev.c:267:lod_sub_process_config()) lquake-MDT0007-mdtlov: error cleaning up LOD index 3: cmd 0xcf031 : rc = -19
Apr 11 14:46:58 jet9 kernel: LustreError: 3160:0:(lod_dev.c:267:lod_sub_process_config()) lquake-MDT0008-mdtlov: error cleaning up LOD index 1: cmd 0xcf031 : rc = -19
Apr 11 14:46:58 jet10 kernel: LustreError: 18778:0:(lod_dev.c:267:lod_sub_process_config()) lquake-MDT0009-mdtlov: error cleaning up LOD index 1: cmd 0xcf031 : rc = -19
Apr 11 14:46:58 jet12 kernel: LustreError: 3992:0:(lod_dev.c:267:lod_sub_process_config()) lquake-MDT000b-mdtlov: error cleaning up LOD index 3: cmd 0xcf031 : rc = -19
Apr 11 14:46:58 jet14 kernel: LustreError: 1060:0:(lod_dev.c:267:lod_sub_process_config()) lquake-MDT000d-mdtlov: error cleaning up LOD index 1: cmd 0xcf031 : rc = -19
Apr 11 14:46:58 jet15 kernel: LustreError: 22650:0:(lod_dev.c:267:lod_sub_process_config()) lquake-MDT000e-mdtlov: error cleaning up LOD index 3: cmd 0xcf031 : rc = -19
Comment by Olaf Faaland [ 11/Apr/22 ]

I also saw these errors on jet6 (which was hosting MDT0005):

2022-04-11 14:48:39 [1649713719.122533] LustreError: 30035:0:(osp_object.c:594:osp_attr_get()) lquake-MDT0000-osp-MDT0005:osp_attr_get update error [0x20000000a:0x0:0x0]: rc = -5
2022-04-11 14:48:39 [1649713719.141533] LustreError: 30035:0:(llog_cat.c:447:llog_cat_close()) lquake-MDT0000-osp-MDT0005: failure destroying log during cleanup: rc = -5
2022-04-11 14:48:39 [1649713719.467531] Lustre: server umount lquake-MDT0005 complete 
Comment by Olaf Faaland [ 11/Apr/22 ]

In all cases, the lod_sub_process_config() error was preceded by an osp_disconnect() error, like this:

2022-04-11 14:46:58 [1649713618.877752] Lustre: Failing over lquake-MDT0004
2022-04-11 14:46:58 [1649713618.896752] LustreError: 11-0: lquake-MDT0001-osp-MDT0004: operation mds_disconnect to node 172.19.1.112@o2ib100 failed: rc = -107
2022-04-11 14:46:58 [1649713618.915752] Lustre: lquake-MDT000e-osp-MDT0004: Connection to lquake-MDT000e (at 172.19.1.125@o2ib100) was lost; in progress operations using this service will wait for recovery to complete
2022-04-11 14:46:58 [1649713618.936752] LustreError: 5267:0:(osp_dev.c:485:osp_disconnect()) lquake-MDT0003-osp-MDT0004: can't disconnect: rc = -19
2022-04-11 14:46:58 [1649713618.950752] LustreError: 5267:0:(lod_dev.c:267:lod_sub_process_config()) lquake-MDT0004-mdtlov: error cleaning up LOD index 3: cmd 0xcf031 : rc = -19 
Comment by Mikhail Pershin [ 19/Apr/22 ]

Olaf, I'd propose adding several patches to your stack. I think the general problem is that the update llogs became quite big on one or several MDTs. There is a series of patches to manage update llogs more gracefully (top to bottom):

These might help the MDT to process the update logs after all, if the problem is in them as I expect. Another patch that could also be useful is about OSP disconnect handling between MDTs:

https://review.whamcloud.com/44753 - osp: do force disconnect if import is not ready
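
A sketch of how that change could be pulled in for testing; the Gerrit project path and the patch-set number are assumptions, so substitute whatever is current on the review:

# Fetch change 44753 from the Lustre Gerrit and apply it on top of the local branch.
git fetch https://review.whamcloud.com/fs/lustre-release refs/changes/53/44753/<patchset>
git cherry-pick FETCH_HEAD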

Mike

 
