[ 0.000000] Booting Linux on physical CPU 0x0000080000 [0x481fd010]
[ 0.000000] Linux version 5.10.0-188.0.0.101.oe2203sp3.aarch64 (root@dc-64g.compass-ci) (gcc_old (GCC) 10.3.1, GNU ld (GNU Binutils) 2.37) #1 SMP Wed Feb 21 13:52:43 CST 2024
[ 0.000000] efi: EFI v2.70 by EDK II
[ 0.000000] efi: SMBIOS 3.0=0x2f800000 ACPI 2.0=0x2f870000 MEMATTR=0x2d87d018 ESRT=0x2d8f6f18 MOKvar=0x2d880000 MEMRESERVE=0x2d890098
[ 0.000000] esrt: Reserving ESRT space from 0x000000002d8f6f18 to 0x000000002d8f6f78.
[ 0.000000] ACPI: Early table checksum verification disabled
[ 0.000000] ACPI: RSDP 0x000000002F870000 000024 (v02 HISI )
[ 0.000000] ACPI: XSDT 0x000000002F860000 00009C (v01 HISI HIP08 00000000 01000013)
[ 0.000000] ACPI: FACP 0x000000002F350000 000114 (v06 HISI HIP08 00000000 HISI 20151124)
[ 0.000000] ACPI: DSDT 0x000000002F0C0000 00CE0E (v02 HISI HIP08 00000000 INTL 20181213)
[ 0.000000] ACPI: BERT 0x000000002F7A0000 000030 (v01 HISI HIP08 00000000 HISI 20151124)
[ 0.000000] ACPI: HEST 0x000000002F780000 00058C (v01 HISI HIP08 00000000 HISI 20151124)
[ 0.000000] ACPI: ERST 0x000000002F740000 000230 (v01 HISI HIP08 00000000 HISI 20151124)
[ 0.000000] ACPI: EINJ 0x000000002F730000 000170 (v01 HISI HIP08 00000000 HISI 20151124)
[ 0.000000] ACPI: GTDT 0x000000002F330000 00007C (v02 HISI HIP08 00000000 HISI 20151124)
[ 0.000000] ACPI: SDEI 0x000000002F310000 000030 (v01 HISI HIP08 00000000 HISI 20151124)
[ 0.000000] ACPI: MCFG 0x000000002F110000 00003C (v01 HISI HIP08 00000000 HISI 20151124)
[ 0.000000] ACPI: SLIT 0x000000002F100000 00003C (v01 HISI HIP08 00000000 HISI 20151124)
[ 0.000000] ACPI: SPCR 0x000000002F0F0000 000050 (v02 HISI HIP08 00000000 HISI 20151124)
[ 0.000000] ACPI: SRAT 0x000000002F0E0000 0007D0 (v03 HISI HIP08 00000000 HISI 20151124)
[ 0.000000] ACPI: APIC 0x000000002F0D0000 001E6C (v04 HISI HIP08 00000000 HISI 20151124)
[ 0.000000] ACPI: IORT 0x000000002F0B0000 001060 (v00 HISI HIP08 00000000 INTL 20181213)
[ 0.000000] ACPI: PPTT 0x000000002D8D0000 0031B0 (v01 HISI HIP08 00000000 HISI 20151124)
[ 0.000000] ACPI: SPMI 0x000000002D8E0000 000041 (v05 HISI HIP08 00000000 HISI 20151124)
[ 0.000000] ACPI: SPCR: console: uart,mmio,0x3f00002f8,115200
[ 0.000000] ACPI: SRAT: Node 0 PXM 0 [mem 0x2080000000-0x3fffffffff]
[ 0.000000] ACPI: SRAT: Node 1 PXM 1 [mem 0x4000000000-0x5fffffffff]
[ 0.000000] ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
[ 0.000000] ACPI: SRAT: Node 2 PXM 2 [mem 0x202000000000-0x203fffffffff]
[ 0.000000] ACPI: SRAT: Node 3 PXM 3 [mem 0x204000000000-0x205fffffffff]
[ 0.000000] NUMA: NODE_DATA [mem 0x3fffffa2c0-0x3fffffffff]
[ 0.000000] NUMA: NODE_DATA [mem 0x5fffffa2c0-0x5fffffffff]
[ 0.000000] NUMA: NODE_DATA [mem 0x203fffffa2c0-0x203fffffffff]
[ 0.000000] NUMA: NODE_DATA [mem 0x205fbfde12c0-0x205fbfde6fff]
[ 0.000000] Zone ranges:
[ 0.000000]   DMA      [mem 0x0000000000000000-0x00000000ffffffff]
[ 0.000000]   DMA32    empty
[ 0.000000]   Normal   [mem 0x0000000100000000-0x0000205fffffffff]
[ 0.000000] Movable zone start for each node
[ 0.000000] Early memory node ranges
[ 0.000000]   node 0: [mem 0x0000000000000000-0x000000000000ffff]
[ 0.000000]   node 0: [mem 0x0000000000010000-0x000000002d87ffff]
[ 0.000000]   node 0: [mem 0x000000002d880000-0x000000002d88ffff]
[ 0.000000]   node 0: [mem 0x000000002d890000-0x000000002f11ffff]
[ 0.000000]   node 0: [mem 0x000000002f120000-0x000000002f30ffff]
[ 0.000000]   node 0: [mem 0x000000002f310000-0x000000002f31ffff]
[ 0.000000]   node 0: [mem 0x000000002f320000-0x000000002f32ffff]
[ 0.000000]   node 0: [mem 0x000000002f330000-0x000000002f33ffff]
[ 0.000000]   node 0: [mem 0x000000002f340000-0x000000002f34ffff]
[ 0.000000]   node 0: [mem 0x000000002f350000-0x000000002f35ffff]
[ 0.000000]   node 0: [mem 0x000000002f360000-0x000000002f43ffff]
[ 0.000000]   node 0: [mem 0x000000002f440000-0x000000002f48ffff]
[ 0.000000]   node 0: [mem 0x000000002f490000-0x000000002f52ffff]
[ 0.000000]   node 0: [mem 0x000000002f530000-0x000000002f54ffff]
[ 0.000000]   node 0: [mem 0x000000002f550000-0x000000002f68ffff]
[ 0.000000]   node 0: [mem 0x000000002f690000-0x000000002f74ffff]
[ 0.000000]   node 0: [mem 0x000000002f750000-0x000000002f751fff]
[ 0.000000]   node 0: [mem 0x000000002f752000-0x000000002f75ffff]
[ 0.000000]   node 0: [mem 0x000000002f760000-0x000000002f760fff]
[ 0.000000]   node 0: [mem 0x000000002f761000-0x000000002f76ffff]
[ 0.000000]   node 0: [mem 0x000000002f770000-0x000000002f771fff]
[ 0.000000]   node 0: [mem 0x000000002f772000-0x000000002f78ffff]
[ 0.000000]   node 0: [mem 0x000000002f790000-0x000000002f791fff]
[ 0.000000]   node 0: [mem 0x000000002f792000-0x000000002f7affff]
[ 0.000000]   node 0: [mem 0x000000002f7b0000-0x000000002f7b0fff]
[ 0.000000]   node 0: [mem 0x000000002f7b1000-0x000000002f7effff]
[ 0.000000]   node 0: [mem 0x000000002f7f0000-0x000000002f810fff]
[ 0.000000]   node 0: [mem 0x000000002f811000-0x000000002f87ffff]
[ 0.000000]   node 0: [mem 0x000000002f880000-0x000000002fb1ffff]
[ 0.000000]   node 0: [mem 0x000000002fb20000-0x000000003eecffff]
[ 0.000000]   node 0: [mem 0x000000003eed0000-0x000000003eefffff]
[ 0.000000]   node 0: [mem 0x000000003ef00000-0x000000003fbfffff]
[ 0.000000]   node 0: [mem 0x0000000040000000-0x0000000043ffffff]
[ 0.000000]   node 0: [mem 0x0000000044030000-0x000000004fffffff]
[ 0.000000]   node 0: [mem 0x0000000050000000-0x000000007fffffff]
[ 0.000000]   node 0: [mem 0x0000002080000000-0x0000003fffffffff]
[ 0.000000]   node 1: [mem 0x0000004000000000-0x0000005fffffffff]
[ 0.000000]   node 2: [mem 0x0000202000000000-0x0000203fffffffff]
[ 0.000000]   node 3: [mem 0x0000204000000000-0x0000205fffffffff]
[ 0.000000] Initmem setup node 0 [mem 0x0000000000000000-0x0000003fffffffff]
[ 0.000000] On node 0 totalpages: 33553360
[ 0.000000]   DMA zone: 8176 pages used for memmap
[ 0.000000]   DMA zone: 0 pages reserved
[ 0.000000]   DMA zone: 523216 pages, LIFO batch:63
[ 0.000000]   Normal zone: 516096 pages used for memmap
[ 0.000000]   Normal zone: 33030144 pages, LIFO batch:63
[ 0.000000] Initmem setup node 1 [mem 0x0000004000000000-0x0000005fffffffff]
[ 0.000000] On node 1 totalpages: 33554432
[ 0.000000]   Normal zone: 524288 pages used for memmap
[ 0.000000]   Normal zone: 33554432 pages, LIFO batch:63
[ 0.000000] Initmem setup node 2 [mem 0x0000202000000000-0x0000203fffffffff]
[ 0.000000] On node 2 totalpages: 33554432
[ 0.000000]   Normal zone: 524288 pages used for memmap
[ 0.000000]   Normal zone: 33554432 pages, LIFO batch:63
[ 0.000000] Initmem setup node 3 [mem 0x0000204000000000-0x0000205fffffffff]
[ 0.000000] On node 3 totalpages: 33554432
[ 0.000000]   Normal zone: 524288 pages used for memmap
[ 0.000000]   Normal zone: 33554432 pages, LIFO batch:63
[ 0.000000] Reserving 256MB of low memory at 1792MB for crashkernel (low RAM limit: 4096MB)
[ 0.000000] Reserving 1024MB of memory at 33946624MB for crashkernel (System RAM: 524283MB)
[ 0.000000] psci: probing for conduit method from ACPI.
[ 0.000000] psci: PSCIv1.1 detected in firmware.
[ 0.000000] psci: Using standard PSCI v0.2 function IDs
[ 0.000000] psci: MIGRATE_INFO_TYPE not supported.
[ 0.000000] psci: SMC Calling Convention v1.1
[ 0.000000] ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x80000 -> Node 0
[ 0.000000] ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x80100 -> Node 0
[ 0.000000] ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x80200 -> Node 0
[ 0.000000] ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x80300 -> Node 0
[ 0.000000] ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x90000 -> Node 0
[ 0.000000] ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x90100 -> Node 0
[ 0.000000] ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x90200 -> Node 0
[ 0.000000] ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x90300 -> Node 0
[ 0.000000] ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xa0000 -> Node 0
[ 0.000000] ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xa0100 -> Node 0
[ 0.000000] ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xa0200 -> Node 0
[ 0.000000] ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xa0300 -> Node 0
[ 0.000000] ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xb0000 -> Node 0
[ 0.000000] ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xb0100 -> Node 0
[ 0.000000] ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xb0200 -> Node 0
[ 0.000000] ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xb0300 -> Node 0
[ 0.000000] ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xc0000 -> Node 0
[ 0.000000] ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xc0100 -> Node 0
[ 0.000000] ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xc0200 -> Node 0
[ 0.000000] ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xc0300 -> Node 0
[ 0.000000] ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xd0000 -> Node 0
[ 0.000000] ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xd0100 -> Node 0
[ 0.000000] ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xd0200 -> Node 0
[ 0.000000] ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xd0300 -> Node 0
[ 0.000000] ACPI: NUMA: SRAT: PXM 1 -> MPIDR 0x180000 -> Node 1
[ 0.000000] ACPI: NUMA: SRAT: PXM 1 -> MPIDR 0x180100 -> Node 1
[ 0.000000] ACPI: NUMA: SRAT: PXM 1 -> MPIDR 0x180200 -> Node 1
[ 0.000000] ACPI: NUMA: SRAT: PXM 1 -> MPIDR 0x180300 -> Node 1
[ 0.000000] ACPI: NUMA: SRAT: PXM 1 -> MPIDR 0x190000 -> Node 1
[ 0.000000] ACPI: NUMA: SRAT: PXM 1 -> MPIDR 0x190100 -> Node 1
[ 0.000000] ACPI: NUMA: SRAT: PXM 1 -> MPIDR 0x190200 -> Node 1
[ 0.000000] ACPI: NUMA: SRAT: PXM 1 -> MPIDR 0x190300 -> Node 1
[ 0.000000] ACPI: NUMA: SRAT: PXM 1 -> MPIDR 0x1a0000 -> Node 1
[ 0.000000] ACPI: NUMA: SRAT: PXM 1 -> MPIDR 0x1a0100 -> Node 1
[ 0.000000] ACPI: NUMA: SRAT: PXM 1 -> MPIDR 0x1a0200 -> Node 1
[ 0.000000] ACPI: NUMA: SRAT: PXM 1 -> MPIDR 0x1a0300 -> Node 1
[ 0.000000] ACPI: NUMA: SRAT: PXM 1 -> MPIDR 0x1b0000 -> Node 1
[ 0.000000] ACPI: NUMA: SRAT: PXM 1 -> MPIDR 0x1b0100 -> Node 1
[ 0.000000] ACPI: NUMA: SRAT: PXM 1 -> MPIDR 0x1b0200 -> Node 1
[ 0.000000] ACPI: NUMA: SRAT: PXM 1 -> MPIDR 0x1b0300 -> Node 1
[ 0.000000] ACPI: NUMA: SRAT: PXM 1 -> MPIDR 0x1c0000 -> Node 1
[ 0.000000] ACPI: NUMA: SRAT: PXM 1 -> MPIDR 0x1c0100 -> Node 1
[ 0.000000] ACPI: NUMA: SRAT: PXM 1 -> MPIDR 0x1c0200 -> Node 1
[ 0.000000] ACPI: NUMA: SRAT: PXM 1 -> MPIDR 0x1c0300 -> Node 1
[ 0.000000] ACPI: NUMA: SRAT: PXM 1 -> MPIDR 0x1d0000 -> Node 1
[ 0.000000] ACPI: NUMA: SRAT: PXM 1 -> MPIDR 0x1d0100 -> Node 1
[ 0.000000] ACPI: NUMA: SRAT: PXM 1 -> MPIDR 0x1d0200 -> Node 1
[ 0.000000] ACPI: NUMA: SRAT: PXM 1 -> MPIDR 0x1d0300 -> Node 1
[ 0.000000] ACPI: NUMA: SRAT: PXM 2 -> MPIDR 0x280000 -> Node 2
[ 0.000000] ACPI: NUMA: SRAT: PXM 2 -> MPIDR 0x280100 -> Node 2
[ 0.000000] ACPI: NUMA: SRAT: PXM 2 -> MPIDR 0x280200 -> Node 2
[ 0.000000] ACPI: NUMA: SRAT: PXM 2 -> MPIDR 0x280300 -> Node 2
[ 0.000000] ACPI: NUMA: SRAT: PXM 2 -> MPIDR 0x290000 -> Node 2
[ 0.000000] ACPI: NUMA: SRAT: PXM 2 -> MPIDR 0x290100 -> Node 2
[ 0.000000] ACPI: NUMA: SRAT: PXM 2 -> MPIDR 0x290200 -> Node 2
[ 0.000000] ACPI: NUMA: SRAT: PXM 2 -> MPIDR 0x290300 -> Node 2
[ 0.000000] ACPI: NUMA: SRAT: PXM 2 -> MPIDR 0x2a0000 -> Node 2
[ 0.000000] ACPI: NUMA: SRAT: PXM 2 -> MPIDR 0x2a0100 -> Node 2
[ 0.000000] ACPI: NUMA: SRAT: PXM 2 -> MPIDR 0x2a0200 -> Node 2
[ 0.000000] ACPI: NUMA: SRAT: PXM 2 -> MPIDR 0x2a0300 -> Node 2
[ 0.000000] ACPI: NUMA: SRAT: PXM 2 -> MPIDR 0x2b0000 -> Node 2
[ 0.000000] ACPI: NUMA: SRAT: PXM 2 -> MPIDR 0x2b0100 -> Node 2
[ 0.000000] ACPI: NUMA: SRAT: PXM 2 -> MPIDR 0x2b0200 -> Node 2
[ 0.000000] ACPI: NUMA: SRAT: PXM 2 -> MPIDR 0x2b0300 -> Node 2
[ 0.000000] ACPI: NUMA: SRAT: PXM 2 -> MPIDR 0x2c0000 -> Node 2
[ 0.000000] ACPI: NUMA: SRAT: PXM 2 -> MPIDR 0x2c0100 -> Node 2
[ 0.000000] ACPI: NUMA: SRAT: PXM 2 -> MPIDR 0x2c0200 -> Node 2
[ 0.000000] ACPI: NUMA: SRAT: PXM 2 -> MPIDR 0x2c0300 -> Node 2
[ 0.000000] ACPI: NUMA: SRAT: PXM 2 -> MPIDR 0x2d0000 -> Node 2
[ 0.000000] ACPI: NUMA: SRAT: PXM 2 -> MPIDR 0x2d0100 -> Node 2
[ 0.000000] ACPI: NUMA: SRAT: PXM 2 -> MPIDR 0x2d0200 -> Node 2
[ 0.000000] ACPI: NUMA: SRAT: PXM 2 -> MPIDR 0x2d0300 -> Node 2
[ 0.000000] ACPI: NUMA: SRAT: PXM 3 -> MPIDR 0x380000 -> Node 3
[ 0.000000] ACPI: NUMA: SRAT: PXM 3 -> MPIDR 0x380100 -> Node 3
[ 0.000000] ACPI: NUMA: SRAT: PXM 3 -> MPIDR 0x380200 -> Node 3
[ 0.000000] ACPI: NUMA: SRAT: PXM 3 -> MPIDR 0x380300 -> Node 3
[ 0.000000] ACPI: NUMA: SRAT: PXM 3 -> MPIDR 0x390000 -> Node 3
[ 0.000000] ACPI: NUMA: SRAT: PXM 3 -> MPIDR 0x390100 -> Node 3
[ 0.000000] ACPI: NUMA: SRAT: PXM 3 -> MPIDR 0x390200 -> Node 3
[ 0.000000] ACPI: NUMA: SRAT: PXM 3 -> MPIDR 0x390300 -> Node 3
[ 0.000000] ACPI: NUMA: SRAT: PXM 3 -> MPIDR 0x3a0000 -> Node 3
[ 0.000000] ACPI: NUMA: SRAT: PXM 3 -> MPIDR 0x3a0100 -> Node 3
[ 0.000000] ACPI: NUMA: SRAT: PXM 3 -> MPIDR 0x3a0200 -> Node 3
[ 0.000000] ACPI: NUMA: SRAT: PXM 3 -> MPIDR 0x3a0300 -> Node 3
[ 0.000000] ACPI: NUMA: SRAT: PXM 3 -> MPIDR 0x3b0000 -> Node 3
[ 0.000000] ACPI: NUMA: SRAT: PXM 3 -> MPIDR 0x3b0100 -> Node 3
[ 0.000000] ACPI: NUMA: SRAT: PXM 3 -> MPIDR 0x3b0200 -> Node 3
[ 0.000000] ACPI: NUMA: SRAT: PXM 3 -> MPIDR 0x3b0300 -> Node 3
[ 0.000000] ACPI: NUMA: SRAT: PXM 3 -> MPIDR 0x3c0000 -> Node 3
[ 0.000000] ACPI: NUMA: SRAT: PXM 3 -> MPIDR 0x3c0100 -> Node 3
[ 0.000000] ACPI: NUMA: SRAT: PXM 3 -> MPIDR 0x3c0200 -> Node 3
[ 0.000000] ACPI: NUMA: SRAT: PXM 3 -> MPIDR 0x3c0300 -> Node 3
[ 0.000000] ACPI: NUMA: SRAT: PXM 3 -> MPIDR 0x3d0000 -> Node 3
[ 0.000000] ACPI: NUMA: SRAT: PXM 3 -> MPIDR 0x3d0100 -> Node 3
[ 0.000000] ACPI: NUMA: SRAT: PXM 3 -> MPIDR 0x3d0200 -> Node 3
[ 0.000000] ACPI: NUMA: SRAT: PXM 3 -> MPIDR 0x3d0300 -> Node 3
[ 0.000000] percpu: Embedded 35 pages/cpu s106328 r8192 d28840 u143360
[ 0.000000] pcpu-alloc: s106328 r8192 d28840 u143360 alloc=35*4096
[ 0.000000] pcpu-alloc: [0] 00 [0] 01 [0] 02 [0] 03 [0] 04 [0] 05 [0] 06 [0] 07
[ 0.000000] pcpu-alloc: [0] 08 [0] 09 [0] 10 [0] 11 [0] 12 [0] 13 [0] 14 [0] 15
[ 0.000000] pcpu-alloc: [0] 16 [0] 17 [0] 18 [0] 19 [0] 20 [0] 21 [0] 22 [0] 23
[ 0.000000] pcpu-alloc: [1] 24 [1] 25 [1] 26 [1] 27 [1] 28 [1] 29 [1] 30 [1] 31
[ 0.000000] pcpu-alloc: [1] 32 [1] 33 [1] 34 [1] 35 [1] 36 [1] 37 [1] 38 [1] 39
[ 0.000000] pcpu-alloc: [1] 40 [1] 41 [1] 42 [1] 43 [1] 44 [1] 45 [1] 46 [1] 47
[ 0.000000] pcpu-alloc: [2] 48 [2] 49 [2] 50 [2] 51 [2] 52 [2] 53 [2] 54 [2] 55
[ 0.000000] pcpu-alloc: [2] 56 [2] 57 [2] 58 [2] 59 [2] 60 [2] 61 [2] 62 [2] 63
[ 0.000000] pcpu-alloc: [2] 64 [2] 65 [2] 66 [2] 67 [2] 68 [2] 69 [2] 70 [2] 71
[ 0.000000] pcpu-alloc: [3] 72 [3] 73 [3] 74 [3] 75 [3] 76 [3] 77 [3] 78 [3] 79
[ 0.000000] pcpu-alloc: [3] 80 [3] 81 [3] 82 [3] 83 [3] 84 [3] 85 [3] 86 [3] 87
[ 0.000000] pcpu-alloc: [3] 88 [3] 89 [3] 90 [3] 91 [3] 92 [3] 93 [3] 94 [3] 95
[ 0.000000] Detected VIPT I-cache on CPU0
[ 0.000000] CPU features: detected: GIC system register CPU interface
[ 0.000000] CPU features: detected: Virtualization Host Extensions
[ 0.000000] CPU features: detected: Hardware dirty bit management
[ 0.000000] alternatives: patching kernel code
[ 0.000000] Fallback order for Node 0: 0 1 2 3
[ 0.000000] Fallback order for Node 1: 1 0 2 3
[ 0.000000] Fallback order for Node 2: 2 3 0 1
[ 0.000000] Fallback order for Node 3: 3 2 0 1
[ 0.000000] Built 4 zonelists, mobility grouping on. Total pages: 132119520
[ 0.000000] Policy zone: Normal
[ 0.000000] Kernel command line: BOOT_IMAGE=/vmlinuz-5.10.0-188.0.0.101.oe2203sp3.aarch64 root=/dev/mapper/openeuler-root ro rd.lvm.lv=openeuler/root rd.lvm.lv=openeuler/swap video=VGA-1:640x480-32@60me rhgb quiet console=tty0 crashkernel=1024M,high smmu.bypassdev=0x1000:0x17 smmu.bypassdev=0x1000:0x15 video=efifb:off default_hugepagesz=1G hugepagesz=1G sched_debug loglevel=7
[ 0.000000] HugeTLB: can optimize 4095 vmemmap pages for hugepages-1048576kB
[ 0.000000] mem auto-init: stack:off, heap alloc:off, heap free:off
[ 0.000000] software IO TLB: mapped [mem 0x000000006c000000-0x0000000070000000] (64MB)
[ 0.000000] Memory: 526735192K/536866624K available (13504K kernel code, 5448K rwdata, 9584K rodata, 4608K init, 11309K bss, 10131432K reserved, 0K cma-reserved)
[ 0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=96, Nodes=4
[ 0.000000] ftrace: allocating 45051 entries in 176 pages
[ 0.000000] ftrace: allocated 176 pages with 3 groups
[ 0.000000] rcu: Hierarchical RCU implementation.
[ 0.000000] rcu: RCU restricting CPUs from NR_CPUS=4096 to nr_cpu_ids=96.
[ 0.000000] Rude variant of Tasks RCU enabled.
[ 0.000000] Tracing variant of Tasks RCU enabled.
[ 0.000000] rcu: RCU calculated value of scheduler-enlistment delay is 25 jiffies.
[ 0.000000] rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=96
[ 0.000000] NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
[ 0.000000] GICv3: GIC: Using split EOI/Deactivate mode
[ 0.000000] GICv3: 640 SPIs implemented
[ 0.000000] GICv3: 0 Extended SPIs implemented
[ 0.000000] GICv3: Distributor has no Range Selector support
[ 0.000000] GICv3: 16 PPIs implemented
[ 0.000000] GICv3: GICv4 features: DirectLPI
[ 0.000000] GICv3: CPU0: found redistributor 80000 region 0:0x00000000ae100000
[ 0.000000] SRAT: PXM 0 -> ITS 0 -> Node 0
[ 0.000000] SRAT: PXM 2 -> ITS 1 -> Node 2
[ 0.000000] ITS [mem 0x202100000-0x20211ffff]
[ 0.000000] ITS@0x0000000202100000: Using ITS number 0
[ 0.000000] ITS@0x0000000202100000: allocated 65536 Devices @2080380000 (flat, esz 8, psz 16K, shr 1)
[ 0.000000] ITS@0x0000000202100000: allocated 65536 Virtual CPUs @2080800000 (flat, esz 16, psz 4K, shr 1)
[ 0.000000] ITS@0x0000000202100000: allocated 256 Interrupt Collections @208034a000 (flat, esz 16, psz 4K, shr 1)
[ 0.000000] ITS [mem 0x200202100000-0x20020211ffff]
[ 0.000000] ITS@0x0000200202100000: Using ITS number 1
[ 0.000000] ITS@0x0000200202100000: allocated 65536 Devices @202000080000 (flat, esz 8, psz 16K, shr 1)
[ 0.000000] ITS@0x0000200202100000: allocated 65536 Virtual CPUs @202000100000 (flat, esz 16, psz 4K, shr 1)
[ 0.000000] ITS@0x0000200202100000: allocated 256 Interrupt Collections @202000001000 (flat, esz 16, psz 4K, shr 1)
[ 0.000000] GICv3: using LPI property table @0x0000002080360000
[ 0.000000] ITS: Using DirectLPI for VPE invalidation
[ 0.000000] ITS: Enabling GICv4 support
[ 0.000000] GICv3: CPU0: using allocated LPI pending table @0x0000002080370000
[ 0.000000] rcu: Offload RCU callbacks from CPUs: (none).
[ 0.000000] arch_timer: cp15 timer(s) running at 100.00MHz (phys).
[ 0.000000] clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x171024e7e0, max_idle_ns: 440795205315 ns
[ 0.000001] sched_clock: 57 bits at 100MHz, resolution 10ns, wraps every 4398046511100ns
[ 0.000119] Console: colour dummy device 80x25
[ 0.000464] printk: console [tty0] enabled
[ 0.000502] mempolicy: Enabling automatic NUMA balancing. Configure with numa_balancing= or the kernel.numa_balancing sysctl
[ 0.000514] ACPI: Core revision 20200925
[ 0.000671] Calibrating delay loop (skipped), value calculated using timer frequency.. 200.00 BogoMIPS (lpj=400000)
[ 0.000677] pid_max: default: 98304 minimum: 768
[ 0.000759] LSM: Security Framework initializing
[ 0.000775] Yama: becoming mindful.
[ 0.000784] SELinux: Initializing.
[ 0.018651] Dentry cache hash table entries: 16777216 (order: 15, 134217728 bytes, vmalloc)
[ 0.027573] Inode-cache hash table entries: 8388608 (order: 14, 67108864 bytes, vmalloc)
[ 0.027895] Mount-cache hash table entries: 262144 (order: 9, 2097152 bytes, vmalloc)
[ 0.028186] Mountpoint-cache hash table entries: 262144 (order: 9, 2097152 bytes, vmalloc)
[ 0.030673] rcu: Hierarchical SRCU implementation.
[ 0.030896] Platform MSI: ITS@0x202100000 domain created
[ 0.030902] Platform MSI: ITS@0x200202100000 domain created
[ 0.030910] PCI/MSI: ITS@0x202100000 domain created
[ 0.030914] PCI/MSI: ITS@0x200202100000 domain created
[ 0.030923] Remapping and enabling EFI services.
[ 0.032643] smp: Bringing up secondary CPUs ...
[ 0.032810] Detected VIPT I-cache on CPU1
[ 0.032818] GICv3: CPU1: found redistributor 80100 region 1:0x00000000ae140000
[ 0.032838] GICv3: CPU1: using allocated LPI pending table @0x0000002080900000
[ 0.032890] CPU1: Booted secondary processor 0x0000080100 [0x481fd010]
[ 0.033829] Detected VIPT I-cache on CPU2
[ 0.033833] GICv3: CPU2: found redistributor 80200 region 2:0x00000000ae180000
[ 0.033848] GICv3: CPU2: using allocated LPI pending table @0x0000002080910000
[ 0.033895] CPU2: Booted secondary processor 0x0000080200 [0x481fd010]
[ 0.034845] Detected VIPT I-cache on CPU3
[ 0.034850] GICv3: CPU3: found redistributor 80300 region 3:0x00000000ae1c0000
[ 0.034864] GICv3: CPU3: using allocated LPI pending table @0x0000002080920000
[ 0.034910] CPU3: Booted secondary processor 0x0000080300 [0x481fd010]
[ 0.035878] Detected VIPT I-cache on CPU4
[ 0.035888] GICv3: CPU4: found redistributor 90000 region 4:0x00000000ae200000
[ 0.035912] GICv3: CPU4: using allocated LPI pending table @0x0000002080930000
[ 0.035967] CPU4: Booted secondary processor 0x0000090000 [0x481fd010]
[ 0.036916] Detected VIPT I-cache on CPU5
[ 0.036922] GICv3: CPU5: found redistributor 90100 region 5:0x00000000ae240000
[ 0.036937] GICv3: CPU5: using allocated LPI pending table @0x0000002080940000
[ 0.036983] CPU5: Booted secondary processor 0x0000090100 [0x481fd010]
[ 0.037934] Detected VIPT I-cache on CPU6
[ 0.037940] GICv3: CPU6: found redistributor 90200 region 6:0x00000000ae280000
[ 0.037955] GICv3: CPU6: using allocated LPI pending table @0x0000002080950000
[ 0.038002] CPU6: Booted secondary processor 0x0000090200 [0x481fd010]
[ 0.038966] Detected VIPT I-cache on CPU7
[ 0.038972] GICv3: CPU7: found redistributor 90300 region 7:0x00000000ae2c0000
[ 0.038987] GICv3: CPU7: using allocated LPI pending table @0x0000002080960000
[ 0.039034] CPU7: Booted secondary processor 0x0000090300 [0x481fd010]
[ 0.040011] Detected VIPT I-cache on CPU8
[ 0.040019] GICv3: CPU8: found redistributor a0000 region 8:0x00000000ae300000
[ 0.040040] GICv3: CPU8: using allocated LPI pending table @0x0000002080970000
[ 0.040092] CPU8: Booted secondary processor 0x00000a0000 [0x481fd010]
[ 0.041058] Detected VIPT I-cache on CPU9
[ 0.041064] GICv3: CPU9: found redistributor a0100 region 9:0x00000000ae340000
[ 0.041079] GICv3: CPU9: using allocated LPI pending table @0x0000002080980000
[ 0.041126] CPU9: Booted secondary processor 0x00000a0100 [0x481fd010]
[ 0.042103] Detected VIPT I-cache on CPU10
[ 0.042110] GICv3: CPU10: found redistributor a0200 region 10:0x00000000ae380000
[ 0.042124] GICv3: CPU10: using allocated LPI pending table @0x0000002080990000
[ 0.042170] CPU10: Booted secondary processor 0x00000a0200 [0x481fd010]
[ 0.043119] Detected VIPT I-cache on CPU11
[ 0.043126] GICv3: CPU11: found redistributor a0300 region 11:0x00000000ae3c0000
[ 0.043140] GICv3: CPU11: using allocated LPI pending table @0x00000020809a0000
[ 0.043188] CPU11: Booted secondary processor 0x00000a0300 [0x481fd010]
[ 0.044165] Detected VIPT I-cache on CPU12
[ 0.044175] GICv3: CPU12: found redistributor b0000 region 12:0x00000000ae400000
[ 0.044196] GICv3: CPU12: using allocated LPI pending table @0x00000020809b0000
[ 0.044246] CPU12: Booted secondary processor 0x00000b0000 [0x481fd010]
[ 0.045199] Detected VIPT I-cache on CPU13
[ 0.045207] GICv3: CPU13: found redistributor b0100 region 13:0x00000000ae440000
[ 0.045222] GICv3: CPU13: using allocated LPI pending table @0x00000020809c0000
[ 0.045269] CPU13: Booted secondary processor 0x00000b0100 [0x481fd010]
[ 0.046216] Detected VIPT I-cache on CPU14
[ 0.046224] GICv3: CPU14: found redistributor b0200 region 14:0x00000000ae480000
[ 0.046239] GICv3: CPU14: using allocated LPI pending table @0x00000020809d0000
[ 0.046286] CPU14: Booted secondary processor 0x00000b0200 [0x481fd010]
[ 0.047237] Detected VIPT I-cache on CPU15
[ 0.047245] GICv3: CPU15: found redistributor b0300 region 15:0x00000000ae4c0000
[ 0.047260] GICv3: CPU15: using allocated LPI pending table @0x00000020809e0000
[ 0.047309] CPU15: Booted secondary processor 0x00000b0300 [0x481fd010]
[ 0.048258] Detected VIPT I-cache on CPU16
[ 0.048269] GICv3: CPU16: found redistributor c0000 region 16:0x00000000ae500000
[ 0.048290] GICv3: CPU16: using allocated LPI pending table @0x00000020809f0000
[ 0.048339] CPU16: Booted secondary processor 0x00000c0000 [0x481fd010]
[ 0.049297] Detected VIPT I-cache on CPU17
[ 0.049306] GICv3: CPU17: found redistributor c0100 region 17:0x00000000ae540000
[ 0.049321] GICv3: CPU17: using allocated LPI pending table @0x0000002080a00000
[ 0.049367] CPU17: Booted secondary processor 0x00000c0100 [0x481fd010]
[ 0.050316] Detected VIPT I-cache on CPU18
[ 0.050324] GICv3: CPU18: found redistributor c0200 region 18:0x00000000ae580000
[ 0.050339] GICv3: CPU18: using allocated LPI pending table @0x0000002080a10000
[ 0.050388] CPU18: Booted secondary processor 0x00000c0200 [0x481fd010]
[ 0.051333] Detected VIPT I-cache on CPU19
[ 0.051341] GICv3: CPU19: found redistributor c0300 region 19:0x00000000ae5c0000
[ 0.051356] GICv3: CPU19: using allocated LPI pending table @0x0000002080a20000
[ 0.051402] CPU19: Booted secondary processor 0x00000c0300 [0x481fd010]
[ 0.052377] Detected VIPT I-cache on CPU20
[ 0.052389] GICv3: CPU20: found redistributor d0000 region 20:0x00000000ae600000
[ 0.052413] GICv3: CPU20: using allocated LPI pending table @0x0000002080a30000
[ 0.052466] CPU20: Booted secondary processor 0x00000d0000 [0x481fd010]
[ 0.053409] Detected VIPT I-cache on CPU21
[ 0.053418] GICv3: CPU21: found redistributor d0100 region 21:0x00000000ae640000
[ 0.053434] GICv3: CPU21: using allocated LPI pending table @0x0000002080a40000
[ 0.053482] CPU21: Booted secondary processor 0x00000d0100 [0x481fd010]
[ 0.054427] Detected VIPT I-cache on CPU22
[ 0.054436] GICv3: CPU22: found redistributor d0200 region 22:0x00000000ae680000
[ 0.054451] GICv3: CPU22: using allocated LPI pending table @0x0000002080a50000
[ 0.054499] CPU22: Booted secondary processor 0x00000d0200 [0x481fd010]
[ 0.055458] Detected VIPT I-cache on CPU23
[ 0.055467] GICv3: CPU23: found redistributor d0300 region 23:0x00000000ae6c0000
[ 0.055482] GICv3: CPU23: using allocated LPI pending table @0x0000002080a60000
[ 0.055527] CPU23: Booted secondary processor 0x00000d0300 [0x481fd010]
[ 0.056503] Detected VIPT I-cache on CPU24
[ 0.056521] GICv3: CPU24: found redistributor 180000 region 24:0x00000000aa100000
[ 0.056556] GICv3: CPU24: using allocated LPI pending table @0x0000002080a70000
[ 0.056621] CPU24: Booted secondary processor 0x0000180000 [0x481fd010]
[ 0.057619] Detected VIPT I-cache on CPU25
[ 0.057630] GICv3: CPU25: found redistributor 180100 region 25:0x00000000aa140000
[ 0.057645] GICv3: CPU25: using allocated LPI pending table @0x0000002080a80000
[ 0.057703] CPU25: Booted secondary processor 0x0000180100 [0x481fd010]
[ 0.058666] Detected VIPT I-cache on CPU26
[ 0.058677] GICv3: CPU26: found redistributor 180200 region 26:0x00000000aa180000
[ 0.058692] GICv3: CPU26: using allocated LPI pending table @0x0000002080a90000
[ 0.058747] CPU26: Booted secondary processor 0x0000180200 [0x481fd010]
[ 0.059696] Detected VIPT I-cache on CPU27
[ 0.059708] GICv3: CPU27: found redistributor 180300 region 27:0x00000000aa1c0000
[ 0.059723] GICv3: CPU27: using allocated LPI pending table @0x0000002080aa0000
[ 0.059778] CPU27: Booted secondary processor 0x0000180300 [0x481fd010]
[ 0.060739] Detected VIPT I-cache on CPU28
[ 0.060757] GICv3: CPU28: found redistributor 190000 region 28:0x00000000aa200000
[ 0.060789] GICv3: CPU28: using allocated LPI pending table @0x0000002080ab0000
[ 0.060854] CPU28: Booted secondary processor 0x0000190000 [0x481fd010]
[ 0.061797] Detected VIPT I-cache on CPU29
[ 0.061809] GICv3: CPU29: found redistributor 190100 region 29:0x00000000aa240000
[ 0.061826] GICv3: CPU29: using allocated LPI pending table @0x0000002080ac0000
[ 0.061884] CPU29: Booted secondary processor 0x0000190100 [0x481fd010]
[ 0.062836] Detected VIPT I-cache on CPU30
[ 0.062848] GICv3: CPU30: found redistributor 190200 region 30:0x00000000aa280000
[ 0.062864] GICv3: CPU30: using allocated LPI pending table @0x0000002080ad0000
[ 0.062922] CPU30: Booted secondary processor 0x0000190200 [0x481fd010]
[ 0.063886] Detected VIPT I-cache on CPU31
[ 0.063899] GICv3: CPU31: found redistributor 190300 region 31:0x00000000aa2c0000
[ 0.063915] GICv3: CPU31: using allocated LPI pending table @0x0000002080ae0000
[ 0.063971] CPU31: Booted secondary processor 0x0000190300 [0x481fd010]
[ 0.064928] Detected VIPT I-cache on CPU32
[ 0.064946] GICv3: CPU32: found redistributor 1a0000 region 32:0x00000000aa300000
[ 0.064972] GICv3: CPU32: using allocated LPI pending table @0x0000002080af0000
[ 0.065034] CPU32: Booted secondary processor 0x00001a0000 [0x481fd010]
[ 0.065986] Detected VIPT I-cache on CPU33
[ 0.065999] GICv3: CPU33: found redistributor 1a0100 region 33:0x00000000aa340000
[ 0.066016] GICv3: CPU33: using allocated LPI pending table @0x0000002080b00000
[ 0.066072] CPU33: Booted secondary processor 0x00001a0100 [0x481fd010]
[ 0.067031] Detected VIPT I-cache on CPU34
[ 0.067044] GICv3: CPU34: found redistributor 1a0200 region 34:0x00000000aa380000
[ 0.067061] GICv3: CPU34: using allocated LPI pending table @0x0000002080b10000
[ 0.067117] CPU34: Booted secondary processor 0x00001a0200 [0x481fd010]
[ 0.068070] Detected VIPT I-cache on CPU35
[ 0.068083] GICv3: CPU35: found redistributor 1a0300 region 35:0x00000000aa3c0000
[ 0.068100] GICv3: CPU35: using allocated LPI pending table @0x0000002080b20000
[ 0.068157] CPU35: Booted secondary processor 0x00001a0300 [0x481fd010]
[ 0.069119] Detected VIPT I-cache on CPU36
[ 0.069137] GICv3: CPU36: found redistributor 1b0000 region 36:0x00000000aa400000
[ 0.069165] GICv3: CPU36: using allocated LPI pending table @0x0000002080b30000
[ 0.069227] CPU36: Booted secondary processor 0x00001b0000 [0x481fd010]
[ 0.070167] Detected VIPT I-cache on CPU37
[ 0.070181] GICv3: CPU37: found redistributor 1b0100 region 37:0x00000000aa440000
[ 0.070197] GICv3: CPU37: using allocated LPI pending table @0x0000002080b40000
[ 0.070254] CPU37: Booted secondary processor 0x00001b0100 [0x481fd010]
[ 0.071205] Detected VIPT I-cache on CPU38
[ 0.071219] GICv3: CPU38: found redistributor 1b0200 region 38:0x00000000aa480000
[ 0.071236] GICv3: CPU38: using allocated LPI pending table @0x0000002080b50000
[ 0.071293] CPU38: Booted secondary processor 0x00001b0200 [0x481fd010]
[ 0.072232] Detected VIPT I-cache on CPU39
[ 0.072246] GICv3: CPU39: found redistributor 1b0300 region 39:0x00000000aa4c0000
[ 0.072262] GICv3: CPU39: using allocated LPI pending table @0x0000002080b60000
[ 0.072320] CPU39: Booted secondary processor 0x00001b0300 [0x481fd010]
[ 0.073272] Detected VIPT I-cache on CPU40
[ 0.073291] GICv3: CPU40: found redistributor 1c0000 region 40:0x00000000aa500000
[ 0.073317] GICv3: CPU40: using allocated LPI pending table @0x0000002080b70000
[ 0.073380] CPU40: Booted secondary processor 0x00001c0000 [0x481fd010]
[ 0.074322] Detected VIPT I-cache on CPU41
[ 0.074337] GICv3: CPU41: found redistributor 1c0100 region 41:0x00000000aa540000
[ 0.074354] GICv3: CPU41: using allocated LPI pending table @0x0000002080b80000
[ 0.074409] CPU41: Booted secondary processor 0x00001c0100 [0x481fd010]
[ 0.075357] Detected VIPT I-cache on CPU42
[ 0.075372] GICv3: CPU42: found redistributor 1c0200 region 42:0x00000000aa580000
[ 0.075388] GICv3: CPU42: using allocated LPI pending table @0x0000002080b90000
[ 0.075445] CPU42: Booted secondary processor 0x00001c0200 [0x481fd010]
[ 0.076405] Detected VIPT I-cache on CPU43
[ 0.076420] GICv3: CPU43: found redistributor 1c0300 region 43:0x00000000aa5c0000
[ 0.076435] GICv3: CPU43: using allocated LPI pending table @0x0000002080ba0000
[ 0.076494] CPU43: Booted secondary processor 0x00001c0300 [0x481fd010]
[ 0.077471] Detected VIPT I-cache on CPU44
[ 0.077492] GICv3: CPU44: found redistributor 1d0000 region 44:0x00000000aa600000
[ 0.077520] GICv3: CPU44: using allocated LPI pending table @0x0000002080bb0000
[ 0.077582] CPU44: Booted secondary processor 0x00001d0000 [0x481fd010]
[ 0.078525] Detected VIPT I-cache on CPU45
[ 0.078540] GICv3: CPU45: found redistributor 1d0100 region 45:0x00000000aa640000
[ 0.078558] GICv3: CPU45: using allocated LPI pending table @0x0000002080bc0000
[ 0.078616] CPU45: Booted secondary processor 0x00001d0100 [0x481fd010]
[ 0.079555] Detected VIPT I-cache on CPU46
[ 0.079570] GICv3: CPU46: found redistributor 1d0200 region 46:0x00000000aa680000
[ 0.079588] GICv3: CPU46: using allocated LPI pending table @0x0000002080bd0000
[ 0.079645] CPU46: Booted secondary processor 0x00001d0200 [0x481fd010]
[ 0.080584] Detected VIPT I-cache on CPU47
[ 0.080599] GICv3: CPU47: found redistributor 1d0300 region 47:0x00000000aa6c0000
[ 0.080616] GICv3: CPU47: using allocated LPI pending table @0x0000002080be0000
[ 0.080674] CPU47: Booted secondary processor 0x00001d0300 [0x481fd010]
[ 0.081694] Detected VIPT I-cache on CPU48
[ 0.081739] GICv3: CPU48: found redistributor 280000 region 48:0x00002000ae100000
[ 0.081799] GICv3: CPU48: using allocated LPI pending table @0x0000002080bf0000
[ 0.081904] CPU48: Booted secondary processor 0x0000280000 [0x481fd010]
[ 0.082873] Detected VIPT I-cache on CPU49
[ 0.082902] GICv3: CPU49: found redistributor 280100 region 49:0x00002000ae140000
[ 0.082920] GICv3: CPU49: using allocated LPI pending table @0x0000002080c00000
[ 0.083006] CPU49: Booted secondary processor 0x0000280100 [0x481fd010]
[ 0.083929] Detected VIPT I-cache on CPU50
[ 0.083958] GICv3: CPU50: found redistributor 280200 region 50:0x00002000ae180000
[ 0.083977] GICv3: CPU50: using allocated LPI pending table @0x0000002080c10000
[ 0.084061] CPU50: Booted secondary processor 0x0000280200 [0x481fd010]
[ 0.085001] Detected VIPT I-cache on CPU51
[ 0.085031] GICv3: CPU51: found redistributor 280300 region 51:0x00002000ae1c0000
[ 0.085049] GICv3: CPU51: using allocated LPI pending table @0x0000002080c20000
[ 0.085132] CPU51: Booted secondary processor 0x0000280300 [0x481fd010]
[ 0.086064] Detected VIPT I-cache on CPU52
[ 0.086096] GICv3: CPU52: found redistributor 290000 region 52:0x00002000ae200000
[ 0.086123] GICv3: CPU52: using allocated LPI pending table @0x0000002080c30000
[ 0.086210] CPU52: Booted secondary processor 0x0000290000 [0x481fd010]
[ 0.087143] Detected VIPT I-cache on CPU53
[ 0.087172] GICv3: CPU53: found redistributor 290100 region 53:0x00002000ae240000
[ 0.087193] GICv3: CPU53: using allocated LPI pending table @0x0000002080c40000
[ 0.087279] CPU53: Booted secondary processor 0x0000290100 [0x481fd010]
[ 0.088192] Detected VIPT I-cache on CPU54
[ 0.088221] GICv3: CPU54: found redistributor 290200 region 54:0x00002000ae280000
[ 0.088240] GICv3: CPU54: using allocated LPI pending table @0x0000002080c50000
[ 0.088324] CPU54: Booted secondary processor 0x0000290200 [0x481fd010]
[ 0.089258] Detected VIPT I-cache on CPU55
[ 0.089288] GICv3: CPU55: found redistributor 290300 region 55:0x00002000ae2c0000
[ 0.089306] GICv3: CPU55: using allocated LPI pending table @0x0000002080c60000
[ 0.089391] CPU55: Booted secondary processor 0x0000290300 [0x481fd010]
[ 0.090324] Detected VIPT I-cache on CPU56
[ 0.090356] GICv3: CPU56: found redistributor 2a0000 region 56:0x00002000ae300000
[ 0.090384] GICv3: CPU56: using allocated LPI pending table @0x0000002080c70000
[ 0.090473] CPU56: Booted secondary processor 0x00002a0000 [0x481fd010]
[ 0.091391] Detected VIPT I-cache on CPU57
[ 0.091421] GICv3: CPU57: found redistributor 2a0100 region 57:0x00002000ae340000
[ 0.091441] GICv3: CPU57: using allocated LPI pending table @0x0000002080c80000
[ 0.091526] CPU57: Booted secondary processor 0x00002a0100 [0x481fd010]
[ 0.092446] Detected VIPT I-cache on CPU58
[ 0.092477] GICv3: CPU58: found redistributor 2a0200 region 58:0x00002000ae380000
[ 0.092496] GICv3: CPU58: using allocated LPI pending table @0x0000002080c90000
[ 0.092581] CPU58: Booted secondary processor 0x00002a0200 [0x481fd010]
[ 0.093505] Detected VIPT I-cache on CPU59
[ 0.093536] GICv3: CPU59: found redistributor 2a0300 region 59:0x00002000ae3c0000
[ 0.093556] GICv3: CPU59: using allocated LPI pending table @0x0000002080ca0000
[ 0.093640] CPU59: Booted secondary processor 0x00002a0300 [0x481fd010]
[ 0.094582] Detected VIPT I-cache on CPU60
[ 0.094615] GICv3: CPU60: found redistributor 2b0000 region 60:0x00002000ae400000
[ 0.094643] GICv3: CPU60: using allocated LPI pending table @0x0000002080cb0000
[ 0.094731] CPU60: Booted secondary processor 0x00002b0000 [0x481fd010]
[ 0.095652] Detected VIPT I-cache on CPU61
[ 0.095684] GICv3: CPU61: found redistributor 2b0100 region 61:0x00002000ae440000
[ 0.095703] GICv3: CPU61: using allocated LPI pending table @0x0000002080cc0000
[ 0.095788] CPU61: Booted secondary processor 0x00002b0100 [0x481fd010]
[ 0.096726] Detected VIPT I-cache on CPU62
[ 0.096758] GICv3: CPU62: found redistributor 2b0200 region 62:0x00002000ae480000
[ 0.096777] GICv3: CPU62: using allocated LPI pending table @0x0000002080cd0000
[ 0.096864] CPU62: Booted secondary processor 0x00002b0200 [0x481fd010]
[ 0.097789] Detected VIPT I-cache on CPU63
[ 0.097821] GICv3: CPU63: found redistributor 2b0300 region 63:0x00002000ae4c0000
[ 0.097840] GICv3: CPU63: using allocated LPI pending table @0x0000002080ce0000
[ 0.097927] CPU63: Booted secondary processor 0x00002b0300 [0x481fd010]
[ 0.098862] Detected VIPT I-cache on CPU64
[ 0.098895] GICv3: CPU64: found redistributor 2c0000 region 64:0x00002000ae500000
[ 0.098923] GICv3: CPU64: using allocated LPI pending table @0x0000002080cf0000
[ 0.099010] CPU64: Booted secondary processor 0x00002c0000 [0x481fd010]
[ 0.099927] Detected VIPT I-cache on CPU65
[ 0.099958] GICv3: CPU65: found redistributor 2c0100 region 65:0x00002000ae540000
[ 0.099977] GICv3: CPU65: using allocated LPI pending table @0x0000002080d00000
[ 0.100062] CPU65: Booted secondary processor 0x00002c0100 [0x481fd010]
[ 0.100998] Detected
VIPT I-cache on CPU66 [ 0.101030] GICv3: CPU66: found redistributor 2c0200 region 66:0x00002000ae580000 [ 0.101050] GICv3: CPU66: using allocated LPI pending table @0x0000002080d10000 [ 0.101134] CPU66: Booted secondary processor 0x00002c0200 [0x481fd010] [ 0.102066] Detected VIPT I-cache on CPU67 [ 0.102097] GICv3: CPU67: found redistributor 2c0300 region 67:0x00002000ae5c0000 [ 0.102116] GICv3: CPU67: using allocated LPI pending table @0x0000002080d20000 [ 0.102200] CPU67: Booted secondary processor 0x00002c0300 [0x481fd010] [ 0.103143] Detected VIPT I-cache on CPU68 [ 0.103177] GICv3: CPU68: found redistributor 2d0000 region 68:0x00002000ae600000 [ 0.103206] GICv3: CPU68: using allocated LPI pending table @0x0000002080d30000 [ 0.103294] CPU68: Booted secondary processor 0x00002d0000 [0x481fd010] [ 0.104215] Detected VIPT I-cache on CPU69 [ 0.104248] GICv3: CPU69: found redistributor 2d0100 region 69:0x00002000ae640000 [ 0.104267] GICv3: CPU69: using allocated LPI pending table @0x0000002080d40000 [ 0.104351] CPU69: Booted secondary processor 0x00002d0100 [0x481fd010] [ 0.105299] Detected VIPT I-cache on CPU70 [ 0.105331] GICv3: CPU70: found redistributor 2d0200 region 70:0x00002000ae680000 [ 0.105351] GICv3: CPU70: using allocated LPI pending table @0x0000002080d50000 [ 0.105435] CPU70: Booted secondary processor 0x00002d0200 [0x481fd010] [ 0.106366] Detected VIPT I-cache on CPU71 [ 0.106398] GICv3: CPU71: found redistributor 2d0300 region 71:0x00002000ae6c0000 [ 0.106419] GICv3: CPU71: using allocated LPI pending table @0x0000002080d60000 [ 0.106502] CPU71: Booted secondary processor 0x00002d0300 [0x481fd010] [ 0.107485] Detected VIPT I-cache on CPU72 [ 0.107550] GICv3: CPU72: found redistributor 380000 region 72:0x00002000aa100000 [ 0.107623] GICv3: CPU72: using allocated LPI pending table @0x0000002080d70000 [ 0.107742] CPU72: Booted secondary processor 0x0000380000 [0x481fd010] [ 0.108659] Detected VIPT I-cache on CPU73 [ 0.108698] GICv3: CPU73: found 
redistributor 380100 region 73:0x00002000aa140000 [ 0.108719] GICv3: CPU73: using allocated LPI pending table @0x0000002080d80000 [ 0.108813] CPU73: Booted secondary processor 0x0000380100 [0x481fd010] [ 0.109727] Detected VIPT I-cache on CPU74 [ 0.109765] GICv3: CPU74: found redistributor 380200 region 74:0x00002000aa180000 [ 0.109786] GICv3: CPU74: using allocated LPI pending table @0x0000002080d90000 [ 0.109882] CPU74: Booted secondary processor 0x0000380200 [0x481fd010] [ 0.110814] Detected VIPT I-cache on CPU75 [ 0.110852] GICv3: CPU75: found redistributor 380300 region 75:0x00002000aa1c0000 [ 0.110873] GICv3: CPU75: using allocated LPI pending table @0x0000002080da0000 [ 0.110966] CPU75: Booted secondary processor 0x0000380300 [0x481fd010] [ 0.111876] Detected VIPT I-cache on CPU76 [ 0.111917] GICv3: CPU76: found redistributor 390000 region 76:0x00002000aa200000 [ 0.111946] GICv3: CPU76: using allocated LPI pending table @0x0000002080db0000 [ 0.112042] CPU76: Booted secondary processor 0x0000390000 [0x481fd010] [ 0.112983] Detected VIPT I-cache on CPU77 [ 0.113022] GICv3: CPU77: found redistributor 390100 region 77:0x00002000aa240000 [ 0.113045] GICv3: CPU77: using allocated LPI pending table @0x0000002080dc0000 [ 0.113140] CPU77: Booted secondary processor 0x0000390100 [0x481fd010] [ 0.114045] Detected VIPT I-cache on CPU78 [ 0.114083] GICv3: CPU78: found redistributor 390200 region 78:0x00002000aa280000 [ 0.114105] GICv3: CPU78: using allocated LPI pending table @0x0000002080dd0000 [ 0.114198] CPU78: Booted secondary processor 0x0000390200 [0x481fd010] [ 0.115129] Detected VIPT I-cache on CPU79 [ 0.115169] GICv3: CPU79: found redistributor 390300 region 79:0x00002000aa2c0000 [ 0.115190] GICv3: CPU79: using allocated LPI pending table @0x0000002080de0000 [ 0.115284] CPU79: Booted secondary processor 0x0000390300 [0x481fd010] [ 0.116216] Detected VIPT I-cache on CPU80 [ 0.116257] GICv3: CPU80: found redistributor 3a0000 region 80:0x00002000aa300000 [ 
0.116287] GICv3: CPU80: using allocated LPI pending table @0x0000002080df0000 [ 0.116382] CPU80: Booted secondary processor 0x00003a0000 [0x481fd010] [ 0.117310] Detected VIPT I-cache on CPU81 [ 0.117349] GICv3: CPU81: found redistributor 3a0100 region 81:0x00002000aa340000 [ 0.117370] GICv3: CPU81: using allocated LPI pending table @0x0000002080e00000 [ 0.117462] CPU81: Booted secondary processor 0x00003a0100 [0x481fd010] [ 0.118372] Detected VIPT I-cache on CPU82 [ 0.118411] GICv3: CPU82: found redistributor 3a0200 region 82:0x00002000aa380000 [ 0.118433] GICv3: CPU82: using allocated LPI pending table @0x0000002080e10000 [ 0.118526] CPU82: Booted secondary processor 0x00003a0200 [0x481fd010] [ 0.119438] Detected VIPT I-cache on CPU83 [ 0.119478] GICv3: CPU83: found redistributor 3a0300 region 83:0x00002000aa3c0000 [ 0.119501] GICv3: CPU83: using allocated LPI pending table @0x0000002080e20000 [ 0.119592] CPU83: Booted secondary processor 0x00003a0300 [0x481fd010] [ 0.120519] Detected VIPT I-cache on CPU84 [ 0.120561] GICv3: CPU84: found redistributor 3b0000 region 84:0x00002000aa400000 [ 0.120592] GICv3: CPU84: using allocated LPI pending table @0x0000002080e30000 [ 0.120687] CPU84: Booted secondary processor 0x00003b0000 [0x481fd010] [ 0.121602] Detected VIPT I-cache on CPU85 [ 0.121643] GICv3: CPU85: found redistributor 3b0100 region 85:0x00002000aa440000 [ 0.121664] GICv3: CPU85: using allocated LPI pending table @0x0000002080e40000 [ 0.121757] CPU85: Booted secondary processor 0x00003b0100 [0x481fd010] [ 0.122675] Detected VIPT I-cache on CPU86 [ 0.122715] GICv3: CPU86: found redistributor 3b0200 region 86:0x00002000aa480000 [ 0.122737] GICv3: CPU86: using allocated LPI pending table @0x0000002080e50000 [ 0.122831] CPU86: Booted secondary processor 0x00003b0200 [0x481fd010] [ 0.123747] Detected VIPT I-cache on CPU87 [ 0.123788] GICv3: CPU87: found redistributor 3b0300 region 87:0x00002000aa4c0000 [ 0.123809] GICv3: CPU87: using allocated LPI pending table 
@0x0000002080e60000 [ 0.123903] CPU87: Booted secondary processor 0x00003b0300 [0x481fd010] [ 0.124823] Detected VIPT I-cache on CPU88 [ 0.124865] GICv3: CPU88: found redistributor 3c0000 region 88:0x00002000aa500000 [ 0.124896] GICv3: CPU88: using allocated LPI pending table @0x0000002080e70000 [ 0.124992] CPU88: Booted secondary processor 0x00003c0000 [0x481fd010] [ 0.125894] Detected VIPT I-cache on CPU89 [ 0.125935] GICv3: CPU89: found redistributor 3c0100 region 89:0x00002000aa540000 [ 0.125955] GICv3: CPU89: using allocated LPI pending table @0x0000002080e80000 [ 0.126050] CPU89: Booted secondary processor 0x00003c0100 [0x481fd010] [ 0.126979] Detected VIPT I-cache on CPU90 [ 0.127019] GICv3: CPU90: found redistributor 3c0200 region 90:0x00002000aa580000 [ 0.127043] GICv3: CPU90: using allocated LPI pending table @0x0000002080e90000 [ 0.127137] CPU90: Booted secondary processor 0x00003c0200 [0x481fd010] [ 0.128043] Detected VIPT I-cache on CPU91 [ 0.128083] GICv3: CPU91: found redistributor 3c0300 region 91:0x00002000aa5c0000 [ 0.128106] GICv3: CPU91: using allocated LPI pending table @0x0000002080ea0000 [ 0.128200] CPU91: Booted secondary processor 0x00003c0300 [0x481fd010] [ 0.129125] Detected VIPT I-cache on CPU92 [ 0.129169] GICv3: CPU92: found redistributor 3d0000 region 92:0x00002000aa600000 [ 0.129200] GICv3: CPU92: using allocated LPI pending table @0x0000002080eb0000 [ 0.129297] CPU92: Booted secondary processor 0x00003d0000 [0x481fd010] [ 0.130208] Detected VIPT I-cache on CPU93 [ 0.130248] GICv3: CPU93: found redistributor 3d0100 region 93:0x00002000aa640000 [ 0.130270] GICv3: CPU93: using allocated LPI pending table @0x0000002080ec0000 [ 0.130362] CPU93: Booted secondary processor 0x00003d0100 [0x481fd010] [ 0.131273] Detected VIPT I-cache on CPU94 [ 0.131314] GICv3: CPU94: found redistributor 3d0200 region 94:0x00002000aa680000 [ 0.131338] GICv3: CPU94: using allocated LPI pending table @0x0000002080ed0000 [ 0.131430] CPU94: Booted secondary 
processor 0x00003d0200 [0x481fd010] [ 0.132340] Detected VIPT I-cache on CPU95 [ 0.132381] GICv3: CPU95: found redistributor 3d0300 region 95:0x00002000aa6c0000 [ 0.132403] GICv3: CPU95: using allocated LPI pending table @0x0000002080ee0000 [ 0.132497] CPU95: Booted secondary processor 0x00003d0300 [0x481fd010] [ 0.133218] smp: Brought up 4 nodes, 96 CPUs [ 0.133825] SMP: Total of 96 processors activated. [ 0.133828] CPU features: detected: Privileged Access Never [ 0.133830] CPU features: detected: LSE atomic instructions [ 0.133833] CPU features: detected: Common not Private translations [ 0.133835] CPU features: detected: Data cache clean to Point of Persistence [ 0.133838] CPU features: detected: RAS Extension Support [ 0.133840] CPU features: detected: Data cache clean to the PoU not required for I/D coherence [ 0.133843] CPU features: detected: CRC32 instructions [ 0.133846] CPU features: detected: ARM64 MPAM Extension Support [ 0.133849] CPU features: detected: Taishan IDC coherence workaround [ 0.184050] CPU: All CPU(s) started at EL2 [ 0.187815] CPU0 attaching sched-domain(s): [ 0.187818] domain-0: span=0-23 level=MC [ 0.187822] groups: 0:{ span=0 cap=1023 }, 1:{ span=1 }, 2:{ span=2 }, 3:{ span=3 }, 4:{ span=4 }, 5:{ span=5 }, 6:{ span=6 }, 7:{ span=7 }, 8:{ span=8 }, 9:{ span=9 }, 10:{ span=10 }, 11:{ span=11 }, 12:{ span=12 }, 13:{ span=13 }, 14:{ span=14 }, 15:{ span=15 }, 16:{ span=16 }, 17:{ span=17 }, 18:{ span=18 }, 19:{ span=19 }, 20:{ span=20 }, 21:{ span=21 }, 22:{ span=22 }, 23:{ span=23 } [ 0.187865] domain-1: span=0-47 level=NUMA [ 0.187868] groups: 0:{ span=0-23 cap=24575 }, 24:{ span=24-47 cap=24576 } [ 0.187874] domain-2: span=0-71 level=NUMA [ 0.187876] groups: 0:{ span=0-47 cap=49151 }, 48:{ span=48-71 cap=24576 } [ 0.187882] domain-3: span=0-95 level=NUMA [ 0.187885] groups: 0:{ span=0-71 mask=0-23 cap=73727 }, 72:{ span=48-95 mask=72-95 cap=49152 } [ 0.187898] CPU1 attaching sched-domain(s): [ 0.187899] domain-0: span=0-23 level=MC [ 
0.187901] groups: 1:{ span=1 }, 2:{ span=2 }, 3:{ span=3 }, 4:{ span=4 }, 5:{ span=5 }, 6:{ span=6 }, 7:{ span=7 }, 8:{ span=8 }, 9:{ span=9 }, 10:{ span=10 }, 11:{ span=11 }, 12:{ span=12 }, 13:{ span=13 }, 14:{ span=14 }, 15:{ span=15 }, 16:{ span=16 }, 17:{ span=17 }, 18:{ span=18 }, 19:{ span=19 }, 20:{ span=20 }, 21:{ span=21 }, 22:{ span=22 }, 23:{ span=23 }, 0:{ span=0 cap=1023 } [ 0.187943] domain-1: span=0-47 level=NUMA [ 0.187945] groups: 0:{ span=0-23 cap=24575 }, 24:{ span=24-47 cap=24576 } [ 0.187951] domain-2: span=0-71 level=NUMA [ 0.187953] groups: 0:{ span=0-47 cap=49151 }, 48:{ span=48-71 cap=24576 } [ 0.187959] domain-3: span=0-95 level=NUMA [ 0.187961] groups: 0:{ span=0-71 mask=0-23 cap=73727 }, 72:{ span=48-95 mask=72-95 cap=49152 } [ 0.187971] CPU2 attaching sched-domain(s): [ 0.187972] domain-0: span=0-23 level=MC [ 0.187973] groups: 2:{ span=2 }, 3:{ span=3 }, 4:{ span=4 }, 5:{ span=5 }, 6:{ span=6 }, 7:{ span=7 }, 8:{ span=8 }, 9:{ span=9 }, 10:{ span=10 }, 11:{ span=11 }, 12:{ span=12 }, 13:{ span=13 }, 14:{ span=14 }, 15:{ span=15 }, 16:{ span=16 }, 17:{ span=17 }, 18:{ span=18 }, 19:{ span=19 }, 20:{ span=20 }, 21:{ span=21 }, 22:{ span=22 }, 23:{ span=23 }, 0:{ span=0 cap=1023 }, 1:{ span=1 } [ 0.188016] domain-1: span=0-47 level=NUMA [ 0.188018] groups: 0:{ span=0-23 cap=24575 }, 24:{ span=24-47 cap=24576 } [ 0.188024] domain-2: span=0-71 level=NUMA [ 0.188026] groups: 0:{ span=0-47 cap=49151 }, 48:{ span=48-71 cap=24576 } [ 0.188032] domain-3: span=0-95 level=NUMA [ 0.188034] groups: 0:{ span=0-71 mask=0-23 cap=73727 }, 72:{ span=48-95 mask=72-95 cap=49152 } [ 0.188043] CPU3 attaching sched-domain(s): [ 0.188044] domain-0: span=0-23 level=MC [ 0.188046] groups: 3:{ span=3 }, 4:{ span=4 }, 5:{ span=5 }, 6:{ span=6 }, 7:{ span=7 }, 8:{ span=8 }, 9:{ span=9 }, 10:{ span=10 }, 11:{ span=11 }, 12:{ span=12 }, 13:{ span=13 }, 14:{ span=14 }, 15:{ span=15 }, 16:{ span=16 }, 17:{ span=17 }, 18:{ span=18 }, 19:{ span=19 }, 20:{ span=20 }, 
21:{ span=21 }, 22:{ span=22 }, 23:{ span=23 }, 0:{ span=0 cap=1023 }, 1:{ span=1 }, 2:{ span=2 } [ 0.188089] domain-1: span=0-47 level=NUMA [ 0.188091] groups: 0:{ span=0-23 cap=24575 }, 24:{ span=24-47 cap=24576 } [ 0.188097] domain-2: span=0-71 level=NUMA [ 0.188099] groups: 0:{ span=0-47 cap=49151 }, 48:{ span=48-71 cap=24576 } [ 0.188105] domain-3: span=0-95 level=NUMA [ 0.188107] groups: 0:{ span=0-71 mask=0-23 cap=73727 }, 72:{ span=48-95 mask=72-95 cap=49152 } [ 0.188116] CPU4 attaching sched-domain(s): [ 0.188117] domain-0: span=0-23 level=MC [ 0.188119] groups: 4:{ span=4 }, 5:{ span=5 }, 6:{ span=6 }, 7:{ span=7 }, 8:{ span=8 }, 9:{ span=9 }, 10:{ span=10 }, 11:{ span=11 }, 12:{ span=12 }, 13:{ span=13 }, 14:{ span=14 }, 15:{ span=15 }, 16:{ span=16 }, 17:{ span=17 }, 18:{ span=18 }, 19:{ span=19 }, 20:{ span=20 }, 21:{ span=21 }, 22:{ span=22 }, 23:{ span=23 }, 0:{ span=0 cap=1023 }, 1:{ span=1 }, 2:{ span=2 }, 3:{ span=3 } [ 0.188159] domain-1: span=0-47 level=NUMA [ 0.188161] groups: 0:{ span=0-23 cap=24575 }, 24:{ span=24-47 cap=24576 } [ 0.188167] domain-2: span=0-71 level=NUMA [ 0.188169] groups: 0:{ span=0-47 cap=49151 }, 48:{ span=48-71 cap=24576 } [ 0.188175] domain-3: span=0-95 level=NUMA [ 0.188177] groups: 0:{ span=0-71 mask=0-23 cap=73727 }, 72:{ span=48-95 mask=72-95 cap=49152 } [ 0.188187] CPU5 attaching sched-domain(s): [ 0.188188] domain-0: span=0-23 level=MC [ 0.188189] groups: 5:{ span=5 }, 6:{ span=6 }, 7:{ span=7 }, 8:{ span=8 }, 9:{ span=9 }, 10:{ span=10 }, 11:{ span=11 }, 12:{ span=12 }, 13:{ span=13 }, 14:{ span=14 }, 15:{ span=15 }, 16:{ span=16 }, 17:{ span=17 }, 18:{ span=18 }, 19:{ span=19 }, 20:{ span=20 }, 21:{ span=21 }, 22:{ span=22 }, 23:{ span=23 }, 0:{ span=0 cap=1023 }, 1:{ span=1 }, 2:{ span=2 }, 3:{ span=3 }, 4:{ span=4 } [ 0.188231] domain-1: span=0-47 level=NUMA [ 0.188233] groups: 0:{ span=0-23 cap=24575 }, 24:{ span=24-47 cap=24576 } [ 0.188238] domain-2: span=0-71 level=NUMA [ 0.188241] groups: 0:{ span=0-47 
cap=49151 }, 48:{ span=48-71 cap=24576 } [ 0.188246] domain-3: span=0-95 level=NUMA [ 0.188249] groups: 0:{ span=0-71 mask=0-23 cap=73727 }, 72:{ span=48-95 mask=72-95 cap=49152 } [ 0.188258] CPU6 attaching sched-domain(s): [ 0.188259] domain-0: span=0-23 level=MC [ 0.188261] groups: 6:{ span=6 }, 7:{ span=7 }, 8:{ span=8 }, 9:{ span=9 }, 10:{ span=10 }, 11:{ span=11 }, 12:{ span=12 }, 13:{ span=13 }, 14:{ span=14 }, 15:{ span=15 }, 16:{ span=16 }, 17:{ span=17 }, 18:{ span=18 }, 19:{ span=19 }, 20:{ span=20 }, 21:{ span=21 }, 22:{ span=22 }, 23:{ span=23 }, 0:{ span=0 cap=1023 }, 1:{ span=1 }, 2:{ span=2 }, 3:{ span=3 }, 4:{ span=4 }, 5:{ span=5 } [ 0.188303] domain-1: span=0-47 level=NUMA [ 0.188305] groups: 0:{ span=0-23 cap=24575 }, 24:{ span=24-47 cap=24576 } [ 0.188310] domain-2: span=0-71 level=NUMA [ 0.188312] groups: 0:{ span=0-47 cap=49151 }, 48:{ span=48-71 cap=24576 } [ 0.188318] domain-3: span=0-95 level=NUMA [ 0.188321] groups: 0:{ span=0-71 mask=0-23 cap=73727 }, 72:{ span=48-95 mask=72-95 cap=49152 } [ 0.188330] CPU7 attaching sched-domain(s): [ 0.188331] domain-0: span=0-23 level=MC [ 0.188333] groups: 7:{ span=7 }, 8:{ span=8 }, 9:{ span=9 }, 10:{ span=10 }, 11:{ span=11 }, 12:{ span=12 }, 13:{ span=13 }, 14:{ span=14 }, 15:{ span=15 }, 16:{ span=16 }, 17:{ span=17 }, 18:{ span=18 }, 19:{ span=19 }, 20:{ span=20 }, 21:{ span=21 }, 22:{ span=22 }, 23:{ span=23 }, 0:{ span=0 cap=1023 }, 1:{ span=1 }, 2:{ span=2 }, 3:{ span=3 }, 4:{ span=4 }, 5:{ span=5 }, 6:{ span=6 } [ 0.188375] domain-1: span=0-47 level=NUMA [ 0.188377] groups: 0:{ span=0-23 cap=24575 }, 24:{ span=24-47 cap=24576 } [ 0.188383] domain-2: span=0-71 level=NUMA [ 0.188385] groups: 0:{ span=0-47 cap=49151 }, 48:{ span=48-71 cap=24576 } [ 0.188391] domain-3: span=0-95 level=NUMA [ 0.188393] groups: 0:{ span=0-71 mask=0-23 cap=73727 }, 72:{ span=48-95 mask=72-95 cap=49152 } [ 0.188403] CPU8 attaching sched-domain(s): [ 0.188404] domain-0: span=0-23 level=MC [ 0.188405] groups: 8:{ span=8 
}, 9:{ span=9 }, 10:{ span=10 }, 11:{ span=11 }, 12:{ span=12 }, 13:{ span=13 }, 14:{ span=14 }, 15:{ span=15 }, 16:{ span=16 }, 17:{ span=17 }, 18:{ span=18 }, 19:{ span=19 }, 20:{ span=20 }, 21:{ span=21 }, 22:{ span=22 }, 23:{ span=23 }, 0:{ span=0 cap=1023 }, 1:{ span=1 }, 2:{ span=2 }, 3:{ span=3 }, 4:{ span=4 }, 5:{ span=5 }, 6:{ span=6 }, 7:{ span=7 } [ 0.188448] domain-1: span=0-47 level=NUMA [ 0.188449] groups: 0:{ span=0-23 cap=24575 }, 24:{ span=24-47 cap=24576 } [ 0.188455] domain-2: span=0-71 level=NUMA [ 0.188457] groups: 0:{ span=0-47 cap=49151 }, 48:{ span=48-71 cap=24576 } [ 0.188463] domain-3: span=0-95 level=NUMA [ 0.188465] groups: 0:{ span=0-71 mask=0-23 cap=73727 }, 72:{ span=48-95 mask=72-95 cap=49152 } [ 0.188475] CPU9 attaching sched-domain(s): [ 0.188475] domain-0: span=0-23 level=MC [ 0.188477] groups: 9:{ span=9 }, 10:{ span=10 }, 11:{ span=11 }, 12:{ span=12 }, 13:{ span=13 }, 14:{ span=14 }, 15:{ span=15 }, 16:{ span=16 }, 17:{ span=17 }, 18:{ span=18 }, 19:{ span=19 }, 20:{ span=20 }, 21:{ span=21 }, 22:{ span=22 }, 23:{ span=23 }, 0:{ span=0 cap=1023 }, 1:{ span=1 }, 2:{ span=2 }, 3:{ span=3 }, 4:{ span=4 }, 5:{ span=5 }, 6:{ span=6 }, 7:{ span=7 }, 8:{ span=8 } [ 0.188519] domain-1: span=0-47 level=NUMA [ 0.188521] groups: 0:{ span=0-23 cap=24575 }, 24:{ span=24-47 cap=24576 } [ 0.188527] domain-2: span=0-71 level=NUMA [ 0.188529] groups: 0:{ span=0-47 cap=49151 }, 48:{ span=48-71 cap=24576 } [ 0.188534] domain-3: span=0-95 level=NUMA [ 0.188537] groups: 0:{ span=0-71 mask=0-23 cap=73727 }, 72:{ span=48-95 mask=72-95 cap=49152 } [ 0.188546] CPU10 attaching sched-domain(s): [ 0.188547] domain-0: span=0-23 level=MC [ 0.188549] groups: 10:{ span=10 }, 11:{ span=11 }, 12:{ span=12 }, 13:{ span=13 }, 14:{ span=14 }, 15:{ span=15 }, 16:{ span=16 }, 17:{ span=17 }, 18:{ span=18 }, 19:{ span=19 }, 20:{ span=20 }, 21:{ span=21 }, 22:{ span=22 }, 23:{ span=23 }, 0:{ span=0 cap=1023 }, 1:{ span=1 }, 2:{ span=2 }, 3:{ span=3 }, 4:{ span=4 }, 
5:{ span=5 }, 6:{ span=6 }, 7:{ span=7 }, 8:{ span=8 }, 9:{ span=9 } [ 0.188591] domain-1: span=0-47 level=NUMA [ 0.188592] groups: 0:{ span=0-23 cap=24575 }, 24:{ span=24-47 cap=24576 } [ 0.188598] domain-2: span=0-71 level=NUMA [ 0.188600] groups: 0:{ span=0-47 cap=49151 }, 48:{ span=48-71 cap=24576 } [ 0.188606] domain-3: span=0-95 level=NUMA [ 0.188608] groups: 0:{ span=0-71 mask=0-23 cap=73727 }, 72:{ span=48-95 mask=72-95 cap=49152 } [ 0.188618] CPU11 attaching sched-domain(s): [ 0.188619] domain-0: span=0-23 level=MC [ 0.188620] groups: 11:{ span=11 }, 12:{ span=12 }, 13:{ span=13 }, 14:{ span=14 }, 15:{ span=15 }, 16:{ span=16 }, 17:{ span=17 }, 18:{ span=18 }, 19:{ span=19 }, 20:{ span=20 }, 21:{ span=21 }, 22:{ span=22 }, 23:{ span=23 }, 0:{ span=0 cap=1023 }, 1:{ span=1 }, 2:{ span=2 }, 3:{ span=3 }, 4:{ span=4 }, 5:{ span=5 }, 6:{ span=6 }, 7:{ span=7 }, 8:{ span=8 }, 9:{ span=9 }, 10:{ span=10 } [ 0.188662] domain-1: span=0-47 level=NUMA [ 0.188664] groups: 0:{ span=0-23 cap=24575 }, 24:{ span=24-47 cap=24576 } [ 0.188669] domain-2: span=0-71 level=NUMA [ 0.188671] groups: 0:{ span=0-47 cap=49151 }, 48:{ span=48-71 cap=24576 } [ 0.188677] domain-3: span=0-95 level=NUMA [ 0.188679] groups: 0:{ span=0-71 mask=0-23 cap=73727 }, 72:{ span=48-95 mask=72-95 cap=49152 } [ 0.188689] CPU12 attaching sched-domain(s): [ 0.188690] domain-0: span=0-23 level=MC [ 0.188691] groups: 12:{ span=12 }, 13:{ span=13 }, 14:{ span=14 }, 15:{ span=15 }, 16:{ span=16 }, 17:{ span=17 }, 18:{ span=18 }, 19:{ span=19 }, 20:{ span=20 }, 21:{ span=21 }, 22:{ span=22 }, 23:{ span=23 }, 0:{ span=0 cap=1023 }, 1:{ span=1 }, 2:{ span=2 }, 3:{ span=3 }, 4:{ span=4 }, 5:{ span=5 }, 6:{ span=6 }, 7:{ span=7 }, 8:{ span=8 }, 9:{ span=9 }, 10:{ span=10 }, 11:{ span=11 } [ 0.188732] domain-1: span=0-47 level=NUMA [ 0.188734] groups: 0:{ span=0-23 cap=24575 }, 24:{ span=24-47 cap=24576 } [ 0.188740] domain-2: span=0-71 level=NUMA [ 0.188742] groups: 0:{ span=0-47 cap=49151 }, 48:{ span=48-71 
cap=24576 } [ 0.188748] domain-3: span=0-95 level=NUMA [ 0.188750] groups: 0:{ span=0-71 mask=0-23 cap=73727 }, 72:{ span=48-95 mask=72-95 cap=49152 } [ 0.188759] CPU13 attaching sched-domain(s): [ 0.188760] domain-0: span=0-23 level=MC [ 0.188762] groups: 13:{ span=13 }, 14:{ span=14 }, 15:{ span=15 }, 16:{ span=16 }, 17:{ span=17 }, 18:{ span=18 }, 19:{ span=19 }, 20:{ span=20 }, 21:{ span=21 }, 22:{ span=22 }, 23:{ span=23 }, 0:{ span=0 cap=1023 }, 1:{ span=1 }, 2:{ span=2 }, 3:{ span=3 }, 4:{ span=4 }, 5:{ span=5 }, 6:{ span=6 }, 7:{ span=7 }, 8:{ span=8 }, 9:{ span=9 }, 10:{ span=10 }, 11:{ span=11 }, 12:{ span=12 } [ 0.188804] domain-1: span=0-47 level=NUMA [ 0.188805] groups: 0:{ span=0-23 cap=24575 }, 24:{ span=24-47 cap=24576 } [ 0.188811] domain-2: span=0-71 level=NUMA [ 0.188813] groups: 0:{ span=0-47 cap=49151 }, 48:{ span=48-71 cap=24576 } [ 0.188819] domain-3: span=0-95 level=NUMA [ 0.188821] groups: 0:{ span=0-71 mask=0-23 cap=73727 }, 72:{ span=48-95 mask=72-95 cap=49152 } [ 0.188831] CPU14 attaching sched-domain(s): [ 0.188831] domain-0: span=0-23 level=MC [ 0.188833] groups: 14:{ span=14 }, 15:{ span=15 }, 16:{ span=16 }, 17:{ span=17 }, 18:{ span=18 }, 19:{ span=19 }, 20:{ span=20 }, 21:{ span=21 }, 22:{ span=22 }, 23:{ span=23 }, 0:{ span=0 cap=1023 }, 1:{ span=1 }, 2:{ span=2 }, 3:{ span=3 }, 4:{ span=4 }, 5:{ span=5 }, 6:{ span=6 }, 7:{ span=7 }, 8:{ span=8 }, 9:{ span=9 }, 10:{ span=10 }, 11:{ span=11 }, 12:{ span=12 }, 13:{ span=13 } [ 0.188875] domain-1: span=0-47 level=NUMA [ 0.188876] groups: 0:{ span=0-23 cap=24575 }, 24:{ span=24-47 cap=24576 } [ 0.188882] domain-2: span=0-71 level=NUMA [ 0.188884] groups: 0:{ span=0-47 cap=49151 }, 48:{ span=48-71 cap=24576 } [ 0.188890] domain-3: span=0-95 level=NUMA [ 0.188892] groups: 0:{ span=0-71 mask=0-23 cap=73727 }, 72:{ span=48-95 mask=72-95 cap=49152 } [ 0.188901] CPU15 attaching sched-domain(s): [ 0.188902] domain-0: span=0-23 level=MC [ 0.188904] groups: 15:{ span=15 }, 16:{ span=16 }, 17:{ 
span=17 }, 18:{ span=18 }, 19:{ span=19 }, 20:{ span=20 }, 21:{ span=21 }, 22:{ span=22 }, 23:{ span=23 }, 0:{ span=0 cap=1023 }, 1:{ span=1 }, 2:{ span=2 }, 3:{ span=3 }, 4:{ span=4 }, 5:{ span=5 }, 6:{ span=6 }, 7:{ span=7 }, 8:{ span=8 }, 9:{ span=9 }, 10:{ span=10 }, 11:{ span=11 }, 12:{ span=12 }, 13:{ span=13 }, 14:{ span=14 } [ 0.188946] domain-1: span=0-47 level=NUMA [ 0.188948] groups: 0:{ span=0-23 cap=24575 }, 24:{ span=24-47 cap=24576 } [ 0.188954] domain-2: span=0-71 level=NUMA [ 0.188955] groups: 0:{ span=0-47 cap=49151 }, 48:{ span=48-71 cap=24576 } [ 0.188961] domain-3: span=0-95 level=NUMA [ 0.188964] groups: 0:{ span=0-71 mask=0-23 cap=73727 }, 72:{ span=48-95 mask=72-95 cap=49152 } [ 0.188973] CPU16 attaching sched-domain(s): [ 0.188974] domain-0: span=0-23 level=MC [ 0.188976] groups: 16:{ span=16 }, 17:{ span=17 }, 18:{ span=18 }, 19:{ span=19 }, 20:{ span=20 }, 21:{ span=21 }, 22:{ span=22 }, 23:{ span=23 }, 0:{ span=0 cap=1023 }, 1:{ span=1 }, 2:{ span=2 }, 3:{ span=3 }, 4:{ span=4 }, 5:{ span=5 }, 6:{ span=6 }, 7:{ span=7 }, 8:{ span=8 }, 9:{ span=9 }, 10:{ span=10 }, 11:{ span=11 }, 12:{ span=12 }, 13:{ span=13 }, 14:{ span=14 }, 15:{ span=15 } [ 0.189018] domain-1: span=0-47 level=NUMA [ 0.189019] groups: 0:{ span=0-23 cap=24575 }, 24:{ span=24-47 cap=24576 } [ 0.189025] domain-2: span=0-71 level=NUMA [ 0.189027] groups: 0:{ span=0-47 cap=49151 }, 48:{ span=48-71 cap=24576 } [ 0.189033] domain-3: span=0-95 level=NUMA [ 0.189035] groups: 0:{ span=0-71 mask=0-23 cap=73727 }, 72:{ span=48-95 mask=72-95 cap=49152 } [ 0.189045] CPU17 attaching sched-domain(s): [ 0.189046] domain-0: span=0-23 level=MC [ 0.189047] groups: 17:{ span=17 }, 18:{ span=18 }, 19:{ span=19 }, 20:{ span=20 }, 21:{ span=21 }, 22:{ span=22 }, 23:{ span=23 }, 0:{ span=0 cap=1023 }, 1:{ span=1 }, 2:{ span=2 }, 3:{ span=3 }, 4:{ span=4 }, 5:{ span=5 }, 6:{ span=6 }, 7:{ span=7 }, 8:{ span=8 }, 9:{ span=9 }, 10:{ span=10 }, 11:{ span=11 }, 12:{ span=12 }, 13:{ span=13 }, 14:{ 
span=14 }, 15:{ span=15 }, 16:{ span=16 } [ 0.189090] domain-1: span=0-47 level=NUMA [ 0.189091] groups: 0:{ span=0-23 cap=24575 }, 24:{ span=24-47 cap=24576 } [ 0.189097] domain-2: span=0-71 level=NUMA [ 0.189099] groups: 0:{ span=0-47 cap=49151 }, 48:{ span=48-71 cap=24576 } [ 0.189105] domain-3: span=0-95 level=NUMA [ 0.189107] groups: 0:{ span=0-71 mask=0-23 cap=73727 }, 72:{ span=48-95 mask=72-95 cap=49152 } [ 0.189117] CPU18 attaching sched-domain(s): [ 0.189118] domain-0: span=0-23 level=MC [ 0.189119] groups: 18:{ span=18 }, 19:{ span=19 }, 20:{ span=20 }, 21:{ span=21 }, 22:{ span=22 }, 23:{ span=23 }, 0:{ span=0 cap=1023 }, 1:{ span=1 }, 2:{ span=2 }, 3:{ span=3 }, 4:{ span=4 }, 5:{ span=5 }, 6:{ span=6 }, 7:{ span=7 }, 8:{ span=8 }, 9:{ span=9 }, 10:{ span=10 }, 11:{ span=11 }, 12:{ span=12 }, 13:{ span=13 }, 14:{ span=14 }, 15:{ span=15 }, 16:{ span=16 }, 17:{ span=17 } [ 0.189161] domain-1: span=0-47 level=NUMA [ 0.189162] groups: 0:{ span=0-23 cap=24575 }, 24:{ span=24-47 cap=24576 } [ 0.189168] domain-2: span=0-71 level=NUMA [ 0.189170] groups: 0:{ span=0-47 cap=49151 }, 48:{ span=48-71 cap=24576 } [ 0.189176] domain-3: span=0-95 level=NUMA [ 0.189178] groups: 0:{ span=0-71 mask=0-23 cap=73727 }, 72:{ span=48-95 mask=72-95 cap=49152 } [ 0.189188] CPU19 attaching sched-domain(s): [ 0.189188] domain-0: span=0-23 level=MC [ 0.189190] groups: 19:{ span=19 }, 20:{ span=20 }, 21:{ span=21 }, 22:{ span=22 }, 23:{ span=23 }, 0:{ span=0 cap=1023 }, 1:{ span=1 }, 2:{ span=2 }, 3:{ span=3 }, 4:{ span=4 }, 5:{ span=5 }, 6:{ span=6 }, 7:{ span=7 }, 8:{ span=8 }, 9:{ span=9 }, 10:{ span=10 }, 11:{ span=11 }, 12:{ span=12 }, 13:{ span=13 }, 14:{ span=14 }, 15:{ span=15 }, 16:{ span=16 }, 17:{ span=17 }, 18:{ span=18 } [ 0.189231] domain-1: span=0-47 level=NUMA [ 0.189233] groups: 0:{ span=0-23 cap=24575 }, 24:{ span=24-47 cap=24576 } [ 0.189239] domain-2: span=0-71 level=NUMA [ 0.189241] groups: 0:{ span=0-47 cap=49151 }, 48:{ span=48-71 cap=24576 } [ 0.189247] 
domain-3: span=0-95 level=NUMA [ 0.189249] groups: 0:{ span=0-71 mask=0-23 cap=73727 }, 72:{ span=48-95 mask=72-95 cap=49152 } [ 0.189258] CPU20 attaching sched-domain(s): [ 0.189259] domain-0: span=0-23 level=MC [ 0.189261] groups: 20:{ span=20 }, 21:{ span=21 }, 22:{ span=22 }, 23:{ span=23 }, 0:{ span=0 cap=1023 }, 1:{ span=1 }, 2:{ span=2 }, 3:{ span=3 }, 4:{ span=4 }, 5:{ span=5 }, 6:{ span=6 }, 7:{ span=7 }, 8:{ span=8 }, 9:{ span=9 }, 10:{ span=10 }, 11:{ span=11 }, 12:{ span=12 }, 13:{ span=13 }, 14:{ span=14 }, 15:{ span=15 }, 16:{ span=16 }, 17:{ span=17 }, 18:{ span=18 }, 19:{ span=19 } [ 0.189301] domain-1: span=0-47 level=NUMA [ 0.189303] groups: 0:{ span=0-23 cap=24575 }, 24:{ span=24-47 cap=24576 } [ 0.189309] domain-2: span=0-71 level=NUMA [ 0.189311] groups: 0:{ span=0-47 cap=49151 }, 48:{ span=48-71 cap=24576 } [ 0.189317] domain-3: span=0-95 level=NUMA [ 0.189319] groups: 0:{ span=0-71 mask=0-23 cap=73727 }, 72:{ span=48-95 mask=72-95 cap=49152 } [ 0.189329] CPU21 attaching sched-domain(s): [ 0.189330] domain-0: span=0-23 level=MC [ 0.189331] groups: 21:{ span=21 }, 22:{ span=22 }, 23:{ span=23 }, 0:{ span=0 cap=1023 }, 1:{ span=1 }, 2:{ span=2 }, 3:{ span=3 }, 4:{ span=4 }, 5:{ span=5 }, 6:{ span=6 }, 7:{ span=7 }, 8:{ span=8 }, 9:{ span=9 }, 10:{ span=10 }, 11:{ span=11 }, 12:{ span=12 }, 13:{ span=13 }, 14:{ span=14 }, 15:{ span=15 }, 16:{ span=16 }, 17:{ span=17 }, 18:{ span=18 }, 19:{ span=19 }, 20:{ span=20 } [ 0.189373] domain-1: span=0-47 level=NUMA [ 0.189375] groups: 0:{ span=0-23 cap=24575 }, 24:{ span=24-47 cap=24576 } [ 0.189381] domain-2: span=0-71 level=NUMA [ 0.189383] groups: 0:{ span=0-47 cap=49151 }, 48:{ span=48-71 cap=24576 } [ 0.189388] domain-3: span=0-95 level=NUMA [ 0.189391] groups: 0:{ span=0-71 mask=0-23 cap=73727 }, 72:{ span=48-95 mask=72-95 cap=49152 } [ 0.189400] CPU22 attaching sched-domain(s): [ 0.189401] domain-0: span=0-23 level=MC [ 0.189403] groups: 22:{ span=22 }, 23:{ span=23 }, 0:{ span=0 cap=1023 }, 1:{ 
span=1 }, 2:{ span=2 }, 3:{ span=3 }, 4:{ span=4 }, 5:{ span=5 }, 6:{ span=6 }, 7:{ span=7 }, 8:{ span=8 }, 9:{ span=9 }, 10:{ span=10 }, 11:{ span=11 }, 12:{ span=12 }, 13:{ span=13 }, 14:{ span=14 }, 15:{ span=15 }, 16:{ span=16 }, 17:{ span=17 }, 18:{ span=18 }, 19:{ span=19 }, 20:{ span=20 }, 21:{ span=21 }
[ 0.189444] domain-1: span=0-47 level=NUMA
[ 0.189446] groups: 0:{ span=0-23 cap=24575 }, 24:{ span=24-47 cap=24576 }
[ 0.189452] domain-2: span=0-71 level=NUMA
[ 0.189454] groups: 0:{ span=0-47 cap=49151 }, 48:{ span=48-71 cap=24576 }
[ 0.189460] domain-3: span=0-95 level=NUMA
[ 0.189462] groups: 0:{ span=0-71 mask=0-23 cap=73727 }, 72:{ span=48-95 mask=72-95 cap=49152 }
[ 0.189471] CPU23 attaching sched-domain(s):
[ 0.189472] domain-0: span=0-23 level=MC
[ 0.189474] groups: 23:{ span=23 }, 0:{ span=0 cap=1023 }, 1:{ span=1 }, 2:{ span=2 }, 3:{ span=3 }, 4:{ span=4 }, 5:{ span=5 }, 6:{ span=6 }, 7:{ span=7 }, 8:{ span=8 }, 9:{ span=9 }, 10:{ span=10 }, 11:{ span=11 }, 12:{ span=12 }, 13:{ span=13 }, 14:{ span=14 }, 15:{ span=15 }, 16:{ span=16 }, 17:{ span=17 }, 18:{ span=18 }, 19:{ span=19 }, 20:{ span=20 }, 21:{ span=21 }, 22:{ span=22 }
[ 0.189515] domain-1: span=0-47 level=NUMA
[ 0.189517] groups: 0:{ span=0-23 cap=24575 }, 24:{ span=24-47 cap=24576 }
[ 0.189523] domain-2: span=0-71 level=NUMA
[ 0.189525] groups: 0:{ span=0-47 cap=49151 }, 48:{ span=48-71 cap=24576 }
[ 0.189531] domain-3: span=0-95 level=NUMA
[ 0.189533] groups: 0:{ span=0-71 mask=0-23 cap=73727 }, 72:{ span=48-95 mask=72-95 cap=49152 }
[ 0.189543] CPU24 attaching sched-domain(s):
[ 0.189545] domain-0: span=24-47 level=MC
[ 0.189546] groups: 24:{ span=24 }, 25:{ span=25 }, 26:{ span=26 }, 27:{ span=27 }, 28:{ span=28 }, 29:{ span=29 }, 30:{ span=30 }, 31:{ span=31 }, 32:{ span=32 }, 33:{ span=33 }, 34:{ span=34 }, 35:{ span=35 }, 36:{ span=36 }, 37:{ span=37 }, 38:{ span=38 }, 39:{ span=39 }, 40:{ span=40 }, 41:{ span=41 }, 42:{ span=42 }, 43:{ span=43 }, 44:{ span=44 }, 45:{ span=45 }, 46:{ span=46 }, 47:{ span=47 }
[ 0.189588] domain-1: span=0-47 level=NUMA
[ 0.189590] groups: 24:{ span=24-47 cap=24576 }, 0:{ span=0-23 cap=24575 }
[ 0.189596] domain-2: span=0-71 level=NUMA
[ 0.189598] groups: 24:{ span=0-47 mask=24-47 cap=49152 }, 48:{ span=48-71 cap=24576 }
[ 0.189605] domain-3: span=0-95 level=NUMA
[ 0.189607] groups: 24:{ span=0-71 mask=24-47 cap=73728 }, 72:{ span=0-23,48-95 mask=72-95 cap=73728 }
[ 0.189617] CPU25 attaching sched-domain(s):
[ 0.189618] domain-0: span=24-47 level=MC
[ 0.189620] groups: 25:{ span=25 }, 26:{ span=26 }, 27:{ span=27 }, 28:{ span=28 }, 29:{ span=29 }, 30:{ span=30 }, 31:{ span=31 }, 32:{ span=32 }, 33:{ span=33 }, 34:{ span=34 }, 35:{ span=35 }, 36:{ span=36 }, 37:{ span=37 }, 38:{ span=38 }, 39:{ span=39 }, 40:{ span=40 }, 41:{ span=41 }, 42:{ span=42 }, 43:{ span=43 }, 44:{ span=44 }, 45:{ span=45 }, 46:{ span=46 }, 47:{ span=47 }, 24:{ span=24 }
[ 0.189662] domain-1: span=0-47 level=NUMA
[ 0.189664] groups: 24:{ span=24-47 cap=24576 }, 0:{ span=0-23 cap=24575 }
[ 0.189670] domain-2: span=0-71 level=NUMA
[ 0.189672] groups: 24:{ span=0-47 mask=24-47 cap=49152 }, 48:{ span=48-71 cap=24576 }
[ 0.189679] domain-3: span=0-95 level=NUMA
[ 0.189681] groups: 24:{ span=0-71 mask=24-47 cap=73728 }, 72:{ span=0-23,48-95 mask=72-95 cap=73728 }
[ 0.189691] CPU26 attaching sched-domain(s):
[ 0.189692] domain-0: span=24-47 level=MC
[ 0.189694] groups: 26:{ span=26 }, 27:{ span=27 }, 28:{ span=28 }, 29:{ span=29 }, 30:{ span=30 }, 31:{ span=31 }, 32:{ span=32 }, 33:{ span=33 }, 34:{ span=34 }, 35:{ span=35 }, 36:{ span=36 }, 37:{ span=37 }, 38:{ span=38 }, 39:{ span=39 }, 40:{ span=40 }, 41:{ span=41 }, 42:{ span=42 }, 43:{ span=43 }, 44:{ span=44 }, 45:{ span=45 }, 46:{ span=46 }, 47:{ span=47 }, 24:{ span=24 }, 25:{ span=25 }
[ 0.189735] domain-1: span=0-47 level=NUMA
[ 0.189737] groups: 24:{ span=24-47 cap=24576 }, 0:{ span=0-23 cap=24575 }
[ 0.189743] domain-2: span=0-71 level=NUMA
[ 0.189745] groups: 24:{ span=0-47 mask=24-47 cap=49152 }, 48:{ span=48-71 cap=24576 }
[ 0.189752] domain-3: span=0-95 level=NUMA
[ 0.189754] groups: 24:{ span=0-71 mask=24-47 cap=73728 }, 72:{ span=0-23,48-95 mask=72-95 cap=73728 }
[ 0.189764] CPU27 attaching sched-domain(s):
[ 0.189765] domain-0: span=24-47 level=MC
[ 0.189766] groups: 27:{ span=27 }, 28:{ span=28 }, 29:{ span=29 }, 30:{ span=30 }, 31:{ span=31 }, 32:{ span=32 }, 33:{ span=33 }, 34:{ span=34 }, 35:{ span=35 }, 36:{ span=36 }, 37:{ span=37 }, 38:{ span=38 }, 39:{ span=39 }, 40:{ span=40 }, 41:{ span=41 }, 42:{ span=42 }, 43:{ span=43 }, 44:{ span=44 }, 45:{ span=45 }, 46:{ span=46 }, 47:{ span=47 }, 24:{ span=24 }, 25:{ span=25 }, 26:{ span=26 }
[ 0.189808] domain-1: span=0-47 level=NUMA
[ 0.189810] groups: 24:{ span=24-47 cap=24576 }, 0:{ span=0-23 cap=24575 }
[ 0.189816] domain-2: span=0-71 level=NUMA
[ 0.189818] groups: 24:{ span=0-47 mask=24-47 cap=49152 }, 48:{ span=48-71 cap=24576 }
[ 0.189825] domain-3: span=0-95 level=NUMA
[ 0.189827] groups: 24:{ span=0-71 mask=24-47 cap=73728 }, 72:{ span=0-23,48-95 mask=72-95 cap=73728 }
[ 0.189837] CPU28 attaching sched-domain(s):
[ 0.189838] domain-0: span=24-47 level=MC
[ 0.189839] groups: 28:{ span=28 }, 29:{ span=29 }, 30:{ span=30 }, 31:{ span=31 }, 32:{ span=32 }, 33:{ span=33 }, 34:{ span=34 }, 35:{ span=35 }, 36:{ span=36 }, 37:{ span=37 }, 38:{ span=38 }, 39:{ span=39 }, 40:{ span=40 }, 41:{ span=41 }, 42:{ span=42 }, 43:{ span=43 }, 44:{ span=44 }, 45:{ span=45 }, 46:{ span=46 }, 47:{ span=47 }, 24:{ span=24 }, 25:{ span=25 }, 26:{ span=26 }, 27:{ span=27 }
[ 0.189880] domain-1: span=0-47 level=NUMA
[ 0.189882] groups: 24:{ span=24-47 cap=24576 }, 0:{ span=0-23 cap=24575 }
[ 0.189888] domain-2: span=0-71 level=NUMA
[ 0.189890] groups: 24:{ span=0-47 mask=24-47 cap=49152 }, 48:{ span=48-71 cap=24576 }
[ 0.189896] domain-3: span=0-95 level=NUMA
[ 0.189898] groups: 24:{ span=0-71 mask=24-47 cap=73728 }, 72:{ span=0-23,48-95 mask=72-95 cap=73728 }
[ 0.189909] CPU29 attaching sched-domain(s):
[ 0.189909] domain-0: span=24-47 level=MC
[ 0.189911] groups: 29:{ span=29 }, 30:{ span=30 }, 31:{ span=31 }, 32:{ span=32 }, 33:{ span=33 }, 34:{ span=34 }, 35:{ span=35 }, 36:{ span=36 }, 37:{ span=37 }, 38:{ span=38 }, 39:{ span=39 }, 40:{ span=40 }, 41:{ span=41 }, 42:{ span=42 }, 43:{ span=43 }, 44:{ span=44 }, 45:{ span=45 }, 46:{ span=46 }, 47:{ span=47 }, 24:{ span=24 }, 25:{ span=25 }, 26:{ span=26 }, 27:{ span=27 }, 28:{ span=28 }
[ 0.189952] domain-1: span=0-47 level=NUMA
[ 0.189954] groups: 24:{ span=24-47 cap=24576 }, 0:{ span=0-23 cap=24575 }
[ 0.189960] domain-2: span=0-71 level=NUMA
[ 0.189962] groups: 24:{ span=0-47 mask=24-47 cap=49152 }, 48:{ span=48-71 cap=24576 }
[ 0.189969] domain-3: span=0-95 level=NUMA
[ 0.189971] groups: 24:{ span=0-71 mask=24-47 cap=73728 }, 72:{ span=0-23,48-95 mask=72-95 cap=73728 }
[ 0.189981] CPU30 attaching sched-domain(s):
[ 0.189982] domain-0: span=24-47 level=MC
[ 0.189984] groups: 30:{ span=30 }, 31:{ span=31 }, 32:{ span=32 }, 33:{ span=33 }, 34:{ span=34 }, 35:{ span=35 }, 36:{ span=36 }, 37:{ span=37 }, 38:{ span=38 }, 39:{ span=39 }, 40:{ span=40 }, 41:{ span=41 }, 42:{ span=42 }, 43:{ span=43 }, 44:{ span=44 }, 45:{ span=45 }, 46:{ span=46 }, 47:{ span=47 }, 24:{ span=24 }, 25:{ span=25 }, 26:{ span=26 }, 27:{ span=27 }, 28:{ span=28 }, 29:{ span=29 }
[ 0.190032] domain-1: span=0-47 level=NUMA
[ 0.190034] groups: 24:{ span=24-47 cap=24576 }, 0:{ span=0-23 cap=24575 }
[ 0.190040] domain-2: span=0-71 level=NUMA
[ 0.190042] groups: 24:{ span=0-47 mask=24-47 cap=49152 }, 48:{ span=48-71 cap=24576 }
[ 0.190049] domain-3: span=0-95 level=NUMA
[ 0.190051] groups: 24:{ span=0-71 mask=24-47 cap=73728 }, 72:{ span=0-23,48-95 mask=72-95 cap=73728 }
[ 0.190061] CPU31 attaching sched-domain(s):
[ 0.190062] domain-0: span=24-47 level=MC
[ 0.190064] groups: 31:{ span=31 }, 32:{ span=32 }, 33:{ span=33 }, 34:{ span=34 }, 35:{ span=35 }, 36:{ span=36 }, 37:{ span=37 }, 38:{ span=38 }, 39:{ span=39 }, 40:{ span=40 }, 41:{ span=41 }, 42:{ span=42 }, 43:{ span=43 }, 44:{ span=44 }, 45:{ span=45 }, 46:{ span=46 }, 47:{ span=47 }, 24:{ span=24 }, 25:{ span=25 }, 26:{ span=26 }, 27:{ span=27 }, 28:{ span=28 }, 29:{ span=29 }, 30:{ span=30 }
[ 0.190105] domain-1: span=0-47 level=NUMA
[ 0.190107] groups: 24:{ span=24-47 cap=24576 }, 0:{ span=0-23 cap=24575 }
[ 0.190113] domain-2: span=0-71 level=NUMA
[ 0.190114] groups: 24:{ span=0-47 mask=24-47 cap=49152 }, 48:{ span=48-71 cap=24576 }
[ 0.190121] domain-3: span=0-95 level=NUMA
[ 0.190123] groups: 24:{ span=0-71 mask=24-47 cap=73728 }, 72:{ span=0-23,48-95 mask=72-95 cap=73728 }
[ 0.190134] CPU32 attaching sched-domain(s):
[ 0.190135] domain-0: span=24-47 level=MC
[ 0.190137] groups: 32:{ span=32 }, 33:{ span=33 }, 34:{ span=34 }, 35:{ span=35 }, 36:{ span=36 }, 37:{ span=37 }, 38:{ span=38 }, 39:{ span=39 }, 40:{ span=40 }, 41:{ span=41 }, 42:{ span=42 }, 43:{ span=43 }, 44:{ span=44 }, 45:{ span=45 }, 46:{ span=46 }, 47:{ span=47 }, 24:{ span=24 }, 25:{ span=25 }, 26:{ span=26 }, 27:{ span=27 }, 28:{ span=28 }, 29:{ span=29 }, 30:{ span=30 }, 31:{ span=31 }
[ 0.190178] domain-1: span=0-47 level=NUMA
[ 0.190180] groups: 24:{ span=24-47 cap=24576 }, 0:{ span=0-23 cap=24575 }
[ 0.190186] domain-2: span=0-71 level=NUMA
[ 0.190188] groups: 24:{ span=0-47 mask=24-47 cap=49152 }, 48:{ span=48-71 cap=24576 }
[ 0.190194] domain-3: span=0-95 level=NUMA
[ 0.190196] groups: 24:{ span=0-71 mask=24-47 cap=73728 }, 72:{ span=0-23,48-95 mask=72-95 cap=73728 }
[ 0.190207] CPU33 attaching sched-domain(s):
[ 0.190207] domain-0: span=24-47 level=MC
[ 0.190209] groups: 33:{ span=33 }, 34:{ span=34 }, 35:{ span=35 }, 36:{ span=36 }, 37:{ span=37 }, 38:{ span=38 }, 39:{ span=39 }, 40:{ span=40 }, 41:{ span=41 }, 42:{ span=42 }, 43:{ span=43 }, 44:{ span=44 }, 45:{ span=45 }, 46:{ span=46 }, 47:{ span=47 }, 24:{ span=24 }, 25:{ span=25 }, 26:{ span=26 }, 27:{ span=27 }, 28:{ span=28 }, 29:{ span=29 }, 30:{ span=30 }, 31:{ span=31 }, 32:{ span=32 }
[ 0.190250] domain-1: span=0-47 level=NUMA
[ 0.190252] groups: 24:{ span=24-47 cap=24576 }, 0:{ span=0-23 cap=24575 }
[ 0.190258] domain-2: span=0-71 level=NUMA
[ 0.190260] groups: 24:{ span=0-47 mask=24-47 cap=49152 }, 48:{ span=48-71 cap=24576 }
[ 0.190267] domain-3: span=0-95 level=NUMA
[ 0.190269] groups: 24:{ span=0-71 mask=24-47 cap=73728 }, 72:{ span=0-23,48-95 mask=72-95 cap=73728 }
[ 0.190279] CPU34 attaching sched-domain(s):
[ 0.190280] domain-0: span=24-47 level=MC
[ 0.190282] groups: 34:{ span=34 }, 35:{ span=35 }, 36:{ span=36 }, 37:{ span=37 }, 38:{ span=38 }, 39:{ span=39 }, 40:{ span=40 }, 41:{ span=41 }, 42:{ span=42 }, 43:{ span=43 }, 44:{ span=44 }, 45:{ span=45 }, 46:{ span=46 }, 47:{ span=47 }, 24:{ span=24 }, 25:{ span=25 }, 26:{ span=26 }, 27:{ span=27 }, 28:{ span=28 }, 29:{ span=29 }, 30:{ span=30 }, 31:{ span=31 }, 32:{ span=32 }, 33:{ span=33 }
[ 0.190323] domain-1: span=0-47 level=NUMA
[ 0.190325] groups: 24:{ span=24-47 cap=24576 }, 0:{ span=0-23 cap=24575 }
[ 0.190331] domain-2: span=0-71 level=NUMA
[ 0.190333] groups: 24:{ span=0-47 mask=24-47 cap=49152 }, 48:{ span=48-71 cap=24576 }
[ 0.190339] domain-3: span=0-95 level=NUMA
[ 0.190342] groups: 24:{ span=0-71 mask=24-47 cap=73728 }, 72:{ span=0-23,48-95 mask=72-95 cap=73728 }
[ 0.190352] CPU35 attaching sched-domain(s):
[ 0.190353] domain-0: span=24-47 level=MC
[ 0.190355] groups: 35:{ span=35 }, 36:{ span=36 }, 37:{ span=37 }, 38:{ span=38 }, 39:{ span=39 }, 40:{ span=40 }, 41:{ span=41 }, 42:{ span=42 }, 43:{ span=43 }, 44:{ span=44 }, 45:{ span=45 }, 46:{ span=46 }, 47:{ span=47 }, 24:{ span=24 }, 25:{ span=25 }, 26:{ span=26 }, 27:{ span=27 }, 28:{ span=28 }, 29:{ span=29 }, 30:{ span=30 }, 31:{ span=31 }, 32:{ span=32 }, 33:{ span=33 }, 34:{ span=34 }
[ 0.190395] domain-1: span=0-47 level=NUMA
[ 0.190397] groups: 24:{ span=24-47 cap=24576 }, 0:{ span=0-23 cap=24575 }
[ 0.190403] domain-2: span=0-71 level=NUMA
[ 0.190405] groups: 24:{ span=0-47 mask=24-47 cap=49152 }, 48:{ span=48-71 cap=24576 }
[ 0.190412] domain-3: span=0-95 level=NUMA
[ 0.190414] groups: 24:{ span=0-71 mask=24-47 cap=73728 }, 72:{ span=0-23,48-95 mask=72-95 cap=73728 }
[ 0.190424] CPU36 attaching sched-domain(s):
[ 0.190425] domain-0: span=24-47 level=MC
[ 0.190427] groups: 36:{ span=36 }, 37:{ span=37 }, 38:{ span=38 }, 39:{ span=39 }, 40:{ span=40 }, 41:{ span=41 }, 42:{ span=42 }, 43:{ span=43 }, 44:{ span=44 }, 45:{ span=45 }, 46:{ span=46 }, 47:{ span=47 }, 24:{ span=24 }, 25:{ span=25 }, 26:{ span=26 }, 27:{ span=27 }, 28:{ span=28 }, 29:{ span=29 }, 30:{ span=30 }, 31:{ span=31 }, 32:{ span=32 }, 33:{ span=33 }, 34:{ span=34 }, 35:{ span=35 }
[ 0.190468] domain-1: span=0-47 level=NUMA
[ 0.190470] groups: 24:{ span=24-47 cap=24576 }, 0:{ span=0-23 cap=24575 }
[ 0.190476] domain-2: span=0-71 level=NUMA
[ 0.190478] groups: 24:{ span=0-47 mask=24-47 cap=49152 }, 48:{ span=48-71 cap=24576 }
[ 0.190485] domain-3: span=0-95 level=NUMA
[ 0.190487] groups: 24:{ span=0-71 mask=24-47 cap=73728 }, 72:{ span=0-23,48-95 mask=72-95 cap=73728 }
[ 0.190497] CPU37 attaching sched-domain(s):
[ 0.190498] domain-0: span=24-47 level=MC
[ 0.190499] groups: 37:{ span=37 }, 38:{ span=38 }, 39:{ span=39 }, 40:{ span=40 }, 41:{ span=41 }, 42:{ span=42 }, 43:{ span=43 }, 44:{ span=44 }, 45:{ span=45 }, 46:{ span=46 }, 47:{ span=47 }, 24:{ span=24 }, 25:{ span=25 }, 26:{ span=26 }, 27:{ span=27 }, 28:{ span=28 }, 29:{ span=29 }, 30:{ span=30 }, 31:{ span=31 }, 32:{ span=32 }, 33:{ span=33 }, 34:{ span=34 }, 35:{ span=35 }, 36:{ span=36 }
[ 0.190540] domain-1: span=0-47 level=NUMA
[ 0.190542] groups: 24:{ span=24-47 cap=24576 }, 0:{ span=0-23 cap=24575 }
[ 0.190548] domain-2: span=0-71 level=NUMA
[ 0.190550] groups: 24:{ span=0-47 mask=24-47 cap=49152 }, 48:{ span=48-71 cap=24576 }
[ 0.190557] domain-3: span=0-95 level=NUMA
[ 0.190559] groups: 24:{ span=0-71 mask=24-47 cap=73728 }, 72:{ span=0-23,48-95 mask=72-95 cap=73728 }
[ 0.190570] CPU38 attaching sched-domain(s):
[ 0.190570] domain-0: span=24-47 level=MC
[ 0.190572] groups: 38:{ span=38 }, 39:{ span=39 }, 40:{ span=40 }, 41:{ span=41 }, 42:{ span=42 }, 43:{ span=43 }, 44:{ span=44 }, 45:{ span=45 }, 46:{ span=46 }, 47:{ span=47 }, 24:{ span=24 }, 25:{ span=25 }, 26:{ span=26 }, 27:{ span=27 }, 28:{ span=28 }, 29:{ span=29 }, 30:{ span=30 }, 31:{ span=31 }, 32:{ span=32 }, 33:{ span=33 }, 34:{ span=34 }, 35:{ span=35 }, 36:{ span=36 }, 37:{ span=37 }
[ 0.190613] domain-1: span=0-47 level=NUMA
[ 0.190615] groups: 24:{ span=24-47 cap=24576 }, 0:{ span=0-23 cap=24575 }
[ 0.190621] domain-2: span=0-71 level=NUMA
[ 0.190623] groups: 24:{ span=0-47 mask=24-47 cap=49152 }, 48:{ span=48-71 cap=24576 }
[ 0.190630] domain-3: span=0-95 level=NUMA
[ 0.190632] groups: 24:{ span=0-71 mask=24-47 cap=73728 }, 72:{ span=0-23,48-95 mask=72-95 cap=73728 }
[ 0.190642] CPU39 attaching sched-domain(s):
[ 0.190643] domain-0: span=24-47 level=MC
[ 0.190644] groups: 39:{ span=39 }, 40:{ span=40 }, 41:{ span=41 }, 42:{ span=42 }, 43:{ span=43 }, 44:{ span=44 }, 45:{ span=45 }, 46:{ span=46 }, 47:{ span=47 }, 24:{ span=24 }, 25:{ span=25 }, 26:{ span=26 }, 27:{ span=27 }, 28:{ span=28 }, 29:{ span=29 }, 30:{ span=30 }, 31:{ span=31 }, 32:{ span=32 }, 33:{ span=33 }, 34:{ span=34 }, 35:{ span=35 }, 36:{ span=36 }, 37:{ span=37 }, 38:{ span=38 }
[ 0.190685] domain-1: span=0-47 level=NUMA
[ 0.190687] groups: 24:{ span=24-47 cap=24576 }, 0:{ span=0-23 cap=24575 }
[ 0.190693] domain-2: span=0-71 level=NUMA
[ 0.190695] groups: 24:{ span=0-47 mask=24-47 cap=49152 }, 48:{ span=48-71 cap=24576 }
[ 0.190702] domain-3: span=0-95 level=NUMA
[ 0.190704] groups: 24:{ span=0-71 mask=24-47 cap=73728 }, 72:{ span=0-23,48-95 mask=72-95 cap=73728 }
[ 0.190714] CPU40 attaching sched-domain(s):
[ 0.190715] domain-0: span=24-47 level=MC
[ 0.190717] groups: 40:{ span=40 }, 41:{ span=41 }, 42:{ span=42 }, 43:{ span=43 }, 44:{ span=44 }, 45:{ span=45 }, 46:{ span=46 }, 47:{ span=47 }, 24:{ span=24 }, 25:{ span=25 }, 26:{ span=26 }, 27:{ span=27 }, 28:{ span=28 }, 29:{ span=29 }, 30:{ span=30 }, 31:{ span=31 }, 32:{ span=32 }, 33:{ span=33 }, 34:{ span=34 }, 35:{ span=35 }, 36:{ span=36 }, 37:{ span=37 }, 38:{ span=38 }, 39:{ span=39 }
[ 0.190758] domain-1: span=0-47 level=NUMA
[ 0.190760] groups: 24:{ span=24-47 cap=24576 }, 0:{ span=0-23 cap=24575 }
[ 0.190766] domain-2: span=0-71 level=NUMA
[ 0.190768] groups: 24:{ span=0-47 mask=24-47 cap=49152 }, 48:{ span=48-71 cap=24576 }
[ 0.190774] domain-3: span=0-95 level=NUMA
[ 0.190776] groups: 24:{ span=0-71 mask=24-47 cap=73728 }, 72:{ span=0-23,48-95 mask=72-95 cap=73728 }
[ 0.190787] CPU41 attaching sched-domain(s):
[ 0.190788] domain-0: span=24-47 level=MC
[ 0.190789] groups: 41:{ span=41 }, 42:{ span=42 }, 43:{ span=43 }, 44:{ span=44 }, 45:{ span=45 }, 46:{ span=46 }, 47:{ span=47 }, 24:{ span=24 }, 25:{ span=25 }, 26:{ span=26 }, 27:{ span=27 }, 28:{ span=28 }, 29:{ span=29 }, 30:{ span=30 }, 31:{ span=31 }, 32:{ span=32 }, 33:{ span=33 }, 34:{ span=34 }, 35:{ span=35 }, 36:{ span=36 }, 37:{ span=37 }, 38:{ span=38 }, 39:{ span=39 }, 40:{ span=40 }
[ 0.190831] domain-1: span=0-47 level=NUMA
[ 0.190832] groups: 24:{ span=24-47 cap=24576 }, 0:{ span=0-23 cap=24575 }
[ 0.190838] domain-2: span=0-71 level=NUMA
[ 0.190840] groups: 24:{ span=0-47 mask=24-47 cap=49152 }, 48:{ span=48-71 cap=24576 }
[ 0.190847] domain-3: span=0-95 level=NUMA
[ 0.190849] groups: 24:{ span=0-71 mask=24-47 cap=73728 }, 72:{ span=0-23,48-95 mask=72-95 cap=73728 }
[ 0.190859] CPU42 attaching sched-domain(s):
[ 0.190860] domain-0: span=24-47 level=MC
[ 0.190862] groups: 42:{ span=42 }, 43:{ span=43 }, 44:{ span=44 }, 45:{ span=45 }, 46:{ span=46 }, 47:{ span=47 }, 24:{ span=24 }, 25:{ span=25 }, 26:{ span=26 }, 27:{ span=27 }, 28:{ span=28 }, 29:{ span=29 }, 30:{ span=30 }, 31:{ span=31 }, 32:{ span=32 }, 33:{ span=33 }, 34:{ span=34 }, 35:{ span=35 }, 36:{ span=36 }, 37:{ span=37 }, 38:{ span=38 }, 39:{ span=39 }, 40:{ span=40 }, 41:{ span=41 }
[ 0.190903] domain-1: span=0-47 level=NUMA
[ 0.190905] groups: 24:{ span=24-47 cap=24576 }, 0:{ span=0-23 cap=24575 }
[ 0.190911] domain-2: span=0-71 level=NUMA
[ 0.190913] groups: 24:{ span=0-47 mask=24-47 cap=49152 }, 48:{ span=48-71 cap=24576 }
[ 0.190920] domain-3: span=0-95 level=NUMA
[ 0.190922] groups: 24:{ span=0-71 mask=24-47 cap=73728 }, 72:{ span=0-23,48-95 mask=72-95 cap=73728 }
[ 0.190932] CPU43 attaching sched-domain(s):
[ 0.190933] domain-0: span=24-47 level=MC
[ 0.190935] groups: 43:{ span=43 }, 44:{ span=44 }, 45:{ span=45 }, 46:{ span=46 }, 47:{ span=47 }, 24:{ span=24 }, 25:{ span=25 }, 26:{ span=26 }, 27:{ span=27 }, 28:{ span=28 }, 29:{ span=29 }, 30:{ span=30 }, 31:{ span=31 }, 32:{ span=32 }, 33:{ span=33 }, 34:{ span=34 }, 35:{ span=35 }, 36:{ span=36 }, 37:{ span=37 }, 38:{ span=38 }, 39:{ span=39 }, 40:{ span=40 }, 41:{ span=41 }, 42:{ span=42 }
[ 0.190976] domain-1: span=0-47 level=NUMA
[ 0.190977] groups: 24:{ span=24-47 cap=24576 }, 0:{ span=0-23 cap=24575 }
[ 0.190983] domain-2: span=0-71 level=NUMA
[ 0.190985] groups: 24:{ span=0-47 mask=24-47 cap=49152 }, 48:{ span=48-71 cap=24576 }
[ 0.190992] domain-3: span=0-95 level=NUMA
[ 0.190994] groups: 24:{ span=0-71 mask=24-47 cap=73728 }, 72:{ span=0-23,48-95 mask=72-95 cap=73728 }
[ 0.191004] CPU44 attaching sched-domain(s):
[ 0.191005] domain-0: span=24-47 level=MC
[ 0.191007] groups: 44:{ span=44 }, 45:{ span=45 }, 46:{ span=46 }, 47:{ span=47 }, 24:{ span=24 }, 25:{ span=25 }, 26:{ span=26 }, 27:{ span=27 }, 28:{ span=28 }, 29:{ span=29 }, 30:{ span=30 }, 31:{ span=31 }, 32:{ span=32 }, 33:{ span=33 }, 34:{ span=34 }, 35:{ span=35 }, 36:{ span=36 }, 37:{ span=37 }, 38:{ span=38 }, 39:{ span=39 }, 40:{ span=40 }, 41:{ span=41 }, 42:{ span=42 }, 43:{ span=43 }
[ 0.191047] domain-1: span=0-47 level=NUMA
[ 0.191049] groups: 24:{ span=24-47 cap=24576 }, 0:{ span=0-23 cap=24575 }
[ 0.191055] domain-2: span=0-71 level=NUMA
[ 0.191057] groups: 24:{ span=0-47 mask=24-47 cap=49152 }, 48:{ span=48-71 cap=24576 }
[ 0.191064] domain-3: span=0-95 level=NUMA
[ 0.191066] groups: 24:{ span=0-71 mask=24-47 cap=73728 }, 72:{ span=0-23,48-95 mask=72-95 cap=73728 }
[ 0.191076] CPU45 attaching sched-domain(s):
[ 0.191077] domain-0: span=24-47 level=MC
[ 0.191078] groups: 45:{ span=45 }, 46:{ span=46 }, 47:{ span=47 }, 24:{ span=24 }, 25:{ span=25 }, 26:{ span=26 }, 27:{ span=27 }, 28:{ span=28 }, 29:{ span=29 }, 30:{ span=30 }, 31:{ span=31 }, 32:{ span=32 }, 33:{ span=33 }, 34:{ span=34 }, 35:{ span=35 }, 36:{ span=36 }, 37:{ span=37 }, 38:{ span=38 }, 39:{ span=39 }, 40:{ span=40 }, 41:{ span=41 }, 42:{ span=42 }, 43:{ span=43 }, 44:{ span=44 }
[ 0.191120] domain-1: span=0-47 level=NUMA
[ 0.191122] groups: 24:{ span=24-47 cap=24576 }, 0:{ span=0-23 cap=24575 }
[ 0.191127] domain-2: span=0-71 level=NUMA
[ 0.191130] groups: 24:{ span=0-47 mask=24-47 cap=49152 }, 48:{ span=48-71 cap=24576 }
[ 0.191136] domain-3: span=0-95 level=NUMA
[ 0.191138] groups: 24:{ span=0-71 mask=24-47 cap=73728 }, 72:{ span=0-23,48-95 mask=72-95 cap=73728 }
[ 0.191148] CPU46 attaching sched-domain(s):
[ 0.191149] domain-0: span=24-47 level=MC
[ 0.191151] groups: 46:{ span=46 }, 47:{ span=47 }, 24:{ span=24 }, 25:{ span=25 }, 26:{ span=26 }, 27:{ span=27 }, 28:{ span=28 }, 29:{ span=29 }, 30:{ span=30 }, 31:{ span=31 }, 32:{ span=32 }, 33:{ span=33 }, 34:{ span=34 }, 35:{ span=35 }, 36:{ span=36 }, 37:{ span=37 }, 38:{ span=38 }, 39:{ span=39 }, 40:{ span=40 }, 41:{ span=41 }, 42:{ span=42 }, 43:{ span=43 }, 44:{ span=44 }, 45:{ span=45 }
[ 0.191192] domain-1: span=0-47 level=NUMA
[ 0.191194] groups: 24:{ span=24-47 cap=24576 }, 0:{ span=0-23 cap=24575 }
[ 0.191200] domain-2: span=0-71 level=NUMA
[ 0.191202] groups: 24:{ span=0-47 mask=24-47 cap=49152 }, 48:{ span=48-71 cap=24576 }
[ 0.191209] domain-3: span=0-95 level=NUMA
[ 0.191211] groups: 24:{ span=0-71 mask=24-47 cap=73728 }, 72:{ span=0-23,48-95 mask=72-95 cap=73728 }
[ 0.191221] CPU47 attaching sched-domain(s):
[ 0.191222] domain-0: span=24-47 level=MC
[ 0.191224] groups: 47:{ span=47 }, 24:{ span=24 }, 25:{ span=25 }, 26:{ span=26 }, 27:{ span=27 }, 28:{ span=28 }, 29:{ span=29 }, 30:{ span=30 }, 31:{ span=31 }, 32:{ span=32 }, 33:{ span=33 }, 34:{ span=34 }, 35:{ span=35 }, 36:{ span=36 }, 37:{ span=37 }, 38:{ span=38 }, 39:{ span=39 }, 40:{ span=40 }, 41:{ span=41 }, 42:{ span=42 }, 43:{ span=43 }, 44:{ span=44 }, 45:{ span=45 }, 46:{ span=46 }
[ 0.191264] domain-1: span=0-47 level=NUMA
[ 0.191266] groups: 24:{ span=24-47 cap=24576 }, 0:{ span=0-23 cap=24575 }
[ 0.191272] domain-2: span=0-71 level=NUMA
[ 0.191274] groups: 24:{ span=0-47 mask=24-47 cap=49152 }, 48:{ span=48-71 cap=24576 }
[ 0.191281] domain-3: span=0-95 level=NUMA
[ 0.191283] groups: 24:{ span=0-71 mask=24-47 cap=73728 }, 72:{ span=0-23,48-95 mask=72-95 cap=73728 }
[ 0.191294] CPU48 attaching sched-domain(s):
[ 0.191295] domain-0: span=48-71 level=MC
[ 0.191296] groups: 48:{ span=48 }, 49:{ span=49 }, 50:{ span=50 }, 51:{ span=51 }, 52:{ span=52 }, 53:{ span=53 }, 54:{ span=54 }, 55:{ span=55 }, 56:{ span=56 }, 57:{ span=57 }, 58:{ span=58 }, 59:{ span=59 }, 60:{ span=60 }, 61:{ span=61 }, 62:{ span=62 }, 63:{ span=63 }, 64:{ span=64 }, 65:{ span=65 }, 66:{ span=66 }, 67:{ span=67 }, 68:{ span=68 }, 69:{ span=69 }, 70:{ span=70 }, 71:{ span=71 }
[ 0.191339] domain-1: span=48-95 level=NUMA
[ 0.191340] groups: 48:{ span=48-71 cap=24576 }, 72:{ span=72-95 cap=24576 }
[ 0.191346] domain-2: span=0-23,48-95 level=NUMA
[ 0.191348] groups: 48:{ span=48-95 cap=49152 }, 0:{ span=0-23 cap=24575 }
[ 0.191354] domain-3: span=0-95 level=NUMA
[ 0.191356] groups: 48:{ span=0-23,48-95 mask=48-71 cap=73728 }, 24:{ span=0-47 mask=24-47 cap=49152 }
[ 0.191368] CPU49 attaching sched-domain(s):
[ 0.191369] domain-0: span=48-71 level=MC
[ 0.191370] groups: 49:{ span=49 }, 50:{ span=50 }, 51:{ span=51 }, 52:{ span=52 }, 53:{ span=53 }, 54:{ span=54 }, 55:{ span=55 }, 56:{ span=56 }, 57:{ span=57 }, 58:{ span=58 }, 59:{ span=59 }, 60:{ span=60 }, 61:{ span=61 }, 62:{ span=62 }, 63:{ span=63 }, 64:{ span=64 }, 65:{ span=65 }, 66:{ span=66 }, 67:{ span=67 }, 68:{ span=68 }, 69:{ span=69 }, 70:{ span=70 }, 71:{ span=71 }, 48:{ span=48 }
[ 0.191412] domain-1: span=48-95 level=NUMA
[ 0.191413] groups: 48:{ span=48-71 cap=24576 }, 72:{ span=72-95 cap=24576 }
[ 0.191419] domain-2: span=0-23,48-95 level=NUMA
[ 0.191421] groups: 48:{ span=48-95 cap=49152 }, 0:{ span=0-23 cap=24575 }
[ 0.191427] domain-3: span=0-95 level=NUMA
[ 0.191430] groups: 48:{ span=0-23,48-95 mask=48-71 cap=73728 }, 24:{ span=0-47 mask=24-47 cap=49152 }
[ 0.191441] CPU50 attaching sched-domain(s):
[ 0.191442] domain-0: span=48-71 level=MC
[ 0.191444] groups: 50:{ span=50 }, 51:{ span=51 }, 52:{ span=52 }, 53:{ span=53 }, 54:{ span=54 }, 55:{ span=55 }, 56:{ span=56 }, 57:{ span=57 }, 58:{ span=58 }, 59:{ span=59 }, 60:{ span=60 }, 61:{ span=61 }, 62:{ span=62 }, 63:{ span=63 }, 64:{ span=64 }, 65:{ span=65 }, 66:{ span=66 }, 67:{ span=67 }, 68:{ span=68 }, 69:{ span=69 }, 70:{ span=70 }, 71:{ span=71 }, 48:{ span=48 }, 49:{ span=49 }
[ 0.191485] domain-1: span=48-95 level=NUMA
[ 0.191487] groups: 48:{ span=48-71 cap=24576 }, 72:{ span=72-95 cap=24576 }
[ 0.191493] domain-2: span=0-23,48-95 level=NUMA
[ 0.191495] groups: 48:{ span=48-95 cap=49152 }, 0:{ span=0-23 cap=24575 }
[ 0.191501] domain-3: span=0-95 level=NUMA
[ 0.191503] groups: 48:{ span=0-23,48-95 mask=48-71 cap=73728 }, 24:{ span=0-47 mask=24-47 cap=49152 }
[ 0.191515] CPU51 attaching sched-domain(s):
[ 0.191515] domain-0: span=48-71 level=MC
[ 0.191517] groups: 51:{ span=51 }, 52:{ span=52 }, 53:{ span=53 }, 54:{ span=54 }, 55:{ span=55 }, 56:{ span=56 }, 57:{ span=57 }, 58:{ span=58 }, 59:{ span=59 }, 60:{ span=60 }, 61:{ span=61 }, 62:{ span=62 }, 63:{ span=63 }, 64:{ span=64 }, 65:{ span=65 }, 66:{ span=66 }, 67:{ span=67 }, 68:{ span=68 }, 69:{ span=69 }, 70:{ span=70 }, 71:{ span=71 }, 48:{ span=48 }, 49:{ span=49 }, 50:{ span=50 }
[ 0.191559] domain-1: span=48-95 level=NUMA
[ 0.191560] groups: 48:{ span=48-71 cap=24576 }, 72:{ span=72-95 cap=24576 }
[ 0.191566] domain-2: span=0-23,48-95 level=NUMA
[ 0.191568] groups: 48:{ span=48-95 cap=49152 }, 0:{ span=0-23 cap=24575 }
[ 0.191574] domain-3: span=0-95 level=NUMA
[ 0.191576] groups: 48:{ span=0-23,48-95 mask=48-71 cap=73728 }, 24:{ span=0-47 mask=24-47 cap=49152 }
[ 0.191588] CPU52 attaching sched-domain(s):
[ 0.191589] domain-0: span=48-71 level=MC
[ 0.191590] groups: 52:{ span=52 }, 53:{ span=53 }, 54:{ span=54 }, 55:{ span=55 }, 56:{ span=56 }, 57:{ span=57 }, 58:{ span=58 }, 59:{ span=59 }, 60:{ span=60 }, 61:{ span=61 }, 62:{ span=62 }, 63:{ span=63 }, 64:{ span=64 }, 65:{ span=65 }, 66:{ span=66 }, 67:{ span=67 }, 68:{ span=68 }, 69:{ span=69 }, 70:{ span=70 }, 71:{ span=71 }, 48:{ span=48 }, 49:{ span=49 }, 50:{ span=50 }, 51:{ span=51 }
[ 0.191631] domain-1: span=48-95 level=NUMA
[ 0.191633] groups: 48:{ span=48-71 cap=24576 }, 72:{ span=72-95 cap=24576 }
[ 0.191638] domain-2: span=0-23,48-95 level=NUMA
[ 0.191640] groups: 48:{ span=48-95 cap=49152 }, 0:{ span=0-23 cap=24575 }
[ 0.191646] domain-3: span=0-95 level=NUMA
[ 0.191649] groups: 48:{ span=0-23,48-95 mask=48-71 cap=73728 }, 24:{ span=0-47 mask=24-47 cap=49152 }
[ 0.191660] CPU53 attaching sched-domain(s):
[ 0.191661] domain-0: span=48-71 level=MC
[ 0.191662] groups: 53:{ span=53 }, 54:{ span=54 }, 55:{ span=55 }, 56:{ span=56 }, 57:{ span=57 }, 58:{ span=58 }, 59:{ span=59 }, 60:{ span=60 }, 61:{ span=61 }, 62:{ span=62 }, 63:{ span=63 }, 64:{ span=64 }, 65:{ span=65 }, 66:{ span=66 }, 67:{ span=67 }, 68:{ span=68 }, 69:{ span=69 }, 70:{ span=70 }, 71:{ span=71 }, 48:{ span=48 }, 49:{ span=49 }, 50:{ span=50 }, 51:{ span=51 }, 52:{ span=52 }
[ 0.191705] domain-1: span=48-95 level=NUMA
[ 0.191706] groups: 48:{ span=48-71 cap=24576 }, 72:{ span=72-95 cap=24576 }
[ 0.191713] domain-2: span=0-23,48-95 level=NUMA
[ 0.191715] groups: 48:{ span=48-95 cap=49152 }, 0:{ span=0-23 cap=24575 }
[ 0.191722] domain-3: span=0-95 level=NUMA
[ 0.191724] groups: 48:{ span=0-23,48-95 mask=48-71 cap=73728 }, 24:{ span=0-47 mask=24-47 cap=49152 }
[ 0.191735] CPU54 attaching sched-domain(s):
[ 0.191736] domain-0: span=48-71 level=MC
[ 0.191738] groups: 54:{ span=54 }, 55:{ span=55 }, 56:{ span=56 }, 57:{ span=57 }, 58:{ span=58 }, 59:{ span=59 }, 60:{ span=60 }, 61:{ span=61 }, 62:{ span=62 }, 63:{ span=63 }, 64:{ span=64 }, 65:{ span=65 }, 66:{ span=66 }, 67:{ span=67 }, 68:{ span=68 }, 69:{ span=69 }, 70:{ span=70 }, 71:{ span=71 }, 48:{ span=48 }, 49:{ span=49 }, 50:{ span=50 }, 51:{ span=51 }, 52:{ span=52 }, 53:{ span=53 }
[ 0.191780] domain-1: span=48-95 level=NUMA
[ 0.191781] groups: 48:{ span=48-71 cap=24576 }, 72:{ span=72-95 cap=24576 }
[ 0.191787] domain-2: span=0-23,48-95 level=NUMA
[ 0.191790] groups: 48:{ span=48-95 cap=49152 }, 0:{ span=0-23 cap=24575 }
[ 0.191795] domain-3: span=0-95 level=NUMA
[ 0.191798] groups: 48:{ span=0-23,48-95 mask=48-71 cap=73728 }, 24:{ span=0-47 mask=24-47 cap=49152 }
[ 0.191809] CPU55 attaching sched-domain(s):
[ 0.191810] domain-0: span=48-71 level=MC
[ 0.191811] groups: 55:{ span=55 }, 56:{ span=56 }, 57:{ span=57 }, 58:{ span=58 }, 59:{ span=59 }, 60:{ span=60 }, 61:{ span=61 }, 62:{ span=62 }, 63:{ span=63 }, 64:{ span=64 }, 65:{ span=65 }, 66:{ span=66 }, 67:{ span=67 }, 68:{ span=68 }, 69:{ span=69 }, 70:{ span=70 }, 71:{ span=71 }, 48:{ span=48 }, 49:{ span=49 }, 50:{ span=50 }, 51:{ span=51 }, 52:{ span=52 }, 53:{ span=53 }, 54:{ span=54 }
[ 0.191852] domain-1: span=48-95 level=NUMA
[ 0.191854] groups: 48:{ span=48-71 cap=24576 }, 72:{ span=72-95 cap=24576 }
[ 0.191860] domain-2: span=0-23,48-95 level=NUMA
[ 0.191862] groups: 48:{ span=48-95 cap=49152 }, 0:{ span=0-23 cap=24575 }
[ 0.191868] domain-3: span=0-95 level=NUMA
[ 0.191870] groups: 48:{ span=0-23,48-95 mask=48-71 cap=73728 }, 24:{ span=0-47 mask=24-47 cap=49152 }
[ 0.191882] CPU56 attaching sched-domain(s):
[ 0.191882] domain-0: span=48-71 level=MC
[ 0.191884] groups: 56:{ span=56 }, 57:{ span=57 }, 58:{ span=58 }, 59:{ span=59 }, 60:{ span=60 }, 61:{ span=61 }, 62:{ span=62 }, 63:{ span=63 }, 64:{ span=64 }, 65:{ span=65 }, 66:{ span=66 }, 67:{ span=67 }, 68:{ span=68 }, 69:{ span=69 }, 70:{ span=70 }, 71:{ span=71 }, 48:{ span=48 }, 49:{ span=49 }, 50:{ span=50 }, 51:{ span=51 }, 52:{ span=52 }, 53:{ span=53 }, 54:{ span=54 }, 55:{ span=55 }
[ 0.191925] domain-1: span=48-95 level=NUMA
[ 0.191927] groups: 48:{ span=48-71 cap=24576 }, 72:{ span=72-95 cap=24576 }
[ 0.191933] domain-2: span=0-23,48-95 level=NUMA
[ 0.191935] groups: 48:{ span=48-95 cap=49152 }, 0:{ span=0-23 cap=24575 }
[ 0.191941] domain-3: span=0-95 level=NUMA
[ 0.191943] groups: 48:{ span=0-23,48-95 mask=48-71 cap=73728 }, 24:{ span=0-47 mask=24-47 cap=49152 }
[ 0.191954] CPU57 attaching sched-domain(s):
[ 0.191955] domain-0: span=48-71 level=MC
[ 0.191957] groups: 57:{ span=57 }, 58:{ span=58 }, 59:{ span=59 }, 60:{ span=60 }, 61:{ span=61 }, 62:{ span=62 }, 63:{ span=63 }, 64:{ span=64 }, 65:{ span=65 }, 66:{ span=66 }, 67:{ span=67 }, 68:{ span=68 }, 69:{ span=69 }, 70:{ span=70 }, 71:{ span=71 }, 48:{ span=48 }, 49:{ span=49 }, 50:{ span=50 }, 51:{ span=51 }, 52:{ span=52 }, 53:{ span=53 }, 54:{ span=54 }, 55:{ span=55 }, 56:{ span=56 }
[ 0.191998] domain-1: span=48-95 level=NUMA
[ 0.192000] groups: 48:{ span=48-71 cap=24576 }, 72:{ span=72-95 cap=24576 }
[ 0.192006] domain-2: span=0-23,48-95 level=NUMA
[ 0.192008] groups: 48:{ span=48-95 cap=49152 }, 0:{ span=0-23 cap=24575 }
[ 0.192014] domain-3: span=0-95 level=NUMA
[ 0.192016] groups: 48:{ span=0-23,48-95 mask=48-71 cap=73728 }, 24:{ span=0-47 mask=24-47 cap=49152 }
[ 0.192028] CPU58 attaching sched-domain(s):
[ 0.192028] domain-0: span=48-71 level=MC
[ 0.192030] groups: 58:{ span=58 }, 59:{ span=59 }, 60:{ span=60 }, 61:{ span=61 }, 62:{ span=62 }, 63:{ span=63 }, 64:{ span=64 }, 65:{ span=65 }, 66:{ span=66 }, 67:{ span=67 }, 68:{ span=68 }, 69:{ span=69 }, 70:{ span=70 }, 71:{ span=71 }, 48:{ span=48 }, 49:{ span=49 }, 50:{ span=50 }, 51:{ span=51 }, 52:{ span=52 }, 53:{ span=53 }, 54:{ span=54 }, 55:{ span=55 }, 56:{ span=56 }, 57:{ span=57 }
[ 0.192071] domain-1: span=48-95 level=NUMA
[ 0.192073] groups: 48:{ span=48-71 cap=24576 }, 72:{ span=72-95 cap=24576 }
[ 0.192079] domain-2: span=0-23,48-95 level=NUMA
[ 0.192081] groups: 48:{ span=48-95 cap=49152 }, 0:{ span=0-23 cap=24575 }
[ 0.192088] domain-3: span=0-95 level=NUMA
[ 0.192090] groups: 48:{ span=0-23,48-95 mask=48-71 cap=73728 }, 24:{ span=0-47 mask=24-47 cap=49152 }
[ 0.192101] CPU59 attaching sched-domain(s):
[ 0.192102] domain-0: span=48-71 level=MC
[ 0.192104] groups: 59:{ span=59 }, 60:{ span=60 }, 61:{ span=61 }, 62:{ span=62 }, 63:{ span=63 }, 64:{ span=64 }, 65:{ span=65 }, 66:{ span=66 }, 67:{ span=67 }, 68:{ span=68 }, 69:{ span=69 }, 70:{ span=70 }, 71:{ span=71 }, 48:{ span=48 }, 49:{ span=49 }, 50:{ span=50 }, 51:{ span=51 }, 52:{ span=52 }, 53:{ span=53 }, 54:{ span=54 }, 55:{ span=55 }, 56:{ span=56 }, 57:{ span=57 }, 58:{ span=58 }
[ 0.192145] domain-1: span=48-95 level=NUMA
[ 0.192147] groups: 48:{ span=48-71 cap=24576 }, 72:{ span=72-95 cap=24576 }
[ 0.192153] domain-2: span=0-23,48-95 level=NUMA
[ 0.192155] groups: 48:{ span=48-95 cap=49152 }, 0:{ span=0-23 cap=24575 }
[ 0.192161] domain-3: span=0-95 level=NUMA
[ 0.192163] groups: 48:{ span=0-23,48-95 mask=48-71 cap=73728 }, 24:{ span=0-47 mask=24-47 cap=49152 }
[ 0.192175] CPU60 attaching sched-domain(s):
[ 0.192176] domain-0: span=48-71 level=MC
[ 0.192177] groups: 60:{ span=60 }, 61:{ span=61 }, 62:{ span=62 }, 63:{ span=63 }, 64:{ span=64 }, 65:{ span=65 }, 66:{ span=66 }, 67:{ span=67 }, 68:{ span=68 }, 69:{ span=69 }, 70:{ span=70 }, 71:{ span=71 }, 48:{ span=48 }, 49:{ span=49 }, 50:{ span=50 }, 51:{ span=51 }, 52:{ span=52 }, 53:{ span=53 }, 54:{ span=54 }, 55:{ span=55 }, 56:{ span=56 }, 57:{ span=57 }, 58:{ span=58 }, 59:{ span=59 }
[ 0.192218] domain-1: span=48-95 level=NUMA
[ 0.192219] groups: 48:{ span=48-71 cap=24576 }, 72:{ span=72-95 cap=24576 }
[ 0.192225] domain-2: span=0-23,48-95 level=NUMA
[ 0.192227] groups: 48:{ span=48-95 cap=49152 }, 0:{ span=0-23 cap=24575 }
[ 0.192233] domain-3: span=0-95 level=NUMA
[ 0.192236] groups: 48:{ span=0-23,48-95 mask=48-71 cap=73728 }, 24:{ span=0-47 mask=24-47 cap=49152 }
[ 0.192247] CPU61 attaching sched-domain(s):
[ 0.192248] domain-0: span=48-71 level=MC
[ 0.192249] groups: 61:{ span=61 }, 62:{ span=62 }, 63:{ span=63 }, 64:{ span=64 }, 65:{ span=65 }, 66:{ span=66 }, 67:{ span=67 }, 68:{ span=68 }, 69:{ span=69 }, 70:{ span=70 }, 71:{ span=71 }, 48:{ span=48 }, 49:{ span=49 }, 50:{ span=50 }, 51:{ span=51 }, 52:{ span=52 }, 53:{ span=53 }, 54:{ span=54 }, 55:{ span=55 }, 56:{ span=56 }, 57:{ span=57 }, 58:{ span=58 }, 59:{ span=59 }, 60:{ span=60 }
[ 0.192291] domain-1: span=48-95 level=NUMA
[ 0.192293] groups: 48:{ span=48-71 cap=24576 }, 72:{ span=72-95 cap=24576 }
[ 0.192299] domain-2: span=0-23,48-95 level=NUMA
[ 0.192301] groups: 48:{ span=48-95 cap=49152 }, 0:{ span=0-23 cap=24575 }
[ 0.192307] domain-3: span=0-95 level=NUMA
[ 0.192309] groups: 48:{ span=0-23,48-95 mask=48-71 cap=73728 }, 24:{ span=0-47 mask=24-47 cap=49152 }
[ 0.192320] CPU62 attaching sched-domain(s):
[ 0.192321] domain-0: span=48-71 level=MC
[ 0.192323] groups: 62:{ span=62 }, 63:{ span=63 }, 64:{ span=64 }, 65:{ span=65 }, 66:{ span=66 }, 67:{ span=67 }, 68:{ span=68 }, 69:{ span=69 }, 70:{ span=70 }, 71:{ span=71 }, 48:{ span=48 }, 49:{ span=49 }, 50:{ span=50 }, 51:{ span=51 }, 52:{ span=52 }, 53:{ span=53 }, 54:{ span=54 }, 55:{ span=55 }, 56:{ span=56 }, 57:{ span=57 }, 58:{ span=58 }, 59:{ span=59 }, 60:{ span=60 }, 61:{ span=61 }
[ 0.192364] domain-1: span=48-95 level=NUMA
[ 0.192366] groups: 48:{ span=48-71 cap=24576 }, 72:{ span=72-95 cap=24576 }
[ 0.192371] domain-2: span=0-23,48-95 level=NUMA
[ 0.192373] groups: 48:{ span=48-95 cap=49152 }, 0:{ span=0-23 cap=24575 }
[ 0.192379] domain-3: span=0-95 level=NUMA
[ 0.192382] groups: 48:{ span=0-23,48-95 mask=48-71 cap=73728 }, 24:{ span=0-47 mask=24-47 cap=49152 }
[ 0.192392] CPU63 attaching sched-domain(s):
[ 0.192393] domain-0: span=48-71 level=MC
[ 0.192395] groups: 63:{ span=63 }, 64:{ span=64 }, 65:{ span=65 }, 66:{ span=66 }, 67:{ span=67 }, 68:{ span=68 }, 69:{ span=69 }, 70:{ span=70 }, 71:{ span=71 }, 48:{ span=48 }, 49:{ span=49 }, 50:{ span=50 }, 51:{ span=51 }, 52:{ span=52 }, 53:{ span=53 }, 54:{ span=54 }, 55:{ span=55 }, 56:{ span=56 }, 57:{ span=57 }, 58:{ span=58 }, 59:{ span=59 }, 60:{ span=60 }, 61:{ span=61 }, 62:{ span=62 }
[ 0.192437] domain-1: span=48-95 level=NUMA
[ 0.192439] groups: 48:{ span=48-71 cap=24576 }, 72:{ span=72-95 cap=24576 }
[ 0.192444] domain-2: span=0-23,48-95 level=NUMA
[ 0.192446] groups: 48:{ span=48-95 cap=49152 }, 0:{ span=0-23 cap=24575 }
[ 0.192452] domain-3: span=0-95 level=NUMA
[ 0.192455] groups: 48:{ span=0-23,48-95 mask=48-71 cap=73728 }, 24:{ span=0-47 mask=24-47 cap=49152 }
[ 0.192466] CPU64 attaching sched-domain(s):
[ 0.192467] domain-0: span=48-71 level=MC
[ 0.192468] groups: 64:{ span=64 }, 65:{ span=65 }, 66:{ span=66 }, 67:{ span=67 }, 68:{ span=68 }, 69:{ span=69 }, 70:{ span=70 }, 71:{ span=71 }, 48:{ span=48 }, 49:{ span=49 }, 50:{ span=50 }, 51:{ span=51 }, 52:{ span=52 }, 53:{ span=53 }, 54:{ span=54 }, 55:{ span=55 }, 56:{ span=56 }, 57:{ span=57 }, 58:{ span=58 }, 59:{ span=59 }, 60:{ span=60 }, 61:{ span=61 }, 62:{ span=62 }, 63:{ span=63 }
[ 0.192509] domain-1: span=48-95 level=NUMA
[ 0.192511] groups: 48:{ span=48-71 cap=24576 }, 72:{ span=72-95 cap=24576 }
[ 0.192517] domain-2: span=0-23,48-95 level=NUMA
[ 0.192519] groups: 48:{ span=48-95 cap=49152 }, 0:{ span=0-23 cap=24575 }
[ 0.192525] domain-3: span=0-95 level=NUMA
[ 0.192527] groups: 48:{ span=0-23,48-95 mask=48-71 cap=73728 }, 24:{ span=0-47 mask=24-47 cap=49152 }
[ 0.192538] CPU65 attaching sched-domain(s):
[ 0.192539] domain-0: span=48-71 level=MC
[ 0.192541] groups: 65:{ span=65 }, 66:{ span=66 }, 67:{ span=67 }, 68:{ span=68 }, 69:{ span=69 }, 70:{ span=70 }, 71:{ span=71 }, 48:{ span=48 }, 49:{ span=49 }, 50:{ span=50 }, 51:{ span=51 }, 52:{ span=52 }, 53:{ span=53 }, 54:{ span=54 }, 55:{ span=55 }, 56:{ span=56 }, 57:{ span=57 }, 58:{ span=58 }, 59:{ span=59 }, 60:{ span=60 }, 61:{ span=61 }, 62:{ span=62 }, 63:{ span=63 }, 64:{ span=64 }
[ 0.192584] domain-1: span=48-95 level=NUMA
[ 0.192586] groups: 48:{ span=48-71 cap=24576 }, 72:{ span=72-95 cap=24576 }
[ 0.192591] domain-2: span=0-23,48-95 level=NUMA
[ 0.192593] groups: 48:{ span=48-95 cap=49152 }, 0:{ span=0-23 cap=24575 }
[ 0.192599] domain-3: span=0-95 level=NUMA
[ 0.192602] groups: 48:{ span=0-23,48-95 mask=48-71 cap=73728 }, 24:{ span=0-47 mask=24-47 cap=49152 }
[ 0.192613] CPU66 attaching sched-domain(s):
[ 0.192613] domain-0: span=48-71 level=MC
[ 0.192615] groups: 66:{ span=66 }, 67:{ span=67 }, 68:{ span=68 }, 69:{ span=69 }, 70:{ span=70 }, 71:{ span=71 }, 48:{ span=48 }, 49:{ span=49 }, 50:{ span=50 }, 51:{ span=51 }, 52:{ span=52 }, 53:{ span=53 }, 54:{ span=54 }, 55:{ span=55 }, 56:{ span=56 }, 57:{ span=57 }, 58:{ span=58 }, 59:{ span=59 }, 60:{ span=60 }, 61:{ span=61 }, 62:{ span=62 }, 63:{ span=63 }, 64:{ span=64 }, 65:{ span=65 }
[ 0.192656] domain-1: span=48-95 level=NUMA
[ 0.192658] groups: 48:{ span=48-71 cap=24576 }, 72:{ span=72-95 cap=24576 }
[ 0.192664] domain-2: span=0-23,48-95 level=NUMA
[ 0.192666] groups: 48:{ span=48-95 cap=49152 }, 0:{ span=0-23 cap=24575 }
[ 0.192671] domain-3: span=0-95 level=NUMA
[ 0.192674] groups: 48:{ span=0-23,48-95 mask=48-71 cap=73728 }, 24:{ span=0-47 mask=24-47 cap=49152 }
[ 0.192685] CPU67 attaching sched-domain(s):
[ 0.192686] domain-0: span=48-71 level=MC
[ 0.192688] groups: 67:{ span=67 }, 68:{ span=68 }, 69:{ span=69 }, 70:{ span=70 }, 71:{ span=71 }, 48:{ span=48 }, 49:{ span=49 }, 50:{ span=50 }, 51:{ span=51 }, 52:{ span=52 }, 53:{ span=53 }, 54:{ span=54 }, 55:{ span=55 }, 56:{ span=56 }, 57:{ span=57 }, 58:{ span=58 }, 59:{ span=59 }, 60:{ span=60 }, 61:{ span=61 }, 62:{ span=62 }, 63:{ span=63
}, 64:{ span=64 }, 65:{ span=65 }, 66:{ span=66 } [ 0.192729] domain-1: span=48-95 level=NUMA [ 0.192731] groups: 48:{ span=48-71 cap=24576 }, 72:{ span=72-95 cap=24576 } [ 0.192737] domain-2: span=0-23,48-95 level=NUMA [ 0.192739] groups: 48:{ span=48-95 cap=49152 }, 0:{ span=0-23 cap=24575 } [ 0.192745] domain-3: span=0-95 level=NUMA [ 0.192747] groups: 48:{ span=0-23,48-95 mask=48-71 cap=73728 }, 24:{ span=0-47 mask=24-47 cap=49152 } [ 0.192759] CPU68 attaching sched-domain(s): [ 0.192759] domain-0: span=48-71 level=MC [ 0.192761] groups: 68:{ span=68 }, 69:{ span=69 }, 70:{ span=70 }, 71:{ span=71 }, 48:{ span=48 }, 49:{ span=49 }, 50:{ span=50 }, 51:{ span=51 }, 52:{ span=52 }, 53:{ span=53 }, 54:{ span=54 }, 55:{ span=55 }, 56:{ span=56 }, 57:{ span=57 }, 58:{ span=58 }, 59:{ span=59 }, 60:{ span=60 }, 61:{ span=61 }, 62:{ span=62 }, 63:{ span=63 }, 64:{ span=64 }, 65:{ span=65 }, 66:{ span=66 }, 67:{ span=67 } [ 0.192801] domain-1: span=48-95 level=NUMA [ 0.192803] groups: 48:{ span=48-71 cap=24576 }, 72:{ span=72-95 cap=24576 } [ 0.192809] domain-2: span=0-23,48-95 level=NUMA [ 0.192811] groups: 48:{ span=48-95 cap=49152 }, 0:{ span=0-23 cap=24575 } [ 0.192817] domain-3: span=0-95 level=NUMA [ 0.192819] groups: 48:{ span=0-23,48-95 mask=48-71 cap=73728 }, 24:{ span=0-47 mask=24-47 cap=49152 } [ 0.192830] CPU69 attaching sched-domain(s): [ 0.192831] domain-0: span=48-71 level=MC [ 0.192833] groups: 69:{ span=69 }, 70:{ span=70 }, 71:{ span=71 }, 48:{ span=48 }, 49:{ span=49 }, 50:{ span=50 }, 51:{ span=51 }, 52:{ span=52 }, 53:{ span=53 }, 54:{ span=54 }, 55:{ span=55 }, 56:{ span=56 }, 57:{ span=57 }, 58:{ span=58 }, 59:{ span=59 }, 60:{ span=60 }, 61:{ span=61 }, 62:{ span=62 }, 63:{ span=63 }, 64:{ span=64 }, 65:{ span=65 }, 66:{ span=66 }, 67:{ span=67 }, 68:{ span=68 } [ 0.192874] domain-1: span=48-95 level=NUMA [ 0.192876] groups: 48:{ span=48-71 cap=24576 }, 72:{ span=72-95 cap=24576 } [ 0.192882] domain-2: span=0-23,48-95 level=NUMA [ 0.192884] 
groups: 48:{ span=48-95 cap=49152 }, 0:{ span=0-23 cap=24575 } [ 0.192890] domain-3: span=0-95 level=NUMA [ 0.192892] groups: 48:{ span=0-23,48-95 mask=48-71 cap=73728 }, 24:{ span=0-47 mask=24-47 cap=49152 } [ 0.192903] CPU70 attaching sched-domain(s): [ 0.192904] domain-0: span=48-71 level=MC [ 0.192906] groups: 70:{ span=70 }, 71:{ span=71 }, 48:{ span=48 }, 49:{ span=49 }, 50:{ span=50 }, 51:{ span=51 }, 52:{ span=52 }, 53:{ span=53 }, 54:{ span=54 }, 55:{ span=55 }, 56:{ span=56 }, 57:{ span=57 }, 58:{ span=58 }, 59:{ span=59 }, 60:{ span=60 }, 61:{ span=61 }, 62:{ span=62 }, 63:{ span=63 }, 64:{ span=64 }, 65:{ span=65 }, 66:{ span=66 }, 67:{ span=67 }, 68:{ span=68 }, 69:{ span=69 } [ 0.192947] domain-1: span=48-95 level=NUMA [ 0.192949] groups: 48:{ span=48-71 cap=24576 }, 72:{ span=72-95 cap=24576 } [ 0.192954] domain-2: span=0-23,48-95 level=NUMA [ 0.192956] groups: 48:{ span=48-95 cap=49152 }, 0:{ span=0-23 cap=24575 } [ 0.192962] domain-3: span=0-95 level=NUMA [ 0.192965] groups: 48:{ span=0-23,48-95 mask=48-71 cap=73728 }, 24:{ span=0-47 mask=24-47 cap=49152 } [ 0.192976] CPU71 attaching sched-domain(s): [ 0.192977] domain-0: span=48-71 level=MC [ 0.192979] groups: 71:{ span=71 }, 48:{ span=48 }, 49:{ span=49 }, 50:{ span=50 }, 51:{ span=51 }, 52:{ span=52 }, 53:{ span=53 }, 54:{ span=54 }, 55:{ span=55 }, 56:{ span=56 }, 57:{ span=57 }, 58:{ span=58 }, 59:{ span=59 }, 60:{ span=60 }, 61:{ span=61 }, 62:{ span=62 }, 63:{ span=63 }, 64:{ span=64 }, 65:{ span=65 }, 66:{ span=66 }, 67:{ span=67 }, 68:{ span=68 }, 69:{ span=69 }, 70:{ span=70 } [ 0.193020] domain-1: span=48-95 level=NUMA [ 0.193022] groups: 48:{ span=48-71 cap=24576 }, 72:{ span=72-95 cap=24576 } [ 0.193027] domain-2: span=0-23,48-95 level=NUMA [ 0.193029] groups: 48:{ span=48-95 cap=49152 }, 0:{ span=0-23 cap=24575 } [ 0.193035] domain-3: span=0-95 level=NUMA [ 0.193038] groups: 48:{ span=0-23,48-95 mask=48-71 cap=73728 }, 24:{ span=0-47 mask=24-47 cap=49152 } [ 0.193050] CPU72 attaching 
sched-domain(s): [ 0.193051] domain-0: span=72-95 level=MC [ 0.193053] groups: 72:{ span=72 }, 73:{ span=73 }, 74:{ span=74 }, 75:{ span=75 }, 76:{ span=76 }, 77:{ span=77 }, 78:{ span=78 }, 79:{ span=79 }, 80:{ span=80 }, 81:{ span=81 }, 82:{ span=82 }, 83:{ span=83 }, 84:{ span=84 }, 85:{ span=85 }, 86:{ span=86 }, 87:{ span=87 }, 88:{ span=88 }, 89:{ span=89 }, 90:{ span=90 }, 91:{ span=91 }, 92:{ span=92 }, 93:{ span=93 }, 94:{ span=94 }, 95:{ span=95 } [ 0.193095] domain-1: span=48-95 level=NUMA [ 0.193097] groups: 72:{ span=72-95 cap=24576 }, 48:{ span=48-71 cap=24576 } [ 0.193103] domain-2: span=0-23,48-95 level=NUMA [ 0.193105] groups: 72:{ span=48-95 mask=72-95 cap=49152 }, 0:{ span=0-23 cap=24575 } [ 0.193112] domain-3: span=0-95 level=NUMA [ 0.193114] groups: 72:{ span=0-23,48-95 mask=72-95 cap=73728 }, 24:{ span=0-71 mask=24-47 cap=73728 } [ 0.193126] CPU73 attaching sched-domain(s): [ 0.193127] domain-0: span=72-95 level=MC [ 0.193129] groups: 73:{ span=73 }, 74:{ span=74 }, 75:{ span=75 }, 76:{ span=76 }, 77:{ span=77 }, 78:{ span=78 }, 79:{ span=79 }, 80:{ span=80 }, 81:{ span=81 }, 82:{ span=82 }, 83:{ span=83 }, 84:{ span=84 }, 85:{ span=85 }, 86:{ span=86 }, 87:{ span=87 }, 88:{ span=88 }, 89:{ span=89 }, 90:{ span=90 }, 91:{ span=91 }, 92:{ span=92 }, 93:{ span=93 }, 94:{ span=94 }, 95:{ span=95 }, 72:{ span=72 } [ 0.193170] domain-1: span=48-95 level=NUMA [ 0.193172] groups: 72:{ span=72-95 cap=24576 }, 48:{ span=48-71 cap=24576 } [ 0.193178] domain-2: span=0-23,48-95 level=NUMA [ 0.193180] groups: 72:{ span=48-95 mask=72-95 cap=49152 }, 0:{ span=0-23 cap=24575 } [ 0.193187] domain-3: span=0-95 level=NUMA [ 0.193189] groups: 72:{ span=0-23,48-95 mask=72-95 cap=73728 }, 24:{ span=0-71 mask=24-47 cap=73728 } [ 0.193201] CPU74 attaching sched-domain(s): [ 0.193201] domain-0: span=72-95 level=MC [ 0.193203] groups: 74:{ span=74 }, 75:{ span=75 }, 76:{ span=76 }, 77:{ span=77 }, 78:{ span=78 }, 79:{ span=79 }, 80:{ span=80 }, 81:{ span=81 }, 82:{ 
span=82 }, 83:{ span=83 }, 84:{ span=84 }, 85:{ span=85 }, 86:{ span=86 }, 87:{ span=87 }, 88:{ span=88 }, 89:{ span=89 }, 90:{ span=90 }, 91:{ span=91 }, 92:{ span=92 }, 93:{ span=93 }, 94:{ span=94 }, 95:{ span=95 }, 72:{ span=72 }, 73:{ span=73 } [ 0.193244] domain-1: span=48-95 level=NUMA [ 0.193246] groups: 72:{ span=72-95 cap=24576 }, 48:{ span=48-71 cap=24576 } [ 0.193252] domain-2: span=0-23,48-95 level=NUMA [ 0.193254] groups: 72:{ span=48-95 mask=72-95 cap=49152 }, 0:{ span=0-23 cap=24575 } [ 0.193261] domain-3: span=0-95 level=NUMA [ 0.193263] groups: 72:{ span=0-23,48-95 mask=72-95 cap=73728 }, 24:{ span=0-71 mask=24-47 cap=73728 } [ 0.193274] CPU75 attaching sched-domain(s): [ 0.193275] domain-0: span=72-95 level=MC [ 0.193277] groups: 75:{ span=75 }, 76:{ span=76 }, 77:{ span=77 }, 78:{ span=78 }, 79:{ span=79 }, 80:{ span=80 }, 81:{ span=81 }, 82:{ span=82 }, 83:{ span=83 }, 84:{ span=84 }, 85:{ span=85 }, 86:{ span=86 }, 87:{ span=87 }, 88:{ span=88 }, 89:{ span=89 }, 90:{ span=90 }, 91:{ span=91 }, 92:{ span=92 }, 93:{ span=93 }, 94:{ span=94 }, 95:{ span=95 }, 72:{ span=72 }, 73:{ span=73 }, 74:{ span=74 } [ 0.193318] domain-1: span=48-95 level=NUMA [ 0.193320] groups: 72:{ span=72-95 cap=24576 }, 48:{ span=48-71 cap=24576 } [ 0.193326] domain-2: span=0-23,48-95 level=NUMA [ 0.193328] groups: 72:{ span=48-95 mask=72-95 cap=49152 }, 0:{ span=0-23 cap=24575 } [ 0.193335] domain-3: span=0-95 level=NUMA [ 0.193337] groups: 72:{ span=0-23,48-95 mask=72-95 cap=73728 }, 24:{ span=0-71 mask=24-47 cap=73728 } [ 0.193349] CPU76 attaching sched-domain(s): [ 0.193349] domain-0: span=72-95 level=MC [ 0.193351] groups: 76:{ span=76 }, 77:{ span=77 }, 78:{ span=78 }, 79:{ span=79 }, 80:{ span=80 }, 81:{ span=81 }, 82:{ span=82 }, 83:{ span=83 }, 84:{ span=84 }, 85:{ span=85 }, 86:{ span=86 }, 87:{ span=87 }, 88:{ span=88 }, 89:{ span=89 }, 90:{ span=90 }, 91:{ span=91 }, 92:{ span=92 }, 93:{ span=93 }, 94:{ span=94 }, 95:{ span=95 }, 72:{ span=72 }, 73:{ span=73 
}, 74:{ span=74 }, 75:{ span=75 } [ 0.193392] domain-1: span=48-95 level=NUMA [ 0.193393] groups: 72:{ span=72-95 cap=24576 }, 48:{ span=48-71 cap=24576 } [ 0.193399] domain-2: span=0-23,48-95 level=NUMA [ 0.193401] groups: 72:{ span=48-95 mask=72-95 cap=49152 }, 0:{ span=0-23 cap=24575 } [ 0.193408] domain-3: span=0-95 level=NUMA [ 0.193410] groups: 72:{ span=0-23,48-95 mask=72-95 cap=73728 }, 24:{ span=0-71 mask=24-47 cap=73728 } [ 0.193422] CPU77 attaching sched-domain(s): [ 0.193423] domain-0: span=72-95 level=MC [ 0.193424] groups: 77:{ span=77 }, 78:{ span=78 }, 79:{ span=79 }, 80:{ span=80 }, 81:{ span=81 }, 82:{ span=82 }, 83:{ span=83 }, 84:{ span=84 }, 85:{ span=85 }, 86:{ span=86 }, 87:{ span=87 }, 88:{ span=88 }, 89:{ span=89 }, 90:{ span=90 }, 91:{ span=91 }, 92:{ span=92 }, 93:{ span=93 }, 94:{ span=94 }, 95:{ span=95 }, 72:{ span=72 }, 73:{ span=73 }, 74:{ span=74 }, 75:{ span=75 }, 76:{ span=76 } [ 0.193466] domain-1: span=48-95 level=NUMA [ 0.193468] groups: 72:{ span=72-95 cap=24576 }, 48:{ span=48-71 cap=24576 } [ 0.193474] domain-2: span=0-23,48-95 level=NUMA [ 0.193476] groups: 72:{ span=48-95 mask=72-95 cap=49152 }, 0:{ span=0-23 cap=24575 } [ 0.193482] domain-3: span=0-95 level=NUMA [ 0.193484] groups: 72:{ span=0-23,48-95 mask=72-95 cap=73728 }, 24:{ span=0-71 mask=24-47 cap=73728 } [ 0.193496] CPU78 attaching sched-domain(s): [ 0.193497] domain-0: span=72-95 level=MC [ 0.193499] groups: 78:{ span=78 }, 79:{ span=79 }, 80:{ span=80 }, 81:{ span=81 }, 82:{ span=82 }, 83:{ span=83 }, 84:{ span=84 }, 85:{ span=85 }, 86:{ span=86 }, 87:{ span=87 }, 88:{ span=88 }, 89:{ span=89 }, 90:{ span=90 }, 91:{ span=91 }, 92:{ span=92 }, 93:{ span=93 }, 94:{ span=94 }, 95:{ span=95 }, 72:{ span=72 }, 73:{ span=73 }, 74:{ span=74 }, 75:{ span=75 }, 76:{ span=76 }, 77:{ span=77 } [ 0.193541] domain-1: span=48-95 level=NUMA [ 0.193543] groups: 72:{ span=72-95 cap=24576 }, 48:{ span=48-71 cap=24576 } [ 0.193549] domain-2: span=0-23,48-95 level=NUMA [ 0.193551] 
groups: 72:{ span=48-95 mask=72-95 cap=49152 }, 0:{ span=0-23 cap=24575 } [ 0.193558] domain-3: span=0-95 level=NUMA [ 0.193560] groups: 72:{ span=0-23,48-95 mask=72-95 cap=73728 }, 24:{ span=0-71 mask=24-47 cap=73728 } [ 0.193572] CPU79 attaching sched-domain(s): [ 0.193573] domain-0: span=72-95 level=MC [ 0.193574] groups: 79:{ span=79 }, 80:{ span=80 }, 81:{ span=81 }, 82:{ span=82 }, 83:{ span=83 }, 84:{ span=84 }, 85:{ span=85 }, 86:{ span=86 }, 87:{ span=87 }, 88:{ span=88 }, 89:{ span=89 }, 90:{ span=90 }, 91:{ span=91 }, 92:{ span=92 }, 93:{ span=93 }, 94:{ span=94 }, 95:{ span=95 }, 72:{ span=72 }, 73:{ span=73 }, 74:{ span=74 }, 75:{ span=75 }, 76:{ span=76 }, 77:{ span=77 }, 78:{ span=78 } [ 0.193616] domain-1: span=48-95 level=NUMA [ 0.193617] groups: 72:{ span=72-95 cap=24576 }, 48:{ span=48-71 cap=24576 } [ 0.193623] domain-2: span=0-23,48-95 level=NUMA [ 0.193625] groups: 72:{ span=48-95 mask=72-95 cap=49152 }, 0:{ span=0-23 cap=24575 } [ 0.193632] domain-3: span=0-95 level=NUMA [ 0.193634] groups: 72:{ span=0-23,48-95 mask=72-95 cap=73728 }, 24:{ span=0-71 mask=24-47 cap=73728 } [ 0.193646] CPU80 attaching sched-domain(s): [ 0.193647] domain-0: span=72-95 level=MC [ 0.193649] groups: 80:{ span=80 }, 81:{ span=81 }, 82:{ span=82 }, 83:{ span=83 }, 84:{ span=84 }, 85:{ span=85 }, 86:{ span=86 }, 87:{ span=87 }, 88:{ span=88 }, 89:{ span=89 }, 90:{ span=90 }, 91:{ span=91 }, 92:{ span=92 }, 93:{ span=93 }, 94:{ span=94 }, 95:{ span=95 }, 72:{ span=72 }, 73:{ span=73 }, 74:{ span=74 }, 75:{ span=75 }, 76:{ span=76 }, 77:{ span=77 }, 78:{ span=78 }, 79:{ span=79 } [ 0.193690] domain-1: span=48-95 level=NUMA [ 0.193692] groups: 72:{ span=72-95 cap=24576 }, 48:{ span=48-71 cap=24576 } [ 0.193698] domain-2: span=0-23,48-95 level=NUMA [ 0.193700] groups: 72:{ span=48-95 mask=72-95 cap=49152 }, 0:{ span=0-23 cap=24575 } [ 0.193706] domain-3: span=0-95 level=NUMA [ 0.193708] groups: 72:{ span=0-23,48-95 mask=72-95 cap=73728 }, 24:{ span=0-71 mask=24-47 
cap=73728 } [ 0.193720] CPU81 attaching sched-domain(s): [ 0.193721] domain-0: span=72-95 level=MC [ 0.193723] groups: 81:{ span=81 }, 82:{ span=82 }, 83:{ span=83 }, 84:{ span=84 }, 85:{ span=85 }, 86:{ span=86 }, 87:{ span=87 }, 88:{ span=88 }, 89:{ span=89 }, 90:{ span=90 }, 91:{ span=91 }, 92:{ span=92 }, 93:{ span=93 }, 94:{ span=94 }, 95:{ span=95 }, 72:{ span=72 }, 73:{ span=73 }, 74:{ span=74 }, 75:{ span=75 }, 76:{ span=76 }, 77:{ span=77 }, 78:{ span=78 }, 79:{ span=79 }, 80:{ span=80 } [ 0.193765] domain-1: span=48-95 level=NUMA [ 0.193766] groups: 72:{ span=72-95 cap=24576 }, 48:{ span=48-71 cap=24576 } [ 0.193772] domain-2: span=0-23,48-95 level=NUMA [ 0.193775] groups: 72:{ span=48-95 mask=72-95 cap=49152 }, 0:{ span=0-23 cap=24575 } [ 0.193781] domain-3: span=0-95 level=NUMA [ 0.193784] groups: 72:{ span=0-23,48-95 mask=72-95 cap=73728 }, 24:{ span=0-71 mask=24-47 cap=73728 } [ 0.193796] CPU82 attaching sched-domain(s): [ 0.193797] domain-0: span=72-95 level=MC [ 0.193798] groups: 82:{ span=82 }, 83:{ span=83 }, 84:{ span=84 }, 85:{ span=85 }, 86:{ span=86 }, 87:{ span=87 }, 88:{ span=88 }, 89:{ span=89 }, 90:{ span=90 }, 91:{ span=91 }, 92:{ span=92 }, 93:{ span=93 }, 94:{ span=94 }, 95:{ span=95 }, 72:{ span=72 }, 73:{ span=73 }, 74:{ span=74 }, 75:{ span=75 }, 76:{ span=76 }, 77:{ span=77 }, 78:{ span=78 }, 79:{ span=79 }, 80:{ span=80 }, 81:{ span=81 } [ 0.193840] domain-1: span=48-95 level=NUMA [ 0.193842] groups: 72:{ span=72-95 cap=24576 }, 48:{ span=48-71 cap=24576 } [ 0.193847] domain-2: span=0-23,48-95 level=NUMA [ 0.193849] groups: 72:{ span=48-95 mask=72-95 cap=49152 }, 0:{ span=0-23 cap=24575 } [ 0.193856] domain-3: span=0-95 level=NUMA [ 0.193858] groups: 72:{ span=0-23,48-95 mask=72-95 cap=73728 }, 24:{ span=0-71 mask=24-47 cap=73728 } [ 0.193870] CPU83 attaching sched-domain(s): [ 0.193871] domain-0: span=72-95 level=MC [ 0.193872] groups: 83:{ span=83 }, 84:{ span=84 }, 85:{ span=85 }, 86:{ span=86 }, 87:{ span=87 }, 88:{ span=88 }, 
89:{ span=89 }, 90:{ span=90 }, 91:{ span=91 }, 92:{ span=92 }, 93:{ span=93 }, 94:{ span=94 }, 95:{ span=95 }, 72:{ span=72 }, 73:{ span=73 }, 74:{ span=74 }, 75:{ span=75 }, 76:{ span=76 }, 77:{ span=77 }, 78:{ span=78 }, 79:{ span=79 }, 80:{ span=80 }, 81:{ span=81 }, 82:{ span=82 } [ 0.193914] domain-1: span=48-95 level=NUMA [ 0.193915] groups: 72:{ span=72-95 cap=24576 }, 48:{ span=48-71 cap=24576 } [ 0.193921] domain-2: span=0-23,48-95 level=NUMA [ 0.193923] groups: 72:{ span=48-95 mask=72-95 cap=49152 }, 0:{ span=0-23 cap=24575 } [ 0.193930] domain-3: span=0-95 level=NUMA [ 0.193933] groups: 72:{ span=0-23,48-95 mask=72-95 cap=73728 }, 24:{ span=0-71 mask=24-47 cap=73728 } [ 0.193944] CPU84 attaching sched-domain(s): [ 0.193945] domain-0: span=72-95 level=MC [ 0.193947] groups: 84:{ span=84 }, 85:{ span=85 }, 86:{ span=86 }, 87:{ span=87 }, 88:{ span=88 }, 89:{ span=89 }, 90:{ span=90 }, 91:{ span=91 }, 92:{ span=92 }, 93:{ span=93 }, 94:{ span=94 }, 95:{ span=95 }, 72:{ span=72 }, 73:{ span=73 }, 74:{ span=74 }, 75:{ span=75 }, 76:{ span=76 }, 77:{ span=77 }, 78:{ span=78 }, 79:{ span=79 }, 80:{ span=80 }, 81:{ span=81 }, 82:{ span=82 }, 83:{ span=83 } [ 0.193987] domain-1: span=48-95 level=NUMA [ 0.193989] groups: 72:{ span=72-95 cap=24576 }, 48:{ span=48-71 cap=24576 } [ 0.193995] domain-2: span=0-23,48-95 level=NUMA [ 0.193997] groups: 72:{ span=48-95 mask=72-95 cap=49152 }, 0:{ span=0-23 cap=24575 } [ 0.194007] domain-3: span=0-95 level=NUMA [ 0.194009] groups: 72:{ span=0-23,48-95 mask=72-95 cap=73728 }, 24:{ span=0-71 mask=24-47 cap=73728 } [ 0.194021] CPU85 attaching sched-domain(s): [ 0.194022] domain-0: span=72-95 level=MC [ 0.194024] groups: 85:{ span=85 }, 86:{ span=86 }, 87:{ span=87 }, 88:{ span=88 }, 89:{ span=89 }, 90:{ span=90 }, 91:{ span=91 }, 92:{ span=92 }, 93:{ span=93 }, 94:{ span=94 }, 95:{ span=95 }, 72:{ span=72 }, 73:{ span=73 }, 74:{ span=74 }, 75:{ span=75 }, 76:{ span=76 }, 77:{ span=77 }, 78:{ span=78 }, 79:{ span=79 }, 80:{ 
span=80 }, 81:{ span=81 }, 82:{ span=82 }, 83:{ span=83 }, 84:{ span=84 } [ 0.194066] domain-1: span=48-95 level=NUMA [ 0.194068] groups: 72:{ span=72-95 cap=24576 }, 48:{ span=48-71 cap=24576 } [ 0.194074] domain-2: span=0-23,48-95 level=NUMA [ 0.194076] groups: 72:{ span=48-95 mask=72-95 cap=49152 }, 0:{ span=0-23 cap=24575 } [ 0.194082] domain-3: span=0-95 level=NUMA [ 0.194085] groups: 72:{ span=0-23,48-95 mask=72-95 cap=73728 }, 24:{ span=0-71 mask=24-47 cap=73728 } [ 0.194097] CPU86 attaching sched-domain(s): [ 0.194097] domain-0: span=72-95 level=MC [ 0.194099] groups: 86:{ span=86 }, 87:{ span=87 }, 88:{ span=88 }, 89:{ span=89 }, 90:{ span=90 }, 91:{ span=91 }, 92:{ span=92 }, 93:{ span=93 }, 94:{ span=94 }, 95:{ span=95 }, 72:{ span=72 }, 73:{ span=73 }, 74:{ span=74 }, 75:{ span=75 }, 76:{ span=76 }, 77:{ span=77 }, 78:{ span=78 }, 79:{ span=79 }, 80:{ span=80 }, 81:{ span=81 }, 82:{ span=82 }, 83:{ span=83 }, 84:{ span=84 }, 85:{ span=85 } [ 0.194140] domain-1: span=48-95 level=NUMA [ 0.194142] groups: 72:{ span=72-95 cap=24576 }, 48:{ span=48-71 cap=24576 } [ 0.194148] domain-2: span=0-23,48-95 level=NUMA [ 0.194150] groups: 72:{ span=48-95 mask=72-95 cap=49152 }, 0:{ span=0-23 cap=24575 } [ 0.194157] domain-3: span=0-95 level=NUMA [ 0.194159] groups: 72:{ span=0-23,48-95 mask=72-95 cap=73728 }, 24:{ span=0-71 mask=24-47 cap=73728 } [ 0.194170] CPU87 attaching sched-domain(s): [ 0.194171] domain-0: span=72-95 level=MC [ 0.194173] groups: 87:{ span=87 }, 88:{ span=88 }, 89:{ span=89 }, 90:{ span=90 }, 91:{ span=91 }, 92:{ span=92 }, 93:{ span=93 }, 94:{ span=94 }, 95:{ span=95 }, 72:{ span=72 }, 73:{ span=73 }, 74:{ span=74 }, 75:{ span=75 }, 76:{ span=76 }, 77:{ span=77 }, 78:{ span=78 }, 79:{ span=79 }, 80:{ span=80 }, 81:{ span=81 }, 82:{ span=82 }, 83:{ span=83 }, 84:{ span=84 }, 85:{ span=85 }, 86:{ span=86 } [ 0.194214] domain-1: span=48-95 level=NUMA [ 0.194216] groups: 72:{ span=72-95 cap=24576 }, 48:{ span=48-71 cap=24576 } [ 0.194221] 
domain-2: span=0-23,48-95 level=NUMA [ 0.194224] groups: 72:{ span=48-95 mask=72-95 cap=49152 }, 0:{ span=0-23 cap=24575 } [ 0.194230] domain-3: span=0-95 level=NUMA [ 0.194232] groups: 72:{ span=0-23,48-95 mask=72-95 cap=73728 }, 24:{ span=0-71 mask=24-47 cap=73728 } [ 0.194243] CPU88 attaching sched-domain(s): [ 0.194244] domain-0: span=72-95 level=MC [ 0.194246] groups: 88:{ span=88 }, 89:{ span=89 }, 90:{ span=90 }, 91:{ span=91 }, 92:{ span=92 }, 93:{ span=93 }, 94:{ span=94 }, 95:{ span=95 }, 72:{ span=72 }, 73:{ span=73 }, 74:{ span=74 }, 75:{ span=75 }, 76:{ span=76 }, 77:{ span=77 }, 78:{ span=78 }, 79:{ span=79 }, 80:{ span=80 }, 81:{ span=81 }, 82:{ span=82 }, 83:{ span=83 }, 84:{ span=84 }, 85:{ span=85 }, 86:{ span=86 }, 87:{ span=87 } [ 0.194287] domain-1: span=48-95 level=NUMA [ 0.194289] groups: 72:{ span=72-95 cap=24576 }, 48:{ span=48-71 cap=24576 } [ 0.194295] domain-2: span=0-23,48-95 level=NUMA [ 0.194297] groups: 72:{ span=48-95 mask=72-95 cap=49152 }, 0:{ span=0-23 cap=24575 } [ 0.194304] domain-3: span=0-95 level=NUMA [ 0.194306] groups: 72:{ span=0-23,48-95 mask=72-95 cap=73728 }, 24:{ span=0-71 mask=24-47 cap=73728 } [ 0.194317] CPU89 attaching sched-domain(s): [ 0.194318] domain-0: span=72-95 level=MC [ 0.194320] groups: 89:{ span=89 }, 90:{ span=90 }, 91:{ span=91 }, 92:{ span=92 }, 93:{ span=93 }, 94:{ span=94 }, 95:{ span=95 }, 72:{ span=72 }, 73:{ span=73 }, 74:{ span=74 }, 75:{ span=75 }, 76:{ span=76 }, 77:{ span=77 }, 78:{ span=78 }, 79:{ span=79 }, 80:{ span=80 }, 81:{ span=81 }, 82:{ span=82 }, 83:{ span=83 }, 84:{ span=84 }, 85:{ span=85 }, 86:{ span=86 }, 87:{ span=87 }, 88:{ span=88 } [ 0.194362] domain-1: span=48-95 level=NUMA [ 0.194364] groups: 72:{ span=72-95 cap=24576 }, 48:{ span=48-71 cap=24576 } [ 0.194369] domain-2: span=0-23,48-95 level=NUMA [ 0.194371] groups: 72:{ span=48-95 mask=72-95 cap=49152 }, 0:{ span=0-23 cap=24575 } [ 0.194378] domain-3: span=0-95 level=NUMA [ 0.194380] groups: 72:{ span=0-23,48-95 
mask=72-95 cap=73728 }, 24:{ span=0-71 mask=24-47 cap=73728 } [ 0.194392] CPU90 attaching sched-domain(s): [ 0.194393] domain-0: span=72-95 level=MC [ 0.194394] groups: 90:{ span=90 }, 91:{ span=91 }, 92:{ span=92 }, 93:{ span=93 }, 94:{ span=94 }, 95:{ span=95 }, 72:{ span=72 }, 73:{ span=73 }, 74:{ span=74 }, 75:{ span=75 }, 76:{ span=76 }, 77:{ span=77 }, 78:{ span=78 }, 79:{ span=79 }, 80:{ span=80 }, 81:{ span=81 }, 82:{ span=82 }, 83:{ span=83 }, 84:{ span=84 }, 85:{ span=85 }, 86:{ span=86 }, 87:{ span=87 }, 88:{ span=88 }, 89:{ span=89 } [ 0.194436] domain-1: span=48-95 level=NUMA [ 0.194438] groups: 72:{ span=72-95 cap=24576 }, 48:{ span=48-71 cap=24576 } [ 0.194443] domain-2: span=0-23,48-95 level=NUMA [ 0.194445] groups: 72:{ span=48-95 mask=72-95 cap=49152 }, 0:{ span=0-23 cap=24575 } [ 0.194452] domain-3: span=0-95 level=NUMA [ 0.194454] groups: 72:{ span=0-23,48-95 mask=72-95 cap=73728 }, 24:{ span=0-71 mask=24-47 cap=73728 } [ 0.194466] CPU91 attaching sched-domain(s): [ 0.194466] domain-0: span=72-95 level=MC [ 0.194468] groups: 91:{ span=91 }, 92:{ span=92 }, 93:{ span=93 }, 94:{ span=94 }, 95:{ span=95 }, 72:{ span=72 }, 73:{ span=73 }, 74:{ span=74 }, 75:{ span=75 }, 76:{ span=76 }, 77:{ span=77 }, 78:{ span=78 }, 79:{ span=79 }, 80:{ span=80 }, 81:{ span=81 }, 82:{ span=82 }, 83:{ span=83 }, 84:{ span=84 }, 85:{ span=85 }, 86:{ span=86 }, 87:{ span=87 }, 88:{ span=88 }, 89:{ span=89 }, 90:{ span=90 } [ 0.194510] domain-1: span=48-95 level=NUMA [ 0.194512] groups: 72:{ span=72-95 cap=24576 }, 48:{ span=48-71 cap=24576 } [ 0.194518] domain-2: span=0-23,48-95 level=NUMA [ 0.194520] groups: 72:{ span=48-95 mask=72-95 cap=49152 }, 0:{ span=0-23 cap=24575 } [ 0.194526] domain-3: span=0-95 level=NUMA [ 0.194529] groups: 72:{ span=0-23,48-95 mask=72-95 cap=73728 }, 24:{ span=0-71 mask=24-47 cap=73728 } [ 0.194541] CPU92 attaching sched-domain(s): [ 0.194542] domain-0: span=72-95 level=MC [ 0.194543] groups: 92:{ span=92 }, 93:{ span=93 }, 94:{ span=94 
}, 95:{ span=95 }, 72:{ span=72 }, 73:{ span=73 }, 74:{ span=74 }, 75:{ span=75 }, 76:{ span=76 }, 77:{ span=77 }, 78:{ span=78 }, 79:{ span=79 }, 80:{ span=80 }, 81:{ span=81 }, 82:{ span=82 }, 83:{ span=83 }, 84:{ span=84 }, 85:{ span=85 }, 86:{ span=86 }, 87:{ span=87 }, 88:{ span=88 }, 89:{ span=89 }, 90:{ span=90 }, 91:{ span=91 } [ 0.194584] domain-1: span=48-95 level=NUMA [ 0.194586] groups: 72:{ span=72-95 cap=24576 }, 48:{ span=48-71 cap=24576 } [ 0.194592] domain-2: span=0-23,48-95 level=NUMA [ 0.194594] groups: 72:{ span=48-95 mask=72-95 cap=49152 }, 0:{ span=0-23 cap=24575 } [ 0.194600] domain-3: span=0-95 level=NUMA [ 0.194602] groups: 72:{ span=0-23,48-95 mask=72-95 cap=73728 }, 24:{ span=0-71 mask=24-47 cap=73728 } [ 0.194614] CPU93 attaching sched-domain(s): [ 0.194615] domain-0: span=72-95 level=MC [ 0.194617] groups: 93:{ span=93 }, 94:{ span=94 }, 95:{ span=95 }, 72:{ span=72 }, 73:{ span=73 }, 74:{ span=74 }, 75:{ span=75 }, 76:{ span=76 }, 77:{ span=77 }, 78:{ span=78 }, 79:{ span=79 }, 80:{ span=80 }, 81:{ span=81 }, 82:{ span=82 }, 83:{ span=83 }, 84:{ span=84 }, 85:{ span=85 }, 86:{ span=86 }, 87:{ span=87 }, 88:{ span=88 }, 89:{ span=89 }, 90:{ span=90 }, 91:{ span=91 }, 92:{ span=92 } [ 0.194658] domain-1: span=48-95 level=NUMA [ 0.194660] groups: 72:{ span=72-95 cap=24576 }, 48:{ span=48-71 cap=24576 } [ 0.194666] domain-2: span=0-23,48-95 level=NUMA [ 0.194668] groups: 72:{ span=48-95 mask=72-95 cap=49152 }, 0:{ span=0-23 cap=24575 } [ 0.194675] domain-3: span=0-95 level=NUMA [ 0.194677] groups: 72:{ span=0-23,48-95 mask=72-95 cap=73728 }, 24:{ span=0-71 mask=24-47 cap=73728 } [ 0.194689] CPU94 attaching sched-domain(s): [ 0.194690] domain-0: span=72-95 level=MC [ 0.194692] groups: 94:{ span=94 }, 95:{ span=95 }, 72:{ span=72 }, 73:{ span=73 }, 74:{ span=74 }, 75:{ span=75 }, 76:{ span=76 }, 77:{ span=77 }, 78:{ span=78 }, 79:{ span=79 }, 80:{ span=80 }, 81:{ span=81 }, 82:{ span=82 }, 83:{ span=83 }, 84:{ span=84 }, 85:{ span=85 }, 86:{ 
span=86 }, 87:{ span=87 }, 88:{ span=88 }, 89:{ span=89 }, 90:{ span=90 }, 91:{ span=91 }, 92:{ span=92 }, 93:{ span=93 } [ 0.194733] domain-1: span=48-95 level=NUMA [ 0.194735] groups: 72:{ span=72-95 cap=24576 }, 48:{ span=48-71 cap=24576 } [ 0.194741] domain-2: span=0-23,48-95 level=NUMA [ 0.194743] groups: 72:{ span=48-95 mask=72-95 cap=49152 }, 0:{ span=0-23 cap=24575 } [ 0.194750] domain-3: span=0-95 level=NUMA [ 0.194752] groups: 72:{ span=0-23,48-95 mask=72-95 cap=73728 }, 24:{ span=0-71 mask=24-47 cap=73728 } [ 0.194766] CPU95 attaching sched-domain(s): [ 0.194767] domain-0: span=72-95 level=MC [ 0.194769] groups: 95:{ span=95 }, 72:{ span=72 }, 73:{ span=73 }, 74:{ span=74 }, 75:{ span=75 }, 76:{ span=76 }, 77:{ span=77 }, 78:{ span=78 }, 79:{ span=79 }, 80:{ span=80 }, 81:{ span=81 }, 82:{ span=82 }, 83:{ span=83 }, 84:{ span=84 }, 85:{ span=85 }, 86:{ span=86 }, 87:{ span=87 }, 88:{ span=88 }, 89:{ span=89 }, 90:{ span=90 }, 91:{ span=91 }, 92:{ span=92 }, 93:{ span=93 }, 94:{ span=94 } [ 0.194810] domain-1: span=48-95 level=NUMA [ 0.194812] groups: 72:{ span=72-95 cap=24576 }, 48:{ span=48-71 cap=24576 } [ 0.194818] domain-2: span=0-23,48-95 level=NUMA [ 0.194820] groups: 72:{ span=48-95 mask=72-95 cap=49152 }, 0:{ span=0-23 cap=24575 } [ 0.194827] domain-3: span=0-95 level=NUMA [ 0.194829] groups: 72:{ span=0-23,48-95 mask=72-95 cap=73728 }, 24:{ span=0-71 mask=24-47 cap=73728 } [ 0.194841] root domain span: 0-95 (max cpu_capacity = 1024) [ 0.236185] devtmpfs: initialized [ 0.241935] KASLR disabled due to lack of seed [ 0.242096] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 7645041785100000 ns [ 0.242192] futex hash table entries: 32768 (order: 9, 2097152 bytes, vmalloc) [ 0.242587] pinctrl core: initialized pinctrl subsystem [ 0.242786] SMBIOS 3.2.0 present. 
[ 0.242792] DMI: Huawei TaiShan 200 (Model 2280)/BC82AMDDA, BIOS 1.38 07/04/2020
[ 0.243114] NET: Registered protocol family 16
[ 0.244765] DMA: preallocated 4096 KiB GFP_KERNEL pool for atomic allocations
[ 0.245320] DMA: preallocated 4096 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
[ 0.245870] DMA: preallocated 4096 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
[ 0.245883] audit: initializing netlink subsys (disabled)
[ 0.245952] audit: type=2000 audit(0.236:1): state=initialized audit_enabled=0 res=1
[ 0.246082] thermal_sys: Registered thermal governor 'fair_share'
[ 0.246084] thermal_sys: Registered thermal governor 'step_wise'
[ 0.246088] thermal_sys: Registered thermal governor 'user_space'
[ 0.246205] cpuidle: using governor menu
[ 0.246314] hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
[ 0.247187] ASID allocator initialised with 65536 entries
[ 0.247192] HugeTLB: can optimize 127 vmemmap pages for hugepages-32768kB
[ 0.247196] HugeTLB: can optimize 7 vmemmap pages for hugepages-2048kB
[ 0.247199] HugeTLB: can optimize 0 vmemmap pages for hugepages-64kB
[ 0.247225] ACPI: bus type PCI registered
[ 0.247229] acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
[ 0.247290] Serial: AMBA PL011 UART driver
[ 0.253397] HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
[ 0.253402] HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
[ 0.253405] HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
[ 0.253408] HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
[ 0.323731] ACPI: Added _OSI(Module Device)
[ 0.323736] ACPI: Added _OSI(Processor Device)
[ 0.323738] ACPI: Added _OSI(3.0 _SCP Extensions)
[ 0.323741] ACPI: Added _OSI(Processor Aggregator Device)
[ 0.323744] ACPI: Added _OSI(Linux-Dell-Video)
[ 0.323747] ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
[ 0.323749] ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
[ 0.327479] ACPI: 1 ACPI AML tables successfully acquired and loaded
[ 0.329933] ACPI: Interpreter enabled
[ 0.329938] ACPI: Using GIC for interrupt routing
[ 0.329966] ACPI: MCFG table detected, 1 entries
[ 0.330176] HEST: Table parsing has been initialized.
[ 0.353796] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-3f])
[ 0.353805] acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
[ 0.353888] acpi PNP0A08:00: _OSC: platform does not support [SHPCHotplug LTR DPC]
[ 0.353953] acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
[ 0.354452] acpi PNP0A08:00: ECAM area [mem 0xd0000000-0xd3ffffff] reserved by PNP0C02:00
[ 0.354462] acpi PNP0A08:00: ECAM at [mem 0xd0000000-0xd3ffffff] for [bus 00-3f]
[ 0.354483] Remapped I/O 0x00000000f7ff0000 to [io 0x0000-0xffff window]
[ 0.354537] PCI host bridge to bus 0000:00
[ 0.354541] pci_bus 0000:00: root bus resource [mem 0x80000000000-0x82fffffffff pref window]
[ 0.354545] pci_bus 0000:00: root bus resource [mem 0xe0000000-0xf7feffff window]
[ 0.354548] pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
[ 0.354552] pci_bus 0000:00: root bus resource [bus 00-3f]
[ 0.354582] pci 0000:00:00.0: [19e5:a120] type 01 class 0x060400
[ 0.354637] pci 0000:00:00.0: PME# supported from D0 D1 D2 D3hot D3cold
[ 0.354698] pci 0000:00:02.0: [19e5:a120] type 01 class 0x060400
[ 0.354745] pci 0000:00:02.0: PME# supported from D0 D1 D2 D3hot D3cold
[ 0.354796] pci 0000:00:04.0: [19e5:a120] type 01 class 0x060400
[ 0.354843] pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
[ 0.354891] pci 0000:00:06.0: [19e5:a120] type 01 class 0x060400
[ 0.354940] pci 0000:00:06.0: PME# supported from D0 D1 D2 D3hot D3cold
[ 0.354989] pci 0000:00:08.0: [19e5:a120] type 01 class 0x060400
[ 0.355033] pci 0000:00:08.0: PME# supported from D0 D1 D2 D3hot D3cold
[ 0.355080] pci 0000:00:0c.0: [19e5:a120] type 01 class 0x060400
[ 0.355125] pci 0000:00:0c.0: PME# supported from D0 D1 D2 D3hot D3cold
[ 0.355172] pci 0000:00:0e.0: [19e5:a120] type 01 class 0x060400
[ 0.355216] pci 0000:00:0e.0: PME# supported from D0 D1 D2 D3hot D3cold
[ 0.355260] pci 0000:00:10.0: [19e5:a120] type 01 class 0x060400
[ 0.355305] pci 0000:00:10.0: PME# supported from D0 D1 D2 D3hot D3cold
[ 0.355356] pci 0000:00:11.0: [19e5:a120] type 01 class 0x060400
[ 0.355400] pci 0000:00:11.0: PME# supported from D0 D1 D2 D3hot D3cold
[ 0.355448] pci 0000:00:12.0: [19e5:a120] type 01 class 0x060400
[ 0.355496] pci 0000:00:12.0: PME# supported from D0 D1 D2 D3hot D3cold
[ 0.355633] pci 0000:05:00.0: [1000:0016] type 00 class 0x010700
[ 0.355648] pci 0000:05:00.0: reg 0x10: [mem 0x80000100000-0x800001fffff 64bit pref]
[ 0.355658] pci 0000:05:00.0: reg 0x18: [mem 0x80000000000-0x800000fffff 64bit pref]
[ 0.355665] pci 0000:05:00.0: reg 0x20: [mem 0xe6400000-0xe64fffff]
[ 0.355673] pci 0000:05:00.0: reg 0x24: [io 0x0000-0x00ff]
[ 0.355680] pci 0000:05:00.0: reg 0x30: [mem 0xe6300000-0xe63fffff pref]
[ 0.355744] pci 0000:05:00.0: supports D1 D2
[ 0.355873] pci 0000:08:00.0: [19e5:1710] type 00 class 0x118000
[ 0.355892] pci 0000:08:00.0: reg 0x10: [mem 0xe0000000-0xe3ffffff pref]
[ 0.355903] pci 0000:08:00.0: reg 0x14: [mem 0xe6200000-0xe62fffff]
[ 0.356049] pci 0000:08:00.0: supports D1
[ 0.356052] pci 0000:08:00.0: PME# supported from D0 D1 D3hot
[ 0.356175] pci 0000:09:00.0: [19e5:1711] type 00 class 0x030000
[ 0.356200] pci 0000:09:00.0: reg 0x10: [mem 0xe4000000-0xe5ffffff pref]
[ 0.356214] pci 0000:09:00.0: reg 0x14: [mem 0xe6000000-0xe61fffff]
[ 0.356335] pci 0000:09:00.0: BAR 0: assigned to efifb
[ 0.356395] pci 0000:09:00.0: supports D1
[ 0.356398] pci 0000:09:00.0: PME# supported from D0 D1 D3hot
[ 0.356521] pci_bus 0000:00: on NUMA node 0
[ 0.356523] pci 0000:00:00.0: PCI bridge to [bus 01]
[ 0.356528] pci 0000:00:02.0: PCI bridge to [bus 02]
[ 0.356532] pci 0000:00:04.0: PCI bridge to [bus 03]
[ 0.356537] pci 0000:00:06.0: PCI bridge to [bus 04]
[ 0.356541] pci 0000:00:08.0: PCI bridge to [bus 05]
[ 0.356544] pci 0000:00:08.0: bridge window [io 0x0000-0x0fff]
[ 0.356547] pci 0000:00:08.0: bridge window [mem 0xe6300000-0xe64fffff]
[ 0.356551] pci 0000:00:08.0: bridge window [mem 0x80000000000-0x800001fffff 64bit pref]
[ 0.356555] pci 0000:00:0c.0: PCI bridge to [bus 06]
[ 0.356559] pci 0000:00:0e.0: PCI bridge to [bus 07]
[ 0.356563] pci 0000:00:10.0: PCI bridge to [bus 08]
[ 0.356567] pci 0000:00:10.0: bridge window [mem 0xe6200000-0xe62fffff]
[ 0.356571] pci 0000:00:10.0: bridge window [mem 0xe0000000-0xe3ffffff 64bit pref]
[ 0.356574] pci 0000:00:11.0: PCI bridge to [bus 09]
[ 0.356578] pci 0000:00:11.0: bridge window [mem 0xe6000000-0xe61fffff]
[ 0.356581] pci 0000:00:11.0: bridge window [mem 0xe4000000-0xe5ffffff 64bit pref]
[ 0.356585] pci 0000:00:12.0: PCI bridge to [bus 0a]
[ 0.356592] pci 0000:00:00.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
[ 0.356596] pci 0000:00:00.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000
[ 0.356600] pci 0000:00:00.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000
[ 0.356604] pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
[ 0.356607] pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
[ 0.356610] pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000
[ 0.356614] pci 0000:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
[ 0.356617] pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000
[ 0.356621] pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000
[ 0.356627] pci 0000:00:06.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
[ 0.356630] pci 0000:00:06.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000
[ 0.356634] pci 0000:00:06.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
[ 0.356637] pci 0000:00:0c.0: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
[ 0.356641] pci 0000:00:0c.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 06] add_size 200000 add_align 100000
[ 0.356644] pci 0000:00:0c.0: bridge window [mem 0x00100000-0x000fffff] to [bus 06] add_size 200000 add_align 100000
[ 0.356648] pci 0000:00:0e.0: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
[ 0.356651] pci 0000:00:0e.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 07] add_size 200000 add_align 100000
[ 0.356655] pci 0000:00:0e.0: bridge window [mem 0x00100000-0x000fffff] to [bus 07] add_size 200000 add_align 100000
[ 0.356667] pci 0000:00:00.0: BAR 14: assigned [mem 0xe6500000-0xe66fffff]
[ 0.356670] pci 0000:00:00.0: BAR 15: assigned [mem 0x80000200000-0x800003fffff 64bit pref]
[ 0.356674] pci 0000:00:02.0: BAR 14: assigned [mem 0xe6700000-0xe68fffff]
[ 0.356677] pci 0000:00:02.0: BAR 15: assigned [mem 0x80000400000-0x800005fffff 64bit pref]
[ 0.356680] pci 0000:00:04.0: BAR 14: assigned [mem 0xe6900000-0xe6afffff]
[ 0.356684] pci 0000:00:04.0: BAR 15: assigned [mem 0x80000600000-0x800007fffff 64bit pref]
[ 0.356687] pci 0000:00:06.0: BAR 14: assigned [mem 0xe6b00000-0xe6cfffff]
[ 0.356690] pci 0000:00:06.0: BAR 15: assigned [mem 0x80000800000-0x800009fffff 64bit pref]
[ 0.356693] pci 0000:00:0c.0: BAR 14: assigned [mem 0xe6d00000-0xe6efffff]
[ 0.356696] pci 0000:00:0c.0: BAR 15: assigned [mem 0x80000a00000-0x80000bfffff 64bit pref]
[ 0.356699] pci 0000:00:0e.0: BAR 14: assigned [mem 0xe6f00000-0xe70fffff]
[ 0.356702] pci 0000:00:0e.0: BAR 15: assigned [mem 0x80000c00000-0x80000dfffff 64bit pref]
[ 0.356706] pci 0000:00:00.0: BAR 13: assigned [io 0x1000-0x1fff]
[ 0.356708] pci 0000:00:02.0: BAR 13: assigned [io 0x2000-0x2fff]
[ 0.356711] pci 0000:00:04.0: BAR 13: assigned [io 0x3000-0x3fff]
[ 0.356714] pci 0000:00:06.0: BAR 13: assigned [io 0x4000-0x4fff]
[ 0.356716] pci 0000:00:0c.0: BAR 13: assigned [io 0x5000-0x5fff]
[ 0.356719] pci 0000:00:0e.0: BAR 13: assigned [io 0x6000-0x6fff]
[ 0.356724] pci 0000:00:00.0: PCI bridge to [bus 01]
[ 0.356727] pci 0000:00:00.0: bridge window [io 0x1000-0x1fff]
[ 0.356730] pci 0000:00:00.0: bridge window [mem 0xe6500000-0xe66fffff]
[ 0.356733] pci 0000:00:00.0: bridge window [mem 0x80000200000-0x800003fffff 64bit pref]
[ 0.356737] pci 0000:00:02.0: PCI bridge to [bus 02]
[ 0.356740] pci 0000:00:02.0: bridge window [io 0x2000-0x2fff]
[ 0.356743] pci 0000:00:02.0: bridge window [mem 0xe6700000-0xe68fffff]
[ 0.356746] pci 0000:00:02.0: bridge window [mem 0x80000400000-0x800005fffff 64bit pref]
[ 0.356750] pci 0000:00:04.0: PCI bridge to [bus 03]
[ 0.356752] pci 0000:00:04.0: bridge window [io 0x3000-0x3fff]
[ 0.356755] pci 0000:00:04.0: bridge window [mem 0xe6900000-0xe6afffff]
[ 0.356759] pci 0000:00:04.0: bridge window [mem 0x80000600000-0x800007fffff 64bit pref]
[ 0.356762] pci 0000:00:06.0: PCI bridge to [bus 04]
[ 0.356765] pci 0000:00:06.0: bridge window [io 0x4000-0x4fff]
[ 0.356768] pci 0000:00:06.0: bridge window [mem 0xe6b00000-0xe6cfffff]
[ 0.356771] pci 0000:00:06.0: bridge window [mem 0x80000800000-0x800009fffff 64bit pref]
[ 0.356775] pci 0000:00:08.0: PCI bridge to [bus 05]
[ 0.356778] pci 0000:00:08.0: bridge window [io 0x0000-0x0fff]
[ 0.356781] pci 0000:00:08.0: bridge window [mem 0xe6300000-0xe64fffff]
[ 0.356784] pci 0000:00:08.0: bridge window [mem 0x80000000000-0x800001fffff 64bit pref]
[ 0.356789] pci 0000:00:0c.0: PCI bridge to [bus 06]
[ 0.356791] pci 0000:00:0c.0: bridge window [io 0x5000-0x5fff]
[ 0.356794] pci 0000:00:0c.0: bridge window [mem 0xe6d00000-0xe6efffff]
[ 0.356798] pci 0000:00:0c.0: bridge window [mem 0x80000a00000-0x80000bfffff 64bit pref]
[ 0.356801] pci 0000:00:0e.0: PCI bridge to [bus 07]
[ 0.356804] pci 0000:00:0e.0: bridge window [io 0x6000-0x6fff]
[ 0.356807] pci 0000:00:0e.0: bridge window [mem 0xe6f00000-0xe70fffff]
[ 0.356810] pci 0000:00:0e.0: bridge window [mem 0x80000c00000-0x80000dfffff 64bit pref]
[ 0.356814] pci 0000:00:10.0: PCI bridge to [bus 08]
[ 0.356817] pci 0000:00:10.0: bridge window [mem 0xe6200000-0xe62fffff]
[ 0.356820] pci 0000:00:10.0: bridge window [mem 0xe0000000-0xe3ffffff 64bit pref]
[ 0.356824] pci 0000:00:11.0: PCI bridge to [bus 09]
[ 0.356827] pci 0000:00:11.0: bridge window [mem 0xe6000000-0xe61fffff]
[ 0.356830] pci 0000:00:11.0: bridge window [mem 0xe4000000-0xe5ffffff 64bit pref]
[ 0.356833] pci 0000:00:12.0: PCI bridge to [bus 0a]
[ 0.356838] pci_bus 0000:00: resource 4 [mem 0x80000000000-0x82fffffffff pref window]
[ 0.356841] pci_bus 0000:00: resource 5 [mem 0xe0000000-0xf7feffff window]
[ 0.356844] pci_bus 0000:00: resource 6 [io 0x0000-0xffff window]
[ 0.356847] pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
[ 0.356849] pci_bus 0000:01: resource 1 [mem 0xe6500000-0xe66fffff]
[ 0.356852] pci_bus 0000:01: resource 2 [mem 0x80000200000-0x800003fffff 64bit pref]
[ 0.356855] pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
[ 0.356857] pci_bus 0000:02: resource 1 [mem 0xe6700000-0xe68fffff]
[ 0.356860] pci_bus 0000:02: resource 2 [mem 0x80000400000-0x800005fffff 64bit pref]
[ 0.356863] pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
[ 0.356865] pci_bus 0000:03: resource 1 [mem 0xe6900000-0xe6afffff]
[ 0.356868] pci_bus 0000:03: resource 2 [mem 0x80000600000-0x800007fffff 64bit pref]
[ 0.356871] pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
[ 0.356873] pci_bus 0000:04: resource 1 [mem 0xe6b00000-0xe6cfffff]
[ 0.356876] pci_bus 0000:04: resource 2 [mem 0x80000800000-0x800009fffff 64bit pref]
[ 0.356879] pci_bus 0000:05: resource 0 [io 0x0000-0x0fff]
[ 0.356881] pci_bus 0000:05: resource 1 [mem 0xe6300000-0xe64fffff]
[ 0.356884] pci_bus 0000:05: resource 2 [mem 0x80000000000-0x800001fffff 64bit pref]
[ 0.356887] pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
[ 0.356889] pci_bus 0000:06: resource 1 [mem 0xe6d00000-0xe6efffff]
[ 0.356892] pci_bus 0000:06: resource 2 [mem 0x80000a00000-0x80000bfffff 64bit pref]
[ 0.356894] pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
[ 0.356897] pci_bus 0000:07: resource 1 [mem 0xe6f00000-0xe70fffff]
[ 0.356899] pci_bus 0000:07: resource 2 [mem 0x80000c00000-0x80000dfffff 64bit pref]
[ 0.356902] pci_bus 0000:08: resource 1 [mem 0xe6200000-0xe62fffff]
[ 0.356905] pci_bus 0000:08: resource 2 [mem 0xe0000000-0xe3ffffff 64bit pref]
[ 0.356908] pci_bus 0000:09: resource 1 [mem 0xe6000000-0xe61fffff]
[ 0.356910] pci_bus 0000:09: resource 2 [mem 0xe4000000-0xe5ffffff 64bit pref]
[ 0.356955] ACPI: PCI Root Bridge [PCI1] (domain 0000 [bus 7b])
[ 0.356961] acpi PNP0A08:01: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
[ 0.357036] acpi PNP0A08:01: _OSC: platform does not support [PCIeHotplug SHPCHotplug PME AER LTR DPC]
[ 0.357101] acpi PNP0A08:01: _OSC: OS now controls [PCIeCapability]
[ 0.357595] acpi PNP0A08:01: ECAM area [mem 0xd7b00000-0xd7bfffff] reserved by PNP0C02:00
[ 0.357611] acpi PNP0A08:01: ECAM at [mem 0xd7b00000-0xd7bfffff] for [bus 7b]
[ 0.357661] PCI host bridge to bus 0000:7b
[ 0.357665] pci_bus 0000:7b: root bus resource [mem 0x148800000-0x148ffffff pref window]
[ 0.357668] pci_bus 0000:7b: root bus resource [bus 7b]
[ 0.357676] pci 0000:7b:00.0: [19e5:a122] type 00 class 0x088000
[ 0.357683] pci 0000:7b:00.0: reg 0x18: [mem 0x00000000-0x00003fff 64bit pref]
[ 0.357743] pci_bus 0000:7b: on NUMA node 0
[ 0.357746] pci 0000:7b:00.0: BAR 2: assigned [mem 0x148800000-0x148803fff 64bit pref]
[ 0.357751] pci_bus 0000:7b: resource 4 [mem 0x148800000-0x148ffffff pref window]
[ 0.357787] ACPI: PCI Root Bridge [PCI2] (domain 0000 [bus 7a])
[ 0.357791] acpi PNP0A08:02: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
[ 0.357863] acpi PNP0A08:02: _OSC: platform does not support [PCIeHotplug SHPCHotplug PME AER LTR DPC]
[ 0.357927] acpi PNP0A08:02: _OSC: OS now controls [PCIeCapability]
[ 0.358416] acpi PNP0A08:02: ECAM area [mem 0xd7a00000-0xd7afffff] reserved by PNP0C02:00
[ 0.358432] acpi PNP0A08:02: ECAM at [mem 0xd7a00000-0xd7afffff] for [bus 7a]
[ 0.358476] PCI host bridge to bus 0000:7a
[ 0.358480] pci_bus 0000:7a: root bus resource [mem 0x20c000000-0x20c1fffff pref window]
[ 0.358483] pci_bus 0000:7a: root bus resource [bus 7a]
[ 0.358490] pci 0000:7a:00.0: [19e5:a23b] type 00 class 0x0c0310
[ 0.358496] pci 0000:7a:00.0: reg 0x10: [mem 0x20c100000-0x20c100fff 64bit pref]
[ 0.358544] pci 0000:7a:01.0: [19e5:a239] type 00 class 0x0c0320
[ 0.358550] pci 0000:7a:01.0: reg 0x10: [mem 0x20c101000-0x20c101fff 64bit pref]
[ 0.358602] pci 0000:7a:02.0: [19e5:a238] type 00 class 0x0c0330
[ 0.358608] pci 0000:7a:02.0: reg 0x10: [mem 0x20c000000-0x20c0fffff 64bit pref]
[ 0.358658] pci_bus 0000:7a: on NUMA node 0
[ 0.358661] pci 0000:7a:02.0: BAR 0: assigned [mem 0x20c000000-0x20c0fffff 64bit pref]
[ 0.358666] pci 0000:7a:00.0: BAR 0: assigned [mem 0x20c100000-0x20c100fff 64bit pref]
[ 0.358671] pci 0000:7a:01.0: BAR 0: assigned [mem 0x20c101000-0x20c101fff 64bit pref]
[ 0.358675] pci_bus 0000:7a: resource 4 [mem 0x20c000000-0x20c1fffff pref window]
[ 0.358713] ACPI: PCI Root Bridge [PCI3] (domain 0000 [bus 78-79])
[ 0.358717] acpi PNP0A08:03: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
[ 0.358790] acpi PNP0A08:03: _OSC: platform does not support [PCIeHotplug SHPCHotplug PME AER LTR DPC]
[ 0.358853] acpi PNP0A08:03: _OSC: OS now controls [PCIeCapability]
[ 0.359338] acpi PNP0A08:03: ECAM area [mem 0xd7800000-0xd79fffff] reserved by PNP0C02:00
[ 0.359345] acpi PNP0A08:03: ECAM at [mem 0xd7800000-0xd79fffff] for [bus 78-79]
[ 0.359405] PCI host bridge to bus 0000:78
[ 0.359409] pci_bus 0000:78: root bus resource [mem 0x208000000-0x208bfffff pref window]
[ 0.359412] pci_bus 0000:78: root bus resource [bus 78-79]
[ 0.359423] pci_bus 0000:78: on NUMA node 0
[ 0.359425] pci_bus 0000:78: resource 4 [mem 0x208000000-0x208bfffff pref window]
[ 0.359466] ACPI: PCI Root Bridge [PCI4] (domain 0000 [bus 7c-7d])
[ 0.359470] acpi PNP0A08:04: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
[ 0.359541] acpi PNP0A08:04: _OSC: platform does not support [PCIeHotplug SHPCHotplug PME AER LTR DPC]
[ 0.359604] acpi PNP0A08:04: _OSC: OS now controls [PCIeCapability]
[ 0.360084] acpi PNP0A08:04: ECAM area [mem 0xd7c00000-0xd7dfffff] reserved by PNP0C02:00
[ 0.360090] acpi PNP0A08:04: ECAM at [mem 0xd7c00000-0xd7dfffff] for [bus 7c-7d]
[ 0.360137] PCI host bridge to bus 0000:7c
[ 0.360140] pci_bus 0000:7c: root bus resource [mem 0x120000000-0x13fffffff pref window]
[ 0.360144] pci_bus 0000:7c: root bus resource [bus 7c-7d]
[ 0.360151] pci 0000:7c:00.0: [19e5:a121] type 01 class 0x060400
[ 0.360160] pci 0000:7c:00.0: enabling Extended Tags
[ 0.360173] ACPI: IORT: [Firmware Bug]: [map (____ptrval____)] conflicting mapping for input ID 0x7c00
[ 0.360176] ACPI: IORT: [Firmware Bug]: applying workaround.
[ 0.360243] pci 0000:7d:00.0: [19e5:a222] type 00 class 0x020000
[ 0.360249] pci 0000:7d:00.0: reg 0x10: [mem 0x1221f0000-0x1221fffff 64bit pref]
[ 0.360253] pci 0000:7d:00.0: reg 0x18: [mem 0x121f00000-0x121ffffff 64bit pref]
[ 0.360274] pci 0000:7d:00.0: reg 0x224: [mem 0x122180000-0x12218ffff 64bit pref]
[ 0.360278] pci 0000:7d:00.0: VF(n) BAR0 space: [mem 0x122180000-0x1221effff 64bit pref] (contains BAR0 for 7 VFs)
[ 0.360283] pci 0000:7d:00.0: reg 0x22c: [mem 0x121800000-0x1218fffff 64bit pref]
[ 0.360286] pci 0000:7d:00.0: VF(n) BAR2 space: [mem 0x121800000-0x121efffff 64bit pref] (contains BAR2 for 7 VFs)
[ 0.360330] pci 0000:7d:00.1: [19e5:a221] type 00 class 0x020000
[ 0.360336] pci 0000:7d:00.1: reg 0x10: [mem 0x122170000-0x12217ffff 64bit pref]
[ 0.360341] pci 0000:7d:00.1: reg 0x18: [mem 0x121700000-0x1217fffff 64bit pref]
[ 0.360360] pci 0000:7d:00.1: reg 0x224: [mem 0x122100000-0x12210ffff 64bit pref]
[ 0.360364] pci 0000:7d:00.1: VF(n) BAR0 space: [mem 0x122100000-0x12216ffff 64bit pref] (contains BAR0 for 7 VFs)
[ 0.360368] pci 0000:7d:00.1: reg 0x22c: [mem 0x121000000-0x1210fffff 64bit pref]
[ 0.360372] pci 0000:7d:00.1: VF(n) BAR2 space: [mem 0x121000000-0x1216fffff 64bit pref] (contains BAR2 for 7 VFs)
[ 0.360418] pci 0000:7d:00.2: [19e5:a222] type 00 class 0x020000
[ 0.360423] pci 0000:7d:00.2: reg 0x10: [mem 0x1220f0000-0x1220fffff 64bit pref]
[ 0.360428] pci 0000:7d:00.2: reg 0x18: [mem 0x120f00000-0x120ffffff 64bit pref]
[ 0.360448] pci 0000:7d:00.2: reg 0x224: [mem 0x122080000-0x12208ffff 64bit pref]
[ 0.360451] pci 0000:7d:00.2: VF(n) BAR0 space: [mem 0x122080000-0x1220effff 64bit pref] (contains BAR0 for 7 VFs)
[ 0.360456] pci 0000:7d:00.2: reg 0x22c: [mem 0x120800000-0x1208fffff 64bit pref]
[ 0.360460] pci 0000:7d:00.2: VF(n) BAR2 space: [mem 0x120800000-0x120efffff 64bit pref] (contains BAR2 for 7 VFs)
[ 0.360503] pci 0000:7d:00.3: [19e5:a221] type 00 class 0x020000
[ 0.360509] pci 0000:7d:00.3: reg 0x10: [mem 0x122070000-0x12207ffff 64bit pref]
[ 0.360514] pci 0000:7d:00.3: reg 0x18: [mem 0x120700000-0x1207fffff 64bit pref]
[ 0.360533] pci 0000:7d:00.3: reg 0x224: [mem 0x122000000-0x12200ffff 64bit pref]
[ 0.360537] pci 0000:7d:00.3: VF(n) BAR0 space: [mem 0x122000000-0x12206ffff 64bit pref] (contains BAR0 for 7 VFs)
[ 0.360542] pci 0000:7d:00.3: reg 0x22c: [mem 0x120000000-0x1200fffff 64bit pref]
[ 0.360545] pci 0000:7d:00.3: VF(n) BAR2 space: [mem 0x120000000-0x1206fffff 64bit pref] (contains BAR2 for 7 VFs)
[ 0.360594] pci_bus 0000:7c: on NUMA node 0
[ 0.360601] pci 0000:7c:00.0: bridge window [mem 0x00100000-0x005fffff 64bit pref] to [bus 7d] add_size 1d00000 add_align 100000
[ 0.360606] pci 0000:7c:00.0: BAR 15: assigned [mem 0x120000000-0x1221fffff 64bit pref]
[ 0.360614] pci 0000:7d:00.0: BAR 2: assigned [mem 0x120000000-0x1200fffff 64bit pref]
[ 0.360619] pci 0000:7d:00.0: BAR 9: assigned [mem 0x120100000-0x1207fffff 64bit pref]
[ 0.360623] pci 0000:7d:00.1: BAR 2: assigned [mem 0x120800000-0x1208fffff 64bit pref]
[ 0.360627] pci 0000:7d:00.1: BAR 9: assigned [mem 0x120900000-0x120ffffff 64bit pref]
[ 0.360631] pci 0000:7d:00.2: BAR 2: assigned [mem 0x121000000-0x1210fffff 64bit pref]
[ 0.360635] pci 0000:7d:00.2: BAR 9: assigned [mem 0x121100000-0x1217fffff 64bit pref]
[ 0.360638] pci 0000:7d:00.3: BAR 2: assigned [mem 0x121800000-0x1218fffff 64bit pref]
[ 0.360643] pci 0000:7d:00.3: BAR 9: assigned [mem 0x121900000-0x121ffffff 64bit pref]
[ 0.360646] pci 0000:7d:00.0: BAR 0: assigned [mem 0x122000000-0x12200ffff 64bit pref]
[ 0.360650] pci 0000:7d:00.0: BAR 7: assigned [mem 0x122010000-0x12207ffff 64bit pref]
[ 0.360654] pci 0000:7d:00.1: BAR 0: assigned [mem 0x122080000-0x12208ffff 64bit pref]
[ 0.360658] pci 0000:7d:00.1: BAR 7: assigned [mem 0x122090000-0x1220fffff 64bit pref]
[ 0.360661] pci 0000:7d:00.2: BAR 0: assigned [mem 0x122100000-0x12210ffff 64bit pref]
[ 0.360665] pci 0000:7d:00.2: BAR 7: assigned [mem 0x122110000-0x12217ffff 64bit pref]
[ 0.360669] pci 0000:7d:00.3: BAR 0: assigned [mem 0x122180000-0x12218ffff 64bit pref]
[ 0.360673] pci 0000:7d:00.3: BAR 7: assigned [mem 0x122190000-0x1221fffff 64bit pref]
[ 0.360679] pci 0000:7c:00.0: PCI bridge to [bus 7d]
[ 0.360682] pci 0000:7c:00.0: bridge window [mem 0x120000000-0x1221fffff 64bit pref]
[ 0.360686] pci_bus 0000:7c: resource 4 [mem 0x120000000-0x13fffffff pref window]
[ 0.360689] pci_bus 0000:7d: resource 2 [mem 0x120000000-0x1221fffff 64bit pref]
[ 0.360734] ACPI: PCI Root Bridge [PCI5] (domain 0000 [bus 74-75])
[ 0.360739] acpi PNP0A08:05: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
[ 0.360810] acpi PNP0A08:05: _OSC: platform does not support [PCIeHotplug SHPCHotplug PME AER LTR DPC]
[ 0.360874] acpi PNP0A08:05: _OSC: OS now controls [PCIeCapability]
[ 0.361363] acpi PNP0A08:05: ECAM area [mem 0xd7400000-0xd75fffff] reserved by PNP0C02:00
[ 0.361371] acpi PNP0A08:05: ECAM at [mem 0xd7400000-0xd75fffff] for [bus 74-75]
[ 0.361441] PCI host bridge to bus 0000:74
[ 0.361444] pci_bus 0000:74: root bus resource [mem 0x141000000-0x141ffffff pref window]
[ 0.361448] pci_bus 0000:74: root bus resource [mem 0x144000000-0x145ffffff pref window]
[ 0.361451] pci_bus 0000:74: root bus resource [mem 0xa2000000-0xa2ffffff window]
[ 0.361454] pci_bus 0000:74: root bus resource [bus 74-75]
[ 0.361462] pci 0000:74:01.0: [19e5:a121] type 01 class 0x060400
[ 0.361470] pci 0000:74:01.0: enabling Extended Tags
[ 0.361540] pci 0000:74:02.0: [19e5:a230] type 00 class 0x010700
[ 0.361549] pci 0000:74:02.0: reg 0x24: [mem 0xa2008000-0xa200ffff]
[ 0.361627] pci 0000:74:03.0: [19e5:a235] type 00 class 0x010601
[ 0.361638] pci 0000:74:03.0: reg 0x24: [mem 0xa2010000-0xa2010fff]
[ 0.361687] pci 0000:74:04.0: [19e5:a230] type 00 class 0x010700
[ 0.361695] pci 0000:74:04.0: reg 0x24: [mem 0xa2000000-0xa2007fff]
[ 0.361799] pci_bus 0000:76: busn_res: can not insert [bus 76] under [bus 74-75] (conflicts with (null) [bus 74-75])
[ 0.361806] pci_bus 0000:74: on NUMA node 0
[ 0.361810] pci 0000:74:02.0: BAR 5: assigned [mem 0xa2000000-0xa2007fff]
[ 0.361813] pci 0000:74:04.0: BAR 5: assigned [mem 0xa2008000-0xa200ffff]
[ 0.361817] pci 0000:74:03.0: BAR 5: assigned [mem 0xa2010000-0xa2010fff]
[ 0.361820] pci 0000:74:01.0: PCI bridge to [bus 76]
[ 0.361825] pci_bus 0000:74: resource 4 [mem 0x141000000-0x141ffffff pref window]
[ 0.361829] pci_bus 0000:74: resource 5 [mem 0x144000000-0x145ffffff pref window]
[ 0.361832] pci_bus 0000:74: resource 6 [mem 0xa2000000-0xa2ffffff window]
[ 0.361896] ACPI: PCI Root Bridge [PCI6] (domain 0000 [bus 80-9f])
[ 0.361902] acpi PNP0A08:06: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
[ 0.361973] acpi PNP0A08:06: _OSC: platform does not support [SHPCHotplug LTR DPC]
[ 0.362042] acpi PNP0A08:06: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
[ 0.362528] acpi PNP0A08:06: ECAM area [mem 0xd8000000-0xd9ffffff] reserved by PNP0C02:00
[ 0.362536] acpi PNP0A08:06: ECAM at [mem 0xd8000000-0xd9ffffff] for [bus 80-9f]
[ 0.362557] Remapped I/O 0x00000000cfff0000 to [io 0x10000-0x1ffff window]
[ 0.362612] PCI host bridge to bus 0000:80
[ 0.362616] pci_bus 0000:80: root bus resource [mem 0x280000000000-0x282fffffffff pref window]
[ 0.362619] pci_bus 0000:80: root bus resource [mem 0xb0000000-0xcffeffff window]
[ 0.362623] pci_bus 0000:80: root bus resource [io 0x10000-0x1ffff window] (bus address [0x0000-0xffff])
[ 0.362626] pci_bus 0000:80: root bus resource [bus 80-9f]
[ 0.362654] pci 0000:80:00.0: [19e5:a120] type 01 class 0x060400
[ 0.362710] pci 0000:80:00.0: PME# supported from D0 D1 D2 D3hot D3cold
[ 0.362789] pci 0000:80:08.0: [19e5:a120] type 01 class 0x060400
[ 0.362841] pci 0000:80:08.0: PME# supported from D0 D1 D2 D3hot D3cold
[ 0.362899] pci 0000:80:0a.0: [19e5:a120] type 01 class 0x060400
[ 0.362950] pci 0000:80:0a.0: PME# supported from D0 D1 D2 D3hot D3cold
[ 0.363004] pci 0000:80:0c.0: [19e5:a120] type 01 class 0x060400
[ 0.363056] pci 0000:80:0c.0: PME# supported from D0 D1 D2 D3hot D3cold
[ 0.363112] pci 0000:80:0e.0: [19e5:a120] type 01 class 0x060400
[ 0.363166] pci 0000:80:0e.0: PME# supported from D0 D1 D2 D3hot D3cold
[ 0.363225] pci 0000:80:10.0: [19e5:a120] type 01 class 0x060400
[ 0.363277] pci 0000:80:10.0: PME# supported from D0 D1 D2 D3hot D3cold
[ 0.363525] pci 0000:81:00.0: [15b3:1017] type 00 class 0x020700
[ 0.363719] pci 0000:81:00.0: reg 0x10: [mem 0x280002000000-0x280003ffffff 64bit pref]
[ 0.364114] pci 0000:81:00.0: reg 0x30: [mem 0xb0500000-0xb05fffff pref]
[ 0.365150] pci 0000:81:00.0: PME# supported from D3cold
[ 0.365492] pci 0000:81:00.0: reg 0x1a4: [mem 0x280004400000-0x2800045fffff 64bit pref]
[ 0.365496] pci 0000:81:00.0: VF(n) BAR0 space: [mem 0x280004400000-0x2800047fffff 64bit pref] (contains BAR0 for 2 VFs)
[ 0.366651] pci 0000:81:00.1: [15b3:1017] type 00 class 0x020700
[ 0.366844] pci 0000:81:00.1: reg 0x10: [mem 0x280000000000-0x280001ffffff 64bit pref]
[ 0.367241] pci 0000:81:00.1: reg 0x30: [mem 0xb0400000-0xb04fffff pref]
[ 0.368227] pci 0000:81:00.1: PME# supported from D3cold
[ 0.368553] pci 0000:81:00.1: reg 0x1a4: [mem 0x280004000000-0x2800041fffff 64bit pref]
[ 0.368556] pci 0000:81:00.1: VF(n) BAR0 space: [mem 0x280004000000-0x2800043fffff 64bit pref] (contains BAR0 for 2 VFs)
[ 0.369543] pci 0000:82:00.0: [19e5:3754] type 00 class 0x010802
[ 0.369556] pci 0000:82:00.0: reg 0x10: [mem 0xb0340000-0xb034ffff 64bit]
[ 0.369581] pci 0000:82:00.0: reg 0x24: [mem 0xb0320000-0xb033ffff]
[ 0.369587] pci 0000:82:00.0: reg 0x30: [mem 0xb0300000-0xb031ffff pref]
[ 0.369741] pci 0000:83:00.0: [19e5:3754] type 00 class 0x010802
[ 0.369754] pci 0000:83:00.0: reg 0x10: [mem 0xb0240000-0xb024ffff 64bit]
[ 0.369779] pci 0000:83:00.0: reg 0x24: [mem 0xb0220000-0xb023ffff]
[ 0.369785] pci 0000:83:00.0: reg 0x30: [mem 0xb0200000-0xb021ffff pref]
[ 0.369931] pci 0000:84:00.0: [19e5:3754] type 00 class 0x010802
[ 0.369942] pci 0000:84:00.0: reg 0x10: [mem 0xb0140000-0xb014ffff 64bit]
[ 0.369967] pci 0000:84:00.0: reg 0x24: [mem 0xb0120000-0xb013ffff]
[ 0.369973] pci 0000:84:00.0: reg 0x30: [mem 0xb0100000-0xb011ffff pref]
[ 0.370127] pci 0000:85:00.0: [19e5:3754] type 00 class 0x010802
[ 0.370140] pci 0000:85:00.0: reg 0x10: [mem 0xb0040000-0xb004ffff 64bit]
[ 0.370165] pci 0000:85:00.0: reg 0x24: [mem 0xb0020000-0xb003ffff]
[ 0.370171] pci 0000:85:00.0: reg 0x30: [mem 0xb0000000-0xb001ffff pref]
[ 0.370324] pci_bus 0000:80: on NUMA node 2
[ 0.370325] pci 0000:80:00.0: PCI bridge to [bus 81]
[ 0.370330] pci 0000:80:00.0: bridge window [mem 0xb0400000-0xb05fffff]
[ 0.370334] pci 0000:80:00.0: bridge window [mem 0x280000000000-0x2800047fffff 64bit pref]
[ 0.370338] pci 0000:80:08.0: PCI bridge to [bus 82]
[ 0.370342] pci 0000:80:08.0: bridge window [mem 0xb0300000-0xb03fffff]
[ 0.370346] pci 0000:80:0a.0: PCI bridge to [bus 83]
[ 0.370350] pci 0000:80:0a.0: bridge window [mem 0xb0200000-0xb02fffff]
[ 0.370353] pci 0000:80:0c.0: PCI bridge to [bus 84]
[ 0.370357] pci 0000:80:0c.0: bridge window [mem 0xb0100000-0xb01fffff]
[ 0.370361] pci 0000:80:0e.0: PCI bridge to [bus 85]
[ 0.370365] pci 0000:80:0e.0: bridge window [mem 0xb0000000-0xb00fffff]
[ 0.370368] pci 0000:80:10.0: PCI bridge to [bus 86]
[ 0.370377] pci 0000:80:08.0: bridge window [io 0x1000-0x0fff] to [bus 82] add_size 1000
[ 0.370381] pci 0000:80:08.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 82] add_size 200000 add_align 100000
[ 0.370385] pci 0000:80:0a.0: bridge window [io 0x1000-0x0fff] to [bus 83] add_size 1000
[ 0.370392] pci 0000:80:0a.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 83] add_size 200000 add_align 100000
[ 0.370396] pci 0000:80:0c.0: bridge window [io 0x1000-0x0fff] to [bus 84] add_size 1000
[ 0.370399] pci 0000:80:0c.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 84] add_size 200000 add_align 100000
[ 0.370403] pci 0000:80:0e.0: bridge window [io 0x1000-0x0fff] to [bus 85] add_size 1000
[ 0.370406] pci 0000:80:0e.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 85] add_size 200000 add_align 100000
[ 0.370412] pci 0000:80:08.0: BAR 15: assigned [mem 0x280004800000-0x2800049fffff 64bit pref]
[ 0.370415] pci 0000:80:0a.0: BAR 15: assigned [mem 0x280004a00000-0x280004bfffff 64bit pref]
[ 0.370419] pci 0000:80:0c.0: BAR 15: assigned [mem 0x280004c00000-0x280004dfffff 64bit pref]
[ 0.370422] pci 0000:80:0e.0: BAR 15: assigned [mem 0x280004e00000-0x280004ffffff 64bit pref]
[ 0.370425] pci 0000:80:08.0: BAR 13: assigned [io 0x10000-0x10fff]
[ 0.370428] pci 0000:80:0a.0: BAR 13: assigned [io 0x11000-0x11fff]
[ 0.370431] pci 0000:80:0c.0: BAR 13: assigned [io 0x12000-0x12fff]
[ 0.370434] pci 0000:80:0e.0: BAR 13: assigned [io 0x13000-0x13fff]
[ 0.370438] pci 0000:80:00.0: PCI bridge to [bus 81]
[ 0.370441] pci 0000:80:00.0: bridge window [mem 0xb0400000-0xb05fffff]
[ 0.370444] pci 0000:80:00.0: bridge window [mem 0x280000000000-0x2800047fffff 64bit pref]
[ 0.370448] pci 0000:80:08.0: PCI bridge to [bus 82]
[ 0.370451] pci 0000:80:08.0: bridge window [io 0x10000-0x10fff]
[ 0.370454] pci 0000:80:08.0: bridge window [mem 0xb0300000-0xb03fffff]
[ 0.370458] pci 0000:80:08.0: bridge window [mem 0x280004800000-0x2800049fffff 64bit pref]
[ 0.370462] pci 0000:80:0a.0: PCI bridge to [bus 83]
[ 0.370464] pci 0000:80:0a.0: bridge window [io 0x11000-0x11fff]
[ 0.370468] pci 0000:80:0a.0: bridge window [mem 0xb0200000-0xb02fffff]
[ 0.370471] pci 0000:80:0a.0: bridge window [mem 0x280004a00000-0x280004bfffff 64bit pref]
[ 0.370475] pci 0000:80:0c.0: PCI bridge to [bus 84]
[ 0.370478] pci 0000:80:0c.0: bridge window [io 0x12000-0x12fff]
[ 0.370481] pci 0000:80:0c.0: bridge window [mem 0xb0100000-0xb01fffff]
[ 0.370484] pci 0000:80:0c.0: bridge window [mem 0x280004c00000-0x280004dfffff 64bit pref]
[ 0.370488] pci 0000:80:0e.0: PCI bridge to [bus 85]
[ 0.370491] pci 0000:80:0e.0: bridge window [io 0x13000-0x13fff]
[ 0.370494] pci 0000:80:0e.0: bridge window [mem 0xb0000000-0xb00fffff]
[ 0.370498] pci 0000:80:0e.0: bridge window [mem 0x280004e00000-0x280004ffffff 64bit pref]
[ 0.370502] pci 0000:80:10.0: PCI bridge to [bus 86]
[ 0.370507] pci_bus 0000:80: resource 4 [mem 0x280000000000-0x282fffffffff pref window]
[ 0.370510] pci_bus 0000:80: resource 5 [mem 0xb0000000-0xcffeffff window]
[ 0.370513] pci_bus 0000:80: resource 6 [io 0x10000-0x1ffff window]
[ 0.370515] pci_bus 0000:81: resource 1 [mem 0xb0400000-0xb05fffff]
[ 0.370518] pci_bus 0000:81: resource 2 [mem 0x280000000000-0x2800047fffff 64bit pref]
[ 0.370521] pci_bus 0000:82: resource 0 [io 0x10000-0x10fff]
[ 0.370523] pci_bus 0000:82: resource 1 [mem 0xb0300000-0xb03fffff]
[ 0.370526] pci_bus 0000:82: resource 2 [mem 0x280004800000-0x2800049fffff 64bit pref]
[ 0.370529] pci_bus 0000:83: resource 0 [io 0x11000-0x11fff]
[ 0.370532] pci_bus 0000:83: resource 1 [mem 0xb0200000-0xb02fffff]
[ 0.370534] pci_bus 0000:83: resource 2 [mem 0x280004a00000-0x280004bfffff 64bit pref]
[ 0.370537] pci_bus 0000:84: resource 0 [io 0x12000-0x12fff]
[ 0.370540] pci_bus 0000:84: resource 1 [mem 0xb0100000-0xb01fffff]
[ 0.370543] pci_bus 0000:84: resource 2 [mem 0x280004c00000-0x280004dfffff 64bit pref]
[ 0.370546] pci_bus 0000:85: resource 0 [io 0x13000-0x13fff]
[ 0.370548] pci_bus 0000:85: resource 1 [mem 0xb0000000-0xb00fffff]
[ 0.370551] pci_bus 0000:85: resource 2 [mem 0x280004e00000-0x280004ffffff 64bit pref]
[ 0.370607] ACPI: PCI Root Bridge [PCI7] (domain 0000 [bus bb])
[ 0.370612] acpi PNP0A08:07: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
[ 0.370686] acpi PNP0A08:07: _OSC: platform does not support [PCIeHotplug SHPCHotplug PME AER LTR DPC]
[ 0.370750] acpi PNP0A08:07: _OSC: OS now controls [PCIeCapability]
[ 0.371238] acpi PNP0A08:07: ECAM area [mem 0xdbb00000-0xdbbfffff] reserved by PNP0C02:00
[ 0.371253] acpi PNP0A08:07: ECAM at [mem 0xdbb00000-0xdbbfffff] for [bus bb]
[ 0.371317] PCI host bridge to bus 0000:bb
[ 0.371320] pci_bus 0000:bb: root bus resource [mem 0x200148800000-0x200148ffffff pref window]
[ 0.371324] pci_bus 0000:bb: root bus resource [bus bb]
[ 0.371332] pci 0000:bb:00.0: [19e5:a122] type 00 class 0x088000
[ 0.371340] pci 0000:bb:00.0: reg 0x18: [mem 0x00000000-0x00003fff 64bit pref]
[ 0.371398] pci_bus 0000:bb: on NUMA node 2
[ 0.371400] pci 0000:bb:00.0: BAR 2: assigned [mem 0x200148800000-0x200148803fff 64bit pref]
[ 0.371406] pci_bus 0000:bb: resource 4 [mem 0x200148800000-0x200148ffffff pref window]
[ 0.371455] ACPI: PCI Root Bridge [PCI8] (domain 0000 [bus ba])
[ 0.371459] acpi PNP0A08:08: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
[ 0.371534] acpi PNP0A08:08: _OSC: platform does not support [PCIeHotplug SHPCHotplug PME AER LTR DPC]
[ 0.371598] acpi PNP0A08:08: _OSC: OS now controls [PCIeCapability]
[ 0.372081] acpi PNP0A08:08: ECAM area [mem 0xdba00000-0xdbafffff] reserved by PNP0C02:00
[ 0.372097] acpi PNP0A08:08: ECAM at [mem 0xdba00000-0xdbafffff] for [bus ba]
[ 0.372153] PCI host bridge to bus 0000:ba
[ 0.372157] pci_bus 0000:ba: root bus resource [mem 0x20020c000000-0x20020c1fffff pref window]
[ 0.372160] pci_bus 0000:ba: root bus resource [bus ba]
[ 0.372169] pci 0000:ba:00.0: [19e5:a23b] type 00 class 0x0c0310
[ 0.372175] pci 0000:ba:00.0: reg 0x10: [mem 0x20020c100000-0x20020c100fff 64bit pref]
[ 0.372230] pci 0000:ba:01.0: [19e5:a239] type 00 class 0x0c0320
[ 0.372237] pci 0000:ba:01.0: reg 0x10: [mem 0x20020c101000-0x20020c101fff 64bit pref]
[ 0.372290] pci 0000:ba:02.0: [19e5:a238] type 00 class 0x0c0330
[ 0.372297] pci 0000:ba:02.0: reg 0x10: [mem 0x20020c000000-0x20020c0fffff 64bit pref]
[ 0.372352] pci_bus 0000:ba: on NUMA node 2
[ 0.372355] pci 0000:ba:02.0: BAR 0: assigned [mem 0x20020c000000-0x20020c0fffff 64bit pref]
[ 0.372360] pci 0000:ba:00.0: BAR 0: assigned [mem 0x20020c100000-0x20020c100fff 64bit pref]
[ 0.372365] pci 0000:ba:01.0: BAR 0: assigned [mem 0x20020c101000-0x20020c101fff 64bit pref]
[ 0.372370] pci_bus 0000:ba: resource 4 [mem 0x20020c000000-0x20020c1fffff pref window]
[ 0.372420] ACPI: PCI Root Bridge [PCI9] (domain 0000 [bus b8-b9])
[ 0.372425] acpi PNP0A08:09: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
[ 0.372495] acpi PNP0A08:09: _OSC: platform does not support [PCIeHotplug SHPCHotplug PME AER LTR DPC]
[ 0.372558] acpi PNP0A08:09: _OSC: OS now controls [PCIeCapability]
[ 0.373040] acpi PNP0A08:09: ECAM area [mem 0xdb800000-0xdb9fffff] reserved by PNP0C02:00
[ 0.373047] acpi PNP0A08:09: ECAM at [mem 0xdb800000-0xdb9fffff] for [bus b8-b9]
[ 0.373116] PCI host bridge to bus 0000:b8
[ 0.373119] pci_bus 0000:b8: root bus resource [mem 0x200208000000-0x200208bfffff pref window]
[ 0.373123] pci_bus 0000:b8: root bus resource [bus b8-b9]
[ 0.373135] pci_bus 0000:b8: on NUMA node 2
[ 0.373137] pci_bus 0000:b8: resource 4 [mem 0x200208000000-0x200208bfffff pref window]
[ 0.373187] ACPI: PCI Root Bridge [PCIA] (domain 0000 [bus bc-bd])
[ 0.373192] acpi PNP0A08:0a: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
[ 0.373262] acpi PNP0A08:0a: _OSC: platform does not support [PCIeHotplug SHPCHotplug PME AER LTR DPC]
[ 0.373326] acpi PNP0A08:0a: _OSC: OS now controls [PCIeCapability]
[ 0.373813] acpi PNP0A08:0a: ECAM area [mem 0xdbc00000-0xdbdfffff] reserved by PNP0C02:00
[ 0.373820] acpi PNP0A08:0a: ECAM at [mem 0xdbc00000-0xdbdfffff] for [bus bc-bd]
[ 0.373885] PCI host bridge to bus 0000:bc
[ 0.373888] pci_bus 0000:bc: root bus resource [mem 0x200120000000-0x20013fffffff pref window]
[ 0.373892] pci_bus 0000:bc: root bus resource [bus bc-bd]
[ 0.373900] pci 0000:bc:00.0: [19e5:a121] type 01 class 0x060400
[ 0.373911] pci 0000:bc:00.0: enabling Extended Tags
[ 0.373927] ACPI: IORT: [Firmware Bug]: [map (____ptrval____)] conflicting mapping for input ID 0xbc00
[ 0.373930] ACPI: IORT: [Firmware Bug]: applying workaround.
[ 0.373989] pci 0000:bd:00.0: [19e5:a222] type 00 class 0x020000
[ 0.373999] pci 0000:bd:00.0: reg 0x10: [mem 0x2001221f0000-0x2001221fffff 64bit pref]
[ 0.374005] pci 0000:bd:00.0: reg 0x18: [mem 0x200121f00000-0x200121ffffff 64bit pref]
[ 0.374033] pci 0000:bd:00.0: reg 0x224: [mem 0x200122180000-0x20012218ffff 64bit pref]
[ 0.374037] pci 0000:bd:00.0: VF(n) BAR0 space: [mem 0x200122180000-0x2001221effff 64bit pref] (contains BAR0 for 7 VFs)
[ 0.374042] pci 0000:bd:00.0: reg 0x22c: [mem 0x200121800000-0x2001218fffff 64bit pref]
[ 0.374046] pci 0000:bd:00.0: VF(n) BAR2 space: [mem 0x200121800000-0x200121efffff 64bit pref] (contains BAR2 for 7 VFs)
[ 0.374099] pci 0000:bd:00.1: [19e5:a221] type 00 class 0x020000
[ 0.374105] pci 0000:bd:00.1: reg 0x10: [mem 0x200122170000-0x20012217ffff 64bit pref]
[ 0.374111] pci 0000:bd:00.1: reg 0x18: [mem 0x200121700000-0x2001217fffff 64bit pref]
[ 0.374136] pci 0000:bd:00.1: reg 0x224: [mem 0x200122100000-0x20012210ffff 64bit pref]
[ 0.374140] pci 0000:bd:00.1: VF(n) BAR0 space: [mem 0x200122100000-0x20012216ffff 64bit pref] (contains BAR0 for 7 VFs)
[ 0.374146] pci
0000:bd:00.1: reg 0x22c: [mem 0x200121000000-0x2001210fffff 64bit pref] [ 0.374149] pci 0000:bd:00.1: VF(n) BAR2 space: [mem 0x200121000000-0x2001216fffff 64bit pref] (contains BAR2 for 7 VFs) [ 0.374197] pci 0000:bd:00.2: [19e5:a222] type 00 class 0x020000 [ 0.374204] pci 0000:bd:00.2: reg 0x10: [mem 0x2001220f0000-0x2001220fffff 64bit pref] [ 0.374209] pci 0000:bd:00.2: reg 0x18: [mem 0x200120f00000-0x200120ffffff 64bit pref] [ 0.374235] pci 0000:bd:00.2: reg 0x224: [mem 0x200122080000-0x20012208ffff 64bit pref] [ 0.374238] pci 0000:bd:00.2: VF(n) BAR0 space: [mem 0x200122080000-0x2001220effff 64bit pref] (contains BAR0 for 7 VFs) [ 0.374244] pci 0000:bd:00.2: reg 0x22c: [mem 0x200120800000-0x2001208fffff 64bit pref] [ 0.374247] pci 0000:bd:00.2: VF(n) BAR2 space: [mem 0x200120800000-0x200120efffff 64bit pref] (contains BAR2 for 7 VFs) [ 0.374307] pci 0000:bd:00.3: [19e5:a221] type 00 class 0x020000 [ 0.374313] pci 0000:bd:00.3: reg 0x10: [mem 0x200122070000-0x20012207ffff 64bit pref] [ 0.374318] pci 0000:bd:00.3: reg 0x18: [mem 0x200120700000-0x2001207fffff 64bit pref] [ 0.374347] pci 0000:bd:00.3: reg 0x224: [mem 0x200122000000-0x20012200ffff 64bit pref] [ 0.374351] pci 0000:bd:00.3: VF(n) BAR0 space: [mem 0x200122000000-0x20012206ffff 64bit pref] (contains BAR0 for 7 VFs) [ 0.374357] pci 0000:bd:00.3: reg 0x22c: [mem 0x200120000000-0x2001200fffff 64bit pref] [ 0.374360] pci 0000:bd:00.3: VF(n) BAR2 space: [mem 0x200120000000-0x2001206fffff 64bit pref] (contains BAR2 for 7 VFs) [ 0.374426] pci_bus 0000:bc: on NUMA node 2 [ 0.374430] pci 0000:bc:00.0: bridge window [mem 0x00100000-0x005fffff 64bit pref] to [bus bd] add_size 1d00000 add_align 100000 [ 0.374435] pci 0000:bc:00.0: BAR 15: assigned [mem 0x200120000000-0x2001221fffff 64bit pref] [ 0.374444] pci 0000:bd:00.0: BAR 2: assigned [mem 0x200120000000-0x2001200fffff 64bit pref] [ 0.374449] pci 0000:bd:00.0: BAR 9: assigned [mem 0x200120100000-0x2001207fffff 64bit pref] [ 0.374453] pci 0000:bd:00.1: BAR 2: 
assigned [mem 0x200120800000-0x2001208fffff 64bit pref] [ 0.374458] pci 0000:bd:00.1: BAR 9: assigned [mem 0x200120900000-0x200120ffffff 64bit pref] [ 0.374462] pci 0000:bd:00.2: BAR 2: assigned [mem 0x200121000000-0x2001210fffff 64bit pref] [ 0.374467] pci 0000:bd:00.2: BAR 9: assigned [mem 0x200121100000-0x2001217fffff 64bit pref] [ 0.374471] pci 0000:bd:00.3: BAR 2: assigned [mem 0x200121800000-0x2001218fffff 64bit pref] [ 0.374476] pci 0000:bd:00.3: BAR 9: assigned [mem 0x200121900000-0x200121ffffff 64bit pref] [ 0.374481] pci 0000:bd:00.0: BAR 0: assigned [mem 0x200122000000-0x20012200ffff 64bit pref] [ 0.374485] pci 0000:bd:00.0: BAR 7: assigned [mem 0x200122010000-0x20012207ffff 64bit pref] [ 0.374489] pci 0000:bd:00.1: BAR 0: assigned [mem 0x200122080000-0x20012208ffff 64bit pref] [ 0.374494] pci 0000:bd:00.1: BAR 7: assigned [mem 0x200122090000-0x2001220fffff 64bit pref] [ 0.374498] pci 0000:bd:00.2: BAR 0: assigned [mem 0x200122100000-0x20012210ffff 64bit pref] [ 0.374503] pci 0000:bd:00.2: BAR 7: assigned [mem 0x200122110000-0x20012217ffff 64bit pref] [ 0.374507] pci 0000:bd:00.3: BAR 0: assigned [mem 0x200122180000-0x20012218ffff 64bit pref] [ 0.374512] pci 0000:bd:00.3: BAR 7: assigned [mem 0x200122190000-0x2001221fffff 64bit pref] [ 0.374518] pci 0000:bc:00.0: PCI bridge to [bus bd] [ 0.374522] pci 0000:bc:00.0: bridge window [mem 0x200120000000-0x2001221fffff 64bit pref] [ 0.374526] pci_bus 0000:bc: resource 4 [mem 0x200120000000-0x20013fffffff pref window] [ 0.374530] pci_bus 0000:bd: resource 2 [mem 0x200120000000-0x2001221fffff 64bit pref] [ 0.374584] ACPI: PCI Root Bridge [PCIB] (domain 0000 [bus b4-b5]) [ 0.374589] acpi PNP0A08:0b: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3] [ 0.374662] acpi PNP0A08:0b: _OSC: platform does not support [PCIeHotplug SHPCHotplug PME AER LTR DPC] [ 0.374727] acpi PNP0A08:0b: _OSC: OS now controls [PCIeCapability] [ 0.375212] acpi PNP0A08:0b: ECAM area [mem 0xdb400000-0xdb5fffff] 
reserved by PNP0C02:00 [ 0.375219] acpi PNP0A08:0b: ECAM at [mem 0xdb400000-0xdb5fffff] for [bus b4-b5] [ 0.375302] PCI host bridge to bus 0000:b4 [ 0.375306] pci_bus 0000:b4: root bus resource [mem 0x200141000000-0x200141ffffff pref window] [ 0.375309] pci_bus 0000:b4: root bus resource [mem 0x200144000000-0x200145ffffff pref window] [ 0.375312] pci_bus 0000:b4: root bus resource [mem 0xa3000000-0xa3ffffff window] [ 0.375315] pci_bus 0000:b4: root bus resource [bus b4-b5] [ 0.375324] pci 0000:b4:01.0: [19e5:a121] type 01 class 0x060400 [ 0.375334] pci 0000:b4:01.0: enabling Extended Tags [ 0.375417] pci 0000:b4:02.0: [19e5:a230] type 00 class 0x010700 [ 0.375427] pci 0000:b4:02.0: reg 0x24: [mem 0xa3008000-0xa300ffff] [ 0.375514] pci 0000:b4:03.0: [19e5:a235] type 00 class 0x010601 [ 0.375527] pci 0000:b4:03.0: reg 0x24: [mem 0xa3010000-0xa3010fff] [ 0.375579] pci 0000:b4:04.0: [19e5:a230] type 00 class 0x010700 [ 0.375589] pci 0000:b4:04.0: reg 0x24: [mem 0xa3000000-0xa3007fff] [ 0.375692] pci_bus 0000:b6: busn_res: can not insert [bus b6] under [bus b4-b5] (conflicts with (null) [bus b4-b5]) [ 0.375700] pci_bus 0000:b4: on NUMA node 2 [ 0.375703] pci 0000:b4:02.0: BAR 5: assigned [mem 0xa3000000-0xa3007fff] [ 0.375707] pci 0000:b4:04.0: BAR 5: assigned [mem 0xa3008000-0xa300ffff] [ 0.375711] pci 0000:b4:03.0: BAR 5: assigned [mem 0xa3010000-0xa3010fff] [ 0.375715] pci 0000:b4:01.0: PCI bridge to [bus b6] [ 0.375720] pci_bus 0000:b4: resource 4 [mem 0x200141000000-0x200141ffffff pref window] [ 0.375723] pci_bus 0000:b4: resource 5 [mem 0x200144000000-0x200145ffffff pref window] [ 0.375726] pci_bus 0000:b4: resource 6 [mem 0xa3000000-0xa3ffffff window] [ 0.384547] iommu: Default domain type: Translated [ 0.384620] pci 0000:09:00.0: vgaarb: VGA device added: decodes=io+mem,owns=none,locks=none [ 0.384631] pci 0000:09:00.0: vgaarb: bridge control possible [ 0.384636] pci 0000:09:00.0: vgaarb: setting as boot device (VGA legacy resources not available) [ 0.384639] 
vgaarb: loaded [ 0.384872] SCSI subsystem initialized [ 0.384897] ACPI: bus type USB registered [ 0.384919] usbcore: registered new interface driver usbfs [ 0.384927] usbcore: registered new interface driver hub [ 0.384977] usbcore: registered new device driver usb [ 0.384998] pps_core: LinuxPPS API ver. 1 registered [ 0.385000] pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti [ 0.385008] PTP clock support registered [ 0.385196] EDAC MC: Ver: 3.0.0 [ 0.385363] Registered efivars operations [ 0.385504] ACPI: arm,spe-v1: must be homogeneous [ 0.385507] ACPI: SPE: Unable to register device [ 0.386720] NetLabel: Initializing [ 0.386724] NetLabel: domain hash size = 128 [ 0.386726] NetLabel: protocols = UNLABELED CIPSOv4 CALIPSO [ 0.386739] NetLabel: unlabeled traffic allowed by default [ 0.386865] sdei: SDEIv1.0 (0x0) detected in firmware. [ 0.389150] clocksource: Switched to clocksource arch_sys_counter [ 0.406982] VFS: Disk quotas dquot_6.6.0 [ 0.407015] VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) [ 0.407269] pnp: PnP ACPI init [ 0.407826] system 00:00: [mem 0xd0000000-0xdfffffff] could not be reserved [ 0.407837] system 00:00: Plug and Play ACPI device, IDs PNP0c02 (active) [ 0.408832] pnp 00:01: Plug and Play ACPI device, IDs PNP0501 (active) [ 0.409733] pnp: PnP ACPI: found 2 devices [ 0.411100] NET: Registered protocol family 2 [ 0.411408] IP idents hash table entries: 262144 (order: 9, 2097152 bytes, vmalloc) [ 0.413916] tcp_listen_portaddr_hash hash table entries: 65536 (order: 8, 1048576 bytes, vmalloc) [ 0.414291] TCP established hash table entries: 524288 (order: 10, 4194304 bytes, vmalloc) [ 0.414813] TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, vmalloc) [ 0.414981] TCP: Hash tables configured (established 524288 bind 65536) [ 0.415261] UDP hash table entries: 65536 (order: 9, 2097152 bytes, vmalloc) [ 0.415540] UDP-Lite hash table entries: 65536 (order: 9, 2097152 bytes, vmalloc) [ 0.415905] NET: 
Registered protocol family 1 [ 0.415940] pci 0000:7a:00.0: enabling device (0000 -> 0002) [ 0.415972] pci 0000:7a:02.0: enabling device (0000 -> 0002) [ 0.416032] pci 0000:ba:00.0: enabling device (0000 -> 0002) [ 0.416058] pci 0000:ba:02.0: enabling device (0000 -> 0002) [ 0.416086] PCI: CLS 32 bytes, default 64 [ 0.416150] Trying to unpack rootfs image as initramfs... [ 0.892293] Freeing initrd memory: 34308K [ 0.896683] hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 13 counters available [ 0.896698] kvm [1]: detected: Hisi CPU type 'Hisi1620' [ 0.896708] kvm [1]: KVM ncsnp enabled [ 0.896711] kvm [1]: KVM dvmbm disabled [ 0.896887] kvm [1]: IPA Size Limit: 48 bits [ 0.896925] kvm [1]: GICv4 support disabled [ 0.896927] kvm [1]: vgic-v2@9b020000 [ 0.896945] kvm [1]: GIC system register CPU interface enabled [ 0.897799] kvm [1]: vgic interrupt IRQ9 [ 0.898744] kvm [1]: VHE mode initialized successfully [ 0.898807] kvm [1]: Shadow device disabled [ 0.900895] Initialise system trusted keyrings [ 0.900909] Key type blacklist registered [ 0.901051] workingset: timestamp_bits=39 max_order=27 bucket_order=0 [ 0.902222] zbud: loaded [ 0.902665] integrity: Platform Keyring initialized [ 0.914805] NET: Registered protocol family 38 [ 0.914810] Key type asymmetric registered [ 0.914814] Asymmetric key parser 'x509' registered [ 0.914816] Asymmetric key parser 'pgp' registered [ 0.914830] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 247) [ 0.914908] io scheduler mq-deadline registered [ 0.914911] io scheduler kyber registered [ 0.914964] io scheduler bfq registered [ 0.917821] pcieport 0000:00:00.0: PME: Signaling with IRQ 27 [ 0.918006] pcieport 0000:00:00.0: AER: enabled with IRQ 28 [ 0.918040] pcieport 0000:00:00.0: pciehp: Slot #0 AttnBtn- PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock- NoCompl- IbPresDis- LLActRep- [ 0.918838] pcieport 0000:00:02.0: PME: Signaling with IRQ 29 [ 0.918923] pcieport 0000:00:02.0: AER: enabled with IRQ 
30 [ 0.918951] pcieport 0000:00:02.0: pciehp: Slot #2 AttnBtn- PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock- NoCompl- IbPresDis- LLActRep- [ 0.919668] pcieport 0000:00:04.0: PME: Signaling with IRQ 31 [ 0.919745] pcieport 0000:00:04.0: AER: enabled with IRQ 32 [ 0.919771] pcieport 0000:00:04.0: pciehp: Slot #4 AttnBtn- PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock- NoCompl- IbPresDis- LLActRep- [ 0.920603] pcieport 0000:00:06.0: PME: Signaling with IRQ 33 [ 0.920675] pcieport 0000:00:06.0: AER: enabled with IRQ 34 [ 0.920700] pcieport 0000:00:06.0: pciehp: Slot #6 AttnBtn- PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock- NoCompl- IbPresDis- LLActRep- [ 0.921531] pcieport 0000:00:08.0: PME: Signaling with IRQ 35 [ 0.921808] pcieport 0000:00:08.0: AER: enabled with IRQ 36 [ 0.922483] pcieport 0000:00:0c.0: PME: Signaling with IRQ 37 [ 0.923804] pcieport 0000:00:0c.0: AER: enabled with IRQ 38 [ 0.923828] pcieport 0000:00:0c.0: pciehp: Slot #12 AttnBtn- PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock- NoCompl- IbPresDis- LLActRep- [ 0.924545] pcieport 0000:00:0e.0: PME: Signaling with IRQ 39 [ 0.924605] pcieport 0000:00:0e.0: AER: enabled with IRQ 40 [ 0.924627] pcieport 0000:00:0e.0: pciehp: Slot #14 AttnBtn- PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock- NoCompl- IbPresDis- LLActRep- [ 0.925427] pcieport 0000:00:10.0: PME: Signaling with IRQ 41 [ 0.925492] pcieport 0000:00:10.0: AER: enabled with IRQ 42 [ 0.926268] pcieport 0000:00:11.0: PME: Signaling with IRQ 43 [ 0.926333] pcieport 0000:00:11.0: AER: enabled with IRQ 44 [ 0.927115] pcieport 0000:00:12.0: PME: Signaling with IRQ 45 [ 0.927184] pcieport 0000:00:12.0: AER: enabled with IRQ 46 [ 0.927234] ACPI: IORT: [Firmware Bug]: [map (____ptrval____)] conflicting mapping for input ID 0x7c00 [ 0.927239] ACPI: IORT: [Firmware Bug]: applying workaround. 
[ 0.928273] pcieport 0000:80:00.0: PME: Signaling with IRQ 47
[ 0.928381] pcieport 0000:80:00.0: AER: enabled with IRQ 48
[ 0.929123] pcieport 0000:80:08.0: PME: Signaling with IRQ 49
[ 0.929221] pcieport 0000:80:08.0: AER: enabled with IRQ 50
[ 0.929247] pcieport 0000:80:08.0: pciehp: Slot #28 AttnBtn- PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock- NoCompl- IbPresDis- LLActRep-
[ 0.930163] pcieport 0000:80:0a.0: PME: Signaling with IRQ 51
[ 0.930266] pcieport 0000:80:0a.0: AER: enabled with IRQ 52
[ 0.930294] pcieport 0000:80:0a.0: pciehp: Slot #30 AttnBtn- PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock- NoCompl- IbPresDis- LLActRep-
[ 0.931009] pcieport 0000:80:0c.0: PME: Signaling with IRQ 53
[ 0.931096] pcieport 0000:80:0c.0: AER: enabled with IRQ 54
[ 0.931116] pcieport 0000:80:0c.0: pciehp: Slot #32 AttnBtn- PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock- NoCompl- IbPresDis- LLActRep-
[ 0.931931] pcieport 0000:80:0e.0: PME: Signaling with IRQ 55
[ 0.932022] pcieport 0000:80:0e.0: AER: enabled with IRQ 56
[ 0.932054] pcieport 0000:80:0e.0: pciehp: Slot #34 AttnBtn- PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock- NoCompl- IbPresDis- LLActRep-
[ 0.932868] pcieport 0000:80:10.0: PME: Signaling with IRQ 57
[ 0.932950] pcieport 0000:80:10.0: AER: enabled with IRQ 58
[ 0.933012] ACPI: IORT: [Firmware Bug]: [map (____ptrval____)] conflicting mapping for input ID 0xbc00
[ 0.933022] ACPI: IORT: [Firmware Bug]: applying workaround.
[ 0.933209] shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
[ 0.934138] input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
[ 0.934165] ACPI: Power Button [PWRB]
[ 0.935514] [Firmware Bug]: APEI: Invalid physical address in GAR [0x0/64/0/4/0]
[ 0.935532] ACPI: [Firmware Bug]: requested region covers kernel memory @ 0x0000000044010000
[ 0.935542] ACPI: [Firmware Bug]: requested region covers kernel memory @ 0x0000000044010010
[ 0.935550] ACPI: [Firmware Bug]: requested region covers kernel memory @ 0x0000000044010020
[ 0.935558] ACPI: [Firmware Bug]: requested region covers kernel memory @ 0x0000000044010030
[ 0.935567] ACPI: [Firmware Bug]: requested region covers kernel memory @ 0x0000000044010040
[ 0.935576] ACPI: [Firmware Bug]: requested region covers kernel memory @ 0x0000000044010050
[ 0.935584] ACPI: [Firmware Bug]: requested region covers kernel memory @ 0x0000000044010060
[ 0.935592] ACPI: [Firmware Bug]: requested region covers kernel memory @ 0x0000000044010070
[ 0.935600] ACPI: [Firmware Bug]: requested region covers kernel memory @ 0x0000000044010080
[ 0.935608] ACPI: [Firmware Bug]: requested region covers kernel memory @ 0x0000000044010090
[ 0.935615] ACPI: [Firmware Bug]: requested region covers kernel memory @ 0x00000000440100a0
[ 0.935623] ACPI: [Firmware Bug]: requested region covers kernel memory @ 0x00000000440100b0
[ 0.935630] ACPI: [Firmware Bug]: requested region covers kernel memory @ 0x00000000440100c0
[ 0.935639] ACPI: [Firmware Bug]: requested region covers kernel memory @ 0x00000000440100d0
[ 0.935647] ACPI: [Firmware Bug]: requested region covers kernel memory @ 0x00000000440100e0
[ 0.935785] GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC.
[ 0.935833] ACPI GTDT: found 1 SBSA generic Watchdog(s).
[ 0.936069] Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
[ 0.956533] 00:01: ttyS0 at MMIO 0x3f00002f8 (irq = 23, base_baud = 115200) is a 16550A
[ 11.055450] printk: console [ttyS0] enabled
[ 11.070137] virdev: Register virtdev platform driver succeed.
[ 11.076557] rdac: device handler registered
[ 11.081470] hp_sw: device handler registered
[ 11.086415] emc: device handler registered
[ 11.091284] alua: device handler registered
[ 11.099079] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
[ 11.106278] ehci-pci: EHCI PCI platform driver
[ 11.111520] ehci-pci 0000:7a:01.0: EHCI Host Controller
[ 11.117474] ehci-pci 0000:7a:01.0: new USB bus registered, assigned bus number 1
[ 11.125535] ehci-pci 0000:7a:01.0: applying Synopsys HC workaround
[ 11.132407] ehci-pci 0000:7a:01.0: irq 60, io mem 0x20c101000
[ 11.153119] ehci-pci 0000:7a:01.0: USB 2.0 started, EHCI 1.00
[ 11.159580] usb usb1: New USB device found, idVendor=1d6b, idProduct=0002, bcdDevice= 5.10
[ 11.168501] usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[ 11.176383] usb usb1: Product: EHCI Host Controller
[ 11.181931] usb usb1: Manufacturer: Linux 5.10.0-188.0.0.101.oe2203sp3.aarch64 ehci_hcd
[ 11.190591] usb usb1: SerialNumber: 0000:7a:01.0
[ 11.195971] hub 1-0:1.0: USB hub found
[ 11.200406] hub 1-0:1.0: 2 ports detected
[ 11.205392] ehci-pci 0000:ba:01.0: EHCI Host Controller
[ 11.211385] ehci-pci 0000:ba:01.0: new USB bus registered, assigned bus number 2
[ 11.219449] ehci-pci 0000:ba:01.0: applying Synopsys HC workaround
[ 11.226334] ehci-pci 0000:ba:01.0: irq 61, io mem 0x20020c101000
[ 11.245127] ehci-pci 0000:ba:01.0: USB 2.0 started, EHCI 1.00
[ 11.251599] usb usb2: New USB device found, idVendor=1d6b, idProduct=0002, bcdDevice= 5.10
[ 11.260519] usb usb2: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[ 11.268402] usb usb2: Product: EHCI Host Controller
[ 11.273952] usb usb2: Manufacturer: Linux 5.10.0-188.0.0.101.oe2203sp3.aarch64 ehci_hcd
[ 11.282612] usb usb2: SerialNumber: 0000:ba:01.0
[ 11.287994] hub 2-0:1.0: USB hub found
[ 11.292430] hub 2-0:1.0: 2 ports detected
[ 11.297228] ehci-platform: EHCI generic platform driver
[ 11.303211] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
[ 11.310064] ohci-pci: OHCI PCI platform driver
[ 11.315270] ohci-pci 0000:7a:00.0: OHCI PCI host controller
[ 11.321570] ohci-pci 0000:7a:00.0: new USB bus registered, assigned bus number 3
[ 11.329654] ohci-pci 0000:7a:00.0: irq 62, io mem 0x20c100000
[ 11.397155] usb usb3: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.10
[ 11.406077] usb usb3: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[ 11.413959] usb usb3: Product: OHCI PCI host controller
[ 11.419853] usb usb3: Manufacturer: Linux 5.10.0-188.0.0.101.oe2203sp3.aarch64 ohci_hcd
[ 11.428513] usb usb3: SerialNumber: 0000:7a:00.0
[ 11.433992] hub 3-0:1.0: USB hub found
[ 11.438425] hub 3-0:1.0: 2 ports detected
[ 11.443331] ohci-pci 0000:ba:00.0: OHCI PCI host controller
[ 11.449632] ohci-pci 0000:ba:00.0: new USB bus registered, assigned bus number 4
[ 11.457727] ohci-pci 0000:ba:00.0: irq 63, io mem 0x20020c100000
[ 11.493121] usb 1-1: new high-speed USB device number 2 using ehci-pci
[ 11.525171] usb usb4: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.10
[ 11.534098] usb usb4: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[ 11.541980] usb usb4: Product: OHCI PCI host controller
[ 11.547875] usb usb4: Manufacturer: Linux 5.10.0-188.0.0.101.oe2203sp3.aarch64 ohci_hcd
[ 11.556535] usb usb4: SerialNumber: 0000:ba:00.0
[ 11.561939] hub 4-0:1.0: USB hub found
[ 11.566375] hub 4-0:1.0: 2 ports detected
[ 11.571213] uhci_hcd: USB Universal Host Controller Interface driver
[ 11.578350] xhci_hcd 0000:7a:02.0: xHCI Host Controller
[ 11.584287] xhci_hcd 0000:7a:02.0: new USB bus registered, assigned bus number 5
[ 11.592393] xhci_hcd 0000:7a:02.0: hcc params 0x0220f66d hci version 0x100 quirks 0x0000000000000010
[ 11.602296] xhci_hcd 0000:7a:02.0: xHCI Host Controller
[ 11.608222] xhci_hcd 0000:7a:02.0: new USB bus registered, assigned bus number 6
[ 11.616280] xhci_hcd 0000:7a:02.0: Host supports USB 3.0 SuperSpeed
[ 11.623241] usb usb5: New USB device found, idVendor=1d6b, idProduct=0002, bcdDevice= 5.10
[ 11.632163] usb usb5: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[ 11.640044] usb usb5: Product: xHCI Host Controller
[ 11.645592] usb usb5: Manufacturer: Linux 5.10.0-188.0.0.101.oe2203sp3.aarch64 xhci-hcd
[ 11.654252] usb usb5: SerialNumber: 0000:7a:02.0
[ 11.659638] hub 5-0:1.0: USB hub found
[ 11.664075] hub 5-0:1.0: 1 port detected
[ 11.668753] usb usb6: We don't know the algorithms for LPM for this host, disabling LPM.
[ 11.677524] usb usb6: New USB device found, idVendor=1d6b, idProduct=0003, bcdDevice= 5.10
[ 11.686444] usb usb6: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[ 11.694326] usb usb6: Product: xHCI Host Controller
[ 11.699875] usb usb6: Manufacturer: Linux 5.10.0-188.0.0.101.oe2203sp3.aarch64 xhci-hcd
[ 11.708536] usb usb6: SerialNumber: 0000:7a:02.0
[ 11.713898] hub 6-0:1.0: USB hub found
[ 11.718335] hub 6-0:1.0: 1 port detected
[ 11.723078] xhci_hcd 0000:ba:02.0: xHCI Host Controller
[ 11.729040] xhci_hcd 0000:ba:02.0: new USB bus registered, assigned bus number 7
[ 11.730335] usb 1-1: New USB device found, idVendor=0bda, idProduct=5411, bcdDevice= 1.01
[ 11.737154] xhci_hcd 0000:ba:02.0: hcc params 0x0220f66d hci version 0x100 quirks 0x0000000000000010
[ 11.745935] usb 1-1: New USB device strings: Mfr=1, Product=2, SerialNumber=0
[ 11.755855] xhci_hcd 0000:ba:02.0: xHCI Host Controller
[ 11.763524] usb 1-1: Product: 4-Port USB 2.1 Hub
[ 11.769461] xhci_hcd 0000:ba:02.0: new USB bus registered, assigned bus number 8
[ 11.774715] usb 1-1: Manufacturer: Generic
[ 11.774979] hub 1-1:1.0: USB hub found
[ 11.782781] xhci_hcd 0000:ba:02.0: Host supports USB 3.0 SuperSpeed
[ 11.788582] hub 1-1:1.0: 4 ports detected
[ 11.792012] usb usb7: New USB device found, idVendor=1d6b, idProduct=0002, bcdDevice= 5.10
[ 11.812532] usb usb7: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[ 11.820415] usb usb7: Product: xHCI Host Controller
[ 11.825966] usb usb7: Manufacturer: Linux 5.10.0-188.0.0.101.oe2203sp3.aarch64 xhci-hcd
[ 11.834628] usb usb7: SerialNumber: 0000:ba:02.0
[ 11.840015] hub 7-0:1.0: USB hub found
[ 11.844451] hub 7-0:1.0: 1 port detected
[ 11.849118] usb usb8: We don't know the algorithms for LPM for this host, disabling LPM.
[ 11.857890] usb usb8: New USB device found, idVendor=1d6b, idProduct=0003, bcdDevice= 5.10
[ 11.866810] usb usb8: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[ 11.874692] usb usb8: Product: xHCI Host Controller
[ 11.880242] usb usb8: Manufacturer: Linux 5.10.0-188.0.0.101.oe2203sp3.aarch64 xhci-hcd
[ 11.888902] usb usb8: SerialNumber: 0000:ba:02.0
[ 11.894276] hub 8-0:1.0: USB hub found
[ 11.898709] hub 8-0:1.0: 1 port detected
[ 11.903441] mousedev: PS/2 mouse device common for all mice
[ 11.926229] rtc-efi rtc-efi.0: registered as rtc0
[ 11.939802] rtc-efi rtc-efi.0: setting system clock to 2024-03-05T09:34:43 UTC (1709631283)
[ 11.945118] usb 5-1: new high-speed USB device number 2 using xhci_hcd
[ 11.949573] hid: raw HID events driver (C) Jiri Kosina
[ 11.961931] usbcore: registered new interface driver usbhid
[ 11.968172] usbhid: USB HID core driver
[ 11.973102] Initializing XFRM netlink socket
[ 11.978206] NET: Registered protocol family 10
[ 11.983869] Segment Routing with IPv6
[ 11.988237] NET: Registered protocol family 17
[ 11.993551] registered taskstats version 1
[ 11.999437] Loading compiled-in X.509 certificates
[ 12.048187] Loaded X.509 cert 'openEuler kernel signing key: 381049b4ab0317fa4a59c3945f7dd11f8aca835b'
[ 12.058147] Load PGP public keys
[ 12.062075] Loaded PGP key 'openeuler fb37bc6f'
[ 12.069623] Loaded PGP key 'private OBS b25e7f66'
[ 12.080833] cryptd: max_cpu_qlen set to 1000
[ 12.089120] usb 1-1.1: new full-speed USB device number 3 using ehci-pci
[ 12.113110] usb 5-1: New USB device found, idVendor=0bda, idProduct=5411, bcdDevice= 1.01
[ 12.121950] usb 5-1: New USB device strings: Mfr=1, Product=2, SerialNumber=0
[ 12.129748] usb 5-1: Product: 4-Port USB 2.1 Hub
[ 12.135040] usb 5-1: Manufacturer: Generic
[ 12.150974] Key type encrypted registered
[ 12.155819] integrity: Loading X.509 certificate: UEFI:db
[ 12.161910] integrity: Loaded X.509 cert 'Huawei Technologies Co., Ltd.: EulerOS Code Signing Certificate: 0f081cc1d2fedc52749211b644da32895618698f'
[ 12.175958] integrity: Loading X.509 certificate: UEFI:MokListRT (MOKvar table)
[ 12.184812] integrity: Loaded X.509 cert 'openEuler: CA: 1e1baa926835cdd94919d433c34401a2e5f53df9'
[ 12.194426] ima: No TPM chip found, activating TPM-bypass!
[ 12.199384] hub 5-1:1.0: USB hub found
[ 12.200584] ima: Allocated hash algorithm: sha256
[ 12.205706] hub 5-1:1.0: 4 ports detected
[ 12.210392] ima: No architecture policies found
[ 12.220289] evm: Initialising EVM extended attributes:
[ 12.226097] evm: security.selinux
[ 12.230089] evm: security.apparmor
[ 12.234168] evm: security.ima
[ 12.237815] evm: security.capability
[ 12.242069] evm: HMAC attrs: 0x1
[ 12.250261] SDEI NMI watchdog: SDEI Watchdog registered successfully
[ 12.258691] integrity: Unable to open file: /etc/keys/x509_ima.der (-2)
[ 12.258694] integrity: Unable to open file: /etc/keys/x509_evm.der (-2)
[ 12.263328] usb 6-1: new SuperSpeed Gen 1 USB device number 2 using xhci_hcd
[ 12.268059] Freeing unused kernel memory: 4608K
[ 12.287463] usb 1-1.1: New USB device found, idVendor=12d1, idProduct=0003, bcdDevice= 1.00
[ 12.296477] usb 1-1.1: New USB device strings: Mfr=1, Product=2, SerialNumber=0
[ 12.304449] usb 1-1.1: Product: Keyboard/Mouse KVM 1.1.0
[ 12.318975] usb 6-1: New USB device found, idVendor=0bda, idProduct=0411, bcdDevice= 1.01
[ 12.321174] Run /init as init process
[ 12.327811] usb 6-1: New USB device strings: Mfr=1, Product=2, SerialNumber=0
[ 12.332152] with arguments:
[ 12.339946] usb 6-1: Product: 4-Port USB 3.1 Hub
[ 12.339949] /init
[ 12.345236] usb 6-1: Manufacturer: Generic
[ 12.345467] input: Keyboard/Mouse KVM 1.1.0 as /devices/pci0000:7a/0000:7a:01.0/usb1/1-1/1-1.1/1-1.1:1.0/0003:12D1:0003.0001/input/input1
[ 12.350008] rhgb
[ 12.350009] with environment:
[ 12.350010] HOME=/
[ 12.350010] TERM=linux
[ 12.350013] BOOT_IMAGE=/vmlinuz-5.10.0-188.0.0.101.oe2203sp3.aarch64
[ 12.362993] crashkernel=1024M,high
[ 12.379410] systemd[1]: systemd v249-63.oe2203sp3 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA -SMACK +SECCOMP +GCRYPT +GNUTLS -OPENSSL +ACL +BLKID -CURL -ELFUTILS -FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB -ZSTD +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=legacy)
[ 12.393775] hub 6-1:1.0: USB hub found
[ 12.412226] systemd[1]: Detected architecture arm64.
[ 12.415948] hub 6-1:1.0: 4 ports detected
[ 12.420870] systemd[1]: Running in initial RAM disk.
[ 12.425691] hid-generic 0003:12D1:0003.0001: input,hidraw0: USB HID v1.10 Keyboard [Keyboard/Mouse KVM 1.1.0] on usb-0000:7a:01.0-1.1/input0
[ 12.451614] input: Keyboard/Mouse KVM 1.1.0 as /devices/pci0000:7a/0000:7a:01.0/usb1/1-1/1-1.1/1-1.1:1.1/0003:12D1:0003.0002/input/input2
[ 12.464669] hid-generic 0003:12D1:0003.0002: input,hidraw1: USB HID v1.10 Mouse [Keyboard/Mouse KVM 1.1.0] on usb-0000:7a:01.0-1.1/input1
[ 12.505455] systemd[1]: Hostname set to .
[ 12.575247] systemd[1]: Queued start job for default target Initrd Default Target.
[ 12.583666] random: systemd: uninitialized urandom read (16 bytes read)
[ 12.590969] systemd[1]: Reached target Local File Systems.
[ 12.597183] random: systemd: uninitialized urandom read (16 bytes read)
[ 12.604541] systemd[1]: Reached target Slice Units.
[ 12.610120] random: systemd: uninitialized urandom read (16 bytes read)
[ 12.617434] systemd[1]: Reached target Swaps.
[ 12.622488] systemd[1]: Reached target Timer Units.
[ 12.628147] systemd[1]: Listening on Journal Socket (/dev/log).
[ 12.634852] systemd[1]: Listening on Journal Socket.
[ 12.640604] systemd[1]: Listening on udev Control Socket.
[ 12.646757] systemd[1]: Listening on udev Kernel Socket.
[ 12.652762] systemd[1]: Reached target Socket Units.
[ 12.704848] systemd[1]: Starting Create List of Static Device Nodes...
[ 12.713120] systemd[1]: Started Hardware RNG Entropy Gatherer Daemon.
[ 12.720509] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
[ 12.734096] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
[ 12.744668] systemd[1]: Starting Journal Service...
[ 12.751434] systemd[1]: Starting Load Kernel Modules...
[ 12.757364] xpmem: loading out-of-tree module taints kernel.
[ 12.758492] systemd[1]: Starting Setup Virtual Console...
[ 12.763808] xpmem: module verification failed: signature and/or required key missing - tainting kernel
[ 12.770922] systemd[1]: Finished Create List of Static Device Nodes.
[ 12.787696] XPMEM kernel module v2.7.3 loaded
[ 12.787898] systemd[1]: Starting Create Static Device Nodes in /dev...
[ 12.800543] systemd[1]: Finished Load Kernel Modules.
[ 12.806873] systemd[1]: Finished Create Static Device Nodes in /dev.
[ 12.815167] systemd[1]: Starting Apply Kernel Variables...
[ 12.844467] systemd[1]: Finished Setup Virtual Console.
[ 12.850574] systemd[1]: Condition check resulted in dracut ask for additional cmdline parameters being skipped.
[ 12.862631] systemd[1]: Starting dracut cmdline hook...
[ 12.870007] systemd[1]: Started Journal Service.
[ 12.983738] device-mapper: uevent: version 1.0.3
[ 12.989140] device-mapper: ioctl: 4.43.0-ioctl (2020-10-01) initialised: dm-devel@redhat.com
[ 13.218553] edma_drv 0000:08:00.0: enabling device (0140 -> 0142)
[ 13.219427] Compat-mlnx-ofed backport release: a675be0
[ 13.231150] Backport based on mlnx_ofed/mlnx-ofa_kernel-4.0.git a675be0
[ 13.234477] megasas: 07.714.04.00-rc1
[ 13.238439] compat.git: mlnx_ofed/mlnx-ofa_kernel-4.0.git
[ 13.241379] libata version 3.00 loaded.
[ 13.243346] megaraid_sas 0000:05:00.0: BAR:0x0 BAR's base_addr(phys):0x0000080000100000 mapped virt_addr:0x(____ptrval____)
[ 13.248862] megaraid_sas 0000:05:00.0: FW now in Ready state
[ 13.251118] hns3: Hisilicon Ethernet Network Driver for Hip08 Family - version
[ 13.255201] megaraid_sas 0000:05:00.0: 63 bit DMA mask and 63 bit consistent mask
[ 13.256622] sbsa-gwdt sbsa-gwdt.0: Initialized with 10s timeout @ 100000000 Hz, action=0.
[ 13.263096] hns3: Copyright (c) 2017 Huawei Corporation.
[ 13.264025] nvme nvme0: pci function 0000:82:00.0
[ 13.264918] nvme nvme1: pci function 0000:83:00.0
[ 13.265181] nvme nvme2: pci function 0000:84:00.0
[ 13.265417] nvme nvme3: pci function 0000:85:00.0
[ 13.271332] megaraid_sas 0000:05:00.0: firmware supports msix : (128)
[ 13.275560] nvme nvme1: Shutdown timeout set to 8 seconds
[ 13.275582] nvme nvme2: Shutdown timeout set to 8 seconds
[ 13.275608] nvme nvme3: Shutdown timeout set to 8 seconds
[ 13.275636] nvme nvme0: Shutdown timeout set to 8 seconds
[ 13.286137] nvme nvme3: 64/0/0 default/read/poll queues
[ 13.286143] nvme nvme2: 64/0/0 default/read/poll queues
[ 13.286175] nvme nvme0: 64/0/0 default/read/poll queues
[ 13.297287] nvme nvme1: 64/0/0 default/read/poll queues
[ 13.330425] megaraid_sas 0000:05:00.0: requested/available msix 97/97
[ 13.369710] megaraid_sas 0000:05:00.0: current msix/online cpus : (97/96)
[ 13.369711] megaraid_sas 0000:05:00.0: RDPQ mode : (enabled)
[ 13.369714] megaraid_sas 0000:05:00.0: Current firmware supports maximum commands: 9197 LDIO threshold: 0
[ 13.766773] megaraid_sas 0000:05:00.0: Performance mode :Latency (latency index = 1)
[ 13.775182] megaraid_sas 0000:05:00.0: FW supports sync cache : Yes
[ 13.782118] megaraid_sas 0000:05:00.0: megasas_disable_intr_fusion is called outbound_intr_mask:0x40000009
[ 13.792543] ahci 0000:74:03.0: version 3.0
[ 13.792576] ahci 0000:74:03.0: controller does not support SXS, disabling CAP_SXS
[ 13.800784] [TTM] Zone kernel: Available graphics memory: 263387054 KiB
[ 13.802181] nvme2n1: p1 p2 p3 p4 p5 p6
[ 13.808151] [TTM] Zone dma32: Available graphics memory: 2097152 KiB
[ 13.808153] [TTM] Initializing pool allocator
[ 13.808162] [TTM] Initializing DMA pool allocator
[ 13.814123] nvme0n1: p1 p2 p3 p4 p5 p6
[ 13.819949] [drm] forcing VGA-1 connector on
[ 13.839792] ahci 0000:74:03.0: SSS flag set, parallel bus scan disabled
[ 13.847083] ahci 0000:74:03.0: AHCI 0001.0300 32 slots 2 ports 6 Gbps 0x3 impl SATA mode
[ 13.855833] ahci 0000:74:03.0: flags: 64bit ncq sntf stag pm led clo only pmp fbs slum part ccc ems boh
[ 13.858142] nvme3n1: p1 p2 p3 p4 p5 p6
[ 13.874110] nvme1n1: p1 p2 p3 p4 p5 p6
[ 13.888700] hisi_sas_v3_hw 0000:74:02.0: enabling device (0000 -> 0002)
[ 13.906117] hisi_sas_v3_hw 0000:74:02.0: 16 hw queues
[ 13.906726] hclge is initializing
[ 13.910713] mlx5_core 0000:81:00.0: firmware version: 16.31.1014
[ 13.910751] mlx5_core 0000:81:00.0: 126.016 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x16 link)
[ 13.911853] scsi host2: hisi_sas_v3_hw
[ 13.915976] hns3 0000:7d:00.0: The firmware version is 1.9.40.6
[ 13.921494] random: systemd: uninitialized urandom read (16 bytes read)
[ 13.921515] random: systemd: uninitialized urandom read (16 bytes read)
[ 13.921526] random: systemd: uninitialized urandom read (16 bytes read)
[ 13.923112] scsi host1: ahci
[ 13.973173] [drm] Initialized hibmc 1.0.0 20160828 for 0000:09:00.0 on minor 0
[ 14.033466] hns3 0000:7d:00.0: hclge driver initialization finished.
[ 14.045681] hns3 0000:7d:00.1: The firmware version is 1.9.40.6 [ 14.138172] mlx5_core 0000:81:00.0: Port module event: module 0, Cable plugged [ 14.144855] hns3 0000:7d:00.1: hclge driver initialization finished. [ 14.146265] mlx5_core 0000:81:00.0: mlx5_pcie_event:304:(pid 711): PCIe slot advertised sufficient power (27W). [ 14.157888] hns3 0000:7d:00.2: The firmware version is 1.9.40.6 [ 14.188303] mlx5_core 0000:81:00.1: firmware version: 16.31.1014 [ 14.195034] mlx5_core 0000:81:00.1: 126.016 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x16 link) [ 14.265428] hns3 0000:7d:00.2: hclge driver initialization finished. [ 14.277294] hns3 0000:7d:00.3: The firmware version is 1.9.40.6 [ 14.420870] hns3 0000:7d:00.3: hclge driver initialization finished. [ 14.425676] hns3 0000:bd:00.0: The firmware version is 1.9.40.6 [ 14.430032] Console: switching to colour frame buffer device 80x30 [ 14.444361] mlx5_core 0000:81:00.1: Port module event: module 1, Cable plugged [ 14.444476] mlx5_core 0000:81:00.1: mlx5_pcie_event:304:(pid 7): PCIe slot advertised sufficient power (27W). 
[ 14.446115] scsi host3: ahci [ 14.446181] ata1: SATA max UDMA/133 abar m4096@0xa2010000 port 0xa2010100 irq 424 [ 14.446182] ata2: SATA max UDMA/133 abar m4096@0xa2010000 port 0xa2010180 irq 425 [ 14.446358] ahci 0000:b4:03.0: controller does not support SXS, disabling CAP_SXS [ 14.446445] ahci 0000:b4:03.0: SSS flag set, parallel bus scan disabled [ 14.446460] ahci 0000:b4:03.0: AHCI 0001.0300 32 slots 2 ports 6 Gbps 0x3 impl SATA mode [ 14.446462] ahci 0000:b4:03.0: flags: 64bit ncq sntf stag pm led clo only pmp fbs slum part ccc ems boh [ 14.446887] scsi host4: ahci [ 14.447157] scsi host5: ahci [ 14.447217] ata3: SATA max UDMA/133 abar m4096@0xa3010000 port 0xa3010100 irq 1108 [ 14.454240] hibmc-drm 0000:09:00.0: [drm] fb0: hibmcdrmfb frame buffer device [ 14.460887] ata4: SATA max UDMA/133 abar m4096@0xa3010000 port 0xa3010180 irq 1109 [ 14.500658] hns3 0000:bd:00.0: hclge driver initialization finished. [ 14.594584] hns3 0000:bd:00.1: The firmware version is 1.9.40.6 [ 14.688206] hns3 0000:bd:00.1: hclge driver initialization finished. [ 14.700314] hns3 0000:bd:00.2: The firmware version is 1.9.40.6 [ 14.763225] ata1: SATA link down (SStatus 0 SControl 300) [ 14.776789] hns3 0000:bd:00.2: hclge driver initialization finished. [ 14.788703] hns3 0000:bd:00.3: The firmware version is 1.9.40.6 [ 14.823253] ata3: SATA link down (SStatus 0 SControl 300) [ 14.880311] hns3 0000:bd:00.3: hclge driver initialization finished. 
[ 14.973117] megaraid_sas 0000:05:00.0: FW provided supportMaxExtLDs: 1 max_lds: 240 [ 14.982479] megaraid_sas 0000:05:00.0: controller type : MR(2048MB) [ 14.989830] megaraid_sas 0000:05:00.0: Online Controller Reset(OCR) : Enabled [ 14.998425] megaraid_sas 0000:05:00.0: Secure JBOD support : Yes [ 15.005507] megaraid_sas 0000:05:00.0: NVMe passthru support : Yes [ 15.012757] megaraid_sas 0000:05:00.0: FW provided TM TaskAbort/Reset timeout : 6 secs/60 secs [ 15.022788] megaraid_sas 0000:05:00.0: JBOD sequence map support : Yes [ 15.030742] megaraid_sas 0000:05:00.0: PCI Lane Margining support : No [ 15.083225] ata2: SATA link down (SStatus 0 SControl 300) [ 15.113251] megaraid_sas 0000:05:00.0: NVME page size : (4096) [ 15.120998] megaraid_sas 0000:05:00.0: megasas_enable_intr_fusion is called outbound_intr_mask:0x40000000 [ 15.132017] megaraid_sas 0000:05:00.0: INIT adapter done [ 15.165224] megaraid_sas 0000:05:00.0: pci id : (0x1000)/(0x0016)/(0x19e5)/(0xda15) [ 15.174399] megaraid_sas 0000:05:00.0: unevenspan support : no [ 15.181295] megaraid_sas 0000:05:00.0: firmware crash dump : no [ 15.188273] megaraid_sas 0000:05:00.0: JBOD sequence map : enabled [ 15.237127] megaraid_sas 0000:05:00.0: Max firmware commands: 8172 shared with nr_hw_queues = 96 [ 15.247325] scsi host0: Avago SAS based MegaRAID driver [ 15.403268] ata4: SATA link down (SStatus 0 SControl 300) [ 15.454904] scsi 0:3:111:0: Direct-Access AVAGO HW-SAS3508 5.06 PQ: 0 ANSI: 5 [ 15.479909] hns3 0000:7d:00.2 enp125s0f2: renamed from eth2 [ 15.489833] sd 0:3:111:0: [sda] 1560545280 512-byte logical blocks: (799 GB/744 GiB) [ 15.499022] sd 0:3:111:0: [sda] 4096-byte physical blocks [ 15.505614] sd 0:3:111:0: [sda] Write Protect is off [ 15.511648] sd 0:3:111:0: [sda] Mode Sense: 1f 00 10 08 [ 15.511710] sd 0:3:111:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA [ 15.524208] sd 0:3:111:0: [sda] Optimal transfer size 262144 bytes [ 15.537523] hns3 0000:7d:00.1 enp125s0f1: renamed from eth1 [ 15.575860] sda: sda1 sda2 sda3 [ 15.580515] sd 0:3:111:0: [sda] Attached SCSI disk [ 15.601510] hns3 0000:bd:00.1 enp189s0f1: renamed from eth5 [ 15.634613] random: systemd: uninitialized urandom read (16 bytes read) [ 15.643986] random: systemd: uninitialized urandom read (16 bytes read) [ 15.649937] hns3 0000:bd:00.3 enp189s0f3: renamed from eth7 [ 15.652126] random: systemd: uninitialized urandom read (16 bytes read) [ 15.705304] hns3 0000:7d:00.0 enp125s0f0: renamed from eth0 [ 15.769375] hns3 0000:bd:00.2 enp189s0f2: renamed from eth6 [ 15.829444] hns3 0000:7d:00.3 enp125s0f3: renamed from eth3 [ 15.865300] hns3 0000:bd:00.0 enp189s0f0: renamed from eth4 [ 15.973187] hisi_sas_v3_hw 0000:74:02.0: neither _PS0 nor _PR0 is defined [ 16.002147] hisi_sas_v3_hw 0000:74:04.0: enabling device (0000 -> 0002) [ 16.033514] hisi_sas_v3_hw 0000:74:04.0: 16 hw queues [ 16.039677] scsi host6: hisi_sas_v3_hw [ 16.159802] random: crng init done [ 16.164298] random: 253 urandom warning(s) missed due to ratelimiting [ 17.677175] hisi_sas_v3_hw 0000:74:04.0: neither _PS0 nor _PR0 is defined [ 17.698031] hisi_sas_v3_hw 0000:b4:02.0: enabling device (0000 -> 0002) [ 17.719630] hisi_sas_v3_hw 0000:b4:02.0: 16 hw queues [ 17.725821] scsi host7: hisi_sas_v3_hw [ 18.973182] hisi_sas_v3_hw 0000:b4:02.0: neither _PS0 nor _PR0 is defined [ 18.991018] hisi_sas_v3_hw 0000:b4:04.0: enabling device (0000 -> 0002) [ 19.012316] hisi_sas_v3_hw 0000:b4:04.0: 16 hw queues [ 19.018475] scsi host8: hisi_sas_v3_hw [ 20.265177] hisi_sas_v3_hw 0000:b4:04.0: neither _PS0 nor _PR0 is defined [ 21.407092] device-mapper: ioctl: lvm[2052]: dm-0 (openeuler-root) is created successfully [ 21.510146] device-mapper: ioctl: lvm[2059]: dm-1 (openeuler-swap) is created successfully [ 21.711096] EXT4-fs (dm-0): mounted filesystem with ordered data mode. Opts: (null) [ 22.193909] systemd-journald[862]: Received SIGTERM from PID 1 (systemd).
[ 22.262036] SELinux: Runtime disable is deprecated, use selinux=0 on the kernel cmdline. [ 22.271823] SELinux: Disabled at runtime. [ 22.361137] audit: type=1404 audit(1709631293.920:2): enforcing=0 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=0 old-enabled=1 lsm=selinux res=1 [ 22.383245] systemd[1]: systemd v249-63.oe2203sp3 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA -SMACK +SECCOMP +GCRYPT +GNUTLS -OPENSSL +ACL +BLKID -CURL -ELFUTILS -FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB -ZSTD +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=legacy) [ 22.417890] systemd[1]: Detected architecture arm64. [ 22.457744] systemd-rc-local-generator[2192]: /etc/rc.d/rc.local is not marked executable, skipping. [ 22.460749] systemd-sysv-generator[2195]: SysV service '/etc/rc.d/init.d/openresty' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. [ 22.495214] systemd-sysv-generator[2195]: SysV service '/etc/rc.d/init.d/mst' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. [ 22.525362] systemd-sysv-generator[2195]: SysV service '/etc/rc.d/init.d/lustre' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. [ 22.552658] systemd-sysv-generator[2195]: SysV service '/etc/rc.d/init.d/lsvcgss' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
[ 22.635830] systemd[1]: /usr/lib/systemd/system/dbus.socket:5: ListenStream= references a path below legacy directory /var/run/, updating /var/run/dbus/system_bus_socket \xe2\x86\x92 /run/dbus/system_bus_socket; please update the unit file accordingly. [ 22.692814] systemd[1]: /usr/lib/systemd/system/libstoragemgmt.service:7: Standard output type syslog is obsolete, automatically updating to journal. Please update your unit file, and consider removing the setting altogether. [ 22.864482] systemd[1]: initrd-switch-root.service: Deactivated successfully. [ 22.883010] systemd[1]: Stopped Switch Root. [ 22.893654] systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. [ 22.894374] systemd[1]: Created slice Slice /system/getty. [ 22.913101] systemd[1]: Created slice Slice /system/modprobe. [ 22.922140] systemd[1]: Created slice Slice /system/serial-getty. [ 22.931638] systemd[1]: Created slice Slice /system/sshd-keygen. [ 22.941082] systemd[1]: Created slice Slice /system/systemd-fsck. [ 22.950477] systemd[1]: Created slice User and Session Slice. [ 22.959246] systemd[1]: Condition check resulted in Dispatch Password Requests to Console Directory Watch being skipped. [ 22.959314] systemd[1]: Started Forward Password Requests to Wall Directory Watch. [ 22.983019] systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point. [ 22.994867] systemd[1]: Reached target Local Encrypted Volumes. [ 23.005203] systemd[1]: Stopped target Switch Root. [ 23.012505] systemd[1]: Stopped target Initrd File Systems. [ 23.020134] systemd[1]: Stopped target Initrd Root File System. [ 23.028108] systemd[1]: Reached target Path Units. [ 23.034945] systemd[1]: Reached target Slice Units. [ 23.041877] systemd[1]: Reached target Local Verity Integrity Protected Volumes. [ 23.051496] systemd[1]: Listening on Device-mapper event daemon FIFOs. [ 23.060283] systemd[1]: Listening on LVM2 poll daemon socket. 
[ 23.068607] systemd[1]: Listening on RPCbind Server Activation Socket. [ 23.077094] systemd[1]: Reached target RPC Port Mapper. [ 23.085573] systemd[1]: Listening on Process Core Dump Socket. [ 23.093674] systemd[1]: Listening on initctl Compatibility Named Pipe. [ 23.102596] systemd[1]: Listening on udev Control Socket. [ 23.109831] systemd[1]: Listening on udev Kernel Socket. [ 23.118683] systemd[1]: Activating swap /dev/mapper/openeuler-swap... [ 23.126984] VFS: Open a write opened block device exclusively dm-1. current [2201 swapon]. parent [1 systemd] [ 23.129024] systemd[1]: Mounting Huge Pages File System... [ 23.139475] Adding 4194300k swap on /dev/mapper/openeuler-swap. Priority:-2 extents:1 across:4194300k FS [ 23.160127] systemd[1]: Mounting POSIX Message Queue File System... [ 23.169758] systemd[1]: Mounting Kernel Debug File System... [ 23.178795] systemd[1]: Mounting Kernel Trace File System... [ 23.186304] systemd[1]: Condition check resulted in Kernel Module supporting RPCSEC_GSS being skipped. [ 23.189609] systemd[1]: Starting Create List of Static Device Nodes... [ 23.212302] systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling... [ 23.227245] systemd[1]: Starting Load Kernel Module configfs... [ 23.243338] systemd[1]: Starting Load Kernel Module drm... [ 23.252901] systemd[1]: Starting Load Kernel Module fuse... [ 23.261103] systemd[1]: plymouth-switch-root.service: Deactivated successfully. [ 23.261537] systemd[1]: Stopped Plymouth switch root service. [ 23.280328] systemd[1]: Condition check resulted in Set Up Additional Binary Formats being skipped. [ 23.290868] fuse: init (API version 7.33) [ 23.292019] systemd[1]: Stopped Journal Service. [ 23.307164] systemd[1]: Starting Journal Service... [ 23.317176] systemd[1]: Starting Load Kernel Modules... [ 23.326463] systemd[1]: Starting Remount Root and Kernel File Systems... [ 23.337997] systemd[1]: Starting Coldplug All udev Devices... 
[ 23.351650] systemd[1]: Activated swap /dev/mapper/openeuler-swap. [ 23.360791] systemd[1]: Started Journal Service. [ 23.465868] EXT4-fs (dm-0): re-mounted. Opts: (null) [ 23.492637] systemd-journald[2226]: Received client request to flush runtime journal. [ 23.711883] IPMI message handler: version 39.2 [ 23.725771] ipmi device interface [ 23.739305] ipmi_si: IPMI System Interface driver [ 23.746392] ipmi_si IPI0001:00: ipmi_platform: probing via ACPI [ 23.753412] ipmi_si IPI0001:00: ipmi_platform: [mem 0x3f00000e4-0x3f00000e7] regsize 1 spacing 1 irq 23 [ 23.785496] ipmi_si: Adding ACPI-specified bt state machine [ 23.787001] sd 0:3:111:0: Attached scsi generic sg0 type 0 [ 23.798597] ipmi_si: Trying ACPI-specified bt state machine at mem address 0x3f00000e4, slave address 0x0, irq 23 [ 23.821300] ipmi_si IPI0001:00: bt cap response too short: 3 [ 23.829228] ipmi_si IPI0001:00: using default values [ 23.836250] ipmi_si IPI0001:00: req2rsp=5 secs retries=2 [ 23.901155] ipmi_si IPI0001:00: The BMC does not support setting the recv irq bit, compensating, but the BMC needs to be fixed. [ 23.925457] ipmi_si IPI0001:00: Using irq 23 [ 23.973475] ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x0007db, prod_id: 0x0001, dev_id: 0x01) [ 24.124264] ipmi_si IPI0001:00: IPMI bt interface initialized [ 24.136101] ipmi_ssif: IPMI SSIF Interface driver [ 24.148673] EXT4-fs (sda2): mounted filesystem with ordered data mode. Opts: (null) [ 24.263621] RPC: Registered named UNIX socket transport module. [ 24.270600] RPC: Registered udp transport module. [ 24.270601] RPC: Registered tcp transport module. [ 24.270605] RPC: Registered tcp NFSv4.1 backchannel transport module. 
[ 24.589021] hns3 0000:7d:00.0 enp125s0f0: link up [ 24.597660] IPv6: ADDRCONF(NETDEV_CHANGE): enp125s0f0: link becomes ready [ 24.613627] hns3 0000:7d:00.2 enp125s0f2: link up [ 24.628726] hns3 0000:bd:00.0 enp189s0f0: link up [ 24.642177] hns3 0000:bd:00.2 enp189s0f2: link up [ 24.649143] hns3 0000:bd:00.3 enp189s0f3: link up [ 24.688861] mlx5_core 0000:81:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) [ 24.699424] mlx5_core 0000:81:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) [ 24.711705] hns3 0000:7d:00.0 enp125s0f0: link down [ 24.733398] hns3 0000:7d:00.0 enp125s0f0: already using mac address 60:**:**:**:cb:18 [ 24.749340] bond4: (slave enp125s0f0): Enslaving as a backup interface with a down link [ 24.769386] hns3 0000:7d:00.2 enp125s0f2: link down [ 24.794142] mlx5_core 0000:81:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) [ 24.800328] bond4: (slave enp125s0f2): Enslaving as a backup interface with a down link [ 24.804666] mlx5_core 0000:81:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) [ 24.817806] mlx5_core 0000:81:00.0 ibp129s0f0: renamed from ib0 [ 24.901713] mlx5_core 0000:81:00.1 ibp129s0f1: renamed from ib0 [ 25.083127] VFIO - User Level meta-driver version: 0.3 [ 25.750223] IPv6: ADDRCONF(NETDEV_CHANGE): enp189s0f0: link becomes ready [ 25.759374] IPv6: ADDRCONF(NETDEV_CHANGE): enp189s0f2: link becomes ready [ 25.768152] IPv6: ADDRCONF(NETDEV_CHANGE): enp189s0f3: link becomes ready [ 26.110330] hns3 0000:7d:00.0 enp125s0f0: link up [ 26.181814] hns3 0000:7d:00.2 enp125s0f2: link up [ 26.617091] bond4: Warning: No 802.3ad response from the link partner for any adapters in the bond [ 26.998558] IPv6: ADDRCONF(NETDEV_CHANGE): bond4: link becomes ready [ 27.007602] IPv6: ADDRCONF(NETDEV_CHANGE): ibp129s0f1: link becomes ready [ 27.017807] bond4: (slave enp125s0f0): link status definitely up, 25000 Mbps full duplex [ 27.027270] bond4: active interface up! 
[ 27.032853] bond4: (slave enp125s0f2): link status definitely up, 25000 Mbps full duplex [ 27.518107] IPv6: ADDRCONF(NETDEV_CHANGE): ibp129s0f0: link becomes ready [ 3366.331507] LNet: HW NUMA nodes: 4, HW CPU cores: 96, npartitions: 4 [ 3366.340295] alg: No test for adler32 (adler32-zlib) [ 3367.088792] Key type ._llcrypt registered [ 3367.093517] Key type .llcrypt registered [ 3367.111812] Lustre: DEBUG MARKER: server3: executing set_hostid [ 3371.020777] Lustre: DEBUG MARKER: server3: executing load_modules_local [ 3371.472323] lnet: unknown parameter '#' ignored [ 3371.477581] lnet: unknown parameter 'o2ib1(ibp129s0f1)' ignored [ 3371.484207] lnet: unknown parameter '#' ignored [ 3371.489441] lnet: unknown parameter 'o2ib1(ibp129s0f1)' ignored [ 3371.566815] Lustre: Lustre: Build Version: 2.15.4 [ 3371.627953] LNet: Using FastReg for registration [ 3371.830214] LNet: Added LNI 192.168.0.83@o2ib [8/256/0/180] [ 3373.384585] Key type lgssc registered [ 3373.541627] Lustre: Echo OBD driver; http://www.lustre.org/ [ 3656.006991] LDISKFS-fs (nvme0n1): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 3969.131529] Lustre: DEBUG MARKER: server3: executing load_modules_local [ 3971.885909] device-mapper: ioctl: dmsetup[11175]: dm-2 (mds1_flakey) is created successfully [ 3973.946901] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 3974.578455] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 3975.673469] Lustre: Setting parameter lustre-MDT0000.mdt.identity_upcall in log lustre-MDT0000 [ 3975.709202] Lustre: ctl-lustre-MDT0000: No data found on store. Initialize space: rc = -61 [ 3975.738688] Lustre: lustre-MDT0000: new disk, initializing [ 3975.761583] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 3975.776302] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000200000400-0x0000000240000400]:0:mdt [ 3975.797725] VFS: Open an exclusive opened block device for write dm-2. current [11586 tune2fs]. parent [11585 sh] [ 3976.781472] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192 [ 3982.994850] Lustre: Setting parameter lustre-MDT0001.mdt.identity_upcall in log lustre-MDT0001 [ 3984.056692] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000240000400-0x0000000280000400]:1:mdt [ 3996.897111] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000280000400-0x00000002c0000400]:0:ost [ 4002.013585] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x00000002c0000400-0x0000000300000400]:1:ost [ 4005.761319] Lustre: DEBUG MARKER: Using TIMEOUT=20 [ 4012.443720] Lustre: Modifying parameter general.lod.*.mdt_hash in log params [ 4031.858382] Lustre: lustre-MDT0000: Not available for connect from 192.168.0.85@o2ib (stopping) [ 4032.732652] Lustre: lustre-MDT0000: Not available for connect from 192.168.0.84@o2ib (stopping) [ 4032.743396] Lustre: Skipped 1 previous similar message [ 4036.959188] Lustre: lustre-MDT0000: Not available for connect from 192.168.0.85@o2ib (stopping) [ 4036.969960] Lustre: Skipped 1 previous similar message [ 4036.974076] LustreError: 12997:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items [ 4037.852446] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.84@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
[ 4037.872438] LustreError: Skipped 1 previous similar message [ 4038.715744] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.86@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server. [ 4042.079152] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.85@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server. [ 4043.117425] Lustre: 12997:0:(client.c:2289:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1709635308/real 1709635308] req@000000003ba2b363 x1792681855433408/t0(0) o251->MGC192.168.0.83@o2ib@0@lo:26/25 lens 224/224 e 0 to 1 dl 1709635314 ref 2 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'umount.0' [ 4043.190767] Lustre: server umount lustre-MDT0000 complete [ 4044.948508] device-mapper: ioctl: dmsetup[13378]: dm-2 (mds1_flakey) is removed successfully [ 4075.801661] Lustre: DEBUG MARKER: server3: executing unload_modules_local [ 4076.658074] Key type lgssc unregistered [ 4076.836992] LNet: 14009:0:(lib-ptl.c:958:lnet_clear_lazy_portal()) Active lazy portal 0 on exit [ 4078.028847] LNet: Removed LNI 192.168.0.83@o2ib [ 4078.352874] Key type .llcrypt unregistered [ 4078.366970] Key type ._llcrypt unregistered [ 4086.336842] LNet: HW NUMA nodes: 4, HW CPU cores: 96, npartitions: 4 [ 4086.346758] alg: No test for adler32 (adler32-zlib) [ 4087.096767] Key type ._llcrypt registered [ 4087.102180] Key type .llcrypt registered [ 4087.126073] Lustre: DEBUG MARKER: server3: executing set_hostid [ 4091.058977] Lustre: DEBUG MARKER: server3: executing load_modules_local [ 4091.481558] lnet: unknown parameter '#' ignored [ 4091.487322] lnet: unknown parameter 'o2ib1(ibp129s0f1)' ignored [ 4091.494342] lnet: unknown parameter '#' ignored [ 4091.499959] lnet: unknown parameter 'o2ib1(ibp129s0f1)' ignored [ 4091.573954] Lustre: Lustre: Build Version: 2.15.4 [ 4091.631453] LNet: Using FastReg for registration [ 4091.831405] LNet: Added LNI 192.168.0.83@o2ib [8/256/0/180] [ 4093.384556] Key type lgssc registered [ 4093.540106] Lustre: Echo OBD driver; http://www.lustre.org/ [ 4380.689196] LDISKFS-fs (nvme0n1): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 4697.374610] Lustre: DEBUG MARKER: server3: executing load_modules_local [ 4700.152678] device-mapper: ioctl: dmsetup[17599]: dm-2 (mds1_flakey) is created successfully [ 4702.246772] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 4702.882614] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 4703.967672] Lustre: Setting parameter lustre-MDT0000.mdt.identity_upcall in log lustre-MDT0000 [ 4703.994214] Lustre: ctl-lustre-MDT0000: No data found on store. Initialize space: rc = -61 [ 4704.023559] Lustre: lustre-MDT0000: new disk, initializing [ 4704.047070] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 4704.058587] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000200000400-0x0000000240000400]:0:mdt [ 4704.079373] VFS: Open an exclusive opened block device for write dm-2. current [18011 tune2fs]. parent [18010 sh] [ 4705.072064] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192 [ 4711.301177] Lustre: Setting parameter lustre-MDT0001.mdt.identity_upcall in log lustre-MDT0001 [ 4712.350336] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000240000400-0x0000000280000400]:1:mdt [ 4719.893253] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000280000400-0x00000002c0000400]:0:ost [ 4730.065568] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x00000002c0000400-0x0000000300000400]:1:ost [ 4733.792976] Lustre: DEBUG MARKER: Using TIMEOUT=20 [ 4736.450966] Lustre: Modifying parameter general.lod.*.mdt_hash in log params [ 4755.046265] Lustre: lustre-MDT0000: Not available for connect from 192.168.0.85@o2ib (stopping) [ 4755.664662] Lustre: lustre-MDT0000: Not available for connect from 192.168.0.84@o2ib (stopping) [ 4755.675402] Lustre: Skipped 1 previous similar message [ 4760.147153] Lustre: lustre-MDT0000: Not available for connect from 192.168.0.85@o2ib (stopping) [ 4760.157807] Lustre: Skipped 1 previous similar message [ 4760.929960] LustreError: 19417:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items [ 4761.647690] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.86@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server. [ 4765.267202] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.85@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server. [ 4766.767601] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.86@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
[ 4766.787992] LustreError: Skipped 2 previous similar messages [ 4767.073311] Lustre: 19417:0:(client.c:2289:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1709636032/real 1709636032] req@000000001195b4b5 x1792682610407552/t0(0) o251->MGC192.168.0.83@o2ib@0@lo:26/25 lens 224/224 e 0 to 1 dl 1709636038 ref 2 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'umount.0' [ 4767.146653] Lustre: server umount lustre-MDT0000 complete [ 4768.950375] device-mapper: ioctl: dmsetup[19799]: dm-2 (mds1_flakey) is removed successfully [ 4799.753094] Lustre: DEBUG MARKER: server3: executing unload_modules_local [ 4800.589621] Key type lgssc unregistered [ 4800.764847] LNet: 20429:0:(lib-ptl.c:958:lnet_clear_lazy_portal()) Active lazy portal 0 on exit [ 4801.952831] LNet: Removed LNI 192.168.0.83@o2ib [ 4802.240939] Key type .llcrypt unregistered [ 4802.255173] Key type ._llcrypt unregistered [ 4810.228901] LNet: HW NUMA nodes: 4, HW CPU cores: 96, npartitions: 4 [ 4810.238898] alg: No test for adler32 (adler32-zlib) [ 4810.988669] Key type ._llcrypt registered [ 4810.994407] Key type .llcrypt registered [ 4811.017288] Lustre: DEBUG MARKER: server3: executing set_hostid [ 4814.947219] Lustre: DEBUG MARKER: server3: executing load_modules_local [ 4815.383424] lnet: unknown parameter '#' ignored [ 4815.389204] lnet: unknown parameter 'o2ib1(ibp129s0f1)' ignored [ 4815.396256] lnet: unknown parameter '#' ignored [ 4815.401899] lnet: unknown parameter 'o2ib1(ibp129s0f1)' ignored [ 4815.476470] Lustre: Lustre: Build Version: 2.15.4 [ 4815.538404] LNet: Using FastReg for registration [ 4815.738501] LNet: Added LNI 192.168.0.83@o2ib [8/256/0/180] [ 4817.284461] Key type lgssc registered [ 4817.441833] Lustre: Echo OBD driver; http://www.lustre.org/ [ 5104.771632] LDISKFS-fs (nvme0n1): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 5421.180496] Lustre: DEBUG MARKER: server3: executing load_modules_local [ 5423.935293] device-mapper: ioctl: dmsetup[24035]: dm-2 (mds1_flakey) is created successfully [ 5425.977854] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 5426.615966] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 5427.702578] Lustre: Setting parameter lustre-MDT0000.mdt.identity_upcall in log lustre-MDT0000 [ 5427.730118] Lustre: ctl-lustre-MDT0000: No data found on store. Initialize space: rc = -61 [ 5427.758991] Lustre: lustre-MDT0000: new disk, initializing [ 5427.781482] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 5427.793199] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000200000400-0x0000000240000400]:0:mdt [ 5427.813855] VFS: Open an exclusive opened block device for write dm-2. current [24442 tune2fs]. parent [24441 sh] [ 5428.811207] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192 [ 5435.005698] Lustre: Setting parameter lustre-MDT0001.mdt.identity_upcall in log lustre-MDT0001 [ 5436.057509] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000240000400-0x0000000280000400]:1:mdt [ 5444.553244] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000280000400-0x00000002c0000400]:0:ost [ 5454.535935] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x00000002c0000400-0x0000000300000400]:1:ost [ 5457.435379] Lustre: DEBUG MARKER: Using TIMEOUT=20 [ 5465.147061] Lustre: Modifying parameter general.lod.*.mdt_hash in log params [ 5483.866511] Lustre: lustre-MDT0000: Not available for connect from 192.168.0.85@o2ib (stopping) [ 5485.252604] Lustre: lustre-MDT0000: Not available for connect from 192.168.0.84@o2ib (stopping) [ 5485.263324] Lustre: Skipped 1 previous similar message [ 5488.967106] Lustre: lustre-MDT0000: Not available for connect from 192.168.0.85@o2ib (stopping) [ 5488.977820] Lustre: Skipped 1 previous similar message [ 5489.749887] LustreError: 25853:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items [ 5490.372392] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.84@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server. [ 5490.372394] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.84@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server. [ 5494.087115] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.85@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server. [ 5494.107257] LustreError: Skipped 1 previous similar message [ 5495.492322] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.84@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server. [ 5495.492323] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.84@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
[ 5495.893149] Lustre: 25853:0:(client.c:2289:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1709636761/real 1709636761] req@00000000a63b353d x1792683369577088/t0(0) o251->MGC192.168.0.83@o2ib@0@lo:26/25 lens 224/224 e 0 to 1 dl 1709636767 ref 2 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'umount.0' [ 5495.966625] Lustre: server umount lustre-MDT0000 complete [ 5497.777189] device-mapper: ioctl: dmsetup[26232]: dm-2 (mds1_flakey) is removed successfully [ 5536.842530] Lustre: DEBUG MARKER: server3: executing unload_modules_local [ 5537.653817] Key type lgssc unregistered [ 5537.856519] LNet: 26859:0:(lib-ptl.c:958:lnet_clear_lazy_portal()) Active lazy portal 0 on exit [ 5539.028428] LNet: Removed LNI 192.168.0.83@o2ib [ 5539.360657] Key type .llcrypt unregistered [ 5539.374750] Key type ._llcrypt unregistered [ 5547.373714] LNet: HW NUMA nodes: 4, HW CPU cores: 96, npartitions: 4 [ 5547.383718] alg: No test for adler32 (adler32-zlib) [ 5548.132364] Key type ._llcrypt registered [ 5548.137742] Key type .llcrypt registered [ 5548.161402] Lustre: DEBUG MARKER: server3: executing set_hostid [ 5552.103884] Lustre: DEBUG MARKER: server3: executing load_modules_local [ 5552.514644] lnet: unknown parameter '#' ignored [ 5552.520397] lnet: unknown parameter 'o2ib1(ibp129s0f1)' ignored [ 5552.527420] lnet: unknown parameter '#' ignored [ 5552.533033] lnet: unknown parameter 'o2ib1(ibp129s0f1)' ignored [ 5552.608691] Lustre: Lustre: Build Version: 2.15.4 [ 5552.667201] LNet: Using FastReg for registration [ 5552.877714] LNet: Added LNI 192.168.0.83@o2ib [8/256/0/180] [ 5554.424146] Key type lgssc registered [ 5554.583568] Lustre: Echo OBD driver; http://www.lustre.org/ [ 5842.055608] LDISKFS-fs (nvme0n1): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 6158.485070] Lustre: DEBUG MARKER: server3: executing load_modules_local [ 6161.204901] device-mapper: ioctl: dmsetup[30474]: dm-2 (mds1_flakey) is created successfully [ 6163.272281] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 6163.905888] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 6164.991150] Lustre: Setting parameter lustre-MDT0000.mdt.identity_upcall in log lustre-MDT0000 [ 6165.017958] Lustre: ctl-lustre-MDT0000: No data found on store. Initialize space: rc = -61 [ 6165.046340] Lustre: lustre-MDT0000: new disk, initializing [ 6165.066480] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 6165.078022] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000200000400-0x0000000240000400]:0:mdt [ 6165.098779] VFS: Open an exclusive opened block device for write dm-2. current [30885 tune2fs]. parent [30884 sh] [ 6166.084216] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192 [ 6172.296415] Lustre: Setting parameter lustre-MDT0001.mdt.identity_upcall in log lustre-MDT0001 [ 6173.345511] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000240000400-0x0000000280000400]:1:mdt [ 6180.797210] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000280000400-0x00000002c0000400]:0:ost [ 6191.289417] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x00000002c0000400-0x0000000300000400]:1:ost [ 6194.863450] Lustre: DEBUG MARKER: Using TIMEOUT=20 [ 6197.553009] Lustre: Modifying parameter general.lod.*.mdt_hash in log params [ 6216.888518] Lustre: lustre-MDT0000: Not available for connect from 192.168.0.84@o2ib (stopping) [ 6216.899039] Lustre: Skipped 1 previous similar message [ 6217.770415] Lustre: lustre-MDT0000: Not available for connect from 192.168.0.86@o2ib (stopping) [ 6221.133840] Lustre: lustre-MDT0000: Not available for connect from 192.168.0.85@o2ib (stopping) [ 6222.153526] LustreError: 32295:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items [ 6222.871221] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.86@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server. [ 6226.234902] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.85@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server. [ 6227.991111] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.86@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server. [ 6228.011470] LustreError: Skipped 2 previous similar messages [ 6228.296897] Lustre: 32295:0:(client.c:2289:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1709637493/real 1709637493] req@000000000fd2309d x1792684142377344/t0(0) o251->MGC192.168.0.83@o2ib@0@lo:26/25 lens 224/224 e 0 to 1 dl 1709637499 ref 2 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'umount.0' [ 6228.366580] Lustre: server umount lustre-MDT0000 complete [ 6230.171973] device-mapper: ioctl: dmsetup[32678]: dm-2 (mds1_flakey) is removed successfully [ 6269.149013] Lustre: DEBUG MARKER: server3: executing unload_modules_local [ 6270.011694] Key type lgssc unregistered [ 6270.208260] LNet: 33310:0:(lib-ptl.c:958:lnet_clear_lazy_portal()) Active lazy portal 0 on exit [ 6271.400195] LNet: Removed LNI 192.168.0.83@o2ib [ 6271.808317] Key type .llcrypt unregistered [ 6271.822445] Key type ._llcrypt unregistered [ 6279.914512] LNet: HW NUMA nodes: 4, HW CPU cores: 96, npartitions: 4 [ 6279.924358] alg: No test for adler32 (adler32-zlib) [ 6280.672147] Key type ._llcrypt registered [ 6280.677373] Key type .llcrypt registered [ 6280.700567] Lustre: DEBUG
MARKER: server3: executing set_hostid [ 6284.653545] Lustre: DEBUG MARKER: server3: executing load_modules_local [ 6285.112541] lnet: unknown parameter '#' ignored [ 6285.118328] lnet: unknown parameter 'o2ib1(ibp129s0f1)' ignored [ 6285.125377] lnet: unknown parameter '#' ignored [ 6285.131021] lnet: unknown parameter 'o2ib1(ibp129s0f1)' ignored [ 6285.205921] Lustre: Lustre: Build Version: 2.15.4 [ 6285.267707] LNet: Using FastReg for registration [ 6285.470121] LNet: Added LNI 192.168.0.83@o2ib [8/256/0/180] [ 6287.019904] Key type lgssc registered [ 6287.173120] Lustre: Echo OBD driver; http://www.lustre.org/ [ 6574.473072] LDISKFS-fs (nvme0n1): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 6890.933143] Lustre: DEBUG MARKER: server3: executing load_modules_local [ 6893.719720] device-mapper: ioctl: dmsetup[36957]: dm-2 (mds1_flakey) is created successfully [ 6895.792842] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 6896.444300] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 6897.525935] Lustre: Setting parameter lustre-MDT0000.mdt.identity_upcall in log lustre-MDT0000 [ 6897.552292] Lustre: ctl-lustre-MDT0000: No data found on store. Initialize space: rc = -61 [ 6897.581943] Lustre: lustre-MDT0000: new disk, initializing [ 6897.601893] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 6897.613367] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000200000400-0x0000000240000400]:0:mdt [ 6897.633979] VFS: Open an exclusive opened block device for write dm-2. current [37367 tune2fs]. 
parent [37366 sh] [ 6898.625610] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192 [ 6904.874251] Lustre: Setting parameter lustre-MDT0001.mdt.identity_upcall in log lustre-MDT0001 [ 6905.926020] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000240000400-0x0000000280000400]:1:mdt [ 6917.549513] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000280000400-0x00000002c0000400]:0:ost [ 6926.512127] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x00000002c0000400-0x0000000300000400]:1:ost [ 6927.303829] Lustre: DEBUG MARKER: Using TIMEOUT=20 [ 6929.948383] Lustre: Modifying parameter general.lod.*.mdt_hash in log params [ 6948.674081] Lustre: lustre-MDT0000: Not available for connect from 192.168.0.85@o2ib (stopping) [ 6950.174378] Lustre: lustre-MDT0000: Not available for connect from 192.168.0.86@o2ib (stopping) [ 6952.108329] Lustre: lustre-MDT0000: Not available for connect from 192.168.0.84@o2ib (stopping) [ 6954.557250] LustreError: 38785:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items [ 6955.274888] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.86@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server. [ 6957.228132] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.84@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server. [ 6957.228135] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.84@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server. [ 6958.894731] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.85@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server. 
[ 6960.700661] Lustre: 38785:0:(client.c:2289:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1709638226/real 1709638226] req@0000000067fc5039 x1792684909934720/t0(0) o251->MGC192.168.0.83@o2ib@0@lo:26/25 lens 224/224 e 0 to 1 dl 1709638232 ref 2 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'umount.0' [ 6960.777093] Lustre: server umount lustre-MDT0000 complete [ 6962.573390] device-mapper: ioctl: dmsetup[39165]: dm-2 (mds1_flakey) is removed successfully [ 6993.385240] Lustre: DEBUG MARKER: server3: executing unload_modules_local [ 6994.253253] Key type lgssc unregistered [ 6994.464196] LNet: 39797:0:(lib-ptl.c:958:lnet_clear_lazy_portal()) Active lazy portal 0 on exit [ 6995.644090] LNet: Removed LNI 192.168.0.83@o2ib [ 6995.968397] Key type .llcrypt unregistered [ 6995.982650] Key type ._llcrypt unregistered [ 7003.930256] LNet: HW NUMA nodes: 4, HW CPU cores: 96, npartitions: 4 [ 7003.939746] alg: No test for adler32 (adler32-zlib) [ 7004.688035] Key type ._llcrypt registered [ 7004.693235] Key type .llcrypt registered [ 7004.716216] Lustre: DEBUG MARKER: server3: executing set_hostid [ 7008.577857] Lustre: DEBUG MARKER: server3: executing load_modules_local [ 7009.030303] lnet: unknown parameter '#' ignored [ 7009.036047] lnet: unknown parameter 'o2ib1(ibp129s0f1)' ignored [ 7009.043079] lnet: unknown parameter '#' ignored [ 7009.048697] lnet: unknown parameter 'o2ib1(ibp129s0f1)' ignored [ 7009.125918] Lustre: Lustre: Build Version: 2.15.4 [ 7009.184852] LNet: Using FastReg for registration [ 7009.385212] LNet: Added LNI 192.168.0.83@o2ib [8/256/0/180] [ 7010.935803] Key type lgssc registered [ 7011.100524] Lustre: Echo OBD driver; http://www.lustre.org/ [ 7298.416037] LDISKFS-fs (nvme0n1): mounted filesystem with ordered data mode. 
Opts: errors=remount-ro [ 7615.021798] Lustre: DEBUG MARKER: server3: executing load_modules_local [ 7617.760687] device-mapper: ioctl: dmsetup[43401]: dm-2 (mds1_flakey) is created successfully [ 7619.826585] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 7620.469385] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 7621.555149] Lustre: Setting parameter lustre-MDT0000.mdt.identity_upcall in log lustre-MDT0000 [ 7621.588615] Lustre: ctl-lustre-MDT0000: No data found on store. Initialize space: rc = -61 [ 7621.619055] Lustre: lustre-MDT0000: new disk, initializing [ 7621.642792] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 7621.659438] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000200000400-0x0000000240000400]:0:mdt [ 7621.680226] VFS: Open an exclusive opened block device for write dm-2. current [43811 tune2fs]. parent [43810 sh] [ 7622.659209] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192 [ 7628.924722] Lustre: Setting parameter lustre-MDT0001.mdt.identity_upcall in log lustre-MDT0001 [ 7629.978015] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000240000400-0x0000000280000400]:1:mdt [ 7636.868660] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000280000400-0x00000002c0000400]:0:ost [ 7647.905160] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x00000002c0000400-0x0000000300000400]:1:ost [ 7651.404567] Lustre: DEBUG MARKER: Using TIMEOUT=20 [ 7657.189498] Lustre: Modifying parameter general.lod.*.mdt_hash in log params [ 7676.077109] Lustre: lustre-MDT0000: Not available for connect from 192.168.0.84@o2ib (stopping) [ 7677.749715] Lustre: lustre-MDT0000: Not available for connect from 192.168.0.85@o2ib (stopping) [ 7679.249781] Lustre: lustre-MDT0000: 
Not available for connect from 192.168.0.86@o2ib (stopping) [ 7679.260697] Lustre: Skipped 2 previous similar messages [ 7681.841259] LustreError: 45224:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items [ 7682.850708] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.85@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server. [ 7683.743961] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.84@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server. [ 7683.764102] LustreError: Skipped 1 previous similar message [ 7687.970623] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.85@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server. [ 7687.984478] Lustre: 45224:0:(client.c:2289:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1709638953/real 1709638953] req@000000005d0468fe x1792685669103936/t0(0) o251->MGC192.168.0.83@o2ib@0@lo:26/25 lens 224/224 e 0 to 1 dl 1709638959 ref 2 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'umount.0' [ 7687.990683] LustreError: Skipped 1 previous similar message [ 7688.066801] Lustre: server umount lustre-MDT0000 complete [ 7689.896117] device-mapper: ioctl: dmsetup[45605]: dm-2 (mds1_flakey) is removed successfully [ 7728.847961] Lustre: DEBUG MARKER: server3: executing unload_modules_local [ 7729.693038] Key type lgssc unregistered [ 7729.871845] LNet: 46236:0:(lib-ptl.c:958:lnet_clear_lazy_portal()) Active lazy portal 0 on exit [ 7731.055765] LNet: Removed LNI 192.168.0.83@o2ib [ 7731.415653] Key type .llcrypt unregistered [ 7731.429731] Key type ._llcrypt unregistered [ 7739.462933] LNet: HW NUMA nodes: 4, HW CPU cores: 96, npartitions: 4 [ 7739.472694] alg: No test for adler32 (adler32-zlib) [ 7740.219667] Key type ._llcrypt registered [ 7740.224866] Key type 
.llcrypt registered [ 7740.247625] Lustre: DEBUG MARKER: server3: executing set_hostid [ 7744.238785] Lustre: DEBUG MARKER: server3: executing load_modules_local [ 7744.663201] lnet: unknown parameter '#' ignored [ 7744.668998] lnet: unknown parameter 'o2ib1(ibp129s0f1)' ignored [ 7744.676047] lnet: unknown parameter '#' ignored [ 7744.681689] lnet: unknown parameter 'o2ib1(ibp129s0f1)' ignored [ 7744.756170] Lustre: Lustre: Build Version: 2.15.4 [ 7744.818687] LNet: Using FastReg for registration [ 7745.021575] LNet: Added LNI 192.168.0.83@o2ib [8/256/0/180] [ 7746.567478] Key type lgssc registered [ 7746.726236] Lustre: Echo OBD driver; http://www.lustre.org/ [ 8033.974115] LDISKFS-fs (nvme0n1): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 8350.237173] Lustre: DEBUG MARKER: server3: executing load_modules_local [ 8353.060996] device-mapper: ioctl: dmsetup[50023]: dm-2 (mds1_flakey) is created successfully [ 8355.143146] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 8355.804759] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 8356.892543] Lustre: Setting parameter lustre-MDT0000.mdt.identity_upcall in log lustre-MDT0000 [ 8356.920700] Lustre: ctl-lustre-MDT0000: No data found on store. Initialize space: rc = -61 [ 8356.949557] Lustre: lustre-MDT0000: new disk, initializing [ 8356.970229] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 8356.982805] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000200000400-0x0000000240000400]:0:mdt [ 8357.003496] VFS: Open an exclusive opened block device for write dm-2. current [50435 tune2fs]. 
parent [50434 sh] [ 8357.987842] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192 [ 8364.210899] Lustre: Setting parameter lustre-MDT0001.mdt.identity_upcall in log lustre-MDT0001 [ 8365.261501] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000240000400-0x0000000280000400]:1:mdt [ 8373.912413] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000280000400-0x00000002c0000400]:0:ost [ 8381.335055] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x00000002c0000400-0x0000000300000400]:1:ost [ 8386.513947] Lustre: DEBUG MARKER: Using TIMEOUT=20 [ 8390.179543] Lustre: Modifying parameter general.lod.*.mdt_hash in log params [ 8409.605291] Lustre: lustre-MDT0000: Not available for connect from 192.168.0.86@o2ib (stopping) [ 8411.296793] Lustre: lustre-MDT0000: Not available for connect from 192.168.0.84@o2ib (stopping) [ 8412.969392] Lustre: lustre-MDT0000: Not available for connect from 192.168.0.85@o2ib (stopping) [ 8412.980064] Lustre: Skipped 2 previous similar messages [ 8414.500992] LustreError: 51799:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items [ 8414.706162] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.86@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server. [ 8417.171350] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.84@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server. [ 8417.171352] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.84@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server. [ 8419.826062] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.86@o2ib (no target). 
If you are running an HA pair check that the target is mounted on the other server. [ 8419.846170] LustreError: Skipped 1 previous similar message [ 8420.644195] Lustre: 51799:0:(client.c:2289:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1709639686/real 1709639686] req@0000000098d1b3a4 x1792686440854784/t0(0) o251->MGC192.168.0.83@o2ib@0@lo:26/25 lens 224/224 e 0 to 1 dl 1709639692 ref 2 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'umount.0' [ 8420.720038] Lustre: server umount lustre-MDT0000 complete [ 8422.515128] device-mapper: ioctl: dmsetup[52180]: dm-2 (mds1_flakey) is removed successfully [ 8453.098927] Lustre: DEBUG MARKER: server3: executing unload_modules_local [ 8454.051415] Key type lgssc unregistered [ 8454.283709] LNet: 52786:0:(lib-ptl.c:958:lnet_clear_lazy_portal()) Active lazy portal 0 on exit [ 8455.459613] LNet: Removed LNI 192.168.0.83@o2ib [ 8455.827837] Key type .llcrypt unregistered [ 8455.841940] Key type ._llcrypt unregistered [56654.649022] LNet: HW NUMA nodes: 4, HW CPU cores: 96, npartitions: 4 [56654.658834] alg: No test for adler32 (adler32-zlib) [56655.406727] Key type ._llcrypt registered [56655.411919] Key type .llcrypt registered [56655.434988] Lustre: DEBUG MARKER: server3: executing unload_modules_local [56655.474774] Key type .llcrypt unregistered [56655.488789] Key type ._llcrypt unregistered [56679.215961] LNet: HW NUMA nodes: 4, HW CPU cores: 96, npartitions: 4 [56679.225842] alg: No test for adler32 (adler32-zlib) [56679.974355] Key type ._llcrypt registered [56679.979696] Key type .llcrypt registered [56680.002520] Lustre: DEBUG MARKER: server3: executing set_hostid [56683.950054] Lustre: DEBUG MARKER: server3: executing load_modules_local [56684.371621] lnet: unknown parameter '#' ignored [56684.377346] lnet: unknown parameter 'o2ib1(ibp129s0f1)' ignored [56684.384346] lnet: unknown parameter '#' ignored [56684.389933] lnet: unknown parameter 'o2ib1(ibp129s0f1)' ignored [56684.465486] Lustre: 
Lustre: Build Version: 2.15.4 [56684.527560] LNet: Using FastReg for registration [56684.739950] LNet: Added LNI 192.168.0.83@o2ib [8/256/0/180] [56686.294120] Key type lgssc registered [56686.459289] Lustre: Echo OBD driver; http://www.lustre.org/ [56952.570623] Lustre: DEBUG MARKER: server3: executing unload_modules_local [56953.094816] Key type lgssc unregistered [56953.229794] LNet: 58301:0:(lib-ptl.c:958:lnet_clear_lazy_portal()) Active lazy portal 0 on exit [56954.397681] LNet: Removed LNI 192.168.0.83@o2ib [56954.638117] Key type .llcrypt unregistered [56954.652054] Key type ._llcrypt unregistered [56964.293133] LNet: HW NUMA nodes: 4, HW CPU cores: 96, npartitions: 4 [56964.302894] alg: No test for adler32 (adler32-zlib) [56965.049642] Key type ._llcrypt registered [56965.055079] Key type .llcrypt registered [56965.078298] Lustre: DEBUG MARKER: server3: executing set_hostid [56969.075585] Lustre: DEBUG MARKER: server3: executing load_modules_local [56969.514723] lnet: unknown parameter '#' ignored [56969.520366] lnet: unknown parameter 'o2ib1(ibp129s0f1)' ignored [56969.527322] lnet: unknown parameter '#' ignored [56969.532873] lnet: unknown parameter 'o2ib1(ibp129s0f1)' ignored [56969.611934] Lustre: Lustre: Build Version: 2.15.4 [56969.674064] LNet: Using FastReg for registration [56969.881388] LNet: Added LNI 192.168.0.83@o2ib [8/256/0/180] [56971.433396] Key type lgssc registered [56971.590948] Lustre: Echo OBD driver; http://www.lustre.org/ [56973.711822] LDISKFS-fs (nvme0n1): mounted filesystem with ordered data mode. Opts: errors=remount-ro [57258.870802] LDISKFS-fs (nvme0n1): mounted filesystem with ordered data mode. Opts: errors=remount-ro [57575.181094] Lustre: DEBUG MARKER: server3: executing load_modules_local [57577.964379] device-mapper: ioctl: dmsetup[61873]: dm-2 (mds1_flakey) is created successfully [57580.042912] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. 
Opts: errors=remount-ro [57580.675249] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [57581.761259] Lustre: Setting parameter lustre-MDT0000.mdt.identity_upcall in log lustre-MDT0000 [57581.790318] Lustre: ctl-lustre-MDT0000: No data found on store. Initialize space: rc = -61 [57581.818861] Lustre: lustre-MDT0000: new disk, initializing [57581.838523] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [57581.850008] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000200000400-0x0000000240000400]:0:mdt [57581.870631] VFS: Open an exclusive opened block device for write dm-2. current [62281 tune2fs]. parent [62280 sh] [57582.858808] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192 [57589.099083] Lustre: Setting parameter lustre-MDT0001.mdt.identity_upcall in log lustre-MDT0001 [57590.148635] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000240000400-0x0000000280000400]:1:mdt [57601.380261] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000280000400-0x00000002c0000400]:0:ost [57611.367066] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x00000002c0000400-0x0000000300000400]:1:ost [57611.410049] Lustre: DEBUG MARKER: Using TIMEOUT=20 [57617.122497] Lustre: Modifying parameter general.lod.*.mdt_hash in log params [64420.096136] Lustre: lustre-MDT0000: Not available for connect from 192.168.0.84@o2ib (stopping) [64420.850660] Lustre: lustre-MDT0000: Not available for connect from 192.168.0.84@o2ib (stopping) [64420.861050] Lustre: Skipped 1 previous similar message [64423.521534] Lustre: lustre-MDT0000: Not available for connect from 192.168.0.86@o2ib (stopping) [64423.532237] Lustre: Skipped 1 previous similar message [64425.602646] LustreError: 63781:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any 
items [64425.970621] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.84@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server. [64425.990602] LustreError: Skipped 1 previous similar message [64426.867522] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.85@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server. [64428.622006] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.86@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server. [64431.090507] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.84@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server. [64431.110664] LustreError: Skipped 1 previous similar message [64431.745913] Lustre: 63781:0:(client.c:2289:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1709695698/real 1709695698] req@000000005a1f9b21 x1792738169460800/t0(0) o251->MGC192.168.0.83@o2ib@0@lo:26/25 lens 224/224 e 0 to 1 dl 1709695704 ref 2 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'umount.0' [64431.821755] Lustre: server umount lustre-MDT0000 complete [64433.843932] device-mapper: ioctl: dmsetup[64164]: dm-2 (mds1_flakey) is removed successfully [64469.953786] Lustre: DEBUG MARKER: server3: executing unload_modules_local [64470.738306] Key type lgssc unregistered [64470.985330] LNet: 64773:0:(lib-ptl.c:958:lnet_clear_lazy_portal()) Active lazy portal 0 on exit [64472.161233] LNet: Removed LNI 192.168.0.83@o2ib [64472.509237] Key type .llcrypt unregistered [64472.523328] Key type ._llcrypt unregistered [64480.610078] LNet: HW NUMA nodes: 4, HW CPU cores: 96, npartitions: 4 [64480.620122] alg: No test for adler32 (adler32-zlib) [64481.369180] Key type ._llcrypt registered [64481.374677] Key type .llcrypt 
registered [64481.399273] Lustre: DEBUG MARKER: server3: executing set_hostid [64485.403199] Lustre: DEBUG MARKER: server3: executing load_modules_local [64485.848430] lnet: unknown parameter '#' ignored [64485.854443] lnet: unknown parameter 'o2ib1(ibp129s0f1)' ignored [64485.861459] lnet: unknown parameter '#' ignored [64485.867056] lnet: unknown parameter 'o2ib1(ibp129s0f1)' ignored [64485.944866] Lustre: Lustre: Build Version: 2.15.4 [64486.006290] LNet: Using FastReg for registration [64486.209763] LNet: Added LNI 192.168.0.83@o2ib [8/256/0/180] [64487.756973] Key type lgssc registered [64487.920410] Lustre: Echo OBD driver; http://www.lustre.org/ [64775.101090] LDISKFS-fs (nvme0n1): mounted filesystem with ordered data mode. Opts: errors=remount-ro [65092.849775] Lustre: DEBUG MARKER: server3: executing load_modules_local [65095.604906] device-mapper: ioctl: dmsetup[68388]: dm-2 (mds1_flakey) is created successfully [65097.696915] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro [65098.345093] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [65099.426985] Lustre: Setting parameter lustre-MDT0000.mdt.identity_upcall in log lustre-MDT0000 [65099.453904] Lustre: ctl-lustre-MDT0000: No data found on store. Initialize space: rc = -61 [65099.481059] Lustre: lustre-MDT0000: new disk, initializing [65099.502893] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [65099.514935] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000200000400-0x0000000240000400]:0:mdt [65099.536570] VFS: Open an exclusive opened block device for write dm-2. current [68800 tune2fs]. 
parent [68799 sh] [65100.543361] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192 [65106.788557] Lustre: Setting parameter lustre-MDT0001.mdt.identity_upcall in log lustre-MDT0001 [65107.839785] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000240000400-0x0000000280000400]:1:mdt [65115.594998] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000280000400-0x00000002c0000400]:0:ost [65121.498197] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x00000002c0000400-0x0000000300000400]:1:ost [65129.118260] Lustre: DEBUG MARKER: Using TIMEOUT=20 [65132.798401] Lustre: Modifying parameter general.lod.*.mdt_hash in log params [72954.836554] Lustre: lustre-MDT0000: Not available for connect from 192.168.0.86@o2ib (stopping) [72958.438487] Lustre: lustre-MDT0000: Not available for connect from 192.168.0.85@o2ib (stopping) [72958.449414] Lustre: Skipped 2 previous similar messages [72959.734119] LustreError: 70340:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items [72959.936956] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.86@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server. [72963.558343] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.85@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server. [72963.578626] LustreError: Skipped 2 previous similar messages [72965.056860] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.86@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server. 
[72965.877177] Lustre: 70340:0:(client.c:2289:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1709704232/real 1709704232] req@0000000088584bdd x1792746129315712/t0(0) o251->MGC192.168.0.83@o2ib@0@lo:26/25 lens 224/224 e 0 to 1 dl 1709704238 ref 2 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'umount.0'
[72965.953845] Lustre: server umount lustre-MDT0000 complete
[72968.171319] device-mapper: ioctl: dmsetup[70730]: dm-2 (mds1_flakey) is removed successfully
[72997.309652] Lustre: DEBUG MARKER: server3: executing unload_modules_local
[72998.101866] Key type lgssc unregistered
[72998.308732] LNet: 71335:0:(lib-ptl.c:958:lnet_clear_lazy_portal()) Active lazy portal 0 on exit
[72999.476633] LNet: Removed LNI 192.168.0.83@o2ib
[72999.838837] Key type .llcrypt unregistered
[72999.853220] Key type ._llcrypt unregistered
[73007.986197] LNet: HW NUMA nodes: 4, HW CPU cores: 96, npartitions: 4
[73007.996088] alg: No test for adler32 (adler32-zlib)
[73008.744590] Key type ._llcrypt registered
[73008.750341] Key type .llcrypt registered
[73008.772981] Lustre: DEBUG MARKER: server3: executing set_hostid
[73012.781559] Lustre: DEBUG MARKER: server3: executing load_modules_local
[73013.217873] lnet: unknown parameter '#' ignored
[73013.223610] lnet: unknown parameter 'o2ib1(ibp129s0f1)' ignored
[73013.230634] lnet: unknown parameter '#' ignored
[73013.236250] lnet: unknown parameter 'o2ib1(ibp129s0f1)' ignored
[73013.313274] Lustre: Lustre: Build Version: 2.15.4
[73013.374960] LNet: Using FastReg for registration
[73013.587858] LNet: Added LNI 192.168.0.83@o2ib [8/256/0/180]
[73015.132347] Key type lgssc registered
[73015.286625] Lustre: Echo OBD driver; http://www.lustre.org/
[73302.456087] LDISKFS-fs (nvme0n1): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[73621.405980] Lustre: DEBUG MARKER: server3: executing load_modules_local
[73624.213747] device-mapper: ioctl: dmsetup[74938]: dm-2 (mds1_flakey) is created successfully
[73626.288013] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[73626.921915] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[73628.007385] Lustre: Setting parameter lustre-MDT0000.mdt.identity_upcall in log lustre-MDT0000
[73628.034227] Lustre: ctl-lustre-MDT0000: No data found on store. Initialize space: rc = -61
[73628.063645] Lustre: lustre-MDT0000: new disk, initializing
[73628.086904] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180
[73628.098451] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000200000400-0x0000000240000400]:0:mdt
[73628.118795] VFS: Open an exclusive opened block device for write dm-2. current [75349 tune2fs]. parent [75348 sh]
[73629.117471] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192
[73635.359614] Lustre: Setting parameter lustre-MDT0001.mdt.identity_upcall in log lustre-MDT0001
[73636.412243] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000240000400-0x0000000280000400]:1:mdt
[73648.222099] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000280000400-0x00000002c0000400]:0:ost
[73656.669097] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x00000002c0000400-0x0000000300000400]:1:ost
[73657.687637] Lustre: DEBUG MARKER: Using TIMEOUT=20
[73662.370860] Lustre: Modifying parameter general.lod.*.mdt_hash in log params
[82205.479878] Lustre: lustre-MDT0000: haven't heard from client 933d0673-13c1-4fa4-8175-de980cecf2f6 (at 192.168.0.82@o2ib) in 51 seconds. I think it's dead, and I am evicting it. exp 000000002ff2c8c0, cur 1709713478 expire 1709713448 last 1709713427
[82222.553458] Lustre: lustre-MDT0000: Not available for connect from 192.168.0.84@o2ib (stopping)
[82224.224562] Lustre: lustre-MDT0000: Not available for connect from 192.168.0.85@o2ib (stopping)
[82225.979407] Lustre: lustre-MDT0000: Not available for connect from 192.168.0.86@o2ib (stopping)
[82228.573256] LustreError: 78609:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items
[82229.325335] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.85@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
[82231.079632] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.86@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
[82232.523957] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.84@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
[82232.523959] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.84@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
[82234.716383] Lustre: 78609:0:(client.c:2289:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1709713501/real 1709713501] req@00000000e08952b8 x1792755384592000/t0(0) o251->MGC192.168.0.83@o2ib@0@lo:26/25 lens 224/224 e 0 to 1 dl 1709713507 ref 2 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'umount.0'
[82234.792858] Lustre: server umount lustre-MDT0000 complete
[82238.145705] device-mapper: ioctl: dmsetup[78990]: dm-2 (mds1_flakey) is removed successfully
[82283.216865] Lustre: DEBUG MARKER: server3: executing unload_modules_local
[82284.044889] Key type lgssc unregistered
[82284.247729] LNet: 79601:0:(lib-ptl.c:958:lnet_clear_lazy_portal()) Active lazy portal 0 on exit
[82285.435535] LNet: Removed LNI 192.168.0.83@o2ib
[82285.824176] Key type .llcrypt unregistered
[82285.838426] Key type ._llcrypt unregistered
[82293.905863] LNet: HW NUMA nodes: 4, HW CPU cores: 96, npartitions: 4
[82293.915934] alg: No test for adler32 (adler32-zlib)
[82294.663514] Key type ._llcrypt registered
[82294.668663] Key type .llcrypt registered
[82294.691532] Lustre: DEBUG MARKER: server3: executing set_hostid
[82298.698727] Lustre: DEBUG MARKER: server3: executing load_modules_local
[82299.180608] lnet: unknown parameter '#' ignored
[82299.186352] lnet: unknown parameter 'o2ib1(ibp129s0f1)' ignored
[82299.193380] lnet: unknown parameter '#' ignored
[82299.198996] lnet: unknown parameter 'o2ib1(ibp129s0f1)' ignored
[82299.271824] Lustre: Lustre: Build Version: 2.15.4
[82299.329319] LNet: Using FastReg for registration
[82299.536221] LNet: Added LNI 192.168.0.83@o2ib [8/256/0/180]
[82301.079265] Key type lgssc registered
[82301.245106] Lustre: Echo OBD driver; http://www.lustre.org/
[82588.595839] LDISKFS-fs (nvme0n1): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[82909.078919] Lustre: DEBUG MARKER: server3: executing load_modules_local
[82911.878893] device-mapper: ioctl: dmsetup[83209]: dm-2 (mds1_flakey) is created successfully
[82913.957228] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[82914.609616] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[82915.696729] Lustre: Setting parameter lustre-MDT0000.mdt.identity_upcall in log lustre-MDT0000
[82915.724312] Lustre: ctl-lustre-MDT0000: No data found on store. Initialize space: rc = -61
[82915.754184] Lustre: lustre-MDT0000: new disk, initializing
[82915.776980] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180
[82915.788505] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000200000400-0x0000000240000400]:0:mdt
[82915.808921] VFS: Open an exclusive opened block device for write dm-2. current [83620 tune2fs]. parent [83619 sh]
[82916.815450] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192
[82923.083068] Lustre: Setting parameter lustre-MDT0001.mdt.identity_upcall in log lustre-MDT0001
[82924.132597] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000240000400-0x0000000280000400]:1:mdt
[82935.744909] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000280000400-0x00000002c0000400]:0:ost
[82940.099383] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x00000002c0000400-0x0000000300000400]:1:ost
[82945.517237] Lustre: DEBUG MARKER: Using TIMEOUT=20
[82949.189994] Lustre: Modifying parameter general.lod.*.mdt_hash in log params
[91184.313008] Lustre: lustre-MDT0000: haven't heard from client bbfa3c5b-fcf7-43cd-af67-b136c6da6ca4 (at 192.168.0.81@o2ib) in 51 seconds. I think it's dead, and I am evicting it. exp 000000008ed70af4, cur 1709722457 expire 1709722427 last 1709722406
[91206.310394] Lustre: lustre-MDT0000: Not available for connect from 192.168.0.86@o2ib (stopping)
[91208.259954] Lustre: lustre-MDT0000: Not available for connect from 192.168.0.84@o2ib (stopping)
[91209.931920] Lustre: lustre-MDT0000: Not available for connect from 192.168.0.85@o2ib (stopping)
[91209.942602] Lustre: Skipped 1 previous similar message
[91211.464797] LustreError: 85199:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items
[91213.878866] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.84@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
[91213.898812] LustreError: Skipped 1 previous similar message
[91215.032628] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.85@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
[91216.530546] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.86@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
[91217.607916] Lustre: 85199:0:(client.c:2289:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1709722484/real 1709722484] req@000000004f723f61 x1792765150062080/t0(0) o251->MGC192.168.0.83@o2ib@0@lo:26/25 lens 224/224 e 0 to 1 dl 1709722490 ref 2 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'umount.0'
[91217.684387] Lustre: server umount lustre-MDT0000 complete
[91221.226658] device-mapper: ioctl: dmsetup[85585]: dm-2 (mds1_flakey) is removed successfully
[91263.329102] Lustre: DEBUG MARKER: server3: executing unload_modules_local
[91264.230028] Key type lgssc unregistered
[91264.451242] LNet: 86195:0:(lib-ptl.c:958:lnet_clear_lazy_portal()) Active lazy portal 0 on exit
[91265.607126] LNet: Removed LNI 192.168.0.83@o2ib
[91265.979546] Key type .llcrypt unregistered
[91265.993792] Key type ._llcrypt unregistered
[91274.071133] LNet: HW NUMA nodes: 4, HW CPU cores: 96, npartitions: 4
[91274.081092] alg: No test for adler32 (adler32-zlib)
[91274.831062] Key type ._llcrypt registered
[91274.836439] Key type .llcrypt registered
[91274.859304] Lustre: DEBUG MARKER: server3: executing set_hostid
[91278.816316] Lustre: DEBUG MARKER: server3: executing load_modules_local
[91279.255896] lnet: unknown parameter '#' ignored
[91279.261993] lnet: unknown parameter 'o2ib1(ibp129s0f1)' ignored
[91279.269018] lnet: unknown parameter '#' ignored
[91279.274619] lnet: unknown parameter 'o2ib1(ibp129s0f1)' ignored
[91279.348940] Lustre: Lustre: Build Version: 2.15.4
[91279.408420] LNet: Using FastReg for registration
[91279.614104] LNet: Added LNI 192.168.0.83@o2ib [8/256/0/180]
[91281.162848] Key type lgssc registered
[91281.328114] Lustre: Echo OBD driver; http://www.lustre.org/
[91568.587114] LDISKFS-fs (nvme0n1): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[91890.338792] Lustre: DEBUG MARKER: server3: executing load_modules_local
[91893.111033] device-mapper: ioctl: dmsetup[89809]: dm-2 (mds1_flakey) is created successfully
[91895.174040] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[91895.808497] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[91896.893344] Lustre: Setting parameter lustre-MDT0000.mdt.identity_upcall in log lustre-MDT0000
[91896.919823] Lustre: ctl-lustre-MDT0000: No data found on store. Initialize space: rc = -61
[91896.949670] Lustre: lustre-MDT0000: new disk, initializing
[91896.972465] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180
[91896.983937] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000200000400-0x0000000240000400]:0:mdt
[91897.004222] VFS: Open an exclusive opened block device for write dm-2. current [90220 tune2fs]. parent [90219 sh]
[91898.008186] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192
[91904.260565] Lustre: Setting parameter lustre-MDT0001.mdt.identity_upcall in log lustre-MDT0001
[91905.311761] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000240000400-0x0000000280000400]:1:mdt
[91920.175029] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000280000400-0x00000002c0000400]:0:ost
[91926.672520] Lustre: DEBUG MARKER: Using TIMEOUT=20
[91930.354839] Lustre: Modifying parameter general.lod.*.mdt_hash in log params
[111287.618783] Lustre: lustre-MDT0000: haven't heard from client 608206de-40bc-4a09-8be7-a064a14ac832 (at 192.168.0.82@o2ib) in 48 seconds. I think it's dead, and I am evicting it. exp 000000009261dd54, cur 1709742561 expire 1709742531 last 1709742513
[111323.478079] Lustre: lustre-MDT0000: Not available for connect from 192.168.0.86@o2ib (stopping)
[111324.649686] Lustre: lustre-MDT0000: Not available for connect from 192.168.0.84@o2ib (stopping)
[111324.660526] Lustre: Skipped 1 previous similar message
[111325.275964] LustreError: 91996:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items
[111326.847956] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.85@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
[111328.578210] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.86@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
[111329.769477] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.84@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
[111329.769479] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.84@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
[111331.450938] Lustre: 91996:0:(client.c:2289:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1709742598/real 1709742598] req@000000007780f099 x1792774295034432/t0(0) o251->MGC192.168.0.83@o2ib@0@lo:26/25 lens 224/224 e 0 to 1 dl 1709742604 ref 2 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'umount.0'
[111331.527060] Lustre: server umount lustre-MDT0000 complete
[111335.635650] device-mapper: ioctl: dmsetup[92386]: dm-2 (mds1_flakey) is removed successfully
[111387.179389] Lustre: DEBUG MARKER: server3: executing unload_modules_local
[111388.163390] Key type lgssc unregistered
[111388.418147] LNet: 92994:0:(lib-ptl.c:958:lnet_clear_lazy_portal()) Active lazy portal 0 on exit
[111389.593986] LNet: Removed LNI 192.168.0.83@o2ib
[111389.966438] Key type .llcrypt unregistered
[111389.980617] Key type ._llcrypt unregistered
[111398.036070] LNet: HW NUMA nodes: 4, HW CPU cores: 96, npartitions: 4
[111398.046379] alg: No test for adler32 (adler32-zlib)
[111398.793975] Key type ._llcrypt registered
[111398.799800] Key type .llcrypt registered
[111398.822636] Lustre: DEBUG MARKER: server3: executing set_hostid
[111402.793475] Lustre: DEBUG MARKER: server3: executing load_modules_local
[111403.202132] lnet: unknown parameter '#' ignored
[111403.208264] lnet: unknown parameter 'o2ib1(ibp129s0f1)' ignored
[111403.215366] lnet: unknown parameter '#' ignored
[111403.221057] lnet: unknown parameter 'o2ib1(ibp129s0f1)' ignored
[111403.296146] Lustre: Lustre: Build Version: 2.15.4
[111403.357160] LNet: Using FastReg for registration
[111403.566235] LNet: Added LNI 192.168.0.83@o2ib [8/256/0/180]
[111405.109722] Key type lgssc registered
[111405.280332] Lustre: Echo OBD driver; http://www.lustre.org/
[111692.589092] LDISKFS-fs (nvme0n1): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[112019.294698] Lustre: DEBUG MARKER: server3: executing load_modules_local
[112022.047143] device-mapper: ioctl: dmsetup[96530]: dm-2 (mds1_flakey) is created successfully
[112024.110018] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[112024.743616] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[112025.830915] Lustre: Setting parameter lustre-MDT0000.mdt.identity_upcall in log lustre-MDT0000
[112025.858944] Lustre: ctl-lustre-MDT0000: No data found on store. Initialize space: rc = -61
[112025.888582] Lustre: lustre-MDT0000: new disk, initializing
[112025.907484] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180
[112025.921061] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000200000400-0x0000000240000400]:0:mdt
[112025.941284] VFS: Open an exclusive opened block device for write dm-2. current [96945 tune2fs]. parent [96944 sh]
[112026.930058] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192
[112033.185309] Lustre: Setting parameter lustre-MDT0001.mdt.identity_upcall in log lustre-MDT0001
[112034.237929] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000240000400-0x0000000280000400]:1:mdt
[112045.537320] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000280000400-0x00000002c0000400]:0:ost
[112055.524488] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x00000002c0000400-0x0000000300000400]:1:ost
[112055.646641] Lustre: DEBUG MARKER: Using TIMEOUT=20
[112061.336075] Lustre: Modifying parameter general.lod.*.mdt_hash in log params
[145486.099144] Lustre: lustre-MDT0000: haven't heard from client 52a8b960-64d9-474f-b4d1-69d2677c2777 (at 192.168.0.81@o2ib) in 47 seconds. I think it's dead, and I am evicting it. exp 00000000ac7b04ea, cur 1709776760 expire 1709776730 last 1709776713
[146504.982270] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.86@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
[146506.674626] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.84@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
[146508.347857] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.85@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
[146508.367985] LustreError: Skipped 2 previous similar messages
[146509.111004] Lustre: 99054:0:(client.c:2289:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1709777777/real 1709777777] req@00000000011cc422 x1792795278149632/t0(0) o251->MGC192.168.0.83@o2ib@0@lo:26/25 lens 224/224 e 0 to 1 dl 1709777783 ref 2 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'umount.0'
[146509.191443] Lustre: server umount lustre-MDT0000 complete
[146512.931072] device-mapper: ioctl: dmsetup[99434]: dm-2 (mds1_flakey) is removed successfully
[146554.853817] Lustre: DEBUG MARKER: server3: executing unload_modules_local
[146555.595609] Key type lgssc unregistered
[146555.762283] LNet: 100041:0:(lib-ptl.c:958:lnet_clear_lazy_portal()) Active lazy portal 0 on exit
[146556.950163] LNet: Removed LNI 192.168.0.83@o2ib
[146557.326646] Key type .llcrypt unregistered
[146557.341030] Key type ._llcrypt unregistered
[146565.541270] LNet: HW NUMA nodes: 4, HW CPU cores: 96, npartitions: 4
[146565.551865] alg: No test for adler32 (adler32-zlib)
[146566.302150] Key type ._llcrypt registered
[146566.307481] Key type .llcrypt registered
[146566.330565] Lustre: DEBUG MARKER: server3: executing set_hostid
[146570.288898] Lustre: DEBUG MARKER: server3: executing load_modules_local
[146570.709049] lnet: unknown parameter '#' ignored
[146570.715216] lnet: unknown parameter 'o2ib1(ibp129s0f1)' ignored
[146570.722411] lnet: unknown parameter '#' ignored
[146570.728183] lnet: unknown parameter 'o2ib1(ibp129s0f1)' ignored
[146570.805515] Lustre: Lustre: Build Version: 2.15.4
[146570.868424] LNet: Using FastReg for registration
[146571.078761] LNet: Added LNI 192.168.0.83@o2ib [8/256/0/180]
[146572.621888] Key type lgssc registered
[146572.782802] Lustre: Echo OBD driver; http://www.lustre.org/
[146859.952949] LDISKFS-fs (nvme0n1): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[147185.728838] Lustre: DEBUG MARKER: server3: executing load_modules_local
[147188.525282] device-mapper: ioctl: dmsetup[103615]: dm-2 (mds1_flakey) is created successfully
[147190.605954] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[147191.242809] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[147192.331353] Lustre: Setting parameter lustre-MDT0000.mdt.identity_upcall in log lustre-MDT0000
[147192.359660] Lustre: ctl-lustre-MDT0000: No data found on store. Initialize space: rc = -61
[147192.388424] Lustre: lustre-MDT0000: new disk, initializing
[147192.412715] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180
[147192.429378] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000200000400-0x0000000240000400]:0:mdt
[147192.450266] VFS: Open an exclusive opened block device for write dm-2. current [104024 tune2fs]. parent [104023 sh]
[147193.440609] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192
[147199.697817] Lustre: Setting parameter lustre-MDT0001.mdt.identity_upcall in log lustre-MDT0001
[147200.750586] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000240000400-0x0000000280000400]:1:mdt
[147212.189677] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000280000400-0x00000002c0000400]:0:ost
[147222.173256] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x00000002c0000400-0x0000000300000400]:1:ost
[147222.185312] Lustre: DEBUG MARKER: Using TIMEOUT=20
[147225.877511] Lustre: Modifying parameter general.lod.*.mdt_hash in log params
[180222.415404] Lustre: lustre-MDT0000: haven't heard from client 5caaec22-49f2-47cd-8fee-93a383cd7309 (at 192.168.0.81@o2ib) in 50 seconds. I think it's dead, and I am evicting it. exp 000000004305ec84, cur 1709811497 expire 1709811467 last 1709811447
[180277.251923] Lustre: lustre-MDT0000: Not available for connect from 192.168.0.85@o2ib (stopping)
[180278.754271] Lustre: lustre-MDT0000: Not available for connect from 192.168.0.86@o2ib (stopping)
[180280.445031] Lustre: lustre-MDT0000: Not available for connect from 192.168.0.84@o2ib (stopping)
[180282.624979] LustreError: 106090:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items
[180283.854460] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.86@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
[180286.575463] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.84@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
[180286.575466] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.84@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
[180288.767974] Lustre: 106090:0:(client.c:2289:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1709811557/real 1709811557] req@00000000f5081290 x1792832080128512/t0(0) o251->MGC192.168.0.83@o2ib@0@lo:26/25 lens 224/224 e 0 to 1 dl 1709811563 ref 2 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'umount.0'
[180288.844266] Lustre: server umount lustre-MDT0000 complete
[180292.650031] device-mapper: ioctl: dmsetup[106473]: dm-2 (mds1_flakey) is removed successfully
[180360.725073] Lustre: DEBUG MARKER: server3: executing unload_modules_local
[180361.559897] Key type lgssc unregistered
[180361.762820] LNet: 107084:0:(lib-ptl.c:958:lnet_clear_lazy_portal()) Active lazy portal 0 on exit
[180362.942735] LNet: Removed LNI 192.168.0.83@o2ib
[180363.279042] Key type .llcrypt unregistered
[180363.293585] Key type ._llcrypt unregistered
[236068.765842] LNet: HW NUMA nodes: 4, HW CPU cores: 96, npartitions: 4
[236068.776256] alg: No test for adler32 (adler32-zlib)
[236069.524725] Key type ._llcrypt registered
[236069.530571] Key type .llcrypt registered
[236069.553675] Lustre: DEBUG MARKER: server3: executing set_hostid
[236073.738285] Lustre: DEBUG MARKER: server3: executing load_modules_local
[236074.121337] lnet: unknown parameter '#' ignored
[236074.127587] lnet: unknown parameter 'o2ib1(ibp129s0f1)' ignored
[236074.134811] lnet: unknown parameter '#' ignored
[236074.140607] lnet: unknown parameter 'o2ib1(ibp129s0f1)' ignored
[236074.227550] Lustre: Lustre: Build Version: 2.15.4
[236074.287145] LNet: Using FastReg for registration
[236074.501269] LNet: Added LNI 192.168.0.83@o2ib [8/256/0/180]
[236076.052475] Key type lgssc registered
[236076.213873] Lustre: Echo OBD driver; http://www.lustre.org/
[236363.653985] LDISKFS-fs (nvme0n1): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[236930.969607] LDISKFS-fs (nvme1n1): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[237493.243426] LDISKFS-fs (nvme2n1): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[238055.874638] LDISKFS-fs (nvme3n1): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[238456.933634] Lustre: DEBUG MARKER: server3: executing load_modules_local
[238459.681016] device-mapper: ioctl: dmsetup[114901]: dm-2 (mds1_flakey) is created successfully
[238461.740407] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[238462.375396] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[238463.457025] Lustre: Setting parameter lustre-MDT0000.mdt.identity_upcall in log lustre-MDT0000
[238463.483501] Lustre: ctl-lustre-MDT0000: No data found on store. Initialize space: rc = -61
[238463.512532] Lustre: lustre-MDT0000: new disk, initializing
[238463.531686] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180
[238463.543335] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000200000400-0x0000000240000400]:0:mdt
[238463.563546] VFS: Open an exclusive opened block device for write dm-2. current [115316 tune2fs]. parent [115315 sh]
[238464.548220] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192
[238470.818151] Lustre: Setting parameter lustre-MDT0001.mdt.identity_upcall in log lustre-MDT0001
[238471.871870] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000240000400-0x0000000280000400]:1:mdt
[238476.726969] device-mapper: ioctl: dmsetup[116335]: dm-3 (mds3_flakey) is created successfully
[238478.763213] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[238479.383002] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[238479.405818] Lustre: Setting parameter lustre-MDT0002.mdt.identity_upcall in log lustre-MDT0002
[238479.418610] Lustre: srv-lustre-MDT0002: No data found on store. Initialize space: rc = -61
[238479.428707] Lustre: Skipped 1 previous similar message
[238479.453110] Lustre: lustre-MDT0002: new disk, initializing
[238479.466099] Lustre: lustre-MDT0002: Imperative Recovery not enabled, recovery window 60-180
[238479.478174] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000280000400-0x00000002c0000400]:2:mdt
[238479.484232] VFS: Open an exclusive opened block device for write dm-3. current [116675 tune2fs]. parent [116674 sh]
[238479.490701] Lustre: cli-ctl-lustre-MDT0002: Allocated super-sequence [0x0000000280000400-0x00000002c0000400]:2:mdt]
[238480.449718] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192
[238486.620065] Lustre: Setting parameter lustre-MDT0003.mdt.identity_upcall in log lustre-MDT0003
[238486.659987] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x00000002c0000400-0x0000000300000400]:3:mdt
[238491.528559] device-mapper: ioctl: dmsetup[117705]: dm-4 (mds5_flakey) is created successfully
[238493.566680] LDISKFS-fs (dm-4): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[238494.188334] LDISKFS-fs (dm-4): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[238494.212405] Lustre: Setting parameter lustre-MDT0004.mdt.identity_upcall in log lustre-MDT0004
[238494.226613] Lustre: srv-lustre-MDT0004: No data found on store. Initialize space: rc = -61
[238494.253735] Lustre: lustre-MDT0004: new disk, initializing
[238494.272672] Lustre: lustre-MDT0004: Imperative Recovery not enabled, recovery window 60-180
[238494.286089] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000300000400-0x0000000340000400]:4:mdt
[238494.294697] VFS: Open an exclusive opened block device for write dm-4. current [118054 tune2fs]. parent [118053 sh]
[238494.298620] Lustre: cli-ctl-lustre-MDT0004: Allocated super-sequence [0x0000000300000400-0x0000000340000400]:4:mdt]
[238495.275024] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192
[238506.392803] device-mapper: ioctl: dmsetup[119110]: dm-5 (mds7_flakey) is created successfully
[238508.445661] LDISKFS-fs (dm-5): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[238509.070302] LDISKFS-fs (dm-5): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[238509.094604] Lustre: Setting parameter lustre-MDT0006.mdt.identity_upcall in log lustre-MDT0006
[238509.105337] Lustre: Skipped 1 previous similar message
[238509.118458] Lustre: srv-lustre-MDT0006: No data found on store. Initialize space: rc = -61
[238509.145863] Lustre: lustre-MDT0006: new disk, initializing
[238509.166811] Lustre: lustre-MDT0006: Imperative Recovery not enabled, recovery window 60-180
[238509.179987] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000380000400-0x00000003c0000400]:6:mdt
[238509.186906] VFS: Open an exclusive opened block device for write dm-5. current [119475 tune2fs]. parent [119474 sh]
[238509.192458] Lustre: Skipped 1 previous similar message
[238509.212420] Lustre: cli-ctl-lustre-MDT0006: Allocated super-sequence [0x0000000380000400-0x00000003c0000400]:6:mdt]
[238510.167137] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192
[238525.254420] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000400000400-0x0000000440000400]:0:ost
[238525.267468] Lustre: Skipped 1 previous similar message
[238557.860369] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000540000400-0x0000000580000400]:5:ost
[238557.873111] Lustre: Skipped 4 previous similar messages
[238572.829345] Lustre: DEBUG MARKER: Using TIMEOUT=20
[238575.567045] Lustre: Modifying parameter general.lod.*.mdt_hash in log params
[238575.576222] Lustre: Skipped 1 previous similar message
[245415.309857] Lustre: lustre-MDT0000: Not available for connect from 192.168.0.86@o2ib (stopping)
[245415.320806] Lustre: Skipped 2 previous similar messages
[245415.876349] LustreError: 11-0: lustre-MDT0000-osp-MDT0006: operation mds_statfs to node 0@lo failed: rc = -107
[245415.888071] Lustre: lustre-MDT0000-osp-MDT0006: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[245415.905695] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping)
[245415.914987] Lustre: Skipped 1 previous similar message
[245416.384424] LustreError: 11-0: lustre-MDT0000-osp-MDT0004: operation mds_statfs to node 0@lo failed: rc = -107
[245416.396147] Lustre: lustre-MDT0000-osp-MDT0004: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[245417.408389] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping)
[245417.408423] Lustre: lustre-MDT0000-lwp-MDT0002: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[245417.417701] Lustre: Skipped 5 previous similar messages
[245417.443150] Lustre: Skipped 3 previous similar messages
[245419.954005] Lustre: lustre-MDT0000: Not available for connect from 192.168.0.85@o2ib (stopping)
[245419.964465] Lustre: Skipped 10 previous similar messages
[245420.992937] LustreError: 121285:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items
[245421.034219] Lustre: server umount lustre-MDT0000 complete
[245422.532380] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
[245422.532383] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
[245422.532385] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
[245422.588631] LustreError: Skipped 3 previous similar messages
[245422.934759] device-mapper: ioctl: dmsetup[121673]: dm-2 (mds1_flakey) is removed successfully
[245424.440175] LustreError: 120150:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) ldlm_cancel from 192.168.0.84@o2ib arrived at 1709876700 with bad export cookie 1216126459960111723
[245424.458139] LustreError: 120150:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) Skipped 4 previous similar messages
[245425.077833] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.85@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
[245425.097992] LustreError: Skipped 11 previous similar messages [245426.112297] LustreError: 11-0: lustre-MDT0001-osp-MDT0006: operation mds_statfs to node 192.168.0.84@o2ib failed: rc = -107 [245426.125791] LustreError: Skipped 1 previous similar message [245426.132682] Lustre: lustre-MDT0001-osp-MDT0006: Connection to lustre-MDT0001 (at 192.168.0.84@o2ib) was lost; in progress operations using this service will wait for recovery to complete [245427.648235] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [245427.666971] LustreError: Skipped 9 previous similar messages [245429.696089] Lustre: 111820:0:(client.c:2289:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1709876698/real 1709876698] req@0000000007e2c662 x1792925986686656/t0(0) o400->MGC192.168.0.83@o2ib@0@lo:26/25 lens 224/224 e 0 to 1 dl 1709876705 ref 1 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'kworker/u192:2.0' [245429.729127] LustreError: 166-1: MGC192.168.0.83@o2ib: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [245429.743871] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [245429.763335] LustreError: Skipped 17 previous similar messages [245433.919310] Lustre: lustre-MDT0002: Not available for connect from 192.168.0.84@o2ib (stopping) [245433.930045] Lustre: Skipped 6 previous similar messages [245434.816160] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. 
[245434.816240] Lustre: lustre-MDT0002-osp-MDT0004: Connection to lustre-MDT0002 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[245434.835308] LustreError: Skipped 25 previous similar messages
[245434.853488] Lustre: Skipped 3 previous similar messages
[245437.361567] Lustre: lustre-MDT0004: haven't heard from client ec10cc14-c8c2-4fa3-8c13-7b52e847693f (at 192.168.0.82@o2ib) in 49 seconds. I think it's dead, and I am evicting it. exp 00000000fe460d50, cur 1709876713 expire 1709876683 last 1709876664
[245439.680206] LustreError: 121981:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items
[245439.725628] Lustre: server umount lustre-MDT0002 complete
[245441.627421] device-mapper: ioctl: dmsetup[122364]: dm-3 (mds3_flakey) is removed successfully
[245443.981251] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.86@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
[245444.001476] LustreError: Skipped 61 previous similar messages
[245446.080032] Lustre: lustre-MDT0003-osp-MDT0004: Connection to lustre-MDT0003 (at 192.168.0.84@o2ib) was lost; in progress operations using this service will wait for recovery to complete
[245446.099427] Lustre: Skipped 1 previous similar message
[245452.223873] Lustre: lustre-MDT0004: Not available for connect from 0@lo (stopping)
[245452.233357] Lustre: Skipped 20 previous similar messages
[245458.367860] LustreError: 122664:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items
[245458.409775] Lustre: server umount lustre-MDT0004 complete
[245460.276081] device-mapper: ioctl: dmsetup[123043]: dm-4 (mds5_flakey) is removed successfully
[245460.364969] LustreError: 137-5: lustre-MDT0002_UUID: not available for connect from 192.168.0.86@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
[245460.364971] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.86@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
[245460.364978] LustreError: Skipped 93 previous similar messages
[245460.385352] LustreError: Skipped 103 previous similar messages
[245461.951732] LustreError: 11-0: lustre-MDT0005-osp-MDT0006: operation mds_statfs to node 192.168.0.84@o2ib failed: rc = -107
[245461.964847] LustreError: Skipped 2 previous similar messages
[245472.817470] Lustre: lustre-MDT0006: Not available for connect from 192.168.0.84@o2ib (stopping)
[245472.828201] Lustre: Skipped 13 previous similar messages
[245477.131726] Lustre: server umount lustre-MDT0006 complete
[245479.234073] device-mapper: ioctl: dmsetup[123721]: dm-5 (mds7_flakey) is removed successfully
[245515.097739] Lustre: DEBUG MARKER: server3: executing unload_modules_local
[245515.987860] Key type lgssc unregistered
[245516.218735] LNet: 124328:0:(lib-ptl.c:958:lnet_clear_lazy_portal()) Active lazy portal 0 on exit
[245517.406635] LNet: Removed LNI 192.168.0.83@o2ib
[245517.750839] Key type .llcrypt unregistered
[245517.765056] Key type ._llcrypt unregistered
[245536.163803] LNet: HW NUMA nodes: 4, HW CPU cores: 96, npartitions: 4
[245536.174186] alg: No test for adler32 (adler32-zlib)
[245536.922417] Key type ._llcrypt registered
[245536.927923] Key type .llcrypt registered
[245536.951635] Lustre: DEBUG MARKER: server3: executing set_hostid
[245541.116475] Lustre: DEBUG MARKER: server3: executing load_modules_local
[245541.516466] lnet: unknown parameter '#' ignored
[245541.522605] lnet: unknown parameter 'o2ib1(ibp129s0f1)' ignored
[245541.529787] lnet: unknown parameter '#' ignored
[245541.535548] lnet: unknown parameter 'o2ib1(ibp129s0f1)' ignored
[245541.613785] Lustre: Lustre: Build Version: 2.15.4
[245541.674910] LNet: Using FastReg for registration
[245541.886043] LNet: Added LNI 192.168.0.83@o2ib [8/256/0/180]
[245543.438179] Key type lgssc registered
[245543.596422] Lustre: Echo OBD driver; http://www.lustre.org/
[245831.078074] LDISKFS-fs (nvme0n1): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[246403.339847] LDISKFS-fs (nvme1n1): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[246976.572705] LDISKFS-fs (nvme2n1): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[247549.772150] LDISKFS-fs (nvme3n1): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[247950.693979] Lustre: DEBUG MARKER: server3: executing load_modules_local
[247953.465400] device-mapper: ioctl: dmsetup[129978]: dm-2 (mds1_flakey) is created successfully
[247955.531156] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[247956.182350] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[247957.267453] Lustre: Setting parameter lustre-MDT0000.mdt.identity_upcall in log lustre-MDT0000
[247957.293998] Lustre: ctl-lustre-MDT0000: No data found on store. Initialize space: rc = -61
[247957.320870] Lustre: lustre-MDT0000: new disk, initializing
[247957.339874] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180
[247957.351550] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000200000400-0x0000000240000400]:0:mdt
[247957.372561] VFS: Open an exclusive opened block device for write dm-2. current [130389 tune2fs]. parent [130388 sh]
[247958.372975] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192
[247964.605796] Lustre: Setting parameter lustre-MDT0001.mdt.identity_upcall in log lustre-MDT0001
[247965.656883] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000240000400-0x0000000280000400]:1:mdt
[247970.482191] device-mapper: ioctl: dmsetup[131405]: dm-3 (mds3_flakey) is created successfully
[247972.500077] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[247973.121928] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[247973.144920] Lustre: Setting parameter lustre-MDT0002.mdt.identity_upcall in log lustre-MDT0002
[247973.160282] Lustre: srv-lustre-MDT0002: No data found on store. Initialize space: rc = -61
[247973.170767] Lustre: Skipped 1 previous similar message
[247973.194311] Lustre: lustre-MDT0002: new disk, initializing
[247973.212538] Lustre: lustre-MDT0002: Imperative Recovery not enabled, recovery window 60-180
[247973.228255] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000280000400-0x00000002c0000400]:2:mdt
[247973.236118] VFS: Open an exclusive opened block device for write dm-3. current [131746 tune2fs]. parent [131745 sh]
[247973.240760] Lustre: cli-ctl-lustre-MDT0002: Allocated super-sequence [0x0000000280000400-0x00000002c0000400]:2:mdt]
[247974.225838] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192
[247980.381502] Lustre: Setting parameter lustre-MDT0003.mdt.identity_upcall in log lustre-MDT0003
[247980.423822] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x00000002c0000400-0x0000000300000400]:3:mdt
[247985.271626] device-mapper: ioctl: dmsetup[132778]: dm-4 (mds5_flakey) is created successfully
[247987.360587] LDISKFS-fs (dm-4): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[247987.986829] LDISKFS-fs (dm-4): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[247988.010593] Lustre: Setting parameter lustre-MDT0004.mdt.identity_upcall in log lustre-MDT0004
[247988.032243] Lustre: srv-lustre-MDT0004: No data found on store. Initialize space: rc = -61
[247988.062214] Lustre: lustre-MDT0004: new disk, initializing
[247988.081917] Lustre: lustre-MDT0004: Imperative Recovery not enabled, recovery window 60-180
[247988.095581] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000300000400-0x0000000340000400]:4:mdt
[247988.102886] VFS: Open an exclusive opened block device for write dm-4. current [133133 tune2fs]. parent [133132 sh]
[247988.108103] Lustre: cli-ctl-lustre-MDT0004: Allocated super-sequence [0x0000000300000400-0x0000000340000400]:4:mdt]
[247989.083897] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192
[248000.219551] device-mapper: ioctl: dmsetup[134180]: dm-5 (mds7_flakey) is created successfully
[248002.258848] LDISKFS-fs (dm-5): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[248002.882096] LDISKFS-fs (dm-5): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[248002.907137] Lustre: Setting parameter lustre-MDT0006.mdt.identity_upcall in log lustre-MDT0006
[248002.917969] Lustre: Skipped 1 previous similar message
[248002.928412] Lustre: srv-lustre-MDT0006: No data found on store. Initialize space: rc = -61
[248002.955323] Lustre: lustre-MDT0006: new disk, initializing
[248002.976121] Lustre: lustre-MDT0006: Imperative Recovery not enabled, recovery window 60-180
[248002.991431] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000380000400-0x00000003c0000400]:6:mdt
[248002.999936] VFS: Open an exclusive opened block device for write dm-5. current [134543 tune2fs]. parent [134542 sh]
[248003.003909] Lustre: Skipped 1 previous similar message
[248003.003947] Lustre: cli-ctl-lustre-MDT0006: Allocated super-sequence [0x0000000380000400-0x00000003c0000400]:6:mdt]
[248003.978461] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192
[248028.821953] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000440000400-0x0000000480000400]:1:ost
[248028.834835] Lustre: Skipped 2 previous similar messages
[248063.497896] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x00000005c0000400-0x0000000600000400]:7:ost
[248063.510897] Lustre: Skipped 5 previous similar messages
[248066.652177] Lustre: DEBUG MARKER: Using TIMEOUT=20
[248070.376338] Lustre: Modifying parameter general.lod.*.mdt_hash in log params
[248070.385509] Lustre: Skipped 1 previous similar message
[256210.705267] LustreError: 11-0: lustre-MDT0000-osp-MDT0004: operation mds_statfs to node 0@lo failed: rc = -107
[256210.717248] Lustre: lustre-MDT0000-osp-MDT0004: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[256210.735095] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping)
[256211.217236] LustreError: 11-0: lustre-MDT0000-osp-MDT0002: operation mds_statfs to node 0@lo failed: rc = -107
[256211.229402] Lustre: lustre-MDT0000-osp-MDT0002: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[256211.247240] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping)
[256212.371123] Lustre: lustre-MDT0000: Not available for connect from 192.168.0.84@o2ib (stopping)
[256214.401941] Lustre: lustre-MDT0000: Not available for connect from 192.168.0.84@o2ib (stopping)
[256214.412467] Lustre: Skipped 14 previous similar messages
[256214.545264] Lustre: lustre-MDT0000-lwp-MDT0002: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[256214.562976] Lustre: Skipped 3 previous similar messages
[256219.135497] Lustre: lustre-MDT0000: Not available for connect from 192.168.0.85@o2ib (stopping)
[256219.146241] Lustre: Skipped 13 previous similar messages
[256224.255502] Lustre: lustre-MDT0002: haven't heard from client fd95df74-9ed1-47c6-a788-48295fcc49b6 (at 192.168.0.81@o2ib) in 47 seconds. I think it's dead, and I am evicting it. exp 0000000068888bae, cur 1709887500 expire 1709887470 last 1709887453
[256225.040944] Lustre: lustre-MDT0000 is waiting for obd_unlinked_exports more than 8 seconds. The obd refcount = 2. Is it stuck?
[256225.057705] LustreError: 136401:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items
[256225.121107] Lustre: server umount lustre-MDT0000 complete
[256227.344636] device-mapper: ioctl: dmsetup[136795]: dm-2 (mds1_flakey) is removed successfully
[256228.887307] LustreError: 130303:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) ldlm_cancel from 192.168.0.84@o2ib arrived at 1709887504 with bad export cookie 14740870769804412695
[256228.905399] LustreError: 130303:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) Skipped 4 previous similar messages
[256229.375506] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.85@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
[256229.375509] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.85@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
[256229.375514] LustreError: Skipped 1 previous similar message
[256229.395711] LustreError: Skipped 2 previous similar messages
[256229.905062] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
[256229.905068] Lustre: lustre-MDT0001-osp-MDT0002: Connection to lustre-MDT0001 (at 192.168.0.84@o2ib) was lost; in progress operations using this service will wait for recovery to complete
[256229.943133] LustreError: Skipped 11 previous similar messages
[256233.857680] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.84@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
[256233.877979] LustreError: Skipped 9 previous similar messages
[256233.885278] LustreError: 130298:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1709887509 with bad export cookie 14740870769804412492
[256233.886073] LustreError: 166-1: MGC192.168.0.83@o2ib: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail
[256233.902183] LustreError: 130298:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) Skipped 4 previous similar messages
[256234.495505] Lustre: lustre-MDT0002: Not available for connect from 192.168.0.85@o2ib (stopping)
[256234.506399] Lustre: Skipped 42 previous similar messages
[256235.024920] Lustre: lustre-MDT0002-osp-MDT0004: Connection to lustre-MDT0002 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[256235.042866] Lustre: Skipped 3 previous similar messages
[256236.303856] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.86@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
[256236.323994] LustreError: Skipped 21 previous similar messages
[256240.144967] LustreError: 137095:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items
[256240.187467] Lustre: server umount lustre-MDT0002 complete
[256241.374354] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.86@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
[256241.394510] LustreError: Skipped 25 previous similar messages
[256242.103606] device-mapper: ioctl: dmsetup[137475]: dm-3 (mds3_flakey) is removed successfully
[256245.264813] Lustre: lustre-MDT0003-osp-MDT0004: Connection to lustre-MDT0003 (at 192.168.0.84@o2ib) was lost; in progress operations using this service will wait for recovery to complete
[256245.283714] Lustre: Skipped 1 previous similar message
[256250.384634] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
[256250.403739] LustreError: Skipped 62 previous similar messages
[256252.927262] Lustre: lustre-MDT0004: Not available for connect from 192.168.0.85@o2ib (stopping)
[256252.938234] Lustre: Skipped 27 previous similar messages
[256256.272511] LustreError: 11-0: lustre-MDT0004-osp-MDT0006: operation mds_statfs to node 0@lo failed: rc = -107
[256258.832634] LustreError: 137773:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items
[256258.873660] Lustre: server umount lustre-MDT0004 complete
[256260.779472] device-mapper: ioctl: dmsetup[138151]: dm-4 (mds5_flakey) is removed successfully
[256266.512435] LustreError: 11-0: lustre-MDT0005-osp-MDT0006: operation mds_statfs to node 192.168.0.84@o2ib failed: rc = -107
[256266.525695] Lustre: lustre-MDT0005-osp-MDT0006: Connection to lustre-MDT0005 (at 192.168.0.84@o2ib) was lost; in progress operations using this service will wait for recovery to complete
[256266.544436] Lustre: Skipped 1 previous similar message
[256266.768366] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
[256266.787625] LustreError: Skipped 113 previous similar messages
[256277.609761] Lustre: server umount lustre-MDT0006 complete
[256279.822599] device-mapper: ioctl: dmsetup[138830]: dm-5 (mds7_flakey) is removed successfully
[256315.440999] Lustre: DEBUG MARKER: server3: executing unload_modules_local
[256316.324906] Key type lgssc unregistered
[256316.543549] LNet: 139436:0:(lib-ptl.c:958:lnet_clear_lazy_portal()) Active lazy portal 0 on exit
[256317.711448] LNet: Removed LNI 192.168.0.83@o2ib
[256318.032022] Key type .llcrypt unregistered
[256318.046268] Key type ._llcrypt unregistered
[256336.567614] LNet: HW NUMA nodes: 4, HW CPU cores: 96, npartitions: 4
[256336.578123] alg: No test for adler32 (adler32-zlib)
[256337.327219] Key type ._llcrypt registered
[256337.333018] Key type .llcrypt registered
[256337.355713] Lustre: DEBUG MARKER: server3: executing set_hostid
[256341.423742] Lustre: DEBUG MARKER: server3: executing load_modules_local
[256341.825073] lnet: unknown parameter '#' ignored
[256341.831230] lnet: unknown parameter 'o2ib1(ibp129s0f1)' ignored
[256341.838428] lnet: unknown parameter '#' ignored
[256341.844198] lnet: unknown parameter 'o2ib1(ibp129s0f1)' ignored
[256341.919986] Lustre: Lustre: Build Version: 2.15.4
[256341.984271] LNet: Using FastReg for registration
[256342.197640] LNet: Added LNI 192.168.0.83@o2ib [8/256/0/180]
[256343.750994] Key type lgssc registered
[256343.904222] Lustre: Echo OBD driver; http://www.lustre.org/
[256631.426841] LDISKFS-fs (nvme0n1): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[257204.608727] LDISKFS-fs (nvme1n1): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[257778.144323] LDISKFS-fs (nvme2n1): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[258351.166570] LDISKFS-fs (nvme3n1): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[258753.391931] Lustre: DEBUG MARKER: server3: executing load_modules_local
[258756.139008] device-mapper: ioctl: dmsetup[145108]: dm-2 (mds1_flakey) is created successfully
[258758.197558] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[258758.828821] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[258759.913741] Lustre: Setting parameter lustre-MDT0000.mdt.identity_upcall in log lustre-MDT0000
[258759.943015] Lustre: ctl-lustre-MDT0000: No data found on store. Initialize space: rc = -61
[258759.972765] Lustre: lustre-MDT0000: new disk, initializing
[258759.996908] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180
[258760.008937] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000200000400-0x0000000240000400]:0:mdt
[258760.030163] VFS: Open an exclusive opened block device for write dm-2. current [145518 tune2fs]. parent [145517 sh]
[258761.024970] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192
[258767.255404] Lustre: Setting parameter lustre-MDT0001.mdt.identity_upcall in log lustre-MDT0001
[258768.308995] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000240000400-0x0000000280000400]:1:mdt
[258773.147574] device-mapper: ioctl: dmsetup[146542]: dm-3 (mds3_flakey) is created successfully
[258775.168823] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[258775.794129] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[258775.817344] Lustre: Setting parameter lustre-MDT0002.mdt.identity_upcall in log lustre-MDT0002
[258775.830674] Lustre: srv-lustre-MDT0002: No data found on store. Initialize space: rc = -61
[258775.840788] Lustre: Skipped 1 previous similar message
[258775.864378] Lustre: lustre-MDT0002: new disk, initializing
[258775.881545] Lustre: lustre-MDT0002: Imperative Recovery not enabled, recovery window 60-180
[258775.897284] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000280000400-0x00000002c0000400]:2:mdt
[258775.904689] VFS: Open an exclusive opened block device for write dm-3. current [146885 tune2fs]. parent [146884 sh]
[258775.909809] Lustre: cli-ctl-lustre-MDT0002: Allocated super-sequence [0x0000000280000400-0x00000002c0000400]:2:mdt]
[258776.871064] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192
[258782.985185] Lustre: Setting parameter lustre-MDT0003.mdt.identity_upcall in log lustre-MDT0003
[258783.030660] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x00000002c0000400-0x0000000300000400]:3:mdt
[258787.862477] device-mapper: ioctl: dmsetup[147911]: dm-4 (mds5_flakey) is created successfully
[258789.895939] LDISKFS-fs (dm-4): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[258790.526285] LDISKFS-fs (dm-4): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[258790.550550] Lustre: Setting parameter lustre-MDT0004.mdt.identity_upcall in log lustre-MDT0004
[258790.563581] Lustre: srv-lustre-MDT0004: No data found on store. Initialize space: rc = -61
[258790.590348] Lustre: lustre-MDT0004: new disk, initializing
[258790.607425] Lustre: lustre-MDT0004: Imperative Recovery not enabled, recovery window 60-180
[258790.621878] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000300000400-0x0000000340000400]:4:mdt
[258790.628297] VFS: Open an exclusive opened block device for write dm-4. current [148266 tune2fs]. parent [148265 sh]
[258790.635021] Lustre: cli-ctl-lustre-MDT0004: Allocated super-sequence [0x0000000300000400-0x0000000340000400]:4:mdt]
[258791.592629] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192
[258802.662062] device-mapper: ioctl: dmsetup[149315]: dm-5 (mds7_flakey) is created successfully
[258804.733176] LDISKFS-fs (dm-5): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[258805.370225] LDISKFS-fs (dm-5): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[258805.395654] Lustre: Setting parameter lustre-MDT0006.mdt.identity_upcall in log lustre-MDT0006
[258805.406471] Lustre: Skipped 1 previous similar message
[258805.417737] Lustre: srv-lustre-MDT0006: No data found on store. Initialize space: rc = -61
[258805.444570] Lustre: lustre-MDT0006: new disk, initializing
[258805.464897] Lustre: lustre-MDT0006: Imperative Recovery not enabled, recovery window 60-180
[258805.480657] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000380000400-0x00000003c0000400]:6:mdt
[258805.489732] VFS: Open an exclusive opened block device for write dm-5. current [149683 tune2fs]. parent [149682 sh]
[258805.493119] Lustre: Skipped 1 previous similar message
[258805.512423] Lustre: cli-ctl-lustre-MDT0006: Allocated super-sequence [0x0000000380000400-0x00000003c0000400]:6:mdt]
[258806.463756] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192
[258826.793108] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000440000400-0x0000000480000400]:1:ost
[258826.805990] Lustre: Skipped 2 previous similar messages
[258860.889406] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000580000400-0x00000005c0000400]:6:ost
[258860.902063] Lustre: Skipped 4 previous similar messages
[258868.925904] Lustre: DEBUG MARKER: Using TIMEOUT=20
[258874.638759] Lustre: Modifying parameter general.lod.*.mdt_hash in log params
[258874.647919] Lustre: Skipped 1 previous similar message
[267784.994784] Lustre: lustre-MDT0000: haven't heard from client e1897424-fd25-459c-a1ec-0725baadea51 (at 192.168.0.82@o2ib) in 48 seconds. I think it's dead, and I am evicting it. exp 00000000e10312ca, cur 1709899061 expire 1709899031 last 1709899013
[267787.859522] Lustre: lustre-MDT0000-osp-MDT0002: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[267787.859694] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping)
[267787.877611] Lustre: Skipped 5 previous similar messages
[267787.887570] Lustre: Skipped 4 previous similar messages
[267788.497070] Lustre: lustre-MDT0000: Not available for connect from 192.168.0.84@o2ib (stopping)
[267788.507911] Lustre: Skipped 2 previous similar messages
[267789.527930] Lustre: lustre-MDT0000: Not available for connect from 192.168.0.84@o2ib (stopping)
[267789.538445] Lustre: Skipped 1 previous similar message
[267791.556022] Lustre: lustre-MDT0000: Not available for connect from 192.168.0.84@o2ib (stopping)
[267791.566603] Lustre: Skipped 15 previous similar messages
[267793.492085] LustreError: 151570:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items
[267793.578664] Lustre: server umount lustre-MDT0000 complete
[267794.753016] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.85@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
[267794.753018] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.85@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
[267794.792870] LustreError: Skipped 2 previous similar messages
[267795.877038] device-mapper: ioctl: dmsetup[151953]: dm-2 (mds1_flakey) is removed successfully
[267796.675945] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.84@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
[267796.695927] LustreError: Skipped 11 previous similar messages
[267797.430265] LustreError: 146865:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) ldlm_cancel from 192.168.0.84@o2ib arrived at 1709899073 with bad export cookie 7025289021934115571
[267797.448190] LustreError: 146865:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) Skipped 4 previous similar messages
[267798.099431] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
[267798.099483] Lustre: lustre-MDT0001-osp-MDT0002: Connection to lustre-MDT0001 (at 192.168.0.84@o2ib) was lost; in progress operations using this service will wait for recovery to complete
[267798.118553] LustreError: Skipped 5 previous similar messages
[267798.137709] Lustre: Skipped 2 previous similar messages
[267800.354584] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.86@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
[267800.374427] LustreError: Skipped 7 previous similar messages
[267805.267157] Lustre: 141987:0:(client.c:2289:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1709899074/real 1709899074] req@000000007331dbc4 x1792947721082624/t0(0) o400->MGC192.168.0.83@o2ib@0@lo:26/25 lens 224/224 e 0 to 1 dl 1709899081 ref 1 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'kworker/u192:2.0'
[267805.300016] LustreError: 166-1: MGC192.168.0.83@o2ib: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail
[267805.314845] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
[267805.334024] LustreError: Skipped 37 previous similar messages
[267806.803193] LustreError: 11-0: lustre-MDT0002-osp-MDT0004: operation mds_statfs to node 0@lo failed: rc = -107
[267806.815418] Lustre: lustre-MDT0002-osp-MDT0004: Connection to lustre-MDT0002 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[267806.833350] Lustre: lustre-MDT0002: Not available for connect from 0@lo (stopping)
[267806.843130] Lustre: Skipped 6 previous similar messages
[267810.387290] Lustre: lustre-MDT0002-osp-MDT0006: Connection to lustre-MDT0002 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[267812.691277] LustreError: 152252:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items
[267812.729061] Lustre: server umount lustre-MDT0002 complete
[267813.666432] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.86@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
[267813.686763] LustreError: Skipped 45 previous similar messages
[267814.659756] device-mapper: ioctl: dmsetup[152631]: dm-3 (mds3_flakey) is removed successfully
[267816.531219] LustreError: 11-0: lustre-MDT0003-osp-MDT0006: operation mds_statfs to node 192.168.0.84@o2ib failed: rc = -107
[267816.531221] Lustre: lustre-MDT0003-osp-MDT0004: Connection to lustre-MDT0003 (at 192.168.0.84@o2ib) was lost; in progress operations using this service will wait for recovery to complete
[267826.770906] LustreError: 11-0: lustre-MDT0004-osp-MDT0006: operation mds_statfs to node 0@lo failed: rc = -107
[267826.782783] Lustre: lustre-MDT0004-osp-MDT0006: Connection to lustre-MDT0004 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[267826.800492] Lustre: Skipped 1 previous similar message
[267826.807061] Lustre: lustre-MDT0004: Not available for connect from 0@lo (stopping)
[267826.816496] Lustre: Skipped 23 previous similar messages
[267830.050126] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.86@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
[267830.070318] LustreError: Skipped 110 previous similar messages
[267831.379022] LustreError: 152934:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items
[267831.418002] Lustre: server umount lustre-MDT0004 complete
[267833.327132] device-mapper: ioctl: dmsetup[153314]: dm-4 (mds5_flakey) is removed successfully
[267837.010831] LustreError: 11-0: lustre-MDT0005-osp-MDT0006: operation mds_statfs to node 192.168.0.84@o2ib failed: rc = -107
[267844.311459] Lustre: lustre-MDT0006: Not available for connect from 192.168.0.84@o2ib (stopping)
[267844.322051] Lustre: Skipped 14 previous similar messages
[267850.179163] Lustre: server umount lustre-MDT0006 complete
[267852.858493] device-mapper: ioctl: dmsetup[153995]: dm-5 (mds7_flakey) is removed successfully
[267888.452473] Lustre: DEBUG MARKER: server3: executing unload_modules_local
[267889.287867] Key type lgssc unregistered
[267889.473854] LNet: 154604:0:(lib-ptl.c:958:lnet_clear_lazy_portal()) Active lazy portal 0 on exit
[267890.641755] LNet: Removed LNI 192.168.0.83@o2ib
[267891.014005] Key type .llcrypt unregistered
[267891.028225] Key type ._llcrypt unregistered
[267909.418583] LNet: HW NUMA nodes: 4, HW CPU cores: 96, npartitions: 4
[267909.428973] alg: No test for adler32 (adler32-zlib)
[267910.177531] Key type ._llcrypt registered
[267910.182841] Key type .llcrypt registered
[267910.207882] Lustre: DEBUG MARKER: server3: executing set_hostid
[267914.379900] Lustre: DEBUG MARKER: server3: executing load_modules_local
[267914.784330] lnet: unknown parameter '#' ignored
[267914.790503] lnet: unknown parameter 'o2ib1(ibp129s0f1)' ignored
[267914.797697] lnet: unknown parameter '#' ignored
[267914.803470] lnet: unknown parameter 'o2ib1(ibp129s0f1)' ignored
[267914.878666] Lustre: Lustre: Build Version: 2.15.4
[267914.939163] LNet: Using FastReg for registration
[267915.142469] LNet: Added LNI 192.168.0.83@o2ib [8/256/0/180]
[267916.693307] Key type lgssc registered
[267916.848195] Lustre: Echo OBD driver; http://www.lustre.org/
[268204.191368] LDISKFS-fs (nvme0n1): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[268776.899028] LDISKFS-fs (nvme1n1): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[269350.180453] LDISKFS-fs (nvme2n1): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[269922.715033] LDISKFS-fs (nvme3n1): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[270326.704328] Lustre: DEBUG MARKER: server3: executing load_modules_local
[270329.446014] device-mapper: ioctl: dmsetup[160271]: dm-2 (mds1_flakey) is created successfully
[270331.508550] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[270332.157317] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[270333.242666] Lustre: Setting parameter lustre-MDT0000.mdt.identity_upcall in log lustre-MDT0000
[270333.269436] Lustre: ctl-lustre-MDT0000: No data found on store. Initialize space: rc = -61
[270333.298477] Lustre: lustre-MDT0000: new disk, initializing
[270333.321789] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180
[270333.333450] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000200000400-0x0000000240000400]:0:mdt
[270333.354252] VFS: Open an exclusive opened block device for write dm-2. current [160681 tune2fs]. parent [160680 sh]
[270334.343579] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192
[270340.589896] Lustre: Setting parameter lustre-MDT0001.mdt.identity_upcall in log lustre-MDT0001
[270341.642719] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000240000400-0x0000000280000400]:1:mdt
[270346.470576] device-mapper: ioctl: dmsetup[161703]: dm-3 (mds3_flakey) is created successfully
[270348.498834] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[270349.124931] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[270349.148241] Lustre: Setting parameter lustre-MDT0002.mdt.identity_upcall in log lustre-MDT0002
[270349.164171] Lustre: srv-lustre-MDT0002: No data found on store. Initialize space: rc = -61
[270349.174644] Lustre: Skipped 1 previous similar message
[270349.198120] Lustre: lustre-MDT0002: new disk, initializing
[270349.220466] Lustre: lustre-MDT0002: Imperative Recovery not enabled, recovery window 60-180
[270349.235357] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000280000400-0x00000002c0000400]:2:mdt
[270349.242640] VFS: Open an exclusive opened block device for write dm-3. current [162043 tune2fs]. parent [162042 sh]
[270349.247852] Lustre: cli-ctl-lustre-MDT0002: Allocated super-sequence [0x0000000280000400-0x00000002c0000400]:2:mdt]
[270350.219438] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192
[270356.387498] Lustre: Setting parameter lustre-MDT0003.mdt.identity_upcall in log lustre-MDT0003
[270356.431171] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x00000002c0000400-0x0000000300000400]:3:mdt
[270361.268145] device-mapper: ioctl: dmsetup[163070]: dm-4 (mds5_flakey) is created successfully
[270363.314190] LDISKFS-fs (dm-4): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[270363.954308] LDISKFS-fs (dm-4): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[270363.978448] Lustre: Setting parameter lustre-MDT0004.mdt.identity_upcall in log lustre-MDT0004
[270363.993562] Lustre: srv-lustre-MDT0004: No data found on store. Initialize space: rc = -61
[270364.020743] Lustre: lustre-MDT0004: new disk, initializing
[270364.038045] Lustre: lustre-MDT0004: Imperative Recovery not enabled, recovery window 60-180
[270364.051342] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000300000400-0x0000000340000400]:4:mdt
[270364.059776] VFS: Open an exclusive opened block device for write dm-4. current [163423 tune2fs]. parent [163422 sh]
[270364.064544] Lustre: cli-ctl-lustre-MDT0004: Allocated super-sequence [0x0000000300000400-0x0000000340000400]:4:mdt]
[270365.028442] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192
[270376.103954] device-mapper: ioctl: dmsetup[164464]: dm-5 (mds7_flakey) is created successfully
[270378.143611] LDISKFS-fs (dm-5): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[270378.769121] LDISKFS-fs (dm-5): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[270378.794063] Lustre: Setting parameter lustre-MDT0006.mdt.identity_upcall in log lustre-MDT0006
[270378.804892] Lustre: Skipped 1 previous similar message
[270378.815767] Lustre: srv-lustre-MDT0006: No data found on store. Initialize space: rc = -61
[270378.842069] Lustre: lustre-MDT0006: new disk, initializing
[270378.863688] Lustre: lustre-MDT0006: Imperative Recovery not enabled, recovery window 60-180
[270378.878970] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000380000400-0x00000003c0000400]:6:mdt
[270378.887753] VFS: Open an exclusive opened block device for write dm-5. current [164829 tune2fs]. parent [164828 sh]
[270378.892089] Lustre: Skipped 1 previous similar message
[270378.911347] Lustre: cli-ctl-lustre-MDT0006: Allocated super-sequence [0x0000000380000400-0x00000003c0000400]:6:mdt]
[270379.874343] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192
[270401.002839] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000440000400-0x0000000480000400]:1:ost
[270401.015838] Lustre: Skipped 2 previous similar messages
[270439.722263] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x00000005c0000400-0x0000000600000400]:7:ost
[270439.735264] Lustre: Skipped 5 previous similar messages
[270442.408405] Lustre: DEBUG MARKER: Using TIMEOUT=20
[270445.133584] Lustre: Modifying parameter general.lod.*.mdt_hash in log params
[270445.142503] Lustre: Skipped 1 previous similar message
[279654.782296] Lustre: lustre-MDT0002: haven't heard from client ffec1193-bdd4-4213-bcc8-906be32814ec (at 192.168.0.82@o2ib) in 49 seconds. I think it's dead, and I am evicting it. exp 00000000ec6c14e9, cur 1709910931 expire 1709910901 last 1709910882
[279674.126658] Lustre: lustre-MDT0000: Not available for connect from 192.168.0.84@o2ib (stopping)
[279674.638233] Lustre: lustre-MDT0000: Not available for connect from 192.168.0.84@o2ib (stopping)
[279676.001228] Lustre: lustre-MDT0000: Not available for connect from 192.168.0.86@o2ib (stopping)
[279676.012093] Lustre: Skipped 6 previous similar messages
[279676.048661] Lustre: lustre-MDT0000-osp-MDT0002: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[279676.066608] Lustre: Skipped 5 previous similar messages
[279678.721170] Lustre: lustre-MDT0000: Not available for connect from 192.168.0.84@o2ib (stopping)
[279678.731728] Lustre: Skipped 13 previous similar messages
[279679.889223] LustreError: 166751:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items
[279679.976063] Lustre: server umount lustre-MDT0000 complete
[279680.382035] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.85@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
[279680.382038] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.85@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
[279680.421919] LustreError: Skipped 2 previous similar messages
[279681.121082] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.86@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
[279681.141300] LustreError: Skipped 3 previous similar messages
[279682.366218] device-mapper: ioctl: dmsetup[167135]: dm-2 (mds1_flakey) is removed successfully
[279683.845191] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.84@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
[279683.865198] LustreError: Skipped 13 previous similar messages
[279683.869744] LustreError: 164794:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) ldlm_cancel from 192.168.0.84@o2ib arrived at 1709910960 with bad export cookie 4691720130242302455
[279683.890269] LustreError: 164794:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) Skipped 4 previous similar messages
[279686.241014] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.86@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
[279686.261300] LustreError: Skipped 7 previous similar messages
[279686.288522] Lustre: lustre-MDT0001-osp-MDT0002: Connection to lustre-MDT0001 (at 192.168.0.84@o2ib) was lost; in progress operations using this service will wait for recovery to complete
[279686.307709] Lustre: Skipped 2 previous similar messages
[279688.336361] Lustre: 157142:0:(client.c:2289:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1709910957/real 1709910957] req@00000000aac5c3f8 x1792959924086848/t0(0) o400->MGC192.168.0.83@o2ib@0@lo:26/25 lens 224/224 e 0 to 1 dl 1709910964 ref 1 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'kworker/u192:4.0'
[279688.368929] LustreError: 166-1: MGC192.168.0.83@o2ib: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail
[279692.669854] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.85@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
[279692.669857] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.85@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
[279692.669863] LustreError: Skipped 32 previous similar messages
[279692.690048] LustreError: Skipped 34 previous similar messages
[279693.408999] Lustre: lustre-MDT0002: Not available for connect from 192.168.0.86@o2ib (stopping)
[279693.420029] Lustre: Skipped 3 previous similar messages
[279693.456423] Lustre: lustre-MDT0002-osp-MDT0004: Connection to lustre-MDT0002 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[279693.474445] Lustre: Skipped 1 previous similar message
[279699.088473] LustreError: 167438:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items
[279699.128750] Lustre: server umount lustre-MDT0002 complete
[279701.039509] device-mapper: ioctl: dmsetup[167820]: dm-3 (mds3_flakey) is removed successfully
[279701.248932] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.84@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
[279701.269113] LustreError: Skipped 59 previous similar messages
[279702.672307] LustreError: 11-0: lustre-MDT0003-osp-MDT0004: operation mds_statfs to node 192.168.0.84@o2ib failed: rc = -107
[279702.685251] Lustre: lustre-MDT0003-osp-MDT0004: Connection to lustre-MDT0003 (at 192.168.0.84@o2ib) was lost; in progress operations using this service will wait for recovery to complete
[279712.400064] LustreError: 11-0: lustre-MDT0004-osp-MDT0006: operation mds_statfs to node 0@lo failed: rc = -107
[279712.411900] Lustre: lustre-MDT0004-osp-MDT0006: Connection to lustre-MDT0004 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[279712.429554] Lustre: Skipped 1 previous similar message
[279712.436069] Lustre: lustre-MDT0004: Not available for connect from 0@lo (stopping)
[279712.445464] Lustre: Skipped 22 previous similar messages
[279717.632642] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.84@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
[279717.652629] LustreError: Skipped 98 previous similar messages
[279717.776132] LustreError: 168119:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items
[279717.816574] Lustre: server umount lustre-MDT0004 complete
[279719.741632] device-mapper: ioctl: dmsetup[168496]: dm-4 (mds5_flakey) is removed successfully
[279722.639966] LustreError: 11-0: lustre-MDT0005-osp-MDT0006: operation mds_statfs to node 192.168.0.84@o2ib failed: rc = -107
[279722.652944] Lustre: lustre-MDT0005-osp-MDT0006: Connection to lustre-MDT0005 (at 192.168.0.84@o2ib) was lost; in progress operations using this service will wait for recovery to complete
[279730.557473] Lustre: lustre-MDT0006: Not available for connect from 192.168.0.85@o2ib (stopping)
[279730.568256] Lustre: Skipped 18 previous similar messages
[279736.553292] Lustre: server umount lustre-MDT0006 complete
[279739.880259] device-mapper: ioctl: dmsetup[169175]: dm-5 (mds7_flakey) is removed successfully
[279776.568919] Lustre: DEBUG MARKER: server3: executing unload_modules_local
[279777.472069] Key type lgssc unregistered
[279777.698999] LNet: 169781:0:(lib-ptl.c:958:lnet_clear_lazy_portal()) Active lazy portal 0 on exit
[279778.894890] LNet: Removed LNI 192.168.0.83@o2ib
[279779.259242] Key type .llcrypt unregistered
[279779.273487] Key type ._llcrypt unregistered
[279797.730028] LNet: HW NUMA nodes: 4, HW CPU cores: 96, npartitions: 4
[279797.740597] alg: No test for adler32 (adler32-zlib)
[279798.490695] Key type ._llcrypt registered
[279798.496466] Key type .llcrypt registered
[279798.519519] Lustre: DEBUG MARKER: server3: executing set_hostid
[279802.658443] Lustre: DEBUG MARKER: server3: executing load_modules_local
[279803.065555] lnet: unknown parameter '#' ignored
[279803.071705] lnet: unknown parameter 'o2ib1(ibp129s0f1)' ignored
[279803.078904] lnet: unknown parameter '#' ignored
[279803.084678] lnet: unknown parameter 'o2ib1(ibp129s0f1)' ignored
[279803.158899] Lustre: Lustre: Build Version: 2.15.4
[279803.217418] LNet: Using FastReg for registration
[279803.426168] LNet: Added LNI 192.168.0.83@o2ib [8/256/0/180]
[279804.970445] Key type lgssc registered
[279805.135791] Lustre: Echo OBD driver; http://www.lustre.org/
[280092.226598] LDISKFS-fs (nvme0n1): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[280665.033196] LDISKFS-fs (nvme1n1): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[281237.597291] LDISKFS-fs (nvme2n1): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[281810.044049] LDISKFS-fs (nvme3n1): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[282217.296331] Lustre: DEBUG MARKER: server3: executing load_modules_local
[282220.075753] device-mapper: ioctl: dmsetup[175463]: dm-2 (mds1_flakey) is created successfully
[282222.136048] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[282222.770183] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[282223.854969] Lustre: Setting parameter lustre-MDT0000.mdt.identity_upcall in log lustre-MDT0000
[282223.881380] Lustre: ctl-lustre-MDT0000: No data found on store. Initialize space: rc = -61
[282223.911135] Lustre: lustre-MDT0000: new disk, initializing
[282223.930267] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180
[282223.944812] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000200000400-0x0000000240000400]:0:mdt
[282223.966184] VFS: Open an exclusive opened block device for write dm-2. current [175874 tune2fs]. parent [175873 sh]
[282224.946903] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192
[282231.152654] Lustre: Setting parameter lustre-MDT0001.mdt.identity_upcall in log lustre-MDT0001
[282232.203873] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000240000400-0x0000000280000400]:1:mdt
[282237.008217] device-mapper: ioctl: dmsetup[176896]: dm-3 (mds3_flakey) is created successfully
[282239.037919] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[282239.673622] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[282239.696653] Lustre: Setting parameter lustre-MDT0002.mdt.identity_upcall in log lustre-MDT0002
[282239.709490] Lustre: srv-lustre-MDT0002: No data found on store. Initialize space: rc = -61
[282239.719720] Lustre: Skipped 1 previous similar message
[282239.742382] Lustre: lustre-MDT0002: new disk, initializing
[282239.757175] Lustre: lustre-MDT0002: Imperative Recovery not enabled, recovery window 60-180
[282239.769658] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000280000400-0x00000002c0000400]:2:mdt
[282239.775992] VFS: Open an exclusive opened block device for write dm-3. current [177237 tune2fs]. parent [177236 sh]
[282239.782513] Lustre: cli-ctl-lustre-MDT0002: Allocated super-sequence [0x0000000280000400-0x00000002c0000400]:2:mdt]
[282240.737130] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192
[282246.851459] Lustre: Setting parameter lustre-MDT0003.mdt.identity_upcall in log lustre-MDT0003
[282246.892523] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x00000002c0000400-0x0000000300000400]:3:mdt
[282251.717187] device-mapper: ioctl: dmsetup[178265]: dm-4 (mds5_flakey) is created successfully
[282253.753417] LDISKFS-fs (dm-4): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[282254.376566] LDISKFS-fs (dm-4): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[282254.400707] Lustre: Setting parameter lustre-MDT0004.mdt.identity_upcall in log lustre-MDT0004
[282254.416863] Lustre: srv-lustre-MDT0004: No data found on store. Initialize space: rc = -61
[282254.443344] Lustre: lustre-MDT0004: new disk, initializing
[282254.462796] Lustre: lustre-MDT0004: Imperative Recovery not enabled, recovery window 60-180
[282254.476346] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000300000400-0x0000000340000400]:4:mdt
[282254.483910] VFS: Open an exclusive opened block device for write dm-4. current [178618 tune2fs]. parent [178617 sh]
[282254.489156] Lustre: cli-ctl-lustre-MDT0004: Allocated super-sequence [0x0000000300000400-0x0000000340000400]:4:mdt]
[282255.457816] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192
[282266.502118] device-mapper: ioctl: dmsetup[179674]: dm-5 (mds7_flakey) is created successfully
[282268.569744] LDISKFS-fs (dm-5): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[282269.197366] LDISKFS-fs (dm-5): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[282269.222002] Lustre: Setting parameter lustre-MDT0006.mdt.identity_upcall in log lustre-MDT0006
[282269.232885] Lustre: Skipped 1 previous similar message
[282269.244937] Lustre: lustre-MDT0006: Not available for connect from 0@lo (not set up)
[282269.245174] Lustre: srv-lustre-MDT0006: No data found on store. Initialize space: rc = -61
[282269.280925] Lustre: lustre-MDT0006: new disk, initializing
[282269.300970] Lustre: lustre-MDT0006: Imperative Recovery not enabled, recovery window 60-180
[282269.316202] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000380000400-0x00000003c0000400]:6:mdt
[282269.324328] VFS: Open an exclusive opened block device for write dm-5. current [180045 tune2fs]. parent [180044 sh]
[282269.328984] Lustre: Skipped 1 previous similar message
[282269.348507] Lustre: cli-ctl-lustre-MDT0006: Allocated super-sequence [0x0000000380000400-0x00000003c0000400]:6:mdt]
[282270.328384] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192
[282289.113476] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000400000400-0x0000000440000400]:0:ost
[282289.126191] Lustre: Skipped 1 previous similar message
[282326.008759] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x00000005c0000400-0x0000000600000400]:7:ost
[282326.021627] Lustre: Skipped 6 previous similar messages
[282332.785117] Lustre: DEBUG MARKER: Using TIMEOUT=20
[282335.476480] Lustre: Modifying parameter general.lod.*.mdt_hash in log params
[282335.485679] Lustre: Skipped 1 previous similar message
[303858.674926] Lustre: lustre-MDT0002: haven't heard from client 744c6f95-a11b-4117-b31a-9101930ea37f (at 192.168.0.81@o2ib) in 48 seconds. I think it's dead, and I am evicting it. exp 00000000b52bb6e6, cur 1709935135 expire 1709935105 last 1709935087
[303894.657782] Lustre: lustre-MDT0000: Not available for connect from 192.168.0.84@o2ib (stopping)
[303895.247866] Lustre: lustre-MDT0000: Not available for connect from 192.168.0.86@o2ib (stopping)
[303895.258716] Lustre: Skipped 2 previous similar messages
[303895.812760] LustreError: 11-0: lustre-MDT0000-osp-MDT0006: operation mds_statfs to node 0@lo failed: rc = -107
[303895.825157] Lustre: lustre-MDT0000-osp-MDT0006: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[303896.324652] LustreError: 11-0: lustre-MDT0000-osp-MDT0004: operation mds_statfs to node 0@lo failed: rc = -107
[303896.324753] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping)
[303896.336808] Lustre: lustre-MDT0000-osp-MDT0004: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[303896.346565] Lustre: Skipped 14 previous similar messages
[303896.364744] Lustre: Skipped 4 previous similar messages
[303899.634558] Lustre: lustre-MDT0000: Not available for connect from 192.168.0.85@o2ib (stopping)
[303899.645575] Lustre: Skipped 1 previous similar message
[303900.704343] Lustre: server umount lustre-MDT0000 complete
[303901.300238] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.84@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
[303901.320236] LustreError: Skipped 6 previous similar messages
[303902.614393] device-mapper: ioctl: dmsetup[182661]: dm-2 (mds1_flakey) is removed successfully
[303904.159184] LustreError: 176444:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) ldlm_cancel from 192.168.0.84@o2ib arrived at 1709935180 with bad export cookie 5747212402920871554
[303904.177154] LustreError: 176444:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) Skipped 4 previous similar messages
[303904.754316] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.85@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
[303904.774496] LustreError: Skipped 9 previous similar messages
[303906.052564] LustreError: 11-0: lustre-MDT0001-osp-MDT0006: operation mds_statfs to node 192.168.0.84@o2ib failed: rc = -107
[303906.065823] Lustre: lustre-MDT0001-osp-MDT0006: Connection to lustre-MDT0001 (at 192.168.0.84@o2ib) was lost; in progress operations using this service will wait for recovery to complete
[303906.420163] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.84@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
[303906.440229] LustreError: Skipped 9 previous similar messages
[303908.612405] Lustre: 172393:0:(client.c:2289:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1709935178/real 1709935178] req@00000000385ce1e8 x1792972279431488/t0(0) o400->MGC192.168.0.83@o2ib@0@lo:26/25 lens 224/224 e 0 to 1 dl 1709935185 ref 1 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'kworker/u192:0.0'
[303908.645201] LustreError: 166-1: MGC192.168.0.83@o2ib: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail
[303908.659964] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
[303908.679318] LustreError: Skipped 11 previous similar messages
[303912.705211] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.86@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
[303912.725558] LustreError: Skipped 21 previous similar messages
[303913.345300] Lustre: lustre-MDT0002: Not available for connect from 192.168.0.84@o2ib (stopping)
[303913.356155] Lustre: Skipped 6 previous similar messages
[303913.732520] Lustre: lustre-MDT0002-osp-MDT0004: Connection to lustre-MDT0002 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[303913.750550] Lustre: Skipped 3 previous similar messages
[303919.364445] LustreError: 182966:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items
[303919.403139] Lustre: server umount lustre-MDT0002 complete
[303920.755990] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.84@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
[303920.776009] LustreError: Skipped 53 previous similar messages
[303921.319957] device-mapper: ioctl: dmsetup[183345]: dm-3 (mds3_flakey) is removed successfully
[303924.996289] Lustre: lustre-MDT0003-osp-MDT0004: Connection to lustre-MDT0003 (at 192.168.0.84@o2ib) was lost; in progress operations using this service will wait for recovery to complete
[303925.015237] Lustre: Skipped 1 previous similar message
[303933.831726] Lustre: lustre-MDT0004: Not available for connect from 192.168.0.84@o2ib (stopping)
[303933.842283] Lustre: Skipped 25 previous similar messages
[303936.260076] Lustre: lustre-MDT0004-osp-MDT0006: Connection to lustre-MDT0004 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[303937.139696] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.84@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
[303937.159645] LustreError: Skipped 108 previous similar messages
[303938.052212] LustreError: 183650:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items
[303938.089072] Lustre: server umount lustre-MDT0004 complete
[303939.996022] device-mapper: ioctl: dmsetup[184030]: dm-4 (mds5_flakey) is removed successfully
[303950.542991] Lustre: lustre-MDT0006: Not available for connect from 192.168.0.86@o2ib (stopping)
[303950.553928] Lustre: Skipped 17 previous similar messages
[303956.845700] Lustre: server umount lustre-MDT0006 complete
[303960.539046] device-mapper: ioctl: dmsetup[184708]: dm-5 (mds7_flakey) is removed successfully
[303997.936343] Lustre: DEBUG MARKER: server3: executing unload_modules_local
[303998.820092] Key type lgssc unregistered
[303999.059023] LNet: 185326:0:(lib-ptl.c:958:lnet_clear_lazy_portal()) Active lazy portal 0 on exit
[304000.226921] LNet: Removed LNI 192.168.0.83@o2ib
[304000.607274] Key type .llcrypt unregistered
[304000.621615] Key type ._llcrypt unregistered
[304019.038269] LNet: HW NUMA nodes: 4, HW CPU cores: 96, npartitions: 4
[304019.048686] alg: No test for adler32 (adler32-zlib)
[304019.798676] Key type ._llcrypt registered
[304019.804013] Key type .llcrypt registered
[304019.827222] Lustre: DEBUG MARKER: server3: executing set_hostid
[304023.942611] Lustre: DEBUG MARKER: server3: executing load_modules_local
[304024.337280] lnet: unknown parameter '#' ignored
[304024.343165] lnet: unknown parameter 'o2ib1(ibp129s0f1)' ignored
[304024.350342] lnet: unknown parameter '#' ignored
[304024.356121] lnet: unknown parameter 'o2ib1(ibp129s0f1)' ignored
[304024.434248] Lustre: Lustre: Build Version: 2.15.4
[304024.495133] LNet: Using FastReg for registration
[304024.696862] LNet: Added LNI 192.168.0.83@o2ib [8/256/0/180]
[304026.250471] Key type lgssc registered
[304026.404412] Lustre: Echo OBD driver; http://www.lustre.org/
[304313.758989] LDISKFS-fs (nvme0n1): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[304886.433212] LDISKFS-fs (nvme1n1): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[305459.026247] LDISKFS-fs (nvme2n1): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[306031.248156] LDISKFS-fs (nvme3n1): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[306444.477621] Lustre: DEBUG MARKER: server3: executing load_modules_local
[306447.228045] device-mapper: ioctl: dmsetup[190971]: dm-2 (mds1_flakey) is created successfully
[306449.301987] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[306449.942009] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[306451.031218] Lustre: Setting parameter lustre-MDT0000.mdt.identity_upcall in log lustre-MDT0000
[306451.059036] Lustre: ctl-lustre-MDT0000: No data found on store. Initialize space: rc = -61
[306451.087165] Lustre: lustre-MDT0000: new disk, initializing
[306451.108563] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180
[306451.120516] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000200000400-0x0000000240000400]:0:mdt
[306451.141960] VFS: Open an exclusive opened block device for write dm-2. current [191382 tune2fs]. parent [191381 sh]
[306452.147280] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192
[306458.398863] Lustre: Setting parameter lustre-MDT0001.mdt.identity_upcall in log lustre-MDT0001
[306459.451348] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000240000400-0x0000000280000400]:1:mdt
[306464.295217] device-mapper: ioctl: dmsetup[192404]: dm-3 (mds3_flakey) is created successfully
[306466.346456] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[306466.976504] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[306466.999939] Lustre: Setting parameter lustre-MDT0002.mdt.identity_upcall in log lustre-MDT0002
[306467.017484] Lustre: srv-lustre-MDT0002: No data found on store. Initialize space: rc = -61
[306467.028213] Lustre: Skipped 1 previous similar message
[306467.052368] Lustre: lustre-MDT0002: new disk, initializing
[306467.069124] Lustre: lustre-MDT0002: Imperative Recovery not enabled, recovery window 60-180
[306467.084651] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000280000400-0x00000002c0000400]:2:mdt
[306467.092393] VFS: Open an exclusive opened block device for write dm-3. current [192747 tune2fs]. parent [192746 sh]
[306467.097528] Lustre: cli-ctl-lustre-MDT0002: Allocated super-sequence [0x0000000280000400-0x00000002c0000400]:2:mdt]
[306468.059390] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192
[306474.210698] Lustre: Setting parameter lustre-MDT0003.mdt.identity_upcall in log lustre-MDT0003
[306474.252690] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x00000002c0000400-0x0000000300000400]:3:mdt
[306479.088794] device-mapper: ioctl: dmsetup[193776]: dm-4 (mds5_flakey) is created successfully
[306481.127888] LDISKFS-fs (dm-4): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[306481.753936] LDISKFS-fs (dm-4): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[306481.778167] Lustre: Setting parameter lustre-MDT0004.mdt.identity_upcall in log lustre-MDT0004
[306481.796994] Lustre: srv-lustre-MDT0004: No data found on store. Initialize space: rc = -61
[306481.824106] Lustre: lustre-MDT0004: new disk, initializing
[306481.842701] Lustre: lustre-MDT0004: Imperative Recovery not enabled, recovery window 60-180
[306481.857086] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000300000400-0x0000000340000400]:4:mdt
[306481.865578] VFS: Open an exclusive opened block device for write dm-4. current [194125 tune2fs]. parent [194124 sh]
[306481.869600] Lustre: cli-ctl-lustre-MDT0004: Allocated super-sequence [0x0000000300000400-0x0000000340000400]:4:mdt]
[306482.833139] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192
[306493.930546] device-mapper: ioctl: dmsetup[195187]: dm-5 (mds7_flakey) is created successfully
[306495.996294] LDISKFS-fs (dm-5): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[306496.634908] LDISKFS-fs (dm-5): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[306496.660193] Lustre: Setting parameter lustre-MDT0006.mdt.identity_upcall in log lustre-MDT0006
[306496.671020] Lustre: Skipped 1 previous similar message
[306496.681004] Lustre: srv-lustre-MDT0006: No data found on store. Initialize space: rc = -61
[306496.707558] Lustre: lustre-MDT0006: new disk, initializing
[306496.728272] Lustre: lustre-MDT0006: Imperative Recovery not enabled, recovery window 60-180
[306496.740645] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000380000400-0x00000003c0000400]:6:mdt
[306496.748804] VFS: Open an exclusive opened block device for write dm-5. current [195557 tune2fs]. parent [195556 sh]
[306496.753676] Lustre: Skipped 1 previous similar message
[306496.753727] Lustre: cli-ctl-lustre-MDT0006: Allocated super-sequence [0x0000000380000400-0x00000003c0000400]:6:mdt]
[306497.750301] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192
[306519.452428] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000440000400-0x0000000480000400]:1:ost
[306519.465327] Lustre: Skipped 2 previous similar messages
[306555.099717] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x00000005c0000400-0x0000000600000400]:7:ost
[306555.112400] Lustre: Skipped 5 previous similar messages
[306560.263380] Lustre: DEBUG MARKER: Using TIMEOUT=20
[306562.979188] Lustre: Modifying parameter general.lod.*.mdt_hash in log params
[306562.988218] Lustre: Skipped 1 previous similar message
[339207.311046] Lustre: lustre-MDT0000: haven't heard from client 1ae6ccb7-a7a2-462a-9da5-5df4dae54e02 (at 192.168.0.81@o2ib) in 47 seconds. I think it's dead, and I am evicting it. exp 0000000054661976, cur 1709970484 expire 1709970454 last 1709970437
[339212.430926] Lustre: lustre-MDT0002: haven't heard from client 1ae6ccb7-a7a2-462a-9da5-5df4dae54e02 (at 192.168.0.81@o2ib) in 50 seconds. I think it's dead, and I am evicting it.
exp 00000000584575c6, cur 1709970489 expire 1709970459 last 1709970439 [339212.455803] Lustre: Skipped 5 previous similar messages [339234.500044] LustreError: 11-0: lustre-MDT0000-osp-MDT0002: operation mds_statfs to node 0@lo failed: rc = -107 [339234.512031] Lustre: lustre-MDT0000-osp-MDT0002: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [339234.529936] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [339235.524062] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [339235.524153] Lustre: lustre-MDT0000-lwp-MDT0002: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [339235.534080] Lustre: Skipped 4 previous similar messages [339235.552313] Lustre: Skipped 4 previous similar messages [339237.298457] Lustre: lustre-MDT0000: Not available for connect from 192.168.0.85@o2ib (stopping) [339237.309489] Lustre: Skipped 10 previous similar messages [339240.388628] LustreError: 197967:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items [339240.437726] Lustre: server umount lustre-MDT0000 complete [339240.644033] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [339240.644035] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [339240.681802] LustreError: Skipped 4 previous similar messages [339242.348166] device-mapper: ioctl: dmsetup[198347]: dm-2 (mds1_flakey) is removed successfully [339242.418373] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.85@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server. 
[339242.418375] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.85@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
[339242.418380] LustreError: Skipped 8 previous similar messages
[339242.438272] LustreError: Skipped 10 previous similar messages
[339243.882551] LustreError: 191948:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) ldlm_cancel from 192.168.0.84@o2ib arrived at 1709970521 with bad export cookie 4763187271381799971
[339243.900534] LustreError: 191948:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) Skipped 4 previous similar messages
[339244.227925] LustreError: 11-0: lustre-MDT0001-osp-MDT0004: operation mds_statfs to node 192.168.0.84@o2ib failed: rc = -107
[339244.240906] Lustre: lustre-MDT0001-osp-MDT0004: Connection to lustre-MDT0001 (at 192.168.0.84@o2ib) was lost; in progress operations using this service will wait for recovery to complete
[339245.763908] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
[339245.783032] LustreError: Skipped 9 previous similar messages
[339246.787769] Lustre: 187915:0:(client.c:2289:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1709970517/real 1709970517] req@000000005610e022 x1792997467481152/t0(0) o400->MGC192.168.0.83@o2ib@0@lo:26/25 lens 224/224 e 0 to 1 dl 1709970524 ref 1 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'kworker/u192:1.0'
[339246.820790] LustreError: 166-1: MGC192.168.0.83@o2ib: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail
[339248.270418] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.86@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
[339248.290914] LustreError: Skipped 19 previous similar messages
[339252.931852] Lustre: lustre-MDT0002-osp-MDT0004: Connection to lustre-MDT0002 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[339252.931921] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
[339252.932014] Lustre: lustre-MDT0002: Not available for connect from 0@lo (stopping)
[339252.932015] Lustre: Skipped 6 previous similar messages
[339252.949815] Lustre: Skipped 3 previous similar messages
[339252.969087] LustreError: Skipped 29 previous similar messages
[339258.051686] Lustre: lustre-MDT0002: Not available for connect from 0@lo (stopping)
[339258.061382] Lustre: Skipped 21 previous similar messages
[339259.075808] LustreError: 198651:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items
[339259.113490] Lustre: server umount lustre-MDT0002 complete
[339261.048727] device-mapper: ioctl: dmsetup[199031]: dm-3 (mds3_flakey) is removed successfully
[339261.236061] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.84@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
[339261.256181] LustreError: Skipped 50 previous similar messages
[339263.171771] Lustre: lustre-MDT0003-osp-MDT0004: Connection to lustre-MDT0003 (at 192.168.0.84@o2ib) was lost; in progress operations using this service will wait for recovery to complete
[339263.190608] Lustre: Skipped 1 previous similar message
[339272.001041] Lustre: lustre-MDT0004: Not available for connect from 192.168.0.84@o2ib (stopping)
[339274.435360] LustreError: 11-0: lustre-MDT0004-osp-MDT0006: operation mds_statfs to node 0@lo failed: rc = -107
[339274.447167] LustreError: Skipped 1 previous similar message
[339274.454069] Lustre: lustre-MDT0004-osp-MDT0006: Connection to lustre-MDT0004 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[339277.619746] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.84@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
[339277.639690] LustreError: Skipped 98 previous similar messages
[339285.955099] Lustre: lustre-MDT0004 is waiting for obd_unlinked_exports more than 8 seconds. The obd refcount = 2. Is it stuck?
[339285.971270] LustreError: 199330:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items
[339286.009734] Lustre: server umount lustre-MDT0004 complete
[339287.916403] device-mapper: ioctl: dmsetup[199707]: dm-4 (mds5_flakey) is removed successfully
[339289.795205] LustreError: 11-0: lustre-MDT0005-osp-MDT0006: operation mds_statfs to node 192.168.0.84@o2ib failed: rc = -107
[339302.195523] Lustre: lustre-MDT0006: Not available for connect from 192.168.0.84@o2ib (stopping)
[339302.206208] Lustre: Skipped 34 previous similar messages
[339304.714199] Lustre: server umount lustre-MDT0006 complete
[339308.520831] device-mapper: ioctl: dmsetup[200393]: dm-5 (mds7_flakey) is removed successfully
[339355.987411] Lustre: DEBUG MARKER: server3: executing unload_modules_local
[339356.795139] Key type lgssc unregistered
[339357.014024] LNet: 200997:0:(lib-ptl.c:958:lnet_clear_lazy_portal()) Active lazy portal 0 on exit
[339358.209946] LNet: Removed LNI 192.168.0.83@o2ib
[339358.566326] Key type .llcrypt unregistered
[339358.580562] Key type ._llcrypt unregistered
[339377.126840] LNet: HW NUMA nodes: 4, HW CPU cores: 96, npartitions: 4
[339377.137349] alg: No test for adler32 (adler32-zlib)
[339377.885718] Key type ._llcrypt registered
[339377.891225] Key type .llcrypt registered
[339377.914629] Lustre: DEBUG MARKER: server3: executing set_hostid
[339382.142422] Lustre: DEBUG MARKER: server3: executing load_modules_local
[339382.531140] lnet: unknown parameter '#' ignored
[339382.537035] lnet: unknown parameter 'o2ib1(ibp129s0f1)' ignored
[339382.544227] lnet: unknown parameter '#' ignored
[339382.550009] lnet: unknown parameter 'o2ib1(ibp129s0f1)' ignored
[339382.626566] Lustre: Lustre: Build Version: 2.15.4
[339382.688761] LNet: Using FastReg for registration
[339382.898832] LNet: Added LNI 192.168.0.83@o2ib [8/256/0/180]
[339384.449496] Key type lgssc registered
[339384.617276] Lustre: Echo OBD driver; http://www.lustre.org/
[339671.957519] LDISKFS-fs (nvme0n1): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[340244.387920] LDISKFS-fs (nvme1n1): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[340817.139996] LDISKFS-fs (nvme2n1): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[341389.779687] LDISKFS-fs (nvme3n1): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[341807.491350] Lustre: DEBUG MARKER: server3: executing load_modules_local
[341810.241800] device-mapper: ioctl: dmsetup[206665]: dm-2 (mds1_flakey) is created successfully
[341812.325038] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[341812.985320] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[341814.074571] Lustre: Setting parameter lustre-MDT0000.mdt.identity_upcall in log lustre-MDT0000
[341814.105943] Lustre: ctl-lustre-MDT0000: No data found on store. Initialize space: rc = -61
[341814.135154] Lustre: lustre-MDT0000: new disk, initializing
[341814.158837] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180
[341814.171631] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000200000400-0x0000000240000400]:0:mdt
[341814.192996] VFS: Open an exclusive opened block device for write dm-2. current [207078 tune2fs]. parent [207077 sh]
[341815.203324] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192
[341821.485154] Lustre: Setting parameter lustre-MDT0001.mdt.identity_upcall in log lustre-MDT0001
[341822.538600] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000240000400-0x0000000280000400]:1:mdt
[341827.427866] device-mapper: ioctl: dmsetup[208098]: dm-3 (mds3_flakey) is created successfully
[341829.472798] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[341830.093919] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[341830.116797] Lustre: Setting parameter lustre-MDT0002.mdt.identity_upcall in log lustre-MDT0002
[341830.130055] Lustre: srv-lustre-MDT0002: No data found on store. Initialize space: rc = -61
[341830.140264] Lustre: Skipped 1 previous similar message
[341830.162378] Lustre: lustre-MDT0002: new disk, initializing
[341830.184017] Lustre: lustre-MDT0002: Imperative Recovery not enabled, recovery window 60-180
[341830.199915] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000280000400-0x00000002c0000400]:2:mdt
[341830.208945] VFS: Open an exclusive opened block device for write dm-3. current [208443 tune2fs]. parent [208442 sh]
[341830.212817] Lustre: cli-ctl-lustre-MDT0002: Allocated super-sequence [0x0000000280000400-0x00000002c0000400]:2:mdt]
[341831.189776] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192
[341837.413022] Lustre: Setting parameter lustre-MDT0003.mdt.identity_upcall in log lustre-MDT0003
[341837.455515] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x00000002c0000400-0x0000000300000400]:3:mdt
[341842.328809] device-mapper: ioctl: dmsetup[209479]: dm-4 (mds5_flakey) is created successfully
[341844.413816] LDISKFS-fs (dm-4): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[341845.041256] LDISKFS-fs (dm-4): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[341845.066236] Lustre: Setting parameter lustre-MDT0004.mdt.identity_upcall in log lustre-MDT0004
[341845.083868] Lustre: srv-lustre-MDT0004: No data found on store. Initialize space: rc = -61
[341845.110377] Lustre: lustre-MDT0004: new disk, initializing
[341845.129875] Lustre: lustre-MDT0004: Imperative Recovery not enabled, recovery window 60-180
[341845.143538] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000300000400-0x0000000340000400]:4:mdt
[341845.152118] VFS: Open an exclusive opened block device for write dm-4. current [209830 tune2fs]. parent [209829 sh]
[341845.156728] Lustre: cli-ctl-lustre-MDT0004: Allocated super-sequence [0x0000000300000400-0x0000000340000400]:4:mdt]
[341846.129649] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192
[341857.305575] device-mapper: ioctl: dmsetup[210869]: dm-5 (mds7_flakey) is created successfully
[341859.366216] LDISKFS-fs (dm-5): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[341859.994361] LDISKFS-fs (dm-5): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[341860.018918] Lustre: Setting parameter lustre-MDT0006.mdt.identity_upcall in log lustre-MDT0006
[341860.029999] Lustre: Skipped 1 previous similar message
[341860.039705] Lustre: srv-lustre-MDT0006: No data found on store. Initialize space: rc = -61
[341860.066352] Lustre: lustre-MDT0006: new disk, initializing
[341860.092170] Lustre: lustre-MDT0006: Imperative Recovery not enabled, recovery window 60-180
[341860.106903] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000380000400-0x00000003c0000400]:6:mdt
[341860.114783] VFS: Open an exclusive opened block device for write dm-5. current [211235 tune2fs]. parent [211234 sh]
[341860.119989] Lustre: Skipped 1 previous similar message
[341860.139946] Lustre: cli-ctl-lustre-MDT0006: Allocated super-sequence [0x0000000380000400-0x00000003c0000400]:6:mdt]
[341861.105029] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192
[341877.516652] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000400000400-0x0000000440000400]:0:ost
[341877.529592] Lustre: Skipped 1 previous similar message
[341911.052297] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000580000400-0x00000005c0000400]:6:ost
[341911.065269] Lustre: Skipped 5 previous similar messages
[341923.620360] Lustre: DEBUG MARKER: Using TIMEOUT=20
[341926.316313] Lustre: Modifying parameter general.lod.*.mdt_hash in log params
[341926.325485] Lustre: Skipped 1 previous similar message
[386100.621123] Lustre: lustre-MDT0004: haven't heard from client 3826deae-7f09-47e2-a191-b02b1b89c7b5 (at 192.168.0.82@o2ib) in 50 seconds. I think it's dead, and I am evicting it. exp 00000000666007ea, cur 1710017378 expire 1710017348 last 1710017328
[386103.985946] Lustre: lustre-MDT0002: haven't heard from client 3826deae-7f09-47e2-a191-b02b1b89c7b5 (at 192.168.0.82@o2ib) in 50 seconds. I think it's dead, and I am evicting it. exp 00000000dd1fdcd1, cur 1710017381 expire 1710017351 last 1710017331
[386104.010750] Lustre: Skipped 5 previous similar messages
[386516.924153] LustreError: 11-0: lustre-MDT0000-osp-MDT0002: operation mds_statfs to node 0@lo failed: rc = -107
[386516.936200] Lustre: lustre-MDT0000-osp-MDT0002: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[386516.954157] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping)
[386517.692146] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping)
[386517.692221] Lustre: lustre-MDT0000-lwp-MDT0002: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[386517.702175] Lustre: Skipped 4 previous similar messages
[386517.727745] Lustre: Skipped 4 previous similar messages
[386518.699335] Lustre: lustre-MDT0000: Not available for connect from 192.168.0.85@o2ib (stopping)
[386518.709901] Lustre: Skipped 4 previous similar messages
[386522.812087] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping)
[386522.821935] Lustre: Skipped 21 previous similar messages
[386522.842556] Lustre: server umount lustre-MDT0000 complete
[386523.819138] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.85@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
[386523.819140] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.85@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
[386523.859099] LustreError: Skipped 2 previous similar messages
[386524.795403] device-mapper: ioctl: dmsetup[214340]: dm-2 (mds1_flakey) is removed successfully
[386525.574298] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.86@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
[386525.574301] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.86@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
[386525.614207] LustreError: Skipped 2 previous similar messages
[386526.334755] LustreError: 210403:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) ldlm_cancel from 192.168.0.84@o2ib arrived at 1710017804 with bad export cookie 1981386070723345956
[386526.352966] LustreError: 210403:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) Skipped 4 previous similar messages
[386526.652046] LustreError: 11-0: lustre-MDT0001-osp-MDT0004: operation mds_statfs to node 192.168.0.84@o2ib failed: rc = -107
[386526.665119] Lustre: lustre-MDT0001-osp-MDT0004: Connection to lustre-MDT0001 (at 192.168.0.84@o2ib) was lost; in progress operations using this service will wait for recovery to complete
[386527.932026] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
[386527.951033] LustreError: Skipped 13 previous similar messages
[386532.792011] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.86@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
[386532.812401] LustreError: Skipped 27 previous similar messages
[386534.267804] Lustre: 203593:0:(client.c:2289:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1710017805/real 1710017805] req@000000006c266412 x1793034684924032/t0(0) o400->MGC192.168.0.83@o2ib@0@lo:26/25 lens 224/224 e 0 to 1 dl 1710017812 ref 1 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'kworker/u192:1.0'
[386534.300826] LustreError: 166-1: MGC192.168.0.83@o2ib: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail
[386536.107058] Lustre: lustre-MDT0002: Not available for connect from 192.168.0.85@o2ib (stopping)
[386536.117801] Lustre: Skipped 2 previous similar messages
[386536.379864] LustreError: 11-0: lustre-MDT0002-osp-MDT0006: operation mds_statfs to node 0@lo failed: rc = -107
[386536.391634] LustreError: Skipped 1 previous similar message
[386536.398526] Lustre: lustre-MDT0002-osp-MDT0006: Connection to lustre-MDT0002 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[386536.416150] Lustre: Skipped 2 previous similar messages
[386541.755974] LustreError: 214642:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items
[386541.796434] Lustre: server umount lustre-MDT0002 complete
[386542.250821] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.85@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
[386542.250822] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.85@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
[386542.250828] LustreError: Skipped 48 previous similar messages
[386542.270978] LustreError: Skipped 54 previous similar messages
[386543.743578] device-mapper: ioctl: dmsetup[215025]: dm-3 (mds3_flakey) is removed successfully
[386545.595870] Lustre: lustre-MDT0003-osp-MDT0006: Connection to lustre-MDT0003 (at 192.168.0.84@o2ib) was lost; in progress operations using this service will wait for recovery to complete
[386545.615077] Lustre: Skipped 2 previous similar messages
[386554.429470] Lustre: lustre-MDT0004: Not available for connect from 192.168.0.84@o2ib (stopping)
[386554.440111] Lustre: Skipped 22 previous similar messages
[386556.859612] LustreError: 11-0: lustre-MDT0004-osp-MDT0006: operation mds_statfs to node 0@lo failed: rc = -107
[386556.859614] Lustre: lustre-MDT0004-osp-MDT0006: Connection to lustre-MDT0004 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[386556.889546] LustreError: Skipped 1 previous similar message
[386560.443677] LustreError: 215326:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items
[386560.481206] Lustre: server umount lustre-MDT0004 complete
[386562.352629] device-mapper: ioctl: dmsetup[215705]: dm-4 (mds5_flakey) is removed successfully
[386567.099366] LustreError: 11-0: lustre-MDT0005-osp-MDT0006: operation mds_statfs to node 192.168.0.84@o2ib failed: rc = -107
[386574.390182] Lustre: lustre-MDT0006: Not available for connect from 192.168.0.84@o2ib (stopping)
[386574.400841] Lustre: Skipped 13 previous similar messages
[386574.888958] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.0.84@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
[386574.908874] LustreError: Skipped 210 previous similar messages
[386579.463119] Lustre: server umount lustre-MDT0006 complete
[386583.334892] device-mapper: ioctl: dmsetup[216386]: dm-5 (mds7_flakey) is removed successfully
[386623.915699] Lustre: DEBUG MARKER: server3: executing unload_modules_local
[386624.819039] Key type lgssc unregistered
[386624.990395] LNet: 216990:0:(lib-ptl.c:958:lnet_clear_lazy_portal()) Active lazy portal 0 on exit
[386626.170319] LNet: Removed LNI 192.168.0.83@o2ib
[386626.558534] Key type .llcrypt unregistered
[386626.572773] Key type ._llcrypt unregistered
[1375660.564885] nvme nvme0: pci function 0000:82:00.0
[1375660.581745] nvme nvme0: Shutdown timeout set to 8 seconds
[1375660.596324] nvme nvme0: 64/0/0 default/read/poll queues
[2256955.586647] systemd-rc-local-generator[253446]: /etc/rc.d/rc.local is not marked executable, skipping.
[2256955.587865] systemd-sysv-generator[253449]: SysV service '/etc/rc.d/init.d/openresty' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
[2256955.625139] systemd-sysv-generator[253449]: SysV service '/etc/rc.d/init.d/mst' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
[2256955.651987] systemd-sysv-generator[253449]: SysV service '/etc/rc.d/init.d/lustre' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
[2256955.679038] systemd-sysv-generator[253449]: SysV service '/etc/rc.d/init.d/lsvcgss' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
[2256967.664524] systemd-rc-local-generator[254024]: /etc/rc.d/rc.local is not marked executable, skipping.
[2256967.665243] systemd-sysv-generator[254027]: SysV service '/etc/rc.d/init.d/openresty' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
[2256967.703856] systemd-sysv-generator[254027]: SysV service '/etc/rc.d/init.d/mst' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
[2256967.732308] systemd-sysv-generator[254027]: SysV service '/etc/rc.d/init.d/lustre' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
[2256967.759649] systemd-sysv-generator[254027]: SysV service '/etc/rc.d/init.d/lsvcgss' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
[2422558.312657] nvme0n1: p1
[2422558.437383] nvme1n1: p1
[2422558.554758] nvme2n1: p1
[2422558.663575] nvme3n1:
[2423987.763251] LNet: HW NUMA nodes: 4, HW CPU cores: 96, npartitions: 4
[2423987.773982] alg: No test for adler32 (adler32-zlib)
[2423988.521626] Key type ._llcrypt registered
[2423988.527599] Key type .llcrypt registered
[2423988.544165] lnet: unknown parameter '#' ignored
[2423988.550334] lnet: unknown parameter 'o2ib1(ibp129s0f1)' ignored
[2423988.578009] Lustre: Lustre: Build Version: 2.15.4
[2423988.629753] LNet: Using FastReg for registration
[2423988.828836] LNet: Added LNI 192.168.0.83@o2ib [8/256/0/180]
[2424491.415607] systemd-rc-local-generator[281791]: /etc/rc.d/rc.local is not marked executable, skipping.
[2424491.418646] systemd-sysv-generator[281794]: SysV service '/etc/rc.d/init.d/openresty' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
[2424491.454163] systemd-sysv-generator[281794]: SysV service '/etc/rc.d/init.d/mst' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
[2424782.104262] capability: warning: `dnf' uses 32-bit capabilities (legacy support in use)
[2424818.289358] systemd-rc-local-generator[353857]: /etc/rc.d/rc.local is not marked executable, skipping.
[2424818.292149] systemd-sysv-generator[353860]: SysV service '/etc/rc.d/init.d/openresty' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
[2424818.328158] systemd-sysv-generator[353860]: SysV service '/etc/rc.d/init.d/mst' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
[2424818.359768] systemd-sysv-generator[353860]: SysV service '/etc/rc.d/init.d/lustre' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
[2424818.387360] systemd-sysv-generator[353860]: SysV service '/etc/rc.d/init.d/lsvcgss' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
[2429575.028428] LNet: 354589:0:(lib-ptl.c:958:lnet_clear_lazy_portal()) Active lazy portal 0 on exit
[2429576.200296] LNet: Removed LNI 192.168.0.83@o2ib
[2429576.436480] Key type .llcrypt unregistered
[2429576.450873] Key type ._llcrypt unregistered
[2429576.550340] LNet: HW NUMA nodes: 4, HW CPU cores: 96, npartitions: 4
[2429576.561079] alg: No test for adler32 (adler32-zlib)
[2429577.308407] Key type ._llcrypt registered
[2429577.314423] Key type .llcrypt registered
[2429577.330946] lnet: unknown parameter '#' ignored
[2429577.336877] lnet: unknown parameter 'o2ib1(ibp129s0f1)' ignored
[2429577.360480] Lustre: Lustre: Build Version: 2.15.4
[2429577.410970] LNet: Using FastReg for registration
[2429577.617010] LNet: Added LNI 192.168.0.83@o2ib [8/256/0/180]
[2430424.942640] VFS: Open an exclusive opened block device for write nvme0n1. current [355511 wipefs]. parent [355510 parted_disks.sh]
[2430424.976908] nvme0n1: p1
[2430425.027165] nvme0n1: p1 p2 p3 p4
[2430425.096658] VFS: Open an exclusive opened block device for write nvme1n1. current [355524 wipefs]. parent [355510 parted_disks.sh]
[2430425.133420] nvme1n1: p1
[2430425.239684] VFS: Open an exclusive opened block device for write nvme2n1. current [355536 wipefs]. parent [355510 parted_disks.sh]
[2430425.358436] VFS: Open an exclusive opened block device for write nvme3n1. current [355551 wipefs]. parent [355510 parted_disks.sh]
[2430425.385194] nvme3n1:
[2430425.410493] nvme3n1: p1
[2430603.547259] LNet: 356289:0:(lib-ptl.c:958:lnet_clear_lazy_portal()) Active lazy portal 0 on exit
[2430604.727152] LNet: Removed LNI 192.168.0.83@o2ib
[2430604.915259] Key type .llcrypt unregistered
[2430604.929795] Key type ._llcrypt unregistered
[2430605.032862] LNet: HW NUMA nodes: 4, HW CPU cores: 96, npartitions: 4
[2430605.042972] alg: No test for adler32 (adler32-zlib)
[2430605.791216] Key type ._llcrypt registered
[2430605.796936] Key type .llcrypt registered
[2430605.813458] lnet: unknown parameter '#' ignored
[2430605.819318] lnet: unknown parameter 'o2ib1(ibp129s0f1)' ignored
[2430605.844276] Lustre: Lustre: Build Version: 2.15.4
[2430605.895470] LNet: Using FastReg for registration
[2430606.094210] LNet: Added LNI 192.168.0.83@o2ib [8/256/0/180]
[2430729.380760] Lustre: DEBUG MARKER: server3: executing set_hostid
[2430733.157986] Lustre: DEBUG MARKER: server3: executing load_modules_local
[2430735.152964] Key type lgssc registered
[2430735.247002] Lustre: Echo OBD driver; http://www.lustre.org/
[2430924.416045] LDISKFS-fs (nvme0n1p1): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[2431054.458654] Lustre: DEBUG MARKER: server3: executing unload_modules_local
[2431054.998441] Key type lgssc unregistered
[2431055.135745] LNet: 368831:0:(lib-ptl.c:958:lnet_clear_lazy_portal()) Active lazy portal 0 on exit
[2431056.335619] LNet: Removed LNI 192.168.0.83@o2ib
[2431056.700372] Key type .llcrypt unregistered
[2431056.714589] Key type ._llcrypt unregistered
[2431213.141640] LNet: HW NUMA nodes: 4, HW CPU cores: 96, npartitions: 4
[2431213.151781] alg: No test for adler32 (adler32-zlib)
[2431213.901092] Key type ._llcrypt registered
[2431213.906573] Key type .llcrypt registered
[2431213.929586] Lustre: DEBUG MARKER: server3: executing set_hostid
[2431218.653237] Lustre: DEBUG MARKER: server3: executing load_modules_local
[2431219.083259] lnet: unknown parameter '#' ignored
[2431219.089417] lnet: unknown parameter 'o2ib1(ibp129s0f1)' ignored
[2431219.165653] Lustre: Lustre: Build Version: 2.15.4
[2431219.224070] LNetError: 374413:0:(o2iblnd.c:3327:kiblnd_startup()) ko2iblnd: No matching interfaces
[2431220.236890] LNetError: 105-4: Error -100 starting up LNI o2ib
[2431220.244845] LustreError: 374413:0:(events.c:639:ptlrpc_init_portals()) network initialisation failed
[2474097.958790] Key type .llcrypt unregistered
[2474097.973286] Key type ._llcrypt unregistered
[2474098.072570] LNet: HW NUMA nodes: 4, HW CPU cores: 96, npartitions: 4
[2474098.082636] alg: No test for adler32 (adler32-zlib)
[2474098.830997] Key type ._llcrypt registered
[2474098.836727] Key type .llcrypt registered
[2474098.853210] lnet: unknown parameter '#' ignored
[2474098.859063] lnet: unknown parameter 'o2ib1(ibp129s0f1)' ignored
[2474098.882756] Lustre: Lustre: Build Version: 2.15.4
[2474098.933738] LNet: Using FastReg for registration
[2474099.137963] LNet: Added LNI 192.168.0.83@o2ib [8/256/0/180]
[2474235.255295] Lustre: DEBUG MARKER: server3: executing set_hostid
[2474239.040409] Lustre: DEBUG MARKER: server3: executing load_modules_local
[2474241.000497] Key type lgssc registered
[2474241.092783] Lustre: Echo OBD driver; http://www.lustre.org/
[2474434.189549] LDISKFS-fs (nvme0n1p1): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[2474814.530207] LDISKFS-fs (nvme1n1p1): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[2475197.786532] LDISKFS-fs (nvme2n1p1): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[2475580.948687] LDISKFS-fs (nvme3n1p1): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[2475789.615277] LDISKFS-fs (nvme0n1p2): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[2475823.247877] LDISKFS-fs (nvme1n1p2): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[2475856.932540] LDISKFS-fs (nvme2n1p2): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[2475890.654320] LDISKFS-fs (nvme3n1p2): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[2475924.682352] LDISKFS-fs (nvme0n1p3): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[2475958.040133] LDISKFS-fs (nvme1n1p3): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[2475991.539923] LDISKFS-fs (nvme2n1p3): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[2476024.931638] LDISKFS-fs (nvme3n1p3): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[2476059.226982] LDISKFS-fs (nvme0n1p4): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[2476092.673711] LDISKFS-fs (nvme1n1p4): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[2476126.212657] LDISKFS-fs (nvme2n1p4): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[2476159.688834] LDISKFS-fs (nvme3n1p4): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[2476194.027812] LDISKFS-fs (nvme0n1p5): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[2476227.615426] LDISKFS-fs (nvme1n1p5): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[2476261.353235] LDISKFS-fs (nvme2n1p5): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[2476295.084412] LDISKFS-fs (nvme3n1p5): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[2476354.750405] Lustre: DEBUG MARKER: server3: executing load_modules_local
[2476373.222391] device-mapper: ioctl: dmsetup[393138]: dm-2 (mds3_flakey) is created successfully
[2476374.806687] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[2476375.028688] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[2476375.129517] Lustre: srv-lustre-MDT0002: No data found on store. Initialize space: rc = -61
[2476376.150853] Lustre: lustre-MDT0002: new disk, initializing
[2476376.174293] Lustre: lustre-MDT0002: Imperative Recovery not enabled, recovery window 60-180
[2476376.187010] Lustre: cli-ctl-lustre-MDT0002: Allocated super-sequence [0x0000000280000400-0x00000002c0000400]:2:mdt]
[2476376.194465] VFS: Open an exclusive opened block device for write dm-2. current [393541 tune2fs]. parent [393540 sh]
[2476377.145302] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192
[2476415.676685] device-mapper: ioctl: dmsetup[394574]: dm-3 (mds9_flakey) is created successfully
[2476417.286950] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[2476417.494722] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[2476417.523208] Lustre: srv-lustre-MDT0008: No data found on store. Initialize space: rc = -61
[2476417.553428] Lustre: lustre-MDT0008: new disk, initializing
[2476417.576782] Lustre: lustre-MDT0008: Imperative Recovery not enabled, recovery window 60-180
[2476417.590332] Lustre: cli-ctl-lustre-MDT0008: Allocated super-sequence [0x0000000400000400-0x0000000440000400]:8:mdt]
[2476417.598947] VFS: Open an exclusive opened block device for write dm-3. current [394917 tune2fs]. parent [394916 sh]
[2476418.552333] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192
[2476449.099365] device-mapper: ioctl: dmsetup[395961]: dm-4 (mds15_flakey) is created successfully
[2476450.719698] LDISKFS-fs (dm-4): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[2476450.927378] LDISKFS-fs (dm-4): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[2476450.959206] Lustre: srv-lustre-MDT000e: No data found on store. Initialize space: rc = -61
[2476450.988104] Lustre: lustre-MDT000e: new disk, initializing
[2476451.015429] Lustre: lustre-MDT000e: Imperative Recovery not enabled, recovery window 60-180
[2476451.029560] Lustre: cli-ctl-lustre-MDT000e: Allocated super-sequence [0x0000000580000400-0x00000005c0000400]:e:mdt]
[2476451.038424] VFS: Open an exclusive opened block device for write dm-4. current [396326 tune2fs]. parent [396325 sh]
[2476452.014518] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192
[2476485.244003] device-mapper: ioctl: dmsetup[397333]: dm-5 (mds21_flakey) is created successfully
[2476486.887269] LDISKFS-fs (dm-5): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[2476487.097246] LDISKFS-fs (dm-5): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[2476487.133040] Lustre: srv-lustre-MDT0014: No data found on store. Initialize space: rc = -61
[2476487.162894] Lustre: lustre-MDT0014: new disk, initializing
[2476487.194528] Lustre: lustre-MDT0014: Imperative Recovery not enabled, recovery window 60-180
[2476487.209153] Lustre: cli-ctl-lustre-MDT0014: Allocated super-sequence [0x0000000700000400-0x0000000740000400]:14:mdt]
[2476487.217832] VFS: Open an exclusive opened block device for write dm-5. current [397713 tune2fs]. parent [397712 sh]
[2476488.196025] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192
[2476524.107958] device-mapper: ioctl: dmsetup[398817]: dm-6 (ost3_flakey) is created successfully
[2476525.650918] LDISKFS-fs (dm-6): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[2476525.710323] LDISKFS-fs (dm-6): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[2476525.723041] Lustre: lustre-OST0002-osd: enabled 'large_dir' feature on device /dev/mapper/ost3_flakey
[2476525.794072] Lustre: lustre-OST0002: new disk, initializing
[2476525.801761] Lustre: srv-lustre-OST0002: No data found on store. Initialize space: rc = -61
[2476525.838731] Lustre: lustre-OST0002: Imperative Recovery not enabled, recovery window 60-180
[2476525.856927] VFS: Open an exclusive opened block device for write dm-6. current [399209 tune2fs]. parent [399208 sh]
[2476526.854116] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192
[2476527.420690] Lustre: cli-lustre-OST0002-super: Allocated super-sequence [0x0000000880000400-0x00000008c0000400]:2:ost]
[2476562.236761] device-mapper: ioctl: dmsetup[400328]: dm-7 (ost9_flakey) is created successfully
[2476563.823819] LDISKFS-fs (dm-7): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[2476563.890598] LDISKFS-fs (dm-7): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[2476563.903499] Lustre: lustre-OST0008-osd: enabled 'large_dir' feature on device /dev/mapper/ost9_flakey
[2476563.934097] Lustre: lustre-OST0008: new disk, initializing
[2476563.941696] Lustre: srv-lustre-OST0008: No data found on store. Initialize space: rc = -61
[2476563.985803] Lustre: lustre-OST0008: Imperative Recovery not enabled, recovery window 60-180
[2476564.004159] VFS: Open an exclusive opened block device for write dm-7. current [400668 tune2fs]. parent [400667 sh]
[2476564.675761] Lustre: cli-lustre-OST0008-super: Allocated super-sequence [0x0000000a00000400-0x0000000a40000400]:8:ost]
[2476565.021369] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192
[2476595.405405] device-mapper: ioctl: dmsetup[401785]: dm-8 (ost15_flakey) is created successfully
[2476597.014355] LDISKFS-fs (dm-8): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[2476597.073994] LDISKFS-fs (dm-8): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[2476597.086882] Lustre: lustre-OST000e-osd: enabled 'large_dir' feature on device /dev/mapper/ost15_flakey
[2476597.120408] Lustre: lustre-OST000e: new disk, initializing
[2476597.127929] Lustre: srv-lustre-OST000e: No data found on store. Initialize space: rc = -61
[2476597.167245] Lustre: lustre-OST000e: Imperative Recovery not enabled, recovery window 60-180
[2476597.187875] VFS: Open an exclusive opened block device for write dm-8. current [402124 tune2fs]. parent [402123 sh]
[2476598.212068] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192
[2476598.404290] Lustre: cli-lustre-OST000e-super: Allocated super-sequence [0x0000000b80000400-0x0000000bc0000400]:e:ost]
[2476631.613567] device-mapper: ioctl: dmsetup[403169]: dm-9 (ost21_flakey) is created successfully
[2476633.244818] LDISKFS-fs (dm-9): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[2476633.312626] LDISKFS-fs (dm-9): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[2476633.325526] Lustre: lustre-OST0014-osd: enabled 'large_dir' feature on device /dev/mapper/ost21_flakey
[2476633.364601] Lustre: lustre-OST0014: new disk, initializing
[2476633.371801] Lustre: srv-lustre-OST0014: No data found on store. Initialize space: rc = -61
[2476633.411083] Lustre: lustre-OST0014: Imperative Recovery not enabled, recovery window 60-180
[2476633.431136] VFS: Open an exclusive opened block device for write dm-9. current [403509 tune2fs]. parent [403508 sh]
[2476634.459873] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192
[2476636.804312] Lustre: cli-lustre-OST0014-super: Allocated super-sequence [0x0000000d00000400-0x0000000d40000400]:14:ost]
[2476670.720351] device-mapper: ioctl: dmsetup[404620]: dm-10 (ost27_flakey) is created successfully
[2476672.368480] LDISKFS-fs (dm-10): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[2476672.429959] LDISKFS-fs (dm-10): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[2476672.443232] Lustre: lustre-OST001a-osd: enabled 'large_dir' feature on device /dev/mapper/ost27_flakey
[2476672.510324] VFS: Open an exclusive opened block device for write dm-10. current [404958 tune2fs]. parent [404957 sh]
[2476673.565803] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192
[2476702.102573] device-mapper: ioctl: dmsetup[406007]: dm-11 (ost33_flakey) is created successfully
[2476703.788503] LDISKFS-fs (dm-11): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[2476703.848256] LDISKFS-fs (dm-11): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[2476703.861716] Lustre: lustre-OST0020-osd: enabled 'large_dir' feature on device /dev/mapper/ost33_flakey
[2476703.900094] Lustre: lustre-OST0020: new disk, initializing
[2476703.907351] Lustre: Skipped 1 previous similar message
[2476703.907684] Lustre: lustre-OST0020: Not available for connect from 192.168.0.82@o2ib (not set up)
[2476703.915664] Lustre: srv-lustre-OST0020: No data found on store. Initialize space: rc = -61
[2476703.925362] Lustre: Skipped 3 previous similar messages
[2476703.936341] Lustre: Skipped 1 previous similar message
[2476703.978553] Lustre: lustre-OST0020: Imperative Recovery not enabled, recovery window 60-180
[2476703.989334] Lustre: Skipped 1 previous similar message
[2476704.006615] VFS: Open an exclusive opened block device for write dm-11. current [406347 tune2fs]. parent [406346 sh]
[2476704.047586] Lustre: cli-lustre-OST0020-super: Allocated super-sequence [0x0000001000000400-0x0000001040000400]:20:ost]
[2476704.060520] Lustre: Skipped 1 previous similar message
[2476705.078068] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192
[2476741.877395] device-mapper: ioctl: dmsetup[407493]: dm-12 (ost39_flakey) is created successfully
[2476743.550561] LDISKFS-fs (dm-12): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[2476743.611182] LDISKFS-fs (dm-12): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[2476743.624483] Lustre: lustre-OST0026-osd: enabled 'large_dir' feature on device /dev/mapper/ost39_flakey
[2476743.698125] VFS: Open an exclusive opened block device for write dm-12. current [407835 tune2fs]. parent [407834 sh]
[2476744.764960] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192
[2476778.948347] device-mapper: ioctl: dmsetup[408993]: dm-13 (ost45_flakey) is created successfully
[2476780.634138] LDISKFS-fs (dm-13): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[2476780.710396] LDISKFS-fs (dm-13): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[2476780.723165] Lustre: lustre-OST002c-osd: enabled 'large_dir' feature on device /dev/mapper/ost45_flakey
[2476780.792080] Lustre: lustre-OST002c: Imperative Recovery enabled, recovery window shrunk from 60-180 down to 60-180
[2476780.813182] VFS: Open an exclusive opened block device for write dm-13. current [409331 tune2fs]. parent [409330 sh]
[2476781.895904] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192
[2476813.757712] device-mapper: ioctl: dmsetup[410394]: dm-14 (ost51_flakey) is created successfully
[2476815.473328] LDISKFS-fs (dm-14): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[2476815.550010] LDISKFS-fs (dm-14): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[2476815.624086] Lustre: lustre-OST0032: Imperative Recovery enabled, recovery window shrunk from 60-180 down to 60-180
[2476815.645389] VFS: Open an exclusive opened block device for write dm-14. current [410732 tune2fs]. parent [410731 sh]
[2476816.739534] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192
[2476854.294274] device-mapper: ioctl: dmsetup[411906]: dm-15 (ost57_flakey) is created successfully
[2476856.020629] LDISKFS-fs (dm-15): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[2476856.085164] LDISKFS-fs (dm-15): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[2476856.098033] Lustre: lustre-OST0038-osd: enabled 'large_dir' feature on device /dev/mapper/ost57_flakey
[2476856.109308] Lustre: Skipped 1 previous similar message
[2476856.148117] Lustre: lustre-OST0038: new disk, initializing
[2476856.155113] Lustre: Skipped 3 previous similar messages
[2476856.165059] Lustre: srv-lustre-OST0038: No data found on store. Initialize space: rc = -61
[2476856.175275] Lustre: Skipped 3 previous similar messages
[2476856.217764] Lustre: lustre-OST0038: Imperative Recovery enabled, recovery window shrunk from 60-180 down to 60-180
[2476856.239350] VFS: Open an exclusive opened block device for write dm-15. current [412243 tune2fs]. parent [412242 sh]
[2476856.408257] Lustre: cli-lustre-OST0038-super: Allocated super-sequence [0x0000001600000400-0x0000001640000400]:38:ost]
[2476856.420942] Lustre: Skipped 3 previous similar messages
[2476857.353931] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192
[2476889.999973] device-mapper: ioctl: dmsetup[413405]: dm-16 (ost63_flakey) is created successfully
[2476891.754181] LDISKFS-fs (dm-16): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[2476891.812398] LDISKFS-fs (dm-16): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[2476891.886942] Lustre: lustre-OST003e: Imperative Recovery enabled, recovery window shrunk from 60-180 down to 60-180
[2476891.907601] VFS: Open an exclusive opened block device for write dm-16. current [413744 tune2fs]. parent [413743 sh]
[2476893.019494] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192
[2476928.363218] device-mapper: ioctl: dmsetup[414865]: dm-17 (ost69_flakey) is created successfully
[2476930.142717] LDISKFS-fs (dm-17): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[2476930.207685] LDISKFS-fs (dm-17): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[2476930.287135] Lustre: lustre-OST0044: Imperative Recovery enabled, recovery window shrunk from 60-180 down to 60-180
[2476930.309659] VFS: Open an exclusive opened block device for write dm-17. current [415202 tune2fs]. parent [415201 sh]
[2476931.433727] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192
[2476969.893050] device-mapper: ioctl: dmsetup[416410]: dm-18 (ost75_flakey) is created successfully
[2476971.686991] LDISKFS-fs (dm-18): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[2476971.748974] LDISKFS-fs (dm-18): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[2476971.828788] Lustre: lustre-OST004a: Imperative Recovery enabled, recovery window shrunk from 60-180 down to 60-180
[2476971.850508] VFS: Open an exclusive opened block device for write dm-18. current [416747 tune2fs]. parent [416746 sh]
[2476972.998643] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192
[2477003.380071] device-mapper: ioctl: dmsetup[417863]: dm-19 (ost81_flakey) is created successfully
[2477005.196631] LDISKFS-fs (dm-19): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[2477005.266821] LDISKFS-fs (dm-19): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[2477005.279807] Lustre: lustre-OST0050-osd: enabled 'large_dir' feature on device /dev/mapper/ost81_flakey
[2477005.291162] Lustre: Skipped 3 previous similar messages
[2477005.369464] Lustre: lustre-OST0050: Imperative Recovery enabled, recovery window shrunk from 60-180 down to 60-180
[2477005.391021] VFS: Open an exclusive opened block device for write dm-19. current [418202 tune2fs]. parent [418201 sh]
[2477006.554108] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192
[2477045.475584] device-mapper: ioctl: dmsetup[419420]: dm-20 (ost87_flakey) is created successfully
[2477047.327929] LDISKFS-fs (dm-20): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[2477047.398173] LDISKFS-fs (dm-20): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[2477047.482446] Lustre: lustre-OST0056: Imperative Recovery enabled, recovery window shrunk from 60-180 down to 60-180
[2477047.505170] VFS: Open an exclusive opened block device for write dm-20. current [419804 tune2fs]. parent [419803 sh]
[2477048.663970] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192
[2477085.186478] device-mapper: ioctl: dmsetup[420998]: dm-21 (ost93_flakey) is created successfully
[2477087.051610] LDISKFS-fs (dm-21): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[2477087.110402] LDISKFS-fs (dm-21): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[2477087.210477] VFS: Open an exclusive opened block device for write dm-21. current [421384 tune2fs]. parent [421383 sh]
[2477088.383741] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192
[2477111.410673] Lustre: DEBUG MARKER: Using TIMEOUT=20
[2477124.641409] Lustre: DEBUG MARKER: server1: executing check_logdir /tmp/test_logs/1712108434
[2477125.477651] Lustre: DEBUG MARKER: client1: executing check_logdir /tmp/test_logs/1712108434
[2477127.501393] Lustre: DEBUG MARKER: server1: executing yml_node
[2477128.285612] Lustre: DEBUG MARKER: client1: executing yml_node
[2477131.314388] Lustre: DEBUG MARKER: Client: 2.15.4
[2477132.390751] Lustre: DEBUG MARKER: MDS: 2.15.4
[2477133.437694] Lustre: DEBUG MARKER: OSS: 2.15.4
[2482803.700633] Lustre: DEBUG MARKER: server1: executing check_logdir /tmp/test_logs/1712114113
[2482804.529655] Lustre: DEBUG MARKER: client1: executing check_logdir /tmp/test_logs/1712114113
[2482806.488561] Lustre: DEBUG MARKER: server1: executing yml_node
[2482807.181775] Lustre: DEBUG MARKER: client1: executing yml_node
[2482810.451607] Lustre: DEBUG MARKER: Client: 2.15.4
[2482811.540270] Lustre: DEBUG MARKER: MDS: 2.15.4
[2482812.626292] Lustre: DEBUG MARKER: OSS: 2.15.4
[2488777.764992] Lustre: DEBUG MARKER: server1: executing check_logdir /tmp/test_logs/1712120087
[2488778.660739] Lustre: DEBUG MARKER: client1: executing check_logdir /tmp/test_logs/1712120087
[2488780.868412] Lustre: DEBUG MARKER: server1: executing yml_node
[2488781.842264] Lustre: DEBUG MARKER: client1: executing yml_node
[2488785.808730] Lustre: DEBUG MARKER: Client: 2.15.4
[2488786.935265] Lustre: DEBUG MARKER: MDS: 2.15.4
[2488788.060834] Lustre: DEBUG MARKER: OSS: 2.15.4
[2494938.700013] Lustre: DEBUG MARKER: server1: executing check_logdir /tmp/test_logs/1712126248
[2494939.608568] Lustre: DEBUG MARKER: client1: executing check_logdir /tmp/test_logs/1712126248
[2494946.699816] Lustre: 427354:0:(client.c:2289:ptlrpc_expire_one_request()) @@@ Request sent has timed out for sent delay: [sent 1712126252/real 0] req@00000000cf1d2435 x1795273568523520/t0(0) o104->lustre-OST0002@192.168.0.100@o2ib:15/16 lens 328/224 e 0 to 1 dl 1712126259 ref 2 fl Rpc:Xr/0/ffffffff rc 0/-1 job:''
[2494950.726583] Lustre: DEBUG MARKER: server1: executing yml_node
[2494951.418220] Lustre: lustre-OST0008: Client 5d853277-1f18-437e-a94e-e36dde23fe51 (at 192.168.0.100@o2ib) reconnecting
[2494951.430593] Lustre: Skipped 7 previous similar messages
[2494951.650720] Lustre: DEBUG MARKER: client1: executing yml_node
[2494952.491850] Lustre: 430149:0:(client.c:2289:ptlrpc_expire_one_request()) @@@ Request sent has timed out for sent delay: [sent 1712126257/real 0] req@00000000904251dc x1795273575157888/t0(0) o104->lustre-OST003e@192.168.0.100@o2ib:15/16 lens 328/224 e 0 to 1 dl 1712126264 ref 2 fl Rpc:Xr/0/ffffffff rc 0/-1 job:''
[2494952.491853] Lustre: 426064:0:(client.c:2289:ptlrpc_expire_one_request()) @@@ Request sent has timed out for sent delay: [sent 1712126257/real 0] req@000000008ce6e044 x1795273575175360/t0(0) o104->lustre-OST0032@192.168.0.100@o2ib:15/16 lens 328/224 e 0 to 1 dl 1712126264 ref 2 fl Rpc:Xr/0/ffffffff rc 0/-1 job:''
[2494952.491859] Lustre: 426064:0:(client.c:2289:ptlrpc_expire_one_request()) Skipped 2 previous similar messages
[2494952.522597] Lustre: 430149:0:(client.c:2289:ptlrpc_expire_one_request()) Skipped 1333 previous similar messages
[2494954.703809] Lustre: 434458:0:(client.c:2289:ptlrpc_expire_one_request()) @@@ Request sent has timed out for sent delay: [sent 1712126257/real 0] req@00000000df768643 x1795273575147456/t0(0) o104->lustre-OST004a@192.168.0.95@o2ib:15/16 lens 328/224 e 0 to 1 dl 1712126267 ref 2 fl Rpc:Xr/0/ffffffff rc 0/-1 job:''
[2494954.735237] Lustre: 434458:0:(client.c:2289:ptlrpc_expire_one_request()) Skipped 337 previous similar messages
[2494958.731620] Lustre: 427354:0:(client.c:2289:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1712126260/real 1712126260] req@00000000cf1d2435 x1795273568523520/t0(0) o104->lustre-OST0002@192.168.0.100@o2ib:15/16 lens 328/224 e 0 to 1 dl 1712126271 ref 2 fl Rpc:Xr/2/ffffffff rc 0/-1 job:''
[2494958.763883] Lustre: 427354:0:(client.c:2289:ptlrpc_expire_one_request()) Skipped 1372 previous similar messages
[2494965.585944] Lustre: lustre-OST0002: Client 5d853277-1f18-437e-a94e-e36dde23fe51 (at 192.168.0.100@o2ib) reconnecting
[2494965.598961] Lustre: Skipped 6 previous similar messages
[2494967.680216] Lustre: lustre-OST0008: Client 5d853277-1f18-437e-a94e-e36dde23fe51 (at 192.168.0.100@o2ib) reconnecting
[2494967.680218] Lustre: lustre-OST000e: Client 5d853277-1f18-437e-a94e-e36dde23fe51 (at 192.168.0.100@o2ib) reconnecting
[2494967.680223] Lustre: Skipped 1 previous similar message
[2494967.693062] Lustre: Skipped 6 previous similar messages
[2494980.747245] Lustre: ll_ost03_015: service thread pid 427354 was inactive for 41.150 seconds. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes:
[2494980.747250] Lustre: ll_ost01_068: service thread pid 430164 was inactive for 41.281 seconds. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes:
[2494980.747253] Lustre: ll_ost01_097: service thread pid 434471 was inactive for 41.280 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one.
[2494980.747257] Pid: 430130, comm: ll_ost01_055 5.10.0-188.0.0.101.oe2203sp3.aarch64 #1 SMP Wed Feb 21 13:52:43 CST 2024
[2494980.747265] Lustre: Skipped 1 previous similar message
[2494980.747266] Lustre: Skipped 20 previous similar messages
[2494980.747267] Call Trace TBD:
[2494980.769237] Lustre: Skipped 1 previous similar message
[2494980.769246] Pid: 427354, comm: ll_ost03_015 5.10.0-188.0.0.101.oe2203sp3.aarch64 #1 SMP Wed Feb 21 13:52:43 CST 2024
[2494980.857456] Call Trace TBD:
[2494980.861613] Pid: 430164, comm: ll_ost01_068 5.10.0-188.0.0.101.oe2203sp3.aarch64 #1 SMP Wed Feb 21 13:52:43 CST 2024
[2494980.874353] Call Trace TBD:
[2494989.844596] Lustre: lustre-OST0002: Client 5d853277-1f18-437e-a94e-e36dde23fe51 (at 192.168.0.100@o2ib) reconnecting
[2494989.857235] Lustre: Skipped 6 previous similar messages
[2494996.235127] Lustre: 426125:0:(client.c:2289:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1712126280/real 1712126280] req@000000001ac69ff0 x1795273575164672/t0(0) o104->lustre-OST004a@192.168.0.100@o2ib:15/16 lens 328/224 e 0 to 1 dl 1712126309 ref 2 fl Rpc:Xr/2/ffffffff rc 0/-1 job:''
[2494996.235132] Lustre: 434458:0:(client.c:2289:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1712126280/real 1712126280] req@00000000546cf90b x1795273575167616/t0(0) o104->lustre-OST004a@192.168.0.100@o2ib:15/16 lens 328/224 e 0 to 1 dl 1712126309 ref 2 fl Rpc:Xr/2/ffffffff rc 0/-1 job:''
[2494996.298786] Lustre: 426125:0:(client.c:2289:ptlrpc_expire_one_request()) Skipped 157 previous similar messages
[2495007.370801] Lustre: ll_ost01_074: service thread pid 430179 was inactive for 62.608 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one.
[2495007.370803] Lustre: ll_ost01_036: service thread pid 426125 was inactive for 62.608 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one.
[2495007.370805] Lustre: ll_ost01_044: service thread pid 426133 was inactive for 62.608 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one.
[2495007.370807] Lustre: ll_ost01_022: service thread pid 426090 was inactive for 62.608 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one.
[2495007.370808] Lustre: ll_ost01_056: service thread pid 430132 was inactive for 62.608 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one.
[2495007.370811] Lustre: ll_ost01_011: service thread pid 426064 was inactive for 62.608 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one.
[2495007.370817] Lustre: Skipped 11 previous similar messages
[2495007.370818] Lustre: Skipped 11 previous similar messages
[2495007.370819] Lustre: Skipped 11 previous similar messages
[2495007.370820] Lustre: Skipped 11 previous similar messages
[2495007.370822] Lustre: Skipped 11 previous similar messages
[2495007.388296] Lustre: Skipped 1 previous similar message
[2495025.992545] Lustre: lustre-OST001a: Client 5d853277-1f18-437e-a94e-e36dde23fe51 (at 192.168.0.100@o2ib) reconnecting
[2495025.992547] Lustre: lustre-OST0020: Client 5d853277-1f18-437e-a94e-e36dde23fe51 (at 192.168.0.100@o2ib) reconnecting
[2495025.992552] Lustre: Skipped 9 previous similar messages
[2495026.005005] Lustre: Skipped 12 previous similar messages
[2495038.518596] Lustre: DEBUG MARKER: Client: 2.15.4
[2495039.757511] Lustre: DEBUG MARKER: MDS: 2.15.4
[2495040.139013] LustreError: 393470:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 100s: evicting client at 192.168.0.100@o2ib ns: filter-lustre-OST0002_UUID lock: 000000009d8bea1f/0xfc429b6ead83c7df lrc: 3/0,0 mode: PR/PR res: [0x88000040b:0x2:0x0].0x0 rrc: 2176 type: EXT [597352448->597581823] (req 597352448->597401599) gid 0 flags: 0x60000400010020 nid: 192.168.0.100@o2ib remote: 0xdfeed0111e7b4f8e expref: 5265 pid: 426104 timeout: 2495081 lvb_type: 0
[2495040.185989] LustreError: 434552:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) ldlm_cancel from 192.168.0.100@o2ib arrived at 1712126353 with bad export cookie 18177261945584924331
[2495040.187617] LustreError: 430236:0:(client.c:1255:ptlrpc_import_delay_req()) @@@ IMP_CLOSED req@00000000e1cc70f2 x1795273603396928/t0(0) o104->lustre-OST0002@192.168.0.100@o2ib:15/16 lens 328/224 e 0 to 0 dl 0 ref 1 fl Rpc:QU/0/ffffffff rc 0/-1 job:''
[2495040.204205] LustreError: 434552:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) Skipped 181 previous similar messages
[2495040.229120] LustreError: 430236:0:(client.c:1255:ptlrpc_import_delay_req()) Skipped 715 previous similar messages
[2495040.686237] LustreError: 434552:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) ldlm_cancel from 192.168.0.100@o2ib arrived at 1712126353 with bad export cookie 18177261945677964082
[2495040.704733] LustreError: 434552:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) Skipped 11505 previous similar messages
[2495040.958212] Lustre: DEBUG MARKER: OSS: 2.15.4
[2495045.692896] LustreError: 434551:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) ldlm_cancel from 192.168.0.100@o2ib arrived at 1712126358 with bad export cookie 18177261945677964082
[2495045.711391] LustreError: 434551:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) Skipped 25358 previous similar messages
[2498531.408258] Lustre: 377330:0:(client.c:2289:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1712129837/real 1712129837] req@00000000e7073d25 x1795273896743680/t0(0) o13->lustre-OST001c-osc-MDT0008@192.168.0.85@o2ib:7/4 lens 224/368 e 0 to 1 dl 1712129844 ref 1 fl Rpc:XQr/0/ffffffff rc 0/-1 job:'osp-pre-28-8.0'
[2498531.408261] Lustre: 377332:0:(client.c:2289:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1712129837/real 1712129837] req@00000000488f8518 x1795273896744064/t0(0) o13->lustre-OST0040-osc-MDT0008@192.168.0.85@o2ib:7/4 lens 224/368 e 0 to 1 dl 1712129844 ref 1 fl Rpc:XQr/0/ffffffff rc 0/-1 job:'osp-pre-64-8.0'
[2498531.408265] Lustre: 377295:0:(client.c:2289:ptlrpc_expire_one_request()) @@@ Request sent has timed out for sent delay: [sent 1712129837/real 0] req@000000009123f26b x1795273896747520/t0(0) o13->lustre-OST005e-osc-MDT0002@192.168.0.85@o2ib:7/4 lens 224/368 e 0 to 1 dl 1712129844 ref 2 fl Rpc:Xr/0/ffffffff rc 0/-1 job:'osp-pre-94-2.0'
[2498531.408275] Lustre: lustre-OST0058-osc-MDT0014: Connection to lustre-OST0058 (at 192.168.0.85@o2ib) was lost; in progress operations using this service will wait for recovery to complete
[2498531.408279] Lustre: 377332:0:(client.c:2289:ptlrpc_expire_one_request()) Skipped 844 previous similar messages
[2498531.408282] Lustre: 377295:0:(client.c:2289:ptlrpc_expire_one_request()) Skipped 844 previous similar messages
[2498531.553484] Lustre: Skipped 3 previous similar messages
[2498531.920254] Lustre: 377345:0:(client.c:2289:ptlrpc_expire_one_request()) @@@ Request sent has timed out for sent delay: [sent 1712129837/real 0] req@00000000ee7c0765 x1795273896746880/t0(0) o13->lustre-OST0058-osc-MDT0002@192.168.0.85@o2ib:7/4 lens 224/368 e 0 to 1 dl 1712129844 ref 2 fl Rpc:Xr/0/ffffffff rc 0/-1 job:'osp-pre-88-2.0'
[2498531.953747] Lustre: 377345:0:(client.c:2289:ptlrpc_expire_one_request()) Skipped 2 previous similar messages
[2498531.965426] Lustre: lustre-OST0058-osc-MDT0002: Connection to lustre-OST0058 (at 192.168.0.85@o2ib) was lost; in progress operations using this service will wait for recovery to complete
[2498531.984301] Lustre: Skipped 2 previous similar messages
[2498533.200236] Lustre: 377344:0:(client.c:2289:ptlrpc_expire_one_request()) @@@ Request sent has timed out for sent delay: [sent 1712129839/real 0] req@00000000eb08c066 x1795273896808896/t0(0) o13->lustre-OST0028-osc-MDT000e@192.168.0.85@o2ib:7/4 lens 224/368 e 0 to 1 dl 1712129846 ref 2 fl Rpc:Xr/0/ffffffff rc 0/-1 job:'osp-pre-40-14.0'
[2498533.233349] Lustre: 377344:0:(client.c:2289:ptlrpc_expire_one_request()) Skipped 87 previous similar messages
[2498534.224227] Lustre: lustre-OST0034-osc-MDT0008: Connection to lustre-OST0034 (at 192.168.0.85@o2ib) was lost; in progress operations using this service will wait for recovery to complete
[2498534.243173] Lustre: Skipped 120 previous similar messages
[2498535.248192] Lustre: 377332:0:(client.c:2289:ptlrpc_expire_one_request()) @@@ Request sent has timed out for sent delay: [sent 1712129841/real 0] req@000000008a70bd1f x1795273896818816/t0(0) o13->lustre-OST0010-osc-MDT0008@192.168.0.85@o2ib:7/4 lens 224/368 e 0 to 1 dl 1712129848 ref 2 fl Rpc:Xr/0/ffffffff rc 0/-1 job:'osp-pre-16-8.0'
[2498535.281066] Lustre: 377332:0:(client.c:2289:ptlrpc_expire_one_request()) Skipped 74 previous similar messages
[2498539.376127] Lustre: 377348:0:(client.c:2289:ptlrpc_expire_one_request()) @@@ Request sent has timed out for sent delay: [sent 1712129845/real 0] req@00000000cea69116 x1795273897049600/t0(0) o400->lustre-MDT000a-lwp-OST005c@192.168.0.85@o2ib:12/10 lens 224/224 e 0 to 1 dl 1712129852 ref 2 fl Rpc:XNr/0/ffffffff rc 0/-1 job:'kworker/u192:1.0'
[2498539.376136] Lustre: lustre-MDT0016-lwp-OST0014: Connection to lustre-MDT0016 (at 192.168.0.85@o2ib) was lost; in progress operations using this service will wait for recovery to complete
[2498539.410021] Lustre: 377348:0:(client.c:2289:ptlrpc_expire_one_request()) Skipped 48 previous similar messages
[2498539.428891] Lustre: Skipped 6 previous similar messages
[2498542.544073] LNetError: 377254:0:(o2iblnd_cb.c:3370:kiblnd_check_txs_locked()) Timed out tx: tx_queue(WSQ:001), 18 seconds
[2498542.557400] LNetError: 377254:0:(o2iblnd_cb.c:3439:kiblnd_check_conns()) Timed out RDMA with 192.168.0.85@o2ib (18): c: 0, oc: 0, rc: 8
[2498546.544001] LNet: 377254:0:(o2iblnd_cb.c:3415:kiblnd_check_conns()) Timed out tx for 192.168.0.85@o2ib: 5 seconds
[2498547.535993] Lustre: 377346:0:(client.c:2289:ptlrpc_expire_one_request()) @@@ Request sent has timed out for sent delay: [sent 1712129853/real 0] req@00000000b41f9633 x1795273897384448/t0(0) o400->lustre-MDT000a-lwp-OST003e@192.168.0.85@o2ib:12/10 lens 224/224 e 0 to 1 dl 1712129860 ref 2 fl Rpc:XNr/0/ffffffff rc 0/-1 job:'kworker/u192:2.0'
[2498547.570099] Lustre: 377346:0:(client.c:2289:ptlrpc_expire_one_request()) Skipped 539 previous similar messages
[2498550.543935] LNet: 377254:0:(o2iblnd_cb.c:3415:kiblnd_check_conns()) Timed out tx for 192.168.0.85@o2ib: 10 seconds
[2498550.556955] LNet: 377254:0:(o2iblnd_cb.c:3415:kiblnd_check_conns()) Skipped 7 previous similar messages
[2498554.543873] LNet: 377254:0:(o2iblnd_cb.c:3415:kiblnd_check_conns()) Timed out tx for 192.168.0.85@o2ib: 13 seconds
[2498554.556487] LNet: 377254:0:(o2iblnd_cb.c:3415:kiblnd_check_conns()) Skipped 7 previous similar messages
[2498558.543807] LNet: 377254:0:(o2iblnd_cb.c:3415:kiblnd_check_conns()) Timed out tx for 192.168.0.85@o2ib: 18 seconds
[2498558.556434] LNet: 377254:0:(o2iblnd_cb.c:3415:kiblnd_check_conns()) Skipped 7 previous similar messages
[2498566.479762] Lustre: lustre-OST001a: haven't heard from client lustre-MDT000a-mdtlov_UUID (at 192.168.0.85@o2ib) in 47 seconds. I think it's dead, and I am evicting it. exp 00000000824f85a9, cur 1712129879 expire 1712129849 last 1712129832
[2498566.543665] LNet: 377254:0:(o2iblnd_cb.c:3415:kiblnd_check_conns()) Timed out tx for 192.168.0.85@o2ib: 26 seconds
[2498566.557002] LNet: 377254:0:(o2iblnd_cb.c:3415:kiblnd_check_conns()) Skipped 14 previous similar messages
[2498566.983820] Lustre: lustre-OST0014: haven't heard from client lustre-MDT0016-mdtlov_UUID (at 192.168.0.85@o2ib) in 47 seconds. I think it's dead, and I am evicting it. exp 000000003126bba5, cur 1712129879 expire 1712129849 last 1712129832
[2498567.008916] Lustre: Skipped 83 previous similar messages
[2498568.110350] Lustre: lustre-OST0032: haven't heard from client lustre-MDT0004-mdtlov_UUID (at 192.168.0.85@o2ib) in 47 seconds. I think it's dead, and I am evicting it. exp 00000000db6c40d3, cur 1712129881 expire 1712129851 last 1712129834
[2498568.134376] Lustre: Skipped 3 previous similar messages
[2498570.564947] Lustre: lustre-OST0044: haven't heard from client lustre-MDT0016-mdtlov_UUID (at 192.168.0.85@o2ib) in 47 seconds. I think it's dead, and I am evicting it. exp 00000000f221bb4e, cur 1712129883 expire 1712129853 last 1712129836
[2498570.588979] Lustre: Skipped 19 previous similar messages
[2498578.543478] LNet: 377254:0:(o2iblnd_cb.c:3415:kiblnd_check_conns()) Timed out tx for 192.168.0.85@o2ib: 3 seconds
[2498578.556381] LNet: 377254:0:(o2iblnd_cb.c:3415:kiblnd_check_conns()) Skipped 20 previous similar messages
[2498598.543139] LNet: 377254:0:(o2iblnd_cb.c:3415:kiblnd_check_conns()) Timed out tx for 192.168.0.85@o2ib: 58 seconds
[2498598.555848] LNet: 377254:0:(o2iblnd_cb.c:3415:kiblnd_check_conns()) Skipped 37 previous similar messages
[2498634.542537] LNet: 377254:0:(o2iblnd_cb.c:3415:kiblnd_check_conns()) Timed out tx for 192.168.0.85@o2ib: 93 seconds
[2498634.555361] LNet: 377254:0:(o2iblnd_cb.c:3415:kiblnd_check_conns()) Skipped 59 previous similar messages
[2498702.541412] LNet: 377254:0:(o2iblnd_cb.c:3415:kiblnd_check_conns()) Timed out tx for 192.168.0.85@o2ib: 0 seconds
[2498702.554204] LNet: 377254:0:(o2iblnd_cb.c:3415:kiblnd_check_conns()) Skipped 109 previous similar messages
[2498834.539208] LNet: 377254:0:(o2iblnd_cb.c:3415:kiblnd_check_conns()) Timed out tx for 192.168.0.85@o2ib: 3 seconds
[2498834.552071] LNet: 377254:0:(o2iblnd_cb.c:3415:kiblnd_check_conns()) Skipped 201 previous similar messages
[2824182.978236] Lustre: lustre-MDT0002: haven't heard from client e2ce5154-0e50-4d2c-937a-cbce0d65804c (at 192.168.0.95@o2ib) in 47 seconds. I think it's dead, and I am evicting it. exp 00000000dfe0c982, cur 1712455501 expire 1712455471 last 1712455454
[2824183.003852] Lustre: Skipped 19 previous similar messages
[2824465.742814] Lustre: lustre-MDT0000-osp-MDT0002: Connection to lustre-MDT0000 (at 192.168.0.81@o2ib) was lost; in progress operations using this service will wait for recovery to complete
[2824465.762642] Lustre: Skipped 32 previous similar messages
[2824483.406382] Lustre: 377342:0:(client.c:2289:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1712455794/real 1712455794] req@0000000046acb618 x1795278872257984/t0(0) o400->MGC192.168.0.81@o2ib@192.168.0.81@o2ib:26/25 lens 224/224 e 0 to 1 dl 1712455801 ref 1 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'kworker/u192:2.0'
[2824483.440801] Lustre: 377342:0:(client.c:2289:ptlrpc_expire_one_request()) Skipped 7 previous similar messages
[2824483.452489] LustreError: 166-1: MGC192.168.0.81@o2ib: Connection to MGS (at 192.168.0.81@o2ib) was lost; in progress operations using this service will fail
[2824483.468899] Lustre: lustre-MDT0001-osp-MDT0002: Connection to lustre-MDT0001 (at 192.168.0.82@o2ib) was lost; in progress operations using this service will wait for recovery to complete
[2824483.487893] Lustre: Skipped 19 previous similar messages
[2824501.944468] Lustre: lustre-MDT0002: Not available for connect from 192.168.0.86@o2ib (stopping)
[2824502.862181] LustreError: 11-0: lustre-MDT0002-osp-MDT000e: operation mds_statfs to node 0@lo failed: rc = -107
[2824502.874224] Lustre: lustre-MDT0002-osp-MDT000e: Connection to lustre-MDT0002 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[2824502.892011] Lustre: lustre-MDT0002: Not available for connect from 0@lo (stopping)
[2824502.902097] Lustre: Skipped 40 previous similar messages
[2824504.764548] Lustre: lustre-MDT0002: Not available for connect from 192.168.0.84@o2ib (stopping)
[2824504.775502] Lustre: Skipped 40 previous similar messages
[2824504.910108] Lustre: lustre-MDT0002-osp-MDT0008: Connection to lustre-MDT0002 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[2824504.928189] Lustre: Skipped 17 previous similar messages
[2824507.482065] Lustre: lustre-MDT0002: Not available for connect from 192.168.0.81@o2ib (stopping)
[2824507.492659] Lustre: Skipped 77 previous similar messages
[2824508.238271] LustreError: 452794:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items
[2824512.602048] Lustre: lustre-MDT0002: Not available for connect from 192.168.0.81@o2ib (stopping)
[2824512.613017] Lustre: Skipped 96 previous similar messages
[2824513.001577] Lustre: server umount lustre-MDT0002 complete
[2824513.425008] LustreError: 137-5: lustre-MDT0002_UUID: not available for connect from 192.168.0.86@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
[2824513.445280] LustreError: Skipped 19 previous similar messages
[2824515.144617] LustreError: 137-5: lustre-MDT0002_UUID: not available for connect from 192.168.0.82@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
[2824515.164496] LustreError: Skipped 37 previous similar messages
[2824516.168586] LustreError: 137-5: lustre-MDT0002_UUID: not available for connect from 192.168.0.82@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
[2824516.188469] LustreError: Skipped 57 previous similar messages
[2824518.544805] LustreError: 137-5: lustre-MDT0002_UUID: not available for connect from 192.168.0.86@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
[2824518.565058] LustreError: Skipped 38 previous similar messages
[2824519.048272] device-mapper: ioctl: dmsetup[453175]: dm-2 (mds3_flakey) is removed successfully
[2824521.293940] Lustre: lustre-MDT0003-osp-MDT0008: Connection to lustre-MDT0003 (at 192.168.0.84@o2ib) was lost; in progress operations using this service will wait for recovery to complete
[2824521.313202] Lustre: Skipped 18 previous similar messages
[2824522.842056] LustreError: 137-5: lustre-MDT0002_UUID: not available for connect from 192.168.0.81@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
[2824522.861957] LustreError: Skipped 114 previous similar messages
[2824531.528331] LustreError: 137-5: lustre-MDT0002_UUID: not available for connect from 192.168.0.82@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
[2824531.548863] LustreError: Skipped 153 previous similar messages
[2824541.005546] LustreError: 11-0: lustre-MDT0005-osp-MDT0008: operation mds_statfs to node 192.168.0.86@o2ib failed: rc = -107
[2824541.018655] LustreError: Skipped 1 previous similar message
[2824541.025616] Lustre: lustre-MDT0005-osp-MDT0008: Connection to lustre-MDT0005 (at 192.168.0.86@o2ib) was lost; in progress operations using this service will wait for recovery to complete
[2824547.912221] LustreError: 137-5: lustre-MDT0002_UUID: not available for connect from 192.168.0.82@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
[2824547.932191] LustreError: Skipped 361 previous similar messages
[2824559.437260] LustreError: 11-0: lustre-MDT0006-osp-MDT0014: operation mds_statfs to node 192.168.0.81@o2ib failed: rc = -107
[2824559.450648] Lustre: lustre-MDT0006-osp-MDT0014: Connection to lustre-MDT0006 (at 192.168.0.81@o2ib) was lost; in progress operations using this service will wait for recovery to complete
[2824559.469712] Lustre: Skipped 18 previous similar messages
[2824561.485235] LustreError: 11-0: lustre-MDT0006-osp-MDT0008: operation mds_statfs to node 192.168.0.81@o2ib failed: rc = -107
[2824581.007732] LustreError: 137-5: lustre-MDT0002_UUID: not available for connect from 192.168.0.86@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
[2824581.028073] LustreError: Skipped 621 previous similar messages
[2824602.444621] Lustre: lustre-MDT0008-osp-MDT000e: Connection to lustre-MDT0008 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[2824602.445075] Lustre: lustre-MDT0008: Not available for connect from 0@lo (stopping)
[2824602.462831] Lustre: Skipped 54 previous similar messages
[2824602.472615] Lustre: Skipped 16 previous similar messages
[2824608.332639] LustreError: 453481:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items
[2824613.703161] Lustre: server umount lustre-MDT0008 complete
[2824620.718210] device-mapper: ioctl: dmsetup[453866]: dm-3 (mds9_flakey) is removed successfully
[2824645.518674] LustreError: 137-5: lustre-MDT0002_UUID: not available for connect from 192.168.0.86@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
[2824645.538993] LustreError: Skipped 1776 previous similar messages
[2824646.219798] LustreError: 11-0: lustre-MDT000b-osp-MDT000e: operation mds_statfs to node 192.168.0.86@o2ib failed: rc = -107
[2824671.819382] LustreError: 11-0: lustre-MDT000c-osp-MDT000e: operation mds_statfs to node 192.168.0.81@o2ib failed: rc = -107
[2824671.832712] LustreError: Skipped 1 previous similar message
[2824671.839672] Lustre: lustre-MDT000c-osp-MDT000e: Connection to lustre-MDT000c (at 192.168.0.81@o2ib) was lost; in progress operations using this service will wait for recovery to complete
[2824671.858464] Lustre: Skipped 36 previous similar messages
[2824697.418942] LustreError: 11-0: lustre-MDT000d-osp-MDT000e: operation mds_statfs to node 192.168.0.82@o2ib failed: rc = -107
[2824697.432074] LustreError: Skipped 1 previous similar message
[2824718.733447] Lustre: lustre-MDT000e: Not available for connect from 192.168.0.86@o2ib (stopping)
[2824718.744309] Lustre: Skipped 225 previous similar messages
[2824729.925189] Lustre: server umount lustre-MDT000e complete
[2824736.296148] device-mapper: ioctl: dmsetup[454554]: dm-4 (mds15_flakey) is removed successfully
[2824738.634326] LustreError: 11-0: lustre-MDT000f-osp-MDT0014: operation mds_statfs to node 192.168.0.84@o2ib failed: rc = -107
[2824738.647645] LustreError: Skipped 1 previous similar message
[2824775.056536] LustreError: 137-5: lustre-MDT0002_UUID: not available for connect from 192.168.0.86@o2ib (no target). If you are running an HA pair check that the target is mounted on the other server.
[2824775.076621] LustreError: Skipped 5312 previous similar messages
[2824810.313118] LustreError: 11-0: lustre-MDT0013-osp-MDT0014: operation mds_statfs to node 192.168.0.82@o2ib failed: rc = -107
[2824810.326272] Lustre: lustre-MDT0013-osp-MDT0014: Connection to lustre-MDT0013 (at 192.168.0.82@o2ib) was lost; in progress operations using this service will wait for recovery to complete
[2824810.345064] Lustre: Skipped 103 previous similar messages
[2824852.296798] Lustre: lustre-MDT0014: Not available for connect from 0@lo (stopping)
[2824852.306413] Lustre: Skipped 191 previous similar messages
[2824863.483890] Lustre: server umount lustre-MDT0014 complete
[2824870.558559] device-mapper: ioctl: dmsetup[455249]: dm-5 (mds21_flakey) is removed successfully
[2824929.086420] Lustre: server umount lustre-OST0002 complete
[2824931.294717] device-mapper: ioctl: dmsetup[455929]: dm-6 (ost3_flakey) is removed successfully
[2824947.268199] Lustre: server umount lustre-OST0008 complete
[2824949.444141] device-mapper: ioctl: dmsetup[456606]: dm-7 (ost9_flakey) is removed successfully
[2824965.430611] Lustre: server umount lustre-OST000e complete
[2824967.637767] device-mapper: ioctl: dmsetup[457281]: dm-8 (ost15_flakey) is removed successfully
[2824985.875221] device-mapper: ioctl: dmsetup[457959]: dm-9 (ost21_flakey) is removed successfully
[2825001.803949] Lustre: server umount lustre-OST001a complete
[2825001.811109] Lustre: Skipped 1 previous similar message
[2825004.001906] device-mapper: ioctl: dmsetup[458640]: dm-10 (ost27_flakey) is removed successfully
[2825022.158812] device-mapper: ioctl: dmsetup[459313]: dm-11 (ost33_flakey) is removed successfully
[2825040.341724] device-mapper: ioctl: dmsetup[459990]: dm-12 (ost39_flakey) is removed successfully
[2825058.336294] device-mapper: ioctl: dmsetup[460662]: dm-13 (ost45_flakey) is removed successfully
[2825074.169375] Lustre: server umount lustre-OST0032 complete
[2825074.176266] Lustre: Skipped 3 previous similar messages
[2825076.336271] device-mapper: ioctl: dmsetup[461338]: dm-14 (ost51_flakey) is removed successfully
[2825094.247691] device-mapper: ioctl: dmsetup[462014]: dm-15 (ost57_flakey) is removed successfully
[2825112.325881] device-mapper: ioctl: dmsetup[462690]: dm-16 (ost63_flakey) is removed successfully
[2825130.371420] device-mapper: ioctl: dmsetup[463366]: dm-17 (ost69_flakey) is removed successfully
[2825148.514657] device-mapper: ioctl: dmsetup[464037]: dm-18 (ost75_flakey) is removed successfully
[2825166.493718] device-mapper: ioctl: dmsetup[464711]: dm-19 (ost81_flakey) is removed successfully
[2825184.613452] device-mapper: ioctl: dmsetup[465383]: dm-20 (ost87_flakey) is removed successfully
[2825202.719415] device-mapper: ioctl: dmsetup[466055]: dm-21 (ost93_flakey) is removed successfully
[2825213.606371] Lustre: DEBUG MARKER: server3: executing unload_modules_local
[2825214.448008] Key type lgssc unregistered
[2825214.606479] LNet: 466674:0:(lib-ptl.c:958:lnet_clear_lazy_portal()) Active lazy portal 0 on exit
[2825215.778387] LNet: Removed LNI 192.168.0.83@o2ib
[2825216.131200] Key type .llcrypt unregistered
[2825216.145931] Key type ._llcrypt unregistered
[2825418.104401] LNet: HW NUMA nodes: 4, HW CPU cores: 96, npartitions: 4
[2825418.115096] alg: No test for adler32 (adler32-zlib)
[2825418.863146] Key type ._llcrypt registered
[2825418.868559] Key type .llcrypt registered
[2825418.885209] lnet: unknown parameter '#' ignored
[2825418.891121] lnet: unknown parameter 'o2ib1(ibp129s0f1)' ignored
[2825418.915297] Lustre: Lustre: Build Version: 2.15.4
[2825418.966531] LNet: Using FastReg for registration
[2825419.168345] LNet: Added LNI 192.168.0.83@o2ib [8/256/0/180]
[2827473.585482] LNet: 469539:0:(lib-ptl.c:958:lnet_clear_lazy_portal()) Active lazy portal 0 on exit
[2827474.749372] LNet: Removed LNI 192.168.0.83@o2ib
[2827474.970626] Key type .llcrypt unregistered
[2827474.984985] Key type ._llcrypt unregistered
[2827475.067620] LNet: HW NUMA nodes: 4, HW CPU cores: 96, npartitions: 4
[2827475.078204] alg: No test for adler32 (adler32-zlib)
[2827475.825452] Key type ._llcrypt registered
[2827475.831176] Key type .llcrypt registered
[2827475.847711] lnet: unknown parameter '#' ignored
[2827475.853929] lnet: unknown parameter 'o2ib1(ibp129s0f1)' ignored
[2827475.885812] Lustre: Lustre: Build Version: 2.15.4
[2827475.936696] LNet: Using FastReg for registration
[2827476.135039] LNet: Added LNI 192.168.0.83@o2ib [8/256/0/180]
[2827919.924321] Lustre: DEBUG MARKER: server3: executing set_hostid
[2827923.716174] Lustre: DEBUG MARKER: server3: executing load_modules_local
[2827925.745956] Key type lgssc registered
[2827925.844242] Lustre: Echo OBD driver; http://www.lustre.org/
[2828118.725013] LDISKFS-fs (nvme0n1p1): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[2828500.475497] LDISKFS-fs (nvme1n1p1): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[2828883.934702] LDISKFS-fs (nvme2n1p1): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[2829267.469260] LDISKFS-fs (nvme3n1p1): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[2829483.509262] LDISKFS-fs (nvme0n1p2): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[2829523.058248] LDISKFS-fs (nvme1n1p2): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[2829562.651913] LDISKFS-fs (nvme2n1p2): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[2829605.450937] LDISKFS-fs (nvme3n1p2): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[2829653.789266] LDISKFS-fs (nvme0n1p3): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[2829697.787669] LDISKFS-fs (nvme1n1p3): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[2829737.415268] LDISKFS-fs (nvme2n1p3): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[2829770.019802] LDISKFS-fs (nvme3n1p3): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[2829802.474727] LDISKFS-fs (nvme0n1p4): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[2829834.854183] LDISKFS-fs (nvme1n1p4): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[2829867.181765] LDISKFS-fs (nvme2n1p4): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[2829899.417830] LDISKFS-fs (nvme3n1p4): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[2829942.023285] LDISKFS-fs (nvme0n1p5): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[2829978.947778] LDISKFS-fs (nvme1n1p5): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[2830022.893363] LDISKFS-fs (nvme2n1p5): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[2830066.633997] LDISKFS-fs (nvme3n1p5): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[2830131.752791] Lustre: DEBUG MARKER: server3: executing load_modules_local
[2830150.274961] device-mapper: ioctl: dmsetup[485833]: dm-2 (mds3_flakey) is created successfully
[2830151.834665] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[2830152.054456] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[2830152.155937] Lustre: srv-lustre-MDT0002: No data found on store. Initialize space: rc = -61
[2830153.179416] Lustre: lustre-MDT0002: new disk, initializing
[2830153.203228] Lustre: lustre-MDT0002: Imperative Recovery not enabled, recovery window 60-180
[2830153.219995] Lustre: cli-ctl-lustre-MDT0002: Allocated super-sequence [0x0000000280000400-0x00000002c0000400]:2:mdt]
[2830153.227647] VFS: Open an exclusive opened block device for write dm-2. current [486238 tune2fs]. parent [486237 sh]
[2830154.185820] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192
[2830192.983408] device-mapper: ioctl: dmsetup[487272]: dm-3 (mds9_flakey) is created successfully
[2830194.585439] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[2830194.795644] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[2830194.823922] Lustre: srv-lustre-MDT0008: No data found on store. Initialize space: rc = -61
[2830194.853296] Lustre: lustre-MDT0008: new disk, initializing
[2830194.877752] Lustre: lustre-MDT0008: Imperative Recovery not enabled, recovery window 60-180
[2830194.889935] Lustre: cli-ctl-lustre-MDT0008: Allocated super-sequence [0x0000000400000400-0x0000000440000400]:8:mdt]
[2830194.908592] VFS: Open an exclusive opened block device for write dm-3. current [487623 tune2fs]. parent [487622 sh]
[2830195.863489] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192
[2830226.294632] device-mapper: ioctl: dmsetup[488675]: dm-4 (mds15_flakey) is created successfully
[2830227.917963] LDISKFS-fs (dm-4): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[2830228.124499] LDISKFS-fs (dm-4): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[2830228.155677] Lustre: srv-lustre-MDT000e: No data found on store. Initialize space: rc = -61
[2830228.184554] Lustre: lustre-MDT000e: new disk, initializing
[2830228.211119] Lustre: lustre-MDT000e: Imperative Recovery not enabled, recovery window 60-180
[2830228.223444] Lustre: cli-ctl-lustre-MDT000e: Allocated super-sequence [0x0000000580000400-0x00000005c0000400]:e:mdt]
[2830228.231992] VFS: Open an exclusive opened block device for write dm-4. current [489037 tune2fs]. parent [489036 sh]
[2830229.215854] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192
[2830262.579700] device-mapper: ioctl: dmsetup[490045]: dm-5 (mds21_flakey) is created successfully
[2830264.224813] LDISKFS-fs (dm-5): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[2830264.431166] LDISKFS-fs (dm-5): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[2830264.466542] Lustre: srv-lustre-MDT0014: No data found on store. Initialize space: rc = -61
[2830264.496434] Lustre: lustre-MDT0014: new disk, initializing
[2830264.533243] Lustre: lustre-MDT0014: Imperative Recovery not enabled, recovery window 60-180
[2830264.549756] Lustre: cli-ctl-lustre-MDT0014: Allocated super-sequence [0x0000000700000400-0x0000000740000400]:14:mdt]
[2830264.558827] VFS: Open an exclusive opened block device for write dm-5. current [490425 tune2fs]. parent [490424 sh]
[2830265.546209] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192
[2830301.290658] device-mapper: ioctl: dmsetup[491529]: dm-6 (ost3_flakey) is created successfully
[2830302.837182] LDISKFS-fs (dm-6): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[2830302.895333] LDISKFS-fs (dm-6): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[2830302.908226] Lustre: lustre-OST0002-osd: enabled 'large_dir' feature on device /dev/mapper/ost3_flakey
[2830302.983867] Lustre: lustre-OST0002: new disk, initializing
[2830302.991453] Lustre: srv-lustre-OST0002: No data found on store. Initialize space: rc = -61
[2830303.028156] Lustre: lustre-OST0002: Imperative Recovery not enabled, recovery window 60-180
[2830303.049247] VFS: Open an exclusive opened block device for write dm-6. current [491920 tune2fs]. parent [491919 sh]
[2830304.062949] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192
[2830304.419745] Lustre: cli-lustre-OST0002-super: Allocated super-sequence [0x0000000880000400-0x00000008c0000400]:2:ost]
[2830339.659666] device-mapper: ioctl: dmsetup[493027]: dm-7 (ost9_flakey) is created successfully
[2830341.256575] LDISKFS-fs (dm-7): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[2830341.318356] LDISKFS-fs (dm-7): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[2830341.331113] Lustre: lustre-OST0008-osd: enabled 'large_dir' feature on device /dev/mapper/ost9_flakey
[2830341.360387] Lustre: lustre-OST0008: new disk, initializing
[2830341.363721] Lustre: lustre-OST0008: Not available for connect from 0@lo (not set up)
[2830341.367544] Lustre: srv-lustre-OST0008: No data found on store. Initialize space: rc = -61
[2830341.377625] Lustre: Skipped 3 previous similar messages
[2830341.423127] Lustre: lustre-OST0008: Imperative Recovery not enabled, recovery window 60-180
[2830341.441600] VFS: Open an exclusive opened block device for write dm-7. current [493390 tune2fs]. parent [493389 sh]
[2830341.561053] Lustre: cli-lustre-OST0008-super: Allocated super-sequence [0x0000000a00000400-0x0000000a40000400]:8:ost]
[2830342.459368] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192
[2830372.861679] device-mapper: ioctl: dmsetup[494496]: dm-8 (ost15_flakey) is created successfully
[2830374.462459] LDISKFS-fs (dm-8): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[2830374.522733] LDISKFS-fs (dm-8): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[2830374.536105] Lustre: lustre-OST000e-osd: enabled 'large_dir' feature on device /dev/mapper/ost15_flakey
[2830374.566607] Lustre: lustre-OST000e: new disk, initializing
[2830374.573760] Lustre: srv-lustre-OST000e: No data found on store. Initialize space: rc = -61
[2830374.611814] Lustre: lustre-OST000e: Imperative Recovery not enabled, recovery window 60-180
[2830374.630218] VFS: Open an exclusive opened block device for write dm-8. current [494834 tune2fs]. parent [494833 sh]
[2830375.010733] Lustre: cli-lustre-OST000e-super: Allocated super-sequence [0x0000000b80000400-0x0000000bc0000400]:e:ost]
[2830375.672524] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192
[2830409.096366] device-mapper: ioctl: dmsetup[495868]: dm-9 (ost21_flakey) is created successfully
[2830410.707207] LDISKFS-fs (dm-9): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[2830410.766257] LDISKFS-fs (dm-9): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[2830410.779147] Lustre: lustre-OST0014-osd: enabled 'large_dir' feature on device /dev/mapper/ost21_flakey
[2830410.811920] Lustre: lustre-OST0014: new disk, initializing
[2830410.819446] Lustre: srv-lustre-OST0014: No data found on store. Initialize space: rc = -61
[2830410.860182] Lustre: lustre-OST0014: Imperative Recovery not enabled, recovery window 60-180
[2830410.880774] VFS: Open an exclusive opened block device for write dm-9. current [496205 tune2fs]. parent [496204 sh]
[2830411.924232] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192
[2830411.951926] Lustre: cli-lustre-OST0014-super: Allocated super-sequence [0x0000000d00000400-0x0000000d40000400]:14:ost]
[2830448.095569] device-mapper: ioctl: dmsetup[497347]: dm-10 (ost27_flakey) is created successfully
[2830449.719843] LDISKFS-fs (dm-10): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[2830449.792575] LDISKFS-fs (dm-10): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[2830449.805435] Lustre: lustre-OST001a-osd: enabled 'large_dir' feature on device /dev/mapper/ost27_flakey
[2830449.873919] VFS: Open an exclusive opened block device for write dm-10. current [497688 tune2fs]. parent [497687 sh]
[2830450.927003] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192
[2830479.295952] device-mapper: ioctl: dmsetup[498708]: dm-11 (ost33_flakey) is created successfully
[2830480.960849] LDISKFS-fs (dm-11): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[2830481.019757] LDISKFS-fs (dm-11): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[2830481.032627] Lustre: lustre-OST0020-osd: enabled 'large_dir' feature on device /dev/mapper/ost33_flakey
[2830481.069052] Lustre: lustre-OST0020: new disk, initializing
[2830481.076592] Lustre: Skipped 1 previous similar message
[2830481.084258] Lustre: srv-lustre-OST0020: No data found on store. Initialize space: rc = -61
[2830481.094391] Lustre: Skipped 1 previous similar message
[2830481.128377] Lustre: lustre-OST0020: Imperative Recovery not enabled, recovery window 60-180
[2830481.139279] Lustre: Skipped 1 previous similar message
[2830481.155204] VFS: Open an exclusive opened block device for write dm-11. current [499071 tune2fs]. parent [499070 sh]
[2830482.207860] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192
[2830482.489824] Lustre: cli-lustre-OST0020-super: Allocated super-sequence [0x0000001000000400-0x0000001040000400]:20:ost]
[2830482.502785] Lustre: Skipped 1 previous similar message
[2830518.945820] device-mapper: ioctl: dmsetup[500198]: dm-12 (ost39_flakey) is created successfully
[2830520.624150] LDISKFS-fs (dm-12): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[2830520.699453] LDISKFS-fs (dm-12): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[2830520.712243] Lustre: lustre-OST0026-osd: enabled 'large_dir' feature on device /dev/mapper/ost39_flakey
[2830520.787543] VFS: Open an exclusive opened block device for write dm-12. current [500536 tune2fs]. parent [500535 sh]
[2830521.852389] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192
[2830556.030595] device-mapper: ioctl: dmsetup[501665]: dm-13 (ost45_flakey) is created successfully
[2830557.736075] LDISKFS-fs (dm-13): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[2830557.797010] LDISKFS-fs (dm-13): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[2830557.810268] Lustre: lustre-OST002c-osd: enabled 'large_dir' feature on device /dev/mapper/ost45_flakey
[2830557.879602] Lustre: lustre-OST002c: Imperative Recovery enabled, recovery window shrunk from 60-180 down to 60-180
[2830557.902328] VFS: Open an exclusive opened block device for write dm-13. current [502004 tune2fs].
parent [502003 sh] [2830558.989877] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192 [2830590.868252] device-mapper: ioctl: dmsetup[503102]: dm-14 (ost51_flakey) is created successfully [2830592.583086] LDISKFS-fs (dm-14): mounted filesystem with ordered data mode. Opts: errors=remount-ro [2830592.643794] LDISKFS-fs (dm-14): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [2830592.720226] Lustre: lustre-OST0032: Imperative Recovery enabled, recovery window shrunk from 60-180 down to 60-180 [2830592.741157] VFS: Open an exclusive opened block device for write dm-14. current [503441 tune2fs]. parent [503440 sh] [2830593.829873] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192 [2830631.433520] device-mapper: ioctl: dmsetup[504612]: dm-15 (ost57_flakey) is created successfully [2830633.163216] LDISKFS-fs (dm-15): mounted filesystem with ordered data mode. Opts: errors=remount-ro [2830633.227719] LDISKFS-fs (dm-15): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [2830633.240931] Lustre: lustre-OST0038-osd: enabled 'large_dir' feature on device /dev/mapper/ost57_flakey [2830633.252301] Lustre: Skipped 1 previous similar message [2830633.288980] Lustre: lustre-OST0038: new disk, initializing [2830633.296258] Lustre: Skipped 3 previous similar messages [2830633.304009] Lustre: srv-lustre-OST0038: No data found on store. Initialize space: rc = -61 [2830633.314571] Lustre: Skipped 3 previous similar messages [2830633.360393] Lustre: lustre-OST0038: Imperative Recovery enabled, recovery window shrunk from 60-180 down to 60-180 [2830633.381777] VFS: Open an exclusive opened block device for write dm-15. current [504983 tune2fs]. 
parent [504982 sh] [2830633.784963] Lustre: cli-lustre-OST0038-super: Allocated super-sequence [0x0000001600000400-0x0000001640000400]:38:ost] [2830633.797870] Lustre: Skipped 3 previous similar messages [2830634.479954] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192 [2830666.946694] device-mapper: ioctl: dmsetup[506159]: dm-16 (ost63_flakey) is created successfully [2830668.701023] LDISKFS-fs (dm-16): mounted filesystem with ordered data mode. Opts: errors=remount-ro [2830668.761772] LDISKFS-fs (dm-16): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [2830668.842357] Lustre: lustre-OST003e: Imperative Recovery enabled, recovery window shrunk from 60-180 down to 60-180 [2830668.864536] VFS: Open an exclusive opened block device for write dm-16. current [506497 tune2fs]. parent [506496 sh] [2830669.972436] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192 [2830705.418602] device-mapper: ioctl: dmsetup[507613]: dm-17 (ost69_flakey) is created successfully [2830707.201298] LDISKFS-fs (dm-17): mounted filesystem with ordered data mode. Opts: errors=remount-ro [2830707.261264] LDISKFS-fs (dm-17): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [2830707.340435] Lustre: lustre-OST0044: Imperative Recovery enabled, recovery window shrunk from 60-180 down to 60-180 [2830707.363394] VFS: Open an exclusive opened block device for write dm-17. current [507953 tune2fs]. parent [507952 sh] [2830708.483705] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192 [2830746.815010] device-mapper: ioctl: dmsetup[509165]: dm-18 (ost75_flakey) is created successfully [2830748.613139] LDISKFS-fs (dm-18): mounted filesystem with ordered data mode. 
Opts: errors=remount-ro [2830748.688080] LDISKFS-fs (dm-18): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [2830748.769541] Lustre: lustre-OST004a: Imperative Recovery enabled, recovery window shrunk from 60-180 down to 60-180 [2830748.791486] VFS: Open an exclusive opened block device for write dm-18. current [509506 tune2fs]. parent [509505 sh] [2830749.923018] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192 [2830780.323569] device-mapper: ioctl: dmsetup[510618]: dm-19 (ost81_flakey) is created successfully [2830782.144820] LDISKFS-fs (dm-19): mounted filesystem with ordered data mode. Opts: errors=remount-ro [2830782.205147] LDISKFS-fs (dm-19): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [2830782.218318] Lustre: lustre-OST0050-osd: enabled 'large_dir' feature on device /dev/mapper/ost81_flakey [2830782.229586] Lustre: Skipped 3 previous similar messages [2830782.301573] Lustre: lustre-OST0050: Imperative Recovery enabled, recovery window shrunk from 60-180 down to 60-180 [2830782.322298] VFS: Open an exclusive opened block device for write dm-19. current [510956 tune2fs]. parent [510955 sh] [2830783.466887] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192 [2830822.653474] device-mapper: ioctl: dmsetup[512182]: dm-20 (ost87_flakey) is created successfully [2830824.475457] LDISKFS-fs (dm-20): mounted filesystem with ordered data mode. Opts: errors=remount-ro [2830824.536699] LDISKFS-fs (dm-20): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [2830824.625118] Lustre: lustre-OST0056: Imperative Recovery enabled, recovery window shrunk from 60-180 down to 60-180 [2830824.646437] VFS: Open an exclusive opened block device for write dm-20. current [512522 tune2fs]. 
parent [512521 sh] [2830825.805885] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192 [2830862.180354] device-mapper: ioctl: dmsetup[513795]: dm-21 (ost93_flakey) is created successfully [2830864.044139] LDISKFS-fs (dm-21): mounted filesystem with ordered data mode. Opts: errors=remount-ro [2830864.101824] LDISKFS-fs (dm-21): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [2830864.196965] VFS: Open an exclusive opened block device for write dm-21. current [514135 tune2fs]. parent [514134 sh] [2830865.381015] Lustre: DEBUG MARKER: server3: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 192 [2830888.401021] Lustre: DEBUG MARKER: Using TIMEOUT=20 [2830904.668255] Lustre: DEBUG MARKER: server1: executing check_logdir /tmp/test_logs/1712462220 [2830905.424843] Lustre: DEBUG MARKER: client1: executing check_logdir /tmp/test_logs/1712462220 [2830907.362114] Lustre: DEBUG MARKER: server1: executing yml_node [2830908.168429] Lustre: DEBUG MARKER: client1: executing yml_node [2830911.226489] Lustre: DEBUG MARKER: Client: 2.15.4 [2830912.264841] Lustre: DEBUG MARKER: MDS: 2.15.4 [2830913.311712] Lustre: DEBUG MARKER: OSS: 2.15.4 [2836957.088805] Lustre: DEBUG MARKER: server1: executing check_logdir /tmp/test_logs/1712468272 [2836957.923757] Lustre: DEBUG MARKER: client1: executing check_logdir /tmp/test_logs/1712468272 [2836960.193358] Lustre: DEBUG MARKER: server1: executing yml_node [2836960.919668] Lustre: DEBUG MARKER: client1: executing yml_node [2836964.122282] Lustre: DEBUG MARKER: Client: 2.15.4 [2836965.207740] Lustre: DEBUG MARKER: MDS: 2.15.4 [2836966.289335] Lustre: DEBUG MARKER: OSS: 2.15.4 [2844446.684727] Lustre: lustre-OST0008: Client 8a3b1b08-af0c-43fe-a247-a462555b0372 (at 192.168.0.95@o2ib) reconnecting [2844446.696974] Lustre: Skipped 2 previous similar messages [2844455.423223] Lustre: 
lustre-OST0038: Client 8a3b1b08-af0c-43fe-a247-a462555b0372 (at 192.168.0.95@o2ib) reconnecting [2844455.435464] Lustre: Skipped 4 previous similar messages [2844459.981834] Lustre: lustre-OST0056: Client 8a3b1b08-af0c-43fe-a247-a462555b0372 (at 192.168.0.95@o2ib) reconnecting [2844460.006658] Lustre: 523368:0:(client.c:2289:ptlrpc_expire_one_request()) @@@ Request sent has timed out for sent delay: [sent 1712475761/real 0] req@00000000d59134bd x1795643954522816/t0(0) o104->lustre-OST0032@192.168.0.95@o2ib:15/16 lens 328/224 e 0 to 1 dl 1712475773 ref 2 fl Rpc:Xr/0/ffffffff rc 0/-1 job:'' [2844460.710661] Lustre: 523356:0:(client.c:2289:ptlrpc_expire_one_request()) @@@ Request sent has timed out for sent delay: [sent 1712475761/real 0] req@000000002bffa2bf x1795643954522496/t0(0) o104->lustre-OST0026@192.168.0.95@o2ib:15/16 lens 328/224 e 0 to 1 dl 1712475768 ref 2 fl Rpc:Xr/0/ffffffff rc 0/-1 job:'' [2844460.741553] Lustre: 523356:0:(client.c:2289:ptlrpc_expire_one_request()) Skipped 3622 previous similar messages [2844462.854620] Lustre: 523372:0:(client.c:2289:ptlrpc_expire_one_request()) @@@ Request sent has timed out for sent delay: [sent 1712475764/real 0] req@00000000132e834d x1795643961133568/t0(0) o104->lustre-OST001a@192.168.0.95@o2ib:15/16 lens 328/224 e 0 to 1 dl 1712475779 ref 2 fl Rpc:Xr/0/ffffffff rc 0/-1 job:'' [2844462.885762] Lustre: 523372:0:(client.c:2289:ptlrpc_expire_one_request()) Skipped 6 previous similar messages [2844464.934582] Lustre: 523372:0:(client.c:2289:ptlrpc_expire_one_request()) @@@ Request sent has timed out for sent delay: [sent 1712475766/real 0] req@000000005a26c774 x1795643962517440/t0(0) o104->lustre-OST001a@192.168.0.95@o2ib:15/16 lens 328/224 e 0 to 1 dl 1712475783 ref 2 fl Rpc:Xr/0/ffffffff rc 0/-1 job:'' [2844464.965951] Lustre: 523372:0:(client.c:2289:ptlrpc_expire_one_request()) Skipped 128 previous similar messages [2844474.259311] Lustre: lustre-OST0038: Client 8a3b1b08-af0c-43fe-a247-a462555b0372 (at 
192.168.0.95@o2ib) reconnecting [2844474.259313] Lustre: lustre-OST003e: Client 8a3b1b08-af0c-43fe-a247-a462555b0372 (at 192.168.0.95@o2ib) reconnecting [2844474.259317] Lustre: Skipped 11 previous similar messages [2844474.271866] Lustre: Skipped 11 previous similar messages [2844476.422256] Lustre: ll_ost03_001: service thread pid 491848 was inactive for 40.636 seconds. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes: [2844476.422259] Lustre: ll_ost03_004: service thread pid 492510 was inactive for 40.636 seconds. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes: [2844476.422262] Lustre: ll_ost03_012: service thread pid 523337 was inactive for 40.443 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one. [2844476.422264] Lustre: ll_ost03_010: service thread pid 523335 was inactive for 40.635 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one. [2844476.422266] Lustre: ll_ost03_000: service thread pid 491847 was inactive for 40.636 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one. [2844476.422268] Lustre: ll_ost03_005: service thread pid 492530 was inactive for 40.444 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one. [2844476.422270] Lustre: ll_ost03_003: service thread pid 492336 was inactive for 40.636 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one. [2844476.422272] Lustre: ll_ost03_011: service thread pid 523336 was inactive for 40.627 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one. [2844476.422275] Lustre: ll_ost03_002: service thread pid 491849 was inactive for 40.636 seconds. The thread might be hung, or it might only be slow and will resume later. 
Dumping the stack trace for debugging purposes: [2844476.422283] Pid: 492510, comm: ll_ost03_004 5.10.0-188.0.0.101.oe2203sp3.aarch64 #1 SMP Wed Feb 21 13:52:43 CST 2024 [2844476.422284] Lustre: Skipped 4 previous similar messages [2844476.422286] Lustre: Skipped 4 previous similar messages [2844476.422287] Lustre: Skipped 4 previous similar messages [2844476.422289] Lustre: Skipped 4 previous similar messages [2844476.422290] Lustre: Skipped 4 previous similar messages [2844476.422291] Lustre: Skipped 4 previous similar messages [2844476.643075] Call Trace TBD: [2844476.647222] Pid: 491849, comm: ll_ost03_002 5.10.0-188.0.0.101.oe2203sp3.aarch64 #1 SMP Wed Feb 21 13:52:43 CST 2024 [2844476.659473] Call Trace TBD: [2844476.663608] Pid: 491848, comm: ll_ost03_001 5.10.0-188.0.0.101.oe2203sp3.aarch64 #1 SMP Wed Feb 21 13:52:43 CST 2024 [2844476.675825] Call Trace TBD: [2844478.022362] Lustre: 523354:0:(client.c:2289:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1712475780/real 1712475780] req@00000000aefda67a x1795643953430592/t0(0) o104->lustre-OST0050@192.168.0.95@o2ib:15/16 lens 328/224 e 0 to 1 dl 1712475795 ref 2 fl Rpc:Xr/2/ffffffff rc -11/-1 job:'' [2844478.054094] Lustre: 523354:0:(client.c:2289:ptlrpc_expire_one_request()) Skipped 2393 previous similar messages [2844479.426008] Lustre: lustre-MDT0002: Client 8a3b1b08-af0c-43fe-a247-a462555b0372 (at 192.168.0.95@o2ib) reconnecting [2844479.438267] Lustre: Skipped 6 previous similar messages [2844486.182257] Lustre: 492562:0:(client.c:2289:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1712475783/real 1712475783] req@00000000faf601aa x1795643953898432/t0(0) o104->lustre-OST0008@192.168.0.95@o2ib:15/16 lens 328/224 e 0 to 1 dl 1712475798 ref 2 fl Rpc:Xr/2/ffffffff rc -11/-1 job:'' [2844486.214111] Lustre: 492562:0:(client.c:2289:ptlrpc_expire_one_request()) Skipped 270 previous similar messages [2844492.272509] Lustre: lustre-MDT0008: Client 
8a3b1b08-af0c-43fe-a247-a462555b0372 (at 192.168.0.95@o2ib) reconnecting [2844492.284880] Lustre: Skipped 7 previous similar messages [2844498.949912] Lustre: ll_ost03_017: service thread pid 523342 was inactive for 62.139 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one. [2844500.997870] Lustre: ll_ost03_018: service thread pid 523343 was inactive for 63.752 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one. [2844500.997873] Lustre: ll_ost03_013: service thread pid 523338 was inactive for 63.752 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one. [2844500.997875] Lustre: ll_ost03_020: service thread pid 523345 was inactive for 63.752 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one. [2844500.997878] Lustre: ll_ost03_023: service thread pid 523348 was inactive for 63.751 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one. [2844500.997886] Lustre: Skipped 2 previous similar messages [2844500.997887] Lustre: Skipped 2 previous similar messages [2844500.997889] Lustre: Skipped 2 previous similar messages [2844502.341999] Lustre: 523359:0:(client.c:2289:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1712475792/real 1712475792] req@00000000b4f1aec0 x1795643957450112/t0(0) o104->lustre-OST0038@192.168.0.95@o2ib:15/16 lens 328/224 e 0 to 1 dl 1712475814 ref 2 fl Rpc:Xr/2/ffffffff rc -11/-1 job:'' [2844502.374139] Lustre: 523359:0:(client.c:2289:ptlrpc_expire_one_request()) Skipped 164 previous similar messages [2844505.093804] Lustre: ll_ost03_047: service thread pid 523372 was inactive for 62.726 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one. [2844505.093807] Lustre: ll_ost03_032: service thread pid 523357 was inactive for 62.726 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one. 
[2844505.093809] Lustre: ll_ost03_043: service thread pid 523368 was inactive for 62.727 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one. [2844505.093816] Lustre: Skipped 15 previous similar messages [2844505.093818] Lustre: Skipped 15 previous similar messages [2844505.111482] Lustre: Skipped 1 previous similar message [2844509.565451] Lustre: lustre-OST003e: Client 8a3b1b08-af0c-43fe-a247-a462555b0372 (at 192.168.0.95@o2ib) reconnecting [2844509.565454] Lustre: lustre-OST0038: Client 8a3b1b08-af0c-43fe-a247-a462555b0372 (at 192.168.0.95@o2ib) reconnecting [2844509.565458] Lustre: Skipped 1 previous similar message [2844509.577775] Lustre: Skipped 2 previous similar messages [2844535.813339] LustreError: 486167:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 100s: evicting client at 192.168.0.95@o2ib ns: filter-lustre-OST0056_UUID lock: 0000000075303eec/0xbc03d50a6fcfebee lrc: 3/0,0 mode: PR/PR res: [0x1d80000415:0x2:0x0].0x0 rrc: 13511 type: EXT [2657718272->2658267135] (req 2657718272->2657767423) gid 0 flags: 0x60000400030020 nid: 192.168.0.95@o2ib remote: 0x73bc24bd401e39e8 expref: 17346 pid: 518887 timeout: 2844582 lvb_type: 0 [2844535.882290] LustreError: 523355:0:(client.c:1255:ptlrpc_import_delay_req()) @@@ IMP_CLOSED req@000000006a4ce074 x1795643975499200/t0(0) o104->lustre-OST0050@192.168.0.95@o2ib:15/16 lens 328/224 e 0 to 0 dl 0 ref 1 fl Rpc:QU/0/ffffffff rc 0/-1 job:'' [2844536.387722] LustreError: 523374:0:(client.c:1255:ptlrpc_import_delay_req()) @@@ IMP_CLOSED req@000000002e5316b1 x1795643976856896/t0(0) o104->lustre-OST0056@192.168.0.95@o2ib:15/16 lens 328/224 e 0 to 0 dl 0 ref 1 fl Rpc:QU/0/ffffffff rc 0/-1 job:'' [2844536.412987] LustreError: 523374:0:(client.c:1255:ptlrpc_import_delay_req()) Skipped 1421 previous similar messages [2844536.905420] LustreError: 486167:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 100s: evicting client at 
192.168.0.95@o2ib ns: filter-lustre-OST005c_UUID lock: 00000000d3de3b2f/0xbc03d50a6fcace85 lrc: 3/0,0 mode: PR/PR res: [0x1f0000040f:0x4:0x0].0x0 rrc: 12391 type: EXT [2509901824->2514903039] (req 2509901824->2509955071) gid 0 flags: 0x60000400010020 nid: 192.168.0.95@o2ib remote: 0x73bc24bd4016624d expref: 15760 pid: 523195 timeout: 2844583 lvb_type: 0 [2844536.952881] LustreError: 486167:0:(ldlm_lockd.c:257:expired_lock_main()) Skipped 258 previous similar messages [2844537.386797] LustreError: 523367:0:(client.c:1255:ptlrpc_import_delay_req()) @@@ IMP_CLOSED req@000000000ec6b897 x1795643978505600/t0(0) o104->lustre-OST0026@192.168.0.95@o2ib:15/16 lens 328/224 e 0 to 0 dl 0 ref 1 fl Rpc:QU/0/ffffffff rc 0/-1 job:'' [2844537.412404] LustreError: 523367:0:(client.c:1255:ptlrpc_import_delay_req()) Skipped 102 previous similar messages [2844540.037422] LustreError: 486167:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 100s: evicting client at 192.168.0.95@o2ib ns: filter-lustre-OST0020_UUID lock: 00000000e593de96/0xbc03d50a6fd2d162 lrc: 3/0,0 mode: PR/PR res: [0x1000000403:0x5:0x0].0x0 rrc: 14581 type: EXT [2737659904->2737975295] (req 2737659904->2737713151) gid 0 flags: 0x60000400010020 nid: 192.168.0.95@o2ib remote: 0x73bc24bd4022a6a6 expref: 16956 pid: 519004 timeout: 2844587 lvb_type: 0 [2844540.085245] LustreError: 486167:0:(ldlm_lockd.c:257:expired_lock_main()) Skipped 1742 previous similar messages [2844546.153243] LustreError: 486167:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 100s: evicting client at 192.168.0.104@o2ib ns: filter-lustre-OST003e_UUID lock: 00000000bcd397f1/0xbc03d50a6fb771c8 lrc: 3/0,0 mode: PR/PR res: [0x1780000415:0x2:0x0].0x0 rrc: 9977 type: EXT [2006421504->2007113727] (req 2006421504->2006474751) gid 0 flags: 0x60000400010020 nid: 192.168.0.104@o2ib remote: 0x5f450e92197ff89a expref: 10527 pid: 519027 timeout: 2844593 lvb_type: 0 [2844546.201183] LustreError: 
486167:0:(ldlm_lockd.c:257:expired_lock_main()) Skipped 1055 previous similar messages [2844546.213296] LustreError: 523380:0:(client.c:1255:ptlrpc_import_delay_req()) @@@ IMP_CLOSED req@000000003b7809db x1795643988588224/t0(0) o104->lustre-OST003e@192.168.0.104@o2ib:15/16 lens 328/224 e 0 to 0 dl 0 ref 1 fl Rpc:QU/0/ffffffff rc 0/-1 job:'' [2844546.220159] LustreError: 519919:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) ldlm_cancel from 192.168.0.104@o2ib arrived at 1712475864 with bad export cookie 13547906344721477643 [2844546.240141] LustreError: 523380:0:(client.c:1255:ptlrpc_import_delay_req()) Skipped 103 previous similar messages [2844546.259126] LustreError: 519919:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) Skipped 6 previous similar messages [2844564.626820] Lustre: lustre-MDT0002: haven't heard from client 8a3b1b08-af0c-43fe-a247-a462555b0372 (at 192.168.0.95@o2ib) in 47 seconds. I think it's dead, and I am evicting it. exp 00000000d1f81000, cur 1712475883 expire 1712475853 last 1712475836 [2844565.163303] LustreError: 523226:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) ldlm_cancel from 192.168.0.95@o2ib arrived at 1712475883 with bad export cookie 13547906344855370157 [2844565.181873] LustreError: 523226:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) Skipped 708 previous similar messages [2844566.161341] LustreError: 486155:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) ldlm_cancel from 192.168.0.95@o2ib arrived at 1712475884 with bad export cookie 13547906344855370395 [2844566.179728] LustreError: 486155:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) Skipped 10202 previous similar messages [2844568.170080] LustreError: 522570:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) ldlm_cancel from 192.168.0.95@o2ib arrived at 1712475886 with bad export cookie 13547906344855370395 [2844568.188494] LustreError: 522570:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) Skipped 294 previous similar messages [2844572.204800] LustreError: 
523243:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) ldlm_cancel from 192.168.0.95@o2ib arrived at 1712475890 with bad export cookie 13547906344855370059 [2844572.223220] LustreError: 523243:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) Skipped 5052 previous similar messages [2844572.596985] Lustre: lustre-MDT000e: haven't heard from client 8a3b1b08-af0c-43fe-a247-a462555b0372 (at 192.168.0.95@o2ib) in 47 seconds. I think it's dead, and I am evicting it. exp 00000000ac0c28d1, cur 1712475891 expire 1712475861 last 1712475844 [2844580.209512] LustreError: 523231:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) ldlm_cancel from 192.168.0.95@o2ib arrived at 1712475898 with bad export cookie 13547906344855369149 [2844580.227924] LustreError: 523231:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) Skipped 17250 previous similar messages [2844583.544319] Lustre: lustre-MDT0008: haven't heard from client 8a3b1b08-af0c-43fe-a247-a462555b0372 (at 192.168.0.95@o2ib) in 48 seconds. I think it's dead, and I am evicting it. exp 00000000f62f1032, cur 1712475902 expire 1712475872 last 1712475854 [2844584.358161] Lustre: lustre-MDT0014: Client 3c55ee59-e078-479a-987d-704255682268 (at 192.168.0.100@o2ib) reconnecting [2844584.370978] Lustre: Skipped 19 previous similar messages [2844585.644573] Lustre: DEBUG MARKER: server1: executing check_logdir /tmp/test_logs/1712475900 [2844586.764108] Lustre: DEBUG MARKER: client1: executing check_logdir /tmp/test_logs/1712475900 [2844592.376654] Lustre: lustre-OST0044: haven't heard from client 5003d4cc-0b97-41f5-9e4b-7758af7fd7cd (at 192.168.0.104@o2ib) in 47 seconds. I think it's dead, and I am evicting it. exp 00000000d88d73f7, cur 1712475911 expire 1712475881 last 1712475864 [2844592.402091] Lustre: Skipped 1 previous similar message [2844625.412567] Lustre: lustre-OST0008: haven't heard from client 8a3b1b08-af0c-43fe-a247-a462555b0372 (at 192.168.0.95@o2ib) in 47 seconds. I think it's dead, and I am evicting it. 
exp 00000000aea6cea6, cur 1712475944 expire 1712475914 last 1712475897 [2844625.437752] Lustre: Skipped 15 previous similar messages [2844636.554677] Lustre: lustre-MDT0002: haven't heard from client 8a3b1b08-af0c-43fe-a247-a462555b0372 (at 192.168.0.95@o2ib) in 47 seconds. I think it's dead, and I am evicting it. exp 00000000f774701a, cur 1712475955 expire 1712475925 last 1712475908 [2844636.579837] Lustre: Skipped 2 previous similar messages [2844636.648393] LustreError: 523368:0:(client.c:1255:ptlrpc_import_delay_req()) @@@ IMP_CLOSED req@00000000a3b8f7b5 x1795644029968832/t0(0) o104->lustre-OST0032@192.168.0.100@o2ib:15/16 lens 328/224 e 0 to 0 dl 0 ref 1 fl Rpc:QU/0/ffffffff rc 0/-1 job:'' [2844636.664276] LustreError: 486157:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) ldlm_cancel from 192.168.0.100@o2ib arrived at 1712475955 with bad export cookie 13547906344721476747 [2844636.674069] LustreError: 523368:0:(client.c:1255:ptlrpc_import_delay_req()) Skipped 619 previous similar messages [2844636.692583] LustreError: 486157:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) Skipped 10941 previous similar messages [2844670.122079] Lustre: DEBUG MARKER: server1: executing yml_node [2844670.937032] Lustre: DEBUG MARKER: client1: executing yml_node [2844701.486929] Lustre: DEBUG MARKER: Client: 2.15.4 [2844702.709120] Lustre: DEBUG MARKER: MDS: 2.15.4 [2844703.817650] Lustre: DEBUG MARKER: OSS: 2.15.4 [2849648.348638] Lustre: lustre-OST002c: Client 3c55ee59-e078-479a-987d-704255682268 (at 192.168.0.100@o2ib) reconnecting [2849648.361276] Lustre: Skipped 40 previous similar messages [2849648.918625] Lustre: 519011:0:(client.c:2289:ptlrpc_expire_one_request()) @@@ Request sent has timed out for sent delay: [sent 1712480960/real 0] req@0000000075023037 x1795644553529408/t0(0) o104->lustre-OST000e@192.168.0.95@o2ib:15/16 lens 328/224 e 0 to 1 dl 1712480967 ref 2 fl Rpc:Xr/0/ffffffff rc 0/-1 job:'' [2849648.949935] Lustre: 519011:0:(client.c:2289:ptlrpc_expire_one_request()) 
Skipped 7981 previous similar messages [2849656.640415] Lustre: lustre-OST0020: Client 8a3b1b08-af0c-43fe-a247-a462555b0372 (at 192.168.0.95@o2ib) reconnecting [2849656.652855] Lustre: Skipped 25 previous similar messages [2849663.731337] Lustre: 518893:0:(client.c:2289:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1712480968/real 1712480968] req@00000000648666c1 x1795644553685376/t0(0) o104->lustre-OST005c@192.168.0.100@o2ib:15/16 lens 328/224 e 0 to 1 dl 1712480980 ref 2 fl Rpc:Xr/2/ffffffff rc -11/-1 job:'' [2849663.763575] Lustre: 518893:0:(client.c:2289:ptlrpc_expire_one_request()) Skipped 3145 previous similar messages [2849674.258450] Lustre: lustre-MDT0014: Client 495f5759-68b9-423a-9105-fb855a79e8b3 (at 192.168.0.101@o2ib) reconnecting [2849674.271322] Lustre: Skipped 44 previous similar messages [2849678.419107] Lustre: 518963:0:(client.c:2289:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1712480980/real 1712480980] req@00000000d058c1a0 x1795644555001152/t0(0) o104->lustre-OST003e@192.168.0.95@o2ib:15/16 lens 328/224 e 0 to 1 dl 1712480996 ref 2 fl Rpc:Xr/2/ffffffff rc -11/-1 job:'' [2849678.419110] Lustre: 518905:0:(client.c:2289:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1712480980/real 1712480980] req@000000009d069c62 x1795644554995776/t0(0) o104->lustre-OST003e@192.168.0.95@o2ib:15/16 lens 328/224 e 0 to 1 dl 1712480996 ref 2 fl Rpc:Xr/2/ffffffff rc -11/-1 job:'' [2849678.419118] Lustre: 518905:0:(client.c:2289:ptlrpc_expire_one_request()) Skipped 211 previous similar messages [2849678.451173] Lustre: 518963:0:(client.c:2289:ptlrpc_expire_one_request()) Skipped 285 previous similar messages [2849682.354884] ptlrpc_watchdog_fire: 23 callbacks suppressed [2849682.354887] Lustre: ll_ost02_008: service thread pid 518861 was inactive for 40.704 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one. 
[2849682.354890] Lustre: ll_ost02_006: service thread pid 496833 was inactive for 40.705 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one.
[2849682.354892] Lustre: ll_ost02_027: service thread pid 518963 was inactive for 40.446 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one.
[2849682.354895] Lustre: ll_ost02_005: service thread pid 492317 was inactive for 40.704 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one.
[2849682.354902] Lustre: ll_ost02_030: service thread pid 518972 was inactive for 40.446 seconds. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes:
[2849682.354905] Pid: 518893, comm: ll_ost02_011 5.10.0-188.0.0.101.oe2203sp3.aarch64 #1 SMP Wed Feb 21 13:52:43 CST 2024
[2849682.354906] Lustre: Skipped 12 previous similar messages
[2849682.354908] Lustre: Skipped 12 previous similar messages
[2849682.354909] Lustre: Skipped 14 previous similar messages
[2849682.354911] Call Trace TBD:
[2849682.354914] Pid: 518969, comm: ll_ost02_029 5.10.0-188.0.0.101.oe2203sp3.aarch64 #1 SMP Wed Feb 21 13:52:43 CST 2024
[2849682.354916] Call Trace TBD:
[2849682.362269] Lustre: Skipped 6 previous similar messages
[2849682.379643] Lustre: Skipped 2 previous similar messages
[2849682.379653] Pid: 518972, comm: ll_ost02_030 5.10.0-188.0.0.101.oe2203sp3.aarch64 #1 SMP Wed Feb 21 13:52:43 CST 2024
[2849682.532465] Call Trace TBD:
[2849710.284332] Lustre: lustre-OST005c: Client 8a3b1b08-af0c-43fe-a247-a462555b0372 (at 192.168.0.95@o2ib) reconnecting
[2849710.296612] Lustre: Skipped 33 previous similar messages
[2849710.610561] Lustre: 527417:0:(client.c:2289:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1712481013/real 1712481013] req@00000000e0be08bb x1795644560808896/t0(0) o104->lustre-OST0050@192.168.0.95@o2ib:15/16 lens 328/224 e 0 to 1 dl 1712481029 ref 2 fl Rpc:Xr/2/ffffffff rc -11/-1 job:''
[2849710.642292] Lustre: 527417:0:(client.c:2289:ptlrpc_expire_one_request()) Skipped 6347 previous similar messages
[2849711.026422] Lustre: ll_ost02_026: service thread pid 518960 was inactive for 63.784 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one.
[2849711.026425] Lustre: ll_ost02_015: service thread pid 518912 was inactive for 63.784 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one.
[2849711.026427] Lustre: ll_ost02_038: service thread pid 519002 was inactive for 63.784 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one.
[2849711.026429] Lustre: ll_ost02_012: service thread pid 518897 was inactive for 63.784 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one.
[2849711.026431] Lustre: ll_ost02_035: service thread pid 518993 was inactive for 63.784 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one.
[2849711.026434] Lustre: ll_ost02_009: service thread pid 518884 was inactive for 63.784 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one.
[2849711.026441] Lustre: Skipped 7 previous similar messages
[2849711.026442] Lustre: Skipped 7 previous similar messages
[2849711.026443] Lustre: Skipped 7 previous similar messages
[2849711.026445] Lustre: Skipped 7 previous similar messages
[2849711.026447] Lustre: Skipped 7 previous similar messages
[2849711.043916] Lustre: Skipped 4 previous similar messages
[2849715.122369] Lustre: ll_ost02_056: service thread pid 527409 was inactive for 63.528 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one.
[2849715.122372] Lustre: ll_ost02_064: service thread pid 527417 was inactive for 63.527 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one.
[2849715.122380] Lustre: Skipped 15 previous similar messages
[2849715.139857] Lustre: Skipped 15 previous similar messages
[2849741.746124] LustreError: 486167:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 100s: evicting client at 192.168.0.95@o2ib ns: filter-lustre-OST0044_UUID lock: 000000008ce4c84a/0xbc03d50a90e3543e lrc: 3/0,0 mode: PR/PR res: [0x190000040b:0x2:0x0].0x0 rrc: 20735 type: EXT [3767046144->3782565887] (req 3767046144->3767099391) gid 0 flags: 0x60000400010020 nid: 192.168.0.95@o2ib remote: 0x73bc24bd4b333592 expref: 28256 pid: 518956 timeout: 2849788 lvb_type: 0
[2849741.793154] LustreError: 486167:0:(ldlm_lockd.c:257:expired_lock_main()) Skipped 523 previous similar messages
[2849741.838000] LustreError: 518945:0:(client.c:1255:ptlrpc_import_delay_req()) @@@ IMP_CLOSED req@00000000513d921d x1795644577691264/t0(0) o104->lustre-OST001a@192.168.0.95@o2ib:15/16 lens 328/224 e 0 to 0 dl 0 ref 1 fl Rpc:QU/0/ffffffff rc 0/-1 job:''
[2849741.862939] LustreError: 518945:0:(client.c:1255:ptlrpc_import_delay_req()) Skipped 2 previous similar messages
[2849746.889889] LustreError: 486167:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 100s: evicting client at 192.168.0.101@o2ib ns: filter-lustre-OST000e_UUID lock: 000000008637d4d2/0xbc03d50a90d95f03 lrc: 3/0,0 mode: PR/PR res: [0xb80000411:0x2:0x0].0x0 rrc: 21304 type: EXT [3551969280->3552477183] (req 3551969280->3552018431) gid 0 flags: 0x60000400010020 nid: 192.168.0.101@o2ib remote: 0x9e497775a5531d65 expref: 14716 pid: 519007 timeout: 2849794 lvb_type: 0
[2849746.937538] LustreError: 486167:0:(ldlm_lockd.c:257:expired_lock_main()) Skipped 1777 previous similar messages
[2849746.950430] LustreError: 519000:0:(client.c:1255:ptlrpc_import_delay_req()) @@@ IMP_CLOSED req@00000000c87f409b x1795644580004800/t0(0) o104->lustre-OST002c@192.168.0.101@o2ib:15/16 lens 328/224 e 0 to 0 dl 0 ref 1 fl Rpc:QU/0/ffffffff rc 0/-1 job:''
[2849746.976185] LustreError: 519000:0:(client.c:1255:ptlrpc_import_delay_req()) Skipped 332 previous similar messages
[2849746.994729] LustreError: 486159:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) ldlm_cancel from 192.168.0.101@o2ib arrived at 1712481065 with bad export cookie 13547906345251778211
[2849747.013184] LustreError: 486159:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) Skipped 10598 previous similar messages
[2849749.141802] LustreError: 486167:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 100s: evicting client at 192.168.0.104@o2ib ns: filter-lustre-OST002c_UUID lock: 00000000a0a9a16a/0xbc03d50a90de1158 lrc: 3/0,0 mode: PR/PR res: [0x1300000408:0x3:0x0].0x0 rrc: 20697 type: EXT [3564363776->3566591999] (req 3564363776->3564417023) gid 0 flags: 0x60000400010020 nid: 192.168.0.104@o2ib remote: 0x5f450e922376355f expref: 15117 pid: 523146 timeout: 2849796 lvb_type: 0
[2849749.190102] LustreError: 486167:0:(ldlm_lockd.c:257:expired_lock_main()) Skipped 491 previous similar messages
[2849749.202740] LustreError: 496833:0:(client.c:1255:ptlrpc_import_delay_req()) @@@ IMP_CLOSED req@00000000da94b172 x1795644581269248/t0(0) o104->lustre-OST002c@192.168.0.104@o2ib:15/16 lens 328/224 e 0 to 0 dl 0 ref 1 fl Rpc:QU/0/ffffffff rc 0/-1 job:''
[2849749.228476] LustreError: 496833:0:(client.c:1255:ptlrpc_import_delay_req()) Skipped 194 previous similar messages
[2849751.025929] LustreError: 527343:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) ldlm_cancel from 192.168.0.104@o2ib arrived at 1712481069 with bad export cookie 13547906345251778302
[2849751.044604] LustreError: 527343:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) Skipped 196 previous similar messages
[2849762.481627] LustreError: 486167:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 100s: evicting client at 192.168.0.101@o2ib ns: filter-lustre-OST004a_UUID lock: 00000000bd5c5f45/0xbc03d50a90e59930 lrc: 3/0,0 mode: PR/PR res: [0x1a8000040d:0x4:0x0].0x0 rrc: 20559 type: EXT [3939622912->3941441535] (req 3939622912->3939676159) gid 0 flags: 0x60000400010020 nid: 192.168.0.101@o2ib remote: 0x9e497775a55ffb7f expref: 15299 pid: 496833 timeout: 2849809 lvb_type: 0
[2849762.529879] LustreError: 486167:0:(ldlm_lockd.c:257:expired_lock_main()) Skipped 352 previous similar messages
[2849762.542458] LustreError: 518893:0:(client.c:1255:ptlrpc_import_delay_req()) @@@ IMP_CLOSED req@00000000a866c552 x1795644592015296/t0(0) o104->lustre-OST005c@192.168.0.101@o2ib:15/16 lens 328/224 e 0 to 0 dl 0 ref 1 fl Rpc:QU/0/ffffffff rc 0/-1 job:''
[2849762.567935] LustreError: 518893:0:(client.c:1255:ptlrpc_import_delay_req()) Skipped 314 previous similar messages
[2849762.647515] LustreError: 523221:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) ldlm_cancel from 192.168.0.101@o2ib arrived at 1712481081 with bad export cookie 13547906345251778029
[2849762.666273] LustreError: 523221:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) Skipped 11 previous similar messages
[2849767.761561] LustreError: 486167:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 100s: evicting client at 192.168.0.101@o2ib ns: filter-lustre-OST001a_UUID lock: 00000000d02b8822/0xbc03d50a90e10c20 lrc: 3/0,0 mode: PR/PR res: [0xe80000411:0x3:0x0].0x0 rrc: 19249 type: EXT [3671470080->3702190079] (req 3671470080->3671519231) gid 0 flags: 0x60000400010020 nid: 192.168.0.101@o2ib remote: 0x9e497775a55ad349 expref: 14348 pid: 519007 timeout: 2849814 lvb_type: 0
[2849767.809499] LustreError: 486167:0:(ldlm_lockd.c:257:expired_lock_main()) Skipped 213 previous similar messages
[2849769.358093] Lustre: lustre-MDT0008: haven't heard from client 8a3b1b08-af0c-43fe-a247-a462555b0372 (at 192.168.0.95@o2ib) in 47 seconds. I think it's dead, and I am evicting it. exp 000000000e1785cb, cur 1712481088 expire 1712481058 last 1712481041
[2849769.383155] Lustre: Skipped 16 previous similar messages
[2849774.170901] LustreError: 527412:0:(client.c:1255:ptlrpc_import_delay_req()) @@@ IMP_CLOSED req@00000000da7af67f x1795644620038976/t0(0) o104->lustre-OST004a@192.168.0.100@o2ib:15/16 lens 328/224 e 0 to 0 dl 0 ref 1 fl Rpc:QU/0/ffffffff rc 0/-1 job:''
[2849774.196441] LustreError: 527412:0:(client.c:1255:ptlrpc_import_delay_req()) Skipped 15881 previous similar messages
[2849774.478042] Lustre: lustre-MDT0002: haven't heard from client 8a3b1b08-af0c-43fe-a247-a462555b0372 (at 192.168.0.95@o2ib) in 47 seconds. I think it's dead, and I am evicting it. exp 0000000015bd201a, cur 1712481093 expire 1712481063 last 1712481046
[2849774.708352] Lustre: lustre-MDT0014: Client 8a3b1b08-af0c-43fe-a247-a462555b0372 (at 192.168.0.95@o2ib) reconnecting
[2849774.720814] Lustre: Skipped 19 previous similar messages
[2849776.085447] LustreError: 486167:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 100s: evicting client at 192.168.0.100@o2ib ns: filter-lustre-OST0026_UUID lock: 0000000071afd5ca/0xbc03d50a90d97bb2 lrc: 3/0,0 mode: PR/PR res: [0x1180000411:0x3:0x0].0x0 rrc: 18279 type: EXT [3554762752->3555831807] (req 3554762752->3554815999) gid 0 flags: 0x60000400010020 nid: 192.168.0.100@o2ib remote: 0x746077e7bd784f82 expref: 14489 pid: 523167 timeout: 2849823 lvb_type: 0
[2849776.133358] LustreError: 486167:0:(ldlm_lockd.c:257:expired_lock_main()) Skipped 615 previous similar messages
[2849778.660697] LustreError: 486156:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) ldlm_cancel from 192.168.0.100@o2ib arrived at 1712481097 with bad export cookie 13547906345251777994
[2849778.679315] LustreError: 486156:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) Skipped 1131 previous similar messages
[2849793.309120] Lustre: lustre-OST005c: haven't heard from client 8a3b1b08-af0c-43fe-a247-a462555b0372 (at 192.168.0.95@o2ib) in 48 seconds. I think it's dead, and I am evicting it. exp 00000000e1198053, cur 1712481112 expire 1712481082 last 1712481064
[2849809.303493] Lustre: lustre-OST005c: haven't heard from client 495f5759-68b9-423a-9105-fb855a79e8b3 (at 192.168.0.101@o2ib) in 44 seconds. I think it's dead, and I am evicting it. exp 00000000da4f4279, cur 1712481128 expire 1712481098 last 1712481084
[2849809.329017] Lustre: Skipped 37 previous similar messages
[2849813.168747] LustreError: 486167:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 100s: evicting client at 192.168.0.100@o2ib ns: filter-lustre-OST005c_UUID lock: 00000000e1cd8418/0xbc03d50a90cfc44a lrc: 3/0,0 mode: PR/PR res: [0x1f0000040a:0x4:0x0].0x0 rrc: 3470 type: EXT [3327660032->3328491519] (req 3327660032->3327713279) gid 0 flags: 0x60000400010020 nid: 192.168.0.100@o2ib remote: 0x746077e7bd6df82b expref: 13756 pid: 518874 timeout: 2849860 lvb_type: 0
[2849813.216543] LustreError: 486167:0:(ldlm_lockd.c:257:expired_lock_main()) Skipped 377 previous similar messages
[2849832.370156] Lustre: lustre-OST0032: haven't heard from client 3c55ee59-e078-479a-987d-704255682268 (at 192.168.0.100@o2ib) in 31 seconds. I think it's dead, and I am evicting it. exp 000000001d9328d6, cur 1712481151 expire 1712481121 last 1712481120
[2849832.395511] Lustre: Skipped 5 previous similar messages
[2849896.680743] Lustre: DEBUG MARKER: server1: executing check_logdir /tmp/test_logs/1712481212
[2849897.552815] Lustre: DEBUG MARKER: client1: executing check_logdir /tmp/test_logs/1712481212
[2850033.423256] Lustre: lustre-MDT0014: Client 8a3b1b08-af0c-43fe-a247-a462555b0372 (at 192.168.0.95@o2ib) reconnecting
[2850033.435702] Lustre: Skipped 11 previous similar messages
[2850173.857306] Lustre: DEBUG MARKER: server1: executing yml_node
[2850174.741729] Lustre: DEBUG MARKER: client1: executing yml_node
[2850177.741023] Lustre: DEBUG MARKER: Client: 2.15.4
[2850178.874927] Lustre: DEBUG MARKER: MDS: 2.15.4
[2850180.096967] Lustre: DEBUG MARKER: OSS: 2.15.4
[2850216.362549] Lustre: lustre-MDT0002: haven't heard from client 8a3b1b08-af0c-43fe-a247-a462555b0372 (at 192.168.0.95@o2ib) in 47 seconds. I think it's dead, and I am evicting it. exp 00000000c49ca600, cur 1712481535 expire 1712481505 last 1712481488
[2850216.387387] Lustre: Skipped 20 previous similar messages
[2850360.401338] Lustre: DEBUG MARKER: server1: executing check_logdir /tmp/test_logs/1712481675
[2850361.271879] Lustre: DEBUG MARKER: client1: executing check_logdir /tmp/test_logs/1712481675
[2850363.281149] Lustre: DEBUG MARKER: server1: executing yml_node
[2850364.120250] Lustre: DEBUG MARKER: client1: executing yml_node
[2850367.144084] Lustre: DEBUG MARKER: Client: 2.15.4
[2850368.289689] Lustre: DEBUG MARKER: MDS: 2.15.4
[2850369.428385] Lustre: DEBUG MARKER: OSS: 2.15.4
[2857845.441618] Lustre: lustre-OST004a: Client 495f5759-68b9-423a-9105-fb855a79e8b3 (at 192.168.0.101@o2ib) reconnecting
[2857845.454196] Lustre: Skipped 2 previous similar messages
[2857845.528772] Lustre: 491847:0:(client.c:2289:ptlrpc_expire_one_request()) @@@ Request sent has timed out for sent delay: [sent 1712489157/real 0] req@00000000feac9e0a x1795646608022272/t0(0) o104->lustre-OST004a@192.168.0.100@o2ib:15/16 lens 328/224 e 0 to 1 dl 1712489164 ref 2 fl Rpc:Xr/0/ffffffff rc 0/-1 job:''
[2857845.528775] Lustre: 523358:0:(client.c:2289:ptlrpc_expire_one_request()) @@@ Request sent has timed out for sent delay: [sent 1712489157/real 0] req@000000003fb9f0f3 x1795646608014848/t0(0) o104->lustre-OST004a@192.168.0.100@o2ib:15/16 lens 328/224 e 0 to 1 dl 1712489164 ref 2 fl Rpc:Xr/0/ffffffff rc 0/-1 job:''
[2857845.528782] Lustre: 523358:0:(client.c:2289:ptlrpc_expire_one_request()) Skipped 10251 previous similar messages
[2857845.559783] Lustre: 491847:0:(client.c:2289:ptlrpc_expire_one_request()) Skipped 19668 previous similar messages
[2857854.796651] Lustre: 523388:0:(client.c:2289:ptlrpc_expire_one_request()) @@@ Request sent has timed out for sent delay: [sent 1712489163/real 0] req@00000000bca33da3 x1795646611093568/t0(0) o104->lustre-OST003e@192.168.0.91@o2ib:15/16 lens 328/224 e 0 to 1 dl 1712489171 ref 2 fl Rpc:Xr/0/ffffffff rc 0/-1 job:''
[2857854.827764] Lustre: 523388:0:(client.c:2289:ptlrpc_expire_one_request()) Skipped 4003 previous similar messages
[2857872.140366] Lustre: 523345:0:(client.c:2289:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1712489183/real 1712489183] req@0000000060c6a9e3 x1795646608341312/t0(0) o104->lustre-OST0008@192.168.0.100@o2ib:15/16 lens 328/224 e 0 to 1 dl 1712489190 ref 2 fl Rpc:Xr/2/ffffffff rc -11/-1 job:''
[2857872.172755] Lustre: 523345:0:(client.c:2289:ptlrpc_expire_one_request()) Skipped 300 previous similar messages
[2857880.364070] ptlrpc_watchdog_fire: 43 callbacks suppressed
[2857880.364078] Lustre: ll_ost01_020: service thread pid 518891 was inactive for 40.469 seconds. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes:
[2857880.364081] Lustre: ll_ost01_116: service thread pid 523171 was inactive for 40.469 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one.
[2857880.364084] Lustre: ll_ost01_120: service thread pid 523177 was inactive for 40.469 seconds. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes:
[2857880.364086] Lustre: ll_ost01_094: service thread pid 523138 was inactive for 40.469 seconds. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes:
[2857880.364091] Lustre: Skipped 3 previous similar messages
[2857880.364093] Pid: 523177, comm: ll_ost01_120 5.10.0-188.0.0.101.oe2203sp3.aarch64 #1 SMP Wed Feb 21 13:52:43 CST 2024
[2857880.364094] Call Trace TBD:
[2857880.364096] Pid: 523138, comm: ll_ost01_094 5.10.0-188.0.0.101.oe2203sp3.aarch64 #1 SMP Wed Feb 21 13:52:43 CST 2024
[2857880.364097] Call Trace TBD:
[2857880.371141] Pid: 518891, comm: ll_ost01_020 5.10.0-188.0.0.101.oe2203sp3.aarch64 #1 SMP Wed Feb 21 13:52:43 CST 2024
[2857880.460078] LNetError: 469698:0:(o2iblnd_cb.c:3370:kiblnd_check_txs_locked()) Timed out tx: tx_queue(WSQ:001), 18 seconds
[2857880.470931] Lustre: lustre-MDT0002: Client 3c55ee59-e078-479a-987d-704255682268 (at 192.168.0.100@o2ib) reconnecting
[2857880.470934] Lustre: Skipped 233 previous similar messages
[2857880.472536] Call Trace TBD:
[2857880.477155] LNetError: 469698:0:(o2iblnd_cb.c:3439:kiblnd_check_conns()) Timed out RDMA with 192.168.0.91@o2ib (9): c: 0, oc: 1, rc: 8
[2857884.460007] LNet: 469698:0:(o2iblnd_cb.c:3415:kiblnd_check_conns()) Timed out tx for 192.168.0.91@o2ib: 6 seconds
[2857888.459949] LNet: 469698:0:(o2iblnd_cb.c:3415:kiblnd_check_conns()) Timed out tx for 192.168.0.91@o2ib: 10 seconds
[2857888.472359] LNet: 469698:0:(o2iblnd_cb.c:3415:kiblnd_check_conns()) Skipped 7 previous similar messages
[2857892.459882] LNet: 469698:0:(o2iblnd_cb.c:3415:kiblnd_check_conns()) Timed out tx for 192.168.0.91@o2ib: 14 seconds
[2857892.472303] LNet: 469698:0:(o2iblnd_cb.c:3415:kiblnd_check_conns()) Skipped 7 previous similar messages
[2857896.459807] LNet: 469698:0:(o2iblnd_cb.c:3415:kiblnd_check_conns()) Timed out tx for 192.168.0.91@o2ib: 18 seconds
[2857896.472277] LNet: 469698:0:(o2iblnd_cb.c:3415:kiblnd_check_conns()) Skipped 7 previous similar messages
[2857904.203838] Lustre: 523385:0:(client.c:2289:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1712489212/real 1712489212] req@000000008dd7c4a6 x1795646611094784/t0(0) o104->lustre-OST001a@192.168.0.95@o2ib:15/16 lens 328/224 e 0 to 1 dl 1712489219 ref 2 fl Rpc:Xr/2/ffffffff rc -11/-1 job:''
[2857904.203841] Lustre: 523363:0:(client.c:2289:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1712489212/real 1712489212] req@000000004f190b8b x1795646611094528/t0(0) o104->lustre-OST001a@192.168.0.95@o2ib:15/16 lens 328/224 e 0 to 1 dl 1712489219 ref 2 fl Rpc:Xr/2/ffffffff rc -11/-1 job:''
[2857904.203849] Lustre: 523363:0:(client.c:2289:ptlrpc_expire_one_request()) Skipped 5880 previous similar messages
[2857904.236341] Lustre: 523385:0:(client.c:2289:ptlrpc_expire_one_request()) Skipped 5891 previous similar messages
[2857906.987650] Lustre: ll_ost03_039: service thread pid 523364 was inactive for 63.498 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one.
[2857906.987652] Lustre: ll_ost03_045: service thread pid 523370 was inactive for 63.498 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one.
[2857906.987654] Lustre: ll_ost03_025: service thread pid 523350 was inactive for 63.498 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one.
[2857906.987662] Lustre: Skipped 8 previous similar messages [2857906.987663] Lustre: Skipped 8 previous similar messages [2857907.005814] Lustre: Skipped 2 previous similar messages [2857938.736185] LustreError: 523354:0:(ldlm_lockd.c:713:ldlm_handle_ast_error()) ### client (nid 192.168.0.95@o2ib) failed to reply to blocking AST (req@00000000990e76a5 x1795646606762688 status 0 rc -110), evict it ns: filter-lustre-OST005c_UUID lock: 00000000b9902f06/0xbc03d50ae91c2e4b lrc: 4/0,0 mode: PR/PR res: [0x1f00000405:0x2:0x0].0x0 rrc: 148001 type: EXT [12652257280->12653649919] (req 12652257280->12652306431) gid 0 flags: 0x60000400010020 nid: 192.168.0.95@o2ib remote: 0x73bc24bd6bd4054b expref: 131391 pid: 518964 timeout: 2858040 lvb_type: 0 [2857938.736187] LustreError: 508549:0:(ldlm_lockd.c:713:ldlm_handle_ast_error()) ### client (nid 192.168.0.95@o2ib) failed to reply to blocking AST (req@000000001b463c42 x1795646606810368 status 0 rc -110), evict it ns: filter-lustre-OST0026_UUID lock: 0000000048017a11/0xbc03d50ae92c2468 lrc: 4/0,0 mode: PR/PR res: [0x118000040f:0x2:0x0].0x0 rrc: 148279 type: EXT [12649394176->12650786815] (req 12649394176->12649443327) gid 0 flags: 0x60000400010020 nid: 192.168.0.95@o2ib remote: 0x73bc24bd6bde0951 expref: 131617 pid: 523193 timeout: 2858040 lvb_type: 0 [2857938.736190] LustreError: 138-a: lustre-OST0026: A client on nid 192.168.0.95@o2ib was evicted due to a lock blocking callback time out: rc -110 [2857938.736198] LustreError: 508549:0:(ldlm_lockd.c:713:ldlm_handle_ast_error()) Skipped 2 previous similar messages [2857938.736199] LustreError: Skipped 1 previous similar message [2857938.736211] LustreError: 486167:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 46s: evicting client at 192.168.0.95@o2ib ns: filter-lustre-OST000e_UUID lock: 00000000fed41461/0xbc03d50ae91b6b1d lrc: 3/0,0 mode: PR/PR res: [0xb80000407:0x3:0x0].0x0 rrc: 148763 type: EXT [12649390080->12650692607] (req 
12649390080->12649443327) gid 0 flags: 0x60000400030020 nid: 192.168.0.95@o2ib remote: 0x73bc24bd6bd350e1 expref: 131680 pid: 518841 timeout: 0 lvb_type: 0 [2857938.736217] LustreError: 486167:0:(ldlm_lockd.c:257:expired_lock_main()) Skipped 30 previous similar messages [2857938.736670] LustreError: 523335:0:(client.c:1255:ptlrpc_import_delay_req()) @@@ IMP_CLOSED req@000000005c3aa4e6 x1795646620937152/t0(0) o104->lustre-OST000e@192.168.0.95@o2ib:15/16 lens 328/224 e 0 to 0 dl 0 ref 1 fl Rpc:QU/0/ffffffff rc 0/-1 job:'' [2857938.736672] LustreError: 523335:0:(client.c:1255:ptlrpc_import_delay_req()) Skipped 8409 previous similar messages [2857938.757611] LustreError: 527338:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) ldlm_cancel from 192.168.0.104@o2ib arrived at 1712489257 with bad export cookie 13547906346255753799 [2857938.757615] LustreError: 523002:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) ldlm_cancel from 192.168.0.104@o2ib arrived at 1712489257 with bad export cookie 13547906346255753799 [2857938.757618] LustreError: 527337:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) ldlm_cancel from 192.168.0.104@o2ib arrived at 1712489257 with bad export cookie 13547906346255753799 [2857938.757620] LustreError: 527338:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) Skipped 4061 previous similar messages [2857938.757622] LustreError: 523002:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) Skipped 4061 previous similar messages [2857938.757624] LustreError: 527337:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) Skipped 4061 previous similar messages [2857938.789954] LustreError: 523354:0:(ldlm_lockd.c:713:ldlm_handle_ast_error()) Skipped 7 previous similar messages [2857940.737597] LustreError: 491847:0:(client.c:1255:ptlrpc_import_delay_req()) @@@ IMP_CLOSED req@000000006b6c7813 x1795646621215808/t0(0) o104->lustre-OST004a@192.168.0.100@o2ib:15/16 lens 328/224 e 0 to 0 dl 0 ref 1 fl Rpc:QU/0/ffffffff rc 0/-1 job:'' [2857940.763698] LustreError: 
491847:0:(client.c:1255:ptlrpc_import_delay_req()) Skipped 1741 previous similar messages [2857942.799284] LustreError: 486167:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 100s: evicting client at 192.168.0.101@o2ib ns: filter-lustre-OST0002_UUID lock: 00000000ae870d98/0xbc03d50ae8d620f3 lrc: 4/0,0 mode: PR/PR res: [0x880000407:0x4:0x0].0x0 rrc: 149669 type: EXT [12591480832->12591575039] (req 12591480832->12591534079) gid 0 flags: 0x60000400010020 nid: 192.168.0.101@o2ib remote: 0x9e497775c3dd6dbd expref: 129874 pid: 527408 timeout: 2857990 lvb_type: 0 [2857942.836890] LustreError: 523013:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) ldlm_cancel from 192.168.0.100@o2ib arrived at 1712489261 with bad export cookie 13547906346255752742 [2857942.848276] LustreError: 486167:0:(ldlm_lockd.c:257:expired_lock_main()) Skipped 1238 previous similar messages [2857942.867246] LustreError: 523013:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) Skipped 33 previous similar messages [2857942.957564] LustreError: 523344:0:(ldlm_lockd.c:713:ldlm_handle_ast_error()) ### client (nid 192.168.0.100@o2ib) failed to reply to blocking AST (req@000000007a585ccc x1795646609122112 status 0 rc -110), evict it ns: filter-lustre-OST000e_UUID lock: 0000000059c2f144/0xbc03d50ae91c648b lrc: 4/0,0 mode: PR/PR res: [0xb80000407:0x4:0x0].0x0 rrc: 148153 type: EXT [12585807872->12586733567] (req 12585807872->12585857023) gid 0 flags: 0x60000400010020 nid: 192.168.0.100@o2ib remote: 0x746077e7dd50c06f expref: 98650 pid: 523179 timeout: 2858060 lvb_type: 0 [2857943.011478] LustreError: 138-a: lustre-OST000e: A client on nid 192.168.0.100@o2ib was evicted due to a lock blocking callback time out: rc -110 [2857943.026738] LustreError: Skipped 19 previous similar messages [2857944.748295] LustreError: 523372:0:(client.c:1255:ptlrpc_import_delay_req()) @@@ IMP_CLOSED req@00000000c1cd5b2d x1795646621633280/t0(0) o104->lustre-OST0020@192.168.0.101@o2ib:15/16 lens 328/224 e 0 to 0 
dl 0 ref 1 fl Rpc:QU/0/ffffffff rc 0/-1 job:'' [2857944.774020] LustreError: 523372:0:(client.c:1255:ptlrpc_import_delay_req()) Skipped 1915 previous similar messages [2857949.606086] Lustre: lustre-MDT0008: Client 3c55ee59-e078-479a-987d-704255682268 (at 192.168.0.100@o2ib) reconnecting [2857949.606089] Lustre: lustre-MDT000e: Client 3c55ee59-e078-479a-987d-704255682268 (at 192.168.0.100@o2ib) reconnecting [2857949.606091] Lustre: lustre-MDT0002: Client 3c55ee59-e078-479a-987d-704255682268 (at 192.168.0.100@o2ib) reconnecting [2857949.606097] Lustre: Skipped 178 previous similar messages [2857949.606099] Lustre: Skipped 178 previous similar messages [2857951.426727] LustreError: 523378:0:(ldlm_lockd.c:713:ldlm_handle_ast_error()) ### client (nid 192.168.0.95@o2ib) failed to reply to blocking AST (req@00000000c77a7b54 x1795646611681856 status 0 rc -110), evict it ns: filter-lustre-OST0008_UUID lock: 00000000988c80e5/0xbc03d50ae8c292a3 lrc: 4/0,0 mode: PR/PR res: [0xa00000407:0x5:0x0].0x0 rrc: 148399 type: EXT [12587675648->12588318719] (req 12587675648->12587728895) gid 0 flags: 0x60000400010020 nid: 192.168.0.95@o2ib remote: 0x73bc24bd6b671d71 expref: 131498 pid: 523178 timeout: 2858064 lvb_type: 0 [2857951.480345] LustreError: 523378:0:(ldlm_lockd.c:713:ldlm_handle_ast_error()) Skipped 3 previous similar messages [2857951.492695] LustreError: 138-a: lustre-OST0008: A client on nid 192.168.0.95@o2ib was evicted due to a lock blocking callback time out: rc -110 [2857951.507431] LustreError: Skipped 2 previous similar messages [2857951.514536] LustreError: 486167:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 34s: evicting client at 192.168.0.95@o2ib ns: filter-lustre-OST0008_UUID lock: 00000000988c80e5/0xbc03d50ae8c292a3 lrc: 3/0,0 mode: PR/PR res: [0xa00000407:0x5:0x0].0x0 rrc: 148399 type: EXT [12587675648->12588318719] (req 12587675648->12587728895) gid 0 flags: 0x60000400010020 nid: 192.168.0.95@o2ib remote: 0x73bc24bd6b671d71 
expref: 131499 pid: 523178 timeout: 0 lvb_type: 0 [2857951.561499] LustreError: 486167:0:(ldlm_lockd.c:257:expired_lock_main()) Skipped 542 previous similar messages [2857952.758293] LustreError: 523395:0:(client.c:1255:ptlrpc_import_delay_req()) @@@ IMP_CLOSED req@0000000089a45b94 x1795646622753536/t0(0) o104->lustre-OST0056@192.168.0.101@o2ib:15/16 lens 328/224 e 0 to 0 dl 0 ref 1 fl Rpc:QU/0/ffffffff rc 0/-1 job:'' [2857952.783829] LustreError: 523395:0:(client.c:1255:ptlrpc_import_delay_req()) Skipped 2523 previous similar messages [2857956.282287] LustreError: 518622:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) ldlm_cancel from 192.168.0.100@o2ib arrived at 1712489275 with bad export cookie 13547906345684935712 [2857956.300860] LustreError: 518622:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) Skipped 61 previous similar messages [2857962.123028] LustreError: 523353:0:(ldlm_lockd.c:713:ldlm_handle_ast_error()) ### client (nid 192.168.0.91@o2ib) failed to reply to blocking AST (req@00000000527752b5 x1795646613018880 status 0 rc -110), evict it ns: filter-lustre-OST0014_UUID lock: 00000000d62094d8/0xbc03d50ae92a693f lrc: 4/0,0 mode: PR/PR res: [0xd00000406:0x2:0x0].0x0 rrc: 148625 type: EXT [12568236032->12569440255] (req 12568236032->12568285183) gid 0 flags: 0x60000400010020 nid: 192.168.0.91@o2ib remote: 0x7d6fb536a49cc4ea expref: 167758 pid: 518804 timeout: 2858059 lvb_type: 0 [2857962.123144] LustreError: 138-a: lustre-OST0008: A client on nid 192.168.0.91@o2ib was evicted due to a lock blocking callback time out: rc -110 [2857962.176781] LustreError: 523353:0:(ldlm_lockd.c:713:ldlm_handle_ast_error()) Skipped 17 previous similar messages [2857962.191743] LustreError: Skipped 16 previous similar messages [2857967.246911] LustreError: 523370:0:(ldlm_lockd.c:713:ldlm_handle_ast_error()) ### client (nid 192.168.0.91@o2ib) failed to reply to blocking AST (req@000000004d765170 x1795646613376128 status 0 rc -110), evict it ns: filter-lustre-OST0044_UUID lock: 
000000001f80d909/0xbc03d50ae92a8ac5 lrc: 4/0,0 mode: PR/PR res: [0x1900000405:0x3:0x0].0x0 rrc: 147797 type: EXT [12575019008->12576460799] (req 12575019008->12575072255) gid 0 flags: 0x60000400030020 nid: 192.168.0.91@o2ib remote: 0x7d6fb536a49ce95d expref: 167715 pid: 518804 timeout: 2858059 lvb_type: 0
[2857967.278894] LustreError: 138-a: lustre-OST0044: A client on nid 192.168.0.91@o2ib was evicted due to a lock blocking callback time out: rc -110
[2857967.300729] LustreError: 523370:0:(ldlm_lockd.c:713:ldlm_handle_ast_error()) Skipped 5 previous similar messages
[2857967.316043] LustreError: Skipped 5 previous similar messages
[2857968.522763] Lustre: 523388:0:(client.c:2289:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1712489231/real 1712489232] req@000000008e5cd362 x1795646613399104/t0(0) o104->lustre-OST003e@192.168.0.91@o2ib:15/16 lens 328/224 e 0 to 1 dl 1712489285 ref 1 fl Rpc:XQr/2/ffffffff rc -11/-1 job:''
[2857968.522766] Lustre: 523390:0:(client.c:2289:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1712489231/real 1712489232] req@00000000088f83b5 x1795646613377920/t0(0) o104->lustre-OST003e@192.168.0.91@o2ib:15/16 lens 328/224 e 0 to 1 dl 1712489285 ref 1 fl Rpc:XQr/2/ffffffff rc -11/-1 job:''
[2857968.522774] Lustre: 523390:0:(client.c:2289:ptlrpc_expire_one_request()) Skipped 19478 previous similar messages
[2857968.522880] LustreError: 486167:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 56s: evicting client at 192.168.0.91@o2ib ns: filter-lustre-OST003e_UUID lock: 0000000073285eb0/0xbc03d50ae91929b2 lrc: 3/0,0 mode: PR/PR res: [0x1780000406:0x3:0x0].0x0 rrc: 147647 type: EXT [12578095104->12579303423] (req 12578095104->12578148351) gid 0 flags: 0x60000400010020 nid: 192.168.0.91@o2ib remote: 0x7d6fb536a48a14da expref: 167767 pid: 518975 timeout: 0 lvb_type: 0
[2857968.522882] LustreError: 486167:0:(ldlm_lockd.c:257:expired_lock_main()) Skipped 22 previous similar messages
[2857968.555334] Lustre: 523388:0:(client.c:2289:ptlrpc_expire_one_request()) Skipped 19492 previous similar messages
[2857968.758612] LustreError: 523350:0:(client.c:1255:ptlrpc_import_delay_req()) @@@ IMP_CLOSED req@00000000fedb1729 x1795646798129216/t0(0) o104->lustre-OST0002@192.168.0.91@o2ib:15/16 lens 328/224 e 0 to 0 dl 0 ref 1 fl Rpc:QU/0/ffffffff rc 0/-1 job:''
[2857968.758615] LustreError: 523390:0:(client.c:1255:ptlrpc_import_delay_req()) @@@ IMP_CLOSED req@00000000dab2d2b4 x1795646798129408/t0(0) o104->lustre-OST003e@192.168.0.91@o2ib:15/16 lens 328/224 e 0 to 0 dl 0 ref 1 fl Rpc:QU/0/ffffffff rc 0/-1 job:''
[2857968.758623] LustreError: 523390:0:(client.c:1255:ptlrpc_import_delay_req()) Skipped 925694 previous similar messages
[2857968.784180] LustreError: 523350:0:(client.c:1255:ptlrpc_import_delay_req()) Skipped 930874 previous similar messages
[2857980.458424] LNetError: 469698:0:(o2iblnd_cb.c:3370:kiblnd_check_txs_locked()) Timed out tx: tx_queue(WSQ:001), 16 seconds
[2857980.471518] LNetError: 469698:0:(o2iblnd_cb.c:3439:kiblnd_check_conns()) Timed out RDMA with 192.168.0.91@o2ib (16): c: 0, oc: 1, rc: 8
[2857984.462366] LNet: 469698:0:(o2iblnd_cb.c:3415:kiblnd_check_conns()) Timed out tx for 192.168.0.91@o2ib: 4 seconds
[2857984.475204] LNet: 469698:0:(o2iblnd_cb.c:3415:kiblnd_check_conns()) Skipped 15 previous similar messages
[2857985.322406] Lustre: lustre-OST0014: haven't heard from client 5003d4cc-0b97-41f5-9e4b-7758af7fd7cd (at 192.168.0.104@o2ib) in 47 seconds. I think it's dead, and I am evicting it. exp 00000000c8e488d7, cur 1712489304 expire 1712489274 last 1712489257
[2857985.347824] Lustre: Skipped 3 previous similar messages
[2857986.350609] LustreError: 523394:0:(ldlm_lockd.c:713:ldlm_handle_ast_error()) ### client (nid 192.168.0.91@o2ib) failed to reply to blocking AST (req@00000000790bd6c0 x1795646610783232 status 0 rc -110), evict it ns: filter-lustre-OST0038_UUID lock: 0000000035dc73a2/0xbc03d50ae907e60b lrc: 4/0,0 mode: PR/PR res: [0x160000040d:0x3:0x0].0x0 rrc: 147722 type: EXT [12606869504->12607041535] (req 12606869504->12606918655) gid 0 flags: 0x60000400030020 nid: 192.168.0.91@o2ib remote: 0x7d6fb536a47284e7 expref: 167601 pid: 505618 timeout: 2858048 lvb_type: 0
[2857986.404441] LustreError: 523394:0:(ldlm_lockd.c:713:ldlm_handle_ast_error()) Skipped 3 previous similar messages
[2857986.416520] LustreError: 138-a: lustre-OST0038: A client on nid 192.168.0.91@o2ib was evicted due to a lock blocking callback time out: rc -110
[2857986.431235] LustreError: Skipped 4 previous similar messages
[2857992.462237] LNet: 469698:0:(o2iblnd_cb.c:3415:kiblnd_check_conns()) Timed out tx for 192.168.0.91@o2ib: 12 seconds
[2857992.475226] LNet: 469698:0:(o2iblnd_cb.c:3415:kiblnd_check_conns()) Skipped 15 previous similar messages
[2858001.178678] Lustre: lustre-OST0032: haven't heard from client 495f5759-68b9-423a-9105-fb855a79e8b3 (at 192.168.0.101@o2ib) in 46 seconds. I think it's dead, and I am evicting it. exp 00000000bd6af1ab, cur 1712489320 expire 1712489290 last 1712489274
[2858001.203995] Lustre: Skipped 30 previous similar messages
[2858008.461973] LNet: 469698:0:(o2iblnd_cb.c:3415:kiblnd_check_conns()) Timed out tx for 192.168.0.91@o2ib: 28 seconds
[2858008.474852] LNet: 469698:0:(o2iblnd_cb.c:3415:kiblnd_check_conns()) Skipped 31 previous similar messages
[2858044.461382] LNet: 469698:0:(o2iblnd_cb.c:3415:kiblnd_check_conns()) Timed out tx for 192.168.0.91@o2ib: 20 seconds
[2858044.474251] LNet: 469698:0:(o2iblnd_cb.c:3415:kiblnd_check_conns()) Skipped 63 previous similar messages
[2858069.201363] Lustre: lustre-OST0014: haven't heard from client 8a3b1b08-af0c-43fe-a247-a462555b0372 (at 192.168.0.95@o2ib) in 47 seconds. I think it's dead, and I am evicting it. exp 00000000e0b7b93f, cur 1712489388 expire 1712489358 last 1712489341
[2858069.226659] Lustre: Skipped 48 previous similar messages
[2858120.516170] Lustre: lustre-OST000e: haven't heard from client 3c55ee59-e078-479a-987d-704255682268 (at 192.168.0.100@o2ib) in 47 seconds. I think it's dead, and I am evicting it. exp 0000000038149ca8, cur 1712489439 expire 1712489409 last 1712489392
[2858120.541279] Lustre: Skipped 39 previous similar messages
[2858144.455733] LNetError: 469698:0:(o2iblnd_cb.c:3370:kiblnd_check_txs_locked()) Timed out tx: tx_queue(WSQ:001), 19 seconds
[2858144.469097] LNetError: 469698:0:(o2iblnd_cb.c:3439:kiblnd_check_conns()) Timed out RDMA with 192.168.0.91@o2ib (11): c: 0, oc: 0, rc: 8
[2858164.455402] LNetError: 469698:0:(o2iblnd_cb.c:3370:kiblnd_check_txs_locked()) Timed out tx: tx_queue(WSQ:001), 18 seconds
[2858164.468601] LNetError: 469698:0:(o2iblnd_cb.c:3370:kiblnd_check_txs_locked()) Skipped 1 previous similar message
[2858164.480596] LNetError: 469698:0:(o2iblnd_cb.c:3439:kiblnd_check_conns()) Timed out RDMA with 192.168.0.91@o2ib (16): c: 0, oc: 0, rc: 8
[2858164.494578] LNetError: 469698:0:(o2iblnd_cb.c:3439:kiblnd_check_conns()) Skipped 1 previous similar message
[2858184.455075] LNetError: 469698:0:(o2iblnd_cb.c:3370:kiblnd_check_txs_locked()) Timed out tx: tx_queue(WSQ:001), 19 seconds
[2858184.468502] LNetError: 469698:0:(o2iblnd_cb.c:3439:kiblnd_check_conns()) Timed out RDMA with 192.168.0.91@o2ib (19): c: 0, oc: 0, rc: 8
[2858194.147408] Lustre: lustre-MDT0002: haven't heard from client e3c026fc-444f-462b-aa0c-4bbb7d271d35 (at 192.168.0.91@o2ib) in 47 seconds. I think it's dead, and I am evicting it. exp 0000000062518138, cur 1712489513 expire 1712489483 last 1712489466
[2858194.172688] Lustre: Skipped 27 previous similar messages
[2858204.454758] LNetError: 469698:0:(o2iblnd_cb.c:3370:kiblnd_check_txs_locked()) Timed out tx: tx_queue(WSQ:001), 17 seconds
[2858204.468020] LNetError: 469698:0:(o2iblnd_cb.c:3439:kiblnd_check_conns()) Timed out RDMA with 192.168.0.91@o2ib (16): c: 0, oc: 3, rc: 8
[2858336.452581] LNetError: 469698:0:(o2iblnd_cb.c:3370:kiblnd_check_txs_locked()) Timed out tx: active_txs(WSQ:010), 16 seconds
[2858336.466141] LNetError: 469698:0:(o2iblnd_cb.c:3439:kiblnd_check_conns()) Timed out RDMA with 192.168.0.91@o2ib (116): c: 8, oc: 0, rc: 8
[2858336.824775] Lustre: lustre-MDT000e: Client 495f5759-68b9-423a-9105-fb855a79e8b3 (at 192.168.0.101@o2ib) reconnecting
[2858336.837210] Lustre: Skipped 67 previous similar messages
[2858368.164212] Lustre: lustre-MDT0008: haven't heard from client 5003d4cc-0b97-41f5-9e4b-7758af7fd7cd (at 192.168.0.104@o2ib) in 47 seconds. I think it's dead, and I am evicting it. exp 000000003b664b64, cur 1712489687 expire 1712489657 last 1712489640
[2858368.189825] Lustre: Skipped 24 previous similar messages
[2858756.382175] Lustre: lustre-MDT0002: haven't heard from client 495f5759-68b9-423a-9105-fb855a79e8b3 (at 192.168.0.101@o2ib) in 48 seconds. I think it's dead, and I am evicting it. exp 00000000fad26746, cur 1712490075 expire 1712490045 last 1712490027
[2858756.407606] Lustre: Skipped 7 previous similar messages
[2858947.937948] Lustre: lustre-MDT0002: Client 5003d4cc-0b97-41f5-9e4b-7758af7fd7cd (at 192.168.0.104@o2ib) reconnecting
[2858947.937950] Lustre: lustre-MDT0008: Client 5003d4cc-0b97-41f5-9e4b-7758af7fd7cd (at 192.168.0.104@o2ib) reconnecting
[2858947.937955] Lustre: Skipped 10 previous similar messages
[2858947.944276] Lustre: lustre-MDT0002: Export 000000009e833eeb already connecting from 192.168.0.104@o2ib
[2858947.950647] Lustre: Skipped 12 previous similar messages
[2859625.286213] Lustre: lustre-MDT0002: haven't heard from client 3c55ee59-e078-479a-987d-704255682268 (at 192.168.0.100@o2ib) in 47 seconds. I think it's dead, and I am evicting it. exp 00000000f6bedc9c, cur 1712490944 expire 1712490914 last 1712490897
[2859625.311600] Lustre: Skipped 9 previous similar messages
[2860155.430654] LNetError: 469698:0:(o2iblnd_cb.c:3370:kiblnd_check_txs_locked()) Timed out tx: active_txs(WSQ:010), 19 seconds
[2860155.444086] LNetError: 469698:0:(o2iblnd_cb.c:3439:kiblnd_check_conns()) Timed out RDMA with 192.168.0.101@o2ib (124): c: 6, oc: 0, rc: 8
[2863317.384165] LustreError: 486155:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) ldlm_cancel from 192.168.0.95@o2ib arrived at 1712494636 with bad export cookie 13547906345684935397
[2863317.384168] LustreError: 523226:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) ldlm_cancel from 192.168.0.95@o2ib arrived at 1712494636 with bad export cookie 13547906345684935397
[2863317.384173] LustreError: 523226:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) Skipped 195 previous similar messages
[2863317.402338] LustreError: 486155:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) Skipped 199 previous similar messages
[2863322.274724] LustreError: 523230:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) ldlm_cancel from 192.168.0.95@o2ib arrived at 1712494641 with bad export cookie 13547906345684935397
[2863322.292865] LustreError: 523230:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) Skipped 135 previous similar messages
[2863354.213769] LustreError: 523230:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) ldlm_cancel from 192.168.0.95@o2ib arrived at 1712494673 with bad export cookie 13547906347861594448
[2863354.231967] LustreError: 523230:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) Skipped 1 previous similar message
[2863566.415927] LustreError: 523233:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) ldlm_cancel from 192.168.0.91@o2ib arrived at 1712494885 with bad export cookie 13547906347861594882
[2863566.434273] LustreError: 523233:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) Skipped 10 previous similar messages
[2863619.262999] Lustre: lustre-MDT0002: haven't heard from client 495f5759-68b9-423a-9105-fb855a79e8b3 (at 192.168.0.101@o2ib) in 47 seconds. I think it's dead, and I am evicting it. exp 00000000b40528e1, cur 1712494938 expire 1712494908 last 1712494891
[2863619.288564] Lustre: Skipped 7 previous similar messages
[2864269.050422] Lustre: lustre-MDT0002: haven't heard from client 495f5759-68b9-423a-9105-fb855a79e8b3 (at 192.168.0.101@o2ib) in 47 seconds. I think it's dead, and I am evicting it. exp 000000005f118fdc, cur 1712495588 expire 1712495558 last 1712495541
[2864269.076003] Lustre: Skipped 3 previous similar messages
[2865232.036189] Lustre: lustre-MDT0002: haven't heard from client 495f5759-68b9-423a-9105-fb855a79e8b3 (at 192.168.0.101@o2ib) in 47 seconds. I think it's dead, and I am evicting it. exp 00000000c1311fa6, cur 1712496551 expire 1712496521 last 1712496504
[2865232.061738] Lustre: Skipped 3 previous similar messages
[2865610.081118] Lustre: lustre-MDT000e: haven't heard from client 495f5759-68b9-423a-9105-fb855a79e8b3 (at 192.168.0.101@o2ib) in 48 seconds. I think it's dead, and I am evicting it. exp 000000004b3ea746, cur 1712496929 expire 1712496899 last 1712496881
[2865610.106499] Lustre: Skipped 1 previous similar message
[2865659.291156] Lustre: lustre-MDT0008: haven't heard from client 495f5759-68b9-423a-9105-fb855a79e8b3 (at 192.168.0.101@o2ib) in 47 seconds. I think it's dead, and I am evicting it. exp 000000008ec8f26a, cur 1712496978 expire 1712496948 last 1712496931
[2865659.316331] Lustre: Skipped 1 previous similar message
[2865910.166883] Lustre: lustre-MDT0002: haven't heard from client 495f5759-68b9-423a-9105-fb855a79e8b3 (at 192.168.0.101@o2ib) in 47 seconds. I think it's dead, and I am evicting it. exp 00000000c7e0922c, cur 1712497229 expire 1712497199 last 1712497182
[2865910.192015] Lustre: Skipped 1 previous similar message
[2866013.673428] Lustre: lustre-MDT0008: Client 495f5759-68b9-423a-9105-fb855a79e8b3 (at 192.168.0.101@o2ib) reconnecting
[2866013.686144] Lustre: Skipped 7 previous similar messages
[2866078.370831] LustreError: 523223:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) ldlm_cancel from 192.168.0.101@o2ib arrived at 1712497397 with bad export cookie 13547906347861594581
[2866078.389204] LustreError: 523223:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) Skipped 2 previous similar messages
[2866089.131442] LustreError: 523223:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) ldlm_cancel from 192.168.0.101@o2ib arrived at 1712497408 with bad export cookie 13547906347861594574
[2866089.149664] LustreError: 523223:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) Skipped 2 previous similar messages
[2869630.043283] Lustre: lustre-MDT0008: haven't heard from client 5003d4cc-0b97-41f5-9e4b-7758af7fd7cd (at 192.168.0.104@o2ib) in 47 seconds. I think it's dead, and I am evicting it. exp 000000006bac0f90, cur 1712500949 expire 1712500919 last 1712500902
[2869630.068671] Lustre: Skipped 3 previous similar messages
[2871272.322292] LustreError: 527343:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) ldlm_cancel from 192.168.0.104@o2ib arrived at 1712502591 with bad export cookie 13547906347861593958
[2871272.340800] LustreError: 527343:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) Skipped 10 previous similar messages
[2871272.840638] LustreError: 527343:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) ldlm_cancel from 192.168.0.104@o2ib arrived at 1712502591 with bad export cookie 13547906347861594056
[2871272.859140] LustreError: 527343:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) Skipped 11 previous similar messages
[2879892.838076] Lustre: lustre-MDT000e: haven't heard from client 3c55ee59-e078-479a-987d-704255682268 (at 192.168.0.100@o2ib) in 47 seconds. I think it's dead, and I am evicting it. exp 000000009283cf9d, cur 1712511212 expire 1712511182 last 1712511165
[2879892.863656] Lustre: Skipped 3 previous similar messages
[2880082.701472] Lustre: DEBUG MARKER: server1: executing check_logdir /tmp/test_logs/1712511398
[2880083.571875] Lustre: DEBUG MARKER: client1: executing check_logdir /tmp/test_logs/1712511398
[2880104.738422] Lustre: DEBUG MARKER: server1: executing yml_node
[2880105.641154] Lustre: DEBUG MARKER: client1: executing yml_node
[2880118.049287] Lustre: DEBUG MARKER: Client: 2.15.4
[2880128.698078] Lustre: DEBUG MARKER: MDS: 2.15.4
[2880139.372171] Lustre: DEBUG MARKER: OSS: 2.15.4
[2880425.404899] LustreError: 518622:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) ldlm_cancel from 192.168.0.100@o2ib arrived at 1712511744 with bad export cookie 13547906347861594371
[2880425.423278] LustreError: 518622:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) Skipped 5 previous similar messages
[2880431.912682] LustreError: 518622:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) ldlm_cancel from 192.168.0.100@o2ib arrived at 1712511751 with bad export cookie 13547906347861594392
[2880431.930993] LustreError: 518622:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) Skipped 1 previous similar message
[2880433.331594] LustreError: 518622:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) ldlm_cancel from 192.168.0.100@o2ib arrived at 1712511752 with bad export cookie 13547906347861594357
[2880433.349921] LustreError: 518622:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) Skipped 1 previous similar message
[2880437.671396] LustreError: 518622:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) ldlm_cancel from 192.168.0.100@o2ib arrived at 1712511756 with bad export cookie 13547906347861594399
[2880437.689777] LustreError: 518622:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) Skipped 3 previous similar messages
[2880492.325617] LustreError: 518622:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) ldlm_cancel from 192.168.0.100@o2ib arrived at 1712511811 with bad export cookie 13547906347861594350
[2880492.344100] LustreError: 518622:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) Skipped 1 previous similar message
[2880504.311061] LustreError: 518622:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) ldlm_cancel from 192.168.0.100@o2ib arrived at 1712511823 with bad export cookie 13547906347861594700
[2880504.329632] LustreError: 518622:0:(ldlm_lockd.c:2517:ldlm_cancel_handler()) Skipped 2 previous similar messages
[2922135.537979] Lustre: DEBUG MARKER: server1: executing check_logdir /tmp/test_logs/1712553452
[2922136.405518] Lustre: DEBUG MARKER: client1: executing check_logdir /tmp/test_logs/1712553452
[2922138.454981] Lustre: DEBUG MARKER: server1: executing yml_node
[2922139.304845] Lustre: DEBUG MARKER: client1: executing yml_node
[2922142.341077] Lustre: DEBUG MARKER: Client: 2.15.4
[2922143.499834] Lustre: DEBUG MARKER: MDS: 2.15.4
[2922144.655335] Lustre: DEBUG MARKER: OSS: 2.15.4
[2927440.728919] Lustre: DEBUG MARKER: server1: executing check_logdir /tmp/test_logs/1712558757
[2927441.645799] Lustre: DEBUG MARKER: client1: executing check_logdir /tmp/test_logs/1712558757
[2927443.617785] Lustre: DEBUG MARKER: server1: executing yml_node
[2927444.541111] Lustre: DEBUG MARKER: client1: executing yml_node
[2927447.480049] Lustre: DEBUG MARKER: Client: 2.15.4
[2927448.623265] Lustre: DEBUG MARKER: MDS: 2.15.4
[2927449.793925] Lustre: DEBUG MARKER: OSS: 2.15.4
[2933573.982497] Lustre: DEBUG MARKER: server1: executing check_logdir /tmp/test_logs/1712564890
[2933575.069316] Lustre: DEBUG MARKER: client1: executing check_logdir /tmp/test_logs/1712564890
[2933577.090235] Lustre: DEBUG MARKER: server1: executing yml_node
[2933577.949117] Lustre: DEBUG MARKER: client1: executing yml_node
[2933581.080577] Lustre: DEBUG MARKER: Client: 2.15.4
[2933582.256665] Lustre: DEBUG MARKER: MDS: 2.15.4
[2933583.432589] Lustre: DEBUG MARKER: OSS: 2.15.4
[2939045.686707] Lustre: DEBUG MARKER: server1: executing check_logdir /tmp/test_logs/1712570362
[2939046.986763] Lustre: DEBUG MARKER: client1: executing check_logdir /tmp/test_logs/1712570362
[2939048.963133] Lustre: DEBUG MARKER: server1: executing yml_node
[2939049.864249] Lustre: DEBUG MARKER: client1: executing yml_node
[2939052.790171] Lustre: DEBUG MARKER: Client: 2.15.4
[2939053.930922] Lustre: DEBUG MARKER: MDS: 2.15.4
[2939055.083127] Lustre: DEBUG MARKER: OSS: 2.15.4
[2943355.772261] Unable to handle kernel NULL pointer dereference at virtual address 0000000000000000
[2943355.782920] Mem abort info:
[2943355.787084] ESR = 0x96000004
[2943355.791467] EC = 0x25: DABT (current EL), IL = 32 bits
[2943355.798103] SET = 0, FnV = 0
[2943355.802514] EA = 0, S1PTW = 0
[2943355.806956] Data abort info:
[2943355.811132] ISV = 0, ISS = 0x00000004
[2943355.816253] CM = 0, WnR = 0
[2943355.820503] user pgtable: 4k pages, 48-bit VAs, pgdp=000020225f1f5000
[2943355.828583] [0000000000000000] pgd=0000000000000000, p4d=0000000000000000
[2943355.837047] Internal error: Oops: 0000000096000004 [#1] SMP
[2943355.843979] Modules linked in: ofd(OE) ost(OE) osp(OE) mdd(OE) lod(OE) mdt(OE) lfsck(OE) osd_ldiskfs(OE) lquota(OE) ldiskfs(OE) obdecho(OE) mgc(OE) ptlrpc_gss(OE) dm_flakey lustre(OE) lmv(OE) mdc(OE) lov(OE) osc(OE) fid(OE) fld(OE) ko2iblnd(OE) ptlrpc(OE) obdclass(OE) lnet(OE) libcfs(OE) virtio_pci virtio_pci_modern_dev virtio_ring virtio binfmt_misc crc32_generic uio_pci_generic uio vfio_pci vfio_virqfd vfio_iommu_type1 vfio cuse rdma_ucm(OE) rdma_cm(OE) iw_cm(OE) ib_ipoib(OE) ib_cm(OE) bonding ib_umad(OE) rfkill sunrpc vfat fat ipmi_ssif acpi_ipmi sg ipmi_si ipmi_devintf hisi_uncore_l3c_pmu hisi_uncore_hha_pmu hisi_uncore_ddrc_pmu ipmi_msghandler hisi_uncore_pmu sch_fq_codel fuse ext4 mbcache jbd2 mlx5_ib(OE) ib_uverbs(OE) sd_mod ib_core(OE) mlx5_core(OE) hclge mlxfw(OE) hisi_sas_v3_hw tls ghash_ce hisi_sas_main sha2_ce auxiliary(OE) psample sha256_arm64 libsas ahci hibmc_drm nvme sha1_ce mlxdevm(OE) drm_vram_helper libahci sbsa_gwdt scsi_transport_sas hns3 nvme_core
[2943355.844066] drm_ttm_helper libata megaraid_sas ttm t10_pi hnae3 mlx_compat(OE) host_edma_drv i2c_designware_platform i2c_designware_core dm_mirror dm_region_hash dm_log dm_mod xpmem(OE) aes_neon_bs aes_neon_blk aes_ce_blk crypto_simd cryptd aes_ce_cipher [last unloaded: libcfs]
[2943355.962538] CPU: 71 PID: 535774 Comm: mdt02_020 Kdump: loaded Tainted: G OE 5.10.0-188.0.0.101.oe2203sp3.aarch64 #1
[2943355.976312] Hardware name: Huawei TaiShan 200 (Model 2280)/BC82AMDDA, BIOS 1.38 07/04/2020
[2943355.986557] pstate: a0400009 (NzCv daif +PAN -UAO -TCO BTYPE=--)
[2943355.994241] pc : lod_lookup+0x20/0x34 [lod]
[2943356.000088] lr : __mdd_lookup.isra.0+0x25c/0x5b0 [mdd]
[2943356.006730] sp : ffff800050e23680
[2943356.011548] x29: ffff800050e23680 x28: 0000000000003690
[2943356.018305] x27: ffff800009472000 x26: 0000000000000001
[2943356.025117] x25: ffff00510e59bb50 x24: ffff800009472000
[2943356.031941] x23: ffff20258e612490 x22: ffff20211d045a00
[2943356.038785] x21: ffff20210dca6980 x20: 0000000000000000
[2943356.045691] x19: ffff00575c617300 x18: 00000000ff2eeb9a
[2943356.052572] x17: ffff80000a2a7378 x16: 0000000000000000
[2943356.059407] x15: ffffffffffffffff x14: ffffffffffffffff
[2943356.066227] x13: 0000000000000030 x12: 0000000000000a50
[2943356.073049] x11: 0000000000003000 x10: 0000000000000b20
[2943356.079771] x9 : ffff80000a71288c x8 : ffff20245a918020
[2943356.086564] x7 : 0000000000000970 x6 : 0000000000004000
[2943356.093333] x5 : 0000000000000001 x4 : 0000000000000000
[2943356.100050] x3 : ffff20211d045a00 x2 : ffff20258e612490
[2943356.106760] x1 : ffff005143638c00 x0 : ffff20210dca6980
[2943356.113461] Call trace:
[2943356.117294] lod_lookup+0x20/0x34 [lod]
[2943356.122494] __mdd_lookup.isra.0+0x25c/0x5b0 [mdd]
[2943356.128639] mdd_lookup+0x118/0x234 [mdd]
[2943356.134047] mdt_getattr_name_lock+0x1654/0x2ea0 [mdt]
[2943356.140545] mdt_intent_getattr+0x33c/0x604 [mdt]
[2943356.146633] mdt_intent_opc+0x16c/0x630 [mdt]
[2943356.152296] mdt_intent_policy+0x234/0x3ec [mdt]
[2943356.158292] ldlm_lock_enqueue+0x4b0/0x980 [ptlrpc]
[2943356.164539] ldlm_handle_enqueue0+0x73c/0x22b0 [ptlrpc]
[2943356.171121] tgt_enqueue+0x88/0x2d0 [ptlrpc]
[2943356.176736] tgt_handle_request0+0xd8/0x944 [ptlrpc]
[2943356.183074] tgt_request_handle+0x2a0/0xda0 [ptlrpc]
[2943356.189421] ptlrpc_server_handle_request.isra.0+0x3d4/0x1214 [ptlrpc]
[2943356.197528] ptlrpc_main+0xdec/0x16d0 [ptlrpc]
[2943356.203288] kthread+0x108/0x13c
[2943356.207785] ret_from_fork+0x10/0x18
[2943356.212604] Code: 910003fd f9400c21 d1006021 f9401c24 (f9400084)
[2943356.220001] SMP: stopping secondary CPUs
[2943356.227619] Starting crashdump kernel...
[2943356.232827] Bye!