[ 0.000000] Initializing cgroup subsys cpuset
[ 0.000000] Initializing cgroup subsys cpu
[ 0.000000] Initializing cgroup subsys cpuacct
[ 0.000000] Linux version 3.10.0-957.27.2.el7_lustre.pl2.x86_64 (sthiell@oak-rbh01) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-39) (GCC) ) #1 SMP Thu Nov 7 15:26:16 PST 2019
[ 0.000000] Command line: BOOT_IMAGE=/boot/vmlinuz-3.10.0-957.27.2.el7_lustre.pl2.x86_64 root=UUID=c3a48ae6-4259-4cfd-bd4c-1e4ff227425e ro crashkernel=auto nomodeset console=ttyS0,115200 LANG=en_US.UTF-8
[ 0.000000] e820: BIOS-provided physical RAM map:
[ 0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000008efff] usable
[ 0.000000] BIOS-e820: [mem 0x000000000008f000-0x000000000008ffff] ACPI NVS
[ 0.000000] BIOS-e820: [mem 0x0000000000090000-0x000000000009ffff] usable
[ 0.000000] BIOS-e820: [mem 0x0000000000100000-0x000000004f882fff] usable
[ 0.000000] BIOS-e820: [mem 0x000000004f883000-0x000000005788bfff] reserved
[ 0.000000] BIOS-e820: [mem 0x000000005788c000-0x000000006cacefff] usable
[ 0.000000] BIOS-e820: [mem 0x000000006cacf000-0x000000006efcefff] reserved
[ 0.000000] BIOS-e820: [mem 0x000000006efcf000-0x000000006fdfefff] ACPI NVS
[ 0.000000] BIOS-e820: [mem 0x000000006fdff000-0x000000006fffefff] ACPI data
[ 0.000000] BIOS-e820: [mem 0x000000006ffff000-0x000000006fffffff] usable
[ 0.000000] BIOS-e820: [mem 0x0000000070000000-0x000000008fffffff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000fec10000-0x00000000fec10fff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000fed80000-0x00000000fed80fff] reserved
[ 0.000000] BIOS-e820: [mem 0x0000000100000000-0x000000107f37ffff] usable
[ 0.000000] BIOS-e820: [mem 0x000000107f380000-0x000000107fffffff] reserved
[ 0.000000] BIOS-e820: [mem 0x0000001080000000-0x000000207ff7ffff] usable
[ 0.000000] BIOS-e820: [mem 0x000000207ff80000-0x000000207fffffff] reserved
[ 0.000000] BIOS-e820: [mem 0x0000002080000000-0x000000307ff7ffff] usable
[ 0.000000] BIOS-e820: [mem 0x000000307ff80000-0x000000307fffffff] reserved
[ 0.000000] BIOS-e820: [mem 0x0000003080000000-0x000000407ff7ffff] usable
[ 0.000000] BIOS-e820: [mem 0x000000407ff80000-0x000000407fffffff] reserved
[ 0.000000] NX (Execute Disable) protection: active
[ 0.000000] e820: update [mem 0x3705b020-0x3708cc5f] usable ==> usable
[ 0.000000] e820: update [mem 0x37029020-0x3705ac5f] usable ==> usable
[ 0.000000] e820: update [mem 0x37020020-0x3702805f] usable ==> usable
[ 0.000000] e820: update [mem 0x37007020-0x3701f65f] usable ==> usable
[ 0.000000] extended physical RAM map:
[ 0.000000] reserve setup_data: [mem 0x0000000000000000-0x000000000008efff] usable
[ 0.000000] reserve setup_data: [mem 0x000000000008f000-0x000000000008ffff] ACPI NVS
[ 0.000000] reserve setup_data: [mem 0x0000000000090000-0x000000000009ffff] usable
[ 0.000000] reserve setup_data: [mem 0x0000000000100000-0x000000003700701f] usable
[ 0.000000] reserve setup_data: [mem 0x0000000037007020-0x000000003701f65f] usable
[ 0.000000] reserve setup_data: [mem 0x000000003701f660-0x000000003702001f] usable
[ 0.000000] reserve setup_data: [mem 0x0000000037020020-0x000000003702805f] usable
[ 0.000000] reserve setup_data: [mem 0x0000000037028060-0x000000003702901f] usable
[ 0.000000] reserve setup_data: [mem 0x0000000037029020-0x000000003705ac5f] usable
[ 0.000000] reserve setup_data: [mem 0x000000003705ac60-0x000000003705b01f] usable
[ 0.000000] reserve setup_data: [mem 0x000000003705b020-0x000000003708cc5f] usable
[ 0.000000] reserve setup_data: [mem 0x000000003708cc60-0x000000004f882fff] usable
[ 0.000000] reserve setup_data: [mem 0x000000004f883000-0x000000005788bfff] reserved
[ 0.000000] reserve setup_data: [mem 0x000000005788c000-0x000000006cacefff] usable
[ 0.000000] reserve setup_data: [mem 0x000000006cacf000-0x000000006efcefff] reserved
[ 0.000000] reserve setup_data: [mem 0x000000006efcf000-0x000000006fdfefff] ACPI NVS
[ 0.000000] reserve setup_data: [mem 0x000000006fdff000-0x000000006fffefff] ACPI data
[ 0.000000] reserve setup_data: [mem 0x000000006ffff000-0x000000006fffffff] usable
[ 0.000000] reserve setup_data: [mem 0x0000000070000000-0x000000008fffffff] reserved
[ 0.000000] reserve setup_data: [mem 0x00000000fec10000-0x00000000fec10fff] reserved
[ 0.000000] reserve setup_data: [mem 0x00000000fed80000-0x00000000fed80fff] reserved
[ 0.000000] reserve setup_data: [mem 0x0000000100000000-0x000000107f37ffff] usable
[ 0.000000] reserve setup_data: [mem 0x000000107f380000-0x000000107fffffff] reserved
[ 0.000000] reserve setup_data: [mem 0x0000001080000000-0x000000207ff7ffff] usable
[ 0.000000] reserve setup_data: [mem 0x000000207ff80000-0x000000207fffffff] reserved
[ 0.000000] reserve setup_data: [mem 0x0000002080000000-0x000000307ff7ffff] usable
[ 0.000000] reserve setup_data: [mem 0x000000307ff80000-0x000000307fffffff] reserved
[ 0.000000] reserve setup_data: [mem 0x0000003080000000-0x000000407ff7ffff] usable
[ 0.000000] reserve setup_data: [mem 0x000000407ff80000-0x000000407fffffff] reserved
[ 0.000000] efi: EFI v2.50 by Dell Inc.
[ 0.000000] efi: ACPI=0x6fffe000 ACPI 2.0=0x6fffe014 SMBIOS=0x6eab5000 SMBIOS 3.0=0x6eab3000
[ 0.000000] efi: mem00: type=3, attr=0xf, range=[0x0000000000000000-0x0000000000001000) (0MB)
[ 0.000000] efi: mem01: type=2, attr=0xf, range=[0x0000000000001000-0x0000000000002000) (0MB)
[ 0.000000] efi: mem02: type=7, attr=0xf, range=[0x0000000000002000-0x0000000000010000) (0MB)
[ 0.000000] efi: mem03: type=3, attr=0xf, range=[0x0000000000010000-0x0000000000014000) (0MB)
[ 0.000000] efi: mem04: type=7, attr=0xf, range=[0x0000000000014000-0x0000000000063000) (0MB)
[ 0.000000] efi: mem05: type=3, attr=0xf, range=[0x0000000000063000-0x000000000008f000) (0MB)
[ 0.000000] efi: mem06: type=10, attr=0xf, range=[0x000000000008f000-0x0000000000090000) (0MB)
[ 0.000000] efi: mem07: type=3, attr=0xf, range=[0x0000000000090000-0x00000000000a0000) (0MB)
[ 0.000000] efi: mem08: type=4, attr=0xf, range=[0x0000000000100000-0x0000000000120000) (0MB)
[ 0.000000] efi: mem09: type=7, attr=0xf, range=[0x0000000000120000-0x0000000000c00000) (10MB)
[ 0.000000] efi: mem10: type=3, attr=0xf, range=[0x0000000000c00000-0x0000000001000000) (4MB)
[ 0.000000] efi: mem11: type=2, attr=0xf, range=[0x0000000001000000-0x000000000267b000) (22MB)
[ 0.000000] efi: mem12: type=7, attr=0xf, range=[0x000000000267b000-0x0000000004000000) (25MB)
[ 0.000000] efi: mem13: type=4, attr=0xf, range=[0x0000000004000000-0x000000000403b000) (0MB)
[ 0.000000] efi: mem14: type=7, attr=0xf, range=[0x000000000403b000-0x0000000037007000) (815MB)
[ 0.000000] efi: mem15: type=2, attr=0xf, range=[0x0000000037007000-0x000000004eee6000) (382MB)
[ 0.000000] efi: mem16: type=7, attr=0xf, range=[0x000000004eee6000-0x000000004eeea000) (0MB)
[ 0.000000] efi: mem17: type=2, attr=0xf, range=[0x000000004eeea000-0x000000004eeec000) (0MB)
[ 0.000000] efi: mem18: type=1, attr=0xf, range=[0x000000004eeec000-0x000000004f009000) (1MB)
[ 0.000000] efi: mem19: type=2, attr=0xf, range=[0x000000004f009000-0x000000004f128000) (1MB)
[ 0.000000] efi: mem20: type=1, attr=0xf, range=[0x000000004f128000-0x000000004f237000) (1MB)
[ 0.000000] efi: mem21: type=3, attr=0xf, range=[0x000000004f237000-0x000000004f883000) (6MB)
[ 0.000000] efi: mem22: type=0, attr=0xf, range=[0x000000004f883000-0x000000005788c000) (128MB)
[ 0.000000] efi: mem23: type=3, attr=0xf, range=[0x000000005788c000-0x000000005796e000) (0MB)
[ 0.000000] efi: mem24: type=4, attr=0xf, range=[0x000000005796e000-0x000000005b4cf000) (59MB)
[ 0.000000] efi: mem25: type=3, attr=0xf, range=[0x000000005b4cf000-0x000000005b8cf000) (4MB)
[ 0.000000] efi: mem26: type=7, attr=0xf, range=[0x000000005b8cf000-0x0000000067b64000) (194MB)
[ 0.000000] efi: mem27: type=4, attr=0xf, range=[0x0000000067b64000-0x0000000067b71000) (0MB)
[ 0.000000] efi: mem28: type=7, attr=0xf, range=[0x0000000067b71000-0x0000000067b75000) (0MB)
[ 0.000000] efi: mem29: type=4, attr=0xf, range=[0x0000000067b75000-0x0000000068189000) (6MB)
[ 0.000000] efi: mem30: type=7, attr=0xf, range=[0x0000000068189000-0x000000006818a000) (0MB)
[ 0.000000] efi: mem31: type=4, attr=0xf, range=[0x000000006818a000-0x000000006819e000) (0MB)
[ 0.000000] efi: mem32: type=7, attr=0xf, range=[0x000000006819e000-0x000000006819f000) (0MB)
[ 0.000000] efi: mem33: type=4, attr=0xf, range=[0x000000006819f000-0x00000000681a3000) (0MB)
[ 0.000000] efi: mem34: type=7, attr=0xf, range=[0x00000000681a3000-0x00000000681a4000) (0MB)
[ 0.000000] efi: mem35: type=4, attr=0xf, range=[0x00000000681a4000-0x00000000681b5000) (0MB)
[ 0.000000] efi: mem36: type=7, attr=0xf, range=[0x00000000681b5000-0x00000000681b6000) (0MB)
[ 0.000000] efi: mem37: type=4, attr=0xf, range=[0x00000000681b6000-0x00000000681b7000) (0MB)
[ 0.000000] efi: mem38: type=7, attr=0xf, range=[0x00000000681b7000-0x00000000681b8000) (0MB)
[ 0.000000] efi: mem39: type=4, attr=0xf, range=[0x00000000681b8000-0x00000000681c6000) (0MB)
[ 0.000000] efi: mem40: type=7, attr=0xf, range=[0x00000000681c6000-0x00000000681c7000) (0MB)
[ 0.000000] efi: mem41: type=4, attr=0xf, range=[0x00000000681c7000-0x00000000681d3000) (0MB)
[ 0.000000] efi: mem42: type=7, attr=0xf, range=[0x00000000681d3000-0x00000000681d4000) (0MB)
[ 0.000000] efi: mem43: type=4, attr=0xf, range=[0x00000000681d4000-0x00000000681d6000) (0MB)
[ 0.000000] efi: mem44: type=7, attr=0xf, range=[0x00000000681d6000-0x00000000681d7000) (0MB)
[ 0.000000] efi: mem45: type=4, attr=0xf, range=[0x00000000681d7000-0x00000000681e2000) (0MB)
[ 0.000000] efi: mem46: type=7, attr=0xf, range=[0x00000000681e2000-0x00000000681e3000) (0MB)
[ 0.000000] efi: mem47: type=4, attr=0xf, range=[0x00000000681e3000-0x00000000681e4000) (0MB)
[ 0.000000] efi: mem48: type=7, attr=0xf, range=[0x00000000681e4000-0x00000000681e5000) (0MB)
[ 0.000000] efi: mem49: type=4, attr=0xf, range=[0x00000000681e5000-0x00000000681ec000) (0MB)
[ 0.000000] efi: mem50: type=7, attr=0xf, range=[0x00000000681ec000-0x00000000681ed000) (0MB)
[ 0.000000] efi: mem51: type=4, attr=0xf, range=[0x00000000681ed000-0x00000000681fa000) (0MB)
[ 0.000000] efi: mem52: type=7, attr=0xf, range=[0x00000000681fa000-0x00000000681fb000) (0MB)
[ 0.000000] efi: mem53: type=4, attr=0xf, range=[0x00000000681fb000-0x0000000068203000) (0MB)
[ 0.000000] efi: mem54: type=7, attr=0xf, range=[0x0000000068203000-0x0000000068204000) (0MB)
[ 0.000000] efi: mem55: type=4, attr=0xf, range=[0x0000000068204000-0x0000000068207000) (0MB)
[ 0.000000] efi: mem56: type=7, attr=0xf, range=[0x0000000068207000-0x0000000068208000) (0MB)
[ 0.000000] efi: mem57: type=4, attr=0xf, range=[0x0000000068208000-0x0000000068212000) (0MB)
[ 0.000000] efi: mem58: type=7, attr=0xf, range=[0x0000000068212000-0x0000000068213000) (0MB)
[ 0.000000] efi: mem59: type=4, attr=0xf, range=[0x0000000068213000-0x0000000068533000) (3MB)
[ 0.000000] efi: mem60: type=7, attr=0xf, range=[0x0000000068533000-0x0000000068534000) (0MB)
[ 0.000000] efi: mem61: type=4, attr=0xf, range=[0x0000000068534000-0x0000000068550000) (0MB)
[ 0.000000] efi: mem62: type=7, attr=0xf, range=[0x0000000068550000-0x0000000068551000) (0MB)
[ 0.000000] efi: mem63: type=4, attr=0xf, range=[0x0000000068551000-0x0000000068562000) (0MB)
[ 0.000000] efi: mem64: type=7, attr=0xf, range=[0x0000000068562000-0x0000000068564000) (0MB)
[ 0.000000] efi: mem65: type=4, attr=0xf, range=[0x0000000068564000-0x0000000068572000) (0MB)
[ 0.000000] efi: mem66: type=7, attr=0xf, range=[0x0000000068572000-0x0000000068573000) (0MB)
[ 0.000000] efi: mem67: type=4, attr=0xf, range=[0x0000000068573000-0x0000000068598000) (0MB)
[ 0.000000] efi: mem68: type=7, attr=0xf, range=[0x0000000068598000-0x0000000068599000) (0MB)
[ 0.000000] efi: mem69: type=4, attr=0xf, range=[0x0000000068599000-0x00000000685ad000) (0MB)
[ 0.000000] efi: mem70: type=7, attr=0xf, range=[0x00000000685ad000-0x00000000685ae000) (0MB)
[ 0.000000] efi: mem71: type=4, attr=0xf, range=[0x00000000685ae000-0x000000006860b000) (0MB)
[ 0.000000] efi: mem72: type=7, attr=0xf, range=[0x000000006860b000-0x000000006860c000) (0MB)
[ 0.000000] efi: mem73: type=4, attr=0xf, range=[0x000000006860c000-0x0000000068613000) (0MB)
[ 0.000000] efi: mem74: type=7, attr=0xf, range=[0x0000000068613000-0x0000000068614000) (0MB)
[ 0.000000] efi: mem75: type=4, attr=0xf, range=[0x0000000068614000-0x0000000068618000) (0MB)
[ 0.000000] efi: mem76: type=7, attr=0xf, range=[0x0000000068618000-0x0000000068619000) (0MB)
[ 0.000000] efi: mem77: type=4, attr=0xf, range=[0x0000000068619000-0x000000006862a000) (0MB)
[ 0.000000] efi: mem78: type=7, attr=0xf, range=[0x000000006862a000-0x000000006862b000) (0MB)
[ 0.000000] efi: mem79: type=4, attr=0xf, range=[0x000000006862b000-0x0000000068644000) (0MB)
[ 0.000000] efi: mem80: type=7, attr=0xf, range=[0x0000000068644000-0x0000000068645000) (0MB)
[ 0.000000] efi: mem81: type=4, attr=0xf, range=[0x0000000068645000-0x000000006865f000) (0MB)
[ 0.000000] efi: mem82: type=7, attr=0xf, range=[0x000000006865f000-0x0000000068660000) (0MB)
[ 0.000000] efi: mem83: type=4, attr=0xf, range=[0x0000000068660000-0x00000000686bb000) (0MB)
[ 0.000000] efi: mem84: type=7, attr=0xf, range=[0x00000000686bb000-0x00000000686bc000) (0MB)
[ 0.000000] efi: mem85: type=4, attr=0xf, range=[0x00000000686bc000-0x00000000686c0000) (0MB)
[ 0.000000] efi: mem86: type=7, attr=0xf, range=[0x00000000686c0000-0x00000000686c1000) (0MB)
[ 0.000000] efi: mem87: type=4, attr=0xf, range=[0x00000000686c1000-0x00000000686c3000) (0MB)
[ 0.000000] efi: mem88: type=7, attr=0xf, range=[0x00000000686c3000-0x00000000686c4000) (0MB)
[ 0.000000] efi: mem89: type=4, attr=0xf, range=[0x00000000686c4000-0x00000000686c9000) (0MB)
[ 0.000000] efi: mem90: type=7, attr=0xf, range=[0x00000000686c9000-0x00000000686ca000) (0MB)
[ 0.000000] efi: mem91: type=4, attr=0xf, range=[0x00000000686ca000-0x00000000686cd000) (0MB)
[ 0.000000] efi: mem92: type=7, attr=0xf, range=[0x00000000686cd000-0x00000000686ce000) (0MB)
[ 0.000000] efi: mem93: type=4, attr=0xf, range=[0x00000000686ce000-0x00000000686d6000) (0MB)
[ 0.000000] efi: mem94: type=7, attr=0xf, range=[0x00000000686d6000-0x00000000686d7000) (0MB)
[ 0.000000] efi: mem95: type=4, attr=0xf, range=[0x00000000686d7000-0x0000000068708000) (0MB)
[ 0.000000] efi: mem96: type=7, attr=0xf, range=[0x0000000068708000-0x0000000068709000) (0MB)
[ 0.000000] efi: mem97: type=4, attr=0xf, range=[0x0000000068709000-0x000000006b8cf000) (49MB)
[ 0.000000] efi: mem98: type=7, attr=0xf, range=[0x000000006b8cf000-0x000000006b8d0000) (0MB)
[ 0.000000] efi: mem99: type=3, attr=0xf, range=[0x000000006b8d0000-0x000000006cacf000) (17MB)
[ 0.000000] efi: mem100: type=6, attr=0x800000000000000f, range=[0x000000006cacf000-0x000000006cbcf000) (1MB)
[ 0.000000] efi: mem101: type=5, attr=0x800000000000000f, range=[0x000000006cbcf000-0x000000006cdcf000) (2MB)
[ 0.000000] efi: mem102: type=0, attr=0xf, range=[0x000000006cdcf000-0x000000006efcf000) (34MB)
[ 0.000000] efi: mem103: type=10, attr=0xf, range=[0x000000006efcf000-0x000000006fdff000) (14MB)
[ 0.000000] efi: mem104: type=9, attr=0xf, range=[0x000000006fdff000-0x000000006ffff000) (2MB)
[ 0.000000] efi: mem105: type=4, attr=0xf, range=[0x000000006ffff000-0x0000000070000000) (0MB)
[ 0.000000] efi: mem106: type=7, attr=0xf, range=[0x0000000100000000-0x000000107f380000) (63475MB)
[ 0.000000] efi: mem107: type=7, attr=0xf, range=[0x0000001080000000-0x000000207ff80000) (65535MB)
[ 0.000000] efi: mem108: type=7, attr=0xf, range=[0x0000002080000000-0x000000307ff80000) (65535MB)
[ 0.000000] efi: mem109: type=7, attr=0xf, range=[0x0000003080000000-0x000000407ff80000) (65535MB)
[ 0.000000] efi: mem110: type=0, attr=0x9, range=[0x0000000070000000-0x0000000080000000) (256MB)
[ 0.000000] efi: mem111: type=11, attr=0x800000000000000f, range=[0x0000000080000000-0x0000000090000000) (256MB)
[ 0.000000] efi: mem112: type=11, attr=0x800000000000000f, range=[0x00000000fec10000-0x00000000fec11000) (0MB)
[ 0.000000] efi: mem113: type=11, attr=0x800000000000000f, range=[0x00000000fed80000-0x00000000fed81000) (0MB)
[ 0.000000] efi: mem114: type=0, attr=0x0, range=[0x000000107f380000-0x0000001080000000) (12MB)
[ 0.000000] efi: mem115: type=0, attr=0x0, range=[0x000000207ff80000-0x0000002080000000) (0MB)
[ 0.000000] efi: mem116: type=0, attr=0x0, range=[0x000000307ff80000-0x0000003080000000) (0MB)
[ 0.000000] efi: mem117: type=0, attr=0x0, range=[0x000000407ff80000-0x0000004080000000) (0MB)
[ 0.000000] SMBIOS 3.2.0 present.
[ 0.000000] DMI: Dell Inc. PowerEdge R6415/065PKD, BIOS 1.10.6 08/15/2019
[ 0.000000] e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
[ 0.000000] e820: remove [mem 0x000a0000-0x000fffff] usable
[ 0.000000] e820: last_pfn = 0x407ff80 max_arch_pfn = 0x400000000
[ 0.000000] MTRR default type: uncachable
[ 0.000000] MTRR fixed ranges enabled:
[ 0.000000] 00000-9FFFF write-back
[ 0.000000] A0000-FFFFF uncachable
[ 0.000000] MTRR variable ranges enabled:
[ 0.000000] 0 base 0000FF000000 mask FFFFFF000000 write-protect
[ 0.000000] 1 base 000000000000 mask FFFF80000000 write-back
[ 0.000000] 2 base 000070000000 mask FFFFF0000000 uncachable
[ 0.000000] 3 disabled
[ 0.000000] 4 disabled
[ 0.000000] 5 disabled
[ 0.000000] 6 disabled
[ 0.000000] 7 disabled
[ 0.000000] TOM2: 0000004080000000 aka 264192M
[ 0.000000] PAT configuration [0-7]: WB WC UC- UC WB WP UC- UC
[ 0.000000] e820: last_pfn = 0x70000 max_arch_pfn = 0x400000000
[ 0.000000] Base memory trampoline at [ffff9e4d80099000] 99000 size 24576
[ 0.000000] Using GB pages for direct mapping
[ 0.000000] BRK [0x2744c53000, 0x2744c53fff] PGTABLE
[ 0.000000] BRK [0x2744c54000, 0x2744c54fff] PGTABLE
[ 0.000000] BRK [0x2744c55000, 0x2744c55fff] PGTABLE
[ 0.000000] BRK [0x2744c56000, 0x2744c56fff] PGTABLE
[ 0.000000] BRK [0x2744c57000, 0x2744c57fff] PGTABLE
[ 0.000000] BRK [0x2744c58000, 0x2744c58fff] PGTABLE
[ 0.000000] BRK [0x2744c59000, 0x2744c59fff] PGTABLE
[ 0.000000] BRK [0x2744c5a000, 0x2744c5afff] PGTABLE
[ 0.000000] BRK [0x2744c5b000, 0x2744c5bfff] PGTABLE
[ 0.000000] BRK [0x2744c5c000, 0x2744c5cfff] PGTABLE
[ 0.000000] BRK [0x2744c5d000, 0x2744c5dfff] PGTABLE
[ 0.000000] BRK [0x2744c5e000, 0x2744c5efff] PGTABLE
[ 0.000000] RAMDISK: [mem 0x3708d000-0x383d1fff]
[ 0.000000] Early table checksum verification disabled
[ 0.000000] ACPI: RSDP 000000006fffe014 00024 (v02 DELL )
[ 0.000000] ACPI: XSDT 000000006fffd0e8 000AC (v01 DELL PE_SC3 00000002 DELL 00000001)
[ 0.000000] ACPI: FACP 000000006fff0000 00114 (v06 DELL PE_SC3 00000002 DELL 00000001)
[ 0.000000] ACPI: DSDT 000000006ffdc000 1038C (v02 DELL PE_SC3 00000002 DELL 00000001)
[ 0.000000] ACPI: FACS 000000006fdd3000 00040
[ 0.000000] ACPI: SSDT 000000006fffc000 000D2 (v02 DELL PE_SC3 00000002 MSFT 04000000)
[ 0.000000] ACPI: BERT 000000006fffb000 00030 (v01 DELL BERT 00000001 DELL 00000001)
[ 0.000000] ACPI: HEST 000000006fffa000 006DC (v01 DELL HEST 00000001 DELL 00000001)
[ 0.000000] ACPI: SSDT 000000006fff9000 00294 (v01 DELL PE_SC3 00000001 AMD 00000001)
[ 0.000000] ACPI: SRAT 000000006fff8000 00420 (v03 DELL PE_SC3 00000001 AMD 00000001)
[ 0.000000] ACPI: MSCT 000000006fff7000 0004E (v01 DELL PE_SC3 00000000 AMD 00000001)
[ 0.000000] ACPI: SLIT 000000006fff6000 0003C (v01 DELL PE_SC3 00000001 AMD 00000001)
[ 0.000000] ACPI: CRAT 000000006fff3000 02DC0 (v01 DELL PE_SC3 00000001 AMD 00000001)
[ 0.000000] ACPI: EINJ 000000006fff2000 00150 (v01 DELL PE_SC3 00000001 AMD 00000001)
[ 0.000000] ACPI: SLIC 000000006fff1000 00024 (v01 DELL PE_SC3 00000002 DELL 00000001)
[ 0.000000] ACPI: HPET 000000006ffef000 00038 (v01 DELL PE_SC3 00000002 DELL 00000001)
[ 0.000000] ACPI: APIC 000000006ffee000 004B2 (v03 DELL PE_SC3 00000002 DELL 00000001)
[ 0.000000] ACPI: MCFG 000000006ffed000 0003C (v01 DELL PE_SC3 00000002 DELL 00000001)
[ 0.000000] ACPI: SSDT 000000006ffdb000 00629 (v02 DELL xhc_port 00000001 INTL 20170119)
[ 0.000000] ACPI: IVRS 000000006ffda000 00210 (v02 DELL PE_SC3 00000001 AMD 00000000)
[ 0.000000] ACPI: SSDT 000000006ffd8000 01658 (v01 AMD CPMCMN 00000001 INTL 20170119)
[ 0.000000] ACPI: Local APIC address 0xfee00000
[ 0.000000] SRAT: PXM 0 -> APIC 0x00 -> Node 0
[ 0.000000] SRAT: PXM 0 -> APIC 0x01 -> Node 0
[ 0.000000] SRAT: PXM 0 -> APIC 0x02 -> Node 0
[ 0.000000] SRAT: PXM 0 -> APIC 0x03 -> Node 0
[ 0.000000] SRAT: PXM 0 -> APIC 0x04 -> Node 0
[ 0.000000] SRAT: PXM 0 -> APIC 0x05 -> Node 0
[ 0.000000] SRAT: PXM 0 -> APIC 0x08 -> Node 0
[ 0.000000] SRAT: PXM 0 -> APIC 0x09 -> Node 0
[ 0.000000] SRAT: PXM 0 -> APIC 0x0a -> Node 0
[ 0.000000] SRAT: PXM 0 -> APIC 0x0b -> Node 0
[ 0.000000] SRAT: PXM 0 -> APIC 0x0c -> Node 0
[ 0.000000] SRAT: PXM 0 -> APIC 0x0d -> Node 0
[ 0.000000] SRAT: PXM 1 -> APIC 0x10 -> Node 1
[ 0.000000] SRAT: PXM 1 -> APIC 0x11 -> Node 1
[ 0.000000] SRAT: PXM 1 -> APIC 0x12 -> Node 1
[ 0.000000] SRAT: PXM 1 -> APIC 0x13 -> Node 1
[ 0.000000] SRAT: PXM 1 -> APIC 0x14 -> Node 1
[ 0.000000] SRAT: PXM 1 -> APIC 0x15 -> Node 1
[ 0.000000] SRAT: PXM 1 -> APIC 0x18 -> Node 1
[ 0.000000] SRAT: PXM 1 -> APIC 0x19 -> Node 1
[ 0.000000] SRAT: PXM 1 -> APIC 0x1a -> Node 1
[ 0.000000] SRAT: PXM 1 -> APIC 0x1b -> Node 1
[ 0.000000] SRAT: PXM 1 -> APIC 0x1c -> Node 1
[ 0.000000] SRAT: PXM 1 -> APIC 0x1d -> Node 1
[ 0.000000] SRAT: PXM 2 -> APIC 0x20 -> Node 2
[ 0.000000] SRAT: PXM 2 -> APIC 0x21 -> Node 2
[ 0.000000] SRAT: PXM 2 -> APIC 0x22 -> Node 2
[ 0.000000] SRAT: PXM 2 -> APIC 0x23 -> Node 2
[ 0.000000] SRAT: PXM 2 -> APIC 0x24 -> Node 2
[ 0.000000] SRAT: PXM 2 -> APIC 0x25 -> Node 2
[ 0.000000] SRAT: PXM 2 -> APIC 0x28 -> Node 2
[ 0.000000] SRAT: PXM 2 -> APIC 0x29 -> Node 2
[ 0.000000] SRAT: PXM 2 -> APIC 0x2a -> Node 2
[ 0.000000] SRAT: PXM 2 -> APIC 0x2b -> Node 2
[ 0.000000] SRAT: PXM 2 -> APIC 0x2c -> Node 2
[ 0.000000] SRAT: PXM 2 -> APIC 0x2d -> Node 2
[ 0.000000] SRAT: PXM 3 -> APIC 0x30 -> Node 3
[ 0.000000] SRAT: PXM 3 -> APIC 0x31 -> Node 3
[ 0.000000] SRAT: PXM 3 -> APIC 0x32 -> Node 3
[ 0.000000] SRAT: PXM 3 -> APIC 0x33 -> Node 3
[ 0.000000] SRAT: PXM 3 -> APIC 0x34 -> Node 3
[ 0.000000] SRAT: PXM 3 -> APIC 0x35 -> Node 3
[ 0.000000] SRAT: PXM 3 -> APIC 0x38 -> Node 3
[ 0.000000] SRAT: PXM 3 -> APIC 0x39 -> Node 3
[ 0.000000] SRAT: PXM 3 -> APIC 0x3a -> Node 3
[ 0.000000] SRAT: PXM 3 -> APIC 0x3b -> Node 3
[ 0.000000] SRAT: PXM 3 -> APIC 0x3c -> Node 3
[ 0.000000] SRAT: PXM 3 -> APIC 0x3d -> Node 3
[ 0.000000] SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
[ 0.000000] SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
[ 0.000000] SRAT: Node 0 PXM 0 [mem 0x100000000-0x107fffffff]
[ 0.000000] SRAT: Node 1 PXM 1 [mem 0x1080000000-0x207fffffff]
[ 0.000000] SRAT: Node 2 PXM 2 [mem 0x2080000000-0x307fffffff]
[ 0.000000] SRAT: Node 3 PXM 3 [mem 0x3080000000-0x407fffffff]
[ 0.000000] NUMA: Initialized distance table, cnt=4
[ 0.000000] NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff]
[ 0.000000] NUMA: Node 0 [mem 0x00000000-0x7fffffff] + [mem 0x100000000-0x107fffffff] -> [mem 0x00000000-0x107fffffff]
[ 0.000000] NODE_DATA(0) allocated [mem 0x107f359000-0x107f37ffff]
[ 0.000000] NODE_DATA(1) allocated [mem 0x207ff59000-0x207ff7ffff]
[ 0.000000] NODE_DATA(2) allocated [mem 0x307ff59000-0x307ff7ffff]
[ 0.000000] NODE_DATA(3) allocated [mem 0x407ff58000-0x407ff7efff]
[ 0.000000] Reserving 176MB of memory at 704MB for crashkernel (System RAM: 261692MB)
[ 0.000000] Zone ranges:
[ 0.000000] DMA [mem 0x00001000-0x00ffffff]
[ 0.000000] DMA32 [mem 0x01000000-0xffffffff]
[ 0.000000] Normal [mem 0x100000000-0x407ff7ffff]
[ 0.000000] Movable zone start for each node
[ 0.000000] Early memory node ranges
[ 0.000000] node 0: [mem 0x00001000-0x0008efff]
[ 0.000000] node 0: [mem 0x00090000-0x0009ffff]
[ 0.000000] node 0: [mem 0x00100000-0x4f882fff]
[ 0.000000] node 0: [mem 0x5788c000-0x6cacefff]
[ 0.000000] node 0: [mem 0x6ffff000-0x6fffffff]
[ 0.000000] node 0: [mem 0x100000000-0x107f37ffff]
[ 0.000000] node 1: [mem 0x1080000000-0x207ff7ffff]
[ 0.000000] node 2: [mem 0x2080000000-0x307ff7ffff]
[ 0.000000] node 3: [mem 0x3080000000-0x407ff7ffff]
[ 0.000000] Initmem setup node 0 [mem 0x00001000-0x107f37ffff]
[ 0.000000] On node 0 totalpages: 16661989
[ 0.000000] DMA zone: 64 pages used for memmap
[ 0.000000] DMA zone: 1126 pages reserved
[ 0.000000] DMA zone: 3998 pages, LIFO batch:0
[ 0.000000] DMA32 zone: 6380 pages used for memmap
[ 0.000000] DMA32 zone: 408263 pages, LIFO batch:31
[ 0.000000] Normal zone: 253902 pages used for memmap
[ 0.000000] Normal zone: 16249728 pages, LIFO batch:31
[ 0.000000] Initmem setup node 1 [mem 0x1080000000-0x207ff7ffff]
[ 0.000000] On node 1 totalpages: 16777088
[ 0.000000] Normal zone: 262142 pages used for memmap
[ 0.000000] Normal zone: 16777088 pages, LIFO batch:31
[ 0.000000] Initmem setup node 2 [mem 0x2080000000-0x307ff7ffff]
[ 0.000000] On node 2 totalpages: 16777088
[ 0.000000] Normal zone: 262142 pages used for memmap
[ 0.000000] Normal zone: 16777088 pages, LIFO batch:31
[ 0.000000] Initmem setup node 3 [mem 0x3080000000-0x407ff7ffff]
[ 0.000000] On node 3 totalpages: 16777088
[ 0.000000] Normal zone: 262142 pages used for memmap
[ 0.000000] Normal zone: 16777088 pages, LIFO batch:31
[ 0.000000] ACPI: PM-Timer IO Port: 0x408
[ 0.000000] ACPI: Local APIC address 0xfee00000
[ 0.000000] ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x01] lapic_id[0x10] enabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x02] lapic_id[0x20] enabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x03] lapic_id[0x30] enabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x04] lapic_id[0x08] enabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x05] lapic_id[0x18] enabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x06] lapic_id[0x28] enabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x07] lapic_id[0x38] enabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x08] lapic_id[0x02] enabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x09] lapic_id[0x12] enabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x0a] lapic_id[0x22] enabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x0b] lapic_id[0x32] enabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x0c] lapic_id[0x0a] enabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x0d] lapic_id[0x1a] enabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x0e] lapic_id[0x2a] enabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x0f] lapic_id[0x3a] enabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x10] lapic_id[0x04] enabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x11] lapic_id[0x14] enabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x12] lapic_id[0x24] enabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x13] lapic_id[0x34] enabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x14] lapic_id[0x0c] enabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x15] lapic_id[0x1c] enabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x16] lapic_id[0x2c] enabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x17] lapic_id[0x3c] enabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x18] lapic_id[0x01] enabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x19] lapic_id[0x11] enabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x1a] lapic_id[0x21] enabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x1b] lapic_id[0x31] enabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x1c] lapic_id[0x09] enabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x1d] lapic_id[0x19] enabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x1e] lapic_id[0x29] enabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x1f] lapic_id[0x39] enabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x20] lapic_id[0x03] enabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x21] lapic_id[0x13] enabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x22] lapic_id[0x23] enabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x23] lapic_id[0x33] enabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x24] lapic_id[0x0b] enabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x25] lapic_id[0x1b] enabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x26] lapic_id[0x2b] enabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x27] lapic_id[0x3b] enabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x28] lapic_id[0x05] enabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x29] lapic_id[0x15] enabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x2a] lapic_id[0x25] enabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x2b] lapic_id[0x35] enabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x2c] lapic_id[0x0d] enabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x2d] lapic_id[0x1d] enabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x2e] lapic_id[0x2d] enabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x2f] lapic_id[0x3d] enabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x30] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x31] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x32] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x33] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x34] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x35] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x36] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x37] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x38] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x39] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x3a] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x3b] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x3c] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x3d] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x3e] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x3f] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x40] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x41] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x42] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x43] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x44] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x45] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x46] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x47] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x48] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x49] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x4a] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x4b] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x4c] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x4d] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x4e] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x4f] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x50] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x51] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x52] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x53] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x54] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x55] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x56] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x57] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x58] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x59] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x5a] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x5b] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x5c] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x5d] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x5e] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x5f] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x60] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x61] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x62] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x63] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x64] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x65] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x66] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x67] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x68] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x69] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x6a] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x6b] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x6c] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x6d] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x6e] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x6f] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x70] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x71] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x72] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x73] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x74] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x75] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x76] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x77] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x78] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x79] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x7a] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x7b] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x7c] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x7d] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x7e] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x7f] lapic_id[0x00] disabled)
[ 0.000000] ACPI: LAPIC_NMI (acpi_id[0xff] high edge lint[0x1])
[ 0.000000] ACPI: IOAPIC (id[0x80] address[0xfec00000] gsi_base[0])
[ 0.000000] IOAPIC[0]: apic_id 128, version 33, address 0xfec00000, GSI 0-23
[ 0.000000] ACPI: IOAPIC (id[0x81] address[0xfd880000] gsi_base[24])
[ 0.000000] IOAPIC[1]: apic_id 129, version 33, address 0xfd880000, GSI 24-55
[ 0.000000] ACPI: IOAPIC (id[0x82] address[0xe0900000] gsi_base[56])
[ 0.000000] IOAPIC[2]: apic_id 130, version 33, address 0xe0900000, GSI 56-87
[ 0.000000] ACPI: IOAPIC (id[0x83] address[0xc5900000] gsi_base[88])
[ 0.000000] IOAPIC[3]: apic_id 131, version 33, address 0xc5900000, GSI 88-119
[ 0.000000] ACPI: IOAPIC (id[0x84] address[0xaa900000] gsi_base[120])
[ 0.000000] IOAPIC[4]: apic_id 132, version 33, address 0xaa900000, GSI 120-151
[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 low level)
[ 0.000000] ACPI: IRQ0 used by override.
[ 0.000000] ACPI: IRQ9 used by override.
[ 0.000000] Using ACPI (MADT) for SMP configuration information
[ 0.000000] ACPI: HPET id: 0x10228201 base: 0xfed00000
[ 0.000000] smpboot: Allowing 128 CPUs, 80 hotplug CPUs
[ 0.000000] PM: Registered nosave memory: [mem 0x0008f000-0x0008ffff]
[ 0.000000] PM: Registered nosave memory: [mem 0x000a0000-0x000fffff]
[ 0.000000] PM: Registered nosave memory: [mem 0x37007000-0x37007fff]
[ 0.000000] PM: Registered nosave memory: [mem 0x3701f000-0x3701ffff]
[ 0.000000] PM: Registered nosave memory: [mem 0x37020000-0x37020fff]
[ 0.000000] PM: Registered nosave memory: [mem 0x37028000-0x37028fff]
[ 0.000000] PM: Registered nosave memory: [mem 0x37029000-0x37029fff]
[ 0.000000] PM: Registered nosave memory: [mem 0x3705a000-0x3705afff]
[ 0.000000] PM: Registered nosave memory: [mem 0x3705b000-0x3705bfff]
[ 0.000000] PM: Registered nosave memory: [mem 0x3708c000-0x3708cfff]
[ 0.000000] PM: Registered nosave memory: [mem 0x4f883000-0x5788bfff]
[ 0.000000] PM: Registered nosave memory: [mem 0x6cacf000-0x6efcefff]
[ 0.000000] PM: Registered nosave memory: [mem 0x6efcf000-0x6fdfefff]
[ 0.000000] PM: Registered nosave memory: [mem 0x6fdff000-0x6fffefff]
[ 0.000000] PM: Registered nosave memory: [mem 0x70000000-0x8fffffff]
[ 0.000000] PM: Registered nosave memory: [mem 0x90000000-0xfec0ffff]
[ 0.000000] PM: Registered nosave memory: [mem 0xfec10000-0xfec10fff]
[ 0.000000] PM: Registered nosave memory: [mem 0xfec11000-0xfed7ffff]
[ 0.000000] PM: Registered nosave memory: [mem 0xfed80000-0xfed80fff]
[ 0.000000] PM: Registered nosave memory: [mem 0xfed81000-0xffffffff]
[ 0.000000] PM: Registered nosave memory: [mem 0x107f380000-0x107fffffff]
[ 0.000000] PM: Registered nosave memory: [mem 0x207ff80000-0x207fffffff]
[ 0.000000] PM: Registered nosave memory: [mem 0x307ff80000-0x307fffffff]
[ 0.000000] e820: [mem 0x90000000-0xfec0ffff] available for PCI devices
[ 0.000000] Booting paravirtualized kernel on bare hardware
[ 0.000000] setup_percpu: NR_CPUS:5120 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:4
[ 0.000000] PERCPU: Embedded 38 pages/cpu @ffff9e5dbee00000 s118784 r8192 d28672 u262144
[ 0.000000] pcpu-alloc: s118784 r8192 d28672 u262144 alloc=1*2097152
[ 0.000000] pcpu-alloc: [0] 000 004 008 012 016 020 024 028
[ 0.000000] pcpu-alloc: [0] 032 036 040 044 048 052 056 060
[ 0.000000] pcpu-alloc: [0] 064 068 072 076 080 084 088 092
[ 0.000000] pcpu-alloc: [0] 096 100 104 108 112 116 120 124
[ 0.000000] pcpu-alloc: [1] 001 005 009 013 017 021 025 029
[ 0.000000] pcpu-alloc: [1] 033 037 041 045 049 053 057 061
[ 0.000000] pcpu-alloc: [1] 065 069 073 077 081 085 089 093
[ 0.000000] pcpu-alloc: [1] 097 101 105 109 113 117 121 125
[ 0.000000] pcpu-alloc: [2] 002 006 010 014 018 022 026 030
[ 0.000000] pcpu-alloc: [2] 034 038 042 046 050 054 058 062
[ 0.000000] pcpu-alloc: [2] 066 070 074 078 082 086 090 094
[ 0.000000] pcpu-alloc: [2] 098 102 106 110 114 118 122 126
[ 0.000000] pcpu-alloc: [3] 003 007 011 015 019 023 027 031
[ 0.000000] pcpu-alloc: [3] 035 039 043 047 051 055 059 063
[ 0.000000] pcpu-alloc: [3] 067 071 075 079 083 087 091 095
[ 0.000000] pcpu-alloc: [3] 099 103 107 111 115 119 123 127
[ 0.000000] Built 4 zonelists in Zone order, mobility grouping on. Total pages: 65945355
[ 0.000000] Policy zone: Normal
[ 0.000000] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-3.10.0-957.27.2.el7_lustre.pl2.x86_64 root=UUID=c3a48ae6-4259-4cfd-bd4c-1e4ff227425e ro crashkernel=auto nomodeset console=ttyS0,115200 LANG=en_US.UTF-8
[ 0.000000] PID hash table entries: 4096 (order: 3, 32768 bytes)
[ 0.000000] x86/fpu: xstate_offset[2]: 0240, xstate_sizes[2]: 0100
[ 0.000000] xsave: enabled xstate_bv 0x7, cntxt size 0x340 using standard form
[ 0.000000] Memory: 9613424k/270532096k available (7676k kernel code, 2559084k absent, 4654536k reserved, 6045k data, 1876k init)
[ 0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=4
[ 0.000000] Hierarchical RCU implementation.
[ 0.000000] RCU restricting CPUs from NR_CPUS=5120 to nr_cpu_ids=128.
[ 0.000000] NR_IRQS:327936 nr_irqs:3624 0
[ 0.000000] Console: colour dummy device 80x25
[ 0.000000] console [ttyS0] enabled
[ 0.000000] allocated 1072693248 bytes of page_cgroup
[ 0.000000] please try 'cgroup_disable=memory' option if you don't want memory cgroups
[ 0.000000] Enabling automatic NUMA balancing. Configure with numa_balancing= or the kernel.numa_balancing sysctl
[ 0.000000] hpet clockevent registered
[ 0.000000] tsc: Fast TSC calibration using PIT
[ 0.000000] tsc: Detected 1996.203 MHz processor
[ 0.000057] Calibrating delay loop (skipped), value calculated using timer frequency.. 3992.40 BogoMIPS (lpj=1996203)
[ 0.010704] pid_max: default: 131072 minimum: 1024
[ 0.016180] Security Framework initialized
[ 0.020296] SELinux: Initializing.
[ 0.023855] SELinux: Starting in permissive mode
[ 0.023856] Yama: becoming mindful.
[ 0.044046] Dentry cache hash table entries: 33554432 (order: 16, 268435456 bytes)
[ 0.100042] Inode-cache hash table entries: 16777216 (order: 15, 134217728 bytes)
[ 0.127716] Mount-cache hash table entries: 524288 (order: 10, 4194304 bytes)
[ 0.135119] Mountpoint-cache hash table entries: 524288 (order: 10, 4194304 bytes)
[ 0.144271] Initializing cgroup subsys memory
[ 0.148671] Initializing cgroup subsys devices
[ 0.153128] Initializing cgroup subsys freezer
[ 0.157581] Initializing cgroup subsys net_cls
[ 0.162038] Initializing cgroup subsys blkio
[ 0.166320] Initializing cgroup subsys perf_event
[ 0.171042] Initializing cgroup subsys hugetlb
[ 0.175498] Initializing cgroup subsys pids
[ 0.179692] Initializing cgroup subsys net_prio
[ 0.184303] tseg: 0070000000
[ 0.189932] LVT offset 2 assigned for vector 0xf4
[ 0.194664] Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 512
[ 0.200681] Last level dTLB entries: 4KB 1536, 2MB 1536, 4MB 768
[ 0.206698] tlb_flushall_shift: 6
[ 0.210046] Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
[ 0.219619] FEATURE SPEC_CTRL Not Present
[ 0.223641] FEATURE IBPB_SUPPORT Present
[ 0.227577] Spectre V2 : Enabling Indirect Branch Prediction Barrier
[ 0.234013] Spectre V2 : Mitigation: Full retpoline
[ 0.239328] Freeing SMP alternatives: 28k freed
[ 0.245785] ACPI: Core revision 20130517
[ 0.254490] ACPI: All ACPI Tables successfully acquired
[ 0.264839] ftrace: allocating 29216 entries in 115 pages
[ 0.605823] Switched APIC routing to physical flat.
[ 0.612754] ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
[ 0.628764] smpboot: CPU0: AMD EPYC 7401P 24-Core Processor (fam: 17, model: 01, stepping: 02)
[ 0.715403] random: fast init done
[ 0.741404] APIC calibration not consistent with PM-Timer: 101ms instead of 100ms
[ 0.748878] APIC delta adjusted to PM-Timer: 623826 (636296)
[ 0.754570] Performance Events: Fam17h core perfctr, AMD PMU driver.
[ 0.761005] ... version: 0
[ 0.765016] ... bit width: 48
[ 0.769115] ... generic registers: 6
[ 0.773129] ... value mask: 0000ffffffffffff
[ 0.778443] ... max period: 00007fffffffffff
[ 0.783755] ... fixed-purpose events: 0
[ 0.787767] ... event mask: 000000000000003f
[ 0.796120] NMI watchdog: enabled on all CPUs, permanently consumes one hw-PMU counter.
[ 0.804205] smpboot: Booting Node 1, Processors #1 OK
[ 0.817407] smpboot: Booting Node 2, Processors #2 OK
[ 0.830618] smpboot: Booting Node 3, Processors #3 OK
[ 0.843810] smpboot: Booting Node 0, Processors #4 OK
[ 0.856991] smpboot: Booting Node 1, Processors #5 OK
[ 0.870181] smpboot: Booting Node 2, Processors #6 OK
[ 0.883361] smpboot: Booting Node 3, Processors #7 OK
[ 0.896545] smpboot: Booting Node 0, Processors #8 OK
[ 0.909940] smpboot: Booting Node 1, Processors #9 OK
[ 0.923135] smpboot: Booting Node 2, Processors #10 OK
[ 0.936412] smpboot: Booting Node 3, Processors #11 OK
[ 0.949684] smpboot: Booting Node 0, Processors #12 OK
[ 0.962954] smpboot: Booting Node 1, Processors #13 OK
[ 0.976227] smpboot: Booting Node 2, Processors #14 OK
[ 0.989506] smpboot: Booting Node 3, Processors #15 OK
[ 1.002778] smpboot: Booting Node 0, Processors #16 OK
[ 1.016156] smpboot: Booting Node 1, Processors #17 OK
[ 1.029435] smpboot: Booting Node 2, Processors #18 OK
[ 1.042716] smpboot: Booting Node 3, Processors #19 OK
[ 1.055977] smpboot: Booting Node 0, Processors #20 OK
[ 1.069234] smpboot: Booting Node 1, Processors #21 OK
[ 1.082509] smpboot: Booting Node 2, Processors #22 OK
[ 1.095792] smpboot: Booting Node 3, Processors #23 OK
[ 1.109057] smpboot: Booting Node 0, Processors #24 OK
[ 1.122807] smpboot: Booting Node 1, Processors #25 OK
[ 1.136053] smpboot: Booting Node 2, Processors #26 OK
[ 1.149297] smpboot: Booting Node 3, Processors #27 OK
[ 1.162524] smpboot: Booting Node 0, Processors #28 OK
[ 1.175761] smpboot: Booting Node 1, Processors #29 OK
[ 1.188996] smpboot: Booting Node 2, Processors #30 OK
[ 1.202228] smpboot: Booting Node 3, Processors #31 OK
[ 1.215471] smpboot: Booting Node 0, Processors #32 OK
[ 1.228809] smpboot: Booting Node 1, Processors #33 OK
[ 1.242051] smpboot: Booting Node 2, Processors #34 OK
[ 1.255303] smpboot: Booting Node 3, Processors #35 OK
[ 1.268536] smpboot: Booting Node 0, Processors #36 OK
[ 1.281774] smpboot: Booting Node 1, Processors #37 OK
[ 1.295017] smpboot: Booting Node 2, Processors #38 OK
[ 1.308258] smpboot: Booting Node 3, Processors #39 OK
[ 1.321482] smpboot: Booting Node 0, Processors #40 OK
[ 1.334824] smpboot: Booting Node 1, Processors #41 OK
[ 1.348170] smpboot: Booting Node 2, Processors #42 OK
[ 1.361413] smpboot: Booting Node 3, Processors #43 OK
[ 1.374753] smpboot: Booting Node 0, Processors #44 OK
[ 1.387987] smpboot: Booting Node 1, Processors #45 OK
[ 1.401322] smpboot: Booting Node 2, Processors #46 OK
[ 1.414565] smpboot: Booting Node 3, Processors #47
[ 1.427276] Brought up 48 CPUs
[ 1.430534] smpboot: Max logical packages: 3
[ 1.434809] smpboot: Total of 48 processors activated (191635.48 BogoMIPS)
[ 1.723365] node 0 initialised, 15462980 pages in 274ms
[ 1.731635] node 3 initialised, 15989250 pages in 278ms
[ 1.731681] node 1 initialised, 15989367 pages in 279ms
[ 1.737733] node 2 initialised, 15984664 pages in 285ms
[ 1.747876] devtmpfs: initialized
[ 1.773808] EVM: security.selinux
[ 1.777130] EVM: security.ima
[ 1.780104] EVM: security.capability
[ 1.783782] PM: Registering ACPI NVS region [mem 0x0008f000-0x0008ffff] (4096 bytes)
[ 1.791529] PM: Registering ACPI NVS region [mem 0x6efcf000-0x6fdfefff] (14876672 bytes)
[ 1.801187] atomic64 test passed for x86-64 platform with CX8 and with SSE
[ 1.808066] pinctrl core: initialized pinctrl subsystem
[ 1.813397] RTC time: 15:29:14, date: 12/10/19
[ 1.817994] NET: Registered protocol family 16
[ 1.822796] ACPI FADT declares the system doesn't support PCIe ASPM, so disable it
[ 1.830365] ACPI: bus type PCI registered
[ 1.834379] acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
[ 1.840963] PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0x80000000-0x8fffffff] (base 0x80000000)
[ 1.850265] PCI: MMCONFIG at [mem 0x80000000-0x8fffffff] reserved in E820
[ 1.857055] PCI: Using configuration type 1 for base access
[ 1.862640] PCI: Dell System detected, enabling pci=bfsort.
[ 1.878052] ACPI: Added _OSI(Module Device)
[ 1.882245] ACPI: Added _OSI(Processor Device)
[ 1.886690] ACPI: Added _OSI(3.0 _SCP Extensions)
[ 1.891394] ACPI: Added _OSI(Processor Aggregator Device)
[ 1.896797] ACPI: Added _OSI(Linux-Dell-Video)
[ 1.902056] ACPI: EC: Look up EC in DSDT
[ 1.903036] ACPI: Executed 2 blocks of module-level executable AML code
[ 1.915093] ACPI: Interpreter enabled
[ 1.918765] ACPI: (supports S0 S5)
[ 1.922171] ACPI: Using IOAPIC for interrupt routing
[ 1.927349] HEST: Table parsing has been initialized.
[ 1.932402] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
[ 1.941548] ACPI: Enabled 1 GPEs in block 00 to 1F
[ 1.953196] ACPI: PCI Interrupt Link [LNKA] (IRQs 4 5 7 10 11 14 15) *0
[ 1.960108] ACPI: PCI Interrupt Link [LNKB] (IRQs 4 5 7 10 11 14 15) *0
[ 1.967015] ACPI: PCI Interrupt Link [LNKC] (IRQs 4 5 7 10 11 14 15) *0
[ 1.973921] ACPI: PCI Interrupt Link [LNKD] (IRQs 4 5 7 10 11 14 15) *0
[ 1.980831] ACPI: PCI Interrupt Link [LNKE] (IRQs 4 5 7 10 11 14 15) *0
[ 1.987739] ACPI: PCI Interrupt Link [LNKF] (IRQs 4 5 7 10 11 14 15) *0
[ 1.994646] ACPI: PCI Interrupt Link [LNKG] (IRQs 4 5 7 10 11 14 15) *0
[ 2.001550] ACPI: PCI Interrupt Link [LNKH] (IRQs 4 5 7 10 11 14 15) *0
[ 2.008599] ACPI: PCI Root Bridge [PC00] (domain 0000 [bus 00-3f])
[ 2.014783] acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI]
[ 2.023000] acpi PNP0A08:00: PCIe AER handled by firmware
[ 2.028442] acpi PNP0A08:00: _OSC: platform does not support [SHPCHotplug]
[ 2.035390] acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability]
[ 2.043041] acpi PNP0A08:00: FADT indicates ASPM is unsupported, using BIOS configuration
[ 2.051503] PCI host bridge to bus 0000:00
[ 2.055600] pci_bus 0000:00: root bus resource [io 0x0000-0x03af window]
[ 2.062386] pci_bus 0000:00: root bus resource [io 0x03e0-0x0cf7 window]
[ 2.069172] pci_bus 0000:00: root bus resource [mem 0x000c0000-0x000c3fff window]
[ 2.076651] pci_bus 0000:00: root bus resource [mem 0x000c4000-0x000c7fff window]
[ 2.084130] pci_bus 0000:00: root bus resource [mem 0x000c8000-0x000cbfff window]
[ 2.091611] pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000cffff window]
[ 2.099090] pci_bus 0000:00: root bus resource [mem 0x000d0000-0x000d3fff window]
[ 2.106568] pci_bus 0000:00: root bus resource [mem 0x000d4000-0x000d7fff window]
[ 2.114049] pci_bus 0000:00: root bus resource [mem 0x000d8000-0x000dbfff window]
[ 2.121527] pci_bus 0000:00: root bus resource [mem 0x000dc000-0x000dffff window]
[ 2.129008] pci_bus 0000:00: root bus resource [mem 0x000e0000-0x000e3fff window]
[ 2.136487] pci_bus 0000:00: root bus resource [mem 0x000e4000-0x000e7fff window]
[ 2.143967] pci_bus 0000:00: root bus resource [mem 0x000e8000-0x000ebfff window]
[ 2.151447] pci_bus 0000:00: root bus resource [mem 0x000ec000-0x000effff window]
[ 2.158926] pci_bus 0000:00: root bus resource [mem 0x000f0000-0x000fffff window]
[ 2.166404] pci_bus 0000:00: root bus resource [io 0x0d00-0x3fff window]
[ 2.173192] pci_bus 0000:00: root bus resource [mem 0xe1000000-0xfebfffff window]
[ 2.180672] pci_bus 0000:00: root bus resource [mem 0x10000000000-0x2bf3fffffff window]
[ 2.188671] pci_bus 0000:00: root bus resource [bus 00-3f]
[ 2.194165] pci 0000:00:00.0: [1022:1450] type 00 class 0x060000
[ 2.194248] pci 0000:00:00.2: [1022:1451] type 00 class 0x080600
[ 2.194336] pci 0000:00:01.0: [1022:1452] type 00 class 0x060000
[ 2.194414] pci 0000:00:02.0: [1022:1452] type 00 class 0x060000
[ 2.194488] pci 0000:00:03.0: [1022:1452] type 00 class 0x060000
[ 2.194548] pci 0000:00:03.1: [1022:1453] type 01 class 0x060400
[ 2.194992] pci 0000:00:03.1: PME# supported from D0 D3hot D3cold
[ 2.195092] pci 0000:00:04.0: [1022:1452] type 00 class 0x060000
[ 2.195173] pci 0000:00:07.0: [1022:1452] type 00 class 0x060000
[ 2.195233] pci 0000:00:07.1: [1022:1454] type 01 class 0x060400
[ 2.195988] pci 0000:00:07.1: PME# supported from D0 D3hot D3cold
[ 2.196068] pci 0000:00:08.0: [1022:1452] type 00 class 0x060000
[ 2.196128] pci 0000:00:08.1: [1022:1454] type 01 class 0x060400
[ 2.196964] pci 0000:00:08.1: PME# supported from D0 D3hot D3cold
[ 2.197079] pci 0000:00:14.0: [1022:790b] type 00 class 0x0c0500
[ 2.197279] pci 0000:00:14.3: [1022:790e] type 00 class 0x060100
[ 2.197482] pci 0000:00:18.0: [1022:1460] type 00 class 0x060000
[ 2.197533] pci 0000:00:18.1: [1022:1461] type 00 class 0x060000
[ 2.197584] pci 0000:00:18.2: [1022:1462] type 00 class 0x060000
[ 2.197635] pci 0000:00:18.3: [1022:1463] type 00 class 0x060000
[ 2.197688] pci 0000:00:18.4: [1022:1464] type 00 class 0x060000
[ 2.197738] pci 0000:00:18.5: [1022:1465] type 00 class 0x060000
[ 2.197789] pci 0000:00:18.6: [1022:1466] type 00 class 0x060000
[ 2.197839] pci 0000:00:18.7: [1022:1467] type 00 class 0x060000
[ 2.197889] pci 0000:00:19.0: [1022:1460] type 00 class 0x060000
[ 2.197943] pci 0000:00:19.1: [1022:1461] type 00 class 0x060000
[ 2.197998] pci 0000:00:19.2: [1022:1462] type 00 class 0x060000
[ 2.198053] pci 0000:00:19.3: [1022:1463] type 00 class 0x060000
[ 2.198105] pci 0000:00:19.4: [1022:1464] type 00 class 0x060000
[ 2.198159] pci 0000:00:19.5: [1022:1465] type 00 class 0x060000
[ 2.198212] pci 0000:00:19.6: [1022:1466] type 00 class 0x060000
[ 2.198267] pci 0000:00:19.7: [1022:1467] type 00 class 0x060000
[ 2.198319] pci 0000:00:1a.0: [1022:1460] type 00 class 0x060000
[ 2.198376] pci 0000:00:1a.1: [1022:1461] type 00 class 0x060000
[ 2.198430] pci 0000:00:1a.2: [1022:1462] type 00 class 0x060000
[ 2.198484] pci 0000:00:1a.3: [1022:1463] type 00 class 0x060000
[ 2.198536] pci 0000:00:1a.4: [1022:1464] type 00 class 0x060000
[ 2.198589] pci 0000:00:1a.5: [1022:1465] type 00 class 0x060000
[ 2.198643] pci 0000:00:1a.6: [1022:1466] type 00 class 0x060000
[ 2.198699] pci 0000:00:1a.7: [1022:1467] type 00 class 0x060000
[ 2.198752] pci 0000:00:1b.0: [1022:1460] type 00 class 0x060000
[ 2.198806] pci 0000:00:1b.1: [1022:1461] type 00 class 0x060000
[ 2.198860] pci 0000:00:1b.2: [1022:1462] type 00 class 0x060000
[ 2.198915] pci 0000:00:1b.3: [1022:1463] type 00 class 0x060000
[ 2.198967] pci 0000:00:1b.4: [1022:1464] type 00 class 0x060000
[ 2.199020] pci 0000:00:1b.5: [1022:1465] type 00 class 0x060000
[ 2.199074] pci 0000:00:1b.6: [1022:1466] type 00 class 0x060000
[ 2.199127] pci 0000:00:1b.7: [1022:1467] type 00 class 0x060000
[ 2.200005] pci 0000:01:00.0: [15b3:101b] type 00 class 0x020700
[ 2.200150] pci 0000:01:00.0: reg 0x10: [mem 0xe2000000-0xe3ffffff 64bit pref]
[ 2.200384] pci 0000:01:00.0: reg 0x30: [mem 0xfff00000-0xffffffff pref]
[ 2.200792] pci 0000:01:00.0: PME# supported from D3cold
[ 2.201071] pci 0000:00:03.1: PCI bridge to [bus 01]
[ 2.206044] pci 0000:00:03.1: bridge window [mem 0xe2000000-0xe3ffffff 64bit pref]
[ 2.206120] pci 0000:02:00.0: [1022:145a] type 00 class 0x130000
[ 2.206218] pci 0000:02:00.2: [1022:1456] type 00 class 0x108000
[ 2.206236] pci 0000:02:00.2: reg 0x18: [mem 0xf7300000-0xf73fffff]
[ 2.206248] pci 0000:02:00.2: reg 0x24: [mem 0xf7400000-0xf7401fff]
[ 2.206325] pci 0000:02:00.3: [1022:145f] type 00 class 0x0c0330
[ 2.206338] pci 0000:02:00.3: reg 0x10: [mem 0xf7200000-0xf72fffff 64bit]
[ 2.206387] pci 0000:02:00.3: PME# supported from D0 D3hot D3cold
[ 2.206446] pci 0000:00:07.1: PCI bridge to [bus 02]
[ 2.211416] pci 0000:00:07.1: bridge window [mem 0xf7200000-0xf74fffff]
[ 2.212001] pci 0000:03:00.0: [1022:1455] type 00 class 0x130000
[ 2.212111] pci 0000:03:00.1: [1022:1468] type 00 class 0x108000
[ 2.212130] pci 0000:03:00.1: reg 0x18: [mem 0xf7000000-0xf70fffff]
[ 2.212143] pci 0000:03:00.1: reg 0x24: [mem 0xf7100000-0xf7101fff]
[ 2.212234] pci 0000:00:08.1: PCI bridge to [bus 03]
[ 2.217204] pci 0000:00:08.1: bridge window [mem 0xf7000000-0xf71fffff]
[ 2.217221] pci_bus 0000:00: on NUMA node 0
[ 2.217598] ACPI: PCI Root Bridge [PC01] (domain 0000 [bus 40-7f])
[ 2.223786] acpi PNP0A08:01: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI]
[ 2.232001] acpi PNP0A08:01: PCIe AER handled by firmware
[ 2.237446] acpi PNP0A08:01: _OSC: platform does not support [SHPCHotplug]
[ 2.244394] acpi PNP0A08:01: _OSC: OS now controls [PCIeHotplug PME PCIeCapability]
[ 2.252051] acpi PNP0A08:01: FADT indicates ASPM is unsupported, using BIOS configuration
[ 2.260467] PCI host bridge to bus 0000:40
[ 2.264569] pci_bus 0000:40: root bus resource [io 0x4000-0x7fff window]
[ 2.271353] pci_bus 0000:40: root bus resource [mem 0xc6000000-0xe0ffffff window]
[ 2.278833] pci_bus 0000:40: root bus resource [mem 0x2bf40000000-0x47e7fffffff window]
[ 2.286833] pci_bus 0000:40: root bus resource [bus 40-7f]
[ 2.292323] pci 0000:40:00.0: [1022:1450] type 00 class 0x060000
[ 2.292394] pci 0000:40:00.2: [1022:1451] type 00 class 0x080600
[ 2.292485] pci 0000:40:01.0: [1022:1452] type 00 class 0x060000
[ 2.292559] pci 0000:40:02.0: [1022:1452] type 00 class 0x060000
[ 2.292635] pci 0000:40:03.0: [1022:1452] type 00 class 0x060000
[ 2.292710] pci 0000:40:04.0: [1022:1452] type 00 class 0x060000
[ 2.292789] pci 0000:40:07.0: [1022:1452] type 00 class 0x060000
[ 2.292849] pci 0000:40:07.1: [1022:1454] type 01 class 0x060400
[ 2.292973] pci 0000:40:07.1: PME# supported from D0 D3hot D3cold
[ 2.293053] pci 0000:40:08.0: [1022:1452] type 00 class 0x060000
[ 2.293116] pci 0000:40:08.1: [1022:1454] type 01 class 0x060400
[ 2.293227] pci 0000:40:08.1: PME# supported from D0 D3hot D3cold
[ 2.293909] pci 0000:41:00.0: [1022:145a] type 00 class 0x130000
[ 2.294016] pci 0000:41:00.2: [1022:1456] type 00 class 0x108000
[ 2.294035] pci 0000:41:00.2: reg 0x18: [mem 0xdb300000-0xdb3fffff]
[ 2.294048] pci 0000:41:00.2: reg 0x24: [mem 0xdb400000-0xdb401fff]
[ 2.294131] pci 0000:41:00.3: [1022:145f] type 00 class 0x0c0330
[ 2.294144] pci 0000:41:00.3: reg 0x10: [mem 0xdb200000-0xdb2fffff 64bit]
[ 2.294199] pci 0000:41:00.3: PME# supported from D0 D3hot D3cold
[ 2.294260] pci 0000:40:07.1: PCI bridge to [bus 41]
[ 2.299229] pci 0000:40:07.1: bridge window [mem 0xdb200000-0xdb4fffff]
[ 2.299324] pci 0000:42:00.0: [1022:1455] type 00 class 0x130000
[ 2.299444] pci 0000:42:00.1: [1022:1468] type 00 class 0x108000
[ 2.299464] pci 0000:42:00.1: reg 0x18: [mem 0xdb000000-0xdb0fffff]
[ 2.299478] pci 0000:42:00.1: reg 0x24: [mem 0xdb100000-0xdb101fff]
[ 2.299576] pci 0000:40:08.1: PCI bridge to [bus 42]
[ 2.304543] pci 0000:40:08.1: bridge window [mem 0xdb000000-0xdb1fffff]
[ 2.304556] pci_bus 0000:40: on NUMA node 1
[ 2.304737] ACPI: PCI Root Bridge [PC02] (domain 0000 [bus 80-bf])
[ 2.310920] acpi PNP0A08:02: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI]
[ 2.319130] acpi PNP0A08:02: PCIe AER handled by firmware
[ 2.324571] acpi PNP0A08:02: _OSC: platform does not support [SHPCHotplug]
[ 2.331510] acpi PNP0A08:02: _OSC: OS now controls [PCIeHotplug PME PCIeCapability]
[ 2.339163] acpi PNP0A08:02: FADT indicates ASPM is unsupported, using BIOS configuration
[ 2.347604] PCI host bridge to bus 0000:80
[ 2.351704] pci_bus 0000:80: root bus resource [io 0x03b0-0x03df window]
[ 2.358489] pci_bus 0000:80: root bus resource [mem 0x000a0000-0x000bffff window]
[ 2.365970] pci_bus 0000:80: root bus resource [io 0x8000-0xbfff window]
[ 2.372756] pci_bus 0000:80: root bus resource [mem 0xab000000-0xc5ffffff window]
[ 2.380237] pci_bus 0000:80: root bus resource [mem 0x47e80000000-0x63dbfffffff window]
[ 2.388234] pci_bus 0000:80: root bus resource [bus 80-bf]
[ 2.393727] pci 0000:80:00.0: [1022:1450] type 00 class 0x060000
[ 2.393799] pci 0000:80:00.2: [1022:1451] type 00 class 0x080600
[ 2.393886] pci 0000:80:01.0: [1022:1452] type 00 class 0x060000
[ 2.393948] pci 0000:80:01.1: [1022:1453] type 01 class 0x060400
[ 2.394077] pci 0000:80:01.1: PME# supported from D0 D3hot D3cold
[ 2.394150] pci 0000:80:01.2: [1022:1453] type 01 class 0x060400
[ 2.394271] pci 0000:80:01.2: PME# supported from D0 D3hot D3cold
[ 2.394352] pci 0000:80:02.0: [1022:1452] type 00 class 0x060000
[ 2.394428] pci 0000:80:03.0: [1022:1452] type 00 class 0x060000
[ 2.394487] pci 0000:80:03.1: [1022:1453] type 01 class 0x060400
[ 2.394986] pci 0000:80:03.1: PME# supported from D0 D3hot D3cold
[ 2.395083] pci 0000:80:04.0: [1022:1452] type 00 class 0x060000
[ 2.395164] pci 0000:80:07.0: [1022:1452] type 00 class 0x060000
[ 2.395225] pci 0000:80:07.1: [1022:1454] type 01 class 0x060400
[ 2.395335] pci 0000:80:07.1: PME# supported from D0 D3hot D3cold
[ 2.395412] pci 0000:80:08.0: [1022:1452] type 00 class 0x060000
[ 2.395474] pci 0000:80:08.1: [1022:1454] type 01 class 0x060400
[ 2.396000] pci 0000:80:08.1: PME# supported from D0 D3hot D3cold
[ 2.396221] pci 0000:81:00.0: [14e4:165f] type 00 class 0x020000
[ 2.396247] pci 0000:81:00.0: reg 0x10: [mem 0xac230000-0xac23ffff 64bit pref]
[ 2.396262] pci 0000:81:00.0: reg 0x18: [mem 0xac240000-0xac24ffff 64bit pref]
[ 2.396277] pci 0000:81:00.0: reg 0x20: [mem 0xac250000-0xac25ffff 64bit pref]
[ 2.396287] pci 0000:81:00.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
[ 2.396364] pci 0000:81:00.0: PME# supported from D0 D3hot D3cold
[ 2.396457] pci 0000:81:00.1: [14e4:165f] type 00 class 0x020000
[ 2.396482] pci 0000:81:00.1: reg 0x10: [mem 0xac200000-0xac20ffff 64bit pref]
[ 2.396497] pci 0000:81:00.1: reg 0x18: [mem 0xac210000-0xac21ffff 64bit pref]
[ 2.396512] pci 0000:81:00.1: reg 0x20: [mem 0xac220000-0xac22ffff 64bit pref]
[ 2.396522] pci 0000:81:00.1: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
[ 2.396598] pci 0000:81:00.1: PME# supported from D0 D3hot D3cold
[ 2.396689] pci 0000:80:01.1: PCI bridge to [bus 81]
[ 2.401666] pci 0000:80:01.1: bridge window [mem 0xac200000-0xac2fffff 64bit pref]
[ 2.401980] pci 0000:82:00.0: [1556:be00] type 01 class 0x060400
[ 2.404675] pci 0000:80:01.2: PCI bridge to [bus 82-83]
[ 2.409907] pci 0000:80:01.2: bridge window [mem 0xc0000000-0xc08fffff]
[ 2.409911] pci 0000:80:01.2: bridge window [mem 0xab000000-0xabffffff 64bit pref]
[ 2.409958] pci 0000:83:00.0: [102b:0536] type 00 class 0x030000
[ 2.409977] pci 0000:83:00.0: reg 0x10: [mem 0xab000000-0xabffffff pref]
[ 2.409989] pci 0000:83:00.0: reg 0x14: [mem 0xc0808000-0xc080bfff]
[ 2.410000] pci 0000:83:00.0: reg 0x18: [mem 0xc0000000-0xc07fffff]
[ 2.410141] pci 0000:82:00.0: PCI bridge to [bus 83]
[ 2.415119] pci 0000:82:00.0: bridge window [mem 0xc0000000-0xc08fffff]
[ 2.415125] pci 0000:82:00.0: bridge window [mem 0xab000000-0xabffffff 64bit pref]
[ 2.415209] pci 0000:84:00.0: [1000:00d1] type 00 class 0x010700
[ 2.415232] pci 0000:84:00.0: reg 0x10: [mem 0xac000000-0xac0fffff 64bit pref]
[ 2.415242] pci 0000:84:00.0: reg 0x18: [mem 0xac100000-0xac1fffff 64bit pref]
[ 2.415249] pci 0000:84:00.0: reg 0x20: [mem 0xc0d00000-0xc0dfffff]
[ 2.415257] pci 0000:84:00.0: reg 0x24: [io 0x8000-0x80ff]
[ 2.415265] pci 0000:84:00.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
[ 2.415317] pci 0000:84:00.0: supports D1 D2
[ 2.417672] pci 0000:80:03.1: PCI bridge to [bus 84]
[ 2.422645] pci 0000:80:03.1: bridge window [io 0x8000-0x8fff]
[ 2.422647] pci 0000:80:03.1: bridge window [mem 0xc0d00000-0xc0dfffff]
[ 2.422651] pci 0000:80:03.1: bridge window [mem 0xac000000-0xac1fffff 64bit pref]
[ 2.423022] pci 0000:85:00.0: [1022:145a] type 00 class 0x130000
[ 2.423127] pci 0000:85:00.2: [1022:1456] type 00 class 0x108000
[ 2.423146] pci 0000:85:00.2: reg 0x18: [mem 0xc0b00000-0xc0bfffff]
[ 2.423159] pci 0000:85:00.2: reg 0x24: [mem 0xc0c00000-0xc0c01fff]
[ 2.423251] pci 0000:80:07.1: PCI bridge to [bus 85]
[ 2.428218] pci 0000:80:07.1: bridge window [mem 0xc0b00000-0xc0cfffff]
[ 2.428313] pci 0000:86:00.0: [1022:1455] type 00 class 0x130000
[ 2.428431] pci 0000:86:00.1: [1022:1468] type 00 class 0x108000
[ 2.428450] pci 0000:86:00.1: reg 0x18: [mem 0xc0900000-0xc09fffff]
[ 2.428465] pci 0000:86:00.1: reg 0x24: [mem 0xc0a00000-0xc0a01fff]
[ 2.428553] pci 0000:86:00.2: [1022:7901] type 00 class 0x010601
[ 2.428585] pci 0000:86:00.2: reg 0x24: [mem 0xc0a02000-0xc0a02fff]
[ 2.428624] pci 0000:86:00.2: PME# supported from D3hot D3cold
[ 2.428691] pci 0000:80:08.1: PCI bridge to [bus 86]
[ 2.433662] pci 0000:80:08.1: bridge window [mem 0xc0900000-0xc0afffff]
[ 2.433688] pci_bus 0000:80: on NUMA node 2
[ 2.433858] ACPI: PCI Root Bridge [PC03] (domain 0000 [bus c0-ff])
[ 2.440038] acpi PNP0A08:03: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI]
[ 2.448247] acpi PNP0A08:03: PCIe AER handled by firmware
[ 2.453684] acpi PNP0A08:03: _OSC: platform does not support [SHPCHotplug]
[ 2.460630] acpi PNP0A08:03: _OSC: OS now controls [PCIeHotplug PME PCIeCapability]
[ 2.468282] acpi PNP0A08:03: FADT
indicates ASPM is unsupported, using BIOS configuration [ 2.476606] acpi PNP0A08:03: host bridge window [mem 0x63dc0000000-0xffffffffffff window] ([0x80000000000-0xffffffffffff] ignored, not CPU addressable) [ 2.490244] PCI host bridge to bus 0000:c0 [ 2.494344] pci_bus 0000:c0: root bus resource [io 0xc000-0xffff window] [ 2.501131] pci_bus 0000:c0: root bus resource [mem 0x90000000-0xaaffffff window] [ 2.508611] pci_bus 0000:c0: root bus resource [mem 0x63dc0000000-0x7ffffffffff window] [ 2.516608] pci_bus 0000:c0: root bus resource [bus c0-ff] [ 2.522099] pci 0000:c0:00.0: [1022:1450] type 00 class 0x060000 [ 2.522168] pci 0000:c0:00.2: [1022:1451] type 00 class 0x080600 [ 2.522257] pci 0000:c0:01.0: [1022:1452] type 00 class 0x060000 [ 2.522320] pci 0000:c0:01.1: [1022:1453] type 01 class 0x060400 [ 2.522460] pci 0000:c0:01.1: PME# supported from D0 D3hot D3cold [ 2.522556] pci 0000:c0:02.0: [1022:1452] type 00 class 0x060000 [ 2.522632] pci 0000:c0:03.0: [1022:1452] type 00 class 0x060000 [ 2.522709] pci 0000:c0:04.0: [1022:1452] type 00 class 0x060000 [ 2.522787] pci 0000:c0:07.0: [1022:1452] type 00 class 0x060000 [ 2.522848] pci 0000:c0:07.1: [1022:1454] type 01 class 0x060400 [ 2.523258] pci 0000:c0:07.1: PME# supported from D0 D3hot D3cold [ 2.523334] pci 0000:c0:08.0: [1022:1452] type 00 class 0x060000 [ 2.523397] pci 0000:c0:08.1: [1022:1454] type 01 class 0x060400 [ 2.523509] pci 0000:c0:08.1: PME# supported from D0 D3hot D3cold [ 2.524198] pci 0000:c1:00.0: [1000:005f] type 00 class 0x010400 [ 2.524211] pci 0000:c1:00.0: reg 0x10: [io 0xc000-0xc0ff] [ 2.524222] pci 0000:c1:00.0: reg 0x14: [mem 0xa5500000-0xa550ffff 64bit] [ 2.524232] pci 0000:c1:00.0: reg 0x1c: [mem 0xa5400000-0xa54fffff 64bit] [ 2.524244] pci 0000:c1:00.0: reg 0x30: [mem 0xfff00000-0xffffffff pref] [ 2.524301] pci 0000:c1:00.0: supports D1 D2 [ 2.524351] pci 0000:c0:01.1: PCI bridge to [bus c1] [ 2.529317] pci 0000:c0:01.1: bridge window [io 0xc000-0xcfff] [ 2.529320] pci 0000:c0:01.1: bridge window [mem 0xa5400000-0xa55fffff] [ 2.529404] pci 0000:c2:00.0: [1022:145a] type 00 class 0x130000 [ 2.529509] pci 0000:c2:00.2: [1022:1456] type 00 class 0x108000 [ 2.529527] pci 0000:c2:00.2: reg 0x18: [mem 0xa5200000-0xa52fffff] [ 2.529541] pci 0000:c2:00.2: reg 0x24: [mem 0xa5300000-0xa5301fff] [ 2.529633] pci 0000:c0:07.1: PCI bridge to [bus c2] [ 2.534604] pci 0000:c0:07.1: bridge window [mem 0xa5200000-0xa53fffff] [ 2.534699] pci 0000:c3:00.0: [1022:1455] type 00 class 0x130000 [ 2.534815] pci 0000:c3:00.1: [1022:1468] type 00 class 0x108000 [ 2.534835] pci 0000:c3:00.1: reg 0x18: [mem 0xa5000000-0xa50fffff] [ 2.534849] pci 0000:c3:00.1: reg 0x24: [mem 0xa5100000-0xa5101fff] [ 2.534947] pci 0000:c0:08.1: PCI bridge to [bus c3] [ 2.539917] pci 0000:c0:08.1: bridge window [mem 0xa5000000-0xa51fffff] [ 2.539934] pci_bus 0000:c0: on NUMA node 3 [ 2.542095] vgaarb: device added: PCI:0000:83:00.0,decodes=io+mem,owns=io+mem,locks=none [ 2.550188] vgaarb: loaded [ 2.552897] vgaarb: bridge control possible 0000:83:00.0 [ 2.558320] SCSI subsystem initialized [ 2.562101] ACPI: bus type USB registered [ 2.566131] usbcore: registered new interface driver usbfs [ 2.571624] usbcore: registered new interface driver hub [ 2.577150] usbcore: registered new device driver usb [ 2.582516] EDAC MC: Ver: 3.0.0 [ 2.585921] PCI: Using ACPI for IRQ routing [ 2.609076] PCI: pci_cache_line_size set to 64 bytes [ 2.609231] e820: reserve RAM buffer [mem 0x0008f000-0x0008ffff] [ 2.609233] e820: reserve RAM buffer [mem 0x37007020-0x37ffffff] [ 
2.609234] e820: reserve RAM buffer [mem 0x37020020-0x37ffffff] [ 2.609236] e820: reserve RAM buffer [mem 0x37029020-0x37ffffff] [ 2.609237] e820: reserve RAM buffer [mem 0x3705b020-0x37ffffff] [ 2.609238] e820: reserve RAM buffer [mem 0x4f883000-0x4fffffff] [ 2.609240] e820: reserve RAM buffer [mem 0x6cacf000-0x6fffffff] [ 2.609241] e820: reserve RAM buffer [mem 0x107f380000-0x107fffffff] [ 2.609242] e820: reserve RAM buffer [mem 0x207ff80000-0x207fffffff] [ 2.609244] e820: reserve RAM buffer [mem 0x307ff80000-0x307fffffff] [ 2.609245] e820: reserve RAM buffer [mem 0x407ff80000-0x407fffffff] [ 2.609496] NetLabel: Initializing [ 2.612897] NetLabel: domain hash size = 128 [ 2.617259] NetLabel: protocols = UNLABELED CIPSOv4 [ 2.622239] NetLabel: unlabeled traffic allowed by default [ 2.628013] hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 [ 2.632995] hpet0: 3 comparators, 32-bit 14.318180 MHz counter [ 2.641009] Switched to clocksource hpet [ 2.649658] pnp: PnP ACPI init [ 2.652741] ACPI: bus type PNP registered [ 2.656937] system 00:00: [mem 0x80000000-0x8fffffff] has been reserved [ 2.663558] system 00:00: Plug and Play ACPI device, IDs PNP0c01 (active) [ 2.663613] pnp 00:01: Plug and Play ACPI device, IDs PNP0b00 (active) [ 2.663811] pnp 00:02: Plug and Play ACPI device, IDs PNP0501 (active) [ 2.664000] pnp 00:03: Plug and Play ACPI device, IDs PNP0501 (active) [ 2.664143] pnp: PnP ACPI: found 4 devices [ 2.668253] ACPI: bus type PNP unregistered [ 2.679750] pci 0000:01:00.0: can't claim BAR 6 [mem 0xfff00000-0xffffffff pref]: no compatible bridge window [ 2.689673] pci 0000:81:00.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window [ 2.699584] pci 0000:81:00.1: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window [ 2.709499] pci 0000:84:00.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window [ 2.719415] pci 0000:c1:00.0: can't claim BAR 6 [mem 0xfff00000-0xffffffff pref]: no compatible bridge window [ 2.729355] pci 0000:00:03.1: BAR 14: assigned [mem 0xe1000000-0xe10fffff] [ 2.736238] pci 0000:01:00.0: BAR 6: assigned [mem 0xe1000000-0xe10fffff pref] [ 2.743467] pci 0000:00:03.1: PCI bridge to [bus 01] [ 2.748442] pci 0000:00:03.1: bridge window [mem 0xe1000000-0xe10fffff] [ 2.755237] pci 0000:00:03.1: bridge window [mem 0xe2000000-0xe3ffffff 64bit pref] [ 2.762985] pci 0000:00:07.1: PCI bridge to [bus 02] [ 2.767958] pci 0000:00:07.1: bridge window [mem 0xf7200000-0xf74fffff] [ 2.774755] pci 0000:00:08.1: PCI bridge to [bus 03] [ 2.779728] pci 0000:00:08.1: bridge window [mem 0xf7000000-0xf71fffff] [ 2.786528] pci_bus 0000:00: resource 4 [io 0x0000-0x03af window] [ 2.786529] pci_bus 0000:00: resource 5 [io 0x03e0-0x0cf7 window] [ 2.786531] pci_bus 0000:00: resource 6 [mem 0x000c0000-0x000c3fff window] [ 2.786533] pci_bus 0000:00: resource 7 [mem 0x000c4000-0x000c7fff window] [ 2.786535] pci_bus 0000:00: resource 8 [mem 0x000c8000-0x000cbfff window] [ 2.786537] pci_bus 0000:00: resource 9 [mem 0x000cc000-0x000cffff window] [ 2.786538] pci_bus 0000:00: resource 10 [mem 0x000d0000-0x000d3fff window] [ 2.786540] pci_bus 0000:00: resource 11 [mem 0x000d4000-0x000d7fff window] [ 2.786542] pci_bus 0000:00: resource 12 [mem 0x000d8000-0x000dbfff window] [ 2.786544] pci_bus 0000:00: resource 13 [mem 0x000dc000-0x000dffff window] [ 2.786545] pci_bus 0000:00: resource 14 [mem 0x000e0000-0x000e3fff window] [ 2.786547] pci_bus 0000:00: resource 15 [mem 0x000e4000-0x000e7fff window] [ 2.786549] pci_bus 0000:00: resource 
16 [mem 0x000e8000-0x000ebfff window] [ 2.786550] pci_bus 0000:00: resource 17 [mem 0x000ec000-0x000effff window] [ 2.786552] pci_bus 0000:00: resource 18 [mem 0x000f0000-0x000fffff window] [ 2.786554] pci_bus 0000:00: resource 19 [io 0x0d00-0x3fff window] [ 2.786555] pci_bus 0000:00: resource 20 [mem 0xe1000000-0xfebfffff window] [ 2.786557] pci_bus 0000:00: resource 21 [mem 0x10000000000-0x2bf3fffffff window] [ 2.786559] pci_bus 0000:01: resource 1 [mem 0xe1000000-0xe10fffff] [ 2.786561] pci_bus 0000:01: resource 2 [mem 0xe2000000-0xe3ffffff 64bit pref] [ 2.786562] pci_bus 0000:02: resource 1 [mem 0xf7200000-0xf74fffff] [ 2.786564] pci_bus 0000:03: resource 1 [mem 0xf7000000-0xf71fffff] [ 2.786576] pci 0000:40:07.1: PCI bridge to [bus 41] [ 2.791550] pci 0000:40:07.1: bridge window [mem 0xdb200000-0xdb4fffff] [ 2.798347] pci 0000:40:08.1: PCI bridge to [bus 42] [ 2.803319] pci 0000:40:08.1: bridge window [mem 0xdb000000-0xdb1fffff] [ 2.810117] pci_bus 0000:40: resource 4 [io 0x4000-0x7fff window] [ 2.810119] pci_bus 0000:40: resource 5 [mem 0xc6000000-0xe0ffffff window] [ 2.810120] pci_bus 0000:40: resource 6 [mem 0x2bf40000000-0x47e7fffffff window] [ 2.810122] pci_bus 0000:41: resource 1 [mem 0xdb200000-0xdb4fffff] [ 2.810124] pci_bus 0000:42: resource 1 [mem 0xdb000000-0xdb1fffff] [ 2.810157] pci 0000:80:01.1: BAR 14: assigned [mem 0xac300000-0xac3fffff] [ 2.817039] pci 0000:81:00.0: BAR 6: assigned [mem 0xac300000-0xac33ffff pref] [ 2.824268] pci 0000:81:00.1: BAR 6: assigned [mem 0xac340000-0xac37ffff pref] [ 2.831495] pci 0000:80:01.1: PCI bridge to [bus 81] [ 2.836470] pci 0000:80:01.1: bridge window [mem 0xac300000-0xac3fffff] [ 2.843266] pci 0000:80:01.1: bridge window [mem 0xac200000-0xac2fffff 64bit pref] [ 2.851015] pci 0000:82:00.0: PCI bridge to [bus 83] [ 2.855991] pci 0000:82:00.0: bridge window [mem 0xc0000000-0xc08fffff] [ 2.862784] pci 0000:82:00.0: bridge window [mem 0xab000000-0xabffffff 64bit pref] [ 2.870537] pci 0000:80:01.2: PCI bridge to [bus 82-83] [ 2.875776] pci 0000:80:01.2: bridge window [mem 0xc0000000-0xc08fffff] [ 2.882569] pci 0000:80:01.2: bridge window [mem 0xab000000-0xabffffff 64bit pref] [ 2.890322] pci 0000:84:00.0: BAR 6: no space for [mem size 0x00040000 pref] [ 2.897381] pci 0000:84:00.0: BAR 6: failed to assign [mem size 0x00040000 pref] [ 2.904784] pci 0000:80:03.1: PCI bridge to [bus 84] [ 2.909758] pci 0000:80:03.1: bridge window [io 0x8000-0x8fff] [ 2.915860] pci 0000:80:03.1: bridge window [mem 0xc0d00000-0xc0dfffff] [ 2.922655] pci 0000:80:03.1: bridge window [mem 0xac000000-0xac1fffff 64bit pref] [ 2.930404] pci 0000:80:07.1: PCI bridge to [bus 85] [ 2.935377] pci 0000:80:07.1: bridge window [mem 0xc0b00000-0xc0cfffff] [ 2.942177] pci 0000:80:08.1: PCI bridge to [bus 86] [ 2.947157] pci 0000:80:08.1: bridge window [mem 0xc0900000-0xc0afffff] [ 2.953954] pci_bus 0000:80: resource 4 [io 0x03b0-0x03df window] [ 2.953956] pci_bus 0000:80: resource 5 [mem 0x000a0000-0x000bffff window] [ 2.953958] pci_bus 0000:80: resource 6 [io 0x8000-0xbfff window] [ 2.953960] pci_bus 0000:80: resource 7 [mem 0xab000000-0xc5ffffff window] [ 2.953962] pci_bus 0000:80: resource 8 [mem 0x47e80000000-0x63dbfffffff window] [ 2.953964] pci_bus 0000:81: resource 1 [mem 0xac300000-0xac3fffff] [ 2.953965] pci_bus 0000:81: resource 2 [mem 0xac200000-0xac2fffff 64bit pref] [ 2.953967] pci_bus 0000:82: resource 1 [mem 0xc0000000-0xc08fffff] [ 2.953969] pci_bus 0000:82: resource 2 [mem 0xab000000-0xabffffff 64bit pref] [ 2.953971] pci_bus 0000:83: resource 1 [mem 
0xc0000000-0xc08fffff] [ 2.953972] pci_bus 0000:83: resource 2 [mem 0xab000000-0xabffffff 64bit pref] [ 2.953974] pci_bus 0000:84: resource 0 [io 0x8000-0x8fff] [ 2.953976] pci_bus 0000:84: resource 1 [mem 0xc0d00000-0xc0dfffff] [ 2.953977] pci_bus 0000:84: resource 2 [mem 0xac000000-0xac1fffff 64bit pref] [ 2.953979] pci_bus 0000:85: resource 1 [mem 0xc0b00000-0xc0cfffff] [ 2.953981] pci_bus 0000:86: resource 1 [mem 0xc0900000-0xc0afffff] [ 2.954006] pci 0000:c1:00.0: BAR 6: no space for [mem size 0x00100000 pref] [ 2.961056] pci 0000:c1:00.0: BAR 6: failed to assign [mem size 0x00100000 pref] [ 2.968458] pci 0000:c0:01.1: PCI bridge to [bus c1] [ 2.973432] pci 0000:c0:01.1: bridge window [io 0xc000-0xcfff] [ 2.979535] pci 0000:c0:01.1: bridge window [mem 0xa5400000-0xa55fffff] [ 2.986333] pci 0000:c0:07.1: PCI bridge to [bus c2] [ 2.991306] pci 0000:c0:07.1: bridge window [mem 0xa5200000-0xa53fffff] [ 2.998104] pci 0000:c0:08.1: PCI bridge to [bus c3] [ 3.003083] pci 0000:c0:08.1: bridge window [mem 0xa5000000-0xa51fffff] [ 3.009881] pci_bus 0000:c0: resource 4 [io 0xc000-0xffff window] [ 3.009883] pci_bus 0000:c0: resource 5 [mem 0x90000000-0xaaffffff window] [ 3.009885] pci_bus 0000:c0: resource 6 [mem 0x63dc0000000-0x7ffffffffff window] [ 3.009886] pci_bus 0000:c1: resource 0 [io 0xc000-0xcfff] [ 3.009888] pci_bus 0000:c1: resource 1 [mem 0xa5400000-0xa55fffff] [ 3.009890] pci_bus 0000:c2: resource 1 [mem 0xa5200000-0xa53fffff] [ 3.009892] pci_bus 0000:c3: resource 1 [mem 0xa5000000-0xa51fffff] [ 3.009977] NET: Registered protocol family 2 [ 3.015011] TCP established hash table entries: 524288 (order: 10, 4194304 bytes) [ 3.023144] TCP bind hash table entries: 65536 (order: 8, 1048576 bytes) [ 3.029969] TCP: Hash tables configured (established 524288 bind 65536) [ 3.036623] TCP: reno registered [ 3.039974] UDP hash table entries: 65536 (order: 9, 2097152 bytes) [ 3.046576] UDP-Lite hash table entries: 65536 (order: 9, 2097152 bytes) [ 3.053783] NET: Registered protocol family 1 [ 3.058601] pci 0000:83:00.0: Boot video device [ 3.058639] PCI: CLS 64 bytes, default 64 [ 3.058698] Unpacking initramfs... 
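
The window trim logged for PNP0A08:03 a little earlier is plain address arithmetic: the ignored range begins at 0x80000000000 (bit 43), so the usable windows top out at 0x7ffffffffff, exactly the upper bound seen in the 0000:c0 root bus resources. A minimal Python check (not from the log; the 43-bit physical-address limit is inferred from these values, not stated anywhere):

phys_bits = 43                        # inferred: the first ignored byte is 1 << 43
cpu_top = (1 << phys_bits) - 1
assert cpu_top == 0x7ffffffffff       # upper bound of the 0000:c0 window resources
assert cpu_top + 1 == 0x80000000000   # start of the range reported as ignored
print(f"CPU-addressable top: {cpu_top:#x}")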
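
The (order: N, M bytes) pairs in the TCP/UDP hash-table lines just above are self-consistent: the order is log2 of the allocation in 4 KiB pages. A quick check, assuming the usual x86-64 page size:

PAGE = 4096  # x86-64 base page size
for name, nbytes, order in [("TCP established", 4194304, 10),
                            ("TCP bind",        1048576,  8),
                            ("UDP",             2097152,  9)]:
    assert nbytes // PAGE == 1 << order, name
    print(f"{name}: {nbytes} bytes = 2^{order} pages")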
[ 3.329039] Freeing initrd memory: 19732k freed [ 3.335750] AMD-Vi: IOMMU performance counters supported [ 3.341139] AMD-Vi: IOMMU performance counters supported [ 3.346485] AMD-Vi: IOMMU performance counters supported [ 3.351842] AMD-Vi: IOMMU performance counters supported [ 3.358476] iommu: Adding device 0000:00:01.0 to group 0 [ 3.364499] iommu: Adding device 0000:00:02.0 to group 1 [ 3.370525] iommu: Adding device 0000:00:03.0 to group 2 [ 3.376667] iommu: Adding device 0000:00:03.1 to group 3 [ 3.382746] iommu: Adding device 0000:00:04.0 to group 4 [ 3.388736] iommu: Adding device 0000:00:07.0 to group 5 [ 3.394783] iommu: Adding device 0000:00:07.1 to group 6 [ 3.400772] iommu: Adding device 0000:00:08.0 to group 7 [ 3.406791] iommu: Adding device 0000:00:08.1 to group 8 [ 3.412807] iommu: Adding device 0000:00:14.0 to group 9 [ 3.418147] iommu: Adding device 0000:00:14.3 to group 9 [ 3.424208] iommu: Adding device 0000:00:18.0 to group 10 [ 3.429633] iommu: Adding device 0000:00:18.1 to group 10 [ 3.435058] iommu: Adding device 0000:00:18.2 to group 10 [ 3.440483] iommu: Adding device 0000:00:18.3 to group 10 [ 3.445907] iommu: Adding device 0000:00:18.4 to group 10 [ 3.451334] iommu: Adding device 0000:00:18.5 to group 10 [ 3.456761] iommu: Adding device 0000:00:18.6 to group 10 [ 3.462187] iommu: Adding device 0000:00:18.7 to group 10 [ 3.468391] iommu: Adding device 0000:00:19.0 to group 11 [ 3.473817] iommu: Adding device 0000:00:19.1 to group 11 [ 3.479242] iommu: Adding device 0000:00:19.2 to group 11 [ 3.484666] iommu: Adding device 0000:00:19.3 to group 11 [ 3.490092] iommu: Adding device 0000:00:19.4 to group 11 [ 3.495520] iommu: Adding device 0000:00:19.5 to group 11 [ 3.500945] iommu: Adding device 0000:00:19.6 to group 11 [ 3.506369] iommu: Adding device 0000:00:19.7 to group 11 [ 3.512537] iommu: Adding device 0000:00:1a.0 to group 12 [ 3.517966] iommu: Adding device 0000:00:1a.1 to group 12 [ 3.523392] iommu: Adding device 0000:00:1a.2 to group 12 [ 3.528819] iommu: Adding device 0000:00:1a.3 to group 12 [ 3.534247] iommu: Adding device 0000:00:1a.4 to group 12 [ 3.539670] iommu: Adding device 0000:00:1a.5 to group 12 [ 3.545096] iommu: Adding device 0000:00:1a.6 to group 12 [ 3.550520] iommu: Adding device 0000:00:1a.7 to group 12 [ 3.556715] iommu: Adding device 0000:00:1b.0 to group 13 [ 3.562141] iommu: Adding device 0000:00:1b.1 to group 13 [ 3.567570] iommu: Adding device 0000:00:1b.2 to group 13 [ 3.572996] iommu: Adding device 0000:00:1b.3 to group 13 [ 3.578421] iommu: Adding device 0000:00:1b.4 to group 13 [ 3.583845] iommu: Adding device 0000:00:1b.5 to group 13 [ 3.589270] iommu: Adding device 0000:00:1b.6 to group 13 [ 3.594696] iommu: Adding device 0000:00:1b.7 to group 13 [ 3.600855] iommu: Adding device 0000:01:00.0 to group 14 [ 3.606945] iommu: Adding device 0000:02:00.0 to group 15 [ 3.613062] iommu: Adding device 0000:02:00.2 to group 16 [ 3.619183] iommu: Adding device 0000:02:00.3 to group 17 [ 3.625271] iommu: Adding device 0000:03:00.0 to group 18 [ 3.631356] iommu: Adding device 0000:03:00.1 to group 19 [ 3.637456] iommu: Adding device 0000:40:01.0 to group 20 [ 3.643539] iommu: Adding device 0000:40:02.0 to group 21 [ 3.649654] iommu: Adding device 0000:40:03.0 to group 22 [ 3.655756] iommu: Adding device 0000:40:04.0 to group 23 [ 3.661861] iommu: Adding device 0000:40:07.0 to group 24 [ 3.667841] iommu: Adding device 0000:40:07.1 to group 25 [ 3.673859] iommu: Adding device 0000:40:08.0 to group 26 [ 3.679900] iommu: Adding device 
0000:40:08.1 to group 27 [ 3.685931] iommu: Adding device 0000:41:00.0 to group 28 [ 3.691947] iommu: Adding device 0000:41:00.2 to group 29 [ 3.697991] iommu: Adding device 0000:41:00.3 to group 30 [ 3.703975] iommu: Adding device 0000:42:00.0 to group 31 [ 3.710027] iommu: Adding device 0000:42:00.1 to group 32 [ 3.716087] iommu: Adding device 0000:80:01.0 to group 33 [ 3.722111] iommu: Adding device 0000:80:01.1 to group 34 [ 3.728280] iommu: Adding device 0000:80:01.2 to group 35 [ 3.734365] iommu: Adding device 0000:80:02.0 to group 36 [ 3.740418] iommu: Adding device 0000:80:03.0 to group 37 [ 3.746476] iommu: Adding device 0000:80:03.1 to group 38 [ 3.752508] iommu: Adding device 0000:80:04.0 to group 39 [ 3.758553] iommu: Adding device 0000:80:07.0 to group 40 [ 3.764614] iommu: Adding device 0000:80:07.1 to group 41 [ 3.770694] iommu: Adding device 0000:80:08.0 to group 42 [ 3.776723] iommu: Adding device 0000:80:08.1 to group 43 [ 3.782809] iommu: Adding device 0000:81:00.0 to group 44 [ 3.788260] iommu: Adding device 0000:81:00.1 to group 44 [ 3.794314] iommu: Adding device 0000:82:00.0 to group 45 [ 3.799726] iommu: Adding device 0000:83:00.0 to group 45 [ 3.805793] iommu: Adding device 0000:84:00.0 to group 46 [ 3.811802] iommu: Adding device 0000:85:00.0 to group 47 [ 3.817855] iommu: Adding device 0000:85:00.2 to group 48 [ 3.823866] iommu: Adding device 0000:86:00.0 to group 49 [ 3.829906] iommu: Adding device 0000:86:00.1 to group 50 [ 3.835966] iommu: Adding device 0000:86:00.2 to group 51 [ 3.842009] iommu: Adding device 0000:c0:01.0 to group 52 [ 3.848012] iommu: Adding device 0000:c0:01.1 to group 53 [ 3.854044] iommu: Adding device 0000:c0:02.0 to group 54 [ 3.860108] iommu: Adding device 0000:c0:03.0 to group 55 [ 3.866154] iommu: Adding device 0000:c0:04.0 to group 56 [ 3.872165] iommu: Adding device 0000:c0:07.0 to group 57 [ 3.878243] iommu: Adding device 0000:c0:07.1 to group 58 [ 3.884330] iommu: Adding device 0000:c0:08.0 to group 59 [ 3.890336] iommu: Adding device 0000:c0:08.1 to group 60 [ 3.898815] iommu: Adding device 0000:c1:00.0 to group 61 [ 3.904833] iommu: Adding device 0000:c2:00.0 to group 62 [ 3.910869] iommu: Adding device 0000:c2:00.2 to group 63 [ 3.916892] iommu: Adding device 0000:c3:00.0 to group 64 [ 3.922919] iommu: Adding device 0000:c3:00.1 to group 65 [ 3.928527] AMD-Vi: Found IOMMU at 0000:00:00.2 cap 0x40 [ 3.933847] AMD-Vi: Extended features (0xf77ef22294ada): [ 3.939168] PPR NX GT IA GA PC GA_vAPIC [ 3.943310] AMD-Vi: Found IOMMU at 0000:40:00.2 cap 0x40 [ 3.948633] AMD-Vi: Extended features (0xf77ef22294ada): [ 3.953953] PPR NX GT IA GA PC GA_vAPIC [ 3.958096] AMD-Vi: Found IOMMU at 0000:80:00.2 cap 0x40 [ 3.963419] AMD-Vi: Extended features (0xf77ef22294ada): [ 3.968740] PPR NX GT IA GA PC GA_vAPIC [ 3.972874] AMD-Vi: Found IOMMU at 0000:c0:00.2 cap 0x40 [ 3.978197] AMD-Vi: Extended features (0xf77ef22294ada): [ 3.983518] PPR NX GT IA GA PC GA_vAPIC [ 3.987649] AMD-Vi: Interrupt remapping enabled [ 3.992192] AMD-Vi: virtual APIC enabled [ 3.996190] pci 0000:00:00.2: irq 26 for MSI/MSI-X [ 3.996286] pci 0000:40:00.2: irq 27 for MSI/MSI-X [ 3.996370] pci 0000:80:00.2: irq 28 for MSI/MSI-X [ 3.996452] pci 0000:c0:00.2: irq 29 for MSI/MSI-X [ 3.996507] AMD-Vi: Lazy IO/TLB flushing enabled [ 4.002836] perf: AMD NB counters detected [ 4.006984] perf: AMD LLC counters detected [ 4.017261] sha1_ssse3: Using SHA-NI optimized SHA-1 implementation [ 4.023615] sha256_ssse3: Using SHA-256-NI optimized SHA-256 implementation [ 4.032225] futex 
hash table entries: 32768 (order: 9, 2097152 bytes) [ 4.038861] Initialise system trusted keyring [ 4.043269] audit: initializing netlink socket (disabled) [ 4.048687] type=2000 audit(1575991752.194:1): initialized [ 4.079549] HugeTLB registered 1 GB page size, pre-allocated 0 pages [ 4.085907] HugeTLB registered 2 MB page size, pre-allocated 0 pages [ 4.093548] zpool: loaded [ 4.096185] zbud: loaded [ 4.099095] VFS: Disk quotas dquot_6.6.0 [ 4.103132] Dquot-cache hash table entries: 512 (order 0, 4096 bytes) [ 4.109945] msgmni has been set to 32768 [ 4.113970] Key type big_key registered [ 4.117819] SELinux: Registering netfilter hooks [ 4.120269] NET: Registered protocol family 38 [ 4.124731] Key type asymmetric registered [ 4.128834] Asymmetric key parser 'x509' registered [ 4.133772] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 248) [ 4.141324] io scheduler noop registered [ 4.145258] io scheduler deadline registered (default) [ 4.150443] io scheduler cfq registered [ 4.154288] io scheduler mq-deadline registered [ 4.158830] io scheduler kyber registered [ 4.163751] pcieport 0000:00:03.1: irq 30 for MSI/MSI-X [ 4.163921] pcieport 0000:00:07.1: irq 31 for MSI/MSI-X [ 4.164884] pcieport 0000:00:08.1: irq 33 for MSI/MSI-X [ 4.165863] pcieport 0000:40:07.1: irq 34 for MSI/MSI-X [ 4.166529] pcieport 0000:40:08.1: irq 36 for MSI/MSI-X [ 4.167326] pcieport 0000:80:01.1: irq 37 for MSI/MSI-X [ 4.167571] pcieport 0000:80:01.2: irq 38 for MSI/MSI-X [ 4.167778] pcieport 0000:80:03.1: irq 39 for MSI/MSI-X [ 4.168571] pcieport 0000:80:07.1: irq 41 for MSI/MSI-X [ 4.168801] pcieport 0000:80:08.1: irq 43 for MSI/MSI-X [ 4.169634] pcieport 0000:c0:01.1: irq 44 for MSI/MSI-X [ 4.170417] pcieport 0000:c0:07.1: irq 46 for MSI/MSI-X [ 4.170656] pcieport 0000:c0:08.1: irq 48 for MSI/MSI-X [ 4.170770] pcieport 0000:00:03.1: Signaling PME through PCIe PME interrupt [ 4.177747] pci 0000:01:00.0: Signaling PME through PCIe PME interrupt [ 4.184277] pcie_pme 0000:00:03.1:pcie001: service driver pcie_pme loaded [ 4.184289] pcieport 0000:00:07.1: Signaling PME through PCIe PME interrupt [ 4.191256] pci 0000:02:00.0: Signaling PME through PCIe PME interrupt [ 4.197792] pci 0000:02:00.2: Signaling PME through PCIe PME interrupt [ 4.204324] pci 0000:02:00.3: Signaling PME through PCIe PME interrupt [ 4.210861] pcie_pme 0000:00:07.1:pcie001: service driver pcie_pme loaded [ 4.210872] pcieport 0000:00:08.1: Signaling PME through PCIe PME interrupt [ 4.217835] pci 0000:03:00.0: Signaling PME through PCIe PME interrupt [ 4.224372] pci 0000:03:00.1: Signaling PME through PCIe PME interrupt [ 4.230907] pcie_pme 0000:00:08.1:pcie001: service driver pcie_pme loaded [ 4.230927] pcieport 0000:40:07.1: Signaling PME through PCIe PME interrupt [ 4.237892] pci 0000:41:00.0: Signaling PME through PCIe PME interrupt [ 4.244424] pci 0000:41:00.2: Signaling PME through PCIe PME interrupt [ 4.250959] pci 0000:41:00.3: Signaling PME through PCIe PME interrupt [ 4.257497] pcie_pme 0000:40:07.1:pcie001: service driver pcie_pme loaded [ 4.257511] pcieport 0000:40:08.1: Signaling PME through PCIe PME interrupt [ 4.264482] pci 0000:42:00.0: Signaling PME through PCIe PME interrupt [ 4.271017] pci 0000:42:00.1: Signaling PME through PCIe PME interrupt [ 4.277549] pcie_pme 0000:40:08.1:pcie001: service driver pcie_pme loaded [ 4.277568] pcieport 0000:80:01.1: Signaling PME through PCIe PME interrupt [ 4.284536] pci 0000:81:00.0: Signaling PME through PCIe PME interrupt [ 4.291071] pci 0000:81:00.1: Signaling PME through 
PCIe PME interrupt [ 4.297611] pcie_pme 0000:80:01.1:pcie001: service driver pcie_pme loaded [ 4.297625] pcieport 0000:80:01.2: Signaling PME through PCIe PME interrupt [ 4.304590] pci 0000:82:00.0: Signaling PME through PCIe PME interrupt [ 4.311126] pci 0000:83:00.0: Signaling PME through PCIe PME interrupt [ 4.317663] pcie_pme 0000:80:01.2:pcie001: service driver pcie_pme loaded [ 4.317677] pcieport 0000:80:03.1: Signaling PME through PCIe PME interrupt [ 4.324649] pci 0000:84:00.0: Signaling PME through PCIe PME interrupt [ 4.331182] pcie_pme 0000:80:03.1:pcie001: service driver pcie_pme loaded [ 4.331198] pcieport 0000:80:07.1: Signaling PME through PCIe PME interrupt [ 4.338169] pci 0000:85:00.0: Signaling PME through PCIe PME interrupt [ 4.344703] pci 0000:85:00.2: Signaling PME through PCIe PME interrupt [ 4.351238] pcie_pme 0000:80:07.1:pcie001: service driver pcie_pme loaded [ 4.351253] pcieport 0000:80:08.1: Signaling PME through PCIe PME interrupt [ 4.358222] pci 0000:86:00.0: Signaling PME through PCIe PME interrupt [ 4.364756] pci 0000:86:00.1: Signaling PME through PCIe PME interrupt [ 4.371293] pci 0000:86:00.2: Signaling PME through PCIe PME interrupt [ 4.377829] pcie_pme 0000:80:08.1:pcie001: service driver pcie_pme loaded [ 4.377844] pcieport 0000:c0:01.1: Signaling PME through PCIe PME interrupt [ 4.384814] pci 0000:c1:00.0: Signaling PME through PCIe PME interrupt [ 4.391346] pcie_pme 0000:c0:01.1:pcie001: service driver pcie_pme loaded [ 4.391363] pcieport 0000:c0:07.1: Signaling PME through PCIe PME interrupt [ 4.398333] pci 0000:c2:00.0: Signaling PME through PCIe PME interrupt [ 4.404868] pci 0000:c2:00.2: Signaling PME through PCIe PME interrupt [ 4.411404] pcie_pme 0000:c0:07.1:pcie001: service driver pcie_pme loaded [ 4.411418] pcieport 0000:c0:08.1: Signaling PME through PCIe PME interrupt [ 4.418387] pci 0000:c3:00.0: Signaling PME through PCIe PME interrupt [ 4.424922] pci 0000:c3:00.1: Signaling PME through PCIe PME interrupt [ 4.431460] pcie_pme 0000:c0:08.1:pcie001: service driver pcie_pme loaded [ 4.431480] pci_hotplug: PCI Hot Plug PCI Core version: 0.5 [ 4.437062] pciehp: PCI Express Hot Plug Controller Driver version: 0.4 [ 4.443743] shpchp: Standard Hot Plug PCI Controller Driver version: 0.4 [ 4.450549] efifb: probing for efifb [ 4.454153] efifb: framebuffer at 0xab000000, mapped to 0xffffadf819800000, using 3072k, total 3072k [ 4.463291] efifb: mode is 1024x768x32, linelength=4096, pages=1 [ 4.469306] efifb: scrolling: redraw [ 4.472896] efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 [ 4.494172] Console: switching to colour frame buffer device 128x48 [ 4.516290] fb0: EFI VGA frame buffer device [ 4.520665] input: Power Button as /devices/LNXSYSTM:00/device:00/PNP0C0C:00/input/input0 [ 4.528851] ACPI: Power Button [PWRB] [ 4.532571] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input1 [ 4.539978] ACPI: Power Button [PWRF] [ 4.544836] GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC. 
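
The efifb numbers above hang together: at 32 bpp a 1024-pixel row takes 4096 bytes (the reported linelength), and 768 such rows come to the reported 3072k. A quick check of that arithmetic:

width, height, bpp, linelength = 1024, 768, 32, 4096  # from the efifb lines above
assert linelength == width * bpp // 8                 # 4 bytes per pixel, no row padding
assert height * linelength == 3072 * 1024             # matches "using 3072k, total 3072k"
print("efifb geometry checks out")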
[ 4.552320] Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled [ 4.579518] 00:02: ttyS1 at I/O 0x2f8 (irq = 3) is a 16550A [ 4.606057] 00:03: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A [ 4.612116] Non-volatile memory driver v1.3 [ 4.616351] Linux agpgart interface v0.103 [ 4.622117] crash memory driver: version 1.1 [ 4.626621] rdac: device handler registered [ 4.630866] hp_sw: device handler registered [ 4.635148] emc: device handler registered [ 4.639404] alua: device handler registered [ 4.643631] libphy: Fixed MDIO Bus: probed [ 4.647794] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver [ 4.654331] ehci-pci: EHCI PCI platform driver [ 4.658798] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver [ 4.664990] ohci-pci: OHCI PCI platform driver [ 4.669456] uhci_hcd: USB Universal Host Controller Interface driver [ 4.675933] xhci_hcd 0000:02:00.3: xHCI Host Controller [ 4.681228] xhci_hcd 0000:02:00.3: new USB bus registered, assigned bus number 1 [ 4.688742] xhci_hcd 0000:02:00.3: hcc params 0x0270f665 hci version 0x100 quirks 0x00000410 [ 4.697221] xhci_hcd 0000:02:00.3: irq 50 for MSI/MSI-X [ 4.697244] xhci_hcd 0000:02:00.3: irq 51 for MSI/MSI-X [ 4.697263] xhci_hcd 0000:02:00.3: irq 52 for MSI/MSI-X [ 4.697283] xhci_hcd 0000:02:00.3: irq 53 for MSI/MSI-X [ 4.697302] xhci_hcd 0000:02:00.3: irq 54 for MSI/MSI-X [ 4.697322] xhci_hcd 0000:02:00.3: irq 55 for MSI/MSI-X [ 4.697340] xhci_hcd 0000:02:00.3: irq 56 for MSI/MSI-X [ 4.697358] xhci_hcd 0000:02:00.3: irq 57 for MSI/MSI-X [ 4.697490] usb usb1: New USB device found, idVendor=1d6b, idProduct=0002 [ 4.704285] usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1 [ 4.711513] usb usb1: Product: xHCI Host Controller [ 4.716400] usb usb1: Manufacturer: Linux 3.10.0-957.27.2.el7_lustre.pl2.x86_64 xhci-hcd [ 4.724494] usb usb1: SerialNumber: 0000:02:00.3 [ 4.729236] hub 1-0:1.0: USB hub found [ 4.733002] hub 1-0:1.0: 2 ports detected [ 4.737260] xhci_hcd 0000:02:00.3: xHCI Host Controller [ 4.742556] xhci_hcd 0000:02:00.3: new USB bus registered, assigned bus number 2 [ 4.749972] usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
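
Stepping back to the IOMMU group assignments logged around 3.3-3.9 s: once the system is up they can be read straight out of sysfs. A minimal Python sketch (not from the log; assumes the standard /sys/kernel/iommu_groups layout, present whenever the IOMMU is enabled as it is here):

import os

base = "/sys/kernel/iommu_groups"
for group in sorted(os.listdir(base), key=int):
    devices = sorted(os.listdir(os.path.join(base, group, "devices")))
    print(f"group {group}: {', '.join(devices)}")

Devices that share a group (e.g. 0000:81:00.0 and 0000:81:00.1 in group 44 above) are isolated together and can only be passed through to a guest as a unit.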
[ 4.758084] usb usb2: New USB device found, idVendor=1d6b, idProduct=0003 [ 4.764883] usb usb2: New USB device strings: Mfr=3, Product=2, SerialNumber=1 [ 4.772112] usb usb2: Product: xHCI Host Controller [ 4.777001] usb usb2: Manufacturer: Linux 3.10.0-957.27.2.el7_lustre.pl2.x86_64 xhci-hcd [ 4.785094] usb usb2: SerialNumber: 0000:02:00.3 [ 4.789821] hub 2-0:1.0: USB hub found [ 4.793583] hub 2-0:1.0: 2 ports detected [ 4.797894] xhci_hcd 0000:41:00.3: xHCI Host Controller [ 4.803214] xhci_hcd 0000:41:00.3: new USB bus registered, assigned bus number 3 [ 4.810725] xhci_hcd 0000:41:00.3: hcc params 0x0270f665 hci version 0x100 quirks 0x00000410 [ 4.819208] xhci_hcd 0000:41:00.3: irq 59 for MSI/MSI-X [ 4.819229] xhci_hcd 0000:41:00.3: irq 60 for MSI/MSI-X [ 4.819248] xhci_hcd 0000:41:00.3: irq 61 for MSI/MSI-X [ 4.819270] xhci_hcd 0000:41:00.3: irq 62 for MSI/MSI-X [ 4.819291] xhci_hcd 0000:41:00.3: irq 63 for MSI/MSI-X [ 4.819317] xhci_hcd 0000:41:00.3: irq 64 for MSI/MSI-X [ 4.819336] xhci_hcd 0000:41:00.3: irq 65 for MSI/MSI-X [ 4.819355] xhci_hcd 0000:41:00.3: irq 66 for MSI/MSI-X [ 4.819508] usb usb3: New USB device found, idVendor=1d6b, idProduct=0002 [ 4.826307] usb usb3: New USB device strings: Mfr=3, Product=2, SerialNumber=1 [ 4.833533] usb usb3: Product: xHCI Host Controller [ 4.838424] usb usb3: Manufacturer: Linux 3.10.0-957.27.2.el7_lustre.pl2.x86_64 xhci-hcd [ 4.846516] usb usb3: SerialNumber: 0000:41:00.3 [ 4.851254] hub 3-0:1.0: USB hub found [ 4.855022] hub 3-0:1.0: 2 ports detected [ 4.859283] xhci_hcd 0000:41:00.3: xHCI Host Controller [ 4.864559] xhci_hcd 0000:41:00.3: new USB bus registered, assigned bus number 4 [ 4.871995] usb usb4: We don't know the algorithms for LPM for this host, disabling LPM. [ 4.880106] usb usb4: New USB device found, idVendor=1d6b, idProduct=0003 [ 4.886905] usb usb4: New USB device strings: Mfr=3, Product=2, SerialNumber=1 [ 4.894133] usb usb4: Product: xHCI Host Controller [ 4.899023] usb usb4: Manufacturer: Linux 3.10.0-957.27.2.el7_lustre.pl2.x86_64 xhci-hcd [ 4.907117] usb usb4: SerialNumber: 0000:41:00.3 [ 4.911830] hub 4-0:1.0: USB hub found [ 4.915595] hub 4-0:1.0: 2 ports detected [ 4.919859] usbcore: registered new interface driver usbserial_generic [ 4.926398] usbserial: USB Serial support registered for generic [ 4.932452] i8042: PNP: No PS/2 controller found. Probing ports directly. 
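
The xHCI buses registered above start filling with devices over the next couple of seconds. After boot, the same topology can be read back from sysfs; a rough Python sketch (not from the log; assumes the usual /sys/bus/usb/devices layout and skips interface nodes such as 1-1:1.0):

import glob, os

for dev in sorted(glob.glob("/sys/bus/usb/devices/*")):
    name = os.path.basename(dev)
    if ":" in name or not name[0].isdigit():
        continue  # keep only device nodes like 1-1 or 3-1.4
    try:
        with open(os.path.join(dev, "idVendor")) as f: vid = f.read().strip()
        with open(os.path.join(dev, "idProduct")) as f: pid = f.read().strip()
    except OSError:
        continue
    print(name, f"{vid}:{pid}")  # e.g. 3-1 1604:10c0, 1-1 0424:2744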
[ 5.171067] usb 3-1: new high-speed USB device number 2 using xhci_hcd [ 5.301009] usb 3-1: New USB device found, idVendor=1604, idProduct=10c0 [ 5.307715] usb 3-1: New USB device strings: Mfr=0, Product=0, SerialNumber=0 [ 5.320149] hub 3-1:1.0: USB hub found [ 5.324136] hub 3-1:1.0: 4 ports detected [ 5.970277] i8042: No controller found [ 5.974044] sched: RT throttling activated [ 5.974053] tsc: Refined TSC clocksource calibration: 1996.249 MHz [ 5.974188] mousedev: PS/2 mouse device common for all mice [ 5.974428] rtc_cmos 00:01: RTC can wake from S4 [ 5.974785] rtc_cmos 00:01: rtc core: registered rtc_cmos as rtc0 [ 5.974888] rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram, hpet irqs [ 5.974945] cpuidle: using governor menu [ 5.975212] EFI Variables Facility v0.08 2004-May-17 [ 6.000629] hidraw: raw HID events driver (C) Jiri Kosina [ 6.000730] usbcore: registered new interface driver usbhid [ 6.000731] usbhid: USB HID core driver [ 6.000860] drop_monitor: Initializing network drop monitor service [ 6.001017] TCP: cubic registered [ 6.001022] Initializing XFRM netlink socket [ 6.001244] NET: Registered protocol family 10 [ 6.001796] NET: Registered protocol family 17 [ 6.001800] mpls_gso: MPLS GSO support [ 6.002871] mce: Using 23 MCE banks [ 6.002926] microcode: CPU0: patch_level=0x08001250 [ 6.002936] microcode: CPU1: patch_level=0x08001250 [ 6.002947] microcode: CPU2: patch_level=0x08001250 [ 6.002961] microcode: CPU3: patch_level=0x08001250 [ 6.002977] microcode: CPU4: patch_level=0x08001250 [ 6.002993] microcode: CPU5: patch_level=0x08001250 [ 6.003007] microcode: CPU6: patch_level=0x08001250 [ 6.003022] microcode: CPU7: patch_level=0x08001250 [ 6.003032] microcode: CPU8: patch_level=0x08001250 [ 6.003042] microcode: CPU9: patch_level=0x08001250 [ 6.003053] microcode: CPU10: patch_level=0x08001250 [ 6.003064] microcode: CPU11: patch_level=0x08001250 [ 6.003089] microcode: CPU12: patch_level=0x08001250 [ 6.003099] microcode: CPU13: patch_level=0x08001250 [ 6.003110] microcode: CPU14: patch_level=0x08001250 [ 6.003121] microcode: CPU15: patch_level=0x08001250 [ 6.004084] usb 3-1.1: new high-speed USB device number 3 using xhci_hcd [ 6.005043] microcode: CPU16: patch_level=0x08001250 [ 6.005048] microcode: CPU17: patch_level=0x08001250 [ 6.005059] microcode: CPU18: patch_level=0x08001250 [ 6.005074] microcode: CPU19: patch_level=0x08001250 [ 6.005086] microcode: CPU20: patch_level=0x08001250 [ 6.005096] microcode: CPU21: patch_level=0x08001250 [ 6.005107] microcode: CPU22: patch_level=0x08001250 [ 6.005118] microcode: CPU23: patch_level=0x08001250 [ 6.005129] microcode: CPU24: patch_level=0x08001250 [ 6.005139] microcode: CPU25: patch_level=0x08001250 [ 6.005150] microcode: CPU26: patch_level=0x08001250 [ 6.005161] microcode: CPU27: patch_level=0x08001250 [ 6.005171] microcode: CPU28: patch_level=0x08001250 [ 6.005182] microcode: CPU29: patch_level=0x08001250 [ 6.005193] microcode: CPU30: patch_level=0x08001250 [ 6.005203] microcode: CPU31: patch_level=0x08001250 [ 6.005214] microcode: CPU32: patch_level=0x08001250 [ 6.005225] microcode: CPU33: patch_level=0x08001250 [ 6.005233] microcode: CPU34: patch_level=0x08001250 [ 6.005241] microcode: CPU35: patch_level=0x08001250 [ 6.005252] microcode: CPU36: patch_level=0x08001250 [ 6.005262] microcode: CPU37: patch_level=0x08001250 [ 6.005273] microcode: CPU38: patch_level=0x08001250 [ 6.005284] microcode: CPU39: patch_level=0x08001250 [ 6.005295] microcode: CPU40: patch_level=0x08001250 [ 6.005307] microcode: CPU41: 
patch_level=0x08001250 [ 6.005317] microcode: CPU42: patch_level=0x08001250 [ 6.005328] microcode: CPU43: patch_level=0x08001250 [ 6.005338] microcode: CPU44: patch_level=0x08001250 [ 6.005349] microcode: CPU45: patch_level=0x08001250 [ 6.005360] microcode: CPU46: patch_level=0x08001250 [ 6.005370] microcode: CPU47: patch_level=0x08001250 [ 6.005413] microcode: Microcode Update Driver: v2.01 , Peter Oruba [ 6.005538] PM: Hibernation image not present or could not be loaded. [ 6.005541] Loading compiled-in X.509 certificates [ 6.005569] Loaded X.509 cert 'CentOS Linux kpatch signing key: ea0413152cde1d98ebdca3fe6f0230904c9ef717' [ 6.005582] Loaded X.509 cert 'CentOS Linux Driver update signing key: 7f421ee0ab69461574bb358861dbe77762a4201b' [ 6.005954] Loaded X.509 cert 'CentOS Linux kernel signing key: 468656045a39b52ff2152c315f6198c3e658f24d' [ 6.005971] registered taskstats version 1 [ 6.008178] Key type trusted registered [ 6.009756] Key type encrypted registered [ 6.009805] IMA: No TPM chip found, activating TPM-bypass! (rc=-19) [ 6.012141] Magic number: 15:305:487 [ 6.012215] platform ACPI0007:7c: hash matches [ 6.012235] acpi ACPI0007:7c: hash matches [ 6.012277] memory memory1976: hash matches [ 6.012342] memory memory508: hash matches [ 6.018916] rtc_cmos 00:01: setting system clock to 2019-12-10 15:29:18 UTC (1575991758) [ 6.407286] Switched to clocksource tsc [ 6.412203] Freeing unused kernel memory: 1876k freed [ 6.417526] Write protecting the kernel read-only data: 12288k [ 6.419031] usb 3-1.1: New USB device found, idVendor=1604, idProduct=10c0 [ 6.419033] usb 3-1.1: New USB device strings: Mfr=0, Product=0, SerialNumber=0 [ 6.438923] Freeing unused kernel memory: 504k freed [ 6.440179] hub 3-1.1:1.0: USB hub found [ 6.440532] hub 3-1.1:1.0: 4 ports detected [ 6.453409] Freeing unused kernel memory: 596k freed [ 6.504223] usb 3-1.4: new high-speed USB device number 4 using xhci_hcd [ 6.515130] systemd[1]: systemd 219 running in system mode. (+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 -SECCOMP +BLKID +ELFUTILS +KMOD +IDN) [ 6.523090] usb 1-1: new high-speed USB device number 2 using xhci_hcd [ 6.540789] systemd[1]: Detected architecture x86-64. [ 6.545848] systemd[1]: Running in initial RAM disk. [ 6.559263] systemd[1]: Set hostname to . [ 6.585037] usb 3-1.4: New USB device found, idVendor=1604, idProduct=10c0 [ 6.591917] usb 3-1.4: New USB device strings: Mfr=0, Product=0, SerialNumber=0 [ 6.594593] systemd[1]: Created slice Root Slice. [ 6.608206] systemd[1]: Listening on udev Kernel Socket. [ 6.619151] systemd[1]: Reached target Timers. [ 6.628141] systemd[1]: Reached target Local File Systems. [ 6.632185] hub 3-1.4:1.0: USB hub found [ 6.632535] hub 3-1.4:1.0: 4 ports detected [ 6.647217] systemd[1]: Created slice System Slice. [ 6.654896] usb 1-1: New USB device found, idVendor=0424, idProduct=2744 [ 6.661692] usb 1-1: New USB device strings: Mfr=1, Product=2, SerialNumber=0 [ 6.670124] usb 1-1: Product: USB2734 [ 6.673788] usb 1-1: Manufacturer: Microchip Tech [ 6.678554] systemd[1]: Reached target Slices. [ 6.687194] systemd[1]: Listening on Journal Socket. [ 6.698725] systemd[1]: Starting dracut cmdline hook... [ 6.704059] hub 1-1:1.0: USB hub found [ 6.708021] hub 1-1:1.0: 4 ports detected [ 6.716563] systemd[1]: Starting Journal Service... [ 6.726632] systemd[1]: Starting Create list of required static device nodes for the current kernel... [ 6.744688] systemd[1]: Starting Apply Kernel Variables... 
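
All 48 logical CPUs above report the same patch_level, which is the healthy case. That can be re-verified at runtime from the microcode driver's sysfs files; a minimal sketch (not from the log; the per-CPU version files appear once the microcode driver is loaded, as it is here):

import glob

levels = set()
for path in glob.glob("/sys/devices/system/cpu/cpu[0-9]*/microcode/version"):
    with open(path) as f:
        levels.add(f.read().strip())
print("patch levels:", levels)  # expect a single value, e.g. {'0x8001250'}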
[ 6.756187] systemd[1]: Listening on udev Control Socket. [ 6.765152] usb 2-1: new SuperSpeed USB device number 2 using xhci_hcd [ 6.775153] systemd[1]: Reached target Sockets. [ 6.784560] systemd[1]: Starting Setup Virtual Console... [ 6.791273] usb 2-1: New USB device found, idVendor=0424, idProduct=5744 [ 6.798206] usb 2-1: New USB device strings: Mfr=2, Product=3, SerialNumber=0 [ 6.798207] usb 2-1: Product: USB5734 [ 6.798209] usb 2-1: Manufacturer: Microchip Tech [ 6.802148] systemd[1]: Reached target Swap. [ 6.814178] hub 2-1:1.0: USB hub found [ 6.814520] hub 2-1:1.0: 4 ports detected [ 6.815574] usb: port power management may be unreliable [ 6.852428] systemd[1]: Started Journal Service. [ 6.994749] pps_core: LinuxPPS API ver. 1 registered [ 6.999720] pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti [ 7.011024] PTP clock support registered [ 7.016604] megasas: 07.705.02.00-rh1 [ 7.016626] mlx_compat: loading out-of-tree module taints kernel. [ 7.029426] megaraid_sas 0000:c1:00.0: FW now in Ready state [ 7.035224] megaraid_sas 0000:c1:00.0: 64 bit DMA mask and 32 bit consistent mask [ 7.043554] libata version 3.00 loaded. [ 7.043784] megaraid_sas 0000:c1:00.0: irq 68 for MSI/MSI-X [ 7.043817] megaraid_sas 0000:c1:00.0: irq 69 for MSI/MSI-X [ 7.043844] megaraid_sas 0000:c1:00.0: irq 70 for MSI/MSI-X [ 7.043870] megaraid_sas 0000:c1:00.0: irq 71 for MSI/MSI-X [ 7.043894] megaraid_sas 0000:c1:00.0: irq 72 for MSI/MSI-X [ 7.043919] megaraid_sas 0000:c1:00.0: irq 73 for MSI/MSI-X [ 7.043948] megaraid_sas 0000:c1:00.0: irq 74 for MSI/MSI-X [ 7.043974] megaraid_sas 0000:c1:00.0: irq 75 for MSI/MSI-X [ 7.044003] megaraid_sas 0000:c1:00.0: irq 76 for MSI/MSI-X [ 7.044052] megaraid_sas 0000:c1:00.0: irq 77 for MSI/MSI-X [ 7.044119] mlx_compat: module verification failed: signature and/or required key missing - tainting kernel [ 7.054634] megaraid_sas 0000:c1:00.0: irq 78 for MSI/MSI-X [ 7.054664] megaraid_sas 0000:c1:00.0: irq 79 for MSI/MSI-X [ 7.054697] megaraid_sas 0000:c1:00.0: irq 80 for MSI/MSI-X [ 7.054724] megaraid_sas 0000:c1:00.0: irq 81 for MSI/MSI-X [ 7.054749] megaraid_sas 0000:c1:00.0: irq 82 for MSI/MSI-X [ 7.054776] megaraid_sas 0000:c1:00.0: irq 83 for MSI/MSI-X [ 7.054801] megaraid_sas 0000:c1:00.0: irq 84 for MSI/MSI-X [ 7.054827] megaraid_sas 0000:c1:00.0: irq 85 for MSI/MSI-X [ 7.054855] megaraid_sas 0000:c1:00.0: irq 86 for MSI/MSI-X [ 7.054879] megaraid_sas 0000:c1:00.0: irq 87 for MSI/MSI-X [ 7.054902] megaraid_sas 0000:c1:00.0: irq 88 for MSI/MSI-X [ 7.054927] megaraid_sas 0000:c1:00.0: irq 89 for MSI/MSI-X [ 7.054950] megaraid_sas 0000:c1:00.0: irq 90 for MSI/MSI-X [ 7.054974] megaraid_sas 0000:c1:00.0: irq 91 for MSI/MSI-X [ 7.055007] megaraid_sas 0000:c1:00.0: irq 92 for MSI/MSI-X [ 7.055032] megaraid_sas 0000:c1:00.0: irq 93 for MSI/MSI-X [ 7.055057] megaraid_sas 0000:c1:00.0: irq 94 for MSI/MSI-X [ 7.055082] megaraid_sas 0000:c1:00.0: irq 95 for MSI/MSI-X [ 7.055111] megaraid_sas 0000:c1:00.0: irq 96 for MSI/MSI-X [ 7.055137] megaraid_sas 0000:c1:00.0: irq 97 for MSI/MSI-X [ 7.055159] megaraid_sas 0000:c1:00.0: irq 98 for MSI/MSI-X [ 7.055185] megaraid_sas 0000:c1:00.0: irq 99 for MSI/MSI-X [ 7.055209] megaraid_sas 0000:c1:00.0: irq 100 for MSI/MSI-X [ 7.055233] megaraid_sas 0000:c1:00.0: irq 101 for MSI/MSI-X [ 7.055257] megaraid_sas 0000:c1:00.0: irq 102 for MSI/MSI-X [ 7.055283] megaraid_sas 0000:c1:00.0: irq 103 for MSI/MSI-X [ 7.055308] megaraid_sas 0000:c1:00.0: irq 104 for MSI/MSI-X [ 7.055331] megaraid_sas 0000:c1:00.0: irq 105 for 
MSI/MSI-X [ 7.055355] megaraid_sas 0000:c1:00.0: irq 106 for MSI/MSI-X [ 7.055378] megaraid_sas 0000:c1:00.0: irq 107 for MSI/MSI-X [ 7.055401] megaraid_sas 0000:c1:00.0: irq 108 for MSI/MSI-X [ 7.055424] megaraid_sas 0000:c1:00.0: irq 109 for MSI/MSI-X [ 7.055447] megaraid_sas 0000:c1:00.0: irq 110 for MSI/MSI-X [ 7.055472] megaraid_sas 0000:c1:00.0: irq 111 for MSI/MSI-X [ 7.055495] megaraid_sas 0000:c1:00.0: irq 112 for MSI/MSI-X [ 7.055516] megaraid_sas 0000:c1:00.0: irq 113 for MSI/MSI-X [ 7.055540] megaraid_sas 0000:c1:00.0: irq 114 for MSI/MSI-X [ 7.055564] megaraid_sas 0000:c1:00.0: irq 115 for MSI/MSI-X [ 7.055704] megaraid_sas 0000:c1:00.0: firmware supports msix : (96) [ 7.062321] megaraid_sas 0000:c1:00.0: current msix/online cpus : (48/48) [ 7.070498] megaraid_sas 0000:c1:00.0: RDPQ mode : (disabled) [ 7.076254] megaraid_sas 0000:c1:00.0: Current firmware supports maximum commands: 928 LDIO threshold: 237 [ 7.087085] megaraid_sas 0000:c1:00.0: Configured max firmware commands: 927 [ 7.097100] megaraid_sas 0000:c1:00.0: FW supports sync cache : No [ 7.108272] Compat-mlnx-ofed backport release: 1c4bf42 [ 7.113755] Backport based on mlnx_ofed/mlnx-ofa_kernel-4.0.git 1c4bf42 [ 7.113756] compat.git: mlnx_ofed/mlnx-ofa_kernel-4.0.git [ 7.129921] tg3.c:v3.137 (May 11, 2014) [ 7.145546] mpt3sas version 31.00.00.00 loaded [ 7.147224] tg3 0000:81:00.0 eth0: Tigon3 [partno(BCM95720) rev 5720000] (PCI Express) MAC address d0:94:66:34:4b:07 [ 7.147227] tg3 0000:81:00.0 eth0: attached PHY is 5720C (10/100/1000Base-T Ethernet) (WireSpeed[1], EEE[1]) [ 7.147229] tg3 0000:81:00.0 eth0: RXcsums[1] LinkChgREG[0] MIirq[0] ASF[1] TSOcap[1] [ 7.147230] tg3 0000:81:00.0 eth0: dma_rwctrl[00000001] dma_mask[64-bit] [ 7.167692] tg3 0000:81:00.1 eth1: Tigon3 [partno(BCM95720) rev 5720000] (PCI Express) MAC address d0:94:66:34:4b:08 [ 7.167695] tg3 0000:81:00.1 eth1: attached PHY is 5720C (10/100/1000Base-T Ethernet) (WireSpeed[1], EEE[1]) [ 7.167697] tg3 0000:81:00.1 eth1: RXcsums[1] LinkChgREG[0] MIirq[0] ASF[1] TSOcap[1] [ 7.167699] tg3 0000:81:00.1 eth1: dma_rwctrl[00000001] dma_mask[64-bit] [ 7.179073] mpt3sas_cm0: 63 BIT PCI BUS DMA ADDRESSING SUPPORTED, total mem (263565236 kB) [ 7.232383] mlx5_core 0000:01:00.0: firmware version: 20.26.1040 [ 7.232420] ahci 0000:86:00.2: version 3.0 [ 7.232884] ahci 0000:86:00.2: irq 121 for MSI/MSI-X [ 7.232889] ahci 0000:86:00.2: irq 122 for MSI/MSI-X [ 7.232893] ahci 0000:86:00.2: irq 123 for MSI/MSI-X [ 7.232897] ahci 0000:86:00.2: irq 124 for MSI/MSI-X [ 7.232901] ahci 0000:86:00.2: irq 125 for MSI/MSI-X [ 7.232905] ahci 0000:86:00.2: irq 126 for MSI/MSI-X [ 7.232909] ahci 0000:86:00.2: irq 127 for MSI/MSI-X [ 7.232913] ahci 0000:86:00.2: irq 128 for MSI/MSI-X [ 7.232916] ahci 0000:86:00.2: irq 129 for MSI/MSI-X [ 7.232920] ahci 0000:86:00.2: irq 130 for MSI/MSI-X [ 7.232923] ahci 0000:86:00.2: irq 131 for MSI/MSI-X [ 7.232928] ahci 0000:86:00.2: irq 132 for MSI/MSI-X [ 7.232932] ahci 0000:86:00.2: irq 133 for MSI/MSI-X [ 7.232936] ahci 0000:86:00.2: irq 134 for MSI/MSI-X [ 7.232939] ahci 0000:86:00.2: irq 135 for MSI/MSI-X [ 7.232943] ahci 0000:86:00.2: irq 136 for MSI/MSI-X [ 7.233210] ahci 0000:86:00.2: AHCI 0001.0301 32 slots 1 ports 6 Gbps 0x1 impl SATA mode [ 7.233213] ahci 0000:86:00.2: flags: 64bit ncq sntf ilck pm led clo only pmp fbs pio slum part [ 7.235882] scsi host2: ahci [ 7.236146] ata1: SATA max UDMA/133 abar m4096@0xc0a02000 port 0xc0a02100 irq 121 [ 7.257110] mpt3sas_cm0: IOC Number : 0 [ 7.257112] mpt3sas_cm0: CurrentHostPageSize is 0: 
Setting default host page size to 4k [ 7.257196] mpt3sas 0000:84:00.0: irq 137 for MSI/MSI-X [ 7.257217] mpt3sas 0000:84:00.0: irq 138 for MSI/MSI-X [ 7.257237] mpt3sas 0000:84:00.0: irq 139 for MSI/MSI-X [ 7.257258] mpt3sas 0000:84:00.0: irq 140 for MSI/MSI-X [ 7.257280] mpt3sas 0000:84:00.0: irq 141 for MSI/MSI-X [ 7.257299] mpt3sas 0000:84:00.0: irq 142 for MSI/MSI-X [ 7.257320] mpt3sas 0000:84:00.0: irq 143 for MSI/MSI-X [ 7.257338] mpt3sas 0000:84:00.0: irq 144 for MSI/MSI-X [ 7.257359] mpt3sas 0000:84:00.0: irq 145 for MSI/MSI-X [ 7.257378] mpt3sas 0000:84:00.0: irq 146 for MSI/MSI-X [ 7.257397] mpt3sas 0000:84:00.0: irq 147 for MSI/MSI-X [ 7.257418] mpt3sas 0000:84:00.0: irq 148 for MSI/MSI-X [ 7.257437] mpt3sas 0000:84:00.0: irq 149 for MSI/MSI-X [ 7.257456] mpt3sas 0000:84:00.0: irq 150 for MSI/MSI-X [ 7.257475] mpt3sas 0000:84:00.0: irq 151 for MSI/MSI-X [ 7.257493] mpt3sas 0000:84:00.0: irq 152 for MSI/MSI-X [ 7.257512] mpt3sas 0000:84:00.0: irq 153 for MSI/MSI-X [ 7.257531] mpt3sas 0000:84:00.0: irq 154 for MSI/MSI-X [ 7.257550] mpt3sas 0000:84:00.0: irq 155 for MSI/MSI-X [ 7.257569] mpt3sas 0000:84:00.0: irq 156 for MSI/MSI-X [ 7.257588] mpt3sas 0000:84:00.0: irq 157 for MSI/MSI-X [ 7.257606] mpt3sas 0000:84:00.0: irq 158 for MSI/MSI-X [ 7.257626] mpt3sas 0000:84:00.0: irq 159 for MSI/MSI-X [ 7.257649] mpt3sas 0000:84:00.0: irq 160 for MSI/MSI-X [ 7.257668] mpt3sas 0000:84:00.0: irq 161 for MSI/MSI-X [ 7.257687] mpt3sas 0000:84:00.0: irq 162 for MSI/MSI-X [ 7.257707] mpt3sas 0000:84:00.0: irq 163 for MSI/MSI-X [ 7.257725] mpt3sas 0000:84:00.0: irq 164 for MSI/MSI-X [ 7.257750] mpt3sas 0000:84:00.0: irq 165 for MSI/MSI-X [ 7.257772] mpt3sas 0000:84:00.0: irq 166 for MSI/MSI-X [ 7.257793] mpt3sas 0000:84:00.0: irq 167 for MSI/MSI-X [ 7.257815] mpt3sas 0000:84:00.0: irq 168 for MSI/MSI-X [ 7.257836] mpt3sas 0000:84:00.0: irq 169 for MSI/MSI-X [ 7.257859] mpt3sas 0000:84:00.0: irq 170 for MSI/MSI-X [ 7.257880] mpt3sas 0000:84:00.0: irq 171 for MSI/MSI-X [ 7.257904] mpt3sas 0000:84:00.0: irq 172 for MSI/MSI-X [ 7.257928] mpt3sas 0000:84:00.0: irq 173 for MSI/MSI-X [ 7.257948] mpt3sas 0000:84:00.0: irq 174 for MSI/MSI-X [ 7.257969] mpt3sas 0000:84:00.0: irq 175 for MSI/MSI-X [ 7.257992] mpt3sas 0000:84:00.0: irq 176 for MSI/MSI-X [ 7.258014] mpt3sas 0000:84:00.0: irq 177 for MSI/MSI-X [ 7.258037] mpt3sas 0000:84:00.0: irq 178 for MSI/MSI-X [ 7.258055] mpt3sas 0000:84:00.0: irq 179 for MSI/MSI-X [ 7.258075] mpt3sas 0000:84:00.0: irq 180 for MSI/MSI-X [ 7.258102] mpt3sas 0000:84:00.0: irq 181 for MSI/MSI-X [ 7.258123] mpt3sas 0000:84:00.0: irq 182 for MSI/MSI-X [ 7.258142] mpt3sas 0000:84:00.0: irq 183 for MSI/MSI-X [ 7.258161] mpt3sas 0000:84:00.0: irq 184 for MSI/MSI-X [ 7.258625] mpt3sas0-msix0: PCI-MSI-X enabled: IRQ 137 [ 7.258627] mpt3sas0-msix1: PCI-MSI-X enabled: IRQ 138 [ 7.258627] mpt3sas0-msix2: PCI-MSI-X enabled: IRQ 139 [ 7.258628] mpt3sas0-msix3: PCI-MSI-X enabled: IRQ 140 [ 7.258628] mpt3sas0-msix4: PCI-MSI-X enabled: IRQ 141 [ 7.258629] mpt3sas0-msix5: PCI-MSI-X enabled: IRQ 142 [ 7.258629] mpt3sas0-msix6: PCI-MSI-X enabled: IRQ 143 [ 7.258630] mpt3sas0-msix7: PCI-MSI-X enabled: IRQ 144 [ 7.258630] mpt3sas0-msix8: PCI-MSI-X enabled: IRQ 145 [ 7.258631] mpt3sas0-msix9: PCI-MSI-X enabled: IRQ 146 [ 7.258631] mpt3sas0-msix10: PCI-MSI-X enabled: IRQ 147 [ 7.258632] mpt3sas0-msix11: PCI-MSI-X enabled: IRQ 148 [ 7.258632] mpt3sas0-msix12: PCI-MSI-X enabled: IRQ 149 [ 7.258633] mpt3sas0-msix13: PCI-MSI-X enabled: IRQ 150 [ 7.258633] mpt3sas0-msix14: PCI-MSI-X enabled: IRQ 151 
[ 7.258634] mpt3sas0-msix15: PCI-MSI-X enabled: IRQ 152 [ 7.258634] mpt3sas0-msix16: PCI-MSI-X enabled: IRQ 153 [ 7.258635] mpt3sas0-msix17: PCI-MSI-X enabled: IRQ 154 [ 7.258635] mpt3sas0-msix18: PCI-MSI-X enabled: IRQ 155 [ 7.258636] mpt3sas0-msix19: PCI-MSI-X enabled: IRQ 156 [ 7.258636] mpt3sas0-msix20: PCI-MSI-X enabled: IRQ 157 [ 7.258637] mpt3sas0-msix21: PCI-MSI-X enabled: IRQ 158 [ 7.258637] mpt3sas0-msix22: PCI-MSI-X enabled: IRQ 159 [ 7.258638] mpt3sas0-msix23: PCI-MSI-X enabled: IRQ 160 [ 7.258638] mpt3sas0-msix24: PCI-MSI-X enabled: IRQ 161 [ 7.258638] mpt3sas0-msix25: PCI-MSI-X enabled: IRQ 162 [ 7.258639] mpt3sas0-msix26: PCI-MSI-X enabled: IRQ 163 [ 7.258639] mpt3sas0-msix27: PCI-MSI-X enabled: IRQ 164 [ 7.258640] mpt3sas0-msix28: PCI-MSI-X enabled: IRQ 165 [ 7.258640] mpt3sas0-msix29: PCI-MSI-X enabled: IRQ 166 [ 7.258641] mpt3sas0-msix30: PCI-MSI-X enabled: IRQ 167 [ 7.258641] mpt3sas0-msix31: PCI-MSI-X enabled: IRQ 168 [ 7.258642] mpt3sas0-msix32: PCI-MSI-X enabled: IRQ 169 [ 7.258642] mpt3sas0-msix33: PCI-MSI-X enabled: IRQ 170 [ 7.258643] mpt3sas0-msix34: PCI-MSI-X enabled: IRQ 171 [ 7.258643] mpt3sas0-msix35: PCI-MSI-X enabled: IRQ 172 [ 7.258644] mpt3sas0-msix36: PCI-MSI-X enabled: IRQ 173 [ 7.258644] mpt3sas0-msix37: PCI-MSI-X enabled: IRQ 174 [ 7.258645] mpt3sas0-msix38: PCI-MSI-X enabled: IRQ 175 [ 7.258645] mpt3sas0-msix39: PCI-MSI-X enabled: IRQ 176 [ 7.258645] mpt3sas0-msix40: PCI-MSI-X enabled: IRQ 177 [ 7.258646] mpt3sas0-msix41: PCI-MSI-X enabled: IRQ 178 [ 7.258646] mpt3sas0-msix42: PCI-MSI-X enabled: IRQ 179 [ 7.258647] mpt3sas0-msix43: PCI-MSI-X enabled: IRQ 180 [ 7.258647] mpt3sas0-msix44: PCI-MSI-X enabled: IRQ 181 [ 7.258648] mpt3sas0-msix45: PCI-MSI-X enabled: IRQ 182 [ 7.258648] mpt3sas0-msix46: PCI-MSI-X enabled: IRQ 183 [ 7.258649] mpt3sas0-msix47: PCI-MSI-X enabled: IRQ 184 [ 7.258650] mpt3sas_cm0: iomem(0x00000000ac000000), mapped(0xffffadf81a200000), size(1048576) [ 7.258651] mpt3sas_cm0: ioport(0x0000000000008000), size(256) [ 7.313344] mlx5_core 0000:01:00.0: 126.016 Gb/s available PCIe bandwidth, limited by 8 GT/s x16 link at 0000:00:03.1 (capable of 252.048 Gb/s with 16 GT/s x16 link) [ 7.338103] mpt3sas_cm0: IOC Number : 0 [ 7.338104] mpt3sas_cm0: CurrentHostPageSize is 0: Setting default host page size to 4k [ 7.461111] megaraid_sas 0000:c1:00.0: Init cmd return status SUCCESS for SCSI host 0 [ 7.482109] megaraid_sas 0000:c1:00.0: firmware type : Legacy(64 VD) firmware [ 7.482110] megaraid_sas 0000:c1:00.0: controller type : iMR(0MB) [ 7.482112] megaraid_sas 0000:c1:00.0: Online Controller Reset(OCR) : Enabled [ 7.482113] megaraid_sas 0000:c1:00.0: Secure JBOD support : No [ 7.482114] megaraid_sas 0000:c1:00.0: NVMe passthru support : No [ 7.503624] megaraid_sas 0000:c1:00.0: INIT adapter done [ 7.503627] megaraid_sas 0000:c1:00.0: Jbod map is not supported megasas_setup_jbod_map 5146 [ 7.511597] mpt3sas_cm0: Allocated physical memory: size(38831 kB) [ 7.511599] mpt3sas_cm0: Current Controller Queue Depth(7564), Max Controller Queue Depth(7680) [ 7.511599] mpt3sas_cm0: Scatter Gather Elements per IO(128) [ 7.529913] megaraid_sas 0000:c1:00.0: pci id : (0x1000)/(0x005f)/(0x1028)/(0x1f4b) [ 7.529915] megaraid_sas 0000:c1:00.0: unevenspan support : yes [ 7.529916] megaraid_sas 0000:c1:00.0: firmware crash dump : no [ 7.529916] megaraid_sas 0000:c1:00.0: jbod sync map : no [ 7.529921] scsi host0: Avago SAS based MegaRAID driver [ 7.546125] ata1: SATA link down (SStatus 0 SControl 300) [ 7.550063] scsi 0:2:0:0: Direct-Access DELL PERC H330 Mini 
4.30 PQ: 0 ANSI: 5 [ 7.570867] mlx5_core 0000:01:00.0: irq 185 for MSI/MSI-X [ 7.570888] mlx5_core 0000:01:00.0: irq 186 for MSI/MSI-X [ 7.570909] mlx5_core 0000:01:00.0: irq 187 for MSI/MSI-X [ 7.570930] mlx5_core 0000:01:00.0: irq 188 for MSI/MSI-X [ 7.570952] mlx5_core 0000:01:00.0: irq 189 for MSI/MSI-X [ 7.570973] mlx5_core 0000:01:00.0: irq 190 for MSI/MSI-X [ 7.570991] mlx5_core 0000:01:00.0: irq 191 for MSI/MSI-X [ 7.571010] mlx5_core 0000:01:00.0: irq 192 for MSI/MSI-X [ 7.571031] mlx5_core 0000:01:00.0: irq 193 for MSI/MSI-X [ 7.571050] mlx5_core 0000:01:00.0: irq 194 for MSI/MSI-X [ 7.571067] mlx5_core 0000:01:00.0: irq 195 for MSI/MSI-X [ 7.571086] mlx5_core 0000:01:00.0: irq 196 for MSI/MSI-X [ 7.571112] mlx5_core 0000:01:00.0: irq 197 for MSI/MSI-X [ 7.571132] mlx5_core 0000:01:00.0: irq 198 for MSI/MSI-X [ 7.571150] mlx5_core 0000:01:00.0: irq 199 for MSI/MSI-X [ 7.571169] mlx5_core 0000:01:00.0: irq 200 for MSI/MSI-X [ 7.571187] mlx5_core 0000:01:00.0: irq 201 for MSI/MSI-X [ 7.571205] mlx5_core 0000:01:00.0: irq 202 for MSI/MSI-X [ 7.571225] mlx5_core 0000:01:00.0: irq 203 for MSI/MSI-X [ 7.571246] mlx5_core 0000:01:00.0: irq 204 for MSI/MSI-X [ 7.571264] mlx5_core 0000:01:00.0: irq 205 for MSI/MSI-X [ 7.571285] mlx5_core 0000:01:00.0: irq 206 for MSI/MSI-X [ 7.571302] mlx5_core 0000:01:00.0: irq 207 for MSI/MSI-X [ 7.571321] mlx5_core 0000:01:00.0: irq 208 for MSI/MSI-X [ 7.571340] mlx5_core 0000:01:00.0: irq 209 for MSI/MSI-X [ 7.571358] mlx5_core 0000:01:00.0: irq 210 for MSI/MSI-X [ 7.571377] mlx5_core 0000:01:00.0: irq 211 for MSI/MSI-X [ 7.571397] mlx5_core 0000:01:00.0: irq 212 for MSI/MSI-X [ 7.571421] mlx5_core 0000:01:00.0: irq 213 for MSI/MSI-X [ 7.571446] mlx5_core 0000:01:00.0: irq 214 for MSI/MSI-X [ 7.571466] mlx5_core 0000:01:00.0: irq 215 for MSI/MSI-X [ 7.571484] mlx5_core 0000:01:00.0: irq 216 for MSI/MSI-X [ 7.571502] mlx5_core 0000:01:00.0: irq 217 for MSI/MSI-X [ 7.571520] mlx5_core 0000:01:00.0: irq 218 for MSI/MSI-X [ 7.571544] mlx5_core 0000:01:00.0: irq 219 for MSI/MSI-X [ 7.571564] mlx5_core 0000:01:00.0: irq 220 for MSI/MSI-X [ 7.571581] mlx5_core 0000:01:00.0: irq 221 for MSI/MSI-X [ 7.571600] mlx5_core 0000:01:00.0: irq 222 for MSI/MSI-X [ 7.571618] mlx5_core 0000:01:00.0: irq 223 for MSI/MSI-X [ 7.571637] mlx5_core 0000:01:00.0: irq 224 for MSI/MSI-X [ 7.571655] mlx5_core 0000:01:00.0: irq 225 for MSI/MSI-X [ 7.571674] mlx5_core 0000:01:00.0: irq 226 for MSI/MSI-X [ 7.571692] mlx5_core 0000:01:00.0: irq 227 for MSI/MSI-X [ 7.571711] mlx5_core 0000:01:00.0: irq 228 for MSI/MSI-X [ 7.571730] mlx5_core 0000:01:00.0: irq 229 for MSI/MSI-X [ 7.571749] mlx5_core 0000:01:00.0: irq 230 for MSI/MSI-X [ 7.571766] mlx5_core 0000:01:00.0: irq 231 for MSI/MSI-X [ 7.571784] mlx5_core 0000:01:00.0: irq 232 for MSI/MSI-X [ 7.571804] mlx5_core 0000:01:00.0: irq 233 for MSI/MSI-X [ 7.573112] mlx5_core 0000:01:00.0: Port module event: module 0, Cable plugged [ 7.573360] mlx5_core 0000:01:00.0: mlx5_pcie_event:303:(pid 317): PCIe slot advertised sufficient power (27W). 
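
The long runs of "irq N for MSI/MSI-X" lines above (megaraid_sas, mpt3sas, mlx5_core) each correspond to a row in /proc/interrupts. A rough way to count vectors per handler after boot (rough because the last whitespace-separated token only approximates the handler name):

from collections import Counter

counts = Counter()
with open("/proc/interrupts") as f:
    next(f)  # skip the per-CPU header row
    for line in f:
        fields = line.split()
        if fields and fields[0].rstrip(":").isdigit():
            counts[fields[-1]] += 1  # rough: last token as the handler name
for name, n in counts.most_common(10):
    print(f"{n:4d}  {name}")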
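
The mlx5_core bandwidth line above is link arithmetic: 8 GT/s per lane over 16 lanes with 128b/130b encoding gives roughly the reported 126 Gb/s, and 16 GT/s doubles it to roughly 252 Gb/s (the kernel's per-speed lookup table rounds slightly differently from the raw formula):

def pcie_gbps(gt_per_s, lanes, enc=128 / 130):
    # usable bandwidth = signalling rate x lane count x encoding efficiency
    return gt_per_s * lanes * enc

print(f"{pcie_gbps(8, 16):.3f} Gb/s")   # ~126.031, vs the logged 126.016
print(f"{pcie_gbps(16, 16):.3f} Gb/s")  # ~252.062, vs the logged 252.048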
[ 7.581097] mlx5_core 0000:01:00.0: mlx5_fw_tracer_start:776:(pid 326): FWTracer: Ownership granted and active [ 7.660712] mpt3sas_cm0: FW Package Version(12.00.00.00) [ 7.660962] mpt3sas_cm0: SAS3616: FWVersion(12.00.00.00), ChipRevision(0x02), BiosVersion(00.00.00.00) [ 7.660966] mpt3sas_cm0: Protocol=(Initiator,Target,NVMe), Capabilities=(TLR,EEDP,Diag Trace Buffer,Task Set Full,NCQ) [ 7.661032] mpt3sas 0000:84:00.0: Enabled Extended Tags as Controller Supports [ 7.661047] mpt3sas_cm0: : host protection capabilities enabled DIF1 DIF2 DIF3 [ 7.661057] scsi host1: Fusion MPT SAS Host [ 7.661304] mpt3sas_cm0: sending port enable !! [ 7.812313] mlx5_ib: Mellanox Connect-IB Infiniband driver v4.7-1.0.0 [ 7.870665] sd 0:2:0:0: [sda] 233308160 512-byte logical blocks: (119 GB/111 GiB) [ 7.878352] sd 0:2:0:0: [sda] Write Protect is off [ 7.883190] sd 0:2:0:0: [sda] Mode Sense: 1f 00 10 08 [ 7.883231] sd 0:2:0:0: [sda] Write cache: disabled, read cache: disabled, supports DPO and FUA [ 7.893596] sda: sda1 sda2 sda3 [ 7.897321] sd 0:2:0:0: [sda] Attached SCSI disk [ 8.015940] random: crng init done [ 9.806790] mpt3sas_cm0: hba_port entry: ffff9e7db3a79240, port: 255 is added to hba_port list [ 9.818263] mpt3sas_cm0: host_add: handle(0x0001), sas_addr(0x500605b00db90900), phys(21) [ 9.827044] mpt3sas_cm0: detecting: handle(0x0011), sas_address(0x300705b00db90900), phy(16) [ 9.835484] mpt3sas_cm0: REPORT_LUNS: handle(0x0011), retries(0) [ 9.841528] mpt3sas_cm0: TEST_UNIT_READY: handle(0x0011), lun(0) [ 9.847992] scsi 1:0:0:0: Enclosure LSI VirtualSES 03 PQ: 0 ANSI: 7 [ 9.856132] scsi 1:0:0:0: set ignore_delay_remove for handle(0x0011) [ 9.862484] scsi 1:0:0:0: SES: handle(0x0011), sas_addr(0x300705b00db90900), phy(16), device_name(0x300705b00db90900) [ 9.873081] scsi 1:0:0:0: enclosure logical id(0x300605b00d110900), slot(16) [ 9.880215] scsi 1:0:0:0: enclosure level(0x0000), connector name( C3 ) [ 9.886935] scsi 1:0:0:0: serial_number(300605B00D110900) [ 9.892336] scsi 1:0:0:0: qdepth(1), tagged(0), simple(0), ordered(0), scsi_level(8), cmd_que(0) [ 9.901152] mpt3sas_cm0: log_info(0x31200206): originator(PL), code(0x20), sub_code(0x0206) [ 9.923717] mpt3sas_cm0: detecting: handle(0x0017), sas_address(0x500a0984dfa20c20), phy(0) [ 9.932079] mpt3sas_cm0: REPORT_LUNS: handle(0x0017), retries(0) [ 9.938226] mpt3sas_cm0: REPORT_LUNS: handle(0x0017), retries(1) [ 9.945347] mpt3sas_cm0: TEST_UNIT_READY: handle(0x0017), lun(0) [ 9.951717] mpt3sas_cm0: detecting: handle(0x0017), sas_address(0x500a0984dfa20c20), phy(0) [ 9.960080] mpt3sas_cm0: REPORT_LUNS: handle(0x0017), retries(0) [ 9.967277] mpt3sas_cm0: TEST_UNIT_READY: handle(0x0017), lun(0) [ 9.973892] scsi 1:0:1:0: Direct-Access DELL MD34xx 0825 PQ: 0 ANSI: 5 [ 9.982217] scsi 1:0:1:0: SSP: handle(0x0017), sas_addr(0x500a0984dfa20c20), phy(0), device_name(0x500a0984dfa20c20) [ 9.992729] scsi 1:0:1:0: enclosure logical id(0x300605b00d110900), slot(13) [ 9.999861] scsi 1:0:1:0: enclosure level(0x0000), connector name( C3 ) [ 10.006579] scsi 1:0:1:0: serial_number(021825001558 ) [ 10.011980] scsi 1:0:1:0: qdepth(254), tagged(1), simple(0), ordered(0), scsi_level(6), cmd_que(1) [ 10.106015] scsi 1:0:1:1: Direct-Access DELL MD34xx 0825 PQ: 0 ANSI: 5 [ 10.114184] scsi 1:0:1:1: SSP: handle(0x0017), sas_addr(0x500a0984dfa20c20), phy(0), device_name(0x500a0984dfa20c20) [ 10.124700] scsi 1:0:1:1: enclosure logical id(0x300605b00d110900), slot(13) [ 10.131833] scsi 1:0:1:1: enclosure level(0x0000), connector name( C3 ) [ 10.138537] scsi 1:0:1:1: 
serial_number(021825001558 ) [ 10.143941] scsi 1:0:1:1: qdepth(254), tagged(1), simple(0), ordered(0), scsi_level(6), cmd_que(1) [ 10.166383] scsi 1:0:1:31: Direct-Access DELL Universal Xport 0825 PQ: 0 ANSI: 5 [ 10.174638] scsi 1:0:1:31: SSP: handle(0x0017), sas_addr(0x500a0984dfa20c20), phy(0), device_name(0x500a0984dfa20c20) [ 10.185239] scsi 1:0:1:31: enclosure logical id(0x300605b00d110900), slot(13) [ 10.192457] scsi 1:0:1:31: enclosure level(0x0000), connector name( C3 ) [ 10.199262] scsi 1:0:1:31: serial_number(021825001558 ) [ 10.204750] scsi 1:0:1:31: qdepth(254), tagged(1), simple(0), ordered(0), scsi_level(6), cmd_que(1) [ 10.227702] mpt3sas_cm0: detecting: handle(0x0018), sas_address(0x500a0984da0f9b20), phy(8) [ 10.236055] mpt3sas_cm0: REPORT_LUNS: handle(0x0018), retries(0) [ 10.263556] mpt3sas_cm0: REPORT_LUNS: handle(0x0018), retries(1) [ 10.270586] mpt3sas_cm0: TEST_UNIT_READY: handle(0x0018), lun(0) [ 10.276913] mpt3sas_cm0: detecting: handle(0x0018), sas_address(0x500a0984da0f9b20), phy(8) [ 10.285281] mpt3sas_cm0: REPORT_LUNS: handle(0x0018), retries(0) [ 10.291920] mpt3sas_cm0: TEST_UNIT_READY: handle(0x0018), lun(0) [ 10.298462] scsi 1:0:2:0: Direct-Access DELL MD34xx 0825 PQ: 0 ANSI: 5 [ 10.306649] scsi 1:0:2:0: SSP: handle(0x0018), sas_addr(0x500a0984da0f9b20), phy(8), device_name(0x500a0984da0f9b20) [ 10.317165] scsi 1:0:2:0: enclosure logical id(0x300605b00d110900), slot(5) [ 10.324212] scsi 1:0:2:0: enclosure level(0x0000), connector name( C1 ) [ 10.330930] scsi 1:0:2:0: serial_number(021812047179 ) [ 10.336329] scsi 1:0:2:0: qdepth(254), tagged(1), simple(0), ordered(0), scsi_level(6), cmd_que(1) [ 10.357086] scsi 1:0:2:1: Direct-Access DELL MD34xx 0825 PQ: 0 ANSI: 5 [ 10.365248] scsi 1:0:2:1: SSP: handle(0x0018), sas_addr(0x500a0984da0f9b20), phy(8), device_name(0x500a0984da0f9b20) [ 10.375762] scsi 1:0:2:1: enclosure logical id(0x300605b00d110900), slot(5) [ 10.382809] scsi 1:0:2:1: enclosure level(0x0000), connector name( C1 ) [ 10.389526] scsi 1:0:2:1: serial_number(021812047179 ) [ 10.394928] scsi 1:0:2:1: qdepth(254), tagged(1), simple(0), ordered(0), scsi_level(6), cmd_que(1) [ 10.417384] scsi 1:0:2:2: Direct-Access DELL MD34xx 0825 PQ: 0 ANSI: 5 [ 10.425541] scsi 1:0:2:2: SSP: handle(0x0018), sas_addr(0x500a0984da0f9b20), phy(8), device_name(0x500a0984da0f9b20) [ 10.436049] scsi 1:0:2:2: enclosure logical id(0x300605b00d110900), slot(5) [ 10.443094] scsi 1:0:2:2: enclosure level(0x0000), connector name( C1 ) [ 10.449812] scsi 1:0:2:2: serial_number(021812047179 ) [ 10.455214] scsi 1:0:2:2: qdepth(254), tagged(1), simple(0), ordered(0), scsi_level(6), cmd_que(1) [ 10.477382] scsi 1:0:2:31: Direct-Access DELL Universal Xport 0825 PQ: 0 ANSI: 5 [ 10.485628] scsi 1:0:2:31: SSP: handle(0x0018), sas_addr(0x500a0984da0f9b20), phy(8), device_name(0x500a0984da0f9b20) [ 10.496223] scsi 1:0:2:31: enclosure logical id(0x300605b00d110900), slot(5) [ 10.503357] scsi 1:0:2:31: enclosure level(0x0000), connector name( C1 ) [ 10.510161] scsi 1:0:2:31: serial_number(021812047179 ) [ 10.515650] scsi 1:0:2:31: qdepth(254), tagged(1), simple(0), ordered(0), scsi_level(6), cmd_que(1) [ 10.538841] mpt3sas_cm0: detecting: handle(0x0019), sas_address(0x500a0984db2fa914), phy(12) [ 10.547276] mpt3sas_cm0: REPORT_LUNS: handle(0x0019), retries(0) [ 10.553420] mpt3sas_cm0: REPORT_LUNS: handle(0x0019), retries(1) [ 10.562038] mpt3sas_cm0: TEST_UNIT_READY: handle(0x0019), lun(0) [ 10.568365] mpt3sas_cm0: detecting: handle(0x0019), sas_address(0x500a0984db2fa914), phy(12) [ 10.576857] 
mpt3sas_cm0: REPORT_LUNS: handle(0x0019), retries(0) [ 10.585544] mpt3sas_cm0: TEST_UNIT_READY: handle(0x0019), lun(0) [ 10.591836] mpt3sas_cm0: detecting: handle(0x0019), sas_address(0x500a0984db2fa914), phy(12) [ 10.600286] mpt3sas_cm0: REPORT_LUNS: handle(0x0019), retries(0) [ 10.607036] mpt3sas_cm0: TEST_UNIT_READY: handle(0x0019), lun(0) [ 10.613604] scsi 1:0:3:0: Direct-Access DELL MD34xx 0825 PQ: 0 ANSI: 5 [ 10.621787] scsi 1:0:3:0: SSP: handle(0x0019), sas_addr(0x500a0984db2fa914), phy(12), device_name(0x500a0984db2fa914) [ 10.632389] scsi 1:0:3:0: enclosure logical id(0x300605b00d110900), slot(1) [ 10.639436] scsi 1:0:3:0: enclosure level(0x0000), connector name( C0 ) [ 10.646152] scsi 1:0:3:0: serial_number(021815000354 ) [ 10.651556] scsi 1:0:3:0: qdepth(254), tagged(1), simple(0), ordered(0), scsi_level(6), cmd_que(1) [ 10.672132] scsi 1:0:3:1: Direct-Access DELL MD34xx 0825 PQ: 0 ANSI: 5 [ 10.680300] scsi 1:0:3:1: SSP: handle(0x0019), sas_addr(0x500a0984db2fa914), phy(12), device_name(0x500a0984db2fa914) [ 10.690900] scsi 1:0:3:1: enclosure logical id(0x300605b00d110900), slot(1) [ 10.697947] scsi 1:0:3:1: enclosure level(0x0000), connector name( C0 ) [ 10.704663] scsi 1:0:3:1: serial_number(021815000354 ) [ 10.710067] scsi 1:0:3:1: qdepth(254), tagged(1), simple(0), ordered(0), scsi_level(6), cmd_que(1) [ 10.719255] scsi 1:0:3:1: Mode parameters changed [ 10.735397] scsi 1:0:3:2: Direct-Access DELL MD34xx 0825 PQ: 0 ANSI: 5 [ 10.743575] scsi 1:0:3:2: SSP: handle(0x0019), sas_addr(0x500a0984db2fa914), phy(12), device_name(0x500a0984db2fa914) [ 10.754176] scsi 1:0:3:2: enclosure logical id(0x300605b00d110900), slot(1) [ 10.761222] scsi 1:0:3:2: enclosure level(0x0000), connector name( C0 ) [ 10.767940] scsi 1:0:3:2: serial_number(021815000354 ) [ 10.773343] scsi 1:0:3:2: qdepth(254), tagged(1), simple(0), ordered(0), scsi_level(6), cmd_que(1) [ 10.782525] scsi 1:0:3:2: Mode parameters changed [ 10.798394] scsi 1:0:3:31: Direct-Access DELL Universal Xport 0825 PQ: 0 ANSI: 5 [ 10.806664] scsi 1:0:3:31: SSP: handle(0x0019), sas_addr(0x500a0984db2fa914), phy(12), device_name(0x500a0984db2fa914) [ 10.817350] scsi 1:0:3:31: enclosure logical id(0x300605b00d110900), slot(1) [ 10.824484] scsi 1:0:3:31: enclosure level(0x0000), connector name( C0 ) [ 10.831287] scsi 1:0:3:31: serial_number(021815000354 ) [ 10.836775] scsi 1:0:3:31: qdepth(254), tagged(1), simple(0), ordered(0), scsi_level(6), cmd_que(1) [ 10.857909] mpt3sas_cm0: detecting: handle(0x001a), sas_address(0x500a0984dfa1fa14), phy(4) [ 10.866264] mpt3sas_cm0: REPORT_LUNS: handle(0x001a), retries(0) [ 10.872407] mpt3sas_cm0: REPORT_LUNS: handle(0x001a), retries(1) [ 10.879597] mpt3sas_cm0: TEST_UNIT_READY: handle(0x001a), lun(0) [ 10.885907] mpt3sas_cm0: detecting: handle(0x001a), sas_address(0x500a0984dfa1fa14), phy(4) [ 10.894268] mpt3sas_cm0: REPORT_LUNS: handle(0x001a), retries(0) [ 10.901156] mpt3sas_cm0: TEST_UNIT_READY: handle(0x001a), lun(0) [ 10.907451] mpt3sas_cm0: detecting: handle(0x001a), sas_address(0x500a0984dfa1fa14), phy(4) [ 10.915814] mpt3sas_cm0: REPORT_LUNS: handle(0x001a), retries(0) [ 10.922591] mpt3sas_cm0: TEST_UNIT_READY: handle(0x001a), lun(0) [ 10.929159] scsi 1:0:4:0: Direct-Access DELL MD34xx 0825 PQ: 0 ANSI: 5 [ 10.937349] scsi 1:0:4:0: SSP: handle(0x001a), sas_addr(0x500a0984dfa1fa14), phy(4), device_name(0x500a0984dfa1fa14) [ 10.947864] scsi 1:0:4:0: enclosure logical id(0x300605b00d110900), slot(9) [ 10.954910] scsi 1:0:4:0: enclosure level(0x0000), connector name( C2 ) [ 10.961630] scsi 
1:0:4:0: serial_number(021825001369 ) [ 10.967031] scsi 1:0:4:0: qdepth(254), tagged(1), simple(0), ordered(0), scsi_level(6), cmd_que(1) [ 10.990143] scsi 1:0:4:1: Direct-Access DELL MD34xx 0825 PQ: 0 ANSI: 5 [ 10.998309] scsi 1:0:4:1: SSP: handle(0x001a), sas_addr(0x500a0984dfa1fa14), phy(4), device_name(0x500a0984dfa1fa14) [ 11.008820] scsi 1:0:4:1: enclosure logical id(0x300605b00d110900), slot(9) [ 11.015867] scsi 1:0:4:1: enclosure level(0x0000), connector name( C2 ) [ 11.022581] scsi 1:0:4:1: serial_number(021825001369 ) [ 11.027985] scsi 1:0:4:1: qdepth(254), tagged(1), simple(0), ordered(0), scsi_level(6), cmd_que(1) [ 11.037174] scsi 1:0:4:1: Mode parameters changed [ 11.052399] scsi 1:0:4:31: Direct-Access DELL Universal Xport 0825 PQ: 0 ANSI: 5 [ 11.060662] scsi 1:0:4:31: SSP: handle(0x001a), sas_addr(0x500a0984dfa1fa14), phy(4), device_name(0x500a0984dfa1fa14) [ 11.071265] scsi 1:0:4:31: enclosure logical id(0x300605b00d110900), slot(9) [ 11.078396] scsi 1:0:4:31: enclosure level(0x0000), connector name( C2 ) [ 11.085203] scsi 1:0:4:31: serial_number(021825001369 ) [ 11.090689] scsi 1:0:4:31: qdepth(254), tagged(1), simple(0), ordered(0), scsi_level(6), cmd_que(1) [ 15.814296] mpt3sas_cm0: port enable: SUCCESS [ 15.819272] scsi 1:0:1:0: rdac: LUN 0 (IOSHIP) (unowned) [ 15.824874] sd 1:0:1:0: [sdb] 37449707520 512-byte logical blocks: (19.1 TB/17.4 TiB) [ 15.832945] scsi 1:0:1:1: rdac: LUN 1 (IOSHIP) (owned) [ 15.838274] sd 1:0:1:0: [sdb] Write Protect is off [ 15.843093] sd 1:0:1:0: [sdb] Mode Sense: 83 00 10 08 [ 15.843095] sd 1:0:1:1: [sdc] 37449707520 512-byte logical blocks: (19.1 TB/17.4 TiB) [ 15.843235] sd 1:0:1:0: [sdb] Write cache: enabled, read cache: enabled, supports DPO and FUA [ 15.843550] scsi 1:0:2:0: rdac: LUN 0 (IOSHIP) (unowned) [ 15.843798] sd 1:0:2:0: [sdd] 926167040 512-byte logical blocks: (474 GB/441 GiB) [ 15.843800] sd 1:0:2:0: [sdd] 4096-byte physical blocks [ 15.844094] scsi 1:0:2:1: rdac: LUN 1 (IOSHIP) (owned) [ 15.844340] sd 1:0:2:0: [sdd] Write Protect is off [ 15.844342] sd 1:0:2:0: [sdd] Mode Sense: 83 00 10 08 [ 15.844426] sd 1:0:2:1: [sde] 37449707520 512-byte logical blocks: (19.1 TB/17.4 TiB) [ 15.844629] sd 1:0:2:0: [sdd] Write cache: enabled, read cache: enabled, supports DPO and FUA [ 15.844876] scsi 1:0:2:2: rdac: LUN 2 (IOSHIP) (unowned) [ 15.845161] sd 1:0:2:1: [sde] Write Protect is off [ 15.845162] sd 1:0:2:1: [sde] Mode Sense: 83 00 10 08 [ 15.845219] sd 1:0:2:2: [sdf] 37449707520 512-byte logical blocks: (19.1 TB/17.4 TiB) [ 15.845397] sd 1:0:2:1: [sde] Write cache: enabled, read cache: enabled, supports DPO and FUA [ 15.845642] scsi 1:0:3:0: rdac: LUN 0 (IOSHIP) (owned) [ 15.845898] sd 1:0:2:2: [sdf] Write Protect is off [ 15.845899] sd 1:0:2:2: [sdf] Mode Sense: 83 00 10 08 [ 15.845950] sd 1:0:3:0: [sdg] 926167040 512-byte logical blocks: (474 GB/441 GiB) [ 15.845951] sd 1:0:3:0: [sdg] 4096-byte physical blocks [ 15.846163] sd 1:0:2:2: [sdf] Write cache: enabled, read cache: enabled, supports DPO and FUA [ 15.846255] scsi 1:0:3:1: rdac: LUN 1 (IOSHIP) (unowned) [ 15.846485] sd 1:0:3:0: [sdg] Write Protect is off [ 15.846487] sd 1:0:3:0: [sdg] Mode Sense: 83 00 10 08 [ 15.846639] sd 1:0:3:0: [sdg] Write cache: enabled, read cache: enabled, supports DPO and FUA [ 15.846659] sd 1:0:1:0: [sdb] Attached SCSI disk [ 15.847185] sd 1:0:3:1: [sdh] 37449707520 512-byte logical blocks: (19.1 TB/17.4 TiB) [ 15.847871] scsi 1:0:3:2: rdac: LUN 2 (IOSHIP) (owned) [ 15.848070] sd 1:0:3:1: [sdh] Write Protect is off [ 15.848072] sd 1:0:3:1: 
[sdh] Mode Sense: 83 00 10 08 [ 15.848330] sd 1:0:3:2: [sdi] 37449707520 512-byte logical blocks: (19.1 TB/17.4 TiB) [ 15.848370] sd 1:0:3:1: [sdh] Write cache: enabled, read cache: enabled, supports DPO and FUA [ 15.848599] scsi 1:0:4:0: rdac: LUN 0 (IOSHIP) (owned) [ 15.848890] sd 1:0:4:0: [sdj] 37449707520 512-byte logical blocks: (19.1 TB/17.4 TiB) [ 15.848936] sd 1:0:2:1: [sde] Attached SCSI disk [ 15.849237] scsi 1:0:4:1: rdac: LUN 1 (IOSHIP) (unowned) [ 15.849361] sd 1:0:3:2: [sdi] Write Protect is off [ 15.849363] sd 1:0:3:2: [sdi] Mode Sense: 83 00 10 08 [ 15.849466] sd 1:0:4:0: [sdj] Write Protect is off [ 15.849467] sd 1:0:4:0: [sdj] Mode Sense: 83 00 10 08 [ 15.849485] sd 1:0:2:0: [sdd] Attached SCSI disk [ 15.849508] sd 1:0:4:1: [sdk] 37449707520 512-byte logical blocks: (19.1 TB/17.4 TiB) [ 15.849669] sd 1:0:4:0: [sdj] Write cache: enabled, read cache: enabled, supports DPO and FUA [ 15.849908] sd 1:0:3:2: [sdi] Write cache: enabled, read cache: enabled, supports DPO and FUA [ 15.850126] sd 1:0:4:1: [sdk] Write Protect is off [ 15.850128] sd 1:0:4:1: [sdk] Mode Sense: 83 00 10 08 [ 15.850268] sd 1:0:4:1: [sdk] Write cache: enabled, read cache: enabled, supports DPO and FUA [ 15.851210] sd 1:0:3:0: [sdg] Attached SCSI disk [ 15.851542] sd 1:0:2:2: [sdf] Attached SCSI disk [ 15.853915] sd 1:0:4:0: [sdj] Attached SCSI disk [ 15.854393] sd 1:0:3:2: [sdi] Attached SCSI disk [ 15.854700] sd 1:0:3:1: [sdh] Attached SCSI disk [ 15.855845] sd 1:0:4:1: [sdk] Attached SCSI disk [ 16.122179] sd 1:0:1:1: [sdc] Write Protect is off [ 16.126974] sd 1:0:1:1: [sdc] Mode Sense: 83 00 10 08 [ 16.127112] sd 1:0:1:1: [sdc] Write cache: enabled, read cache: enabled, supports DPO and FUA [ 16.137653] sd 1:0:1:1: [sdc] Attached SCSI disk [ 16.219634] EXT4-fs (sda2): mounted filesystem with ordered data mode. Opts: (null) [ 16.442370] systemd-journald[368]: Received SIGTERM from PID 1 (systemd). [ 16.470817] SELinux: Disabled at runtime. [ 16.475354] SELinux: Unregistering netfilter hooks [ 16.516320] type=1404 audit(1575991769.002:2): selinux=0 auid=4294967295 ses=4294967295 [ 16.544007] ip_tables: (C) 2000-2006 Netfilter Core Team [ 16.550126] systemd[1]: Inserted module 'ip_tables' [ 16.642003] EXT4-fs (sda2): re-mounted. 
Opts: (null) [ 16.654295] systemd-journald[4904]: Received request to flush runtime journal from PID 1 [ 16.713008] ACPI Error: No handler for Region [SYSI] (ffff9e5e69e98a68) [IPMI] (20130517/evregion-162) [ 16.726410] ACPI Error: Region IPMI (ID=7) has no handler (20130517/exfldio-305) [ 16.742295] ipmi message handler version 39.2 [ 16.738019] ACPI Error: Method parse/execution failed [\_SB_.PMI0._GHL] (Node ffff9e5e69e795a0), AE_NOT_EXIST (20130517/psparse-536) [ 16.742692] ACPI Error: Method parse/execution failed [\_SB_.PMI0._PMC] (Node ffff9e5e69e79500), AE_NOT_EXIST (20130517/psparse-536) [ 16.742705] ACPI Exception: AE_NOT_EXIST, Evaluating _PMC (20130517/power_meter-753) [ 16.791449] piix4_smbus 0000:00:14.0: SMBus Host Controller at 0xb00, revision 0 [ 16.799547] piix4_smbus 0000:00:14.0: Using register 0x2e for SMBus port selection [ 16.811087] input: PC Speaker as /devices/platform/pcspkr/input/input2 [ 16.817920] ipmi device interface [ 16.818018] ccp 0000:02:00.2: 3 command queues available [ 16.818065] ccp 0000:02:00.2: irq 235 for MSI/MSI-X [ 16.818085] ccp 0000:02:00.2: irq 236 for MSI/MSI-X [ 16.818128] ccp 0000:02:00.2: Queue 2 can access 4 LSB regions [ 16.818131] ccp 0000:02:00.2: Queue 3 can access 4 LSB regions [ 16.818133] ccp 0000:02:00.2: Queue 4 can access 4 LSB regions [ 16.818135] ccp 0000:02:00.2: Queue 0 gets LSB 4 [ 16.818136] ccp 0000:02:00.2: Queue 1 gets LSB 5 [ 16.818137] ccp 0000:02:00.2: Queue 2 gets LSB 6 [ 16.837039] ccp 0000:02:00.2: enabled [ 16.837288] ccp 0000:03:00.1: 5 command queues available [ 16.837393] ccp 0000:03:00.1: irq 238 for MSI/MSI-X [ 16.837428] ccp 0000:03:00.1: Queue 0 can access 7 LSB regions [ 16.837430] ccp 0000:03:00.1: Queue 1 can access 7 LSB regions [ 16.837432] ccp 0000:03:00.1: Queue 2 can access 7 LSB regions [ 16.837435] ccp 0000:03:00.1: Queue 3 can access 7 LSB regions [ 16.837438] ccp 0000:03:00.1: Queue 4 can access 7 LSB regions [ 16.837441] ccp 0000:03:00.1: Queue 0 gets LSB 1 [ 16.837442] ccp 0000:03:00.1: Queue 1 gets LSB 2 [ 16.837443] ccp 0000:03:00.1: Queue 2 gets LSB 3 [ 16.837444] ccp 0000:03:00.1: Queue 3 gets LSB 4 [ 16.837446] ccp 0000:03:00.1: Queue 4 gets LSB 5 [ 16.837986] ccp 0000:03:00.1: enabled [ 16.838237] ccp 0000:41:00.2: 3 command queues available [ 16.838287] ccp 0000:41:00.2: irq 240 for MSI/MSI-X [ 16.838308] ccp 0000:41:00.2: irq 241 for MSI/MSI-X [ 16.838360] ccp 0000:41:00.2: Queue 2 can access 4 LSB regions [ 16.838361] ccp 0000:41:00.2: Queue 3 can access 4 LSB regions [ 16.838363] ccp 0000:41:00.2: Queue 4 can access 4 LSB regions [ 16.838365] ccp 0000:41:00.2: Queue 0 gets LSB 4 [ 16.838366] ccp 0000:41:00.2: Queue 1 gets LSB 5 [ 16.838367] ccp 0000:41:00.2: Queue 2 gets LSB 6 [ 16.838728] ccp 0000:41:00.2: enabled [ 16.838883] ccp 0000:42:00.1: 5 command queues available [ 16.838925] ccp 0000:42:00.1: irq 243 for MSI/MSI-X [ 16.838949] ccp 0000:42:00.1: Queue 0 can access 7 LSB regions [ 16.838950] ccp 0000:42:00.1: Queue 1 can access 7 LSB regions [ 16.838952] ccp 0000:42:00.1: Queue 2 can access 7 LSB regions [ 16.838954] ccp 0000:42:00.1: Queue 3 can access 7 LSB regions [ 16.838956] ccp 0000:42:00.1: Queue 4 can access 7 LSB regions [ 16.838958] ccp 0000:42:00.1: Queue 0 gets LSB 1 [ 16.838958] ccp 0000:42:00.1: Queue 1 gets LSB 2 [ 16.838959] ccp 0000:42:00.1: Queue 2 gets LSB 3 [ 16.838960] ccp 0000:42:00.1: Queue 3 gets LSB 4 [ 16.838961] ccp 0000:42:00.1: Queue 4 gets LSB 5 [ 16.839387] ccp
0000:42:00.1: enabled [ 16.839583] ccp 0000:85:00.2: 3 command queues available [ 16.839644] ccp 0000:85:00.2: irq 245 for MSI/MSI-X [ 16.839667] ccp 0000:85:00.2: irq 246 for MSI/MSI-X [ 16.839716] ccp 0000:85:00.2: Queue 2 can access 4 LSB regions [ 16.839718] ccp 0000:85:00.2: Queue 3 can access 4 LSB regions [ 16.839720] ccp 0000:85:00.2: Queue 4 can access 4 LSB regions [ 16.839722] ccp 0000:85:00.2: Queue 0 gets LSB 4 [ 16.839723] ccp 0000:85:00.2: Queue 1 gets LSB 5 [ 16.839725] ccp 0000:85:00.2: Queue 2 gets LSB 6 [ 16.840107] ccp 0000:85:00.2: enabled [ 16.840227] ccp 0000:86:00.1: 5 command queues available [ 16.840273] ccp 0000:86:00.1: irq 248 for MSI/MSI-X [ 16.840303] ccp 0000:86:00.1: Queue 0 can access 7 LSB regions [ 16.840305] ccp 0000:86:00.1: Queue 1 can access 7 LSB regions [ 16.840307] ccp 0000:86:00.1: Queue 2 can access 7 LSB regions [ 16.840309] ccp 0000:86:00.1: Queue 3 can access 7 LSB regions [ 16.840311] ccp 0000:86:00.1: Queue 4 can access 7 LSB regions [ 16.840312] ccp 0000:86:00.1: Queue 0 gets LSB 1 [ 16.840314] ccp 0000:86:00.1: Queue 1 gets LSB 2 [ 16.840315] ccp 0000:86:00.1: Queue 2 gets LSB 3 [ 16.840316] ccp 0000:86:00.1: Queue 3 gets LSB 4 [ 16.840324] ccp 0000:86:00.1: Queue 4 gets LSB 5 [ 16.841728] ccp 0000:86:00.1: enabled [ 16.841937] ccp 0000:c2:00.2: 3 command queues available [ 16.841986] ccp 0000:c2:00.2: irq 250 for MSI/MSI-X [ 16.842008] ccp 0000:c2:00.2: irq 251 for MSI/MSI-X [ 16.842053] ccp 0000:c2:00.2: Queue 2 can access 4 LSB regions [ 16.842055] ccp 0000:c2:00.2: Queue 3 can access 4 LSB regions [ 16.842057] ccp 0000:c2:00.2: Queue 4 can access 4 LSB regions [ 16.842058] ccp 0000:c2:00.2: Queue 0 gets LSB 4 [ 16.842060] ccp 0000:c2:00.2: Queue 1 gets LSB 5 [ 16.842061] ccp 0000:c2:00.2: Queue 2 gets LSB 6 [ 16.842361] ccp 0000:c2:00.2: enabled [ 16.842496] ccp 0000:c3:00.1: 5 command queues available [ 16.842540] ccp 0000:c3:00.1: irq 253 for MSI/MSI-X [ 16.842561] ccp 0000:c3:00.1: Queue 0 can access 7 LSB regions [ 16.842563] ccp 0000:c3:00.1: Queue 1 can access 7 LSB regions [ 16.842564] ccp 0000:c3:00.1: Queue 2 can access 7 LSB regions [ 16.842566] ccp 0000:c3:00.1: Queue 3 can access 7 LSB regions [ 16.842568] ccp 0000:c3:00.1: Queue 4 can access 7 LSB regions [ 16.842569] ccp 0000:c3:00.1: Queue 0 gets LSB 1 [ 16.842570] ccp 0000:c3:00.1: Queue 1 gets LSB 2 [ 16.842571] ccp 0000:c3:00.1: Queue 2 gets LSB 3 [ 16.842572] ccp 0000:c3:00.1: Queue 3 gets LSB 4 [ 16.842573] ccp 0000:c3:00.1: Queue 4 gets LSB 5 [ 16.843066] ccp 0000:c3:00.1: enabled [ 17.045000] cryptd: max_cpu_qlen set to 1000 [ 17.045173] device-mapper: uevent: version 1.0.3 [ 17.045367] device-mapper: ioctl: 4.37.1-ioctl (2018-04-03) initialised: dm-devel@redhat.com [ 17.121007] sd 0:2:0:0: Attached scsi generic sg0 type 0 [ 17.121054] scsi 1:0:0:0: Attached scsi generic sg1 type 13 [ 17.121185] sd 1:0:1:0: Attached scsi generic sg2 type 0 [ 17.121252] sd 1:0:1:1: Attached scsi generic sg3 type 0 [ 17.121338] scsi 1:0:1:31: Attached scsi generic sg4 type 0 [ 17.121433] sd 1:0:2:0: Attached scsi generic sg5 type 0 [ 17.121517] sd 1:0:2:1: Attached scsi generic sg6 type 0 [ 17.121744] sd 1:0:2:2: Attached scsi generic sg7 type 0 [ 17.121852] scsi 1:0:2:31: Attached scsi generic sg8 type 0 [ 17.121900] sd 1:0:3:0: Attached scsi generic sg9 type 0 [ 17.121948] sd 1:0:3:1: Attached scsi generic sg10 type 0 [ 17.121992] sd 1:0:3:2: Attached scsi generic sg11 type 0 [ 17.122034] scsi 1:0:3:31: Attached scsi generic sg12 type 0 [ 17.122095] sd 1:0:4:0: Attached scsi 
generic sg13 type 0 [ 17.122147] sd 1:0:4:1: Attached scsi generic sg14 type 0 [ 17.122185] scsi 1:0:4:31: Attached scsi generic sg15 type 0 [ 17.466890] AVX2 version of gcm_enc/dec engaged. [ 17.472270] AES CTR mode by8 optimization enabled [ 17.473655] dcdbas dcdbas: Dell Systems Management Base Driver (version 5.6.0-3.3) [ 17.509423] alg: No test for __gcm-aes-aesni (__driver-gcm-aes-aesni) [ 17.509528] alg: No test for __generic-gcm-aes-aesni (__driver-generic-gcm-aes-aesni) [ 17.533979] IPMI System Interface driver [ 17.538948] ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS [ 17.546653] ipmi_si: SMBIOS: io 0xca8 regsize 1 spacing 4 irq 10 [ 17.554050] ipmi_si: Adding SMBIOS-specified kcs state machine [ 17.561320] ipmi_si IPI0001:00: ipmi_platform: probing via ACPI [ 17.567834] ipmi_si IPI0001:00: [io 0x0ca8] regsize 1 spacing 4 irq 10 [ 17.569065] sd 1:0:1:0: Embedded Enclosure Device [ 17.571226] sd 1:0:1:1: Embedded Enclosure Device [ 17.573285] scsi 1:0:1:31: Embedded Enclosure Device [ 17.575364] sd 1:0:2:0: Embedded Enclosure Device [ 17.577561] sd 1:0:2:1: Embedded Enclosure Device [ 17.579602] sd 1:0:2:2: Embedded Enclosure Device [ 17.581648] scsi 1:0:2:31: Embedded Enclosure Device [ 17.583674] sd 1:0:3:0: Embedded Enclosure Device [ 17.585869] sd 1:0:3:1: Embedded Enclosure Device [ 17.587951] sd 1:0:3:2: Embedded Enclosure Device [ 17.588860] ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI [ 17.588861] ipmi_si: Adding ACPI-specified kcs state machine [ 17.588968] ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca8, slave address 0x20, irq 10 [ 17.590035] scsi 1:0:3:31: Embedded Enclosure Device [ 17.592106] sd 1:0:4:0: Embedded Enclosure Device [ 17.594252] sd 1:0:4:1: Embedded Enclosure Device [ 17.596305] scsi 1:0:4:31: Embedded Enclosure Device [ 17.598374] ses 1:0:0:0: Attached Enclosure device [ 17.619378] ipmi_si IPI0001:00: The BMC does not support setting the recv irq bit, compensating, but the BMC needs to be fixed. [ 17.624456] ipmi_si IPI0001:00: Using irq 10 [ 17.650047] ipmi_si IPI0001:00: Found new BMC (man_id: 0x0002a2, prod_id: 0x0100, dev_id: 0x20) [ 17.705523] kvm: Nested Paging enabled [ 17.712616] MCE: In-kernel MCE decoding enabled. [ 17.721270] AMD64 EDAC driver v3.4.0 [ 17.724889] EDAC amd64: DRAM ECC enabled. [ 17.728909] EDAC amd64: F17h detected (node 0). [ 17.729247] ipmi_si IPI0001:00: IPMI kcs interface initialized [ 17.739362] EDAC MC: UMC0 chip selects: [ 17.739365] EDAC amd64: MC: 0: 0MB 1: 0MB [ 17.744080] EDAC amd64: MC: 2: 16383MB 3: 16383MB [ 17.748792] EDAC amd64: MC: 4: 0MB 5: 0MB [ 17.753498] EDAC amd64: MC: 6: 0MB 7: 0MB [ 17.758207] EDAC MC: UMC1 chip selects: [ 17.758208] EDAC amd64: MC: 0: 0MB 1: 0MB [ 17.762920] EDAC amd64: MC: 2: 16383MB 3: 16383MB [ 17.767623] EDAC amd64: MC: 4: 0MB 5: 0MB [ 17.772331] EDAC amd64: MC: 6: 0MB 7: 0MB [ 17.777036] EDAC amd64: using x8 syndromes. [ 17.781225] EDAC amd64: MCT channel count: 2 [ 17.785669] EDAC MC0: Giving out device to 'amd64_edac' 'F17h': DEV 0000:00:18.3 [ 17.793067] EDAC amd64: DRAM ECC enabled. [ 17.797084] EDAC amd64: F17h detected (node 1). 
[ 17.801654] EDAC MC: UMC0 chip selects: [ 17.801655] EDAC amd64: MC: 0: 0MB 1: 0MB [ 17.806366] EDAC amd64: MC: 2: 16383MB 3: 16383MB [ 17.811071] EDAC amd64: MC: 4: 0MB 5: 0MB [ 17.815777] EDAC amd64: MC: 6: 0MB 7: 0MB [ 17.820486] EDAC MC: UMC1 chip selects: [ 17.820487] EDAC amd64: MC: 0: 0MB 1: 0MB [ 17.825198] EDAC amd64: MC: 2: 16383MB 3: 16383MB [ 17.829905] EDAC amd64: MC: 4: 0MB 5: 0MB [ 17.834612] EDAC amd64: MC: 6: 0MB 7: 0MB [ 17.839317] EDAC amd64: using x8 syndromes. [ 17.843504] EDAC amd64: MCT channel count: 2 [ 17.847914] EDAC MC1: Giving out device to 'amd64_edac' 'F17h': DEV 0000:00:19.3 [ 17.855316] EDAC amd64: DRAM ECC enabled. [ 17.859330] EDAC amd64: F17h detected (node 2). [ 17.863903] EDAC MC: UMC0 chip selects: [ 17.863905] EDAC amd64: MC: 0: 0MB 1: 0MB [ 17.868612] EDAC amd64: MC: 2: 16383MB 3: 16383MB [ 17.873318] EDAC amd64: MC: 4: 0MB 5: 0MB [ 17.878024] EDAC amd64: MC: 6: 0MB 7: 0MB [ 17.882731] EDAC MC: UMC1 chip selects: [ 17.882732] EDAC amd64: MC: 0: 0MB 1: 0MB [ 17.887436] EDAC amd64: MC: 2: 16383MB 3: 16383MB [ 17.892142] EDAC amd64: MC: 4: 0MB 5: 0MB [ 17.896849] EDAC amd64: MC: 6: 0MB 7: 0MB [ 17.901556] EDAC amd64: using x8 syndromes. [ 17.905740] EDAC amd64: MCT channel count: 2 [ 17.910148] EDAC MC2: Giving out device to 'amd64_edac' 'F17h': DEV 0000:00:1a.3 [ 17.917552] EDAC amd64: DRAM ECC enabled. [ 17.921567] EDAC amd64: F17h detected (node 3). [ 17.926142] EDAC MC: UMC0 chip selects: [ 17.926144] EDAC amd64: MC: 0: 0MB 1: 0MB [ 17.930848] EDAC amd64: MC: 2: 16383MB 3: 16383MB [ 17.935557] EDAC amd64: MC: 4: 0MB 5: 0MB [ 17.940262] EDAC amd64: MC: 6: 0MB 7: 0MB [ 17.944969] EDAC MC: UMC1 chip selects: [ 17.944970] EDAC amd64: MC: 0: 0MB 1: 0MB [ 17.949674] EDAC amd64: MC: 2: 16383MB 3: 16383MB [ 17.954380] EDAC amd64: MC: 4: 0MB 5: 0MB [ 17.959085] EDAC amd64: MC: 6: 0MB 7: 0MB [ 17.963792] EDAC amd64: using x8 syndromes. [ 17.967976] EDAC amd64: MCT channel count: 2 [ 17.972391] EDAC MC3: Giving out device to 'amd64_edac' 'F17h': DEV 0000:00:1b.3 [ 17.979797] EDAC PCI0: Giving out device to module 'amd64_edac' controller 'EDAC PCI controller': DEV '0000:00:18.0' (POLLED) [ 41.167045] device-mapper: multipath round-robin: version 1.2.0 loaded [ 65.480892] Adding 4194300k swap on /dev/sda3. Priority:-2 extents:1 across:4194300k FS [ 65.520761] type=1305 audit(1575991818.005:3): audit_pid=11924 old=0 auid=4294967295 ses=4294967295 res=1 [ 65.542832] RPC: Registered named UNIX socket transport module. [ 65.549016] RPC: Registered udp transport module. [ 65.555103] RPC: Registered tcp transport module. [ 65.561192] RPC: Registered tcp NFSv4.1 backchannel transport module. 
[ 66.185253] mlx5_core 0000:01:00.0: slow_pci_heuristic:5575:(pid 12214): Max link speed = 100000, PCI BW = 126016 [ 66.195585] mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) [ 66.203890] mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) [ 66.756828] tg3 0000:81:00.0: irq 254 for MSI/MSI-X [ 66.756842] tg3 0000:81:00.0: irq 255 for MSI/MSI-X [ 66.756859] tg3 0000:81:00.0: irq 256 for MSI/MSI-X [ 66.756870] tg3 0000:81:00.0: irq 257 for MSI/MSI-X [ 66.756888] tg3 0000:81:00.0: irq 258 for MSI/MSI-X [ 66.882927] IPv6: ADDRCONF(NETDEV_UP): em1: link is not ready [ 70.439958] tg3 0000:81:00.0 em1: Link is up at 1000 Mbps, full duplex [ 70.446495] tg3 0000:81:00.0 em1: Flow control is on for TX and on for RX [ 70.453294] tg3 0000:81:00.0 em1: EEE is enabled [ 70.457933] IPv6: ADDRCONF(NETDEV_CHANGE): em1: link becomes ready [ 71.239148] IPv6: ADDRCONF(NETDEV_UP): ib0: link is not ready [ 71.532710] IPv6: ADDRCONF(NETDEV_CHANGE): ib0: link becomes ready [ 75.375974] FS-Cache: Loaded [ 75.406239] FS-Cache: Netfs 'nfs' registered for caching [ 75.415227] Key type dns_resolver registered [ 75.443866] NFS: Registering the id_resolver key type [ 75.450203] Key type id_resolver registered [ 75.455680] Key type id_legacy registered [ 143.324040] LNet: HW NUMA nodes: 4, HW CPU cores: 48, npartitions: 4 [ 143.331581] alg: No test for adler32 (adler32-zlib) [ 144.131873] Lustre: Lustre: Build Version: 2.12.3_4_g142b4d4 [ 144.237699] LNet: 21556:0:(config.c:1627:lnet_inet_enumerate()) lnet: Ignoring interface em2: it's down [ 144.247476] LNet: Using FastReg for registration [ 144.263897] LNet: Added LNI 10.0.10.52@o2ib7 [8/256/0/180] [ 252.240184] LNetError: 21600:0:(o2iblnd_cb.c:3350:kiblnd_check_txs_locked()) Timed out tx: active_txs, 0 seconds [ 252.250357] LNetError: 21600:0:(o2iblnd_cb.c:3425:kiblnd_check_conns()) Timed out RDMA with 10.0.10.106@o2ib7 (106): c: 8, oc: 0, rc: 8 [ 256.240301] LNetError: 21600:0:(o2iblnd_cb.c:3350:kiblnd_check_txs_locked()) Timed out tx: active_txs, 0 seconds [ 256.250472] LNetError: 21600:0:(o2iblnd_cb.c:3350:kiblnd_check_txs_locked()) Skipped 2 previous similar messages [ 256.260640] LNetError: 21600:0:(o2iblnd_cb.c:3425:kiblnd_check_conns()) Timed out RDMA with 10.0.10.51@o2ib7 (106): c: 8, oc: 0, rc: 8 [ 256.272709] LNetError: 21600:0:(o2iblnd_cb.c:3425:kiblnd_check_conns()) Skipped 2 previous similar messages [ 266.240571] LNetError: 21600:0:(o2iblnd_cb.c:3350:kiblnd_check_txs_locked()) Timed out tx: active_txs, 0 seconds [ 266.250742] LNetError: 21600:0:(o2iblnd_cb.c:3425:kiblnd_check_conns()) Timed out RDMA with 10.0.10.111@o2ib7 (107): c: 8, oc: 0, rc: 8 [ 276.240851] LNetError: 21600:0:(o2iblnd_cb.c:3350:kiblnd_check_txs_locked()) Timed out tx: active_txs, 0 seconds [ 276.251022] LNetError: 21600:0:(o2iblnd_cb.c:3425:kiblnd_check_conns()) Timed out RDMA with 10.0.10.107@o2ib7 (106): c: 8, oc: 0, rc: 8 [ 284.241071] LNetError: 21600:0:(o2iblnd_cb.c:3350:kiblnd_check_txs_locked()) Timed out tx: active_txs, 1 seconds [ 284.251240] LNetError: 21600:0:(o2iblnd_cb.c:3425:kiblnd_check_conns()) Timed out RDMA with 10.0.10.109@o2ib7 (107): c: 8, oc: 0, rc: 8 [ 304.241626] LNetError: 21600:0:(o2iblnd_cb.c:3350:kiblnd_check_txs_locked()) Timed out tx: active_txs, 0 seconds [ 304.251799] LNetError: 21600:0:(o2iblnd_cb.c:3350:kiblnd_check_txs_locked()) Skipped 2 previous similar messages [ 304.261970] LNetError: 21600:0:(o2iblnd_cb.c:3425:kiblnd_check_conns()) Timed out RDMA with 10.0.10.53@o2ib7 (105): c: 8, oc: 
0, rc: 8 [ 304.274047] LNetError: 21600:0:(o2iblnd_cb.c:3425:kiblnd_check_conns()) Skipped 2 previous similar messages [ 309.273906] LNetError: 21603:0:(peer.c:3451:lnet_peer_ni_add_to_recoveryq_locked()) lpni 10.0.10.202@o2ib7 added to recovery queue. Health = 900 [ 528.286883] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5 [ 528.379199] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,acl,no_mbcache,nodelalloc [ 534.408137] LustreError: 137-5: fir-MDT0001_UUID: not available for connect from 10.9.101.36@o2ib4 (no target). If you are running an HA pair check that the target is mounted on the other server. [ 534.915875] LustreError: 137-5: fir-MDT0001_UUID: not available for connect from 10.9.102.23@o2ib4 (no target). If you are running an HA pair check that the target is mounted on the other server. [ 534.933270] LustreError: Skipped 4 previous similar messages [ 535.923287] LustreError: 137-5: fir-MDT0001_UUID: not available for connect from 10.9.115.10@o2ib4 (no target). If you are running an HA pair check that the target is mounted on the other server. [ 535.940655] LustreError: Skipped 132 previous similar messages [ 537.952694] LustreError: 137-5: fir-MDT0001_UUID: not available for connect from 10.9.110.29@o2ib4 (no target). If you are running an HA pair check that the target is mounted on the other server. [ 537.970071] LustreError: Skipped 203 previous similar messages [ 542.200665] LustreError: 137-5: fir-MDT0001_UUID: not available for connect from 10.9.110.54@o2ib4 (no target). If you are running an HA pair check that the target is mounted on the other server. [ 542.218046] LustreError: Skipped 113 previous similar messages [ 545.860273] LustreError: 22192:0:(mgc_request.c:249:do_config_log_add()) MGC10.0.10.51@o2ib7: failed processing log, type 1: rc = -5 [ 550.201083] LustreError: 137-5: fir-MDT0001_UUID: not available for connect from 10.8.24.15@o2ib6 (no target). If you are running an HA pair check that the target is mounted on the other server. [ 550.218368] LustreError: Skipped 339 previous similar messages [ 576.872124] LustreError: 22192:0:(mgc_request.c:249:do_config_log_add()) MGC10.0.10.51@o2ib7: failed processing log, type 4: rc = -110 [ 618.257246] Lustre: fir-MDT0001: Imperative Recovery not enabled, recovery window 300-900 [ 618.481882] Lustre: fir-MDD0001: changelog on [ 618.488556] Lustre: fir-MDT0001: in recovery but waiting for the first client to connect [ 634.762885] Lustre: fir-MDT0001: Will be in recovery for at least 5:00, or until 1290 clients reconnect [ 635.772193] Lustre: fir-MDT0001: Connection restored to (at 10.8.21.2@o2ib6) [ 635.779335] Lustre: Skipped 63 previous similar messages [ 636.278026] Lustre: fir-MDT0001: Connection restored to (at 10.9.115.10@o2ib4) [ 636.285338] Lustre: Skipped 74 previous similar messages [ 637.278955] Lustre: fir-MDT0001: Connection restored to (at 10.9.104.60@o2ib4) [ 637.286268] Lustre: Skipped 104 previous similar messages [ 639.292893] Lustre: fir-MDT0001: Connection restored to (at 10.8.21.33@o2ib6) [ 639.300122] Lustre: Skipped 178 previous similar messages [ 643.426387] Lustre: fir-MDT0001: Connection restored to (at 10.9.102.46@o2ib4) [ 643.433702] Lustre: Skipped 39 previous similar messages [ 651.430002] Lustre: fir-MDT0001: Connection restored to (at 10.8.20.34@o2ib6) [ 651.437227] Lustre: Skipped 446 previous similar messages [ 666.414062] Lustre: fir-MDT0001: Recovery over after 0:32, of 1290 clients 1290 recovered and 0 were evicted. 
[ 1460.038616] Lustre: fir-MDT0001: haven't heard from client 82b9ac9e-bd42-fb9c-cb3e-f327857b510c (at 10.9.0.62@o2ib4) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9e5d97232400, cur 1575993213 expire 1575993063 last 1575992986 [ 3916.813684] Lustre: fir-MDT0001: Connection restored to (at 10.9.0.62@o2ib4) [ 3916.820817] Lustre: Skipped 472 previous similar messages [ 3918.077161] Lustre: fir-MDT0001: haven't heard from client cec884d3-ca4b-8127-2f6b-7762665aa5f8 (at 10.9.0.64@o2ib4) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9e5d99a95400, cur 1575995671 expire 1575995521 last 1575995444 [ 6106.209293] Lustre: fir-MDT0001: Connection restored to (at 10.9.0.64@o2ib4) [ 6539.158093] Lustre: fir-MDT0001: haven't heard from client fb9a2d5e-e9b3-4fb9-b988-9954fcfb0920 (at 10.8.0.66@o2ib6) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9e5da3950c00, cur 1575998292 expire 1575998142 last 1575998065 [ 8591.931907] Lustre: fir-MDT0001: Connection restored to (at 10.8.0.66@o2ib6) [10081.244158] Lustre: fir-MDT0001: haven't heard from client 40a204f8-61bd-7bf5-8e8b-66a640362528 (at 10.8.21.28@o2ib6) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9e5da4dab800, cur 1576001834 expire 1576001684 last 1576001607 [11727.322087] perf: interrupt took too long (2508 > 2500), lowering kernel.perf_event_max_sample_rate to 79000 [11817.926095] Lustre: fir-MDT0001: Connection restored to fd516b75-9a6c-4 (at 10.9.108.39@o2ib4) [12157.100915] Lustre: fir-MDT0001: Connection restored to (at 10.8.21.14@o2ib6) [12171.704728] Lustre: fir-MDT0001: Connection restored to (at 10.8.21.13@o2ib6) [12174.170954] Lustre: fir-MDT0001: Connection restored to (at 10.8.21.8@o2ib6) [12174.178105] Lustre: Skipped 1 previous similar message [12184.126264] Lustre: fir-MDT0001: Connection restored to (at 10.8.21.28@o2ib6) [12184.133496] Lustre: Skipped 1 previous similar message [12194.328275] Lustre: fir-MDT0001: Connection restored to (at 10.8.21.20@o2ib6) [12210.732359] Lustre: fir-MDT0001: Connection restored to (at 10.8.20.18@o2ib6) [12210.739587] Lustre: Skipped 5 previous similar messages [12244.696432] Lustre: fir-MDT0001: Connection restored to (at 10.8.20.15@o2ib6) [12244.703689] Lustre: Skipped 13 previous similar messages [12334.450401] Lustre: fir-MDT0001: Connection restored to (at 10.8.21.18@o2ib6) [12334.457639] Lustre: Skipped 15 previous similar messages [13186.169993] Lustre: fir-MDT0001: Connection restored to (at 10.8.22.12@o2ib6) [13209.032137] Lustre: fir-MDT0001: Connection restored to (at 10.8.22.5@o2ib6) [13283.254100] Lustre: fir-MDT0001: Connection restored to (at 10.8.20.8@o2ib6) [13283.261263] Lustre: Skipped 2 previous similar messages [15250.985326] Lustre: fir-MDT0001: Connection restored to (at 10.8.22.4@o2ib6) [16709.671657] perf: interrupt took too long (3136 > 3135), lowering kernel.perf_event_max_sample_rate to 63000 [20326.686897] Lustre: fir-MDT0001: Connection restored to (at 10.8.20.27@o2ib6) [22535.586194] Lustre: fir-MDT0001: haven't heard from client 8fbd1a16-d09d-1ef7-e10d-4e68dc0a9f97 (at 10.8.23.32@o2ib6) in 227 seconds. I think it's dead, and I am evicting it. 
exp ffff9e5d99bb6400, cur 1576014288 expire 1576014138 last 1576014061 [22535.607894] Lustre: Skipped 12 previous similar messages [24711.192524] Lustre: fir-MDT0001: Connection restored to (at 10.8.23.32@o2ib6) [26324.364476] Lustre: fir-MDT0001: Connection restored to (at 10.8.21.36@o2ib6) [28780.768296] Lustre: fir-MDT0001: haven't heard from client ee4590b6-1057-e690-5db0-89b0af3963cd (at 10.8.22.30@o2ib6) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9e5da6abc800, cur 1576020533 expire 1576020383 last 1576020306 [29156.961472] Lustre: fir-MDT0001: Connection restored to (at 10.8.23.29@o2ib6) [30579.565892] LNetError: 21616:0:(lib-msg.c:822:lnet_is_health_check()) Msg is in inconsistent state, don't perform health checking (-125, 0) [30581.748883] Lustre: fir-MDT0001: Client cfe93466-ba97-4 (at 10.9.0.62@o2ib4) reconnecting [30581.757093] Lustre: fir-MDT0001: Connection restored to (at 10.9.0.62@o2ib4) [30585.016770] Lustre: fir-MDT0001: Client c350f8ed-891d-7148-1d37-4ac35ca3772c (at 10.9.102.16@o2ib4) reconnecting [30585.026977] Lustre: fir-MDT0001: Connection restored to (at 10.9.102.16@o2ib4) [30865.650542] Lustre: fir-MDT0001: Connection restored to (at 10.8.22.30@o2ib6) [34329.709852] Lustre: fir-MDT0001: Connection restored to (at 10.8.23.20@o2ib6) [38983.011973] Lustre: fir-MDT0001: Connection restored to (at 10.8.22.6@o2ib6) [39687.067271] Lustre: fir-MDT0001: haven't heard from client b6bab463-5f5c-8f5c-f09a-8f0ce0f6e1cd (at 10.8.21.31@o2ib6) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9e5d9676a400, cur 1576031439 expire 1576031289 last 1576031212 [39763.094009] Lustre: fir-MDT0001: haven't heard from client 7515dbe4-f1c8-844a-9186-76f9c6288c34 (at 10.9.104.2@o2ib4) in 222 seconds. I think it's dead, and I am evicting it. exp ffff9e5d9676d000, cur 1576031515 expire 1576031365 last 1576031293 [39763.115715] Lustre: Skipped 4 previous similar messages [40915.424636] Lustre: fir-MDT0001: Connection restored to (at 10.9.114.14@o2ib4) [40962.695872] Lustre: fir-MDT0001: Connection restored to (at 10.8.19.6@o2ib6) [41151.567412] Lustre: fir-MDT0001: Connection restored to (at 10.9.110.71@o2ib4) [41190.903515] Lustre: fir-MDT0001: Connection restored to (at 10.9.107.9@o2ib4) [41275.367217] Lustre: fir-MDT0001: Connection restored to (at 10.9.109.25@o2ib4) [41405.666355] Lustre: fir-MDT0001: Connection restored to (at 10.9.110.63@o2ib4) [41422.586393] Lustre: fir-MDT0001: Connection restored to (at 10.9.110.62@o2ib4) [41422.593711] Lustre: Skipped 2 previous similar messages [41646.986113] Lustre: fir-MDT0001: Connection restored to (at 10.9.104.34@o2ib4) [41646.993429] Lustre: Skipped 2 previous similar messages [41754.586282] Lustre: fir-MDT0001: Connection restored to (at 10.8.21.31@o2ib6) [41754.593515] Lustre: Skipped 3 previous similar messages [41971.853000] Lustre: fir-MDT0001: Connection restored to (at 10.8.28.9@o2ib6) [41971.860144] Lustre: Skipped 4 previous similar messages [44005.198096] Lustre: fir-MDT0001: haven't heard from client aadbd140-afe6-3cc5-5efa-1bf64465f6e7 (at 10.8.20.34@o2ib6) in 227 seconds. I think it's dead, and I am evicting it. 
exp ffff9e5d97234800, cur 1576035757 expire 1576035607 last 1576035530 [44005.219803] Lustre: Skipped 13 previous similar messages [46116.317494] Lustre: fir-MDT0001: Connection restored to (at 10.8.20.34@o2ib6) [46116.324723] Lustre: Skipped 8 previous similar messages [50858.886065] Lustre: fir-MDT0001: Connection restored to (at 10.8.23.17@o2ib6) [57825.864480] Lustre: fir-MDT0001: Connection restored to (at 10.8.22.19@o2ib6) [58327.996981] Lustre: fir-MDT0001: Connection restored to (at 10.8.20.5@o2ib6) [63963.727679] Lustre: fir-MDT0001: haven't heard from client 09a03217-f2a1-2632-097f-38339f6cbc7c (at 10.8.22.1@o2ib6) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9e5da6ab8000, cur 1576055715 expire 1576055565 last 1576055488 [64014.288966] Lustre: fir-MDT0001: Connection restored to (at 10.8.22.2@o2ib6) [65909.363234] Lustre: fir-MDT0001: Connection restored to (at 10.8.23.25@o2ib6) [66083.953169] Lustre: fir-MDT0001: Connection restored to (at 10.8.22.1@o2ib6) [67273.821969] Lustre: fir-MDT0001: haven't heard from client d48dfcab-ce8f-b93c-3409-a3e76df7c945 (at 10.8.23.22@o2ib6) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9e5d97235400, cur 1576059025 expire 1576058875 last 1576058798 [69455.403477] Lustre: fir-MDT0001: Connection restored to (at 10.8.23.22@o2ib6) [85222.185451] Lustre: fir-MDT0001: Connection restored to (at 10.8.22.7@o2ib6) [90018.449477] Lustre: fir-MDT0001: haven't heard from client 5a6b489d-8a0c-1dc7-c222-8c5330c92213 (at 10.8.8.20@o2ib6) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9e5da6a82000, cur 1576081769 expire 1576081619 last 1576081542 [90199.453506] Lustre: fir-MDT0001: haven't heard from client dcb788f4-67f3-4 (at 10.9.109.25@o2ib4) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9e5d7d655800, cur 1576081950 expire 1576081800 last 1576081723 [90199.473477] Lustre: Skipped 8 previous similar messages [90206.866511] Lustre: fir-MDT0001: Connection restored to (at 10.9.107.20@o2ib4) [90454.094153] Lustre: fir-MDT0001: Connection restored to (at 10.9.110.71@o2ib4) [90469.077065] Lustre: fir-MDT0001: Connection restored to (at 10.9.109.25@o2ib4) [91451.832005] Lustre: fir-MDT0001: Connection restored to (at 10.9.117.46@o2ib4) [91484.768313] Lustre: fir-MDT0001: Connection restored to (at 10.8.9.1@o2ib6) [91698.948853] Lustre: fir-MDT0001: Connection restored to (at 10.8.7.5@o2ib6) [91792.344889] Lustre: fir-MDT0001: Connection restored to (at 10.9.101.60@o2ib4) [91809.000738] Lustre: fir-MDT0001: Connection restored to (at 10.9.101.57@o2ib4) [91819.187064] Lustre: fir-MDT0001: Connection restored to (at 10.9.101.59@o2ib4) [91908.429806] Lustre: fir-MDT0001: Connection restored to (at 10.8.8.20@o2ib6) [92112.756061] Lustre: fir-MDT0001: Connection restored to (at 10.8.8.30@o2ib6) [92364.000393] Lustre: fir-MDT0001: Connection restored to (at 10.9.102.48@o2ib4) [92364.007708] Lustre: Skipped 2 previous similar messages [92822.149605] Lustre: fir-MDT0001: Connection restored to (at 10.8.21.16@o2ib6) [92822.156836] Lustre: Skipped 1 previous similar message [100789.499052] Lustre: fir-MDT0001: Connection restored to (at 10.8.23.15@o2ib6) [100789.506395] Lustre: Skipped 1 previous similar message [102199.817397] Lustre: fir-MDT0001: haven't heard from client 45ffa07c-203c-dad9-8f0d-e714fc6465b8 (at 10.8.22.11@o2ib6) in 227 seconds. I think it's dead, and I am evicting it. 
exp ffff9e5d97233c00, cur 1576093950 expire 1576093800 last 1576093723 [102199.839201] Lustre: Skipped 1 previous similar message [103893.829115] Lustre: fir-MDT0001: haven't heard from client 704e8622-7442-8eb3-b4e3-c86a69ef45af (at 10.8.20.21@o2ib6) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9e5d96728000, cur 1576095644 expire 1576095494 last 1576095417 [104265.985419] Lustre: fir-MDT0001: Connection restored to (at 10.8.22.11@o2ib6) [104277.377463] Lustre: fir-MDT0001: Connection restored to (at 10.8.22.10@o2ib6) [104333.130294] Lustre: fir-MDT0001: Connection restored to (at 10.8.23.23@o2ib6) [104915.853612] Lustre: fir-MDT0001: haven't heard from client c3415e6e-dda3-8602-28df-a932f656881d (at 10.9.112.17@o2ib4) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9e6db9b26800, cur 1576096666 expire 1576096516 last 1576096439 [105994.661598] Lustre: fir-MDT0001: Connection restored to (at 10.8.20.21@o2ib6) [106026.201464] Lustre: fir-MDT0001: Connection restored to (at 10.9.112.17@o2ib4) [106386.608408] Lustre: fir-MDT0001: Connection restored to (at 10.8.9.1@o2ib6) [106401.155776] Lustre: fir-MDT0001: Connection restored to (at 10.8.23.13@o2ib6) [106495.488878] Lustre: fir-MDT0001: Connection restored to (at 10.8.22.2@o2ib6) [106593.438833] Lustre: fir-MDT0001: Connection restored to (at 10.9.113.13@o2ib4) [106674.558849] Lustre: fir-MDT0001: Connection restored to (at 10.9.101.60@o2ib4) [107006.720271] Lustre: fir-MDT0001: Connection restored to (at 10.8.24.7@o2ib6) [107006.727497] Lustre: Skipped 1 previous similar message [108509.962332] Lustre: fir-MDT0001: haven't heard from client 000d6715-906a-fe00-99d9-1ba39760e7f7 (at 10.8.22.16@o2ib6) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9e5da6abf400, cur 1576100260 expire 1576100110 last 1576100033 [109002.967861] Lustre: fir-MDT0001: haven't heard from client 85fbdf3d-35db-072c-03b7-e9977baaa2bf (at 10.8.23.12@o2ib6) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9e5d90c20400, cur 1576100753 expire 1576100603 last 1576100526 [109002.989655] Lustre: Skipped 1 previous similar message [109213.122588] Lustre: fir-MDT0001: Connection restored to (at 10.8.23.12@o2ib6) [110588.665552] Lustre: fir-MDT0001: Connection restored to (at 10.8.23.8@o2ib6) [110596.574043] Lustre: fir-MDT0001: Connection restored to (at 10.8.23.18@o2ib6) [110607.224602] Lustre: fir-MDT0001: Connection restored to (at 10.8.22.18@o2ib6) [110624.407992] Lustre: fir-MDT0001: Connection restored to (at 10.8.23.33@o2ib6) [110624.415311] Lustre: Skipped 1 previous similar message [111694.037169] Lustre: fir-MDT0001: haven't heard from client 8c2fd243-a078-4 (at 10.9.117.46@o2ib4) in 227 seconds. I think it's dead, and I am evicting it. 
exp ffff9e7a1af76c00, cur 1576103444 expire 1576103294 last 1576103217 [111848.657557] Lustre: fir-MDT0001: Connection restored to (at 10.9.117.46@o2ib4) [111848.664960] Lustre: Skipped 2 previous similar messages [112088.017973] Lustre: fir-MDT0001: Connection restored to (at 10.8.21.2@o2ib6) [112099.364485] Lustre: fir-MDT0001: Connection restored to (at 10.8.22.32@o2ib6) [113764.858077] Lustre: fir-MDT0001: Connection restored to (at 10.8.23.27@o2ib6) [114014.413370] Lustre: fir-MDT0001: Connection restored to (at 10.8.22.27@o2ib6) [127746.573205] Lustre: fir-MDT0001: Connection restored to (at 10.8.20.26@o2ib6) [129924.610677] Lustre: fir-MDT0001: Connection restored to (at 10.8.23.21@o2ib6) [137465.020561] Lustre: fir-MDT0001: Connection restored to (at 10.8.25.17@o2ib6) [137518.762092] Lustre: fir-MDT0001: haven't heard from client e15078c5-8209-4 (at 10.8.25.17@o2ib6) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9e5da8287000, cur 1576129268 expire 1576129118 last 1576129041 [137893.769897] Lustre: fir-MDT0001: haven't heard from client 208ccf09-d6ca-4 (at 10.8.25.17@o2ib6) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9e6c0ebf4800, cur 1576129643 expire 1576129493 last 1576129416 [139034.388945] Lustre: fir-MDT0001: Connection restored to (at 10.8.25.17@o2ib6) [139513.817633] Lustre: fir-MDT0001: haven't heard from client 0cfc0c49-f407-4 (at 10.8.25.17@o2ib6) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9e6d78aa2800, cur 1576131263 expire 1576131113 last 1576131036 [141930.727702] Lustre: fir-MDT0001: Connection restored to (at 10.8.22.20@o2ib6) [141936.849658] Lustre: fir-MDT0001: Connection restored to (at 10.8.21.1@o2ib6) [142001.133224] Lustre: fir-MDT0001: Connection restored to (at 10.8.22.26@o2ib6) [146785.008608] Lustre: fir-MDT0001: Connection restored to (at 10.8.23.26@o2ib6) [146798.240498] Lustre: fir-MDT0001: Connection restored to (at 10.8.22.17@o2ib6) [146800.561864] Lustre: fir-MDT0001: Connection restored to (at 10.8.22.22@o2ib6) [146817.363817] Lustre: fir-MDT0001: Connection restored to (at 10.8.22.24@o2ib6) [146914.046943] Lustre: 21795:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1576138062/real 1576138062] req@ffff9e7d60689f80 x1652547758815088/t0(0) o6->fir-OST0056-osc-MDT0001@10.0.10.115@o2ib7:28/4 lens 544/432 e 24 to 1 dl 1576138663 ref 1 fl Rpc:X/0/ffffffff rc 0/-1 [146914.075251] Lustre: fir-OST0056-osc-MDT0001: Connection to fir-OST0056 (at 10.0.10.115@o2ib7) was lost; in progress operations using this service will wait for recovery to complete [146914.091806] Lustre: fir-OST0056-osc-MDT0001: Connection restored to 10.0.10.115@o2ib7 (at 10.0.10.115@o2ib7) [147062.276555] Lustre: fir-MDT0001: Connection restored to (at 10.8.22.14@o2ib6) [147297.306471] Lustre: 21799:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1576138663/real 1576138663] req@ffff9e7877e06780 x1652547759073776/t0(0) o6->fir-OST0056-osc-MDT0001@10.0.10.115@o2ib7:28/4 lens 544/432 e 1 to 1 dl 1576139046 ref 1 fl Rpc:X/2/ffffffff rc -11/-1 [147297.334854] Lustre: fir-OST0056-osc-MDT0001: Connection to fir-OST0056 (at 10.0.10.115@o2ib7) was lost; in progress operations using this service will wait for recovery to complete [147297.351226] Lustre: fir-OST0056-osc-MDT0001: Connection restored to 10.0.10.115@o2ib7 (at 10.0.10.115@o2ib7) [147670.111734] Lustre: 22521:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has 
timed out for slow reply: [sent 1576138663/real 1576138663] req@ffff9e5c89ebe300 x1652547759244944/t0(0) o5->fir-OST0056-osc-MDT0001@10.0.10.115@o2ib7:28/4 lens 432/432 e 0 to 1 dl 1576139419 ref 2 fl Rpc:XN/0/ffffffff rc 0/-1 [147670.140035] LustreError: 22521:0:(osp_precreate.c:940:osp_precreate_cleanup_orphans()) fir-OST0056-osc-MDT0001: cannot cleanup orphans: rc = -107 [147680.860031] Lustre: 21799:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1576139046/real 1576139046] req@ffff9e7877e06780 x1652547759073776/t0(0) o6->fir-OST0056-osc-MDT0001@10.0.10.115@o2ib7:28/4 lens 544/432 e 1 to 1 dl 1576139429 ref 1 fl Rpc:X/2/ffffffff rc -11/-1 [147680.888414] Lustre: fir-OST0056-osc-MDT0001: Connection to fir-OST0056 (at 10.0.10.115@o2ib7) was lost; in progress operations using this service will wait for recovery to complete [147680.904737] Lustre: fir-OST0056-osc-MDT0001: Connection restored to 10.0.10.115@o2ib7 (at 10.0.10.115@o2ib7) [148063.566595] Lustre: 21799:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1576139429/real 1576139429] req@ffff9e7877e06780 x1652547759073776/t0(0) o6->fir-OST0056-osc-MDT0001@10.0.10.115@o2ib7:28/4 lens 544/432 e 1 to 1 dl 1576139812 ref 1 fl Rpc:X/2/ffffffff rc -11/-1 [148063.594973] Lustre: fir-OST0056-osc-MDT0001: Connection to fir-OST0056 (at 10.0.10.115@o2ib7) was lost; in progress operations using this service will wait for recovery to complete [148063.611304] Lustre: fir-OST0056-osc-MDT0001: Connection restored to 10.0.10.115@o2ib7 (at 10.0.10.115@o2ib7) [148427.173575] Lustre: 22521:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1576139420/real 1576139420] req@ffff9e5c89ebc380 x1652547759721264/t0(0) o5->fir-OST0056-osc-MDT0001@10.0.10.115@o2ib7:28/4 lens 432/432 e 0 to 1 dl 1576140176 ref 2 fl Rpc:XN/0/ffffffff rc 0/-1 [148427.201875] LustreError: 22521:0:(osp_precreate.c:940:osp_precreate_cleanup_orphans()) fir-OST0056-osc-MDT0001: cannot cleanup orphans: rc = -107 [148446.129094] Lustre: 21799:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1576139812/real 1576139812] req@ffff9e7877e06780 x1652547759073776/t0(0) o6->fir-OST0056-osc-MDT0001@10.0.10.115@o2ib7:28/4 lens 544/432 e 1 to 1 dl 1576140195 ref 1 fl Rpc:X/2/ffffffff rc -11/-1 [148446.157479] Lustre: fir-OST0056-osc-MDT0001: Connection to fir-OST0056 (at 10.0.10.115@o2ib7) was lost; in progress operations using this service will wait for recovery to complete [148446.173805] Lustre: fir-OST0056-osc-MDT0001: Connection restored to 10.0.10.115@o2ib7 (at 10.0.10.115@o2ib7) [148660.687024] Lustre: 21787:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1576139653/real 1576139653] req@ffff9e6c01e77080 x1652547759816912/t0(0) o6->fir-OST0054-osc-MDT0001@10.0.10.115@o2ib7:28/4 lens 544/432 e 0 to 1 dl 1576140409 ref 1 fl Rpc:X/0/ffffffff rc 0/-1 [148660.715232] Lustre: fir-OST0054-osc-MDT0001: Connection to fir-OST0054 (at 10.0.10.115@o2ib7) was lost; in progress operations using this service will wait for recovery to complete [148829.139655] Lustre: 21799:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1576140195/real 1576140195] req@ffff9e7877e06780 x1652547759073776/t0(0) o6->fir-OST0056-osc-MDT0001@10.0.10.115@o2ib7:28/4 lens 544/432 e 1 to 1 dl 1576140578 ref 1 fl Rpc:X/2/ffffffff rc -11/-1 
[148829.168037] Lustre: fir-OST0056-osc-MDT0001: Connection to fir-OST0056 (at 10.0.10.115@o2ib7) was lost; in progress operations using this service will wait for recovery to complete [148829.184386] Lustre: fir-OST0056-osc-MDT0001: Connection restored to 10.0.10.115@o2ib7 (at 10.0.10.115@o2ib7) [148829.194324] Lustre: Skipped 1 previous similar message [149029.231170] Lustre: 22521:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1576140177/real 1576140177] req@ffff9e5c89ebd580 x1652547759990416/t0(0) o5->fir-OST0056-osc-MDT0001@10.0.10.115@o2ib7:28/4 lens 432/432 e 1 to 1 dl 1576140778 ref 2 fl Rpc:XN/0/ffffffff rc 0/-1 [149029.259470] LustreError: 22521:0:(osp_precreate.c:940:osp_precreate_cleanup_orphans()) fir-OST0056-osc-MDT0001: cannot cleanup orphans: rc = -107 [149046.039484] LustreError: 137-5: fir-MDT0002_UUID: not available for connect from 10.8.25.17@o2ib6 (no target). If you are running an HA pair check that the target is mounted on the other server. [149046.056857] LustreError: Skipped 587 previous similar messages [149146.394303] LustreError: 137-5: fir-MDT0002_UUID: not available for connect from 10.8.25.17@o2ib6 (no target). If you are running an HA pair check that the target is mounted on the other server. [149213.022244] Lustre: fir-OST0056-osc-MDT0001: Connection to fir-OST0056 (at 10.0.10.115@o2ib7) was lost; in progress operations using this service will wait for recovery to complete [149300.112638] Lustre: 21806:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1576140293/real 1576140293] req@ffff9e5b12234c80 x1652547760027216/t0(0) o6->fir-OST005e-osc-MDT0001@10.0.10.115@o2ib7:28/4 lens 544/432 e 0 to 1 dl 1576141049 ref 1 fl Rpc:X/0/ffffffff rc 0/-1 [149300.140839] Lustre: 21806:0:(client.c:2133:ptlrpc_expire_one_request()) Skipped 1 previous similar message [149300.150587] Lustre: fir-OST005e-osc-MDT0001: Connection to fir-OST005e (at 10.0.10.115@o2ib7) was lost; in progress operations using this service will wait for recovery to complete [149416.876002] Lustre: fir-OST0054-osc-MDT0001: Connection restored to 10.0.10.115@o2ib7 (at 10.0.10.115@o2ib7) [149416.885920] Lustre: Skipped 3 previous similar messages [149531.895011] Lustre: fir-OST005a-osc-MDT0001: Connection to fir-OST005a (at 10.0.10.115@o2ib7) was lost; in progress operations using this service will wait for recovery to complete [149531.911179] Lustre: Skipped 1 previous similar message [149632.109626] Lustre: fir-MDT0001: haven't heard from client 619199f2-141e-aa07-09cb-eb294e06c3f1 (at 10.9.116.4@o2ib4) in 227 seconds. I think it's dead, and I am evicting it. 
exp ffff9e5d99a94000, cur 1576141381 expire 1576141231 last 1576141154 [149786.293034] LustreError: 22521:0:(osp_precreate.c:940:osp_precreate_cleanup_orphans()) fir-OST0056-osc-MDT0001: cannot cleanup orphans: rc = -107 [149978.667329] Lustre: 21799:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1576141344/real 1576141344] req@ffff9e7877e06780 x1652547759073776/t0(0) o6->fir-OST0056-osc-MDT0001@10.0.10.115@o2ib7:28/4 lens 544/432 e 1 to 1 dl 1576141727 ref 1 fl Rpc:X/2/ffffffff rc -11/-1 [149978.695705] Lustre: 21799:0:(client.c:2133:ptlrpc_expire_one_request()) Skipped 4 previous similar messages [149978.705538] Lustre: fir-OST0056-osc-MDT0001: Connection to fir-OST0056 (at 10.0.10.115@o2ib7) was lost; in progress operations using this service will wait for recovery to complete [149978.721711] Lustre: Skipped 1 previous similar message [150056.149634] Lustre: fir-OST005e-osc-MDT0001: Connection restored to 10.0.10.115@o2ib7 (at 10.0.10.115@o2ib7) [150056.159554] Lustre: Skipped 3 previous similar messages [150245.514824] INFO: task mdt00_013:22615 blocked for more than 120 seconds. [150245.521706] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [150245.529622] mdt00_013 D ffff9e5da9969040 0 22615 2 0x00000080 [150245.536813] Call Trace: [150245.539368] [] ? lquota_disk_read+0xf2/0x390 [lquota] [150245.546180] [] schedule+0x29/0x70 [150245.551236] [] rwsem_down_write_failed+0x225/0x3a0 [150245.557785] [] ? cfs_hash_lookup+0xa2/0xd0 [libcfs] [150245.564417] [] call_rwsem_down_write_failed+0x17/0x30 [150245.571206] [] down_write+0x2d/0x3d [150245.576485] [] lod_qos_statfs_update+0x97/0x2b0 [lod] [150245.583284] [] lod_qos_prep_create+0x16a/0x1890 [lod] [150245.590079] [] ? qsd_op_begin+0x262/0x4b0 [lquota] [150245.596641] [] ? osd_declare_qid+0x200/0x4a0 [osd_ldiskfs] [150245.603871] [] ? osd_declare_inode_qid+0x27b/0x430 [osd_ldiskfs] [150245.611624] [] lod_prepare_create+0x215/0x2e0 [lod] [150245.618262] [] lod_declare_striped_create+0x1ee/0x980 [lod] [150245.625578] [] ? lod_sub_declare_create+0xdf/0x210 [lod] [150245.632636] [] lod_declare_create+0x204/0x590 [lod] [150245.639306] [] ? lu_context_refill+0x19/0x50 [obdclass] [150245.646296] [] mdd_declare_create_object_internal+0xe2/0x2f0 [mdd] [150245.654224] [] mdd_declare_create+0x4c/0xcb0 [mdd] [150245.660781] [] mdd_create+0x847/0x14e0 [mdd] [150245.666810] [] mdt_reint_open+0x224f/0x3240 [mdt] [150245.673278] [] ? upcall_cache_get_entry+0x218/0x8b0 [obdclass] [150245.680876] [] mdt_reint_rec+0x83/0x210 [mdt] [150245.686987] [] mdt_reint_internal+0x6e3/0xaf0 [mdt] [150245.693618] [] ? mdt_intent_fixup_resent+0x36/0x220 [mdt] [150245.700784] [] mdt_intent_open+0x82/0x3a0 [mdt] [150245.707076] [] ? lprocfs_counter_add+0xf9/0x160 [obdclass] [150245.714314] [] mdt_intent_policy+0x435/0xd80 [mdt] [150245.720872] [] ? mdt_intent_fixup_resent+0x220/0x220 [mdt] [150245.728130] [] ldlm_lock_enqueue+0x356/0xa20 [ptlrpc] [150245.734930] [] ? cfs_hash_bd_add_locked+0x63/0x80 [libcfs] [150245.742177] [] ? cfs_hash_add+0xbe/0x1a0 [libcfs] [150245.748652] [] ldlm_handle_enqueue0+0xa56/0x15f0 [ptlrpc] [150245.755828] [] ? lustre_swab_ldlm_lock_desc+0x30/0x30 [ptlrpc] [150245.763457] [] tgt_enqueue+0x62/0x210 [ptlrpc] [150245.769678] [] tgt_request_handle+0xaea/0x1580 [ptlrpc] [150245.776680] [] ? ptlrpc_nrs_req_get_nolock0+0xd1/0x170 [ptlrpc] [150245.784374] [] ? 
ktime_get_real_seconds+0xe/0x10 [libcfs] [150245.791540] [] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc] [150245.799285] [] ? __wake_up+0x44/0x50 [150245.804650] [] ? ptlrpc_server_handle_req_in+0x8df/0xd60 [ptlrpc] [150245.812513] [] ptlrpc_main+0xb2c/0x1460 [ptlrpc] [150245.818904] [] ? ptlrpc_register_service+0xf80/0xf80 [ptlrpc] [150245.826408] [] kthread+0xd1/0xe0 [150245.831377] [] ? insert_kthread_work+0x40/0x40 [150245.837568] [] ret_from_fork_nospec_begin+0xe/0x21 [150245.844114] [] ? insert_kthread_work+0x40/0x40 [150245.850330] INFO: task mdt00_018:23331 blocked for more than 120 seconds. [150245.857212] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [150245.865137] mdt00_018 D ffff9e6db9b7d140 0 23331 2 0x00000080 [150245.872324] Call Trace: [150245.874872] [] ? lquota_disk_read+0xf2/0x390 [lquota] [150245.881665] [] schedule+0x29/0x70 [150245.886740] [] rwsem_down_write_failed+0x225/0x3a0 [150245.893283] [] ? cfs_hash_lookup+0xa2/0xd0 [libcfs] [150245.899909] [] call_rwsem_down_write_failed+0x17/0x30 [150245.906717] [] down_write+0x2d/0x3d [150245.911975] [] lod_qos_statfs_update+0x97/0x2b0 [lod] [150245.918773] [] lod_qos_prep_create+0x16a/0x1890 [lod] [150245.925589] [] ? qsd_op_begin+0x262/0x4b0 [lquota] [150245.932130] [] ? osd_declare_qid+0x200/0x4a0 [osd_ldiskfs] [150245.939364] [] ? osd_declare_inode_qid+0x27b/0x430 [osd_ldiskfs] [150245.947125] [] lod_prepare_create+0x215/0x2e0 [lod] [150245.953750] [] lod_declare_striped_create+0x1ee/0x980 [lod] [150245.961068] [] ? lod_sub_declare_create+0xdf/0x210 [lod] [150245.968144] [] lod_declare_create+0x204/0x590 [lod] [150245.974780] [] ? lu_context_refill+0x19/0x50 [obdclass] [150245.981756] [] mdd_declare_create_object_internal+0xe2/0x2f0 [mdd] [150245.989703] [] mdd_declare_create+0x4c/0xcb0 [mdd] [150245.996260] [] mdd_create+0x847/0x14e0 [mdd] [150246.002283] [] mdt_reint_open+0x224f/0x3240 [mdt] [150246.008758] [] ? upcall_cache_get_entry+0x218/0x8b0 [obdclass] [150246.016338] [] mdt_reint_rec+0x83/0x210 [mdt] [150246.022447] [] mdt_reint_internal+0x6e3/0xaf0 [mdt] [150246.029093] [] ? mdt_intent_fixup_resent+0x36/0x220 [mdt] [150246.036239] [] mdt_intent_open+0x82/0x3a0 [mdt] [150246.042530] [] ? lprocfs_counter_add+0xf9/0x160 [obdclass] [150246.049795] [] mdt_intent_policy+0x435/0xd80 [mdt] [150246.056334] [] ? mdt_intent_fixup_resent+0x220/0x220 [mdt] [150246.063590] [] ldlm_lock_enqueue+0x356/0xa20 [ptlrpc] [150246.070403] [] ? cfs_hash_bd_add_locked+0x63/0x80 [libcfs] [150246.077634] [] ? cfs_hash_add+0xbe/0x1a0 [libcfs] [150246.084108] [] ldlm_handle_enqueue0+0xa56/0x15f0 [ptlrpc] [150246.091290] [] ? lustre_swab_ldlm_lock_desc+0x30/0x30 [ptlrpc] [150246.098896] [] tgt_enqueue+0x62/0x210 [ptlrpc] [150246.105121] [] tgt_request_handle+0xaea/0x1580 [ptlrpc] [150246.112134] [] ? ptlrpc_nrs_req_get_nolock0+0xd1/0x170 [ptlrpc] [150246.119812] [] ? ktime_get_real_seconds+0xe/0x10 [libcfs] [150246.126981] [] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc] [150246.134771] [] ? ptlrpc_wait_event+0xa5/0x360 [ptlrpc] [150246.141651] [] ? __wake_up+0x44/0x50 [150246.146999] [] ptlrpc_main+0xb2c/0x1460 [ptlrpc] [150246.153407] [] ? ptlrpc_register_service+0xf80/0xf80 [ptlrpc] [150246.160896] [] kthread+0xd1/0xe0 [150246.165886] [] ? insert_kthread_work+0x40/0x40 [150246.172075] [] ret_from_fork_nospec_begin+0xe/0x21 [150246.178611] [] ? 
insert_kthread_work+0x40/0x40 [150543.326881] LustreError: 22521:0:(osp_precreate.c:940:osp_precreate_cleanup_orphans()) fir-OST0056-osc-MDT0001: cannot cleanup orphans: rc = -107 [150733.472105] Lustre: 21797:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1576141881/real 1576141881] req@ffff9e7873fc0900 x1652547760165872/t0(0) o6->fir-OST005a-osc-MDT0001@10.0.10.115@o2ib7:28/4 lens 544/432 e 1 to 1 dl 1576142482 ref 1 fl Rpc:X/2/ffffffff rc 0/-1 [150733.500310] Lustre: 21797:0:(client.c:2133:ptlrpc_expire_one_request()) Skipped 6 previous similar messages [150733.510148] Lustre: fir-OST005a-osc-MDT0001: Connection to fir-OST005a (at 10.0.10.115@o2ib7) was lost; in progress operations using this service will wait for recovery to complete [150733.526331] Lustre: Skipped 5 previous similar messages [150733.531795] Lustre: fir-OST005a-osc-MDT0001: Connection restored to 10.0.10.115@o2ib7 (at 10.0.10.115@o2ib7) [150733.541714] Lustre: Skipped 4 previous similar messages [151300.360621] LustreError: 22521:0:(osp_precreate.c:940:osp_precreate_cleanup_orphans()) fir-OST0056-osc-MDT0001: cannot cleanup orphans: rc = -107 [151334.544572] Lustre: 21797:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1576142482/real 1576142482] req@ffff9e7873fc0900 x1652547760165872/t0(0) o6->fir-OST005a-osc-MDT0001@10.0.10.115@o2ib7:28/4 lens 544/432 e 1 to 1 dl 1576143083 ref 1 fl Rpc:X/2/ffffffff rc 0/-1 [151334.572774] Lustre: 21797:0:(client.c:2133:ptlrpc_expire_one_request()) Skipped 6 previous similar messages [151334.582611] Lustre: fir-OST005a-osc-MDT0001: Connection to fir-OST005a (at 10.0.10.115@o2ib7) was lost; in progress operations using this service will wait for recovery to complete [151334.598780] Lustre: Skipped 5 previous similar messages [151334.604243] Lustre: fir-OST005a-osc-MDT0001: Connection restored to 10.0.10.115@o2ib7 (at 10.0.10.115@o2ib7) [151334.614180] Lustre: Skipped 5 previous similar messages [151935.849284] Lustre: 21797:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1576143083/real 1576143083] req@ffff9e7873fc0900 x1652547760165872/t0(0) o6->fir-OST005a-osc-MDT0001@10.0.10.115@o2ib7:28/4 lens 544/432 e 1 to 1 dl 1576143684 ref 1 fl Rpc:X/2/ffffffff rc 0/-1 [151935.877490] Lustre: 21797:0:(client.c:2133:ptlrpc_expire_one_request()) Skipped 6 previous similar messages [151935.887324] Lustre: fir-OST005a-osc-MDT0001: Connection to fir-OST005a (at 10.0.10.115@o2ib7) was lost; in progress operations using this service will wait for recovery to complete [151935.903497] Lustre: Skipped 6 previous similar messages [151935.908994] Lustre: fir-OST005a-osc-MDT0001: Connection restored to 10.0.10.115@o2ib7 (at 10.0.10.115@o2ib7) [151935.918919] Lustre: Skipped 6 previous similar messages [152057.394651] LustreError: 22521:0:(osp_precreate.c:940:osp_precreate_cleanup_orphans()) fir-OST0056-osc-MDT0001: cannot cleanup orphans: rc = -107 [152536.258139] Lustre: 21798:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1576143684/real 1576143684] req@ffff9e8cc8600d80 x1652547760361024/t0(0) o6->fir-OST005a-osc-MDT0001@10.0.10.115@o2ib7:28/4 lens 544/432 e 1 to 1 dl 1576144285 ref 1 fl Rpc:X/2/ffffffff rc -11/-1 [152536.286515] Lustre: 21798:0:(client.c:2133:ptlrpc_expire_one_request()) Skipped 6 previous similar messages [152536.296349] Lustre: fir-OST005a-osc-MDT0001: Connection to fir-OST005a (at 
10.0.10.115@o2ib7) was lost; in progress operations using this service will wait for recovery to complete [152536.312522] Lustre: Skipped 5 previous similar messages [152536.317993] Lustre: fir-OST005a-osc-MDT0001: Connection restored to 10.0.10.115@o2ib7 (at 10.0.10.115@o2ib7) [152536.327910] Lustre: Skipped 5 previous similar messages [152814.428866] LustreError: 22521:0:(osp_precreate.c:940:osp_precreate_cleanup_orphans()) fir-OST0056-osc-MDT0001: cannot cleanup orphans: rc = -107 [153137.858729] Lustre: 21797:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1576144285/real 1576144285] req@ffff9e7873fc0900 x1652547760165872/t0(0) o6->fir-OST005a-osc-MDT0001@10.0.10.115@o2ib7:28/4 lens 544/432 e 1 to 1 dl 1576144886 ref 1 fl Rpc:X/2/ffffffff rc -11/-1 [153137.858732] Lustre: 21798:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1576144285/real 1576144285] req@ffff9e8cc8600d80 x1652547760361024/t0(0) o6->fir-OST005a-osc-MDT0001@10.0.10.115@o2ib7:28/4 lens 544/432 e 1 to 1 dl 1576144886 ref 1 fl Rpc:X/2/ffffffff rc -11/-1 [153137.858736] Lustre: 21798:0:(client.c:2133:ptlrpc_expire_one_request()) Skipped 5 previous similar messages [153137.858743] Lustre: fir-OST005a-osc-MDT0001: Connection to fir-OST005a (at 10.0.10.115@o2ib7) was lost; in progress operations using this service will wait for recovery to complete [153137.858744] Lustre: Skipped 4 previous similar messages [153137.858890] Lustre: fir-OST005a-osc-MDT0001: Connection restored to 10.0.10.115@o2ib7 (at 10.0.10.115@o2ib7) [153137.858892] Lustre: Skipped 4 previous similar messages [153571.462613] LustreError: 22521:0:(osp_precreate.c:940:osp_precreate_cleanup_orphans()) fir-OST0056-osc-MDT0001: cannot cleanup orphans: rc = -107 [153738.419175] Lustre: 21797:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1576144886/real 1576144886] req@ffff9e7873fc0900 x1652547760165872/t0(0) o6->fir-OST005a-osc-MDT0001@10.0.10.115@o2ib7:28/4 lens 544/432 e 1 to 1 dl 1576145487 ref 1 fl Rpc:X/2/ffffffff rc -11/-1 [153738.447551] Lustre: 21797:0:(client.c:2133:ptlrpc_expire_one_request()) Skipped 5 previous similar messages [153738.457398] Lustre: fir-OST005a-osc-MDT0001: Connection to fir-OST005a (at 10.0.10.115@o2ib7) was lost; in progress operations using this service will wait for recovery to complete [153738.473559] Lustre: Skipped 4 previous similar messages [153738.479026] Lustre: fir-OST005a-osc-MDT0001: Connection restored to 10.0.10.115@o2ib7 (at 10.0.10.115@o2ib7) [153738.488948] Lustre: Skipped 4 previous similar messages [153966.288539] INFO: task mdt00_001:22248 blocked for more than 120 seconds. [153966.295418] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [153966.303338] mdt00_001 D ffff9e7da10be180 0 22248 2 0x00000080 [153966.310530] Call Trace: [153966.313079] [] ? update_curr+0x14c/0x1e0 [153966.318768] [] schedule+0x29/0x70 [153966.323831] [] rwsem_down_write_failed+0x225/0x3a0 [153966.330369] [] call_rwsem_down_write_failed+0x17/0x30 [153966.337182] [] down_write+0x2d/0x3d [153966.342435] [] lod_alloc_qos.constprop.18+0x205/0x1840 [lod] [153966.349839] [] ? try_to_del_timer_sync+0x5e/0x90 [153966.356216] [] ? del_timer_sync+0x52/0x60 [153966.361971] [] ? schedule_timeout+0x170/0x2d0 [153966.368084] [] ? lod_qos_statfs_update+0x3c/0x2b0 [lod] [153966.375089] [] ? 
lod_prepare_avoidance+0x375/0x780 [lod] [153966.382148] [] lod_qos_prep_create+0x12d7/0x1890 [lod] [153966.389076] [] ? ldlm_inodebits_alloc_lock+0x66/0x180 [ptlrpc] [153966.396680] [] ? wake_up_state+0x20/0x20 [153966.402349] [] lod_declare_instantiate_components+0x9a/0x1d0 [lod] [153966.410293] [] lod_declare_layout_change+0xb65/0x10f0 [lod] [153966.417615] [] mdd_declare_layout_change+0x62/0x120 [mdd] [153966.424763] [] mdd_layout_change+0x882/0x1000 [mdd] [153966.431417] [] ? mdt_object_lock_internal+0x70/0x360 [mdt] [153966.438656] [] mdt_layout_change+0x337/0x430 [mdt] [153966.445213] [] mdt_intent_layout+0x7ee/0xcc0 [mdt] [153966.451777] [] ? lustre_msg_buf+0x17/0x60 [ptlrpc] [153966.458336] [] mdt_intent_policy+0x435/0xd80 [mdt] [153966.464888] [] ? mdt_intent_open+0x3a0/0x3a0 [mdt] [153966.471442] [] ldlm_lock_enqueue+0x356/0xa20 [ptlrpc] [153966.478261] [] ? cfs_hash_bd_add_locked+0x63/0x80 [libcfs] [153966.485491] [] ? cfs_hash_add+0xbe/0x1a0 [libcfs] [153966.491969] [] ldlm_handle_enqueue0+0xa56/0x15f0 [ptlrpc] [153966.499153] [] ? lustre_swab_ldlm_lock_desc+0x30/0x30 [ptlrpc] [153966.506768] [] tgt_enqueue+0x62/0x210 [ptlrpc] [153966.512995] [] tgt_request_handle+0xaea/0x1580 [ptlrpc] [153966.520008] [] ? ptlrpc_nrs_req_get_nolock0+0xd1/0x170 [ptlrpc] [153966.527670] [] ? ktime_get_real_seconds+0xe/0x10 [libcfs] [153966.534838] [] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc] [153966.542627] [] ? ptlrpc_wait_event+0xa5/0x360 [ptlrpc] [153966.549507] [] ? __wake_up+0x44/0x50 [153966.554857] [] ptlrpc_main+0xb2c/0x1460 [ptlrpc] [153966.561260] [] ? ptlrpc_register_service+0xf80/0xf80 [ptlrpc] [153966.568748] [] kthread+0xd1/0xe0 [153966.573724] [] ? insert_kthread_work+0x40/0x40 [153966.579928] [] ret_from_fork_nospec_begin+0xe/0x21 [153966.586460] [] ? insert_kthread_work+0x40/0x40 [153966.592646] INFO: task mdt02_002:22255 blocked for more than 120 seconds. [153966.599549] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [153966.607470] mdt02_002 D ffff9e8db9202080 0 22255 2 0x00000080 [153966.614678] Call Trace: [153966.617222] [] ? mutex_lock+0x12/0x2f [153966.622631] [] schedule+0x29/0x70 [153966.627704] [] rwsem_down_write_failed+0x225/0x3a0 [153966.634235] [] ? dquot_get_dqblk+0x144/0x1f0 [153966.640247] [] call_rwsem_down_write_failed+0x17/0x30 [153966.647057] [] down_write+0x2d/0x3d [153966.652293] [] lod_alloc_qos.constprop.18+0x205/0x1840 [lod] [153966.659706] [] ? cfs_hash_lookup+0xa2/0xd0 [libcfs] [153966.666347] [] ? lod_qos_statfs_update+0x3c/0x2b0 [lod] [153966.673320] [] ? lod_prepare_avoidance+0x375/0x780 [lod] [153966.680384] [] lod_qos_prep_create+0x12d7/0x1890 [lod] [153966.687287] [] ? qsd_op_begin+0x262/0x4b0 [lquota] [153966.693830] [] ? osd_declare_inode_qid+0x27b/0x430 [osd_ldiskfs] [153966.701581] [] lod_prepare_create+0x215/0x2e0 [lod] [153966.708228] [] lod_declare_striped_create+0x1ee/0x980 [lod] [153966.715542] [] ? lod_sub_declare_create+0xdf/0x210 [lod] [153966.722600] [] lod_declare_create+0x204/0x590 [lod] [153966.729282] [] ? lu_context_refill+0x19/0x50 [obdclass] [153966.736259] [] mdd_declare_create_object_internal+0xe2/0x2f0 [mdd] [153966.744185] [] mdd_declare_create+0x4c/0xcb0 [mdd] [153966.750739] [] mdd_create+0x847/0x14e0 [mdd] [153966.756763] [] mdt_reint_open+0x224f/0x3240 [mdt] [153966.763226] [] ? upcall_cache_get_entry+0x218/0x8b0 [obdclass] [153966.770825] [] mdt_reint_rec+0x83/0x210 [mdt] [153966.776937] [] mdt_reint_internal+0x6e3/0xaf0 [mdt] [153966.783581] [] ? 
mdt_intent_fixup_resent+0x36/0x220 [mdt] [153966.790746] [] mdt_intent_open+0x82/0x3a0 [mdt] [153966.797033] [] ? lprocfs_counter_add+0xf9/0x160 [obdclass] [153966.804284] [] mdt_intent_policy+0x435/0xd80 [mdt] [153966.810823] [] ? mdt_intent_fixup_resent+0x220/0x220 [mdt] [153966.818076] [] ldlm_lock_enqueue+0x356/0xa20 [ptlrpc] [153966.824892] [] ? cfs_hash_bd_add_locked+0x63/0x80 [libcfs] [153966.832121] [] ? cfs_hash_add+0xbe/0x1a0 [libcfs] [153966.838592] [] ldlm_handle_enqueue0+0xa56/0x15f0 [ptlrpc] [153966.845776] [] ? lustre_swab_ldlm_lock_desc+0x30/0x30 [ptlrpc] [153966.853385] [] tgt_enqueue+0x62/0x210 [ptlrpc] [153966.859607] [] tgt_request_handle+0xaea/0x1580 [ptlrpc] [153966.866636] [] ? ptlrpc_nrs_req_get_nolock0+0xd1/0x170 [ptlrpc] [153966.874302] [] ? ktime_get_real_seconds+0xe/0x10 [libcfs] [153966.881471] [] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc] [153966.889259] [] ? ptlrpc_wait_event+0xa5/0x360 [ptlrpc] [153966.896138] [] ? __wake_up+0x44/0x50 [153966.901489] [] ptlrpc_main+0xb2c/0x1460 [ptlrpc] [153966.907890] [] ? ptlrpc_register_service+0xf80/0xf80 [ptlrpc] [153966.915381] [] kthread+0xd1/0xe0 [153966.920354] [] ? insert_kthread_work+0x40/0x40 [153966.926560] [] ret_from_fork_nospec_begin+0xe/0x21 [153966.933119] [] ? insert_kthread_work+0x40/0x40 [153966.939310] INFO: task mdt03_001:22257 blocked for more than 120 seconds. [153966.946202] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [153966.954120] mdt03_001 D ffff9e8db9204100 0 22257 2 0x00000080 [153966.961327] Call Trace: [153966.963870] [] ? update_curr+0x14c/0x1e0 [153966.969537] [] schedule+0x29/0x70 [153966.974617] [] rwsem_down_write_failed+0x225/0x3a0 [153966.981152] [] call_rwsem_down_write_failed+0x17/0x30 [153966.987947] [] down_write+0x2d/0x3d [153966.993218] [] lod_alloc_qos.constprop.18+0x205/0x1840 [lod] [153967.000620] [] ? try_to_del_timer_sync+0x5e/0x90 [153967.006982] [] ? del_timer_sync+0x52/0x60 [153967.012750] [] ? schedule_timeout+0x170/0x2d0 [153967.018859] [] ? lod_qos_statfs_update+0x3c/0x2b0 [lod] [153967.025831] [] ? lod_prepare_avoidance+0x375/0x780 [lod] [153967.032907] [] lod_qos_prep_create+0x12d7/0x1890 [lod] [153967.039794] [] ? osd_trans_create+0xa0/0x620 [osd_ldiskfs] [153967.047036] [] ? wake_up_state+0x20/0x20 [153967.052718] [] lod_declare_instantiate_components+0x9a/0x1d0 [lod] [153967.060646] [] lod_declare_layout_change+0xb65/0x10f0 [lod] [153967.067984] [] mdd_declare_layout_change+0x62/0x120 [mdd] [153967.075126] [] mdd_layout_change+0x882/0x1000 [mdd] [153967.081760] [] ? mdt_object_lock_internal+0x70/0x360 [mdt] [153967.089010] [] mdt_layout_change+0x337/0x430 [mdt] [153967.095547] [] mdt_intent_layout+0x7ee/0xcc0 [mdt] [153967.102114] [] ? lustre_msg_buf+0x17/0x60 [ptlrpc] [153967.108678] [] mdt_intent_policy+0x435/0xd80 [mdt] [153967.115222] [] ? mdt_intent_open+0x3a0/0x3a0 [mdt] [153967.121782] [] ldlm_lock_enqueue+0x356/0xa20 [ptlrpc] [153967.128596] [] ? cfs_hash_bd_add_locked+0x63/0x80 [libcfs] [153967.135826] [] ? cfs_hash_add+0xbe/0x1a0 [libcfs] [153967.142298] [] ldlm_handle_enqueue0+0xa56/0x15f0 [ptlrpc] [153967.149483] [] ? lustre_swab_ldlm_lock_desc+0x30/0x30 [ptlrpc] [153967.157089] [] tgt_enqueue+0x62/0x210 [ptlrpc] [153967.163312] [] tgt_request_handle+0xaea/0x1580 [ptlrpc] [153967.170327] [] ? ptlrpc_nrs_req_get_nolock0+0xd1/0x170 [ptlrpc] [153967.177988] [] ? ktime_get_real_seconds+0xe/0x10 [libcfs] [153967.185157] [] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc] [153967.192947] [] ? 
ptlrpc_wait_event+0xa5/0x360 [ptlrpc] [153967.199844] [] ? __wake_up+0x44/0x50 [153967.205196] [] ptlrpc_main+0xb2c/0x1460 [ptlrpc] [153967.211598] [] ? ptlrpc_register_service+0xf80/0xf80 [ptlrpc] [153967.219084] [] kthread+0xd1/0xe0 [153967.224060] [] ? insert_kthread_work+0x40/0x40 [153967.230264] [] ret_from_fork_nospec_begin+0xe/0x21 [153967.236791] [] ? insert_kthread_work+0x40/0x40 [153967.242990] INFO: task mdt01_004:22305 blocked for more than 120 seconds. [153967.249867] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [153967.257784] mdt01_004 D ffff9e8da08730c0 0 22305 2 0x00000080 [153967.265007] Call Trace: [153967.267556] [] ? lquota_disk_read+0xf2/0x390 [lquota] [153967.274344] [] schedule+0x29/0x70 [153967.279422] [] rwsem_down_write_failed+0x225/0x3a0 [153967.285955] [] ? cfs_hash_lookup+0xa2/0xd0 [libcfs] [153967.292575] [] call_rwsem_down_write_failed+0x17/0x30 [153967.299381] [] down_write+0x2d/0x3d [153967.304616] [] lod_qos_statfs_update+0x97/0x2b0 [lod] [153967.311413] [] lod_qos_prep_create+0x16a/0x1890 [lod] [153967.318231] [] ? qsd_op_begin+0x262/0x4b0 [lquota] [153967.324783] [] ? osd_declare_qid+0x200/0x4a0 [osd_ldiskfs] [153967.332017] [] ? osd_declare_inode_qid+0x27b/0x430 [osd_ldiskfs] [153967.339782] [] lod_prepare_create+0x215/0x2e0 [lod] [153967.346408] [] lod_declare_striped_create+0x1ee/0x980 [lod] [153967.353725] [] ? lod_sub_declare_create+0xdf/0x210 [lod] [153967.360801] [] lod_declare_create+0x204/0x590 [lod] [153967.367438] [] ? lu_context_refill+0x19/0x50 [obdclass] [153967.374417] [] mdd_declare_create_object_internal+0xe2/0x2f0 [mdd] [153967.382357] [] mdd_declare_create+0x4c/0xcb0 [mdd] [153967.388894] [] mdd_create+0x847/0x14e0 [mdd] [153967.394953] [] mdt_reint_open+0x224f/0x3240 [mdt] [153967.401419] [] ? upcall_cache_get_entry+0x218/0x8b0 [obdclass] [153967.409005] [] mdt_reint_rec+0x83/0x210 [mdt] [153967.415127] [] mdt_reint_internal+0x6e3/0xaf0 [mdt] [153967.421753] [] ? mdt_intent_fixup_resent+0x36/0x220 [mdt] [153967.428903] [] mdt_intent_open+0x82/0x3a0 [mdt] [153967.435209] [] ? lprocfs_counter_add+0xf9/0x160 [obdclass] [153967.442440] [] mdt_intent_policy+0x435/0xd80 [mdt] [153967.448981] [] ? mdt_intent_fixup_resent+0x220/0x220 [mdt] [153967.456244] [] ldlm_lock_enqueue+0x356/0xa20 [ptlrpc] [153967.463040] [] ? cfs_hash_bd_add_locked+0x63/0x80 [libcfs] [153967.470288] [] ? cfs_hash_add+0xbe/0x1a0 [libcfs] [153967.476761] [] ldlm_handle_enqueue0+0xa56/0x15f0 [ptlrpc] [153967.483933] [] ? lustre_swab_ldlm_lock_desc+0x30/0x30 [ptlrpc] [153967.491551] [] tgt_enqueue+0x62/0x210 [ptlrpc] [153967.497766] [] tgt_request_handle+0xaea/0x1580 [ptlrpc] [153967.504762] [] ? ptlrpc_nrs_req_get_nolock0+0xd1/0x170 [ptlrpc] [153967.512447] [] ? ktime_get_real_seconds+0xe/0x10 [libcfs] [153967.519611] [] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc] [153967.527362] [] ? __wake_up+0x44/0x50 [153967.532738] [] ? ptlrpc_server_handle_req_in+0x8df/0xd60 [ptlrpc] [153967.540593] [] ptlrpc_main+0xb2c/0x1460 [ptlrpc] [153967.546983] [] ? ptlrpc_register_service+0xf80/0xf80 [ptlrpc] [153967.554483] [] kthread+0xd1/0xe0 [153967.559453] [] ? insert_kthread_work+0x40/0x40 [153967.565641] [] ret_from_fork_nospec_begin+0xe/0x21 [153967.572191] [] ? insert_kthread_work+0x40/0x40 [153967.578379] INFO: task mdt03_004:22306 blocked for more than 120 seconds. [153967.585255] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. 
[153967.593192] mdt03_004 D ffff9e8da0020000 0 22306 2 0x00000080 [153967.600399] Call Trace: [153967.602942] [] ? mutex_lock+0x12/0x2f [153967.608351] [] schedule+0x29/0x70 [153967.613427] [] rwsem_down_write_failed+0x225/0x3a0 [153967.619957] [] ? dquot_get_dqblk+0x144/0x1f0 [153967.625974] [] call_rwsem_down_write_failed+0x17/0x30 [153967.632779] [] down_write+0x2d/0x3d [153967.638014] [] lod_alloc_qos.constprop.18+0x205/0x1840 [lod] [153967.645416] [] ? cfs_hash_lookup+0xa2/0xd0 [libcfs] [153967.652068] [] ? lod_qos_statfs_update+0x3c/0x2b0 [lod] [153967.659057] [] ? lod_prepare_avoidance+0x375/0x780 [lod] [153967.666114] [] lod_qos_prep_create+0x12d7/0x1890 [lod] [153967.673016] [] ? qsd_op_begin+0x262/0x4b0 [lquota] [153967.679559] [] ? osd_declare_inode_qid+0x27b/0x430 [osd_ldiskfs] [153967.687314] [] lod_prepare_create+0x215/0x2e0 [lod] [153967.693959] [] lod_declare_striped_create+0x1ee/0x980 [lod] [153967.701276] [] ? lod_sub_declare_create+0xdf/0x210 [lod] [153967.708338] [] lod_declare_create+0x204/0x590 [lod] [153967.714996] [] ? lu_context_refill+0x19/0x50 [obdclass] [153967.721970] [] mdd_declare_create_object_internal+0xe2/0x2f0 [mdd] [153967.729893] [] mdd_declare_create+0x4c/0xcb0 [mdd] [153967.736455] [] mdd_create+0x847/0x14e0 [mdd] [153967.742483] [] mdt_reint_open+0x224f/0x3240 [mdt] [153967.748945] [] ? upcall_cache_get_entry+0x218/0x8b0 [obdclass] [153967.756559] [] mdt_reint_rec+0x83/0x210 [mdt] [153967.762663] [] mdt_reint_internal+0x6e3/0xaf0 [mdt] [153967.769294] [] ? mdt_intent_fixup_resent+0x36/0x220 [mdt] [153967.776465] [] mdt_intent_open+0x82/0x3a0 [mdt] [153967.782755] [] ? lprocfs_counter_add+0xf9/0x160 [obdclass] [153967.789994] [] mdt_intent_policy+0x435/0xd80 [mdt] [153967.796568] [] ? mdt_intent_fixup_resent+0x220/0x220 [mdt] [153967.803815] [] ldlm_lock_enqueue+0x356/0xa20 [ptlrpc] [153967.810619] [] ? cfs_hash_bd_add_locked+0x63/0x80 [libcfs] [153967.817866] [] ? cfs_hash_add+0xbe/0x1a0 [libcfs] [153967.824339] [] ldlm_handle_enqueue0+0xa56/0x15f0 [ptlrpc] [153967.831512] [] ? lustre_swab_ldlm_lock_desc+0x30/0x30 [ptlrpc] [153967.839131] [] tgt_enqueue+0x62/0x210 [ptlrpc] [153967.845344] [] tgt_request_handle+0xaea/0x1580 [ptlrpc] [153967.852341] [] ? ptlrpc_nrs_req_get_nolock0+0xd1/0x170 [ptlrpc] [153967.860022] [] ? ktime_get_real_seconds+0xe/0x10 [libcfs] [153967.867205] [] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc] [153967.874983] [] ? ptlrpc_wait_event+0xa5/0x360 [ptlrpc] [153967.881878] [] ? __wake_up+0x44/0x50 [153967.887227] [] ptlrpc_main+0xb2c/0x1460 [ptlrpc] [153967.893616] [] ? ptlrpc_register_service+0xf80/0xf80 [ptlrpc] [153967.901116] [] kthread+0xd1/0xe0 [153967.906085] [] ? insert_kthread_work+0x40/0x40 [153967.912273] [] ret_from_fork_nospec_begin+0xe/0x21 [153967.918821] [] ? insert_kthread_work+0x40/0x40 [153967.925036] INFO: task mdt00_009:22600 blocked for more than 120 seconds. [153967.931912] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [153967.939862] mdt00_009 D ffff9e8da3301040 0 22600 2 0x00000080 [153967.947050] Call Trace: [153967.949596] [] ? lquota_disk_read+0xf2/0x390 [lquota] [153967.956388] [] schedule+0x29/0x70 [153967.961467] [] rwsem_down_write_failed+0x225/0x3a0 [153967.968007] [] ? cfs_hash_lookup+0xa2/0xd0 [libcfs] [153967.974623] [] call_rwsem_down_write_failed+0x17/0x30 [153967.981434] [] down_write+0x2d/0x3d [153967.986679] [] lod_qos_statfs_update+0x97/0x2b0 [lod] [153967.993480] [] lod_qos_prep_create+0x16a/0x1890 [lod] [153968.000300] [] ? 
qsd_op_begin+0x262/0x4b0 [lquota] [153968.006839] [] ? osd_declare_qid+0x200/0x4a0 [osd_ldiskfs] [153968.014074] [] ? osd_declare_inode_qid+0x27b/0x430 [osd_ldiskfs] [153968.021845] [] lod_prepare_create+0x215/0x2e0 [lod] [153968.028467] [] lod_declare_striped_create+0x1ee/0x980 [lod] [153968.035783] [] ? lod_sub_declare_create+0xdf/0x210 [lod] [153968.042860] [] lod_declare_create+0x204/0x590 [lod] [153968.049497] [] ? lu_context_refill+0x19/0x50 [obdclass] [153968.056471] [] mdd_declare_create_object_internal+0xe2/0x2f0 [mdd] [153968.064429] [] mdd_declare_create+0x4c/0xcb0 [mdd] [153968.070968] [] mdd_create+0x847/0x14e0 [mdd] [153968.076991] [] mdt_reint_open+0x224f/0x3240 [mdt] [153968.083474] [] ? upcall_cache_get_entry+0x218/0x8b0 [obdclass] [153968.091055] [] mdt_reint_rec+0x83/0x210 [mdt] [153968.097165] [] mdt_reint_internal+0x6e3/0xaf0 [mdt] [153968.103808] [] ? mdt_intent_fixup_resent+0x36/0x220 [mdt] [153968.110955] [] mdt_intent_open+0x82/0x3a0 [mdt] [153968.117246] [] ? lprocfs_counter_add+0xf9/0x160 [obdclass] [153968.124512] [] mdt_intent_policy+0x435/0xd80 [mdt] [153968.131065] [] ? mdt_intent_fixup_resent+0x220/0x220 [mdt] [153968.138314] [] ldlm_lock_enqueue+0x356/0xa20 [ptlrpc] [153968.145130] [] ? cfs_hash_bd_add_locked+0x63/0x80 [libcfs] [153968.152358] [] ? cfs_hash_add+0xbe/0x1a0 [libcfs] [153968.158830] [] ldlm_handle_enqueue0+0xa56/0x15f0 [ptlrpc] [153968.166018] [] ? lustre_swab_ldlm_lock_desc+0x30/0x30 [ptlrpc] [153968.173622] [] tgt_enqueue+0x62/0x210 [ptlrpc] [153968.179842] [] tgt_request_handle+0xaea/0x1580 [ptlrpc] [153968.186851] [] ? ptlrpc_nrs_req_get_nolock0+0xd1/0x170 [ptlrpc] [153968.194513] [] ? ktime_get_real_seconds+0xe/0x10 [libcfs] [153968.201683] [] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc] [153968.209472] [] ? ptlrpc_wait_event+0xa5/0x360 [ptlrpc] [153968.216351] [] ? __wake_up+0x44/0x50 [153968.221714] [] ptlrpc_main+0xb2c/0x1460 [ptlrpc] [153968.228108] [] ? ptlrpc_register_service+0xf80/0xf80 [ptlrpc] [153968.235608] [] kthread+0xd1/0xe0 [153968.240577] [] ? insert_kthread_work+0x40/0x40 [153968.246778] [] ret_from_fork_nospec_begin+0xe/0x21 [153968.253305] [] ? insert_kthread_work+0x40/0x40 [153968.259492] INFO: task mdt03_008:22611 blocked for more than 120 seconds. [153968.266397] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [153968.274316] mdt03_008 D ffff9e5daf8ee180 0 22611 2 0x00000080 [153968.281522] Call Trace: [153968.284065] [] ? mutex_lock+0x12/0x2f [153968.289477] [] schedule+0x29/0x70 [153968.294551] [] rwsem_down_write_failed+0x225/0x3a0 [153968.301081] [] ? dquot_get_dqblk+0x144/0x1f0 [153968.307093] [] call_rwsem_down_write_failed+0x17/0x30 [153968.313904] [] down_write+0x2d/0x3d [153968.319148] [] lod_alloc_qos.constprop.18+0x205/0x1840 [lod] [153968.326557] [] ? cfs_hash_lookup+0xa2/0xd0 [libcfs] [153968.333195] [] ? lod_qos_statfs_update+0x3c/0x2b0 [lod] [153968.340163] [] ? lod_prepare_avoidance+0x375/0x780 [lod] [153968.347220] [] lod_qos_prep_create+0x12d7/0x1890 [lod] [153968.354122] [] ? qsd_op_begin+0x262/0x4b0 [lquota] [153968.360665] [] ? osd_declare_inode_qid+0x27b/0x430 [osd_ldiskfs] [153968.368419] [] lod_prepare_create+0x215/0x2e0 [lod] [153968.375081] [] lod_declare_striped_create+0x1ee/0x980 [lod] [153968.382398] [] ? lod_sub_declare_create+0xdf/0x210 [lod] [153968.389465] [] lod_declare_create+0x204/0x590 [lod] [153968.396131] [] ? 
lu_context_refill+0x19/0x50 [obdclass] [153968.403104] [] mdd_declare_create_object_internal+0xe2/0x2f0 [mdd] [153968.411033] [] mdd_declare_create+0x4c/0xcb0 [mdd] [153968.417583] [] mdd_create+0x847/0x14e0 [mdd] [153968.423606] [] mdt_reint_open+0x224f/0x3240 [mdt] [153968.430067] [] ? upcall_cache_get_entry+0x218/0x8b0 [obdclass] [153968.437667] [] mdt_reint_rec+0x83/0x210 [mdt] [153968.443769] [] mdt_reint_internal+0x6e3/0xaf0 [mdt] [153968.450403] [] ? mdt_intent_fixup_resent+0x36/0x220 [mdt] [153968.457569] [] mdt_intent_open+0x82/0x3a0 [mdt] [153968.463887] [] ? lprocfs_counter_add+0xf9/0x160 [obdclass] [153968.471125] [] mdt_intent_policy+0x435/0xd80 [mdt] [153968.477682] [] ? mdt_intent_fixup_resent+0x220/0x220 [mdt] [153968.484927] [] ldlm_lock_enqueue+0x356/0xa20 [ptlrpc] [153968.491723] [] ? cfs_hash_bd_add_locked+0x63/0x80 [libcfs] [153968.498971] [] ? cfs_hash_add+0xbe/0x1a0 [libcfs] [153968.505447] [] ldlm_handle_enqueue0+0xa56/0x15f0 [ptlrpc] [153968.512619] [] ? lustre_swab_ldlm_lock_desc+0x30/0x30 [ptlrpc] [153968.520242] [] tgt_enqueue+0x62/0x210 [ptlrpc] [153968.526460] [] tgt_request_handle+0xaea/0x1580 [ptlrpc] [153968.533461] [] ? ptlrpc_nrs_req_get_nolock0+0xd1/0x170 [ptlrpc] [153968.541144] [] ? ktime_get_real_seconds+0xe/0x10 [libcfs] [153968.548314] [] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc] [153968.556088] [] ? ptlrpc_wait_event+0xa5/0x360 [ptlrpc] [153968.562982] [] ? __wake_up+0x44/0x50 [153968.568324] [] ptlrpc_main+0xb2c/0x1460 [ptlrpc] [153968.574713] [] ? ptlrpc_register_service+0xf80/0xf80 [ptlrpc] [153968.582215] [] kthread+0xd1/0xe0 [153968.587192] [] ? insert_kthread_work+0x40/0x40 [153968.593396] [] ret_from_fork_nospec_begin+0xe/0x21 [153968.599962] [] ? insert_kthread_work+0x40/0x40 [153968.606151] INFO: task mdt03_009:22622 blocked for more than 120 seconds. [153968.613052] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [153968.620973] mdt03_009 D ffff9e8da0ae4100 0 22622 2 0x00000080 [153968.628172] Call Trace: [153968.630719] [] ? lquota_disk_read+0xf2/0x390 [lquota] [153968.637515] [] schedule+0x29/0x70 [153968.642586] [] rwsem_down_write_failed+0x225/0x3a0 [153968.649120] [] ? cfs_hash_lookup+0xa2/0xd0 [libcfs] [153968.655740] [] call_rwsem_down_write_failed+0x17/0x30 [153968.662561] [] down_write+0x2d/0x3d [153968.667798] [] lod_qos_statfs_update+0x97/0x2b0 [lod] [153968.674597] [] lod_qos_prep_create+0x16a/0x1890 [lod] [153968.681413] [] ? qsd_op_begin+0x262/0x4b0 [lquota] [153968.687951] [] ? osd_declare_qid+0x200/0x4a0 [osd_ldiskfs] [153968.695188] [] ? osd_declare_inode_qid+0x27b/0x430 [osd_ldiskfs] [153968.702959] [] lod_prepare_create+0x215/0x2e0 [lod] [153968.709581] [] lod_declare_striped_create+0x1ee/0x980 [lod] [153968.716900] [] ? lod_sub_declare_create+0xdf/0x210 [lod] [153968.723975] [] lod_declare_create+0x204/0x590 [lod] [153968.730626] [] ? lu_context_refill+0x19/0x50 [obdclass] [153968.737602] [] mdd_declare_create_object_internal+0xe2/0x2f0 [mdd] [153968.745536] [] mdd_declare_create+0x4c/0xcb0 [mdd] [153968.752074] [] mdd_create+0x847/0x14e0 [mdd] [153968.758096] [] mdt_reint_open+0x224f/0x3240 [mdt] [153968.764574] [] ? upcall_cache_get_entry+0x218/0x8b0 [obdclass] [153968.772152] [] mdt_reint_rec+0x83/0x210 [mdt] [153968.778263] [] mdt_reint_internal+0x6e3/0xaf0 [mdt] [153968.784907] [] ? mdt_intent_fixup_resent+0x36/0x220 [mdt] [153968.792049] [] mdt_intent_open+0x82/0x3a0 [mdt] [153968.798341] [] ? 
lprocfs_counter_add+0xf9/0x160 [obdclass] [153968.805595] [] mdt_intent_policy+0x435/0xd80 [mdt] [153968.812132] [] ? mdt_intent_fixup_resent+0x220/0x220 [mdt] [153968.819385] [] ldlm_lock_enqueue+0x356/0xa20 [ptlrpc] [153968.826202] [] ? cfs_hash_bd_add_locked+0x63/0x80 [libcfs] [153968.833429] [] ? cfs_hash_add+0xbe/0x1a0 [libcfs] [153968.839903] [] ldlm_handle_enqueue0+0xa56/0x15f0 [ptlrpc] [153968.847101] [] ? lustre_swab_ldlm_lock_desc+0x30/0x30 [ptlrpc] [153968.854700] [] tgt_enqueue+0x62/0x210 [ptlrpc] [153968.860918] [] tgt_request_handle+0xaea/0x1580 [ptlrpc] [153968.867945] [] ? ptlrpc_nrs_req_get_nolock0+0xd1/0x170 [ptlrpc] [153968.875612] [] ? ktime_get_real_seconds+0xe/0x10 [libcfs] [153968.882779] [] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc] [153968.890570] [] ? ptlrpc_wait_event+0xa5/0x360 [ptlrpc] [153968.897448] [] ? __wake_up+0x44/0x50 [153968.902792] [] ptlrpc_main+0xb2c/0x1460 [ptlrpc] [153968.909190] [] ? ptlrpc_register_service+0xf80/0xf80 [ptlrpc] [153968.916672] [] kthread+0xd1/0xe0 [153968.921649] [] ? insert_kthread_work+0x40/0x40 [153968.927851] [] ret_from_fork_nospec_begin+0xe/0x21 [153968.934418] [] ? insert_kthread_work+0x40/0x40 [154328.496364] LustreError: 22521:0:(osp_precreate.c:940:osp_precreate_cleanup_orphans()) fir-OST0056-osc-MDT0001: cannot cleanup orphans: rc = -107 [154339.811665] Lustre: 21797:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1576145487/real 1576145487] req@ffff9e7873fc0900 x1652547760165872/t0(0) o6->fir-OST005a-osc-MDT0001@10.0.10.115@o2ib7:28/4 lens 544/432 e 1 to 1 dl 1576146088 ref 1 fl Rpc:X/2/ffffffff rc -11/-1 [154339.840058] Lustre: 21797:0:(client.c:2133:ptlrpc_expire_one_request()) Skipped 8 previous similar messages [154339.849886] Lustre: fir-OST005a-osc-MDT0001: Connection to fir-OST005a (at 10.0.10.115@o2ib7) was lost; in progress operations using this service will wait for recovery to complete [154339.866049] Lustre: Skipped 6 previous similar messages [154339.871550] Lustre: fir-OST005a-osc-MDT0001: Connection restored to 10.0.10.115@o2ib7 (at 10.0.10.115@o2ib7) [154339.881465] Lustre: Skipped 6 previous similar messages [154940.334158] Lustre: 21797:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1576146088/real 1576146088] req@ffff9e7873fc0900 x1652547760165872/t0(0) o6->fir-OST005a-osc-MDT0001@10.0.10.115@o2ib7:28/4 lens 544/432 e 1 to 1 dl 1576146689 ref 1 fl Rpc:X/2/ffffffff rc -11/-1 [154940.334163] Lustre: fir-OST005a-osc-MDT0001: Connection to fir-OST005a (at 10.0.10.115@o2ib7) was lost; in progress operations using this service will wait for recovery to complete [154940.334165] Lustre: Skipped 5 previous similar messages [154940.334322] Lustre: fir-OST005a-osc-MDT0001: Connection restored to 10.0.10.115@o2ib7 (at 10.0.10.115@o2ib7) [154940.334324] Lustre: Skipped 5 previous similar messages [154940.399259] Lustre: 21797:0:(client.c:2133:ptlrpc_expire_one_request()) Skipped 7 previous similar messages [155085.530093] LustreError: 22521:0:(osp_precreate.c:940:osp_precreate_cleanup_orphans()) fir-OST0056-osc-MDT0001: cannot cleanup orphans: rc = -107 [155541.756522] Lustre: 21797:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1576146689/real 1576146689] req@ffff9e7873fc0900 x1652547760165872/t0(0) o6->fir-OST005a-osc-MDT0001@10.0.10.115@o2ib7:28/4 lens 544/432 e 1 to 1 dl 1576147290 ref 1 fl Rpc:X/2/ffffffff rc -11/-1 [155541.784899] Lustre: 
21797:0:(client.c:2133:ptlrpc_expire_one_request()) Skipped 8 previous similar messages [155541.794734] Lustre: fir-OST005a-osc-MDT0001: Connection to fir-OST005a (at 10.0.10.115@o2ib7) was lost; in progress operations using this service will wait for recovery to complete [155541.810905] Lustre: Skipped 6 previous similar messages [155541.816395] Lustre: fir-OST005a-osc-MDT0001: Connection restored to 10.0.10.115@o2ib7 (at 10.0.10.115@o2ib7) [155541.826319] Lustre: Skipped 6 previous similar messages [155842.563733] LustreError: 22521:0:(osp_precreate.c:940:osp_precreate_cleanup_orphans()) fir-OST0056-osc-MDT0001: cannot cleanup orphans: rc = -107 [156142.412917] Lustre: 21798:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1576147290/real 1576147290] req@ffff9e8cc8600d80 x1652547760361024/t0(0) o6->fir-OST005a-osc-MDT0001@10.0.10.115@o2ib7:28/4 lens 544/432 e 1 to 1 dl 1576147891 ref 1 fl Rpc:X/2/ffffffff rc -11/-1 [156142.441292] Lustre: 21798:0:(client.c:2133:ptlrpc_expire_one_request()) Skipped 5 previous similar messages [156142.451127] Lustre: fir-OST005a-osc-MDT0001: Connection to fir-OST005a (at 10.0.10.115@o2ib7) was lost; in progress operations using this service will wait for recovery to complete [156142.467294] Lustre: Skipped 4 previous similar messages [156142.472785] Lustre: fir-OST005a-osc-MDT0001: Connection restored to 10.0.10.115@o2ib7 (at 10.0.10.115@o2ib7) [156142.482706] Lustre: Skipped 4 previous similar messages [156599.597402] LustreError: 22521:0:(osp_precreate.c:940:osp_precreate_cleanup_orphans()) fir-OST0056-osc-MDT0001: cannot cleanup orphans: rc = -107 [156744.669322] Lustre: 21798:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1576147891/real 1576147891] req@ffff9e8cc8600d80 x1652547760361024/t0(0) o6->fir-OST005a-osc-MDT0001@10.0.10.115@o2ib7:28/4 lens 544/432 e 1 to 1 dl 1576148492 ref 1 fl Rpc:X/2/ffffffff rc -11/-1 [156744.669328] Lustre: fir-OST005a-osc-MDT0001: Connection to fir-OST005a (at 10.0.10.115@o2ib7) was lost; in progress operations using this service will wait for recovery to complete [156744.669330] Lustre: Skipped 4 previous similar messages [156744.669482] Lustre: fir-OST005a-osc-MDT0001: Connection restored to 10.0.10.115@o2ib7 (at 10.0.10.115@o2ib7) [156744.669484] Lustre: Skipped 4 previous similar messages [156744.734377] Lustre: 21798:0:(client.c:2133:ptlrpc_expire_one_request()) Skipped 7 previous similar messages [156879.295427] Lustre: fir-MDT0001: haven't heard from client 33fb836e-8923-4 (at 10.9.113.13@o2ib4) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9e7c88b4b800, cur 1576148628 expire 1576148478 last 1576148401 [157164.304413] Lustre: fir-MDT0001: haven't heard from client 99c0707c-5cac-72fe-8449-b2fab5cd2307 (at 10.9.103.9@o2ib4) in 227 seconds. I think it's dead, and I am evicting it. 
exp ffff9e5da4daf400, cur 1576148913 expire 1576148763 last 1576148686 [157201.626783] LustreError: 22521:0:(osp_precreate.c:940:osp_precreate_cleanup_orphans()) fir-OST0056-osc-MDT0001: cannot cleanup orphans: rc = -107 [157345.613740] Lustre: 21798:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1576148493/real 1576148493] req@ffff9e8cc8600d80 x1652547760361024/t0(0) o6->fir-OST005a-osc-MDT0001@10.0.10.115@o2ib7:28/4 lens 544/432 e 1 to 1 dl 1576149094 ref 1 fl Rpc:X/2/ffffffff rc -11/-1 [157345.642117] Lustre: 21798:0:(client.c:2133:ptlrpc_expire_one_request()) Skipped 8 previous similar messages [157345.651952] Lustre: fir-OST005a-osc-MDT0001: Connection to fir-OST005a (at 10.0.10.115@o2ib7) was lost; in progress operations using this service will wait for recovery to complete [157345.668118] Lustre: Skipped 6 previous similar messages [157345.673619] Lustre: fir-OST005a-osc-MDT0001: Connection restored to 10.0.10.115@o2ib7 (at 10.0.10.115@o2ib7) [157345.683558] Lustre: Skipped 7 previous similar messages [157947.230142] Lustre: 21797:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1576149094/real 1576149094] req@ffff9e7873fc0900 x1652547760165872/t0(0) o6->fir-OST005a-osc-MDT0001@10.0.10.115@o2ib7:28/4 lens 544/432 e 1 to 1 dl 1576149695 ref 1 fl Rpc:X/2/ffffffff rc -11/-1 [157947.258517] Lustre: 21797:0:(client.c:2133:ptlrpc_expire_one_request()) Skipped 7 previous similar messages [157947.268350] Lustre: fir-OST005a-osc-MDT0001: Connection to fir-OST005a (at 10.0.10.115@o2ib7) was lost; in progress operations using this service will wait for recovery to complete [157947.284525] Lustre: Skipped 5 previous similar messages [157947.290023] Lustre: fir-OST005a-osc-MDT0001: Connection restored to 10.0.10.115@o2ib7 (at 10.0.10.115@o2ib7) [157947.299940] Lustre: Skipped 5 previous similar messages [157958.660459] LustreError: 22521:0:(osp_precreate.c:940:osp_precreate_cleanup_orphans()) fir-OST0056-osc-MDT0001: cannot cleanup orphans: rc = -107 [158547.726533] Lustre: 21797:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1576149695/real 1576149695] req@ffff9e7873fc0900 x1652547760165872/t0(0) o6->fir-OST005a-osc-MDT0001@10.0.10.115@o2ib7:28/4 lens 544/432 e 1 to 1 dl 1576150296 ref 1 fl Rpc:X/2/ffffffff rc -11/-1 [158547.754906] Lustre: 21797:0:(client.c:2133:ptlrpc_expire_one_request()) Skipped 7 previous similar messages [158547.764739] Lustre: fir-OST005a-osc-MDT0001: Connection to fir-OST005a (at 10.0.10.115@o2ib7) was lost; in progress operations using this service will wait for recovery to complete [158547.780904] Lustre: Skipped 6 previous similar messages [158547.786367] Lustre: fir-OST005a-osc-MDT0001: Connection restored to 10.0.10.115@o2ib7 (at 10.0.10.115@o2ib7) [158547.796281] Lustre: Skipped 6 previous similar messages [158560.689902] LustreError: 22521:0:(osp_precreate.c:940:osp_precreate_cleanup_orphans()) fir-OST0056-osc-MDT0001: cannot cleanup orphans: rc = -107 [159148.518978] Lustre: 21797:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1576150296/real 1576150296] req@ffff9e7873fc0900 x1652547760165872/t0(0) o6->fir-OST005a-osc-MDT0001@10.0.10.115@o2ib7:28/4 lens 544/432 e 1 to 1 dl 1576150897 ref 1 fl Rpc:X/2/ffffffff rc -11/-1 [159148.547358] Lustre: 21797:0:(client.c:2133:ptlrpc_expire_one_request()) Skipped 4 previous similar messages [159148.557188] Lustre: 
fir-OST005a-osc-MDT0001: Connection to fir-OST005a (at 10.0.10.115@o2ib7) was lost; in progress operations using this service will wait for recovery to complete [159148.573353] Lustre: Skipped 3 previous similar messages [159148.578824] Lustre: fir-OST005a-osc-MDT0001: Connection restored to 10.0.10.115@o2ib7 (at 10.0.10.115@o2ib7) [159148.588758] Lustre: Skipped 3 previous similar messages [159162.719373] LustreError: 22521:0:(osp_precreate.c:940:osp_precreate_cleanup_orphans()) fir-OST0056-osc-MDT0001: cannot cleanup orphans: rc = -107 [159749.807362] Lustre: 21797:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1576150897/real 1576150897] req@ffff9e7873fc0900 x1652547760165872/t0(0) o6->fir-OST005a-osc-MDT0001@10.0.10.115@o2ib7:28/4 lens 544/432 e 1 to 1 dl 1576151498 ref 1 fl Rpc:X/2/ffffffff rc -11/-1 [159749.835737] Lustre: 21797:0:(client.c:2133:ptlrpc_expire_one_request()) Skipped 7 previous similar messages [159749.845571] Lustre: fir-OST005a-osc-MDT0001: Connection to fir-OST005a (at 10.0.10.115@o2ib7) was lost; in progress operations using this service will wait for recovery to complete [159749.861738] Lustre: Skipped 5 previous similar messages [159749.867213] Lustre: fir-OST005a-osc-MDT0001: Connection restored to 10.0.10.115@o2ib7 (at 10.0.10.115@o2ib7) [159749.877138] Lustre: Skipped 5 previous similar messages [159764.748778] LustreError: 22521:0:(osp_precreate.c:940:osp_precreate_cleanup_orphans()) fir-OST0056-osc-MDT0001: cannot cleanup orphans: rc = -107 [160350.783805] Lustre: 21797:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1576151498/real 1576151498] req@ffff9e7873fc0900 x1652547760165872/t0(0) o6->fir-OST005a-osc-MDT0001@10.0.10.115@o2ib7:28/4 lens 544/432 e 1 to 1 dl 1576152099 ref 1 fl Rpc:X/2/ffffffff rc -11/-1 [160350.812180] Lustre: 21797:0:(client.c:2133:ptlrpc_expire_one_request()) Skipped 7 previous similar messages [160350.822012] Lustre: fir-OST005a-osc-MDT0001: Connection to fir-OST005a (at 10.0.10.115@o2ib7) was lost; in progress operations using this service will wait for recovery to complete [160350.838183] Lustre: Skipped 6 previous similar messages [160350.843681] Lustre: fir-OST005a-osc-MDT0001: Connection restored to 10.0.10.115@o2ib7 (at 10.0.10.115@o2ib7) [160350.853602] Lustre: Skipped 6 previous similar messages [160455.406575] Lustre: fir-MDT0001: haven't heard from client a83208a9-361d-4 (at 10.9.112.4@o2ib4) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9e5d9eac3c00, cur 1576152204 expire 1576152054 last 1576151977 [160521.782498] LustreError: 22521:0:(osp_precreate.c:940:osp_precreate_cleanup_orphans()) fir-OST0056-osc-MDT0001: cannot cleanup orphans: rc = -107 [160790.418381] Lustre: fir-MDT0001: haven't heard from client 46023962-0c0f-4f56-ba25-877d19751e9f (at 10.8.18.14@o2ib6) in 227 seconds. I think it's dead, and I am evicting it. 
exp ffff9e5d99bb2800, cur 1576152539 expire 1576152389 last 1576152312 [160790.440165] Lustre: Skipped 1 previous similar message [160952.199257] Lustre: 21797:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1576152099/real 1576152099] req@ffff9e7873fc0900 x1652547760165872/t0(0) o6->fir-OST005a-osc-MDT0001@10.0.10.115@o2ib7:28/4 lens 544/432 e 1 to 1 dl 1576152700 ref 1 fl Rpc:X/2/ffffffff rc -11/-1 [160952.227716] Lustre: 21797:0:(client.c:2133:ptlrpc_expire_one_request()) Skipped 7 previous similar messages [160952.237571] Lustre: fir-OST005a-osc-MDT0001: Connection to fir-OST005a (at 10.0.10.115@o2ib7) was lost; in progress operations using this service will wait for recovery to complete [160952.253753] Lustre: Skipped 5 previous similar messages [160952.259281] Lustre: fir-OST005a-osc-MDT0001: Connection restored to 10.0.10.115@o2ib7 (at 10.0.10.115@o2ib7) [160952.269230] Lustre: Skipped 7 previous similar messages [161123.811968] LustreError: 22521:0:(osp_precreate.c:940:osp_precreate_cleanup_orphans()) fir-OST0056-osc-MDT0001: cannot cleanup orphans: rc = -107 [161553.376810] Lustre: 21797:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1576152700/real 1576152700] req@ffff9e7873fc0900 x1652547760165872/t0(0) o6->fir-OST005a-osc-MDT0001@10.0.10.115@o2ib7:28/4 lens 544/432 e 1 to 1 dl 1576153301 ref 1 fl Rpc:X/2/ffffffff rc -11/-1 [161553.405189] Lustre: 21797:0:(client.c:2133:ptlrpc_expire_one_request()) Skipped 8 previous similar messages [161553.415022] Lustre: fir-OST005a-osc-MDT0001: Connection to fir-OST005a (at 10.0.10.115@o2ib7) was lost; in progress operations using this service will wait for recovery to complete [161553.431195] Lustre: Skipped 6 previous similar messages [161553.436655] Lustre: fir-OST005a-osc-MDT0001: Connection restored to 10.0.10.115@o2ib7 (at 10.0.10.115@o2ib7) [161553.446581] Lustre: Skipped 6 previous similar messages [161630.452725] Lustre: fir-MDT0001: haven't heard from client 3c8d6e9e-a50e-0a1b-c656-8992c6066eb7 (at 10.9.103.17@o2ib4) in 227 seconds. I think it's dead, and I am evicting it. 
exp ffff9e5d97272c00, cur 1576153379 expire 1576153229 last 1576153152 [161880.845832] LustreError: 22521:0:(osp_precreate.c:940:osp_precreate_cleanup_orphans()) fir-OST0056-osc-MDT0001: cannot cleanup orphans: rc = -107 [162154.801328] Lustre: 21797:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1576153302/real 1576153302] req@ffff9e7873fc0900 x1652547760165872/t0(0) o6->fir-OST005a-osc-MDT0001@10.0.10.115@o2ib7:28/4 lens 544/432 e 1 to 1 dl 1576153903 ref 1 fl Rpc:X/2/ffffffff rc -11/-1 [162154.829700] Lustre: 21797:0:(client.c:2133:ptlrpc_expire_one_request()) Skipped 3 previous similar messages [162154.839532] Lustre: fir-OST005a-osc-MDT0001: Connection to fir-OST005a (at 10.0.10.115@o2ib7) was lost; in progress operations using this service will wait for recovery to complete [162154.855701] Lustre: Skipped 2 previous similar messages [162154.861177] Lustre: fir-OST005a-osc-MDT0001: Connection restored to 10.0.10.115@o2ib7 (at 10.0.10.115@o2ib7) [162154.871115] Lustre: Skipped 3 previous similar messages [162637.879584] LustreError: 22521:0:(osp_precreate.c:940:osp_precreate_cleanup_orphans()) fir-OST0056-osc-MDT0001: cannot cleanup orphans: rc = -107 [162756.062824] Lustre: 21797:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1576153903/real 1576153903] req@ffff9e7873fc0900 x1652547760165872/t0(0) o6->fir-OST005a-osc-MDT0001@10.0.10.115@o2ib7:28/4 lens 544/432 e 1 to 1 dl 1576154504 ref 1 fl Rpc:X/2/ffffffff rc -11/-1 [162756.062827] Lustre: 21798:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1576153903/real 1576153903] req@ffff9e8cc8600d80 x1652547760361024/t0(0) o6->fir-OST005a-osc-MDT0001@10.0.10.115@o2ib7:28/4 lens 544/432 e 1 to 1 dl 1576154504 ref 1 fl Rpc:X/2/ffffffff rc -11/-1 [162756.062830] Lustre: 21798:0:(client.c:2133:ptlrpc_expire_one_request()) Skipped 7 previous similar messages [162756.062837] Lustre: fir-OST005a-osc-MDT0001: Connection to fir-OST005a (at 10.0.10.115@o2ib7) was lost; in progress operations using this service will wait for recovery to complete [162756.062838] Lustre: Skipped 6 previous similar messages [162756.062983] Lustre: fir-OST005a-osc-MDT0001: Connection restored to 10.0.10.115@o2ib7 (at 10.0.10.115@o2ib7) [162756.062985] Lustre: Skipped 6 previous similar messages [163357.522315] Lustre: 21797:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1576154504/real 1576154504] req@ffff9e7873fc0900 x1652547760165872/t0(0) o6->fir-OST005a-osc-MDT0001@10.0.10.115@o2ib7:28/4 lens 544/432 e 1 to 1 dl 1576155105 ref 1 fl Rpc:X/2/ffffffff rc -11/-1 [163357.550687] Lustre: 21797:0:(client.c:2133:ptlrpc_expire_one_request()) Skipped 6 previous similar messages [163357.560521] Lustre: fir-OST005a-osc-MDT0001: Connection to fir-OST005a (at 10.0.10.115@o2ib7) was lost; in progress operations using this service will wait for recovery to complete [163357.576682] Lustre: Skipped 5 previous similar messages [163357.582161] Lustre: fir-OST005a-osc-MDT0001: Connection restored to 10.0.10.115@o2ib7 (at 10.0.10.115@o2ib7) [163357.592084] Lustre: Skipped 5 previous similar messages [163394.913343] LustreError: 22521:0:(osp_precreate.c:940:osp_precreate_cleanup_orphans()) fir-OST0056-osc-MDT0001: cannot cleanup orphans: rc = -107 [163959.602812] Lustre: 21797:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 
1576155106/real 1576155106] req@ffff9e7873fc0900 x1652547760165872/t0(0) o6->fir-OST005a-osc-MDT0001@10.0.10.115@o2ib7:28/4 lens 544/432 e 1 to 1 dl 1576155707 ref 1 fl Rpc:X/2/ffffffff rc -11/-1 [163959.631192] Lustre: 21797:0:(client.c:2133:ptlrpc_expire_one_request()) Skipped 7 previous similar messages [163959.641024] Lustre: fir-OST005a-osc-MDT0001: Connection to fir-OST005a (at 10.0.10.115@o2ib7) was lost; in progress operations using this service will wait for recovery to complete [163959.657192] Lustre: Skipped 6 previous similar messages [163959.662666] Lustre: fir-OST005a-osc-MDT0001: Connection restored to 10.0.10.115@o2ib7 (at 10.0.10.115@o2ib7) [163959.672588] Lustre: Skipped 6 previous similar messages [163972.512886] Lustre: fir-MDT0001: haven't heard from client 27dd63c4-0630-b8af-eb2d-2f38c1747230 (at 10.8.19.5@o2ib6) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9e5da7604000, cur 1576155721 expire 1576155571 last 1576155494 [164151.947089] LustreError: 22521:0:(osp_precreate.c:940:osp_precreate_cleanup_orphans()) fir-OST0056-osc-MDT0001: cannot cleanup orphans: rc = -107 [164560.595233] Lustre: 21797:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1576155708/real 1576155708] req@ffff9e7873fc0900 x1652547760165872/t0(0) o6->fir-OST005a-osc-MDT0001@10.0.10.115@o2ib7:28/4 lens 544/432 e 1 to 1 dl 1576156309 ref 1 fl Rpc:X/2/ffffffff rc -11/-1 [164560.623613] Lustre: 21797:0:(client.c:2133:ptlrpc_expire_one_request()) Skipped 8 previous similar messages [164560.627234] Lustre: fir-OST005a-osc-MDT0001: Connection to fir-OST005a (at 10.0.10.115@o2ib7) was lost; in progress operations using this service will wait for recovery to complete [164560.627236] Lustre: Skipped 6 previous similar messages [164560.627390] Lustre: fir-OST005a-osc-MDT0001: Connection restored to 10.0.10.115@o2ib7 (at 10.0.10.115@o2ib7) [164560.627392] Lustre: Skipped 6 previous similar messages [164646.515769] Lustre: fir-MDT0001: haven't heard from client d4e78436-48cb-55f2-4bab-88419072f51d (at 10.9.103.16@o2ib4) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9e5da3044400, cur 1576156395 expire 1576156245 last 1576156168 [164753.976523] LustreError: 22521:0:(osp_precreate.c:940:osp_precreate_cleanup_orphans()) fir-OST0056-osc-MDT0001: cannot cleanup orphans: rc = -107 [165161.939707] Lustre: 21797:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1576156309/real 1576156309] req@ffff9e7873fc0900 x1652547760165872/t0(0) o6->fir-OST005a-osc-MDT0001@10.0.10.115@o2ib7:28/4 lens 544/432 e 1 to 1 dl 1576156910 ref 1 fl Rpc:X/2/ffffffff rc -11/-1 [165161.968153] Lustre: 21797:0:(client.c:2133:ptlrpc_expire_one_request()) Skipped 3 previous similar messages [165161.977985] Lustre: fir-OST005a-osc-MDT0001: Connection to fir-OST005a (at 10.0.10.115@o2ib7) was lost; in progress operations using this service will wait for recovery to complete [165161.994150] Lustre: Skipped 2 previous similar messages [165161.999643] Lustre: fir-OST005a-osc-MDT0001: Connection restored to 10.0.10.115@o2ib7 (at 10.0.10.115@o2ib7) [165162.009564] Lustre: Skipped 2 previous similar messages [165280.536882] Lustre: fir-MDT0001: haven't heard from client a322cdb3-da3a-2edb-3b54-5c31a21230cc (at 10.9.104.20@o2ib4) in 227 seconds. I think it's dead, and I am evicting it. 
exp ffff9e5da4deb400, cur 1576157029 expire 1576156879 last 1576156802 [165511.010266] LustreError: 22521:0:(osp_precreate.c:940:osp_precreate_cleanup_orphans()) fir-OST0056-osc-MDT0001: cannot cleanup orphans: rc = -107 [165575.544622] Lustre: fir-MDT0001: haven't heard from client ee45735a-3c72-071c-fe40-2e82d3a751bd (at 10.8.7.12@o2ib6) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9e5da7605400, cur 1576157324 expire 1576157174 last 1576157097 [165762.596149] Lustre: 21797:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1576156910/real 1576156910] req@ffff9e7873fc0900 x1652547760165872/t0(0) o6->fir-OST005a-osc-MDT0001@10.0.10.115@o2ib7:28/4 lens 544/432 e 1 to 1 dl 1576157511 ref 1 fl Rpc:X/2/ffffffff rc -11/-1 [165762.612155] Lustre: fir-OST005a-osc-MDT0001: Connection to fir-OST005a (at 10.0.10.115@o2ib7) was lost; in progress operations using this service will wait for recovery to complete [165762.612156] Lustre: Skipped 6 previous similar messages [165762.612310] Lustre: fir-OST005a-osc-MDT0001: Connection restored to 10.0.10.115@o2ib7 (at 10.0.10.115@o2ib7) [165762.612312] Lustre: Skipped 6 previous similar messages [165762.661213] Lustre: 21797:0:(client.c:2133:ptlrpc_expire_one_request()) Skipped 8 previous similar messages [166113.039756] LustreError: 22521:0:(osp_precreate.c:940:osp_precreate_cleanup_orphans()) fir-OST0056-osc-MDT0001: cannot cleanup orphans: rc = -107 [166197.589277] Lustre: fir-MDT0001: haven't heard from client 2d6a9cf7-46ee-4 (at 10.8.7.5@o2ib6) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9e5beb3ee000, cur 1576157946 expire 1576157796 last 1576157719 [166362.569118] Lustre: fir-MDT0001: haven't heard from client 19c70918-a172-38a5-2512-02b987cb686f (at 10.9.116.8@o2ib4) in 227 seconds. I think it's dead, and I am evicting it. 
exp ffff9e5da4da8c00, cur 1576158111 expire 1576157961 last 1576157884 [166363.588608] Lustre: 21798:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1576157511/real 1576157511] req@ffff9e8cc8600d80 x1652547760361024/t0(0) o6->fir-OST005a-osc-MDT0001@10.0.10.115@o2ib7:28/4 lens 544/432 e 1 to 1 dl 1576158112 ref 1 fl Rpc:X/2/ffffffff rc -11/-1 [166363.616984] Lustre: 21798:0:(client.c:2133:ptlrpc_expire_one_request()) Skipped 7 previous similar messages [166363.626817] Lustre: fir-OST005a-osc-MDT0001: Connection to fir-OST005a (at 10.0.10.115@o2ib7) was lost; in progress operations using this service will wait for recovery to complete [166363.642976] Lustre: Skipped 5 previous similar messages [166363.648455] Lustre: fir-OST005a-osc-MDT0001: Connection restored to 10.0.10.115@o2ib7 (at 10.0.10.115@o2ib7) [166363.658369] Lustre: Skipped 6 previous similar messages [166870.073474] LustreError: 22521:0:(osp_precreate.c:940:osp_precreate_cleanup_orphans()) fir-OST0056-osc-MDT0001: cannot cleanup orphans: rc = -107 [166965.325077] Lustre: 21797:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1576158112/real 1576158112] req@ffff9e7873fc0900 x1652547760165872/t0(0) o6->fir-OST005a-osc-MDT0001@10.0.10.115@o2ib7:28/4 lens 544/432 e 1 to 1 dl 1576158713 ref 1 fl Rpc:X/2/ffffffff rc -11/-1 [166965.353449] Lustre: 21797:0:(client.c:2133:ptlrpc_expire_one_request()) Skipped 8 previous similar messages [166965.363282] Lustre: fir-OST005a-osc-MDT0001: Connection to fir-OST005a (at 10.0.10.115@o2ib7) was lost; in progress operations using this service will wait for recovery to complete [166965.379446] Lustre: Skipped 6 previous similar messages [166965.384927] Lustre: fir-OST005a-osc-MDT0001: Connection restored to 10.0.10.115@o2ib7 (at 10.0.10.115@o2ib7) [166965.394859] Lustre: Skipped 7 previous similar messages [166990.581646] Lustre: fir-MDT0001: haven't heard from client 75c6d6d0-df4c-7543-716f-77a06d0b577a (at 10.9.103.68@o2ib4) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9e5d99acbc00, cur 1576158739 expire 1576158589 last 1576158512 [167411.616907] Lustre: fir-MDT0001: haven't heard from client 030cce72-3f78-2631-9a21-d2dac6dcbefa (at 10.8.19.1@o2ib6) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9e5da6abb000, cur 1576159160 expire 1576159010 last 1576158933 [167565.853574] Lustre: 21797:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1576158713/real 1576158713] req@ffff9e7873fc0900 x1652547760165872/t0(0) o6->fir-OST005a-osc-MDT0001@10.0.10.115@o2ib7:28/4 lens 544/432 e 1 to 1 dl 1576159314 ref 1 fl Rpc:X/2/ffffffff rc -11/-1 [167565.881947] Lustre: 21797:0:(client.c:2133:ptlrpc_expire_one_request()) Skipped 5 previous similar messages [167565.885578] Lustre: fir-OST005a-osc-MDT0001: Connection to fir-OST005a (at 10.0.10.115@o2ib7) was lost; in progress operations using this service will wait for recovery to complete [167565.885580] Lustre: Skipped 5 previous similar messages [167565.885726] Lustre: fir-OST005a-osc-MDT0001: Connection restored to 10.0.10.115@o2ib7 (at 10.0.10.115@o2ib7) [167565.885728] Lustre: Skipped 5 previous similar messages [167604.596376] Lustre: fir-MDT0001: haven't heard from client 7e6b1bcc-06cc-6146-e31c-86eefaf425fd (at 10.9.101.53@o2ib4) in 227 seconds. I think it's dead, and I am evicting it. 
exp ffff9e5d99bb2c00, cur 1576159353 expire 1576159203 last 1576159126 [167604.618262] Lustre: Skipped 1 previous similar message [167627.107276] LustreError: 22521:0:(osp_precreate.c:940:osp_precreate_cleanup_orphans()) fir-OST0056-osc-MDT0001: cannot cleanup orphans: rc = -107 [168167.238119] Lustre: 21797:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1576159314/real 1576159314] req@ffff9e7873fc0900 x1652547760165872/t0(0) o6->fir-OST005a-osc-MDT0001@10.0.10.115@o2ib7:28/4 lens 544/432 e 1 to 1 dl 1576159915 ref 1 fl Rpc:X/2/ffffffff rc -11/-1 [168167.266492] Lustre: 21797:0:(client.c:2133:ptlrpc_expire_one_request()) Skipped 4 previous similar messages [168167.276326] Lustre: fir-OST005a-osc-MDT0001: Connection to fir-OST005a (at 10.0.10.115@o2ib7) was lost; in progress operations using this service will wait for recovery to complete [168167.292513] Lustre: Skipped 3 previous similar messages [168167.297972] Lustre: fir-OST005a-osc-MDT0001: Connection restored to 10.0.10.115@o2ib7 (at 10.0.10.115@o2ib7) [168167.307887] Lustre: Skipped 3 previous similar messages [168175.617706] Lustre: fir-MDT0001: haven't heard from client 7ac0db55-de36-c1c6-f1a9-d7191d6b9947 (at 10.9.103.29@o2ib4) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9e5d98f59c00, cur 1576159924 expire 1576159774 last 1576159697 [168384.141087] LustreError: 22521:0:(osp_precreate.c:940:osp_precreate_cleanup_orphans()) fir-OST0056-osc-MDT0001: cannot cleanup orphans: rc = -107 [168768.150611] Lustre: 21797:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1576159915/real 1576159915] req@ffff9e7873fc0900 x1652547760165872/t0(0) o6->fir-OST005a-osc-MDT0001@10.0.10.115@o2ib7:28/4 lens 544/432 e 1 to 1 dl 1576160516 ref 1 fl Rpc:X/2/ffffffff rc -11/-1 [168768.178986] Lustre: 21797:0:(client.c:2133:ptlrpc_expire_one_request()) Skipped 8 previous similar messages [168768.188818] Lustre: fir-OST005a-osc-MDT0001: Connection to fir-OST005a (at 10.0.10.115@o2ib7) was lost; in progress operations using this service will wait for recovery to complete [168768.204993] Lustre: Skipped 6 previous similar messages [168768.210454] Lustre: fir-OST005a-osc-MDT0001: Connection restored to 10.0.10.115@o2ib7 (at 10.0.10.115@o2ib7) [168768.220384] Lustre: Skipped 6 previous similar messages [169141.174864] LustreError: 22521:0:(osp_precreate.c:940:osp_precreate_cleanup_orphans()) fir-OST0056-osc-MDT0001: cannot cleanup orphans: rc = -107 [169368.935119] Lustre: 21797:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1576160516/real 1576160516] req@ffff9e7873fc0900 x1652547760165872/t0(0) o6->fir-OST005a-osc-MDT0001@10.0.10.115@o2ib7:28/4 lens 544/432 e 1 to 1 dl 1576161117 ref 1 fl Rpc:X/2/ffffffff rc -11/-1 [169368.963495] Lustre: 21797:0:(client.c:2133:ptlrpc_expire_one_request()) Skipped 6 previous similar messages [169368.973329] Lustre: fir-OST005a-osc-MDT0001: Connection to fir-OST005a (at 10.0.10.115@o2ib7) was lost; in progress operations using this service will wait for recovery to complete [169368.989518] Lustre: Skipped 5 previous similar messages [169368.994985] Lustre: fir-OST005a-osc-MDT0001: Connection restored to 10.0.10.115@o2ib7 (at 10.0.10.115@o2ib7) [169369.004905] Lustre: Skipped 5 previous similar messages [169898.208675] LustreError: 22521:0:(osp_precreate.c:940:osp_precreate_cleanup_orphans()) fir-OST0056-osc-MDT0001: cannot cleanup orphans: rc = -107 
[169970.103646] Lustre: 21797:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1576161117/real 1576161117] req@ffff9e7873fc0900 x1652547760165872/t0(0) o6->fir-OST005a-osc-MDT0001@10.0.10.115@o2ib7:28/4 lens 544/432 e 1 to 1 dl 1576161718 ref 1 fl Rpc:X/2/ffffffff rc -11/-1 [169970.132043] Lustre: 21797:0:(client.c:2133:ptlrpc_expire_one_request()) Skipped 7 previous similar messages [169970.141878] Lustre: fir-OST005a-osc-MDT0001: Connection to fir-OST005a (at 10.0.10.115@o2ib7) was lost; in progress operations using this service will wait for recovery to complete [169970.158093] Lustre: Skipped 6 previous similar messages [169970.163591] Lustre: fir-OST005a-osc-MDT0001: Connection restored to 10.0.10.115@o2ib7 (at 10.0.10.115@o2ib7) [169970.173506] Lustre: Skipped 6 previous similar messages [170571.000121] Lustre: 21797:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1576161718/real 1576161718] req@ffff9e7873fc0900 x1652547760165872/t0(0) o6->fir-OST005a-osc-MDT0001@10.0.10.115@o2ib7:28/4 lens 544/432 e 1 to 1 dl 1576162319 ref 1 fl Rpc:X/2/ffffffff rc -11/-1 [170571.028498] Lustre: 21797:0:(client.c:2133:ptlrpc_expire_one_request()) Skipped 3 previous similar messages [170571.038332] Lustre: fir-OST005a-osc-MDT0001: Connection to fir-OST005a (at 10.0.10.115@o2ib7) was lost; in progress operations using this service will wait for recovery to complete [170571.054503] Lustre: Skipped 3 previous similar messages [170571.060034] Lustre: fir-OST005a-osc-MDT0001: Connection restored to 10.0.10.115@o2ib7 (at 10.0.10.115@o2ib7) [170571.069952] Lustre: Skipped 3 previous similar messages [170655.242432] LustreError: 22521:0:(osp_precreate.c:940:osp_precreate_cleanup_orphans()) fir-OST0056-osc-MDT0001: cannot cleanup orphans: rc = -107 [171172.184573] Lustre: 21797:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1576162319/real 1576162319] req@ffff9e7873fc0900 x1652547760165872/t0(0) o6->fir-OST005a-osc-MDT0001@10.0.10.115@o2ib7:28/4 lens 544/432 e 1 to 1 dl 1576162920 ref 1 fl Rpc:X/2/ffffffff rc -11/-1 [171172.212952] Lustre: 21797:0:(client.c:2133:ptlrpc_expire_one_request()) Skipped 6 previous similar messages [171172.222782] Lustre: fir-OST005a-osc-MDT0001: Connection to fir-OST005a (at 10.0.10.115@o2ib7) was lost; in progress operations using this service will wait for recovery to complete [171172.238948] Lustre: Skipped 5 previous similar messages [171172.244411] Lustre: fir-OST005a-osc-MDT0001: Connection restored to 10.0.10.115@o2ib7 (at 10.0.10.115@o2ib7) [171172.254327] Lustre: Skipped 5 previous similar messages [171230.058179] LustreError: 22537:0:(osp_precreate.c:940:osp_precreate_cleanup_orphans()) fir-OST005e-osc-MDT0001: cannot cleanup orphans: rc = -11 [171327.079830] LustreError: 22529:0:(osp_precreate.c:940:osp_precreate_cleanup_orphans()) fir-OST005a-osc-MDT0001: cannot cleanup orphans: rc = -107 [171332.708989] LustreError: 22525:0:(osp_precreate.c:940:osp_precreate_cleanup_orphans()) fir-OST0058-osc-MDT0001: cannot cleanup orphans: rc = -11 [171341.045219] LustreError: 22517:0:(osp_precreate.c:940:osp_precreate_cleanup_orphans()) fir-OST0054-osc-MDT0001: cannot cleanup orphans: rc = -11 [171412.276162] LustreError: 22521:0:(osp_precreate.c:940:osp_precreate_cleanup_orphans()) fir-OST0056-osc-MDT0001: cannot cleanup orphans: rc = -107 [171512.713918] LustreError: 22533:0:(osp_precreate.c:940:osp_precreate_cleanup_orphans()) 
fir-OST005c-osc-MDT0001: cannot cleanup orphans: rc = -107 [171773.377057] Lustre: 21797:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1576162920/real 1576162920] req@ffff9e7873fc0900 x1652547760165872/t0(0) o6->fir-OST005a-osc-MDT0001@10.0.10.115@o2ib7:28/4 lens 544/432 e 1 to 1 dl 1576163521 ref 1 fl Rpc:X/2/ffffffff rc -11/-1 [171773.405433] Lustre: 21797:0:(client.c:2133:ptlrpc_expire_one_request()) Skipped 8 previous similar messages [171773.415270] Lustre: fir-OST005a-osc-MDT0001: Connection to fir-OST005a (at 10.0.10.115@o2ib7) was lost; in progress operations using this service will wait for recovery to complete [171773.431442] Lustre: Skipped 5 previous similar messages [171773.436927] Lustre: fir-OST005a-osc-MDT0001: Connection restored to 10.0.10.115@o2ib7 (at 10.0.10.115@o2ib7) [171773.446847] Lustre: Skipped 5 previous similar messages [171781.725948] Lustre: fir-MDT0001: haven't heard from client 3bd651a1-07e6-0cec-1800-45156860eb64 (at 10.9.110.39@o2ib4) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9e5da1594000, cur 1576163530 expire 1576163380 last 1576163303 [171987.091926] LustreError: 22537:0:(osp_precreate.c:940:osp_precreate_cleanup_orphans()) fir-OST005e-osc-MDT0001: cannot cleanup orphans: rc = -107 [172084.113588] LustreError: 22529:0:(osp_precreate.c:940:osp_precreate_cleanup_orphans()) fir-OST005a-osc-MDT0001: cannot cleanup orphans: rc = -107 [172183.964328] LNet: Service thread pid 42416 was inactive for 200.30s. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes: [172183.981350] Pid: 42416, comm: mdt03_019 3.10.0-957.27.2.el7_lustre.pl2.x86_64 #1 SMP Thu Nov 7 15:26:16 PST 2019 [172183.991612] Call Trace: [172183.994169] [] call_rwsem_down_write_failed+0x17/0x30 [172184.000995] [] lod_alloc_qos.constprop.18+0x205/0x1840 [lod] [172184.008458] [] lod_qos_prep_create+0x12d7/0x1890 [lod] [172184.015374] [] lod_declare_instantiate_components+0x9a/0x1d0 [lod] [172184.023338] [] lod_declare_layout_change+0xb65/0x10f0 [lod] [172184.030679] [] mdd_declare_layout_change+0x62/0x120 [mdd] [172184.037870] [] mdd_layout_change+0x882/0x1000 [mdd] [172184.044530] [] mdt_layout_change+0x337/0x430 [mdt] [172184.051123] [] mdt_intent_layout+0x7ee/0xcc0 [mdt] [172184.057695] [] mdt_intent_policy+0x435/0xd80 [mdt] [172184.064279] [] ldlm_lock_enqueue+0x356/0xa20 [ptlrpc] [172184.071137] [] ldlm_handle_enqueue0+0xa56/0x15f0 [ptlrpc] [172184.078347] [] tgt_enqueue+0x62/0x210 [ptlrpc] [172184.084619] [] tgt_request_handle+0xaea/0x1580 [ptlrpc] [172184.091667] [] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc] [172184.099470] [] ptlrpc_main+0xb2c/0x1460 [ptlrpc] [172184.105898] [] kthread+0xd1/0xe0 [172184.110901] [] ret_from_fork_nospec_begin+0xe/0x21 [172184.117475] [] 0xffffffffffffffff [172184.122587] LustreError: dumping log to /tmp/lustre-log.1576163932.42416 [172269.747692] LustreError: 22533:0:(osp_precreate.c:940:osp_precreate_cleanup_orphans()) fir-OST005c-osc-MDT0001: cannot cleanup orphans: rc = -107 [172269.760819] LustreError: 22533:0:(osp_precreate.c:940:osp_precreate_cleanup_orphans()) Skipped 3 previous similar messages [172270.494704] LNet: Service thread pid 22252 was inactive for 286.96s. The thread might be hung, or it might only be slow and will resume later. 
Dumping the stack trace for debugging purposes: [172270.511728] Pid: 22252, comm: mdt01_002 3.10.0-957.27.2.el7_lustre.pl2.x86_64 #1 SMP Thu Nov 7 15:26:16 PST 2019 [172270.521989] Call Trace: [172270.524548] [] osp_precreate_reserve+0x2e8/0x800 [osp] [172270.531470] [] osp_declare_create+0x199/0x5b0 [osp] [172270.538140] [] lod_sub_declare_create+0xdf/0x210 [lod] [172270.545059] [] lod_qos_declare_object_on+0xbe/0x3a0 [lod] [172270.552241] [] lod_alloc_qos.constprop.18+0x10f4/0x1840 [lod] [172270.559782] [] lod_qos_prep_create+0x12d7/0x1890 [lod] [172270.566713] [] lod_declare_instantiate_components+0x9a/0x1d0 [lod] [172270.574673] [] lod_declare_layout_change+0xb65/0x10f0 [lod] [172270.582029] [] mdd_declare_layout_change+0x62/0x120 [mdd] [172270.589208] [] mdd_layout_change+0x882/0x1000 [mdd] [172270.595868] [] mdt_layout_change+0x337/0x430 [mdt] [172270.602451] [] mdt_intent_layout+0x7ee/0xcc0 [mdt] [172270.609035] [] mdt_intent_policy+0x435/0xd80 [mdt] [172270.615608] [] ldlm_lock_enqueue+0x356/0xa20 [ptlrpc] [172270.622478] [] ldlm_handle_enqueue0+0xa56/0x15f0 [ptlrpc] [172270.629691] [] tgt_enqueue+0x62/0x210 [ptlrpc] [172270.635962] [] tgt_request_handle+0xaea/0x1580 [ptlrpc] [172270.642996] [] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc] [172270.650817] [] ptlrpc_main+0xb2c/0x1460 [ptlrpc] [172270.657235] [] kthread+0xd1/0xe0 [172270.662247] [] ret_from_fork_nospec_begin+0xe/0x21 [172270.668812] [] 0xffffffffffffffff [172270.673934] LustreError: dumping log to /tmp/lustre-log.1576164018.22252 [172281.759013] LNet: Service thread pid 22614 was inactive for 286.29s. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes: [172281.776044] Pid: 22614, comm: mdt01_018 3.10.0-957.27.2.el7_lustre.pl2.x86_64 #1 SMP Thu Nov 7 15:26:16 PST 2019 [172281.786319] Call Trace: [172281.788879] [] call_rwsem_down_write_failed+0x17/0x30 [172281.795718] [] lod_qos_statfs_update+0x97/0x2b0 [lod] [172281.802566] [] lod_qos_prep_create+0x16a/0x1890 [lod] [172281.809405] [] lod_prepare_create+0x215/0x2e0 [lod] [172281.816095] [] lod_declare_striped_create+0x1ee/0x980 [lod] [172281.823443] [] lod_declare_create+0x204/0x590 [lod] [172281.830116] [] mdd_declare_create_object_internal+0xe2/0x2f0 [mdd] [172281.838092] [] mdd_declare_create+0x4c/0xcb0 [mdd] [172281.844668] [] mdd_create+0x847/0x14e0 [mdd] [172281.850736] [] mdt_reint_open+0x224f/0x3240 [mdt] [172281.857269] [] mdt_reint_rec+0x83/0x210 [mdt] [172281.863424] [] mdt_reint_internal+0x6e3/0xaf0 [mdt] [172281.870103] [] mdt_intent_open+0x82/0x3a0 [mdt] [172281.876434] [] mdt_intent_policy+0x435/0xd80 [mdt] [172281.883025] [] ldlm_lock_enqueue+0x356/0xa20 [ptlrpc] [172281.889904] [] ldlm_handle_enqueue0+0xa56/0x15f0 [ptlrpc] [172281.897144] [] tgt_enqueue+0x62/0x210 [ptlrpc] [172281.903423] [] tgt_request_handle+0xaea/0x1580 [ptlrpc] [172281.910467] [] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc] [172281.918295] [] ptlrpc_main+0xb2c/0x1460 [ptlrpc] [172281.924721] [] kthread+0xd1/0xe0 [172281.929726] [] ret_from_fork_nospec_begin+0xe/0x21 [172281.936305] [] 0xffffffffffffffff [172281.941421] LustreError: dumping log to /tmp/lustre-log.1576164030.22614 [172283.346526] LNet: Service thread pid 22252 completed after 299.82s. This indicates the system was overloaded (too many service threads, or there were not enough hardware resources). [172326.816254] LNet: Service thread pid 22624 was inactive for 276.43s. 
The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes: [172326.833277] Pid: 22624, comm: mdt00_016 3.10.0-957.27.2.el7_lustre.pl2.x86_64 #1 SMP Thu Nov 7 15:26:16 PST 2019 [172326.843537] Call Trace: [172326.846096] [] call_rwsem_down_write_failed+0x17/0x30 [172326.852919] [] lod_qos_statfs_update+0x97/0x2b0 [lod] [172326.859751] [] lod_qos_prep_create+0x16a/0x1890 [lod] [172326.866589] [] lod_prepare_create+0x215/0x2e0 [lod] [172326.873237] [] lod_declare_striped_create+0x1ee/0x980 [lod] [172326.880592] [] lod_declare_create+0x204/0x590 [lod] [172326.887240] [] mdd_declare_create_object_internal+0xe2/0x2f0 [mdd] [172326.895213] [] mdd_declare_create+0x4c/0xcb0 [mdd] [172326.901776] [] mdd_create+0x847/0x14e0 [mdd] [172326.907845] [] mdt_reint_open+0x224f/0x3240 [mdt] [172326.914342] [] mdt_reint_rec+0x83/0x210 [mdt] [172326.920492] [] mdt_reint_internal+0x6e3/0xaf0 [mdt] [172326.927152] [] mdt_intent_open+0x82/0x3a0 [mdt] [172326.933467] [] mdt_intent_policy+0x435/0xd80 [mdt] [172326.940039] [] ldlm_lock_enqueue+0x356/0xa20 [ptlrpc] [172326.946911] [] ldlm_handle_enqueue0+0xa56/0x15f0 [ptlrpc] [172326.954106] [] tgt_enqueue+0x62/0x210 [ptlrpc] [172326.960370] [] tgt_request_handle+0xaea/0x1580 [ptlrpc] [172326.967402] [] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc] [172326.975230] [] ptlrpc_main+0xb2c/0x1460 [ptlrpc] [172326.981650] [] kthread+0xd1/0xe0 [172326.986664] [] ret_from_fork_nospec_begin+0xe/0x21 [172326.993227] [] 0xffffffffffffffff [172326.998350] LustreError: dumping log to /tmp/lustre-log.1576164075.22624 [172372.897516] LNet: Service thread pid 22597 was inactive for 287.00s. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes: [172372.914544] Pid: 22597, comm: mdt01_013 3.10.0-957.27.2.el7_lustre.pl2.x86_64 #1 SMP Thu Nov 7 15:26:16 PST 2019 [172372.924803] Call Trace: [172372.927359] [] call_rwsem_down_write_failed+0x17/0x30 [172372.934185] [] lod_alloc_qos.constprop.18+0x205/0x1840 [lod] [172372.941621] [] lod_qos_prep_create+0x12d7/0x1890 [lod] [172372.948542] [] lod_prepare_create+0x215/0x2e0 [lod] [172372.955204] [] lod_declare_striped_create+0x1ee/0x980 [lod] [172372.962543] [] lod_declare_create+0x204/0x590 [lod] [172372.969190] [] mdd_declare_create_object_internal+0xe2/0x2f0 [mdd] [172372.977163] [] mdd_declare_create+0x4c/0xcb0 [mdd] [172372.983726] [] mdd_create+0x847/0x14e0 [mdd] [172372.989779] [] mdt_reint_open+0x224f/0x3240 [mdt] [172372.996276] [] mdt_reint_rec+0x83/0x210 [mdt] [172373.002426] [] mdt_reint_internal+0x6e3/0xaf0 [mdt] [172373.009077] [] mdt_intent_open+0x82/0x3a0 [mdt] [172373.015390] [] mdt_intent_policy+0x435/0xd80 [mdt] [172373.021962] [] ldlm_lock_enqueue+0x356/0xa20 [ptlrpc] [172373.028836] [] ldlm_handle_enqueue0+0xa56/0x15f0 [ptlrpc] [172373.036031] [] tgt_enqueue+0x62/0x210 [ptlrpc] [172373.042293] [] tgt_request_handle+0xaea/0x1580 [ptlrpc] [172373.049316] [] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc] [172373.057129] [] ptlrpc_main+0xb2c/0x1460 [ptlrpc] [172373.063553] [] kthread+0xd1/0xe0 [172373.068556] [] ret_from_fork_nospec_begin+0xe/0x21 [172373.075132] [] 0xffffffffffffffff [172373.080255] LustreError: dumping log to /tmp/lustre-log.1576164121.22597 [172374.001548] Lustre: 21798:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1576163521/real 1576163521] req@ffff9e8cc8600d80 x1652547760361024/t0(0) 
o6->fir-OST005a-osc-MDT0001@10.0.10.115@o2ib7:28/4 lens 544/432 e 1 to 1 dl 1576164122 ref 1 fl Rpc:X/2/ffffffff rc -11/-1 [172374.029917] Lustre: 21798:0:(client.c:2133:ptlrpc_expire_one_request()) Skipped 12 previous similar messages [172374.039833] Lustre: fir-OST005a-osc-MDT0001: Connection to fir-OST005a (at 10.0.10.115@o2ib7) was lost; in progress operations using this service will wait for recovery to complete [172374.056000] Lustre: Skipped 6 previous similar messages [172374.061492] Lustre: fir-OST005a-osc-MDT0001: Connection restored to 10.0.10.115@o2ib7 (at 10.0.10.115@o2ib7) [172374.071408] Lustre: Skipped 6 previous similar messages [172383.349338] LNet: Service thread pid 42416 completed after 399.68s. This indicates the system was overloaded (too many service threads, or there were not enough hardware resources). [172383.365585] LNet: Skipped 2 previous similar messages [172385.185850] LNet: Service thread pid 22303 was inactive for 286.78s. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one. [172385.198800] LustreError: dumping log to /tmp/lustre-log.1576164133.22303 [172483.352217] LNet: Service thread pid 22597 completed after 397.45s. This indicates the system was overloaded (too many service threads, or there were not enough hardware resources). [172583.355043] LNet: Service thread pid 22303 completed after 484.94s. This indicates the system was overloaded (too many service threads, or there were not enough hardware resources). [172709.759323] Lustre: fir-MDT0001: haven't heard from client c1504d4c-7504-c251-de3c-6f26c7b8e7d5 (at 10.9.102.26@o2ib4) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9e5da6af9800, cur 1576164458 expire 1576164308 last 1576164231 [172744.125826] LustreError: 22537:0:(osp_precreate.c:940:osp_precreate_cleanup_orphans()) fir-OST005e-osc-MDT0001: cannot cleanup orphans: rc = -107 [172975.314200] Lustre: 21797:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1576164122/real 1576164122] req@ffff9e7873fc0900 x1652547760165872/t0(0) o6->fir-OST005a-osc-MDT0001@10.0.10.115@o2ib7:28/4 lens 544/432 e 1 to 1 dl 1576164723 ref 1 fl Rpc:X/2/ffffffff rc -11/-1 [172975.330208] Lustre: fir-OST005a-osc-MDT0001: Connection to fir-OST005a (at 10.0.10.115@o2ib7) was lost; in progress operations using this service will wait for recovery to complete [172975.330210] Lustre: Skipped 6 previous similar messages [172975.330384] Lustre: fir-OST005a-osc-MDT0001: Connection restored to 10.0.10.115@o2ib7 (at 10.0.10.115@o2ib7) [172975.330386] Lustre: Skipped 6 previous similar messages [172975.379260] Lustre: 21797:0:(client.c:2133:ptlrpc_expire_one_request()) Skipped 13 previous similar messages [173421.765661] Lustre: fir-MDT0001: haven't heard from client 3c020cd0-089d-acb1-e879-86429192cebf (at 10.8.27.2@o2ib6) in 227 seconds. I think it's dead, and I am evicting it. 
exp ffff9e5da6a80c00, cur 1576165170 expire 1576165020 last 1576164943 [173501.159630] LustreError: 22537:0:(osp_precreate.c:940:osp_precreate_cleanup_orphans()) fir-OST005e-osc-MDT0001: cannot cleanup orphans: rc = -107 [173501.172752] LustreError: 22537:0:(osp_precreate.c:940:osp_precreate_cleanup_orphans()) Skipped 5 previous similar messages [173575.962674] Lustre: 21797:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1576164723/real 1576164723] req@ffff9e7873fc0900 x1652547760165872/t0(0) o6->fir-OST005a-osc-MDT0001@10.0.10.115@o2ib7:28/4 lens 544/432 e 1 to 1 dl 1576165324 ref 1 fl Rpc:X/2/ffffffff rc -11/-1 [173575.991049] Lustre: 21797:0:(client.c:2133:ptlrpc_expire_one_request()) Skipped 5 previous similar messages [173576.000881] Lustre: fir-OST005a-osc-MDT0001: Connection to fir-OST005a (at 10.0.10.115@o2ib7) was lost; in progress operations using this service will wait for recovery to complete [173576.017049] Lustre: Skipped 3 previous similar messages [173576.022514] Lustre: fir-OST005a-osc-MDT0001: Connection restored to 10.0.10.115@o2ib7 (at 10.0.10.115@o2ib7) [173576.032446] Lustre: Skipped 3 previous similar messages [173665.022123] LNetError: 21600:0:(o2iblnd_cb.c:3350:kiblnd_check_txs_locked()) Timed out tx: tx_queue, 0 seconds [173665.032204] LNetError: 21600:0:(o2iblnd_cb.c:3350:kiblnd_check_txs_locked()) Skipped 13 previous similar messages [173665.042545] LNetError: 21600:0:(o2iblnd_cb.c:3425:kiblnd_check_conns()) Timed out RDMA with 10.0.10.115@o2ib7 (5): c: 0, oc: 0, rc: 8 [173665.054615] LNetError: 21600:0:(o2iblnd_cb.c:3425:kiblnd_check_conns()) Skipped 13 previous similar messages [173665.064803] LNetError: 21600:0:(lib-msg.c:485:lnet_handle_local_failure()) ni 10.0.10.52@o2ib7 added to recovery queue. Health = 900 [173666.022156] LNet: 21600:0:(o2iblnd_cb.c:3396:kiblnd_check_conns()) Timed out tx for 10.0.10.115@o2ib7: 1 seconds [173668.022201] LNet: 21600:0:(o2iblnd_cb.c:3396:kiblnd_check_conns()) Timed out tx for 10.0.10.115@o2ib7: 3 seconds [173668.032453] LNet: 21600:0:(o2iblnd_cb.c:3396:kiblnd_check_conns()) Skipped 3 previous similar messages [173669.022229] LNet: 21600:0:(o2iblnd_cb.c:3396:kiblnd_check_conns()) Timed out tx for 10.0.10.115@o2ib7: 4 seconds [173669.032486] LNet: 21600:0:(o2iblnd_cb.c:3396:kiblnd_check_conns()) Skipped 2 previous similar messages [173672.022307] LNet: 21600:0:(o2iblnd_cb.c:3396:kiblnd_check_conns()) Timed out tx for 10.0.10.115@o2ib7: 0 seconds [173677.022444] LNet: 21600:0:(o2iblnd_cb.c:3396:kiblnd_check_conns()) Timed out tx for 10.0.10.115@o2ib7: 0 seconds [173677.032699] LNet: 21600:0:(o2iblnd_cb.c:3396:kiblnd_check_conns()) Skipped 4 previous similar messages [173685.022673] LNet: 21600:0:(o2iblnd_cb.c:3396:kiblnd_check_conns()) Timed out tx for 10.0.10.115@o2ib7: 0 seconds [173685.032925] LNet: 21600:0:(o2iblnd_cb.c:3396:kiblnd_check_conns()) Skipped 4 previous similar messages [173690.455586] Lustre: fir-MDT0001: Client 8d6bc192-952c-a08a-3683-972363815437 (at 10.9.107.5@o2ib4) reconnecting [173696.011014] Lustre: fir-MDT0001: Client 90361a1c-a1b2-5c6d-9d49-7688ae471845 (at 10.9.102.69@o2ib4) reconnecting [173697.210298] Lustre: fir-MDT0001: Client 55c89a19-c2de-4 (at 10.8.0.82@o2ib6) reconnecting [173702.260598] Lustre: fir-MDT0001: Client 4f123d4f-2df1-fc67-b5f5-a6cfd73bd706 (at 10.9.104.54@o2ib4) reconnecting [173703.674499] LustreError: 137-5: fir-MDT0003_UUID: not available for connect from 10.9.108.14@o2ib4 (no target). 
If you are running an HA pair check that the target is mounted on the other server. [173708.968042] Lustre: fir-MDT0001: Client 55c89a19-c2de-4 (at 10.8.0.82@o2ib6) reconnecting [173709.206339] LNetError: 90618:0:(lib-msg.c:485:lnet_handle_local_failure()) ni 10.0.10.52@o2ib7 added to recovery queue. Health = 900 [173717.957142] Lustre: fir-MDT0001: Client c319d260-6432-b651-7aeb-47c9eb331ac0 (at 10.9.104.66@o2ib4) reconnecting [173717.967406] Lustre: Skipped 4 previous similar messages [173721.023670] LNet: 21600:0:(o2iblnd_cb.c:3396:kiblnd_check_conns()) Timed out tx for 10.0.10.115@o2ib7: 0 seconds [173721.033931] LNet: 21600:0:(o2iblnd_cb.c:3396:kiblnd_check_conns()) Skipped 2 previous similar messages [173721.043342] LNetError: 21600:0:(lib-msg.c:485:lnet_handle_local_failure()) ni 10.0.10.52@o2ib7 added to recovery queue. Health = 900 [173735.954662] Lustre: fir-MDT0001: Client 24e9e2b8-c701-7886-1991-a2238348e3e1 (at 10.9.108.11@o2ib4) reconnecting [173735.964922] Lustre: Skipped 7 previous similar messages [173747.287390] LNetError: 90618:0:(lib-msg.c:485:lnet_handle_local_failure()) ni 10.0.10.52@o2ib7 added to recovery queue. Health = 900 [173753.207542] LNetError: 90618:0:(lib-msg.c:485:lnet_handle_local_failure()) ni 10.0.10.52@o2ib7 added to recovery queue. Health = 900 [173755.024603] LNet: 21600:0:(o2iblnd_cb.c:3396:kiblnd_check_conns()) Timed out tx for 10.0.10.115@o2ib7: 1 seconds [173755.034863] LNet: 21600:0:(o2iblnd_cb.c:3396:kiblnd_check_conns()) Skipped 7 previous similar messages [173765.207880] LNetError: 90618:0:(lib-msg.c:485:lnet_handle_local_failure()) ni 10.0.10.52@o2ib7 added to recovery queue. Health = 900 [173765.219901] LNetError: 90618:0:(lib-msg.c:485:lnet_handle_local_failure()) Skipped 1 previous similar message [173768.682161] Lustre: fir-MDT0001: Client 51d039f0-180b-c2f2-39da-443d9476c206 (at 10.8.7.16@o2ib6) reconnecting [173768.692262] Lustre: Skipped 20 previous similar messages [173785.559137] LustreError: 137-5: fir-MDT0000_UUID: not available for connect from 10.9.117.44@o2ib4 (no target). If you are running an HA pair check that the target is mounted on the other server. [173787.413325] LustreError: 137-5: fir-MDT0000_UUID: not available for connect from 10.9.109.10@o2ib4 (no target). If you are running an HA pair check that the target is mounted on the other server. [173796.210811] LustreError: 137-5: fir-MDT0000_UUID: not available for connect from 10.9.102.5@o2ib4 (no target). If you are running an HA pair check that the target is mounted on the other server. [173801.531718] LustreError: 137-5: fir-MDT0003_UUID: not available for connect from 10.9.114.8@o2ib4 (no target). If you are running an HA pair check that the target is mounted on the other server. [173803.025919] LNetError: 21600:0:(lib-msg.c:485:lnet_handle_local_failure()) ni 10.0.10.52@o2ib7 added to recovery queue. Health = 900 [173803.037909] LNetError: 21600:0:(lib-msg.c:485:lnet_handle_local_failure()) Skipped 1 previous similar message [173810.377255] LustreError: 137-5: fir-MDT0000_UUID: not available for connect from 10.9.109.56@o2ib4 (no target). If you are running an HA pair check that the target is mounted on the other server. 
[173810.394715] LustreError: Skipped 5 previous similar messages [173821.026404] LNet: 21600:0:(o2iblnd_cb.c:3396:kiblnd_check_conns()) Timed out tx for 10.0.10.115@o2ib7: 0 seconds [173821.036658] LNet: 21600:0:(o2iblnd_cb.c:3396:kiblnd_check_conns()) Skipped 8 previous similar messages [173826.534895] LustreError: 137-5: fir-MDT0000_UUID: not available for connect from 10.9.101.64@o2ib4 (no target). If you are running an HA pair check that the target is mounted on the other server. [173826.552353] LustreError: Skipped 57 previous similar messages [173840.217941] LNetError: 90618:0:(lib-msg.c:485:lnet_handle_local_failure()) ni 10.0.10.52@o2ib7 added to recovery queue. Health = 900 [173840.229974] LNetError: 90618:0:(lib-msg.c:485:lnet_handle_local_failure()) Skipped 4 previous similar messages [173841.773317] Lustre: fir-MDT0001: haven't heard from client fir-MDT0001-lwp-OST005a_UUID (at 10.0.10.115@o2ib7) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9e8da0937c00, cur 1576165590 expire 1576165440 last 1576165363 [173847.355841] Lustre: fir-MDT0001: Client dec44f56-c755-7619-7feb-f6d1f087af92 (at 10.9.110.27@o2ib4) reconnecting [173847.366115] Lustre: Skipped 61 previous similar messages [173859.165488] LustreError: 137-5: fir-MDT0002_UUID: not available for connect from 10.9.107.42@o2ib4 (no target). If you are running an HA pair check that the target is mounted on the other server. [173859.182942] LustreError: Skipped 79 previous similar messages [173909.211861] LNetError: 90618:0:(lib-msg.c:485:lnet_handle_local_failure()) ni 10.0.10.52@o2ib7 added to recovery queue. Health = 900 [173909.223876] LNetError: 90618:0:(lib-msg.c:485:lnet_handle_local_failure()) Skipped 10 previous similar messages [173953.030067] LNet: 21600:0:(o2iblnd_cb.c:3396:kiblnd_check_conns()) Timed out tx for 10.0.10.115@o2ib7: 0 seconds [173953.040326] LNet: 21600:0:(o2iblnd_cb.c:3396:kiblnd_check_conns()) Skipped 27 previous similar messages [173999.292142] Lustre: fir-MDT0001: Client d833ee08-9e03-4 (at 10.9.107.9@o2ib4) reconnecting [173999.300503] Lustre: Skipped 61 previous similar messages [174100.034111] LNetError: 21600:0:(o2iblnd_cb.c:3350:kiblnd_check_txs_locked()) Timed out tx: tx_queue, 1 seconds [174100.044212] LNetError: 21600:0:(o2iblnd_cb.c:3425:kiblnd_check_conns()) Timed out RDMA with 10.0.10.115@o2ib7 (0): c: 0, oc: 0, rc: 8 [174100.056534] LNetError: 21600:0:(lib-msg.c:485:lnet_handle_local_failure()) ni 10.0.10.52@o2ib7 added to recovery queue. Health = 900 [174100.068567] LNetError: 21600:0:(lib-msg.c:485:lnet_handle_local_failure()) Skipped 15 previous similar messages [174170.426898] LustreError: 137-5: fir-MDT0002_UUID: not available for connect from 10.9.107.9@o2ib4 (no target). If you are running an HA pair check that the target is mounted on the other server. [174170.444275] LustreError: Skipped 61 previous similar messages [174176.843254] Lustre: 21797:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1576165324/real 1576165324] req@ffff9e7873fc0900 x1652547760165872/t0(0) o6->fir-OST005a-osc-MDT0001@10.0.10.115@o2ib7:28/4 lens 544/432 e 1 to 1 dl 1576165925 ref 1 fl Rpc:X/2/ffffffff rc -11/-1 [174176.871632] Lustre: 21797:0:(client.c:2133:ptlrpc_expire_one_request()) Skipped 27 previous similar messages [174178.797955] Lustre: fir-MDT0001: haven't heard from client 2f29ff9b-1f0b-7030-94fa-3b368aa715dc (at 10.9.103.24@o2ib4) in 227 seconds. I think it's dead, and I am evicting it. 
exp ffff9e5d97270c00, cur 1576165927 expire 1576165777 last 1576165700 [174178.819842] Lustre: Skipped 5 previous similar messages [174258.204536] LustreError: 22537:0:(osp_precreate.c:940:osp_precreate_cleanup_orphans()) fir-OST005e-osc-MDT0001: cannot cleanup orphans: rc = -11 [174258.217569] LustreError: 22537:0:(osp_precreate.c:940:osp_precreate_cleanup_orphans()) Skipped 7 previous similar messages [174273.449482] Lustre: fir-MDT0001: Client d833ee08-9e03-4 (at 10.9.107.9@o2ib4) reconnecting [174273.457835] Lustre: Skipped 2 previous similar messages [174273.463180] Lustre: fir-MDT0001: Connection restored to (at 10.9.107.9@o2ib4) [174273.470497] Lustre: Skipped 167 previous similar messages [174396.481773] LustreError: 137-5: fir-MDT0000_UUID: not available for connect from 10.9.107.9@o2ib4 (no target). If you are running an HA pair check that the target is mounted on the other server. [174396.499141] LustreError: Skipped 1 previous similar message [174528.778569] Lustre: fir-MDT0001: haven't heard from client ebe3cd7a-b33e-e40e-0146-5e12c2a33567 (at 10.9.107.41@o2ib4) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9e5d96aac800, cur 1576166277 expire 1576166127 last 1576166050 [174547.421547] LNet: Service thread pid 22312 was inactive for 200.31s. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes: [174547.438574] Pid: 22312, comm: mdt02_005 3.10.0-957.27.2.el7_lustre.pl2.x86_64 #1 SMP Thu Nov 7 15:26:16 PST 2019 [174547.448829] Call Trace: [174547.451388] [] call_rwsem_down_write_failed+0x17/0x30 [174547.458215] [] lod_alloc_qos.constprop.18+0x205/0x1840 [lod] [174547.465683] [] lod_qos_prep_create+0x12d7/0x1890 [lod] [174547.472590] [] lod_prepare_create+0x215/0x2e0 [lod] [174547.479251] [] lod_declare_striped_create+0x1ee/0x980 [lod] [174547.486596] [] lod_declare_create+0x204/0x590 [lod] [174547.493258] [] mdd_declare_create_object_internal+0xe2/0x2f0 [mdd] [174547.501218] [] mdd_declare_create+0x4c/0xcb0 [mdd] [174547.507794] [] mdd_create+0x847/0x14e0 [mdd] [174547.513837] [] mdt_reint_open+0x224f/0x3240 [mdt] [174547.520333] [] mdt_reint_rec+0x83/0x210 [mdt] [174547.526473] [] mdt_reint_internal+0x6e3/0xaf0 [mdt] [174547.533148] [] mdt_intent_open+0x82/0x3a0 [mdt] [174547.539457] [] mdt_intent_policy+0x435/0xd80 [mdt] [174547.546040] [] ldlm_lock_enqueue+0x356/0xa20 [ptlrpc] [174547.552890] [] ldlm_handle_enqueue0+0xa56/0x15f0 [ptlrpc] [174547.560098] [] tgt_enqueue+0x62/0x210 [ptlrpc] [174547.566349] [] tgt_request_handle+0xaea/0x1580 [ptlrpc] [174547.573392] [] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc] [174547.581197] [] ptlrpc_main+0xb2c/0x1460 [ptlrpc] [174547.587624] [] kthread+0xd1/0xe0 [174547.592621] [] ret_from_fork_nospec_begin+0xe/0x21 [174547.599208] [] 0xffffffffffffffff [174547.604313] LustreError: dumping log to /tmp/lustre-log.1576166295.22312 [174551.517681] LNet: Service thread pid 22967 was inactive for 204.34s. The thread might be hung, or it might only be slow and will resume later. 
Dumping the stack trace for debugging purposes: [174551.534706] Pid: 22967, comm: mdt03_014 3.10.0-957.27.2.el7_lustre.pl2.x86_64 #1 SMP Thu Nov 7 15:26:16 PST 2019 [174551.544966] Call Trace: [174551.547526] [] call_rwsem_down_write_failed+0x17/0x30 [174551.554349] [] lod_alloc_qos.constprop.18+0x205/0x1840 [lod] [174551.561803] [] lod_qos_prep_create+0x12d7/0x1890 [lod] [174551.568711] [] lod_declare_instantiate_components+0x9a/0x1d0 [lod] [174551.576672] [] lod_declare_layout_change+0xb65/0x10f0 [lod] [174551.584032] [] mdd_declare_layout_change+0x62/0x120 [mdd] [174551.591226] [] mdd_layout_change+0x882/0x1000 [mdd] [174551.597875] [] mdt_layout_change+0x337/0x430 [mdt] [174551.604457] [] mdt_intent_layout+0x7ee/0xcc0 [mdt] [174551.611033] [] mdt_intent_policy+0x435/0xd80 [mdt] [174551.617617] [] ldlm_lock_enqueue+0x356/0xa20 [ptlrpc] [174551.624473] [] ldlm_handle_enqueue0+0xa56/0x15f0 [ptlrpc] [174551.631681] [] tgt_enqueue+0x62/0x210 [ptlrpc] [174551.637934] [] tgt_request_handle+0xaea/0x1580 [ptlrpc] [174551.644968] [] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc] [174551.652785] [] ptlrpc_main+0xb2c/0x1460 [ptlrpc] [174551.659220] [] kthread+0xd1/0xe0 [174551.664222] [] ret_from_fork_nospec_begin+0xe/0x21 [174551.670796] [] 0xffffffffffffffff [174551.675916] LustreError: dumping log to /tmp/lustre-log.1576166299.22967 [174556.125757] LNet: Service thread pid 22304 was inactive for 200.69s. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes: [174556.142777] Pid: 22304, comm: mdt02_003 3.10.0-957.27.2.el7_lustre.pl2.x86_64 #1 SMP Thu Nov 7 15:26:16 PST 2019 [174556.153038] Call Trace: [174556.155589] [] call_rwsem_down_write_failed+0x17/0x30 [174556.162416] [] lod_qos_statfs_update+0x97/0x2b0 [lod] [174556.169275] [] lod_qos_prep_create+0x16a/0x1890 [lod] [174556.176109] [] lod_prepare_create+0x215/0x2e0 [lod] [174556.182771] [] lod_declare_striped_create+0x1ee/0x980 [lod] [174556.190112] [] lod_declare_create+0x204/0x590 [lod] [174556.196775] [] mdd_declare_create_object_internal+0xe2/0x2f0 [mdd] [174556.204733] [] mdd_declare_create+0x4c/0xcb0 [mdd] [174556.211310] [] mdd_create+0x847/0x14e0 [mdd] [174556.217354] [] mdt_reint_open+0x224f/0x3240 [mdt] [174556.223852] [] mdt_reint_rec+0x83/0x210 [mdt] [174556.229991] [] mdt_reint_internal+0x6e3/0xaf0 [mdt] [174556.236673] [] mdt_intent_open+0x82/0x3a0 [mdt] [174556.242975] [] mdt_intent_policy+0x435/0xd80 [mdt] [174556.249548] [] ldlm_lock_enqueue+0x356/0xa20 [ptlrpc] [174556.256409] [] ldlm_handle_enqueue0+0xa56/0x15f0 [ptlrpc] [174556.263614] [] tgt_enqueue+0x62/0x210 [ptlrpc] [174556.269866] [] tgt_request_handle+0xaea/0x1580 [ptlrpc] [174556.276903] [] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc] [174556.284706] [] ptlrpc_main+0xb2c/0x1460 [ptlrpc] [174556.291132] [] kthread+0xd1/0xe0 [174556.296137] [] ret_from_fork_nospec_begin+0xe/0x21 [174556.302698] [] 0xffffffffffffffff [174556.307828] LustreError: dumping log to /tmp/lustre-log.1576166304.22304 [174559.197852] Pid: 22255, comm: mdt02_002 3.10.0-957.27.2.el7_lustre.pl2.x86_64 #1 SMP Thu Nov 7 15:26:16 PST 2019 [174559.208113] Call Trace: [174559.210659] [] call_rwsem_down_write_failed+0x17/0x30 [174559.217481] [] lod_qos_statfs_update+0x97/0x2b0 [lod] [174559.224325] [] lod_qos_prep_create+0x16a/0x1890 [lod] [174559.231159] [] lod_declare_instantiate_components+0x9a/0x1d0 [lod] [174559.239120] [] lod_declare_layout_change+0xb65/0x10f0 [lod] [174559.246465] [] mdd_declare_layout_change+0x62/0x120 
[mdd] [174559.253669] [] mdd_layout_change+0x882/0x1000 [mdd] [174559.260323] [] mdt_layout_change+0x337/0x430 [mdt] [174559.266907] [] mdt_intent_layout+0x7ee/0xcc0 [mdt] [174559.273472] [] mdt_intent_policy+0x435/0xd80 [mdt] [174559.280046] [] ldlm_lock_enqueue+0x356/0xa20 [ptlrpc] [174559.286896] [] ldlm_handle_enqueue0+0xa56/0x15f0 [ptlrpc] [174559.294102] [] tgt_enqueue+0x62/0x210 [ptlrpc] [174559.300355] [] tgt_request_handle+0xaea/0x1580 [ptlrpc] [174559.307389] [] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc] [174559.315195] [] ptlrpc_main+0xb2c/0x1460 [ptlrpc] [174559.321633] [] kthread+0xd1/0xe0 [174559.326634] [] ret_from_fork_nospec_begin+0xe/0x21 [174559.333208] [] 0xffffffffffffffff [174559.338309] LustreError: dumping log to /tmp/lustre-log.1576166307.22255 [174599.134958] LNet: Service thread pid 42416 was inactive for 204.05s. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes: [174599.151979] LNet: Skipped 1 previous similar message [174599.157041] Pid: 42416, comm: mdt03_019 3.10.0-957.27.2.el7_lustre.pl2.x86_64 #1 SMP Thu Nov 7 15:26:16 PST 2019 [174599.167322] Call Trace: [174599.169876] [] ldlm_completion_ast+0x430/0x860 [ptlrpc] [174599.176920] [] ldlm_cli_enqueue_local+0x231/0x830 [ptlrpc] [174599.184217] [] mdt_object_local_lock+0x50b/0xb20 [mdt] [174599.191134] [] mdt_object_lock_internal+0x70/0x360 [mdt] [174599.198240] [] mdt_getattr_name_lock+0x90a/0x1c30 [mdt] [174599.205244] [] mdt_intent_getattr+0x2b5/0x480 [mdt] [174599.211913] [] mdt_intent_policy+0x435/0xd80 [mdt] [174599.218488] [] ldlm_lock_enqueue+0x356/0xa20 [ptlrpc] [174599.225342] [] ldlm_handle_enqueue0+0xa56/0x15f0 [ptlrpc] [174599.232538] [] tgt_enqueue+0x62/0x210 [ptlrpc] [174599.238823] [] tgt_request_handle+0xaea/0x1580 [ptlrpc] [174599.245856] [] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc] [174599.253673] [] ptlrpc_main+0xb2c/0x1460 [ptlrpc] [174599.260087] [] kthread+0xd1/0xe0 [174599.265105] [] ret_from_fork_nospec_begin+0xe/0x21 [174599.271667] [] 0xffffffffffffffff [174599.276788] LustreError: dumping log to /tmp/lustre-log.1576166347.42416 [174604.784848] Lustre: fir-MDT0001: haven't heard from client 29e66763-b95c-3d3e-5532-53facc0d6b7a (at 10.9.109.32@o2ib4) in 221 seconds. I think it's dead, and I am evicting it. exp ffff9e5db5fb3800, cur 1576166353 expire 1576166203 last 1576166132 [174604.806724] Lustre: Skipped 19 previous similar messages [174621.222561] LustreError: 166-1: MGC10.0.10.51@o2ib7: Connection to MGS (at 10.0.10.51@o2ib7) was lost; in progress operations using this service will fail [174621.236526] LustreError: 22244:0:(ldlm_request.c:147:ldlm_expired_completion_wait()) ### lock timed out (enqueued at 1576166069, 300s ago), entering recovery for MGS@MGC10.0.10.51@o2ib7_0 ns: MGC10.0.10.51@o2ib7 lock: ffff9e5d9ddd6e40/0x9161590831e4e1da lrc: 4/1,0 mode: --/CR res: [0x726966:0x2:0x0].0x0 rrc: 2 type: PLN flags: 0x1000000000000 nid: local remote: 0xc3c20c06c1d5a0d0 expref: -99 pid: 22244 timeout: 0 lvb_type: 0 [174621.274461] LustreError: 91657:0:(ldlm_resource.c:1147:ldlm_resource_complain()) MGC10.0.10.51@o2ib7: namespace resource [0x726966:0x2:0x0].0x0 (ffff9e6da2a9e3c0) refcount nonzero (1) after lock cleanup; forcing cleanup. [174646.240272] LNet: Service thread pid 43561 was inactive for 204.38s. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one. 
[174646.253219] LustreError: dumping log to /tmp/lustre-log.1576166394.43561 [174647.040744] LNet: Service thread pid 22312 completed after 299.93s. This indicates the system was overloaded (too many service threads, or there were not enough hardware resources). [174695.078602] LustreError: 42416:0:(ldlm_request.c:129:ldlm_expired_completion_wait()) ### lock timed out (enqueued at 1576166143, 300s ago); not entering recovery in server code, just going back to sleep ns: mdt-fir-MDT0001_UUID lock: ffff9e5d80eb60c0/0x9161590831e51bb6 lrc: 3/1,0 mode: --/PR res: [0x240010083:0x68e5:0x0].0x0 bits 0x13/0x0 rrc: 7 type: IBT flags: 0x40210400000020 nid: local remote: 0x0 expref: -99 pid: 42416 timeout: 0 lvb_type: 0 [174695.118201] LustreError: dumping log to /tmp/lustre-log.1576166443.42416 [174733.548500] LustreError: 137-5: fir-MDT0003_UUID: not available for connect from 10.9.107.9@o2ib4 (no target). If you are running an HA pair check that the target is mounted on the other server. [174733.565868] LustreError: Skipped 5 previous similar messages [174757.859337] LNet: Service thread pid 22317 was inactive for 410.63s. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one. [174757.872285] LustreError: dumping log to /tmp/lustre-log.1576166506.22317 [174761.955450] LNet: Service thread pid 22602 was inactive for 401.20s. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one. [174761.968399] LustreError: dumping log to /tmp/lustre-log.1576166510.22602 [174765.027557] LNet: Service thread pid 22252 was inactive for 400.98s. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one. [174765.040504] LustreError: dumping log to /tmp/lustre-log.1576166513.22252 [174772.195734] LNet: Service thread pid 22249 was inactive for 410.60s. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one. [174772.208684] LustreError: dumping log to /tmp/lustre-log.1576166520.22249 [174783.313980] Lustre: fir-MDT0001: haven't heard from client 646257db-4a10-1d7d-1435-2f2425d1bdb2 (at 10.8.18.26@o2ib6) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9e5d99bf7400, cur 1576166531 expire 1576166381 last 1576166304 [174783.335792] Lustre: Skipped 1 previous similar message [174847.046261] LNet: Service thread pid 22967 completed after 499.86s. This indicates the system was overloaded (too many service threads, or there were not enough hardware resources). [174847.062527] LNet: Skipped 2 previous similar messages [174885.862906] LNet: Service thread pid 43359 was inactive for 364.56s. The thread might be hung, or it might only be slow and will resume later. 
Dumping the stack trace for debugging purposes: [174885.879928] Pid: 43359, comm: mdt03_020 3.10.0-957.27.2.el7_lustre.pl2.x86_64 #1 SMP Thu Nov 7 15:26:16 PST 2019 [174885.890188] Call Trace: [174885.892746] [] call_rwsem_down_write_failed+0x17/0x30 [174885.899571] [] lod_qos_statfs_update+0x97/0x2b0 [lod] [174885.906416] [] lod_qos_prep_create+0x16a/0x1890 [lod] [174885.913249] [] lod_prepare_create+0x215/0x2e0 [lod] [174885.919910] [] lod_declare_striped_create+0x1ee/0x980 [lod] [174885.927255] [] lod_declare_create+0x204/0x590 [lod] [174885.933916] [] mdd_declare_create_object_internal+0xe2/0x2f0 [mdd] [174885.941874] [] mdd_declare_create+0x4c/0xcb0 [mdd] [174885.948451] [] mdd_create+0x847/0x14e0 [mdd] [174885.954494] [] mdt_reint_open+0x224f/0x3240 [mdt] [174885.961001] [] mdt_reint_rec+0x83/0x210 [mdt] [174885.967139] [] mdt_reint_internal+0x6e3/0xaf0 [mdt] [174885.973811] [] mdt_intent_open+0x82/0x3a0 [mdt] [174885.980120] [] mdt_intent_policy+0x435/0xd80 [mdt] [174885.986696] [] ldlm_lock_enqueue+0x356/0xa20 [ptlrpc] [174885.993558] [] ldlm_handle_enqueue0+0xa56/0x15f0 [ptlrpc] [174886.000788] [] tgt_enqueue+0x62/0x210 [ptlrpc] [174886.007041] [] tgt_request_handle+0xaea/0x1580 [ptlrpc] [174886.014086] [] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc] [174886.021887] [] ptlrpc_main+0xb2c/0x1460 [ptlrpc] [174886.028315] [] kthread+0xd1/0xe0 [174886.033321] [] ret_from_fork_nospec_begin+0xe/0x21 [174886.039892] [] 0xffffffffffffffff [174886.045005] LustreError: dumping log to /tmp/lustre-log.1576166634.43359 [174893.935097] LustreError: 22305:0:(ldlm_request.c:129:ldlm_expired_completion_wait()) ### lock timed out (enqueued at 1576166342, 300s ago); not entering recovery in server code, just going back to sleep ns: mdt-fir-MDT0001_UUID lock: ffff9e79f5277bc0/0x9161590831e597b0 lrc: 3/1,0 mode: --/PR res: [0x240039389:0x54:0x0].0x0 bits 0x13/0x0 rrc: 13 type: IBT flags: 0x40210000000000 nid: local remote: 0x0 expref: -99 pid: 22305 timeout: 0 lvb_type: 0 [174904.226393] LustreError: 22307:0:(ldlm_request.c:129:ldlm_expired_completion_wait()) ### lock timed out (enqueued at 1576166352, 300s ago); not entering recovery in server code, just going back to sleep ns: mdt-fir-MDT0001_UUID lock: ffff9e5b87f20000/0x9161590831e59d21 lrc: 3/1,0 mode: --/PR res: [0x240039a83:0x441:0x0].0x0 bits 0x13/0x0 rrc: 6 type: IBT flags: 0x40210000000000 nid: local remote: 0x0 expref: -99 pid: 22307 timeout: 0 lvb_type: 0 [174929.184060] LustreError: 166-1: MGC10.0.10.51@o2ib7: Connection to MGS (at 10.0.10.51@o2ib7) was lost; in progress operations using this service will fail [174929.197975] LustreError: 22244:0:(ldlm_request.c:147:ldlm_expired_completion_wait()) ### lock timed out (enqueued at 1576166377, 300s ago), entering recovery for MGS@MGC10.0.10.51@o2ib7_0 ns: MGC10.0.10.51@o2ib7 lock: ffff9e5d9ddd2400/0x9161590831e5aac6 lrc: 4/1,0 mode: --/CR res: [0x726966:0x2:0x0].0x0 rrc: 2 type: PLN flags: 0x1000000000000 nid: local remote: 0xc3c20c06c1d78199 expref: -99 pid: 22244 timeout: 0 lvb_type: 0 [174929.235825] LustreError: 91781:0:(ldlm_resource.c:1147:ldlm_resource_complain()) MGC10.0.10.51@o2ib7: namespace resource [0x726966:0x2:0x0].0x0 (ffff9e8d67682540) refcount nonzero (1) after lock cleanup; forcing cleanup. [174929.255461] Lustre: MGC10.0.10.51@o2ib7: Connection restored to MGC10.0.10.51@o2ib7_0 (at 10.0.10.51@o2ib7) [174929.265291] Lustre: Skipped 10 previous similar messages [174941.160394] LNet: Service thread pid 22626 was inactive for 414.84s. 
The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes: [174941.177417] Pid: 22626, comm: mdt03_010 3.10.0-957.27.2.el7_lustre.pl2.x86_64 #1 SMP Thu Nov 7 15:26:16 PST 2019 [174941.187677] Call Trace: [174941.190236] [] call_rwsem_down_write_failed+0x17/0x30 [174941.197061] [] lod_qos_statfs_update+0x97/0x2b0 [lod] [174941.203904] [] lod_qos_prep_create+0x16a/0x1890 [lod] [174941.210730] [] lod_prepare_create+0x215/0x2e0 [lod] [174941.217392] [] lod_declare_striped_create+0x1ee/0x980 [lod] [174941.224734] [] lod_declare_create+0x204/0x590 [lod] [174941.231398] [] mdd_declare_create_object_internal+0xe2/0x2f0 [mdd] [174941.239353] [] mdd_declare_create+0x4c/0xcb0 [mdd] [174941.245932] [] mdd_create+0x847/0x14e0 [mdd] [174941.251975] [] mdt_reint_open+0x224f/0x3240 [mdt] [174941.258473] [] mdt_reint_rec+0x83/0x210 [mdt] [174941.264601] [] mdt_reint_internal+0x6e3/0xaf0 [mdt] [174941.271263] [] mdt_intent_open+0x82/0x3a0 [mdt] [174941.277585] [] mdt_intent_policy+0x435/0xd80 [mdt] [174941.284160] [] ldlm_lock_enqueue+0x356/0xa20 [ptlrpc] [174941.291022] [] ldlm_handle_enqueue0+0xa56/0x15f0 [ptlrpc] [174941.298237] [] tgt_enqueue+0x62/0x210 [ptlrpc] [174941.304485] [] tgt_request_handle+0xaea/0x1580 [ptlrpc] [174941.311522] [] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc] [174941.319324] [] ptlrpc_main+0xb2c/0x1460 [ptlrpc] [174941.325753] [] kthread+0xd1/0xe0 [174941.330755] [] ret_from_fork_nospec_begin+0xe/0x21 [174941.337317] [] 0xffffffffffffffff [174941.342446] LustreError: dumping log to /tmp/lustre-log.1576166689.22626 [174947.048806] LNet: Service thread pid 22255 completed after 588.58s. This indicates the system was overloaded (too many service threads, or there were not enough hardware resources). [174947.065141] LNet: Skipped 7 previous similar messages [174997.686297] Lustre: fir-MDT0001: Client d833ee08-9e03-4 (at 10.9.107.9@o2ib4) reconnecting [174997.694653] Lustre: Skipped 3 previous similar messages [175238.565557] LustreError: 166-1: MGC10.0.10.51@o2ib7: Connection to MGS (at 10.0.10.51@o2ib7) was lost; in progress operations using this service will fail [175238.579468] LustreError: 22244:0:(ldlm_request.c:147:ldlm_expired_completion_wait()) ### lock timed out (enqueued at 1576166686, 300s ago), entering recovery for MGS@MGC10.0.10.51@o2ib7_0 ns: MGC10.0.10.51@o2ib7 lock: ffff9e5d9ddd5a00/0x9161590831e6513e lrc: 4/1,0 mode: --/CR res: [0x726966:0x2:0x0].0x0 rrc: 2 type: PLN flags: 0x1000000000000 nid: local remote: 0xc3c20c06c1d9aaa0 expref: -99 pid: 22244 timeout: 0 lvb_type: 0 [175238.617320] LustreError: 91862:0:(ldlm_resource.c:1147:ldlm_resource_complain()) MGC10.0.10.51@o2ib7: namespace resource [0x726966:0x2:0x0].0x0 (ffff9e7c1db4c780) refcount nonzero (1) after lock cleanup; forcing cleanup. [175253.769760] LustreError: 137-5: fir-MDT0002_UUID: not available for connect from 10.9.107.9@o2ib4 (no target). If you are running an HA pair check that the target is mounted on the other server. [175253.787127] LustreError: Skipped 5 previous similar messages [175256.836771] Lustre: fir-MDT0001: haven't heard from client db44fcc6-df61-0a83-7c51-af3e9a77d479 (at 10.8.7.13@o2ib6) in 227 seconds. I think it's dead, and I am evicting it. 
exp ffff9e5da7606800, cur 1576167005 expire 1576166855 last 1576166778 [175256.858475] Lustre: Skipped 14 previous similar messages [175547.937035] LustreError: 166-1: MGC10.0.10.51@o2ib7: Connection to MGS (at 10.0.10.51@o2ib7) was lost; in progress operations using this service will fail [175547.950955] LustreError: 22244:0:(ldlm_request.c:147:ldlm_expired_completion_wait()) ### lock timed out (enqueued at 1576166996, 300s ago), entering recovery for MGS@MGC10.0.10.51@o2ib7_0 ns: MGC10.0.10.51@o2ib7 lock: ffff9e6bb3b3bcc0/0x9161590831e70451 lrc: 4/1,0 mode: --/CR res: [0x726966:0x2:0x0].0x0 rrc: 2 type: PLN flags: 0x1000000000000 nid: local remote: 0xc3c20c06c1dceb76 expref: -99 pid: 22244 timeout: 0 lvb_type: 0 [175547.988823] LustreError: 92058:0:(ldlm_resource.c:1147:ldlm_resource_complain()) MGC10.0.10.51@o2ib7: namespace resource [0x726966:0x2:0x0].0x0 (ffff9e6db7e57e00) refcount nonzero (1) after lock cleanup; forcing cleanup. [175548.008479] Lustre: MGC10.0.10.51@o2ib7: Connection restored to MGC10.0.10.51@o2ib7_0 (at 10.0.10.51@o2ib7) [175548.018310] Lustre: Skipped 12 previous similar messages [175562.812492] Lustre: fir-MDT0001: haven't heard from client d59b4a25-94cd-9118-509c-0144bd0df5bb (at 10.9.109.19@o2ib4) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9e5da02fa800, cur 1576167311 expire 1576167161 last 1576167084 [175639.822766] Lustre: fir-MDT0001: haven't heard from client ec95172c-af62-15aa-37b1-9f40e3145075 (at 10.9.107.7@o2ib4) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9e5d987c2400, cur 1576167388 expire 1576167238 last 1576167161 [175639.844564] Lustre: Skipped 20 previous similar messages [175794.821992] Lustre: fir-MDT0001: haven't heard from client 6fe05dcf-b9e2-99d7-33ce-acbd0a395824 (at 10.9.117.43@o2ib4) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9e5db5fb1400, cur 1576167543 expire 1576167393 last 1576167316 [175794.843875] Lustre: Skipped 8 previous similar messages [175858.108499] LustreError: 166-1: MGC10.0.10.51@o2ib7: Connection to MGS (at 10.0.10.51@o2ib7) was lost; in progress operations using this service will fail [175858.122412] LustreError: 22244:0:(ldlm_request.c:147:ldlm_expired_completion_wait()) ### lock timed out (enqueued at 1576167306, 300s ago), entering recovery for MGS@MGC10.0.10.51@o2ib7_0 ns: MGC10.0.10.51@o2ib7 lock: ffff9e78d37c0000/0x9161590831ed700f lrc: 4/1,0 mode: --/CR res: [0x726966:0x2:0x0].0x0 rrc: 2 type: PLN flags: 0x1000000000000 nid: local remote: 0xc3c20c06c1e97aa6 expref: -99 pid: 22244 timeout: 0 lvb_type: 0 [175858.160357] LustreError: 92223:0:(ldlm_resource.c:1147:ldlm_resource_complain()) MGC10.0.10.51@o2ib7: namespace resource [0x726966:0x2:0x0].0x0 (ffff9e8db71a2cc0) refcount nonzero (1) after lock cleanup; forcing cleanup. 
[175896.322558] Lustre: 21782:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1576167637/real 1576167637] req@ffff9e6d2efcc380 x1652547770844976/t0(0) o41->fir-MDT0003-osp-MDT0001@10.0.10.54@o2ib7:24/4 lens 224/368 e 0 to 1 dl 1576167644 ref 1 fl Rpc:X/0/ffffffff rc 0/-1
[175896.350761] Lustre: 21782:0:(client.c:2133:ptlrpc_expire_one_request()) Skipped 24 previous similar messages
[175896.360683] Lustre: fir-MDT0003-osp-MDT0001: Connection to fir-MDT0003 (at 10.0.10.54@o2ib7) was lost; in progress operations using this service will wait for recovery to complete
[175896.376772] Lustre: Skipped 8 previous similar messages
[175942.066997] LustreError: 137-5: fir-MDT0003_UUID: not available for connect from 10.0.10.53@o2ib7 (no target). If you are running an HA pair check that the target is mounted on the other server.
[175942.084386] LustreError: Skipped 3 previous similar messages
[176003.086472] LNetError: 21600:0:(o2iblnd_cb.c:3350:kiblnd_check_txs_locked()) Timed out tx: active_txs, 1 seconds
[176003.096730] LNetError: 21600:0:(o2iblnd_cb.c:3425:kiblnd_check_conns()) Timed out RDMA with 10.0.10.54@o2ib7 (106): c: 5, oc: 0, rc: 8
[176003.109130] LNetError: 21610:0:(peer.c:3451:lnet_peer_ni_add_to_recoveryq_locked()) lpni 10.0.10.54@o2ib7 added to recovery queue. Health = 900
[176003.122098] LNetError: 21610:0:(peer.c:3451:lnet_peer_ni_add_to_recoveryq_locked()) Skipped 7 previous similar messages
[176003.274626] LNetError: 91444:0:(lib-msg.c:485:lnet_handle_local_failure()) ni 10.0.10.52@o2ib7 added to recovery queue. Health = 900
[176037.397417] LustreError: 22603:0:(ldlm_request.c:129:ldlm_expired_completion_wait()) ### lock timed out (enqueued at 1576167485, 300s ago); not entering recovery in server code, just going back to sleep ns: mdt-fir-MDT0001_UUID lock: ffff9e5becb098c0/0x9161590831f1e2ca lrc: 3/1,0 mode: --/PR res: [0x240038caa:0x288e:0x0].0x0 bits 0x13/0x0 rrc: 14 type: IBT flags: 0x40210400000020 nid: local remote: 0x0 expref: -99 pid: 22603 timeout: 0 lvb_type: 0
[176044.275751] LNetError: 91444:0:(lib-msg.c:485:lnet_handle_local_failure()) ni 10.0.10.52@o2ib7 added to recovery queue. Health = 900
[176044.287745] LNetError: 91444:0:(lib-msg.c:485:lnet_handle_local_failure()) Skipped 4 previous similar messages
[176066.088199] LNet: 21600:0:(o2iblnd_cb.c:3396:kiblnd_check_conns()) Timed out tx for 10.0.10.54@o2ib7: 1 seconds
[176066.098372] LNet: 21600:0:(o2iblnd_cb.c:3396:kiblnd_check_conns()) Skipped 23 previous similar messages
[176077.694516] LustreError: 22301:0:(ldlm_request.c:129:ldlm_expired_completion_wait()) ### lock timed out (enqueued at 1576167525, 300s ago); not entering recovery in server code, just going back to sleep ns: mdt-fir-MDT0001_UUID lock: ffff9e5b9d258b40/0x9161590831f4ad87 lrc: 3/1,0 mode: --/PR res: [0x240010083:0x68e5:0x0].0x0 bits 0x13/0x0 rrc: 16 type: IBT flags: 0x40210000000000 nid: local remote: 0x0 expref: -99 pid: 22301 timeout: 0 lvb_type: 0
[176162.991427] Lustre: fir-MDT0001: Connection restored to (at 10.8.25.17@o2ib6)
[176162.998739] Lustre: Skipped 5 previous similar messages
[176176.091190] LNetError: 21600:0:(o2iblnd_cb.c:3350:kiblnd_check_txs_locked()) Timed out tx: tx_queue, 1 seconds
[176176.101273] LNetError: 21600:0:(o2iblnd_cb.c:3425:kiblnd_check_conns()) Timed out RDMA with 10.0.10.54@o2ib7 (0): c: 0, oc: 0, rc: 8
[176176.113507] LNetError: 21600:0:(lib-msg.c:485:lnet_handle_local_failure()) ni 10.0.10.52@o2ib7 added to recovery queue. Health = 900
[176176.125515] LNetError: 21600:0:(lib-msg.c:485:lnet_handle_local_failure()) Skipped 1 previous similar message
[176285.094196] LNetError: 21600:0:(o2iblnd_cb.c:3350:kiblnd_check_txs_locked()) Timed out tx: tx_queue, 0 seconds
[176285.104282] LNetError: 21600:0:(o2iblnd_cb.c:3425:kiblnd_check_conns()) Timed out RDMA with 10.0.10.54@o2ib7 (20): c: 0, oc: 0, rc: 8
[176378.849084] Lustre: fir-MDT0001: haven't heard from client a1acf167-afde-6f5a-879d-1a7c0814f282 (at 10.9.117.21@o2ib4) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9e5db5fb0000, cur 1576168127 expire 1576167977 last 1576167900
[176378.870967] Lustre: Skipped 9 previous similar messages
[176684.814223] LustreError: 166-1: MGC10.0.10.51@o2ib7: Connection to MGS (at 10.0.10.51@o2ib7) was lost; in progress operations using this service will fail
[176684.828134] LustreError: 22244:0:(ldlm_request.c:147:ldlm_expired_completion_wait()) ### lock timed out (enqueued at 1576168132, 300s ago), entering recovery for MGS@MGC10.0.10.51@o2ib7_0 ns: MGC10.0.10.51@o2ib7 lock: ffff9e5d8d766540/0x91615908320ced06 lrc: 4/1,0 mode: --/CR res: [0x726966:0x2:0x0].0x0 rrc: 2 type: PLN flags: 0x1000000000000 nid: local remote: 0xc3c20c06c275bada expref: -99 pid: 22244 timeout: 0 lvb_type: 0
[176684.865982] LustreError: 92823:0:(ldlm_resource.c:1147:ldlm_resource_complain()) MGC10.0.10.51@o2ib7: namespace resource [0x726966:0x2:0x0].0x0 (ffff9e5cae245680) refcount nonzero (1) after lock cleanup; forcing cleanup.
[176765.094979] Lustre: fir-MDT0001: Connection restored to (at 10.9.107.41@o2ib4)
[176765.102382] Lustre: Skipped 14 previous similar messages
[176982.851719] Lustre: fir-MDT0001: haven't heard from client dec5062c-f101-0dc5-128b-72e40bd60a5a (at 10.9.112.12@o2ib4) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9e5d9eaba400, cur 1576168731 expire 1576168581 last 1576168504
[176982.873602] Lustre: Skipped 3 previous similar messages
[176993.315665] LustreError: 166-1: MGC10.0.10.51@o2ib7: Connection to MGS (at 10.0.10.51@o2ib7) was lost; in progress operations using this service will fail
[176993.329582] LustreError: 22244:0:(ldlm_request.c:147:ldlm_expired_completion_wait()) ### lock timed out (enqueued at 1576168441, 300s ago), entering recovery for MGS@MGC10.0.10.51@o2ib7_0 ns: MGC10.0.10.51@o2ib7 lock: ffff9e5bdd2a33c0/0x916159083213c99d lrc: 4/1,0 mode: --/CR res: [0x726966:0x2:0x0].0x0 rrc: 2 type: PLN flags: 0x1000000000000 nid: local remote: 0xc3c20c06c28adc91 expref: -99 pid: 22244 timeout: 0 lvb_type: 0
[176993.367439] LustreError: 92979:0:(ldlm_resource.c:1147:ldlm_resource_complain()) MGC10.0.10.51@o2ib7: namespace resource [0x726966:0x2:0x0].0x0 (ffff9e5c17e4cc00) refcount nonzero (1) after lock cleanup; forcing cleanup.
[177300.627076] LustreError: 166-1: MGC10.0.10.51@o2ib7: Connection to MGS (at 10.0.10.51@o2ib7) was lost; in progress operations using this service will fail
[177300.640997] LustreError: 22244:0:(ldlm_request.c:147:ldlm_expired_completion_wait()) ### lock timed out (enqueued at 1576168748, 300s ago), entering recovery for MGS@MGC10.0.10.51@o2ib7_0 ns: MGC10.0.10.51@o2ib7 lock: ffff9e5bdd2a0d80/0x91615908321d8ecd lrc: 4/1,0 mode: --/CR res: [0x726966:0x2:0x0].0x0 rrc: 2 type: PLN flags: 0x1000000000000 nid: local remote: 0xc3c20c06c2ae3315 expref: -99 pid: 22244 timeout: 0 lvb_type: 0
[177300.678854] LustreError: 93078:0:(ldlm_resource.c:1147:ldlm_resource_complain()) MGC10.0.10.51@o2ib7: namespace resource [0x726966:0x2:0x0].0x0 (ffff9e5cd23db5c0) refcount nonzero (1) after lock cleanup; forcing cleanup.
[177401.279590] Lustre: fir-MDT0001: Connection restored to (at 10.9.102.27@o2ib4)
[177401.286988] Lustre: Skipped 23 previous similar messages
[177559.600126] LNet: Service thread pid 92716 was inactive for 398.58s. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes:
[177559.617151] Pid: 92716, comm: mdt00_028 3.10.0-957.27.2.el7_lustre.pl2.x86_64 #1 SMP Thu Nov 7 15:26:16 PST 2019
[177559.627413] Call Trace:
[177559.629970] [] osp_precreate_reserve+0x2e8/0x800 [osp]
[177559.636889] [] osp_declare_create+0x199/0x5b0 [osp]
[177559.643553] [] lod_sub_declare_create+0xdf/0x210 [lod]
[177559.650473] [] lod_qos_declare_object_on+0xbe/0x3a0 [lod]
[177559.657653] [] lod_alloc_qos.constprop.18+0x10f4/0x1840 [lod]
[177559.665170] [] lod_qos_prep_create+0x12d7/0x1890 [lod]
[177559.672101] [] lod_prepare_create+0x215/0x2e0 [lod]
[177559.678759] [] lod_declare_striped_create+0x1ee/0x980 [lod]
[177559.686117] [] lod_declare_create+0x204/0x590 [lod]
[177559.692770] [] mdd_declare_create_object_internal+0xe2/0x2f0 [mdd]
[177559.700747] [] mdd_declare_create+0x4c/0xcb0 [mdd]
[177559.707308] [] mdd_create+0x847/0x14e0 [mdd]
[177559.713364] [] mdt_reint_open+0x224f/0x3240 [mdt]
[177559.719850] [] mdt_reint_rec+0x83/0x210 [mdt]
[177559.726001] [] mdt_reint_internal+0x6e3/0xaf0 [mdt]
[177559.732677] [] mdt_intent_open+0x82/0x3a0 [mdt]
[177559.739000] [] mdt_intent_policy+0x435/0xd80 [mdt]
[177559.745573] [] ldlm_lock_enqueue+0x356/0xa20 [ptlrpc]
[177559.752442] [] ldlm_handle_enqueue0+0xa56/0x15f0 [ptlrpc]
[177559.759641] [] tgt_enqueue+0x62/0x210 [ptlrpc]
[177559.765913] [] tgt_request_handle+0xaea/0x1580 [ptlrpc]
[177559.772944] [] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc]
[177559.780759] [] ptlrpc_main+0xb2c/0x1460 [ptlrpc]
[177559.787176] [] kthread+0xd1/0xe0
[177559.792200] [] ret_from_fork_nospec_begin+0xe/0x21
[177559.798762] [] 0xffffffffffffffff
[177559.803905] LustreError: dumping log to /tmp/lustre-log.1576169307.92716
[177569.539397] LustreError: 22617:0:(ldlm_request.c:129:ldlm_expired_completion_wait()) ### lock timed out (enqueued at 1576169017, 300s ago); not entering recovery in server code, just going back to sleep ns: mdt-fir-MDT0001_UUID lock: ffff9e5bbee6e780/0x91615908322ec65e lrc: 3/1,0 mode: --/PR res: [0x240039a83:0x441:0x0].0x0 bits 0x13/0x0 rrc: 4 type: IBT flags: 0x40210000000000 nid: local remote: 0x0 expref: -99 pid: 22617 timeout: 0 lvb_type: 0
[177593.393098] LNet: Service thread pid 22966 was inactive for 398.83s. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes:
[177593.410123] Pid: 22966, comm: mdt03_013 3.10.0-957.27.2.el7_lustre.pl2.x86_64 #1 SMP Thu Nov 7 15:26:16 PST 2019
[177593.420384] Call Trace:
[177593.422942] [] call_rwsem_down_write_failed+0x17/0x30
[177593.429767] [] lod_qos_statfs_update+0x97/0x2b0 [lod]
[177593.436614] [] lod_qos_prep_create+0x16a/0x1890 [lod]
[177593.443444] [] lod_prepare_create+0x215/0x2e0 [lod]
[177593.450105] [] lod_declare_striped_create+0x1ee/0x980 [lod]
[177593.457467] [] lod_declare_create+0x204/0x590 [lod]
[177593.464131] [] mdd_declare_create_object_internal+0xe2/0x2f0 [mdd]
[177593.472090] [] mdd_declare_create+0x4c/0xcb0 [mdd]
[177593.478665] [] mdd_create+0x847/0x14e0 [mdd]
[177593.484706] [] mdt_reint_open+0x224f/0x3240 [mdt]
[177593.491229] [] mdt_reint_rec+0x83/0x210 [mdt]
[177593.497367] [] mdt_reint_internal+0x6e3/0xaf0 [mdt]
[177593.504043] [] mdt_intent_open+0x82/0x3a0 [mdt]
[177593.510355] [] mdt_intent_policy+0x435/0xd80 [mdt]
[177593.516945] [] ldlm_lock_enqueue+0x356/0xa20 [ptlrpc]
[177593.523818] [] ldlm_handle_enqueue0+0xa56/0x15f0 [ptlrpc]
[177593.531031] [] tgt_enqueue+0x62/0x210 [ptlrpc]
[177593.537290] [] tgt_request_handle+0xaea/0x1580 [ptlrpc]
[177593.544334] [] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc]
[177593.552137] [] ptlrpc_main+0xb2c/0x1460 [ptlrpc]
[177593.558564] [] kthread+0xd1/0xe0
[177593.563567] [] ret_from_fork_nospec_begin+0xe/0x21
[177593.570142] [] 0xffffffffffffffff
[177593.575253] LustreError: dumping log to /tmp/lustre-log.1576169341.22966
[177610.808519] LustreError: 166-1: MGC10.0.10.51@o2ib7: Connection to MGS (at 10.0.10.51@o2ib7) was lost; in progress operations using this service will fail
[177610.822437] LustreError: 22244:0:(ldlm_request.c:147:ldlm_expired_completion_wait()) ### lock timed out (enqueued at 1576169058, 300s ago), entering recovery for MGS@MGC10.0.10.51@o2ib7_0 ns: MGC10.0.10.51@o2ib7 lock: ffff9e5bdd2a06c0/0x91615908323156f2 lrc: 4/1,0 mode: --/CR res: [0x726966:0x2:0x0].0x0 rrc: 2 type: PLN flags: 0x1000000000000 nid: local remote: 0xc3c20c06c2c37c4f expref: -99 pid: 22244 timeout: 0 lvb_type: 0
[177610.860398] LustreError: 93166:0:(ldlm_resource.c:1147:ldlm_resource_complain()) MGC10.0.10.51@o2ib7: namespace resource [0x726966:0x2:0x0].0x0 (ffff9e6db73acc00) refcount nonzero (1) after lock cleanup; forcing cleanup.
[177623.990234] LNet: Service thread pid 92716 completed after 462.97s. This indicates the system was overloaded (too many service threads, or there were not enough hardware resources).
[177624.006488] LNet: Skipped 1 previous similar message
[177923.824054] LustreError: 22252:0:(ldlm_request.c:129:ldlm_expired_completion_wait()) ### lock timed out (enqueued at 1576169371, 300s ago); not entering recovery in server code, just going back to sleep ns: mdt-fir-MDT0001_UUID lock: ffff9e5c29241680/0x91615908323b3440 lrc: 3/1,0 mode: --/PR res: [0x240010083:0x68e5:0x0].0x0 bits 0x13/0x0 rrc: 26 type: IBT flags: 0x40210400000020 nid: local remote: 0x0 expref: -99 pid: 22252 timeout: 0 lvb_type: 0
[177923.863700] LustreError: 22252:0:(ldlm_request.c:129:ldlm_expired_completion_wait()) Skipped 4 previous similar messages
[178018.618560] LustreError: 22311:0:(ldlm_request.c:129:ldlm_expired_completion_wait()) ### lock timed out (enqueued at 1576169466, 300s ago); not entering recovery in server code, just going back to sleep ns: mdt-fir-MDT0001_UUID lock: ffff9e5c51665e80/0x91615908323d12d9 lrc: 3/1,0 mode: --/PR res: [0x240038caa:0x288e:0x0].0x0 bits 0x13/0x0 rrc: 14 type: IBT flags: 0x40210000000000 nid: local remote: 0x0 expref: -99 pid: 22311 timeout: 0 lvb_type: 0
[178018.658212] LustreError: 22311:0:(ldlm_request.c:129:ldlm_expired_completion_wait()) Skipped 4 previous similar messages
[178043.652705] Lustre: fir-MDT0001: Connection restored to (at 10.9.112.12@o2ib4)
[178043.660109] Lustre: Skipped 35 previous similar messages
[178132.887170] Lustre: fir-MDT0001: haven't heard from client 295209bb-0224-d868-bd7c-cd75c3b19a1c (at 10.8.18.20@o2ib6) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9e5da4dae000, cur 1576169881 expire 1576169731 last 1576169654
[178132.908966] Lustre: Skipped 2 previous similar messages
[178205.414647] LustreError: 22595:0:(ldlm_request.c:129:ldlm_expired_completion_wait()) ### lock timed out (enqueued at 1576169653, 300s ago); not entering recovery in server code, just going back to sleep ns: mdt-fir-MDT0001_UUID lock: ffff9e5ba8bf2f40/0x9161590832458022 lrc: 3/1,0 mode: --/PR res: [0x240035491:0x1335:0x0].0x0 bits 0x13/0x0 rrc: 5 type: IBT flags: 0x40210000000000 nid: local remote: 0x0 expref: -99 pid: 22595 timeout: 0 lvb_type: 0
[178205.454201] LustreError: 22595:0:(ldlm_request.c:129:ldlm_expired_completion_wait()) Skipped 1 previous similar message
[178219.222024] Lustre: 92024:0:(service.c:1372:ptlrpc_at_send_early_reply()) @@@ Couldn't add any time (5/5), not sending early reply req@ffff9e6c67bbec00 x1652731538448896/t0(0) o101->d2bd0014-3bea-4@10.9.114.7@o2ib4:262/0 lens 1792/3288 e 2 to 0 dl 1576169972 ref 2 fl Interpret:/0/0 rc 0/0
[178550.008171] LustreError: 22616:0:(ldlm_request.c:129:ldlm_expired_completion_wait()) ### lock timed out (enqueued at 1576169998, 300s ago); not entering recovery in server code, just going back to sleep ns: mdt-fir-MDT0001_UUID lock: ffff9e6bb764b3c0/0x916159083252c0b3 lrc: 3/1,0 mode: --/PR res: [0x240038caa:0x288e:0x0].0x0 bits 0x13/0x0 rrc: 26 type: IBT flags: 0x40210400000020 nid: local remote: 0x0 expref: -99 pid: 22616 timeout: 0 lvb_type: 0
[178624.023022] LustreError: 92202:0:(ldlm_request.c:129:ldlm_expired_completion_wait()) ### lock timed out (enqueued at 1576170072, 300s ago); not entering recovery in server code, just going back to sleep ns: mdt-fir-MDT0001_UUID lock: ffff9e8d563886c0/0x916159083256eee1 lrc: 3/0,1 mode: --/CW res: [0x240038caa:0x288e:0x0].0x0 bits 0x2/0x0 rrc: 26 type: IBT flags: 0x40210000000000 nid: local remote: 0x0 expref: -99 pid: 92202 timeout: 0 lvb_type: 0
[178624.062597] LustreError: 92202:0:(ldlm_request.c:129:ldlm_expired_completion_wait()) Skipped 6 previous similar messages
[178724.026706] LustreError: 22601:0:(ldlm_request.c:129:ldlm_expired_completion_wait()) ### lock timed out (enqueued at 1576170172, 300s ago); not entering recovery in server code, just going back to sleep ns: mdt-fir-MDT0001_UUID lock: ffff9e78227af2c0/0x91615908325a8ca8 lrc: 3/0,1 mode: --/CW res: [0x240038caa:0x288e:0x0].0x0 bits 0x2/0x0 rrc: 28 type: IBT flags: 0x40210000000000 nid: local remote: 0x0 expref: -99 pid: 22601 timeout: 0 lvb_type: 0
[178924.033113] LustreError: 22608:0:(ldlm_request.c:129:ldlm_expired_completion_wait()) ### lock timed out (enqueued at 1576170372, 300s ago); not entering recovery in server code, just going back to sleep ns: mdt-fir-MDT0001_UUID lock: ffff9e77f1bb7500/0x91615908325dd1d7 lrc: 3/0,1 mode: --/CW res: [0x240038caa:0x288e:0x0].0x0 bits 0x2/0x0 rrc: 29 type: IBT flags: 0x40210000000000 nid: local remote: 0x0 expref: -99 pid: 22608 timeout: 0 lvb_type: 0
[178924.072666] LustreError: 22608:0:(ldlm_request.c:129:ldlm_expired_completion_wait()) Skipped 5 previous similar messages
[178956.010150] Lustre: fir-MDT0001: Connection restored to (at 10.8.27.10@o2ib6)
[178956.017463] Lustre: Skipped 5 previous similar messages
[178999.280149] LustreError: 22304:0:(ldlm_request.c:129:ldlm_expired_completion_wait()) ### lock timed out (enqueued at 1576170447, 300s ago); not entering recovery in server code, just going back to sleep ns: mdt-fir-MDT0001_UUID lock: ffff9e77ebe14380/0x91615908325fb4a6 lrc: 3/0,1 mode: --/CW res: [0x240038caa:0x2891:0x0].0x0 bits 0x2/0x0 rrc: 16 type: IBT flags: 0x40210400000020 nid: local remote: 0x0 expref: -99 pid: 22304 timeout: 0 lvb_type: 0
[178999.319701] LustreError: 22304:0:(ldlm_request.c:129:ldlm_expired_completion_wait()) Skipped 1 previous similar message
[179000.151181] Lustre: 22604:0:(service.c:1372:ptlrpc_at_send_early_reply()) @@@ Couldn't add any time (5/-150), not sending early reply req@ffff9e5bb7a14380 x1648848314462464/t0(0) o101->8e3bf475-0833-510a-ec9f-d8743c1caa75@10.9.105.41@o2ib4:288/0 lens 576/3264 e 0 to 0 dl 1576170753 ref 2 fl Interpret:/0/0 rc 0/0
[179000.180508] Lustre: 22604:0:(service.c:1372:ptlrpc_at_send_early_reply()) Skipped 3 previous similar messages
[179000.919300] Lustre: 41905:0:(service.c:1372:ptlrpc_at_send_early_reply()) @@@ Couldn't add any time (4/-151), not sending early reply req@ffff9e6d3f61ba80 x1649932735167312/t0(0) o101->4e8251e5-eb6b-473d-1b55-6cf68aeb84d4@10.9.105.59@o2ib4:288/0 lens 576/3264 e 0 to 0 dl 1576170753 ref 2 fl Interpret:/0/0 rc 0/0
[179000.948631] Lustre: 41905:0:(service.c:1372:ptlrpc_at_send_early_reply()) Skipped 2 previous similar messages
[179006.196151] Lustre: fir-MDT0001: Client 8e4fe161-7440-1bc3-60cf-ef16452a7501 (at 10.9.105.43@o2ib4) reconnecting
[179006.206411] Lustre: Skipped 6 previous similar messages
[179075.161212] Lustre: 92262:0:(service.c:1372:ptlrpc_at_send_early_reply()) @@@ Couldn't add any time (4/-151), not sending early reply req@ffff9e8c7fa8ad00 x1649067653470144/t0(0) o101->bb7d080c-8ae8-f7ed-5d33-d34ca54d93de@10.9.108.19@o2ib4:362/0 lens 1824/3288 e 0 to 0 dl 1576170827 ref 2 fl Interpret:/0/0 rc 0/0
[179075.190634] Lustre: 92262:0:(service.c:1372:ptlrpc_at_send_early_reply()) Skipped 1 previous similar message
[179174.491895] Lustre: 22613:0:(service.c:1372:ptlrpc_at_send_early_reply()) @@@ Couldn't add any time (5/-150), not sending early reply req@ffff9e7a05bdcc80 x1648844261639120/t0(0) o101->86135b24-4615-6e9e-bc1c-d4370b719ce7@10.9.105.60@o2ib4:462/0 lens 1824/3288 e 0 to 0 dl 1576170927 ref 2 fl Interpret:/0/0 rc 0/0
[179180.103838] Lustre: fir-MDT0001: Client 86135b24-4615-6e9e-bc1c-d4370b719ce7 (at 10.9.105.60@o2ib4) reconnecting
[179180.114106] Lustre: Skipped 3 previous similar messages
[179276.797667] LustreError: 22595:0:(ldlm_request.c:129:ldlm_expired_completion_wait()) ### lock timed out (enqueued at 1576170724, 300s ago); not entering recovery in server code, just going back to sleep ns: mdt-fir-MDT0001_UUID lock: ffff9e77fa260480/0x9161590832650324 lrc: 3/1,0 mode: --/PR res: [0x240038caa:0x2891:0x0].0x0 bits 0x13/0x0 rrc: 12 type: IBT flags: 0x40210000000000 nid: local remote: 0x0 expref: -99 pid: 22595 timeout: 0 lvb_type: 0
[179276.837308] LustreError: 22595:0:(ldlm_request.c:129:ldlm_expired_completion_wait()) Skipped 3 previous similar messages
[179316.831729] Lustre: 22607:0:(service.c:1372:ptlrpc_at_send_early_reply()) @@@ Couldn't add any time (5/5), not sending early reply req@ffff9e5d8a65c800 x1652205021853312/t0(0) o101->e930c269-2a9e-4@10.9.0.63@o2ib4:604/0 lens 1784/3288 e 1 to 0 dl 1576171069 ref 2 fl Interpret:/0/0 rc 0/0
[179334.944219] Lustre: 92239:0:(service.c:1372:ptlrpc_at_send_early_reply()) @@@ Couldn't add any time (5/5), not sending early reply req@ffff9e5bf6fe0d80 x1652127611972032/t0(0) o101->336b594d-4bc6-4@10.9.101.2@o2ib4:623/0 lens 576/3264 e 1 to 0 dl 1576171088 ref 2 fl Interpret:/0/0 rc 0/0
[179334.971377] Lustre: 92239:0:(service.c:1372:ptlrpc_at_send_early_reply()) Skipped 6 previous similar messages
[179341.061603] Lustre: fir-MDT0001: Client 336b594d-4bc6-4 (at 10.9.101.2@o2ib4) reconnecting
[179341.069962] Lustre: Skipped 7 previous similar messages
[179352.928704] Lustre: 92720:0:(service.c:1372:ptlrpc_at_send_early_reply()) @@@ Couldn't add any time (4/4), not sending early reply req@ffff9e5d116bad00 x1652128291531584/t0(0) o101->2a89d1c7-a2e4-4@10.9.101.58@o2ib4:640/0 lens 1792/3288 e 1 to 0 dl 1576171105 ref 2 fl Interpret:/0/0 rc 0/0
[179390.849723] Lustre: 93477:0:(service.c:1372:ptlrpc_at_send_early_reply()) @@@ Couldn't add any time (5/-150), not sending early reply req@ffff9e77fd7cba80 x1648844330143648/t0(0) o101->c6e4797f-9032-698c-aa33-e028e63ca336@10.9.105.53@o2ib4:678/0 lens 376/1600 e 0 to 0 dl 1576171143 ref 2 fl Interpret:/0/0 rc 0/0
[179390.879056] Lustre: 93477:0:(service.c:1372:ptlrpc_at_send_early_reply()) Skipped 14 previous similar messages
[179497.060570] LNet: Service thread pid 22304 was inactive for 797.76s. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes:
[179497.077594] Pid: 22304, comm: mdt02_003 3.10.0-957.27.2.el7_lustre.pl2.x86_64 #1 SMP Thu Nov 7 15:26:16 PST 2019
[179497.087852] Call Trace:
[179497.090405] [] ldlm_completion_ast+0x4e5/0x860 [ptlrpc]
[179497.097429] [] ldlm_cli_enqueue_local+0x231/0x830 [ptlrpc]
[179497.104733] [] mdt_object_local_lock+0x438/0xb20 [mdt]
[179497.111650] [] mdt_object_lock_internal+0x70/0x360 [mdt]
[179497.118748] [] mdt_object_lock+0x20/0x30 [mdt]
[179497.124962] [] mdt_reint_open+0x106a/0x3240 [mdt]
[179497.131460] [] mdt_reint_rec+0x83/0x210 [mdt]
[179497.137600] [] mdt_reint_internal+0x6e3/0xaf0 [mdt]
[179497.144264] [] mdt_intent_open+0x82/0x3a0 [mdt]
[179497.150564] [] mdt_intent_policy+0x435/0xd80 [mdt]
[179497.157140] [] ldlm_lock_enqueue+0x356/0xa20 [ptlrpc]
[179497.163995] [] ldlm_handle_enqueue0+0xa56/0x15f0 [ptlrpc]
[179497.171207] [] tgt_enqueue+0x62/0x210 [ptlrpc]
[179497.177460] [] tgt_request_handle+0xaea/0x1580 [ptlrpc]
[179497.184501] [] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc]
[179497.192307] [] ptlrpc_main+0xb2c/0x1460 [ptlrpc]
[179497.198734] [] kthread+0xd1/0xe0
[179497.203737] [] ret_from_fork_nospec_begin+0xe/0x21
[179497.210311] [] 0xffffffffffffffff
[179497.215421] LustreError: dumping log to /tmp/lustre-log.1576171245.22304
[179521.637219] LNet: Service thread pid 92111 was inactive for 797.58s. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes:
[179521.654247] Pid: 92111, comm: mdt01_029 3.10.0-957.27.2.el7_lustre.pl2.x86_64 #1 SMP Thu Nov 7 15:26:16 PST 2019
[179521.664507] Call Trace:
[179521.667066] [] osp_precreate_reserve+0x2e8/0x800 [osp]
[179521.673987] [] osp_declare_create+0x199/0x5b0 [osp]
[179521.680649] [] lod_sub_declare_create+0xdf/0x210 [lod]
[179521.687566] [] lod_qos_declare_object_on+0xbe/0x3a0 [lod]
[179521.694747] [] lod_alloc_qos.constprop.18+0x10f4/0x1840 [lod]
[179521.702274] [] lod_qos_prep_create+0x12d7/0x1890 [lod]
[179521.709215] [] lod_prepare_create+0x215/0x2e0 [lod]
[179521.715863] [] lod_declare_striped_create+0x1ee/0x980 [lod]
[179521.723222] [] lod_declare_create+0x204/0x590 [lod]
[179521.729869] [] mdd_declare_create_object_internal+0xe2/0x2f0 [mdd]
[179521.737841] [] mdd_declare_create+0x4c/0xcb0 [mdd]
[179521.744404] [] mdd_create+0x847/0x14e0 [mdd]
[179521.750459] [] mdt_reint_open+0x224f/0x3240 [mdt]
[179521.756953] [] mdt_reint_rec+0x83/0x210 [mdt]
[179521.763103] [] mdt_reint_internal+0x6e3/0xaf0 [mdt]
[179521.769762] [] mdt_intent_open+0x82/0x3a0 [mdt]
[179521.776102] [] mdt_intent_policy+0x435/0xd80 [mdt]
[179521.782677] [] ldlm_lock_enqueue+0x356/0xa20 [ptlrpc]
[179521.789546] [] ldlm_handle_enqueue0+0xa56/0x15f0 [ptlrpc]
[179521.796743] [] tgt_enqueue+0x62/0x210 [ptlrpc]
[179521.803006] [] tgt_request_handle+0xaea/0x1580 [ptlrpc]
[179521.810039] [] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc]
[179521.817852] [] ptlrpc_main+0xb2c/0x1460 [ptlrpc]
[179521.824270] [] kthread+0xd1/0xe0
[179521.829284] [] ret_from_fork_nospec_begin+0xe/0x21
[179521.835847] [] 0xffffffffffffffff
[179521.840980] LustreError: dumping log to /tmp/lustre-log.1576171269.92111
[179521.848311] Pid: 22308, comm: mdt01_005 3.10.0-957.27.2.el7_lustre.pl2.x86_64 #1 SMP Thu Nov 7 15:26:16 PST 2019
[179521.858581] Call Trace:
[179521.861125] [] call_rwsem_down_write_failed+0x17/0x30
[179521.867948] [] lod_alloc_qos.constprop.18+0x205/0x1840 [lod]
[179521.875392] [] lod_qos_prep_create+0x12d7/0x1890 [lod]
[179521.882302] [] lod_prepare_create+0x215/0x2e0 [lod]
[179521.888963] [] lod_declare_striped_create+0x1ee/0x980 [lod]
[179521.896307] [] lod_declare_create+0x204/0x590 [lod]
[179521.902968] [] mdd_declare_create_object_internal+0xe2/0x2f0 [mdd]
[179521.910919] [] mdd_declare_create+0x4c/0xcb0 [mdd]
[179521.917494] [] mdd_create+0x847/0x14e0 [mdd]
[179521.923539] [] mdt_reint_open+0x224f/0x3240 [mdt]
[179521.930035] [] mdt_reint_rec+0x83/0x210 [mdt]
[179521.936166] [] mdt_reint_internal+0x6e3/0xaf0 [mdt]
[179521.942829] [] mdt_intent_open+0x82/0x3a0 [mdt]
[179521.949133] [] mdt_intent_policy+0x435/0xd80 [mdt]
[179521.955707] [] ldlm_lock_enqueue+0x356/0xa20 [ptlrpc]
[179521.962555] [] ldlm_handle_enqueue0+0xa56/0x15f0 [ptlrpc]
[179521.969758] [] tgt_enqueue+0x62/0x210 [ptlrpc]
[179521.976014] [] tgt_request_handle+0xaea/0x1580 [ptlrpc]
[179521.983053] [] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc]
[179521.990854] [] ptlrpc_main+0xb2c/0x1460 [ptlrpc]
[179521.997273] [] kthread+0xd1/0xe0
[179522.002273] [] ret_from_fork_nospec_begin+0xe/0x21
[179522.008844] [] 0xffffffffffffffff
[179523.685276] LNet: Service thread pid 22311 was inactive for 799.60s. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes:
[179523.702295] LNet: Skipped 1 previous similar message
[179523.707357] Pid: 22311, comm: mdt00_005 3.10.0-957.27.2.el7_lustre.pl2.x86_64 #1 SMP Thu Nov 7 15:26:16 PST 2019
[179523.717636] Call Trace:
[179523.720194] [] call_rwsem_down_write_failed+0x17/0x30
[179523.727029] [] lod_alloc_qos.constprop.18+0x205/0x1840 [lod]
[179523.734481] [] lod_qos_prep_create+0x12d7/0x1890 [lod]
[179523.741391] [] lod_prepare_create+0x215/0x2e0 [lod]
[179523.748059] [] lod_declare_striped_create+0x1ee/0x980 [lod]
[179523.755403] [] lod_declare_create+0x204/0x590 [lod]
[179523.762065] [] mdd_declare_create_object_internal+0xe2/0x2f0 [mdd]
[179523.770025] [] mdd_declare_create+0x4c/0xcb0 [mdd]
[179523.776599] [] mdd_create+0x847/0x14e0 [mdd]
[179523.782643] [] mdt_reint_open+0x224f/0x3240 [mdt]
[179523.789163] [] mdt_reint_rec+0x83/0x210 [mdt]
[179523.795307] [] mdt_reint_internal+0x6e3/0xaf0 [mdt]
[179523.801977] [] mdt_intent_open+0x82/0x3a0 [mdt]
[179523.808289] [] mdt_intent_policy+0x435/0xd80 [mdt]
[179523.814875] [] ldlm_lock_enqueue+0x356/0xa20 [ptlrpc]
[179523.821740] [] ldlm_handle_enqueue0+0xa56/0x15f0 [ptlrpc]
[179523.828948] [] tgt_enqueue+0x62/0x210 [ptlrpc]
[179523.835209] [] tgt_request_handle+0xaea/0x1580 [ptlrpc]
[179523.842251] [] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc]
[179523.850056] [] ptlrpc_main+0xb2c/0x1460 [ptlrpc]
[179523.856498] [] kthread+0xd1/0xe0
[179523.861519] [] ret_from_fork_nospec_begin+0xe/0x21
[179523.868084] [] 0xffffffffffffffff
[179523.873222] LustreError: dumping log to /tmp/lustre-log.1576171271.22311
[179523.880537] Pid: 93205, comm: mdt00_043 3.10.0-957.27.2.el7_lustre.pl2.x86_64 #1 SMP Thu Nov 7 15:26:16 PST 2019
[179523.890813] Call Trace:
[179523.893360] [] call_rwsem_down_write_failed+0x17/0x30
[179523.900173] [] lod_alloc_qos.constprop.18+0x205/0x1840 [lod]
[179523.907617] [] lod_qos_prep_create+0x12d7/0x1890 [lod]
[179523.914529] [] lod_prepare_create+0x215/0x2e0 [lod]
[179523.921204] [] lod_declare_striped_create+0x1ee/0x980 [lod]
[179523.928550] [] lod_declare_create+0x204/0x590 [lod]
[179523.935211] [] mdd_declare_create_object_internal+0xe2/0x2f0 [mdd]
[179523.943163] [] mdd_declare_create+0x4c/0xcb0 [mdd]
[179523.949736] [] mdd_create+0x847/0x14e0 [mdd]
[179523.955781] [] mdt_reint_open+0x224f/0x3240 [mdt]
[179523.962280] [] mdt_reint_rec+0x83/0x210 [mdt]
[179523.968410] [] mdt_reint_internal+0x6e3/0xaf0 [mdt]
[179523.975070] [] mdt_intent_open+0x82/0x3a0 [mdt]
[179523.981375] [] mdt_intent_policy+0x435/0xd80 [mdt]
[179523.987963] [] ldlm_lock_enqueue+0x356/0xa20 [ptlrpc]
[179523.994800] [] ldlm_handle_enqueue0+0xa56/0x15f0 [ptlrpc]
[179524.002010] [] tgt_enqueue+0x62/0x210 [ptlrpc]
[179524.008250] [] tgt_request_handle+0xaea/0x1580 [ptlrpc]
[179524.015285] [] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc]
[179524.023087] [] ptlrpc_main+0xb2c/0x1460 [ptlrpc]
[179524.029516] [] kthread+0xd1/0xe0
[179524.034511] [] ret_from_fork_nospec_begin+0xe/0x21
[179524.041086] [] 0xffffffffffffffff
[179524.051519] LNet: Service thread pid 22308 completed after 799.98s. This indicates the system was overloaded (too many service threads, or there were not enough hardware resources).
[179524.067766] LNet: Skipped 3 previous similar messages
[179577.545736] LustreError: 92718:0:(ldlm_request.c:129:ldlm_expired_completion_wait()) ### lock timed out (enqueued at 1576171025, 300s ago); not entering recovery in server code, just going back to sleep ns: mdt-fir-MDT0001_UUID lock: ffff9e5c4e2e57c0/0x91615908326896e2 lrc: 3/1,0 mode: --/PR res: [0x240038caa:0x2891:0x0].0x0 bits 0x13/0x0 rrc: 12 type: IBT flags: 0x40210400000020 nid: local remote: 0x0 expref: -99 pid: 92718 timeout: 0 lvb_type: 0
[179577.585384] LustreError: 92718:0:(ldlm_request.c:129:ldlm_expired_completion_wait()) Skipped 11 previous similar messages
[179578.982772] Lustre: 41678:0:(service.c:1372:ptlrpc_at_send_early_reply()) @@@ Couldn't add any time (4/-280), not sending early reply req@ffff9e77f3bc9f80 x1649067653541920/t0(0) o101->bb7d080c-8ae8-f7ed-5d33-d34ca54d93de@10.9.108.19@o2ib4:111/0 lens 1784/3288 e 0 to 0 dl 1576171331 ref 2 fl Interpret:/0/0 rc 0/0
[179579.012187] Lustre: 41678:0:(service.c:1372:ptlrpc_at_send_early_reply()) Skipped 6 previous similar messages
[179584.153630] Lustre: fir-MDT0001: Connection restored to (at 10.9.108.19@o2ib4)
[179584.161060] Lustre: Skipped 35 previous similar messages
[179599.463310] LNet: Service thread pid 41902 was inactive for 801.81s. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one.
[179599.476261] LustreError: dumping log to /tmp/lustre-log.1576171347.41902
[179601.916920] Lustre: fir-MDT0001: haven't heard from client 75167b5d-e2d7-d704-ea07-95d8feb377a6 (at 10.9.102.1@o2ib4) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9e5da15cf400, cur 1576171350 expire 1576171200 last 1576171123
[179609.703581] LNet: Service thread pid 22254 was inactive for 797.18s. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one.
[179609.716532] LustreError: dumping log to /tmp/lustre-log.1576171357.22254
[179624.057116] LNet: Service thread pid 22304 completed after 924.75s. This indicates the system was overloaded (too many service threads, or there were not enough hardware resources).
[179624.073391] LNet: Skipped 2 previous similar messages
[179767.933349] Lustre: fir-MDT0001: haven't heard from client 135543df-9fa8-fe17-ef67-a6cd12881d1d (at 10.8.7.19@o2ib6) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9e5da3957400, cur 1576171516 expire 1576171366 last 1576171289
[180042.982292] Lustre: fir-MDT0001: haven't heard from client 3fa61b7b-3364-0c3e-efb9-55ce1343c799 (at 10.8.23.34@o2ib6) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9e5d99bf1800, cur 1576171791 expire 1576171641 last 1576171564
[180090.151478] LustreError: 22624:0:(ldlm_request.c:129:ldlm_expired_completion_wait()) ### lock timed out (enqueued at 1576171538, 300s ago); not entering recovery in server code, just going back to sleep ns: mdt-fir-MDT0001_UUID lock: ffff9e6d0d67a1c0/0x91615908326fd691 lrc: 3/1,0 mode: --/PR res: [0x240010083:0x68e5:0x0].0x0 bits 0x13/0x0 rrc: 23 type: IBT flags: 0x40210400000020 nid: local remote: 0x0 expref: -99 pid: 22624 timeout: 0 lvb_type: 0
[180090.191120] LustreError: 22624:0:(ldlm_request.c:129:ldlm_expired_completion_wait()) Skipped 23 previous similar messages
[180666.959908] Lustre: fir-MDT0001: haven't heard from client ee8a8d10-65c2-ae96-bc67-9f6bae32e110 (at 10.8.18.18@o2ib6) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9e5d99bf4800, cur 1576172415 expire 1576172265 last 1576172188
[180832.252339] LustreError: 92024:0:(ldlm_request.c:129:ldlm_expired_completion_wait()) ### lock timed out (enqueued at 1576172280, 300s ago); not entering recovery in server code, just going back to sleep ns: mdt-fir-MDT0001_UUID lock: ffff9e6d78f6a880/0x916159083278f485 lrc: 3/1,0 mode: --/PR res: [0x240038439:0x5a55:0x0].0x0 bits 0x13/0x0 rrc: 5 type: IBT flags: 0x40210000000000 nid: local remote: 0x0 expref: -99 pid: 92024 timeout: 0 lvb_type: 0
[180832.291922] LustreError: 92024:0:(ldlm_request.c:129:ldlm_expired_completion_wait()) Skipped 11 previous similar messages
[180962.944572] Lustre: fir-MDT0001: Client 57f091b6-2713-4 (at 10.9.104.7@o2ib4) reconnecting
[180962.952928] Lustre: Skipped 19 previous similar messages
[180962.958361] Lustre: fir-MDT0001: Connection restored to (at 10.9.104.7@o2ib4)
[180962.965679] Lustre: Skipped 1 previous similar message
[181225.618840] LNet: Service thread pid 22590 was inactive for 401.28s. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes:
[181225.635864] LNet: Skipped 1 previous similar message
[181225.640923] Pid: 22590, comm: mdt01_009 3.10.0-957.27.2.el7_lustre.pl2.x86_64 #1 SMP Thu Nov 7 15:26:16 PST 2019
[181225.651213] Call Trace:
[181225.653764] [] call_rwsem_down_write_failed+0x17/0x30
[181225.660598] [] lod_alloc_qos.constprop.18+0x205/0x1840 [lod]
[181225.668052] [] lod_qos_prep_create+0x12d7/0x1890 [lod]
[181225.674976] [] lod_prepare_create+0x215/0x2e0 [lod]
[181225.681639] [] lod_declare_striped_create+0x1ee/0x980 [lod]
[181225.688984] [] lod_declare_create+0x204/0x590 [lod]
[181225.695645] [] mdd_declare_create_object_internal+0xe2/0x2f0 [mdd]
[181225.703604] [] mdd_declare_create+0x4c/0xcb0 [mdd]
[181225.710180] [] mdd_create+0x847/0x14e0 [mdd]
[181225.716224] [] mdt_reint_open+0x224f/0x3240 [mdt]
[181225.722732] [] mdt_reint_rec+0x83/0x210 [mdt]
[181225.728869] [] mdt_reint_internal+0x6e3/0xaf0 [mdt]
[181225.735541] [] mdt_intent_open+0x82/0x3a0 [mdt]
[181225.741867] [] mdt_intent_policy+0x435/0xd80 [mdt]
[181225.748454] [] ldlm_lock_enqueue+0x356/0xa20 [ptlrpc]
[181225.755310] [] ldlm_handle_enqueue0+0xa56/0x15f0 [ptlrpc]
[181225.762520] [] tgt_enqueue+0x62/0x210 [ptlrpc]
[181225.768770] [] tgt_request_handle+0xaea/0x1580 [ptlrpc]
[181225.775815] [] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc]
[181225.783616] [] ptlrpc_main+0xb2c/0x1460 [ptlrpc]
[181225.790045] [] kthread+0xd1/0xe0
[181225.795048] [] ret_from_fork_nospec_begin+0xe/0x21
[181225.801624] [] 0xffffffffffffffff
[181225.806748] LustreError: dumping log to /tmp/lustre-log.1576172973.22590
[181289.064542] Lustre: 22302:0:(service.c:1372:ptlrpc_at_send_early_reply()) @@@ Couldn't add any time (5/-150), not sending early reply req@ffff9e5bc6a56780 x1648849902919344/t0(0) o101->8a88f4e3-1527-a9ba-0c95-87171252c73f@10.9.105.42@o2ib4:312/0 lens 576/3264 e 0 to 0 dl 1576173042 ref 2 fl Interpret:/0/0 rc 0/0
[181289.093874] Lustre: 22302:0:(service.c:1372:ptlrpc_at_send_early_reply()) Skipped 1 previous similar message
[181295.398869] Lustre: fir-MDT0001: Client 4e8251e5-eb6b-473d-1b55-6cf68aeb84d4 (at 10.9.105.59@o2ib4) reconnecting
[181295.409146] Lustre: fir-MDT0001: Connection restored to (at 10.9.105.59@o2ib4)
[181320.969062] Lustre: fir-MDT0001: haven't heard from client ef78dfe0-80b9-391e-81c2-9236655a36fe (at 10.9.103.59@o2ib4) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9e6db9b26400, cur 1576173069 expire 1576172919 last 1576172842
[181324.305126] LNet: Service thread pid 22590 completed after 499.97s. This indicates the system was overloaded (too many service threads, or there were not enough hardware resources).
[181505.269926] Lustre: fir-MDT0001: Connection restored to (at 10.9.104.7@o2ib4)
[181505.277246] Lustre: Skipped 1 previous similar message
[181937.822880] Lustre: fir-MDT0001: Connection restored to (at 10.9.102.1@o2ib4)
[181937.830195] Lustre: Skipped 2 previous similar messages
[181946.943200] LustreError: 22618:0:(ldlm_request.c:129:ldlm_expired_completion_wait()) ### lock timed out (enqueued at 1576173394, 300s ago); not entering recovery in server code, just going back to sleep ns: mdt-fir-MDT0001_UUID lock: ffff9e6b7f620d80/0x916159083292f745 lrc: 3/1,0 mode: --/PR res: [0x240038caa:0x288e:0x0].0x0 bits 0x13/0x0 rrc: 14 type: IBT flags: 0x40210000000000 nid: local remote: 0x0 expref: -99 pid: 22618 timeout: 0 lvb_type: 0
[181946.982840] LustreError: 22618:0:(ldlm_request.c:129:ldlm_expired_completion_wait()) Skipped 11 previous similar messages
[182245.003372] Lustre: fir-MDT0001: haven't heard from client 227d7a25-50be-a469-9b6d-83846499cd76 (at 10.8.27.14@o2ib6) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9e5da763d800, cur 1576173993 expire 1576173843 last 1576173766
[182824.699272] Lustre: fir-MDT0001: Connection restored to (at 10.8.18.18@o2ib6)
[182824.706583] Lustre: Skipped 3 previous similar messages
[182824.734048] LustreError: 22966:0:(ldlm_request.c:129:ldlm_expired_completion_wait()) ### lock timed out (enqueued at 1576174272, 300s ago); not entering recovery in server code, just going back to sleep ns: mdt-fir-MDT0001_UUID lock: ffff9e5b51643a80/0x9161590832a9ac63 lrc: 3/1,0 mode: --/PR res: [0x240038caa:0x288e:0x0].0x0 bits 0x13/0x0 rrc: 19 type: IBT flags: 0x40210400000020 nid: local remote: 0x0 expref: -99 pid: 22966 timeout: 0 lvb_type: 0
[182824.773699] LustreError: 22966:0:(ldlm_request.c:129:ldlm_expired_completion_wait()) Skipped 15 previous similar messages
[183449.028965] Lustre: fir-MDT0001: haven't heard from client 4ddd9e11-580e-5fd9-690c-d09be6f90077 (at 10.9.101.42@o2ib4) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9e5d97276000, cur 1576175197 expire 1576175047 last 1576174970
[183449.050850] Lustre: Skipped 2 previous similar messages
[183746.730212] LustreError: 22600:0:(ldlm_request.c:129:ldlm_expired_completion_wait()) ### lock timed out (enqueued at 1576175194, 300s ago); not entering recovery in server code, just going back to sleep ns: mdt-fir-MDT0001_UUID lock: ffff9e5c9df24ec0/0x9161590832beb5a1 lrc: 3/1,0 mode: --/PR res: [0x2400372ff:0xbc7f:0x0].0x0 bits 0x13/0x0 rrc: 6 type: IBT flags: 0x40210000000000 nid: local remote: 0x0 expref: -99 pid: 22600 timeout: 0 lvb_type: 0
[183746.769770] LustreError: 22600:0:(ldlm_request.c:129:ldlm_expired_completion_wait()) Skipped 8 previous similar messages
[184338.939345] Lustre: fir-MDT0001: Connection restored to (at 10.8.27.14@o2ib6)
[184338.946664] Lustre: Skipped 1 previous similar message
[184434.956990] Lustre: fir-MDT0001: Client 554dcc06-8c06-f49d-eac2-beeb59276b64 (at 10.9.109.6@o2ib4) reconnecting
[184434.967169] Lustre: Skipped 1 previous similar message
[184434.972425] Lustre: fir-MDT0001: Connection restored to (at 10.9.109.6@o2ib4)
[184434.979741] Lustre: Skipped 1 previous similar message
[184438.016181] LustreError: 22604:0:(ldlm_request.c:129:ldlm_expired_completion_wait()) ### lock timed out (enqueued at 1576175885, 300s ago); not entering recovery in server code, just going back to sleep ns: mdt-fir-MDT0001_UUID lock: ffff9e5af93c2400/0x9161590832cbb269 lrc: 3/1,0 mode: --/PR res: [0x240038caa:0x288e:0x0].0x0 bits 0x13/0x0 rrc: 12 type: IBT flags: 0x40210000000000 nid: local remote: 0x0 expref: -99 pid: 22604 timeout: 0 lvb_type: 0
[184438.055830] LustreError: 22604:0:(ldlm_request.c:129:ldlm_expired_completion_wait()) Skipped 11 previous similar messages
[184591.248595] Lustre: fir-MDT0001: Client e930c269-2a9e-4 (at 10.9.0.63@o2ib4) reconnecting
[184591.256891] Lustre: fir-MDT0001: Connection restored to (at 10.9.0.63@o2ib4)
[184734.940331] Lustre: 92239:0:(service.c:1372:ptlrpc_at_send_early_reply()) @@@ Couldn't add any time (5/-8), not sending early reply req@ffff9e8da2562400 x1650958611901776/t0(0) o101->bdc6a669-f745-2944-1b74-3762ff7d0bf8@10.9.101.36@o2ib4:737/0 lens 1784/3288 e 0 to 0 dl 1576176487 ref 2 fl Interpret:/0/0 rc 0/0
[184734.969575] Lustre: 92239:0:(service.c:1372:ptlrpc_at_send_early_reply()) Skipped 1 previous similar message
[184736.970383] Lustre: 22591:0:(service.c:1372:ptlrpc_at_send_early_reply()) @@@ Couldn't add any time (5/-8), not sending early reply req@ffff9e7a6cb19680 x1649048233868144/t0(0) o101->b4c9913c-f59e-b8ac-70a9-c2d8d6c39257@10.9.101.34@o2ib4:739/0 lens 1784/3288 e 0 to 0 dl 1576176489 ref 2 fl Interpret:/0/0 rc 0/0
[184740.390976] Lustre: fir-MDT0001: Client f7e00986-8544-51b5-5c7e-5b48cb50b80d (at 10.9.108.65@o2ib4) reconnecting
[184742.978546] Lustre: 22591:0:(service.c:1372:ptlrpc_at_send_early_reply()) @@@ Couldn't add any time (5/-8), not sending early reply req@ffff9e5d33fa7080 x1652554649622720/t0(0) o101->f58fa07b-04e0-4@10.9.0.64@o2ib4:745/0 lens 384/1600 e 0 to 0 dl 1576176495 ref 2 fl Interpret:/0/0 rc 0/0
[184758.994993] Lustre: 22827:0:(service.c:1372:ptlrpc_at_send_early_reply()) @@@ Couldn't add any time (5/-8), not sending early reply req@ffff9e8d57350480 x1649315914262144/t0(0) o101->4c24f803-ac39-f2e9-62fc-ef86388a1d21@10.9.110.2@o2ib4:6/0 lens 1800/3288 e 0 to 0 dl 1576176511 ref 2 fl Interpret:/0/0 rc 0/0
[184759.023977] Lustre: 22827:0:(service.c:1372:ptlrpc_at_send_early_reply()) Skipped 3 previous similar messages
[184875.070895] Lustre: fir-MDT0001: haven't heard from client 5d110741-f52f-a556-c0fd-775bc1eebbda (at 10.9.105.33@o2ib4) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9e5d9672d800, cur 1576176623 expire 1576176473 last 1576176396
[184875.092771] Lustre: Skipped 9 previous similar messages
[184920.477048] Lustre: fir-MDT0001: Connection restored to (at 10.8.26.36@o2ib6)
[184920.484369] Lustre: Skipped 10 previous similar messages
[185075.787659] Lustre: 22315:0:(service.c:1372:ptlrpc_at_send_early_reply()) @@@ Couldn't add any time (5/5), not sending early reply req@ffff9e8da1bcda00 x1650567050813632/t0(0) o101->8ea9d9cd-8086-16f1-7cea-3b482c5d9f4c@10.9.108.20@o2ib4:323/0 lens 1784/3288 e 1 to 0 dl 1576176828 ref 2 fl Interpret:/0/0 rc 0/0
[185082.056575] Lustre: fir-MDT0001: Client 8ea9d9cd-8086-16f1-7cea-3b482c5d9f4c (at 10.9.108.20@o2ib4) reconnecting
[185082.066837] Lustre: Skipped 7 previous similar messages
[185114.470672] Lustre: 22313:0:(service.c:1372:ptlrpc_at_send_early_reply()) @@@ Couldn't add any time (5/5), not sending early reply req@ffff9e6c51e53600 x1652592178954304/t0(0) o101->366fe29e-6125-4@10.9.108.18@o2ib4:362/0 lens 1784/3288 e 1 to 0 dl 1576176867 ref 2 fl Interpret:/0/0 rc 0/0
[185124.940958] LustreError: 22604:0:(ldlm_request.c:129:ldlm_expired_completion_wait()) ### lock timed out (enqueued at 1576176572, 300s ago); not entering recovery in server code, just going back to sleep ns: mdt-fir-MDT0001_UUID lock: ffff9e5b28ee4800/0x9161590832d206cd lrc: 3/0,1 mode: --/CW res: [0x24003aa21:0x9a:0x0].0x0 bits 0x2/0x0 rrc: 6 type: IBT flags: 0x40210000000000 nid: local remote: 0x0 expref: -99 pid: 22604 timeout: 0 lvb_type: 0
[185124.980289] LustreError: 22604:0:(ldlm_request.c:129:ldlm_expired_completion_wait()) Skipped 15 previous similar messages
[185613.757112] Lustre: fir-MDT0001: Connection restored to (at 10.9.115.12@o2ib4)
[185613.764511] Lustre: Skipped 9 previous similar messages
[185724.958455] LustreError: 22618:0:(ldlm_request.c:129:ldlm_expired_completion_wait()) ### lock timed out (enqueued at 1576177172, 300s ago); not entering recovery in server code, just going back to sleep ns: mdt-fir-MDT0001_UUID lock: ffff9e6d68385100/0x9161590832e044df lrc: 3/1,0 mode: --/PR res: [0x240038caa:0x288e:0x0].0x0 bits 0x13/0x0 rrc: 42 type: IBT flags: 0x40210400000020 nid: local remote: 0x0 expref: -99 pid: 22618 timeout: 0 lvb_type: 0
[185724.998112] LustreError: 22618:0:(ldlm_request.c:129:ldlm_expired_completion_wait()) Skipped 39 previous similar messages
[185766.797908] Lustre: fir-MDT0001: Client cfe93466-ba97-4 (at 10.9.0.62@o2ib4) reconnecting
[185766.806181] Lustre: Skipped 1 previous similar message
[186279.105264] Lustre: fir-MDT0001: haven't heard from client e8e18d90-dcac-7195-a7b7-bbaf10be70ce (at 10.9.103.52@o2ib4) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9e5da8281c00, cur 1576178027 expire 1576177877 last 1576177800
[186437.635808] LustreError: 22966:0:(ldlm_request.c:129:ldlm_expired_completion_wait()) ### lock timed out (enqueued at 1576177885, 300s ago); not entering recovery in server code, just going back to sleep ns: mdt-fir-MDT0001_UUID lock: ffff9e5d1b65c140/0x9161590832f4c5ea lrc: 3/1,0 mode: --/PR res: [0x240038caa:0x288e:0x0].0x0 bits 0x13/0x0 rrc: 12 type: IBT flags: 0x40210000000000 nid: local remote: 0x0 expref: -99 pid: 22966 timeout: 0 lvb_type: 0
[186437.675458] LustreError: 22966:0:(ldlm_request.c:129:ldlm_expired_completion_wait()) Skipped 11 previous similar messages
[186502.361304] Lustre: fir-MDT0001: Client e930c269-2a9e-4 (at 10.9.0.63@o2ib4) reconnecting
[186502.369599] Lustre: fir-MDT0001: Connection restored to (at 10.9.0.63@o2ib4)
[186502.376826] Lustre: Skipped 1 previous similar message
[187125.187581] LustreError: 22595:0:(ldlm_request.c:129:ldlm_expired_completion_wait()) ### lock timed out (enqueued at 1576178573, 300s ago); not entering recovery in server code, just going back to sleep ns: mdt-fir-MDT0001_UUID lock: ffff9e5be0e60b40/0x91615908330b2890 lrc: 3/1,0 mode: --/PR res: [0x240039384:0xeb6b:0x0].0x0 bits 0x13/0x0 rrc: 7 type: IBT flags: 0x40210000000000 nid: local remote: 0x0 expref: -99 pid: 22595 timeout: 0 lvb_type: 0
[187125.227138] LustreError: 22595:0:(ldlm_request.c:129:ldlm_expired_completion_wait()) Skipped 10 previous similar messages
[187125.245984] Lustre: fir-MDT0001: Client e930c269-2a9e-4 (at 10.9.0.63@o2ib4) reconnecting
[187125.254256] Lustre: Skipped 1 previous similar message
[187125.259514] Lustre: fir-MDT0001: Connection restored to (at 10.9.0.63@o2ib4)
[187125.266742] Lustre: Skipped 2 previous similar messages
[187195.509506] Lustre: 22254:0:(service.c:1372:ptlrpc_at_send_early_reply()) @@@ Couldn't add any time (5/-150), not sending early reply req@ffff9e787f34d100 x1648845380768800/t0(0) o101->067e45d2-124e-8a90-700d-20a4cb9a0f62@10.9.105.38@o2ib4:178/0 lens 376/1600 e 0 to 0 dl 1576178948 ref 2 fl Interpret:/0/0 rc 0/0
[187195.538837] Lustre: 22254:0:(service.c:1372:ptlrpc_at_send_early_reply()) Skipped 1 previous similar message
[187269.943526] Lustre: 22302:0:(service.c:1372:ptlrpc_at_send_early_reply()) @@@ Couldn't add any time (5/-25), not sending early reply req@ffff9e5b3caa6780 x1648846816333264/t0(0) o101->9a91b993-1399-1978-f4a8-fbbdfe7e9dbc@10.9.105.36@o2ib4:252/0 lens 376/1600 e 0 to 0 dl 1576179022 ref 2 fl Interpret:/0/0 rc 0/0
[187393.338929] Lustre: 22614:0:(service.c:1372:ptlrpc_at_send_early_reply()) @@@ Couldn't add any time (5/-150), not sending early reply req@ffff9e6b81b27500 x1648848317112768/t0(0) o101->8e3bf475-0833-510a-ec9f-d8743c1caa75@10.9.105.41@o2ib4:376/0 lens 376/1600 e 0 to 0 dl 1576179146 ref 2 fl Interpret:/0/0 rc 0/0
[187475.517128] Lustre: 22614:0:(service.c:1372:ptlrpc_at_send_early_reply()) @@@ Couldn't add any time (5/-150), not sending early reply req@ffff9e6b7e274380 x1648595935227712/t0(0) o101->4e1cc11b-70d3-525f-ee38-2a7467cc154b@10.8.30.3@o2ib6:458/0 lens 1792/3288 e 0 to 0 dl 1576179228 ref 2 fl Interpret:/0/0 rc 0/0
[187475.546377] Lustre: 22614:0:(service.c:1372:ptlrpc_at_send_early_reply()) Skipped 4 previous similar messages
[187575.615846] Lustre: 41903:0:(service.c:1372:ptlrpc_at_send_early_reply()) @@@ Couldn't add any time (5/-150), not sending early reply req@ffff9e5d886a6c00 x1649932737016432/t0(0) o101->4e8251e5-eb6b-473d-1b55-6cf68aeb84d4@10.9.105.59@o2ib4:558/0 lens 376/1600 e 0 to 0 dl 1576179328 ref 2 fl Interpret:/0/0 rc 0/0
[187575.645179] Lustre: 41903:0:(service.c:1372:ptlrpc_at_send_early_reply()) Skipped 11 previous similar messages
[187725.207523] Lustre: 22592:0:(service.c:2165:ptlrpc_server_handle_request()) @@@ Request took longer than estimated (755:61s); client may timeout. req@ffff9e6db714ba80 x1649392543797200/t416897840310(0) o101->4587af08-4157-6320-9e58-cade8713b082@10.9.105.30@o2ib4:642/0 lens 1784/904 e 0 to 0 dl 1576179412 ref 1 fl Complete:/0/0 rc 0/0
[187907.924818] Lustre: fir-MDT0001: Connection restored to (at 10.9.103.52@o2ib4)
[187907.932216] Lustre: Skipped 35 previous similar messages
[188025.247171] LustreError: 22613:0:(ldlm_request.c:129:ldlm_expired_completion_wait()) ### lock timed out (enqueued at 1576179473, 300s ago); not entering recovery in server code, just going back to sleep ns: mdt-fir-MDT0001_UUID lock: ffff9e7832a65c40/0x91615908331d90fc lrc: 3/1,0 mode: --/PR res: [0x240038caa:0x288e:0x0].0x0 bits 0x13/0x0 rrc: 27 type: IBT flags: 0x40210400000020 nid: local remote: 0x0 expref: -99 pid: 22613 timeout: 0 lvb_type: 0
[188025.286834] LustreError: 22613:0:(ldlm_request.c:129:ldlm_expired_completion_wait()) Skipped 25 previous similar messages
[188070.151398] Lustre: fir-MDT0001: haven't heard from client c804f06b-97c0-205b-aa77-e2392ade35bd (at 10.8.7.7@o2ib6) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9e5da6a86400, cur 1576179818 expire 1576179668 last 1576179591
[188271.154006] Lustre: fir-MDT0001: haven't heard from client 1d444526-0c94-9229-34be-9d214c0c6bbd (at 10.9.101.46@o2ib4) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9e5da8284400, cur 1576180019 expire 1576179869 last 1576179792
[188519.161231] Lustre: fir-MDT0001: haven't heard from client 7126efc2-9676-1db9-94d0-ae09c1520697 (at 10.9.101.26@o2ib4) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9e5da15c9000, cur 1576180267 expire 1576180117 last 1576180040
[188825.311940] LustreError: 92239:0:(ldlm_request.c:129:ldlm_expired_completion_wait()) ### lock timed out (enqueued at 1576180273, 300s ago); not entering recovery in server code, just going back to sleep ns: mdt-fir-MDT0001_UUID lock: ffff9e8d1762ec00/0x91615908333dab11 lrc: 3/1,0 mode: --/PR res: [0x240038caa:0x288e:0x0].0x0 bits 0x13/0x0 rrc: 22 type: IBT flags: 0x40210400000020 nid: local remote: 0x0 expref: -99 pid: 92239 timeout: 0 lvb_type: 0
[188825.351617] LustreError: 92239:0:(ldlm_request.c:129:ldlm_expired_completion_wait()) Skipped 13 previous similar messages
[189371.182252] Lustre: fir-MDT0001: haven't heard from client 02eb8135-4034-bcb2-8df8-77d00506e76a (at 10.8.7.15@o2ib6) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9e5d9672c400, cur 1576181119 expire 1576180969 last 1576180892
[189425.352308] LustreError: 22596:0:(ldlm_request.c:129:ldlm_expired_completion_wait()) ### lock timed out (enqueued at 1576180873, 300s ago); not entering recovery in server code, just going back to sleep ns: mdt-fir-MDT0001_UUID lock: ffff9e6daeb998c0/0x916159083350ce4d lrc: 3/0,1 mode: --/CW res: [0x240038caa:0x288e:0x0].0x0 bits 0x2/0x0 rrc: 25 type: IBT flags: 0x40210400000020 nid: local remote: 0x0 expref: -99 pid: 22596 timeout: 0 lvb_type: 0
[189425.391894] LustreError: 22596:0:(ldlm_request.c:129:ldlm_expired_completion_wait()) Skipped 10 previous similar messages
[189821.180310] Lustre: fir-MDT0001: Connection restored to (at 10.8.7.7@o2ib6)
[190125.381344] LustreError: 22258:0:(ldlm_request.c:129:ldlm_expired_completion_wait()) ### lock timed out (enqueued at 1576181573, 300s ago); not entering recovery in server code, just going back to sleep ns: mdt-fir-MDT0001_UUID lock: ffff9e5d81e0e780/0x91615908336938c8 lrc: 3/0,1 mode: --/CW res: [0x240038caa:0x288e:0x0].0x0 bits 0x2/0x0 rrc: 39 type: IBT flags: 0x40210400000020 nid: local remote: 0x0 expref: -99 pid: 22258 timeout: 0 lvb_type: 0
[190125.381346] LustreError: 92260:0:(ldlm_request.c:129:ldlm_expired_completion_wait()) ### lock timed out (enqueued at 1576181573, 300s ago); not entering recovery in server code, just going back to sleep ns: mdt-fir-MDT0001_UUID lock: ffff9e5bb1fd0240/0x91615908336938ac lrc: 3/1,0 mode: --/PR res: [0x240038caa:0x288e:0x0].0x0 bits 0x13/0x0 rrc: 40 type: IBT flags: 0x40210400000020 nid: local remote: 0x0 expref: -99 pid: 92260 timeout: 0 lvb_type: 0
[190125.381350] LustreError: 92260:0:(ldlm_request.c:129:ldlm_expired_completion_wait()) Skipped 2 previous similar messages
[190427.588406] Lustre: fir-MDT0001: Client 03dd52b8-a4fc-4 (at 10.9.0.61@o2ib4) reconnecting
[190427.596677] Lustre: Skipped 35 previous similar messages
[190427.602105] Lustre: fir-MDT0001: Connection restored to (at 10.9.0.61@o2ib4)
[190461.646535] Lustre: 95465:0:(service.c:1372:ptlrpc_at_send_early_reply()) @@@ Couldn't add any time (5/-22), not sending early reply req@ffff9e6b7de02880 x1649516731037056/t0(0) o101->e28be1e7-a280-e0e2-d404-71ed26b45978@10.9.105.49@o2ib4:424/0 lens 376/1600 e 0 to 0 dl 1576182214 ref 2 fl Interpret:/0/0 rc 0/0
[190461.675782] Lustre: 95465:0:(service.c:1372:ptlrpc_at_send_early_reply()) Skipped 18 previous similar messages
[190486.031186] Lustre: 92239:0:(service.c:1372:ptlrpc_at_send_early_reply()) @@@ Couldn't add any time (5/-60), not sending early reply req@ffff9e5b8beb9200 x1648851837967600/t0(0) o101->fbe5710d-aabe-42ab-2430-c68f46d76b74@10.9.105.35@o2ib4:448/0 lens 1824/3288 e 0 to 0 dl 1576182238 ref 2 fl Interpret:/0/0 rc 0/0
[190486.060515] Lustre: 92239:0:(service.c:1372:ptlrpc_at_send_early_reply()) Skipped 1 previous similar message
[190511.186560] Lustre: fir-MDT0001: Client c9a034bd-61c9-11d3-0f6d-8788e52205ea (at 10.9.105.51@o2ib4) reconnecting
[190511.196818] Lustre: Skipped 4 previous similar messages
[190519.536092] Lustre: 22304:0:(service.c:1372:ptlrpc_at_send_early_reply()) @@@ Couldn't add any time (5/-22), not sending early reply req@ffff9e7da6b74c80 x1649303619441792/t0(0) o101->964f90b2-201f-0e40-0c9b-d52b03dcf753@10.9.105.61@o2ib4:482/0 lens 1792/3288 e 0 to 0 dl 1576182272 ref 2 fl Interpret:/0/0 rc 0/0
[190519.565420] Lustre: 22304:0:(service.c:1372:ptlrpc_at_send_early_reply()) Skipped 3 previous similar messages
[190525.432661] Lustre: 41904:0:(service.c:2165:ptlrpc_server_handle_request()) @@@ Request took longer than estimated (627:1s); client may timeout. req@ffff9e7da6b74c80 x1649303619441792/t416898106308(0) o101->964f90b2-201f-0e40-0c9b-d52b03dcf753@10.9.105.61@o2ib4:482/0 lens 1792/904 e 0 to 0 dl 1576182272 ref 1 fl Complete:/0/0 rc 0/0
[190568.849438] LNet: Service thread pid 22622 was inactive for 601.55s. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes:
[190568.866461] Pid: 22622, comm: mdt03_009 3.10.0-957.27.2.el7_lustre.pl2.x86_64 #1 SMP Thu Nov 7 15:26:16 PST 2019
[190568.876717] Call Trace:
[190568.879275] [] call_rwsem_down_write_failed+0x17/0x30
[190568.886100] [] lod_qos_statfs_update+0x97/0x2b0 [lod]
[190568.892946] [] lod_qos_prep_create+0x16a/0x1890 [lod]
[190568.899768] [] lod_prepare_create+0x215/0x2e0 [lod]
[190568.906432] [] lod_declare_striped_create+0x1ee/0x980 [lod]
[190568.913774] [] lod_declare_create+0x204/0x590 [lod]
[190568.920435] [] mdd_declare_create_object_internal+0xe2/0x2f0 [mdd]
[190568.928419] [] mdd_declare_create+0x4c/0xcb0 [mdd]
[190568.934997] [] mdd_create+0x847/0x14e0 [mdd]
[190568.941041] [] mdt_reint_open+0x224f/0x3240 [mdt]
[190568.947539] [] mdt_reint_rec+0x83/0x210 [mdt]
[190568.953669] [] mdt_reint_internal+0x6e3/0xaf0 [mdt]
[190568.960331] [] mdt_intent_open+0x82/0x3a0 [mdt]
[190568.966642] [] mdt_intent_policy+0x435/0xd80 [mdt]
[190568.973204] [] ldlm_lock_enqueue+0x356/0xa20 [ptlrpc]
[190568.980077] [] ldlm_handle_enqueue0+0xa56/0x15f0 [ptlrpc]
[190568.987271] [] tgt_enqueue+0x62/0x210 [ptlrpc]
[190568.993547] [] tgt_request_handle+0xaea/0x1580 [ptlrpc]
[190569.000581] [] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc]
[190569.008382] [] ptlrpc_main+0xb2c/0x1460 [ptlrpc]
[190569.014810] [] kthread+0xd1/0xe0
[190569.019812] [] ret_from_fork_nospec_begin+0xe/0x21
[190569.026374] [] 0xffffffffffffffff
[190569.031496] LustreError: dumping log to /tmp/lustre-log.1576182316.22622
[190586.225916] Lustre: 22967:0:(service.c:1372:ptlrpc_at_send_early_reply()) @@@ Couldn't add any time (4/4), not sending early reply req@ffff9e8da3edda00 x1649067655704416/t0(0) o101->bb7d080c-8ae8-f7ed-5d33-d34ca54d93de@10.9.108.19@o2ib4:548/0 lens 1784/3288 e 1 to 0 dl 1576182338 ref 2 fl Interpret:/0/0 rc 0/0
[190586.255111] Lustre: 22967:0:(service.c:1372:ptlrpc_at_send_early_reply()) Skipped 1 previous similar message
[190591.378045] LNet: Service thread pid 92202 was inactive for 600.58s. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes:
[190591.395069] Pid: 92202, comm: mdt03_024 3.10.0-957.27.2.el7_lustre.pl2.x86_64 #1 SMP Thu Nov 7 15:26:16 PST 2019
[190591.405331] Call Trace:
[190591.407889] [] call_rwsem_down_write_failed+0x17/0x30
[190591.414714] [] lod_alloc_qos.constprop.18+0x205/0x1840 [lod]
[190591.422184] [] lod_qos_prep_create+0x12d7/0x1890 [lod]
[190591.429095] [] lod_prepare_create+0x215/0x2e0 [lod]
[190591.435765] [] lod_declare_striped_create+0x1ee/0x980 [lod]
[190591.443107] [] lod_declare_create+0x204/0x590 [lod]
[190591.449772] [] mdd_declare_create_object_internal+0xe2/0x2f0 [mdd]
[190591.457730] [] mdd_declare_create+0x4c/0xcb0 [mdd]
[190591.464325] [] mdd_create+0x847/0x14e0 [mdd]
[190591.470364] [] mdt_reint_open+0x224f/0x3240 [mdt]
[190591.476871] [] mdt_reint_rec+0x83/0x210 [mdt]
[190591.483020] [] mdt_reint_internal+0x6e3/0xaf0 [mdt]
[190591.489711] [] mdt_intent_open+0x82/0x3a0 [mdt]
[190591.496026] [] mdt_intent_policy+0x435/0xd80 [mdt]
[190591.502613] [] ldlm_lock_enqueue+0x356/0xa20 [ptlrpc]
[190591.509472] [] ldlm_handle_enqueue0+0xa56/0x15f0 [ptlrpc]
[190591.516687] [] tgt_enqueue+0x62/0x210 [ptlrpc]
[190591.522938] [] tgt_request_handle+0xaea/0x1580 [ptlrpc]
[190591.529997] [] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc]
[190591.537801] [] ptlrpc_main+0xb2c/0x1460 [ptlrpc]
[190591.544230] [] kthread+0xd1/0xe0
[190591.549234] [] ret_from_fork_nospec_begin+0xe/0x21
[190591.555806] [] 0xffffffffffffffff
[190591.560918] LustreError: dumping log to /tmp/lustre-log.1576182339.92202
[190591.839501] Lustre: fir-MDT0001: Connection restored to (at 10.9.108.19@o2ib4)
[190591.846903] Lustre: Skipped 9 previous similar messages
[190613.906656] LNet: Service thread pid 22598 was inactive for 588.49s. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes:
[190613.923675] Pid: 22598, comm: mdt02_009 3.10.0-957.27.2.el7_lustre.pl2.x86_64 #1 SMP Thu Nov 7 15:26:16 PST 2019
[190613.933933] Call Trace:
[190613.936481] [] call_rwsem_down_write_failed+0x17/0x30
[190613.943301] [] lod_alloc_qos.constprop.18+0x205/0x1840 [lod]
[190613.950762] [] lod_qos_prep_create+0x12d7/0x1890 [lod]
[190613.957674] [] lod_declare_instantiate_components+0x9a/0x1d0 [lod]
[190613.965654] [] lod_declare_layout_change+0xb65/0x10f0 [lod]
[190613.973008] [] mdd_declare_layout_change+0x62/0x120 [mdd]
[190613.980186] [] mdd_layout_change+0x882/0x1000 [mdd]
[190613.986848] [] mdt_layout_change+0x337/0x430 [mdt]
[190613.993435] [] mdt_intent_layout+0x7ee/0xcc0 [mdt]
[190614.000008] [] mdt_intent_policy+0x435/0xd80 [mdt]
[190614.006592] [] ldlm_lock_enqueue+0x356/0xa20 [ptlrpc]
[190614.013450] [] ldlm_handle_enqueue0+0xa56/0x15f0 [ptlrpc]
[190614.020645] [] tgt_enqueue+0x62/0x210 [ptlrpc]
[190614.026893] [] tgt_request_handle+0xaea/0x1580 [ptlrpc]
[190614.033922] [] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc]
[190614.041722] [] ptlrpc_main+0xb2c/0x1460 [ptlrpc]
[190614.048143] [] kthread+0xd1/0xe0
[190614.053145] [] ret_from_fork_nospec_begin+0xe/0x21
[190614.059706] [] 0xffffffffffffffff
[190614.064821] LustreError: dumping log to /tmp/lustre-log.1576182361.22598
[190615.954708] LNet: Service thread pid 22595 was inactive for 588.59s. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes:
[190615.971733] Pid: 22595, comm: mdt02_008 3.10.0-957.27.2.el7_lustre.pl2.x86_64 #1 SMP Thu Nov 7 15:26:16 PST 2019
[190615.981992] Call Trace:
[190615.984557] [] call_rwsem_down_write_failed+0x17/0x30
[190615.991382] [] lod_alloc_qos.constprop.18+0x205/0x1840 [lod]
[190615.998839] [] lod_qos_prep_create+0x12d7/0x1890 [lod]
[190616.005753] [] lod_prepare_create+0x215/0x2e0 [lod]
[190616.012436] [] lod_declare_striped_create+0x1ee/0x980 [lod]
[190616.019786] [] lod_declare_create+0x204/0x590 [lod]
[190616.026460] [] mdd_declare_create_object_internal+0xe2/0x2f0 [mdd]
[190616.034420] [] mdd_declare_create+0x4c/0xcb0 [mdd]
[190616.040994] [] mdd_create+0x847/0x14e0 [mdd]
[190616.047038] [] mdt_reint_open+0x224f/0x3240 [mdt]
[190616.053546] [] mdt_reint_rec+0x83/0x210 [mdt]
[190616.059682] [] mdt_reint_internal+0x6e3/0xaf0 [mdt]
[190616.066360] [] mdt_intent_open+0x82/0x3a0 [mdt]
[190616.072676] [] mdt_intent_policy+0x435/0xd80 [mdt]
[190616.079265] [] ldlm_lock_enqueue+0x356/0xa20 [ptlrpc]
[190616.086130] [] ldlm_handle_enqueue0+0xa56/0x15f0 [ptlrpc]
[190616.093351] [] tgt_enqueue+0x62/0x210 [ptlrpc]
[190616.099604] [] tgt_request_handle+0xaea/0x1580 [ptlrpc]
[190616.106638] [] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc]
[190616.114441] [] ptlrpc_main+0xb2c/0x1460 [ptlrpc]
[190616.120877] [] kthread+0xd1/0xe0
[190616.125881] [] ret_from_fork_nospec_begin+0xe/0x21
[190616.132461] [] 0xffffffffffffffff
[190616.137585] LustreError: dumping log to /tmp/lustre-log.1576182363.22595
[190616.144897] Pid: 93285, comm: mdt02_024 3.10.0-957.27.2.el7_lustre.pl2.x86_64 #1 SMP Thu Nov 7 15:26:16 PST 2019
[190616.155156] Call Trace:
[190616.157706] [] call_rwsem_down_write_failed+0x17/0x30
[190616.164524] [] lod_alloc_qos.constprop.18+0x205/0x1840 [lod]
[190616.171966] [] lod_qos_prep_create+0x12d7/0x1890 [lod]
[190616.178876] [] lod_prepare_create+0x215/0x2e0 [lod]
[190616.185538] [] lod_declare_striped_create+0x1ee/0x980 [lod]
[190616.192882] [] lod_declare_create+0x204/0x590 [lod]
[190616.199543] [] mdd_declare_create_object_internal+0xe2/0x2f0 [mdd]
[190616.207494] [] mdd_declare_create+0x4c/0xcb0 [mdd]
[190616.214074] [] mdd_create+0x847/0x14e0 [mdd]
[190616.220113] [] mdt_reint_open+0x224f/0x3240 [mdt]
[190616.226611] [] mdt_reint_rec+0x83/0x210 [mdt]
[190616.232766] [] mdt_reint_internal+0x6e3/0xaf0 [mdt]
[190616.239430] [] mdt_intent_open+0x82/0x3a0 [mdt]
[190616.245733] [] mdt_intent_policy+0x435/0xd80 [mdt]
[190616.252309] [] ldlm_lock_enqueue+0x356/0xa20 [ptlrpc]
[190616.259151] [] ldlm_handle_enqueue0+0xa56/0x15f0 [ptlrpc]
[190616.266349] [] tgt_enqueue+0x62/0x210 [ptlrpc]
[190616.272593] [] tgt_request_handle+0xaea/0x1580 [ptlrpc]
[190616.279626] [] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc]
[190616.287430] [] ptlrpc_main+0xb2c/0x1460 [ptlrpc]
[190616.293857] [] kthread+0xd1/0xe0
[190616.298869] [] ret_from_fork_nospec_begin+0xe/0x21
[190616.305444] [] 0xffffffffffffffff
[190622.098875] LNet: Service thread pid 23332 was inactive for 602.33s. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one.
[190622.111824] LustreError: dumping log to /tmp/lustre-log.1576182369.23332
[190634.387219] LNet: Service thread pid 92263 was inactive for 601.65s. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one.
[190634.400170] LustreError: dumping log to /tmp/lustre-log.1576182382.92263
[190661.011946] LNet: Service thread pid 93477 was inactive for 588.62s.
Watchdog stack traces are limited to 3 per 300 seconds, skipping this one. [190661.024894] LustreError: dumping log to /tmp/lustre-log.1576182408.93477 [190673.944299] Lustre: fir-MDT0001: Client 970bc850-7648-f96d-fc2b-8b8c64ce0bd4 (at 10.9.101.52@o2ib4) reconnecting [190673.954559] Lustre: Skipped 17 previous similar messages [190720.373584] Lustre: 22317:0:(service.c:1372:ptlrpc_at_send_early_reply()) @@@ Couldn't add any time (5/5), not sending early reply req@ffff9e5c37e1d100 x1648841369485664/t0(0) o101->80c2d2b5-6593-b875-0210-0fba1ee83aaf@10.9.105.34@o2ib4:683/0 lens 1824/3288 e 2 to 0 dl 1576182473 ref 2 fl Interpret:/0/0 rc 0/0 [190720.402742] Lustre: 22317:0:(service.c:1372:ptlrpc_at_send_early_reply()) Skipped 16 previous similar messages [190725.438176] LNet: Service thread pid 22622 completed after 758.13s. This indicates the system was overloaded (too many service threads, or there were not enough hardware resources). [190725.438732] LustreError: 22315:0:(ldlm_request.c:129:ldlm_expired_completion_wait()) ### lock timed out (enqueued at 1576182173, 300s ago); not entering recovery in server code, just going back to sleep ns: mdt-fir-MDT0001_UUID lock: ffff9e5b7bbf9d40/0x91615908337e85b3 lrc: 3/1,0 mode: --/PR res: [0x240038caa:0x2891:0x0].0x0 bits 0x13/0x0 rrc: 28 type: IBT flags: 0x40210400000020 nid: local remote: 0x0 expref: -99 pid: 22315 timeout: 0 lvb_type: 0 [190725.438735] LustreError: 22315:0:(ldlm_request.c:129:ldlm_expired_completion_wait()) Skipped 23 previous similar messages [190726.549737] LNet: Service thread pid 22301 was inactive for 601.14s. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one. [190726.562682] LustreError: dumping log to /tmp/lustre-log.1576182474.22301 [190732.693908] LNet: Service thread pid 92024 was inactive for 607.29s. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one. [190732.706860] LNet: Skipped 1 previous similar message [190732.711922] LustreError: dumping log to /tmp/lustre-log.1576182480.92024 [190753.174464] LNet: Service thread pid 92023 was inactive for 607.91s. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one. [190753.187411] LNet: Skipped 6 previous similar messages [190753.192566] LustreError: dumping log to /tmp/lustre-log.1576182500.92023 [190825.440865] LNet: Service thread pid 22598 completed after 800.02s. This indicates the system was overloaded (too many service threads, or there were not enough hardware resources). [190825.457144] LNet: Skipped 3 previous similar messages [190833.048654] LNet: Service thread pid 43560 was inactive for 707.60s. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one. [190833.061601] LustreError: dumping log to /tmp/lustre-log.1576182580.43560 [190843.288936] LustreError: dumping log to /tmp/lustre-log.1576182591.22603 [190849.433103] LustreError: dumping log to /tmp/lustre-log.1576182597.22311 [190853.529213] LustreError: dumping log to /tmp/lustre-log.1576182601.92718 [190865.817550] LNet: Service thread pid 22248 was inactive for 707.52s. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one. [190865.830504] LNet: Skipped 11 previous similar messages [190865.835742] LustreError: dumping log to /tmp/lustre-log.1576182613.22248 [190871.961714] LNet: Service thread pid 95968 was inactive for 746.51s. The thread might be hung, or it might only be slow and will resume later. 
Dumping the stack trace for debugging purposes: [190871.978739] LNet: Skipped 1 previous similar message [190871.983798] Pid: 95968, comm: mdt02_033 3.10.0-957.27.2.el7_lustre.pl2.x86_64 #1 SMP Thu Nov 7 15:26:16 PST 2019 [190871.994074] Call Trace: [190871.996624] [] call_rwsem_down_write_failed+0x17/0x30 [190872.003450] [] lod_alloc_qos.constprop.18+0x205/0x1840 [lod] [190872.010904] [] lod_qos_prep_create+0x12d7/0x1890 [lod] [190872.017822] [] lod_prepare_create+0x215/0x2e0 [lod] [190872.024492] [] lod_declare_striped_create+0x1ee/0x980 [lod] [190872.031836] [] lod_declare_create+0x204/0x590 [lod] [190872.038498] [] mdd_declare_create_object_internal+0xe2/0x2f0 [mdd] [190872.046473] [] mdd_declare_create+0x4c/0xcb0 [mdd] [190872.053050] [] mdd_create+0x847/0x14e0 [mdd] [190872.059094] [] mdt_reint_open+0x224f/0x3240 [mdt] [190872.065591] [] mdt_reint_rec+0x83/0x210 [mdt] [190872.071729] [] mdt_reint_internal+0x6e3/0xaf0 [mdt] [190872.078390] [] mdt_intent_open+0x82/0x3a0 [mdt] [190872.084707] [] mdt_intent_policy+0x435/0xd80 [mdt] [190872.091298] [] ldlm_lock_enqueue+0x356/0xa20 [ptlrpc] [190872.098144] [] ldlm_handle_enqueue0+0xa56/0x15f0 [ptlrpc] [190872.105354] [] tgt_enqueue+0x62/0x210 [ptlrpc] [190872.111620] [] tgt_request_handle+0xaea/0x1580 [ptlrpc] [190872.118665] [] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc] [190872.126470] [] ptlrpc_main+0xb2c/0x1460 [ptlrpc] [190872.132896] [] kthread+0xd1/0xe0 [190872.137900] [] ret_from_fork_nospec_begin+0xe/0x21 [190872.144475] [] 0xffffffffffffffff [190872.149585] LustreError: dumping log to /tmp/lustre-log.1576182619.95968 [190872.156849] Pid: 22617, comm: mdt02_016 3.10.0-957.27.2.el7_lustre.pl2.x86_64 #1 SMP Thu Nov 7 15:26:16 PST 2019 [190872.167124] Call Trace: [190872.169671] [] ldlm_completion_ast+0x4e5/0x860 [ptlrpc] [190872.176692] [] ldlm_cli_enqueue_local+0x231/0x830 [ptlrpc] [190872.183993] [] mdt_object_local_lock+0x50b/0xb20 [mdt] [190872.190899] [] mdt_object_lock_internal+0x70/0x360 [mdt] [190872.197994] [] mdt_getattr_name_lock+0x90a/0x1c30 [mdt] [190872.204990] [] mdt_intent_getattr+0x2b5/0x480 [mdt] [190872.211650] [] mdt_intent_policy+0x435/0xd80 [mdt] [190872.218207] [] ldlm_lock_enqueue+0x356/0xa20 [ptlrpc] [190872.225060] [] ldlm_handle_enqueue0+0xa56/0x15f0 [ptlrpc] [190872.232247] [] tgt_enqueue+0x62/0x210 [ptlrpc] [190872.238503] [] tgt_request_handle+0xaea/0x1580 [ptlrpc] [190872.245542] [] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc] [190872.253359] [] ptlrpc_main+0xb2c/0x1460 [ptlrpc] [190872.259777] [] kthread+0xd1/0xe0 [190872.264780] [] ret_from_fork_nospec_begin+0xe/0x21 [190872.271343] [] 0xffffffffffffffff [190872.276450] Pid: 22609, comm: mdt02_013 3.10.0-957.27.2.el7_lustre.pl2.x86_64 #1 SMP Thu Nov 7 15:26:16 PST 2019 [190872.286703] Call Trace: [190872.289247] [] call_rwsem_down_write_failed+0x17/0x30 [190872.296062] [] lod_alloc_qos.constprop.18+0x205/0x1840 [lod] [190872.303507] [] lod_qos_prep_create+0x12d7/0x1890 [lod] [190872.310415] [] lod_prepare_create+0x215/0x2e0 [lod] [190872.317077] [] lod_declare_striped_create+0x1ee/0x980 [lod] [190872.324422] [] lod_declare_create+0x204/0x590 [lod] [190872.331070] [] mdd_declare_create_object_internal+0xe2/0x2f0 [mdd] [190872.339031] [] mdd_declare_create+0x4c/0xcb0 [mdd] [190872.345593] [] mdd_create+0x847/0x14e0 [mdd] [190872.351650] [] mdt_reint_open+0x224f/0x3240 [mdt] [190872.358136] [] mdt_reint_rec+0x83/0x210 [mdt] [190872.364276] [] mdt_reint_internal+0x6e3/0xaf0 [mdt] [190872.370936] [] mdt_intent_open+0x82/0x3a0 [mdt] 
[190872.377238] [] mdt_intent_policy+0x435/0xd80 [mdt] [190872.383812] [] ldlm_lock_enqueue+0x356/0xa20 [ptlrpc] [190872.390653] [] ldlm_handle_enqueue0+0xa56/0x15f0 [ptlrpc] [190872.397852] [] tgt_enqueue+0x62/0x210 [ptlrpc] [190872.404096] [] tgt_request_handle+0xaea/0x1580 [ptlrpc] [190872.411130] [] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc] [190872.418935] [] ptlrpc_main+0xb2c/0x1460 [ptlrpc] [190872.425362] [] kthread+0xd1/0xe0 [190872.430366] [] ret_from_fork_nospec_begin+0xe/0x21 [190872.436933] [] 0xffffffffffffffff [190884.250051] LNet: Service thread pid 22258 was inactive for 1058.84s. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes: [190884.267156] LNet: Skipped 2 previous similar messages [190884.272305] Pid: 22258, comm: mdt03_002 3.10.0-957.27.2.el7_lustre.pl2.x86_64 #1 SMP Thu Nov 7 15:26:16 PST 2019 [190884.282576] Call Trace: [190884.285126] [] call_rwsem_down_write_failed+0x17/0x30 [190884.291949] [] lod_alloc_qos.constprop.18+0x205/0x1840 [lod] [190884.299398] [] lod_qos_prep_create+0x12d7/0x1890 [lod] [190884.306309] [] lod_prepare_create+0x215/0x2e0 [lod] [190884.312968] [] lod_declare_striped_create+0x1ee/0x980 [lod] [190884.320316] [] lod_declare_create+0x204/0x590 [lod] [190884.326990] [] mdd_declare_create_object_internal+0xe2/0x2f0 [mdd] [190884.334945] [] mdd_declare_create+0x4c/0xcb0 [mdd] [190884.341518] [] mdd_create+0x847/0x14e0 [mdd] [190884.347564] [] mdt_reint_open+0x224f/0x3240 [mdt] [190884.354059] [] mdt_reint_rec+0x83/0x210 [mdt] [190884.360191] [] mdt_reint_internal+0x6e3/0xaf0 [mdt] [190884.366837] [] mdt_intent_open+0x82/0x3a0 [mdt] [190884.373139] [] mdt_intent_policy+0x435/0xd80 [mdt] [190884.379712] [] ldlm_lock_enqueue+0x356/0xa20 [ptlrpc] [190884.386562] [] ldlm_handle_enqueue0+0xa56/0x15f0 [ptlrpc] [190884.393785] [] tgt_enqueue+0x62/0x210 [ptlrpc] [190884.400031] [] tgt_request_handle+0xaea/0x1580 [ptlrpc] [190884.407068] [] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc] [190884.414870] [] ptlrpc_main+0xb2c/0x1460 [ptlrpc] [190884.421298] [] kthread+0xd1/0xe0 [190884.426302] [] ret_from_fork_nospec_begin+0xe/0x21 [190884.432878] [] 0xffffffffffffffff [190884.437986] LustreError: dumping log to /tmp/lustre-log.1576182632.22258 [190884.445300] Pid: 22604, comm: mdt00_011 3.10.0-957.27.2.el7_lustre.pl2.x86_64 #1 SMP Thu Nov 7 15:26:16 PST 2019 [190884.455568] Call Trace: [190884.458116] [] call_rwsem_down_write_failed+0x17/0x30 [190884.464932] [] lod_qos_statfs_update+0x97/0x2b0 [lod] [190884.471764] [] lod_qos_prep_create+0x16a/0x1890 [lod] [190884.478587] [] lod_declare_instantiate_components+0x9a/0x1d0 [lod] [190884.486559] [] lod_declare_layout_change+0xb65/0x10f0 [lod] [190884.493915] [] mdd_declare_layout_change+0x62/0x120 [mdd] [190884.501103] [] mdd_layout_change+0x882/0x1000 [mdd] [190884.507753] [] mdt_layout_change+0x337/0x430 [mdt] [190884.514326] [] mdt_intent_layout+0x7ee/0xcc0 [mdt] [190884.520893] [] mdt_intent_policy+0x435/0xd80 [mdt] [190884.527478] [] ldlm_lock_enqueue+0x356/0xa20 [ptlrpc] [190884.534317] [] ldlm_handle_enqueue0+0xa56/0x15f0 [ptlrpc] [190884.541517] [] tgt_enqueue+0x62/0x210 [ptlrpc] [190884.547759] [] tgt_request_handle+0xaea/0x1580 [ptlrpc] [190884.554792] [] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc] [190884.562596] [] ptlrpc_main+0xb2c/0x1460 [ptlrpc] [190884.569024] [] kthread+0xd1/0xe0 [190884.574019] [] ret_from_fork_nospec_begin+0xe/0x21 [190884.580587] [] 0xffffffffffffffff [190888.346166] LustreError: dumping 
log to /tmp/lustre-log.1576182636.22626 [190896.538389] LustreError: dumping log to /tmp/lustre-log.1576182644.95884 [190904.730613] LustreError: dumping log to /tmp/lustre-log.1576182652.22255 [190906.778673] LustreError: dumping log to /tmp/lustre-log.1576182654.41898 [190908.826727] LustreError: dumping log to /tmp/lustre-log.1576182656.22620 [190925.443833] LNet: Service thread pid 22609 completed after 799.99s. This indicates the system was overloaded (too many service threads, or there were not enough hardware resources). [190925.460082] LNet: Skipped 31 previous similar messages [190945.691760] LNet: Service thread pid 41905 was inactive for 607.98s. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one. [190945.704705] LNet: Skipped 8 previous similar messages [190945.709858] LustreError: dumping log to /tmp/lustre-log.1576182693.41905 [190993.821055] Lustre: 93235:0:(service.c:1372:ptlrpc_at_send_early_reply()) @@@ Couldn't add any time (5/5), not sending early reply req@ffff9e5b6aacc380 x1649405117742992/t0(0) o101->3a18a690-f6fb-7d4d-c179-697da5c59619@10.9.116.10@o2ib4:201/0 lens 1784/3288 e 1 to 0 dl 1576182746 ref 2 fl Interpret:/0/0 rc 0/0 [190993.850216] Lustre: 93235:0:(service.c:1372:ptlrpc_at_send_early_reply()) Skipped 24 previous similar messages [190999.826932] Lustre: fir-MDT0001: Client 3a18a690-f6fb-7d4d-c179-697da5c59619 (at 10.9.116.10@o2ib4) reconnecting [190999.837197] Lustre: Skipped 20 previous similar messages [190999.842623] Lustre: fir-MDT0001: Connection restored to (at 10.9.116.10@o2ib4) [190999.850021] Lustre: Skipped 35 previous similar messages [191025.465281] LNet: Service thread pid 41905 completed after 687.75s. This indicates the system was overloaded (too many service threads, or there were not enough hardware resources). [191025.465442] Lustre: 92717:0:(service.c:2165:ptlrpc_server_handle_request()) @@@ Request took longer than estimated (600:27s); client may timeout. req@ffff9e5b6aacc380 x1649405117742992/t416898146432(0) o101->3a18a690-f6fb-7d4d-c179-697da5c59619@10.9.116.10@o2ib4:201/0 lens 1784/952 e 1 to 0 dl 1576182746 ref 1 fl Complete:/0/0 rc 0/0 [191025.511203] LNet: Skipped 2 previous similar messages [191052.190638] LustreError: dumping log to /tmp/lustre-log.1576182799.93464 [191125.468850] Lustre: 93477:0:(service.c:2165:ptlrpc_server_handle_request()) @@@ Request took longer than estimated (600:453s); client may timeout. req@ffff9e77a7af3180 x1649420595670320/t416898158831(0) o101->970bc850-7648-f96d-fc2b-8b8c64ce0bd4@10.9.101.52@o2ib4:630/0 lens 1808/904 e 2 to 0 dl 1576182420 ref 1 fl Complete:/0/0 rc 0/0 [191125.498917] LNet: Service thread pid 93477 completed after 1053.10s. This indicates the system was overloaded (too many service threads, or there were not enough hardware resources). 
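(Aside: each "LustreError: dumping log to /tmp/lustre-log.*" entry above refers to a binary Lustre debug dump written by the watchdog. A minimal sketch of decoding one into readable text with lctl, assuming the dump file is still present on the MDS; the .txt output path is an arbitrary choice, not something named in this log:)

  # Convert a binary Lustre debug dump (named in the watchdog messages above)
  # to plain text; "lctl debug_file <input> [output]" is the standard decoder.
  lctl debug_file /tmp/lustre-log.1576182316.22622 /tmp/lustre-log.1576182316.22622.txt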
[191224.396603] SysRq : Trigger a crash
[191224.400249] BUG: unable to handle kernel NULL pointer dereference at (null)
[191224.408215] IP: [] sysrq_handle_crash+0x16/0x20
[191224.414430] PGD 3f1a652067 PUD 3f6727a067 PMD 0
[191224.419213] Oops: 0002 [#1] SMP
[191224.422585] Modules linked in: osp(OE) mdd(OE) lod(OE) mdt(OE) lfsck(OE) mgc(OE) osd_ldiskfs(OE) lquota(OE) ldiskfs(OE) lustre(OE) lmv(OE) mdc(OE) osc(OE) lov(OE) fid(OE) fld(OE) ko2iblnd(OE) ptlrpc(OE) obdclass(OE) lnet(OE) libcfs(OE) rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver nfs lockd grace fscache rdma_ucm(OE) ib_ucm(OE) rdma_cm(OE) iw_cm(OE) ib_ipoib(OE) ib_cm(OE) ib_umad(OE) mlx4_en(OE) mlx4_ib(OE) mlx4_core(OE) dell_rbu sunrpc vfat fat dm_round_robin amd64_edac_mod edac_mce_amd kvm_amd kvm ses irqbypass crc32_pclmul ghash_clmulni_intel dm_multipath aesni_intel enclosure ipmi_si lrw dcdbas gf128mul glue_helper ablk_helper dm_mod cryptd sg ipmi_devintf pcspkr ccp i2c_piix4 k10temp ipmi_msghandler acpi_power_meter ip_tables ext4 mbcache jbd2 sd_mod crc_t10dif crct10dif_generic mlx5_ib(OE)
[191224.494953] ib_uverbs(OE) ib_core(OE) i2c_algo_bit mlx5_core(OE) drm_kms_helper ahci mlxfw(OE) devlink syscopyarea sysfillrect mpt3sas(OE) sysimgblt fb_sys_fops ttm crct10dif_pclmul crct10dif_common libahci mlx_compat(OE) drm tg3 libata crc32c_intel raid_class ptp megaraid_sas scsi_transport_sas drm_panel_orientation_quirks pps_core
[191224.523879] CPU: 17 PID: 97428 Comm: bash Kdump: loaded Tainted: G OE ------------ 3.10.0-957.27.2.el7_lustre.pl2.x86_64 #1
[191224.536206] Hardware name: Dell Inc. PowerEdge R6415/065PKD, BIOS 1.10.6 08/15/2019
[191224.543946] task: ffff9e7da2444100 ti: ffff9e779e61c000 task.ti: ffff9e779e61c000
[191224.551512] RIP: 0010:[] [] sysrq_handle_crash+0x16/0x20
[191224.560145] RSP: 0018:ffff9e779e61fe58 EFLAGS: 00010246
[191224.565546] RAX: ffffffff9de64430 RBX: ffffffff9e6e4f80 RCX: 0000000000000000
[191224.572764] RDX: 0000000000000000 RSI: ffff9e6dbf713898 RDI: 0000000000000063
[191224.579985] RBP: ffff9e779e61fe58 R08: ffffffff9e9e38bc R09: ffffffff9ea4f85b
[191224.587205] R10: 00000000000011a5 R11: 00000000000011a4 R12: 0000000000000063
[191224.594423] R13: 0000000000000000 R14: 0000000000000007 R15: 0000000000000000
[191224.601643] FS: 00007fa59d9c7740(0000) GS:ffff9e6dbf700000(0000) knlGS:0000000000000000
[191224.609817] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[191224.615649] CR2: 0000000000000000 CR3: 0000003f13f26000 CR4: 00000000003407e0
[191224.622869] Call Trace:
[191224.625411] [] __handle_sysrq+0x10d/0x170
[191224.631155] [] write_sysrq_trigger+0x28/0x40
[191224.637165] [] proc_reg_write+0x40/0x80
[191224.642735] [] vfs_write+0xc0/0x1f0
[191224.647960] [] SyS_write+0x7f/0xf0
[191224.653102] [] system_call_fastpath+0x22/0x27
[191224.659192] Code: eb 9b 45 01 f4 45 39 65 34 75 e5 4c 89 ef e8 e2 f7 ff ff eb db 66 66 66 66 90 55 48 89 e5 c7 05 91 31 7e 00 01 00 00 00 0f ae f8 04 25 00 00 00 00 01 5d c3 66 66 66 66 90 55 31 c0 c7 05 0e
[191224.679801] RIP [] sysrq_handle_crash+0x16/0x20
[191224.686093] RSP
[191224.689671] CR2: 0000000000000000
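(Aside: this final oops is not a spontaneous crash. The call trace runs through write_sysrq_trigger and sysrq_handle_crash from a bash process, and the CPU line shows "Kdump: loaded", which matches an operator deliberately crashing the node to capture a vmcore of the hung MDT threads. A sketch of the assumed trigger, run as root on the MDS:)

  # SysRq 'c' makes sysrq_handle_crash dereference a NULL pointer on purpose;
  # with kdump armed, the kernel boots the capture kernel and saves a vmcore
  # (typically under /var/crash) for post-mortem analysis.
  echo c > /proc/sysrq-trigger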