[ 0.000000] Initializing cgroup subsys cpuset [ 0.000000] Initializing cgroup subsys cpu [ 0.000000] Initializing cgroup subsys cpuacct [ 0.000000] Linux version 3.10.0-957.27.2.el7_lustre.pl2.x86_64 (sthiell@oak-rbh01) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-39) (GCC) ) #1 SMP Thu Nov 7 15:26:16 PST 2019 [ 0.000000] Command line: BOOT_IMAGE=/boot/vmlinuz-3.10.0-957.27.2.el7_lustre.pl2.x86_64 root=UUID=13d7db90-76a6-4160-92a3-3d6e156edf61 ro crashkernel=auto nomodeset console=ttyS0,115200 LANG=en_US.UTF-8 [ 0.000000] e820: BIOS-provided physical RAM map: [ 0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000008efff] usable [ 0.000000] BIOS-e820: [mem 0x000000000008f000-0x000000000008ffff] ACPI NVS [ 0.000000] BIOS-e820: [mem 0x0000000000090000-0x000000000009ffff] usable [ 0.000000] BIOS-e820: [mem 0x0000000000100000-0x000000004f722fff] usable [ 0.000000] BIOS-e820: [mem 0x000000004f723000-0x000000005772bfff] reserved [ 0.000000] BIOS-e820: [mem 0x000000005772c000-0x000000006cacefff] usable [ 0.000000] BIOS-e820: [mem 0x000000006cacf000-0x000000006efcefff] reserved [ 0.000000] BIOS-e820: [mem 0x000000006efcf000-0x000000006fdfefff] ACPI NVS [ 0.000000] BIOS-e820: [mem 0x000000006fdff000-0x000000006fffefff] ACPI data [ 0.000000] BIOS-e820: [mem 0x000000006ffff000-0x000000006fffffff] usable [ 0.000000] BIOS-e820: [mem 0x0000000070000000-0x000000008fffffff] reserved [ 0.000000] BIOS-e820: [mem 0x00000000fec10000-0x00000000fec10fff] reserved [ 0.000000] BIOS-e820: [mem 0x00000000fed80000-0x00000000fed80fff] reserved [ 0.000000] BIOS-e820: [mem 0x0000000100000000-0x000000107f37ffff] usable [ 0.000000] BIOS-e820: [mem 0x000000107f380000-0x000000107fffffff] reserved [ 0.000000] BIOS-e820: [mem 0x0000001080000000-0x000000207ff7ffff] usable [ 0.000000] BIOS-e820: [mem 0x000000207ff80000-0x000000207fffffff] reserved [ 0.000000] BIOS-e820: [mem 0x0000002080000000-0x000000307ff7ffff] usable [ 0.000000] BIOS-e820: [mem 0x000000307ff80000-0x000000307fffffff] reserved [ 0.000000] BIOS-e820: [mem 0x0000003080000000-0x000000407ff7ffff] usable [ 0.000000] BIOS-e820: [mem 0x000000407ff80000-0x000000407fffffff] reserved [ 0.000000] NX (Execute Disable) protection: active [ 0.000000] e820: update [mem 0x373d9020-0x3747ac5f] usable ==> usable [ 0.000000] e820: update [mem 0x373a7020-0x373d8e5f] usable ==> usable [ 0.000000] e820: update [mem 0x37375020-0x373a6e5f] usable ==> usable [ 0.000000] e820: update [mem 0x3736c020-0x3737405f] usable ==> usable [ 0.000000] e820: update [mem 0x37346020-0x3736bc5f] usable ==> usable [ 0.000000] e820: update [mem 0x3732d020-0x3734565f] usable ==> usable [ 0.000000] extended physical RAM map: [ 0.000000] reserve setup_data: [mem 0x0000000000000000-0x000000000008efff] usable [ 0.000000] reserve setup_data: [mem 0x000000000008f000-0x000000000008ffff] ACPI NVS [ 0.000000] reserve setup_data: [mem 0x0000000000090000-0x000000000009ffff] usable [ 0.000000] reserve setup_data: [mem 0x0000000000100000-0x000000003732d01f] usable [ 0.000000] reserve setup_data: [mem 0x000000003732d020-0x000000003734565f] usable [ 0.000000] reserve setup_data: [mem 0x0000000037345660-0x000000003734601f] usable [ 0.000000] reserve setup_data: [mem 0x0000000037346020-0x000000003736bc5f] usable [ 0.000000] reserve setup_data: [mem 0x000000003736bc60-0x000000003736c01f] usable [ 0.000000] reserve setup_data: [mem 0x000000003736c020-0x000000003737405f] usable [ 0.000000] reserve setup_data: [mem 0x0000000037374060-0x000000003737501f] usable [ 0.000000] reserve setup_data: [mem 
0x0000000037375020-0x00000000373a6e5f] usable [ 0.000000] reserve setup_data: [mem 0x00000000373a6e60-0x00000000373a701f] usable [ 0.000000] reserve setup_data: [mem 0x00000000373a7020-0x00000000373d8e5f] usable [ 0.000000] reserve setup_data: [mem 0x00000000373d8e60-0x00000000373d901f] usable [ 0.000000] reserve setup_data: [mem 0x00000000373d9020-0x000000003747ac5f] usable [ 0.000000] reserve setup_data: [mem 0x000000003747ac60-0x000000004f722fff] usable [ 0.000000] reserve setup_data: [mem 0x000000004f723000-0x000000005772bfff] reserved [ 0.000000] reserve setup_data: [mem 0x000000005772c000-0x000000006cacefff] usable [ 0.000000] reserve setup_data: [mem 0x000000006cacf000-0x000000006efcefff] reserved [ 0.000000] reserve setup_data: [mem 0x000000006efcf000-0x000000006fdfefff] ACPI NVS [ 0.000000] reserve setup_data: [mem 0x000000006fdff000-0x000000006fffefff] ACPI data [ 0.000000] reserve setup_data: [mem 0x000000006ffff000-0x000000006fffffff] usable [ 0.000000] reserve setup_data: [mem 0x0000000070000000-0x000000008fffffff] reserved [ 0.000000] reserve setup_data: [mem 0x00000000fec10000-0x00000000fec10fff] reserved [ 0.000000] reserve setup_data: [mem 0x00000000fed80000-0x00000000fed80fff] reserved [ 0.000000] reserve setup_data: [mem 0x0000000100000000-0x000000107f37ffff] usable [ 0.000000] reserve setup_data: [mem 0x000000107f380000-0x000000107fffffff] reserved [ 0.000000] reserve setup_data: [mem 0x0000001080000000-0x000000207ff7ffff] usable [ 0.000000] reserve setup_data: [mem 0x000000207ff80000-0x000000207fffffff] reserved [ 0.000000] reserve setup_data: [mem 0x0000002080000000-0x000000307ff7ffff] usable [ 0.000000] reserve setup_data: [mem 0x000000307ff80000-0x000000307fffffff] reserved [ 0.000000] reserve setup_data: [mem 0x0000003080000000-0x000000407ff7ffff] usable [ 0.000000] reserve setup_data: [mem 0x000000407ff80000-0x000000407fffffff] reserved [ 0.000000] efi: EFI v2.50 by Dell Inc. 
[ 0.000000] efi: ACPI=0x6fffe000 ACPI 2.0=0x6fffe014 SMBIOS=0x6eab5000 SMBIOS 3.0=0x6eab3000 [ 0.000000] efi: mem00: type=3, attr=0xf, range=[0x0000000000000000-0x0000000000001000) (0MB) [ 0.000000] efi: mem01: type=2, attr=0xf, range=[0x0000000000001000-0x0000000000002000) (0MB) [ 0.000000] efi: mem02: type=7, attr=0xf, range=[0x0000000000002000-0x0000000000010000) (0MB) [ 0.000000] efi: mem03: type=3, attr=0xf, range=[0x0000000000010000-0x0000000000014000) (0MB) [ 0.000000] efi: mem04: type=7, attr=0xf, range=[0x0000000000014000-0x0000000000063000) (0MB) [ 0.000000] efi: mem05: type=3, attr=0xf, range=[0x0000000000063000-0x000000000008f000) (0MB) [ 0.000000] efi: mem06: type=10, attr=0xf, range=[0x000000000008f000-0x0000000000090000) (0MB) [ 0.000000] efi: mem07: type=3, attr=0xf, range=[0x0000000000090000-0x00000000000a0000) (0MB) [ 0.000000] efi: mem08: type=4, attr=0xf, range=[0x0000000000100000-0x0000000000120000) (0MB) [ 0.000000] efi: mem09: type=7, attr=0xf, range=[0x0000000000120000-0x0000000000c00000) (10MB) [ 0.000000] efi: mem10: type=3, attr=0xf, range=[0x0000000000c00000-0x0000000001000000) (4MB) [ 0.000000] efi: mem11: type=2, attr=0xf, range=[0x0000000001000000-0x000000000267b000) (22MB) [ 0.000000] efi: mem12: type=7, attr=0xf, range=[0x000000000267b000-0x0000000004000000) (25MB) [ 0.000000] efi: mem13: type=4, attr=0xf, range=[0x0000000004000000-0x000000000403b000) (0MB) [ 0.000000] efi: mem14: type=7, attr=0xf, range=[0x000000000403b000-0x000000003732d000) (818MB) [ 0.000000] efi: mem15: type=2, attr=0xf, range=[0x000000003732d000-0x000000004ed3b000) (378MB) [ 0.000000] efi: mem16: type=7, attr=0xf, range=[0x000000004ed3b000-0x000000004ed41000) (0MB) [ 0.000000] efi: mem17: type=1, attr=0xf, range=[0x000000004ed41000-0x000000004ee5e000) (1MB) [ 0.000000] efi: mem18: type=2, attr=0xf, range=[0x000000004ee5e000-0x000000004ef7c000) (1MB) [ 0.000000] efi: mem19: type=1, attr=0xf, range=[0x000000004ef7c000-0x000000004f08b000) (1MB) [ 0.000000] efi: mem20: type=3, attr=0xf, range=[0x000000004f08b000-0x000000004f723000) (6MB) [ 0.000000] efi: mem21: type=0, attr=0xf, range=[0x000000004f723000-0x000000005772c000) (128MB) [ 0.000000] efi: mem22: type=3, attr=0xf, range=[0x000000005772c000-0x000000005792e000) (2MB) [ 0.000000] efi: mem23: type=4, attr=0xf, range=[0x000000005792e000-0x000000005b4cf000) (59MB) [ 0.000000] efi: mem24: type=3, attr=0xf, range=[0x000000005b4cf000-0x000000005b4d0000) (0MB) [ 0.000000] efi: mem25: type=7, attr=0xf, range=[0x000000005b4d0000-0x000000005b4d2000) (0MB) [ 0.000000] efi: mem26: type=2, attr=0xf, range=[0x000000005b4d2000-0x000000005b4d6000) (0MB) [ 0.000000] efi: mem27: type=3, attr=0xf, range=[0x000000005b4d6000-0x000000005b8cf000) (3MB) [ 0.000000] efi: mem28: type=7, attr=0xf, range=[0x000000005b8cf000-0x00000000652e8000) (154MB) [ 0.000000] efi: mem29: type=4, attr=0xf, range=[0x00000000652e8000-0x000000006537b000) (0MB) [ 0.000000] efi: mem30: type=7, attr=0xf, range=[0x000000006537b000-0x000000006537f000) (0MB) [ 0.000000] efi: mem31: type=4, attr=0xf, range=[0x000000006537f000-0x000000006538e000) (0MB) [ 0.000000] efi: mem32: type=7, attr=0xf, range=[0x000000006538e000-0x0000000065390000) (0MB) [ 0.000000] efi: mem33: type=4, attr=0xf, range=[0x0000000065390000-0x0000000065420000) (0MB) [ 0.000000] efi: mem34: type=7, attr=0xf, range=[0x0000000065420000-0x0000000065421000) (0MB) [ 0.000000] efi: mem35: type=4, attr=0xf, range=[0x0000000065421000-0x0000000065734000) (3MB) [ 0.000000] efi: mem36: type=7, attr=0xf, 
range=[0x0000000065734000-0x0000000065736000) (0MB) [ 0.000000] efi: mem37: type=4, attr=0xf, range=[0x0000000065736000-0x0000000065737000) (0MB) [ 0.000000] efi: mem38: type=7, attr=0xf, range=[0x0000000065737000-0x0000000065739000) (0MB) [ 0.000000] efi: mem39: type=4, attr=0xf, range=[0x0000000065739000-0x000000006573b000) (0MB) [ 0.000000] efi: mem40: type=7, attr=0xf, range=[0x000000006573b000-0x000000006573d000) (0MB) [ 0.000000] efi: mem41: type=4, attr=0xf, range=[0x000000006573d000-0x000000006573f000) (0MB) [ 0.000000] efi: mem42: type=7, attr=0xf, range=[0x000000006573f000-0x0000000065740000) (0MB) [ 0.000000] efi: mem43: type=4, attr=0xf, range=[0x0000000065740000-0x0000000065741000) (0MB) [ 0.000000] efi: mem44: type=7, attr=0xf, range=[0x0000000065741000-0x0000000065743000) (0MB) [ 0.000000] efi: mem45: type=4, attr=0xf, range=[0x0000000065743000-0x0000000065745000) (0MB) [ 0.000000] efi: mem46: type=7, attr=0xf, range=[0x0000000065745000-0x0000000065746000) (0MB) [ 0.000000] efi: mem47: type=4, attr=0xf, range=[0x0000000065746000-0x000000006574b000) (0MB) [ 0.000000] efi: mem48: type=7, attr=0xf, range=[0x000000006574b000-0x000000006574d000) (0MB) [ 0.000000] efi: mem49: type=4, attr=0xf, range=[0x000000006574d000-0x000000006574e000) (0MB) [ 0.000000] efi: mem50: type=7, attr=0xf, range=[0x000000006574e000-0x000000006574f000) (0MB) [ 0.000000] efi: mem51: type=4, attr=0xf, range=[0x000000006574f000-0x0000000065752000) (0MB) [ 0.000000] efi: mem52: type=7, attr=0xf, range=[0x0000000065752000-0x0000000065754000) (0MB) [ 0.000000] efi: mem53: type=4, attr=0xf, range=[0x0000000065754000-0x0000000065756000) (0MB) [ 0.000000] efi: mem54: type=7, attr=0xf, range=[0x0000000065756000-0x0000000065759000) (0MB) [ 0.000000] efi: mem55: type=4, attr=0xf, range=[0x0000000065759000-0x000000006575a000) (0MB) [ 0.000000] efi: mem56: type=7, attr=0xf, range=[0x000000006575a000-0x000000006575b000) (0MB) [ 0.000000] efi: mem57: type=4, attr=0xf, range=[0x000000006575b000-0x0000000065776000) (0MB) [ 0.000000] efi: mem58: type=7, attr=0xf, range=[0x0000000065776000-0x0000000065778000) (0MB) [ 0.000000] efi: mem59: type=4, attr=0xf, range=[0x0000000065778000-0x000000006577a000) (0MB) [ 0.000000] efi: mem60: type=7, attr=0xf, range=[0x000000006577a000-0x000000006577c000) (0MB) [ 0.000000] efi: mem61: type=4, attr=0xf, range=[0x000000006577c000-0x000000006577e000) (0MB) [ 0.000000] efi: mem62: type=7, attr=0xf, range=[0x000000006577e000-0x0000000065780000) (0MB) [ 0.000000] efi: mem63: type=4, attr=0xf, range=[0x0000000065780000-0x0000000065782000) (0MB) [ 0.000000] efi: mem64: type=7, attr=0xf, range=[0x0000000065782000-0x0000000065785000) (0MB) [ 0.000000] efi: mem65: type=4, attr=0xf, range=[0x0000000065785000-0x0000000065787000) (0MB) [ 0.000000] efi: mem66: type=7, attr=0xf, range=[0x0000000065787000-0x0000000065789000) (0MB) [ 0.000000] efi: mem67: type=4, attr=0xf, range=[0x0000000065789000-0x000000006578b000) (0MB) [ 0.000000] efi: mem68: type=7, attr=0xf, range=[0x000000006578b000-0x000000006578d000) (0MB) [ 0.000000] efi: mem69: type=4, attr=0xf, range=[0x000000006578d000-0x000000006578f000) (0MB) [ 0.000000] efi: mem70: type=7, attr=0xf, range=[0x000000006578f000-0x0000000065790000) (0MB) [ 0.000000] efi: mem71: type=4, attr=0xf, range=[0x0000000065790000-0x0000000065795000) (0MB) [ 0.000000] efi: mem72: type=7, attr=0xf, range=[0x0000000065795000-0x0000000065796000) (0MB) [ 0.000000] efi: mem73: type=4, attr=0xf, range=[0x0000000065796000-0x0000000065799000) (0MB) [ 0.000000] efi: mem74: 
type=7, attr=0xf, range=[0x0000000065799000-0x000000006579b000) (0MB) [ 0.000000] efi: mem75: type=4, attr=0xf, range=[0x000000006579b000-0x000000006579e000) (0MB) [ 0.000000] efi: mem76: type=7, attr=0xf, range=[0x000000006579e000-0x000000006579f000) (0MB) [ 0.000000] efi: mem77: type=4, attr=0xf, range=[0x000000006579f000-0x000000006592c000) (1MB) [ 0.000000] efi: mem78: type=7, attr=0xf, range=[0x000000006592c000-0x000000006592d000) (0MB) [ 0.000000] efi: mem79: type=4, attr=0xf, range=[0x000000006592d000-0x000000006593d000) (0MB) [ 0.000000] efi: mem80: type=7, attr=0xf, range=[0x000000006593d000-0x000000006593e000) (0MB) [ 0.000000] efi: mem81: type=4, attr=0xf, range=[0x000000006593e000-0x0000000065947000) (0MB) [ 0.000000] efi: mem82: type=7, attr=0xf, range=[0x0000000065947000-0x0000000065948000) (0MB) [ 0.000000] efi: mem83: type=4, attr=0xf, range=[0x0000000065948000-0x000000006594d000) (0MB) [ 0.000000] efi: mem84: type=7, attr=0xf, range=[0x000000006594d000-0x000000006594e000) (0MB) [ 0.000000] efi: mem85: type=4, attr=0xf, range=[0x000000006594e000-0x0000000065952000) (0MB) [ 0.000000] efi: mem86: type=7, attr=0xf, range=[0x0000000065952000-0x0000000065953000) (0MB) [ 0.000000] efi: mem87: type=4, attr=0xf, range=[0x0000000065953000-0x0000000065955000) (0MB) [ 0.000000] efi: mem88: type=7, attr=0xf, range=[0x0000000065955000-0x0000000065956000) (0MB) [ 0.000000] efi: mem89: type=4, attr=0xf, range=[0x0000000065956000-0x000000006595a000) (0MB) [ 0.000000] efi: mem90: type=7, attr=0xf, range=[0x000000006595a000-0x000000006595b000) (0MB) [ 0.000000] efi: mem91: type=4, attr=0xf, range=[0x000000006595b000-0x0000000065974000) (0MB) [ 0.000000] efi: mem92: type=7, attr=0xf, range=[0x0000000065974000-0x0000000065976000) (0MB) [ 0.000000] efi: mem93: type=4, attr=0xf, range=[0x0000000065976000-0x000000006597a000) (0MB) [ 0.000000] efi: mem94: type=7, attr=0xf, range=[0x000000006597a000-0x000000006597b000) (0MB) [ 0.000000] efi: mem95: type=4, attr=0xf, range=[0x000000006597b000-0x0000000065982000) (0MB) [ 0.000000] efi: mem96: type=7, attr=0xf, range=[0x0000000065982000-0x0000000065983000) (0MB) [ 0.000000] efi: mem97: type=4, attr=0xf, range=[0x0000000065983000-0x000000006598f000) (0MB) [ 0.000000] efi: mem98: type=7, attr=0xf, range=[0x000000006598f000-0x0000000065990000) (0MB) [ 0.000000] efi: mem99: type=4, attr=0xf, range=[0x0000000065990000-0x0000000065997000) (0MB) [ 0.000000] efi: mem100: type=7, attr=0xf, range=[0x0000000065997000-0x0000000065998000) (0MB) [ 0.000000] efi: mem101: type=4, attr=0xf, range=[0x0000000065998000-0x00000000659a5000) (0MB) [ 0.000000] efi: mem102: type=7, attr=0xf, range=[0x00000000659a5000-0x00000000659a6000) (0MB) [ 0.000000] efi: mem103: type=4, attr=0xf, range=[0x00000000659a6000-0x0000000065cb1000) (3MB) [ 0.000000] efi: mem104: type=7, attr=0xf, range=[0x0000000065cb1000-0x0000000065cb2000) (0MB) [ 0.000000] efi: mem105: type=4, attr=0xf, range=[0x0000000065cb2000-0x0000000065d0e000) (0MB) [ 0.000000] efi: mem106: type=7, attr=0xf, range=[0x0000000065d0e000-0x0000000065d0f000) (0MB) [ 0.000000] efi: mem107: type=4, attr=0xf, range=[0x0000000065d0f000-0x0000000065d11000) (0MB) [ 0.000000] efi: mem108: type=7, attr=0xf, range=[0x0000000065d11000-0x0000000065d12000) (0MB) [ 0.000000] efi: mem109: type=4, attr=0xf, range=[0x0000000065d12000-0x0000000065d4d000) (0MB) [ 0.000000] efi: mem110: type=7, attr=0xf, range=[0x0000000065d4d000-0x0000000065d4e000) (0MB) [ 0.000000] efi: mem111: type=4, attr=0xf, range=[0x0000000065d4e000-0x0000000065d51000) 
(0MB) [ 0.000000] efi: mem112: type=7, attr=0xf, range=[0x0000000065d51000-0x0000000065d52000) (0MB) [ 0.000000] efi: mem113: type=4, attr=0xf, range=[0x0000000065d52000-0x0000000065d53000) (0MB) [ 0.000000] efi: mem114: type=7, attr=0xf, range=[0x0000000065d53000-0x0000000065d54000) (0MB) [ 0.000000] efi: mem115: type=4, attr=0xf, range=[0x0000000065d54000-0x0000000065d69000) (0MB) [ 0.000000] efi: mem116: type=7, attr=0xf, range=[0x0000000065d69000-0x0000000065d6a000) (0MB) [ 0.000000] efi: mem117: type=4, attr=0xf, range=[0x0000000065d6a000-0x0000000065da4000) (0MB) [ 0.000000] efi: mem118: type=7, attr=0xf, range=[0x0000000065da4000-0x0000000065da5000) (0MB) [ 0.000000] efi: mem119: type=4, attr=0xf, range=[0x0000000065da5000-0x0000000065db3000) (0MB) [ 0.000000] efi: mem120: type=7, attr=0xf, range=[0x0000000065db3000-0x0000000065db4000) (0MB) [ 0.000000] efi: mem121: type=4, attr=0xf, range=[0x0000000065db4000-0x0000000065db9000) (0MB) [ 0.000000] efi: mem122: type=7, attr=0xf, range=[0x0000000065db9000-0x0000000065dba000) (0MB) [ 0.000000] efi: mem123: type=4, attr=0xf, range=[0x0000000065dba000-0x0000000065dd2000) (0MB) [ 0.000000] efi: mem124: type=7, attr=0xf, range=[0x0000000065dd2000-0x0000000065dd4000) (0MB) [ 0.000000] efi: mem125: type=4, attr=0xf, range=[0x0000000065dd4000-0x0000000065dd9000) (0MB) [ 0.000000] efi: mem126: type=7, attr=0xf, range=[0x0000000065dd9000-0x0000000065dda000) (0MB) [ 0.000000] efi: mem127: type=4, attr=0xf, range=[0x0000000065dda000-0x0000000065de6000) (0MB) [ 0.000000] efi: mem128: type=7, attr=0xf, range=[0x0000000065de6000-0x0000000065de7000) (0MB) [ 0.000000] efi: mem129: type=4, attr=0xf, range=[0x0000000065de7000-0x0000000065de8000) (0MB) [ 0.000000] efi: mem130: type=7, attr=0xf, range=[0x0000000065de8000-0x0000000065dea000) (0MB) [ 0.000000] efi: mem131: type=4, attr=0xf, range=[0x0000000065dea000-0x0000000065e4a000) (0MB) [ 0.000000] efi: mem132: type=7, attr=0xf, range=[0x0000000065e4a000-0x0000000065e4b000) (0MB) [ 0.000000] efi: mem133: type=4, attr=0xf, range=[0x0000000065e4b000-0x0000000065e5e000) (0MB) [ 0.000000] efi: mem134: type=7, attr=0xf, range=[0x0000000065e5e000-0x0000000065e5f000) (0MB) [ 0.000000] efi: mem135: type=4, attr=0xf, range=[0x0000000065e5f000-0x0000000065e66000) (0MB) [ 0.000000] efi: mem136: type=7, attr=0xf, range=[0x0000000065e66000-0x0000000065e67000) (0MB) [ 0.000000] efi: mem137: type=4, attr=0xf, range=[0x0000000065e67000-0x0000000065e69000) (0MB) [ 0.000000] efi: mem138: type=7, attr=0xf, range=[0x0000000065e69000-0x0000000065e6a000) (0MB) [ 0.000000] efi: mem139: type=4, attr=0xf, range=[0x0000000065e6a000-0x0000000065e73000) (0MB) [ 0.000000] efi: mem140: type=7, attr=0xf, range=[0x0000000065e73000-0x0000000065e74000) (0MB) [ 0.000000] efi: mem141: type=4, attr=0xf, range=[0x0000000065e74000-0x0000000065e8e000) (0MB) [ 0.000000] efi: mem142: type=7, attr=0xf, range=[0x0000000065e8e000-0x0000000065e8f000) (0MB) [ 0.000000] efi: mem143: type=4, attr=0xf, range=[0x0000000065e8f000-0x0000000065ea0000) (0MB) [ 0.000000] efi: mem144: type=7, attr=0xf, range=[0x0000000065ea0000-0x0000000065ea1000) (0MB) [ 0.000000] efi: mem145: type=4, attr=0xf, range=[0x0000000065ea1000-0x0000000069793000) (56MB) [ 0.000000] efi: mem146: type=7, attr=0xf, range=[0x0000000069793000-0x0000000069795000) (0MB) [ 0.000000] efi: mem147: type=4, attr=0xf, range=[0x0000000069795000-0x000000006b8cf000) (33MB) [ 0.000000] efi: mem148: type=3, attr=0xf, range=[0x000000006b8cf000-0x000000006cacf000) (18MB) [ 0.000000] efi: mem149: 
type=6, attr=0x800000000000000f, range=[0x000000006cacf000-0x000000006cbcf000) (1MB) [ 0.000000] efi: mem150: type=5, attr=0x800000000000000f, range=[0x000000006cbcf000-0x000000006cdcf000) (2MB) [ 0.000000] efi: mem151: type=0, attr=0xf, range=[0x000000006cdcf000-0x000000006efcf000) (34MB) [ 0.000000] efi: mem152: type=10, attr=0xf, range=[0x000000006efcf000-0x000000006fdff000) (14MB) [ 0.000000] efi: mem153: type=9, attr=0xf, range=[0x000000006fdff000-0x000000006ffff000) (2MB) [ 0.000000] efi: mem154: type=4, attr=0xf, range=[0x000000006ffff000-0x0000000070000000) (0MB) [ 0.000000] efi: mem155: type=7, attr=0xf, range=[0x0000000100000000-0x000000107f380000) (63475MB) [ 0.000000] efi: mem156: type=7, attr=0xf, range=[0x0000001080000000-0x000000207ff80000) (65535MB) [ 0.000000] efi: mem157: type=7, attr=0xf, range=[0x0000002080000000-0x000000307ff80000) (65535MB) [ 0.000000] efi: mem158: type=7, attr=0xf, range=[0x0000003080000000-0x000000407ff80000) (65535MB) [ 0.000000] efi: mem159: type=0, attr=0x9, range=[0x0000000070000000-0x0000000080000000) (256MB) [ 0.000000] efi: mem160: type=11, attr=0x800000000000000f, range=[0x0000000080000000-0x0000000090000000) (256MB) [ 0.000000] efi: mem161: type=11, attr=0x800000000000000f, range=[0x00000000fec10000-0x00000000fec11000) (0MB) [ 0.000000] efi: mem162: type=11, attr=0x800000000000000f, range=[0x00000000fed80000-0x00000000fed81000) (0MB) [ 0.000000] efi: mem163: type=0, attr=0x0, range=[0x000000107f380000-0x0000001080000000) (12MB) [ 0.000000] efi: mem164: type=0, attr=0x0, range=[0x000000207ff80000-0x0000002080000000) (0MB) [ 0.000000] efi: mem165: type=0, attr=0x0, range=[0x000000307ff80000-0x0000003080000000) (0MB) [ 0.000000] efi: mem166: type=0, attr=0x0, range=[0x000000407ff80000-0x0000004080000000) (0MB) [ 0.000000] SMBIOS 3.2.0 present. [ 0.000000] DMI: Dell Inc. 
PowerEdge R6415/07YXFK, BIOS 1.12.2 11/15/2019 [ 0.000000] e820: update [mem 0x00000000-0x00000fff] usable ==> reserved [ 0.000000] e820: remove [mem 0x000a0000-0x000fffff] usable [ 0.000000] e820: last_pfn = 0x407ff80 max_arch_pfn = 0x400000000 [ 0.000000] MTRR default type: uncachable [ 0.000000] MTRR fixed ranges enabled: [ 0.000000] 00000-9FFFF write-back [ 0.000000] A0000-FFFFF uncachable [ 0.000000] MTRR variable ranges enabled: [ 0.000000] 0 base 0000FF000000 mask FFFFFF000000 write-protect [ 0.000000] 1 base 000000000000 mask FFFF80000000 write-back [ 0.000000] 2 base 000070000000 mask FFFFF0000000 uncachable [ 0.000000] 3 disabled [ 0.000000] 4 disabled [ 0.000000] 5 disabled [ 0.000000] 6 disabled [ 0.000000] 7 disabled [ 0.000000] TOM2: 0000004080000000 aka 264192M [ 0.000000] PAT configuration [0-7]: WB WC UC- UC WB WP UC- UC [ 0.000000] e820: last_pfn = 0x70000 max_arch_pfn = 0x400000000 [ 0.000000] Base memory trampoline at [ffff9ce0c0099000] 99000 size 24576 [ 0.000000] Using GB pages for direct mapping [ 0.000000] BRK [0x3c2c53000, 0x3c2c53fff] PGTABLE [ 0.000000] BRK [0x3c2c54000, 0x3c2c54fff] PGTABLE [ 0.000000] BRK [0x3c2c55000, 0x3c2c55fff] PGTABLE [ 0.000000] BRK [0x3c2c56000, 0x3c2c56fff] PGTABLE [ 0.000000] BRK [0x3c2c57000, 0x3c2c57fff] PGTABLE [ 0.000000] BRK [0x3c2c58000, 0x3c2c58fff] PGTABLE [ 0.000000] BRK [0x3c2c59000, 0x3c2c59fff] PGTABLE [ 0.000000] BRK [0x3c2c5a000, 0x3c2c5afff] PGTABLE [ 0.000000] BRK [0x3c2c5b000, 0x3c2c5bfff] PGTABLE [ 0.000000] BRK [0x3c2c5c000, 0x3c2c5cfff] PGTABLE [ 0.000000] BRK [0x3c2c5d000, 0x3c2c5dfff] PGTABLE [ 0.000000] BRK [0x3c2c5e000, 0x3c2c5efff] PGTABLE [ 0.000000] RAMDISK: [mem 0x3747b000-0x38ca4fff] [ 0.000000] Early table checksum verification disabled [ 0.000000] ACPI: RSDP 000000006fffe014 00024 (v02 DELL ) [ 0.000000] ACPI: XSDT 000000006fffd0e8 000AC (v01 DELL PE_SC3 00000002 DELL 00000001) [ 0.000000] ACPI: FACP 000000006fff0000 00114 (v06 DELL PE_SC3 00000002 DELL 00000001) [ 0.000000] ACPI: DSDT 000000006ffe1000 0BD00 (v02 DELL PE_SC3 00000002 DELL 00000001) [ 0.000000] ACPI: FACS 000000006fdd3000 00040 [ 0.000000] ACPI: SSDT 000000006fffc000 000D2 (v02 DELL PE_SC3 00000002 MSFT 04000000) [ 0.000000] ACPI: BERT 000000006fffb000 00030 (v01 DELL BERT 00000001 DELL 00000001) [ 0.000000] ACPI: HEST 000000006fffa000 006DC (v01 DELL HEST 00000001 DELL 00000001) [ 0.000000] ACPI: SSDT 000000006fff9000 00294 (v01 DELL PE_SC3 00000001 AMD 00000001) [ 0.000000] ACPI: SRAT 000000006fff8000 00420 (v03 DELL PE_SC3 00000001 AMD 00000001) [ 0.000000] ACPI: MSCT 000000006fff7000 0004E (v01 DELL PE_SC3 00000000 AMD 00000001) [ 0.000000] ACPI: SLIT 000000006fff6000 0003C (v01 DELL PE_SC3 00000001 AMD 00000001) [ 0.000000] ACPI: CRAT 000000006fff3000 02DC0 (v01 DELL PE_SC3 00000001 AMD 00000001) [ 0.000000] ACPI: EINJ 000000006fff2000 00150 (v01 DELL PE_SC3 00000001 AMD 00000001) [ 0.000000] ACPI: SLIC 000000006fff1000 00024 (v01 DELL PE_SC3 00000002 DELL 00000001) [ 0.000000] ACPI: HPET 000000006ffef000 00038 (v01 DELL PE_SC3 00000002 DELL 00000001) [ 0.000000] ACPI: APIC 000000006ffee000 004B2 (v03 DELL PE_SC3 00000002 DELL 00000001) [ 0.000000] ACPI: MCFG 000000006ffed000 0003C (v01 DELL PE_SC3 00000002 DELL 00000001) [ 0.000000] ACPI: SSDT 000000006ffe0000 00629 (v02 DELL xhc_port 00000001 INTL 20170119) [ 0.000000] ACPI: IVRS 000000006ffdf000 00210 (v02 DELL PE_SC3 00000001 AMD 00000000) [ 0.000000] ACPI: SSDT 000000006ffdd000 01658 (v01 AMD CPMCMN 00000001 INTL 20170119) [ 0.000000] ACPI: Local APIC address 0xfee00000 [ 
0.000000] SRAT: PXM 0 -> APIC 0x00 -> Node 0 [ 0.000000] SRAT: PXM 0 -> APIC 0x01 -> Node 0 [ 0.000000] SRAT: PXM 0 -> APIC 0x02 -> Node 0 [ 0.000000] SRAT: PXM 0 -> APIC 0x03 -> Node 0 [ 0.000000] SRAT: PXM 0 -> APIC 0x04 -> Node 0 [ 0.000000] SRAT: PXM 0 -> APIC 0x05 -> Node 0 [ 0.000000] SRAT: PXM 0 -> APIC 0x08 -> Node 0 [ 0.000000] SRAT: PXM 0 -> APIC 0x09 -> Node 0 [ 0.000000] SRAT: PXM 0 -> APIC 0x0a -> Node 0 [ 0.000000] SRAT: PXM 0 -> APIC 0x0b -> Node 0 [ 0.000000] SRAT: PXM 0 -> APIC 0x0c -> Node 0 [ 0.000000] SRAT: PXM 0 -> APIC 0x0d -> Node 0 [ 0.000000] SRAT: PXM 1 -> APIC 0x10 -> Node 1 [ 0.000000] SRAT: PXM 1 -> APIC 0x11 -> Node 1 [ 0.000000] SRAT: PXM 1 -> APIC 0x12 -> Node 1 [ 0.000000] SRAT: PXM 1 -> APIC 0x13 -> Node 1 [ 0.000000] SRAT: PXM 1 -> APIC 0x14 -> Node 1 [ 0.000000] SRAT: PXM 1 -> APIC 0x15 -> Node 1 [ 0.000000] SRAT: PXM 1 -> APIC 0x18 -> Node 1 [ 0.000000] SRAT: PXM 1 -> APIC 0x19 -> Node 1 [ 0.000000] SRAT: PXM 1 -> APIC 0x1a -> Node 1 [ 0.000000] SRAT: PXM 1 -> APIC 0x1b -> Node 1 [ 0.000000] SRAT: PXM 1 -> APIC 0x1c -> Node 1 [ 0.000000] SRAT: PXM 1 -> APIC 0x1d -> Node 1 [ 0.000000] SRAT: PXM 2 -> APIC 0x20 -> Node 2 [ 0.000000] SRAT: PXM 2 -> APIC 0x21 -> Node 2 [ 0.000000] SRAT: PXM 2 -> APIC 0x22 -> Node 2 [ 0.000000] SRAT: PXM 2 -> APIC 0x23 -> Node 2 [ 0.000000] SRAT: PXM 2 -> APIC 0x24 -> Node 2 [ 0.000000] SRAT: PXM 2 -> APIC 0x25 -> Node 2 [ 0.000000] SRAT: PXM 2 -> APIC 0x28 -> Node 2 [ 0.000000] SRAT: PXM 2 -> APIC 0x29 -> Node 2 [ 0.000000] SRAT: PXM 2 -> APIC 0x2a -> Node 2 [ 0.000000] SRAT: PXM 2 -> APIC 0x2b -> Node 2 [ 0.000000] SRAT: PXM 2 -> APIC 0x2c -> Node 2 [ 0.000000] SRAT: PXM 2 -> APIC 0x2d -> Node 2 [ 0.000000] SRAT: PXM 3 -> APIC 0x30 -> Node 3 [ 0.000000] SRAT: PXM 3 -> APIC 0x31 -> Node 3 [ 0.000000] SRAT: PXM 3 -> APIC 0x32 -> Node 3 [ 0.000000] SRAT: PXM 3 -> APIC 0x33 -> Node 3 [ 0.000000] SRAT: PXM 3 -> APIC 0x34 -> Node 3 [ 0.000000] SRAT: PXM 3 -> APIC 0x35 -> Node 3 [ 0.000000] SRAT: PXM 3 -> APIC 0x38 -> Node 3 [ 0.000000] SRAT: PXM 3 -> APIC 0x39 -> Node 3 [ 0.000000] SRAT: PXM 3 -> APIC 0x3a -> Node 3 [ 0.000000] SRAT: PXM 3 -> APIC 0x3b -> Node 3 [ 0.000000] SRAT: PXM 3 -> APIC 0x3c -> Node 3 [ 0.000000] SRAT: PXM 3 -> APIC 0x3d -> Node 3 [ 0.000000] SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] [ 0.000000] SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] [ 0.000000] SRAT: Node 0 PXM 0 [mem 0x100000000-0x107fffffff] [ 0.000000] SRAT: Node 1 PXM 1 [mem 0x1080000000-0x207fffffff] [ 0.000000] SRAT: Node 2 PXM 2 [mem 0x2080000000-0x307fffffff] [ 0.000000] SRAT: Node 3 PXM 3 [mem 0x3080000000-0x407fffffff] [ 0.000000] NUMA: Initialized distance table, cnt=4 [ 0.000000] NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff] [ 0.000000] NUMA: Node 0 [mem 0x00000000-0x7fffffff] + [mem 0x100000000-0x107fffffff] -> [mem 0x00000000-0x107fffffff] [ 0.000000] NODE_DATA(0) allocated [mem 0x107f359000-0x107f37ffff] [ 0.000000] NODE_DATA(1) allocated [mem 0x207ff59000-0x207ff7ffff] [ 0.000000] NODE_DATA(2) allocated [mem 0x307ff59000-0x307ff7ffff] [ 0.000000] NODE_DATA(3) allocated [mem 0x407ff58000-0x407ff7efff] [ 0.000000] Reserving 176MB of memory at 704MB for crashkernel (System RAM: 261692MB) [ 0.000000] Zone ranges: [ 0.000000] DMA [mem 0x00001000-0x00ffffff] [ 0.000000] DMA32 [mem 0x01000000-0xffffffff] [ 0.000000] Normal [mem 0x100000000-0x407ff7ffff] [ 0.000000] Movable zone start for each node [ 0.000000] Early memory node ranges [ 0.000000] node 0: [mem 
0x00001000-0x0008efff] [ 0.000000] node 0: [mem 0x00090000-0x0009ffff] [ 0.000000] node 0: [mem 0x00100000-0x4f722fff] [ 0.000000] node 0: [mem 0x5772c000-0x6cacefff] [ 0.000000] node 0: [mem 0x6ffff000-0x6fffffff] [ 0.000000] node 0: [mem 0x100000000-0x107f37ffff] [ 0.000000] node 1: [mem 0x1080000000-0x207ff7ffff] [ 0.000000] node 2: [mem 0x2080000000-0x307ff7ffff] [ 0.000000] node 3: [mem 0x3080000000-0x407ff7ffff] [ 0.000000] Initmem setup node 0 [mem 0x00001000-0x107f37ffff] [ 0.000000] On node 0 totalpages: 16661989 [ 0.000000] DMA zone: 64 pages used for memmap [ 0.000000] DMA zone: 1126 pages reserved [ 0.000000] DMA zone: 3998 pages, LIFO batch:0 [ 0.000000] DMA32 zone: 6380 pages used for memmap [ 0.000000] DMA32 zone: 408263 pages, LIFO batch:31 [ 0.000000] Normal zone: 253902 pages used for memmap [ 0.000000] Normal zone: 16249728 pages, LIFO batch:31 [ 0.000000] Initmem setup node 1 [mem 0x1080000000-0x207ff7ffff] [ 0.000000] On node 1 totalpages: 16777088 [ 0.000000] Normal zone: 262142 pages used for memmap [ 0.000000] Normal zone: 16777088 pages, LIFO batch:31 [ 0.000000] Initmem setup node 2 [mem 0x2080000000-0x307ff7ffff] [ 0.000000] On node 2 totalpages: 16777088 [ 0.000000] Normal zone: 262142 pages used for memmap [ 0.000000] Normal zone: 16777088 pages, LIFO batch:31 [ 0.000000] Initmem setup node 3 [mem 0x3080000000-0x407ff7ffff] [ 0.000000] On node 3 totalpages: 16777088 [ 0.000000] Normal zone: 262142 pages used for memmap [ 0.000000] Normal zone: 16777088 pages, LIFO batch:31 [ 0.000000] ACPI: PM-Timer IO Port: 0x408 [ 0.000000] ACPI: Local APIC address 0xfee00000 [ 0.000000] ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x01] lapic_id[0x10] enabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x02] lapic_id[0x20] enabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x03] lapic_id[0x30] enabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x04] lapic_id[0x08] enabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x05] lapic_id[0x18] enabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x06] lapic_id[0x28] enabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x07] lapic_id[0x38] enabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x08] lapic_id[0x02] enabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x09] lapic_id[0x12] enabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x0a] lapic_id[0x22] enabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x0b] lapic_id[0x32] enabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x0c] lapic_id[0x0a] enabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x0d] lapic_id[0x1a] enabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x0e] lapic_id[0x2a] enabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x0f] lapic_id[0x3a] enabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x10] lapic_id[0x04] enabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x11] lapic_id[0x14] enabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x12] lapic_id[0x24] enabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x13] lapic_id[0x34] enabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x14] lapic_id[0x0c] enabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x15] lapic_id[0x1c] enabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x16] lapic_id[0x2c] enabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x17] lapic_id[0x3c] enabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x18] lapic_id[0x01] enabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x19] lapic_id[0x11] enabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x1a] lapic_id[0x21] enabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x1b] lapic_id[0x31] enabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x1c] lapic_id[0x09] enabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x1d] lapic_id[0x19] enabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x1e] 
lapic_id[0x29] enabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x1f] lapic_id[0x39] enabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x20] lapic_id[0x03] enabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x21] lapic_id[0x13] enabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x22] lapic_id[0x23] enabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x23] lapic_id[0x33] enabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x24] lapic_id[0x0b] enabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x25] lapic_id[0x1b] enabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x26] lapic_id[0x2b] enabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x27] lapic_id[0x3b] enabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x28] lapic_id[0x05] enabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x29] lapic_id[0x15] enabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x2a] lapic_id[0x25] enabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x2b] lapic_id[0x35] enabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x2c] lapic_id[0x0d] enabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x2d] lapic_id[0x1d] enabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x2e] lapic_id[0x2d] enabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x2f] lapic_id[0x3d] enabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x30] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x31] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x32] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x33] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x34] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x35] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x36] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x37] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x38] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x39] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x3a] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x3b] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x3c] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x3d] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x3e] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x3f] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x40] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x41] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x42] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x43] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x44] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x45] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x46] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x47] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x48] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x49] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x4a] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x4b] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x4c] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x4d] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x4e] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x4f] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x50] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x51] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x52] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x53] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x54] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x55] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC 
(acpi_id[0x56] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x57] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x58] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x59] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x5a] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x5b] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x5c] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x5d] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x5e] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x5f] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x60] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x61] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x62] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x63] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x64] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x65] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x66] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x67] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x68] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x69] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x6a] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x6b] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x6c] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x6d] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x6e] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x6f] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x70] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x71] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x72] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x73] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x74] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x75] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x76] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x77] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x78] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x79] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x7a] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x7b] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x7c] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x7d] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x7e] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x7f] lapic_id[0x00] disabled) [ 0.000000] ACPI: LAPIC_NMI (acpi_id[0xff] high edge lint[0x1]) [ 0.000000] ACPI: IOAPIC (id[0x80] address[0xfec00000] gsi_base[0]) [ 0.000000] IOAPIC[0]: apic_id 128, version 33, address 0xfec00000, GSI 0-23 [ 0.000000] ACPI: IOAPIC (id[0x81] address[0xfd880000] gsi_base[24]) [ 0.000000] IOAPIC[1]: apic_id 129, version 33, address 0xfd880000, GSI 24-55 [ 0.000000] ACPI: IOAPIC (id[0x82] address[0xe0900000] gsi_base[56]) [ 0.000000] IOAPIC[2]: apic_id 130, version 33, address 0xe0900000, GSI 56-87 [ 0.000000] ACPI: IOAPIC (id[0x83] address[0xc5900000] gsi_base[88]) [ 0.000000] IOAPIC[3]: apic_id 131, version 33, address 0xc5900000, GSI 88-119 [ 0.000000] ACPI: IOAPIC (id[0x84] address[0xaa900000] gsi_base[120]) [ 0.000000] IOAPIC[4]: apic_id 132, version 33, address 0xaa900000, GSI 120-151 [ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) [ 0.000000] ACPI: 
INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 low level) [ 0.000000] ACPI: IRQ0 used by override. [ 0.000000] ACPI: IRQ9 used by override. [ 0.000000] Using ACPI (MADT) for SMP configuration information [ 0.000000] ACPI: HPET id: 0x10228201 base: 0xfed00000 [ 0.000000] smpboot: Allowing 128 CPUs, 80 hotplug CPUs [ 0.000000] PM: Registered nosave memory: [mem 0x0008f000-0x0008ffff] [ 0.000000] PM: Registered nosave memory: [mem 0x000a0000-0x000fffff] [ 0.000000] PM: Registered nosave memory: [mem 0x3732d000-0x3732dfff] [ 0.000000] PM: Registered nosave memory: [mem 0x37345000-0x37345fff] [ 0.000000] PM: Registered nosave memory: [mem 0x37346000-0x37346fff] [ 0.000000] PM: Registered nosave memory: [mem 0x3736b000-0x3736bfff] [ 0.000000] PM: Registered nosave memory: [mem 0x3736c000-0x3736cfff] [ 0.000000] PM: Registered nosave memory: [mem 0x37374000-0x37374fff] [ 0.000000] PM: Registered nosave memory: [mem 0x37375000-0x37375fff] [ 0.000000] PM: Registered nosave memory: [mem 0x373a6000-0x373a6fff] [ 0.000000] PM: Registered nosave memory: [mem 0x373a7000-0x373a7fff] [ 0.000000] PM: Registered nosave memory: [mem 0x373d8000-0x373d8fff] [ 0.000000] PM: Registered nosave memory: [mem 0x373d9000-0x373d9fff] [ 0.000000] PM: Registered nosave memory: [mem 0x3747a000-0x3747afff] [ 0.000000] PM: Registered nosave memory: [mem 0x4f723000-0x5772bfff] [ 0.000000] PM: Registered nosave memory: [mem 0x6cacf000-0x6efcefff] [ 0.000000] PM: Registered nosave memory: [mem 0x6efcf000-0x6fdfefff] [ 0.000000] PM: Registered nosave memory: [mem 0x6fdff000-0x6fffefff] [ 0.000000] PM: Registered nosave memory: [mem 0x70000000-0x8fffffff] [ 0.000000] PM: Registered nosave memory: [mem 0x90000000-0xfec0ffff] [ 0.000000] PM: Registered nosave memory: [mem 0xfec10000-0xfec10fff] [ 0.000000] PM: Registered nosave memory: [mem 0xfec11000-0xfed7ffff] [ 0.000000] PM: Registered nosave memory: [mem 0xfed80000-0xfed80fff] [ 0.000000] PM: Registered nosave memory: [mem 0xfed81000-0xffffffff] [ 0.000000] PM: Registered nosave memory: [mem 0x107f380000-0x107fffffff] [ 0.000000] PM: Registered nosave memory: [mem 0x207ff80000-0x207fffffff] [ 0.000000] PM: Registered nosave memory: [mem 0x307ff80000-0x307fffffff] [ 0.000000] e820: [mem 0x90000000-0xfec0ffff] available for PCI devices [ 0.000000] Booting paravirtualized kernel on bare hardware [ 0.000000] setup_percpu: NR_CPUS:5120 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:4 [ 0.000000] PERCPU: Embedded 38 pages/cpu @ffff9cf0fee00000 s118784 r8192 d28672 u262144 [ 0.000000] pcpu-alloc: s118784 r8192 d28672 u262144 alloc=1*2097152 [ 0.000000] pcpu-alloc: [0] 000 004 008 012 016 020 024 028 [ 0.000000] pcpu-alloc: [0] 032 036 040 044 048 052 056 060 [ 0.000000] pcpu-alloc: [0] 064 068 072 076 080 084 088 092 [ 0.000000] pcpu-alloc: [0] 096 100 104 108 112 116 120 124 [ 0.000000] pcpu-alloc: [1] 001 005 009 013 017 021 025 029 [ 0.000000] pcpu-alloc: [1] 033 037 041 045 049 053 057 061 [ 0.000000] pcpu-alloc: [1] 065 069 073 077 081 085 089 093 [ 0.000000] pcpu-alloc: [1] 097 101 105 109 113 117 121 125 [ 0.000000] pcpu-alloc: [2] 002 006 010 014 018 022 026 030 [ 0.000000] pcpu-alloc: [2] 034 038 042 046 050 054 058 062 [ 0.000000] pcpu-alloc: [2] 066 070 074 078 082 086 090 094 [ 0.000000] pcpu-alloc: [2] 098 102 106 110 114 118 122 126 [ 0.000000] pcpu-alloc: [3] 003 007 011 015 019 023 027 031 [ 0.000000] pcpu-alloc: [3] 035 039 043 047 051 055 059 063 [ 0.000000] pcpu-alloc: [3] 067 071 075 079 083 087 091 095 [ 0.000000] pcpu-alloc: [3] 099 103 107 111 115 119 123 127 [ 
0.000000] Built 4 zonelists in Zone order, mobility grouping on. Total pages: 65945355 [ 0.000000] Policy zone: Normal [ 0.000000] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-3.10.0-957.27.2.el7_lustre.pl2.x86_64 root=UUID=13d7db90-76a6-4160-92a3-3d6e156edf61 ro crashkernel=auto nomodeset console=ttyS0,115200 LANG=en_US.UTF-8 [ 0.000000] PID hash table entries: 4096 (order: 3, 32768 bytes) [ 0.000000] x86/fpu: xstate_offset[2]: 0240, xstate_sizes[2]: 0100 [ 0.000000] xsave: enabled xstate_bv 0x7, cntxt size 0x340 using standard form [ 0.000000] Memory: 9564640k/270532096k available (7676k kernel code, 2559084k absent, 4703320k reserved, 6045k data, 1876k init) [ 0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=4 [ 0.000000] Hierarchical RCU implementation. [ 0.000000] RCU restricting CPUs from NR_CPUS=5120 to nr_cpu_ids=128. [ 0.000000] NR_IRQS:327936 nr_irqs:3624 0 [ 0.000000] Console: colour dummy device 80x25 [ 0.000000] console [ttyS0] enabled [ 0.000000] allocated 1072693248 bytes of page_cgroup [ 0.000000] please try 'cgroup_disable=memory' option if you don't want memory cgroups [ 0.000000] Enabling automatic NUMA balancing. Configure with numa_balancing= or the kernel.numa_balancing sysctl [ 0.000000] hpet clockevent registered [ 0.000000] tsc: Fast TSC calibration using PIT [ 0.000000] tsc: Detected 1996.200 MHz processor [ 0.000054] Calibrating delay loop (skipped), value calculated using timer frequency.. 3992.40 BogoMIPS (lpj=1996200) [ 0.010696] pid_max: default: 131072 minimum: 1024 [ 0.016358] Security Framework initialized [ 0.020477] SELinux: Initializing. [ 0.024039] SELinux: Starting in permissive mode [ 0.024040] Yama: becoming mindful. [ 0.044156] Dentry cache hash table entries: 33554432 (order: 16, 268435456 bytes) [ 0.100594] Inode-cache hash table entries: 16777216 (order: 15, 134217728 bytes) [ 0.128388] Mount-cache hash table entries: 524288 (order: 10, 4194304 bytes) [ 0.135775] Mountpoint-cache hash table entries: 524288 (order: 10, 4194304 bytes) [ 0.144854] Initializing cgroup subsys memory [ 0.149255] Initializing cgroup subsys devices [ 0.153710] Initializing cgroup subsys freezer [ 0.158163] Initializing cgroup subsys net_cls [ 0.162619] Initializing cgroup subsys blkio [ 0.166900] Initializing cgroup subsys perf_event [ 0.171622] Initializing cgroup subsys hugetlb [ 0.176078] Initializing cgroup subsys pids [ 0.180274] Initializing cgroup subsys net_prio [ 0.184879] tseg: 0070000000 [ 0.190499] LVT offset 2 assigned for vector 0xf4 [ 0.195227] Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 512 [ 0.201247] Last level dTLB entries: 4KB 1536, 2MB 1536, 4MB 768 [ 0.207261] tlb_flushall_shift: 6 [ 0.210608] Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp [ 0.220175] FEATURE SPEC_CTRL Not Present [ 0.224197] FEATURE IBPB_SUPPORT Present [ 0.228133] Spectre V2 : Enabling Indirect Branch Prediction Barrier [ 0.234566] Spectre V2 : Mitigation: Full retpoline [ 0.239797] Freeing SMP alternatives: 28k freed [ 0.246003] ACPI: Core revision 20130517 [ 0.254939] ACPI: All ACPI Tables successfully acquired [ 0.266565] ftrace: allocating 29216 entries in 115 pages [ 0.606225] Switched APIC routing to physical flat. 
[ 0.613147] ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 [ 0.629159] smpboot: CPU0: AMD EPYC 7401P 24-Core Processor (fam: 17, model: 01, stepping: 02) [ 0.711418] random: fast init done [ 0.741418] APIC calibration not consistent with PM-Timer: 101ms instead of 100ms [ 0.748901] APIC delta adjusted to PM-Timer: 623827 (636297) [ 0.754589] Performance Events: Fam17h core perfctr, AMD PMU driver. [ 0.761019] ... version: 0 [ 0.765030] ... bit width: 48 [ 0.769129] ... generic registers: 6 [ 0.773142] ... value mask: 0000ffffffffffff [ 0.778454] ... max period: 00007fffffffffff [ 0.783767] ... fixed-purpose events: 0 [ 0.787779] ... event mask: 000000000000003f [ 0.796031] NMI watchdog: enabled on all CPUs, permanently consumes one hw-PMU counter. [ 0.804112] smpboot: Booting Node 1, Processors #1 OK [ 0.817317] smpboot: Booting Node 2, Processors #2 OK [ 0.830517] smpboot: Booting Node 3, Processors #3 OK [ 0.843712] smpboot: Booting Node 0, Processors #4 OK [ 0.856893] smpboot: Booting Node 1, Processors #5 OK [ 0.870082] smpboot: Booting Node 2, Processors #6 OK [ 0.883255] smpboot: Booting Node 3, Processors #7 OK [ 0.896437] smpboot: Booting Node 0, Processors #8 OK [ 0.909831] smpboot: Booting Node 1, Processors #9 OK [ 0.923022] smpboot: Booting Node 2, Processors #10 OK [ 0.936297] smpboot: Booting Node 3, Processors #11 OK [ 0.949566] smpboot: Booting Node 0, Processors #12 OK [ 0.962837] smpboot: Booting Node 1, Processors #13 OK [ 0.976112] smpboot: Booting Node 2, Processors #14 OK [ 0.989381] smpboot: Booting Node 3, Processors #15 OK [ 1.002653] smpboot: Booting Node 0, Processors #16 OK [ 1.016026] smpboot: Booting Node 1, Processors #17 OK [ 1.029311] smpboot: Booting Node 2, Processors #18 OK [ 1.042588] smpboot: Booting Node 3, Processors #19 OK [ 1.055861] smpboot: Booting Node 0, Processors #20 OK [ 1.069127] smpboot: Booting Node 1, Processors #21 OK [ 1.082395] smpboot: Booting Node 2, Processors #22 OK [ 1.095676] smpboot: Booting Node 3, Processors #23 OK [ 1.108946] smpboot: Booting Node 0, Processors #24 OK [ 1.122666] smpboot: Booting Node 1, Processors #25 OK [ 1.135908] smpboot: Booting Node 2, Processors #26 OK [ 1.149156] smpboot: Booting Node 3, Processors #27 OK [ 1.162383] smpboot: Booting Node 0, Processors #28 OK [ 1.175620] smpboot: Booting Node 1, Processors #29 OK [ 1.188853] smpboot: Booting Node 2, Processors #30 OK [ 1.202095] smpboot: Booting Node 3, Processors #31 OK [ 1.215320] smpboot: Booting Node 0, Processors #32 OK [ 1.228652] smpboot: Booting Node 1, Processors #33 OK [ 1.241897] smpboot: Booting Node 2, Processors #34 OK [ 1.255241] smpboot: Booting Node 3, Processors #35 OK [ 1.268466] smpboot: Booting Node 0, Processors #36 OK [ 1.281686] smpboot: Booting Node 1, Processors #37 OK [ 1.294920] smpboot: Booting Node 2, Processors #38 OK [ 1.308168] smpboot: Booting Node 3, Processors #39 OK [ 1.321388] smpboot: Booting Node 0, Processors #40 OK [ 1.334719] smpboot: Booting Node 1, Processors #41 OK [ 1.348066] smpboot: Booting Node 2, Processors #42 OK [ 1.361297] smpboot: Booting Node 3, Processors #43 OK [ 1.374532] smpboot: Booting Node 0, Processors #44 OK [ 1.387762] smpboot: Booting Node 1, Processors #45 OK [ 1.401005] smpboot: Booting Node 2, Processors #46 OK [ 1.414243] smpboot: Booting Node 3, Processors #47 [ 1.426949] Brought up 48 CPUs [ 1.430212] smpboot: Max logical packages: 3 [ 1.434489] smpboot: Total of 48 processors activated (191635.20 BogoMIPS) [ 1.731330] node 0 initialised, 15458277 pages in 283ms [ 
1.732051] node 2 initialised, 15989367 pages in 283ms [ 1.732145] node 3 initialised, 15989248 pages in 283ms [ 1.732226] node 1 initialised, 15989367 pages in 283ms [ 1.752776] devtmpfs: initialized [ 1.778620] EVM: security.selinux [ 1.781940] EVM: security.ima [ 1.784915] EVM: security.capability [ 1.788586] PM: Registering ACPI NVS region [mem 0x0008f000-0x0008ffff] (4096 bytes) [ 1.796332] PM: Registering ACPI NVS region [mem 0x6efcf000-0x6fdfefff] (14876672 bytes) [ 1.805957] atomic64 test passed for x86-64 platform with CX8 and with SSE [ 1.812832] pinctrl core: initialized pinctrl subsystem [ 1.818159] RTC time: 15:44:20, date: 08/08/20 [ 1.822757] NET: Registered protocol family 16 [ 1.827548] ACPI FADT declares the system doesn't support PCIe ASPM, so disable it [ 1.835114] ACPI: bus type PCI registered [ 1.839128] acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 [ 1.845705] PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0x80000000-0x8fffffff] (base 0x80000000) [ 1.855005] PCI: MMCONFIG at [mem 0x80000000-0x8fffffff] reserved in E820 [ 1.861796] PCI: Using configuration type 1 for base access [ 1.867381] PCI: Dell System detected, enabling pci=bfsort. [ 1.882901] ACPI: Added _OSI(Module Device) [ 1.887092] ACPI: Added _OSI(Processor Device) [ 1.891535] ACPI: Added _OSI(3.0 _SCP Extensions) [ 1.896240] ACPI: Added _OSI(Processor Aggregator Device) [ 1.901641] ACPI: Added _OSI(Linux-Dell-Video) [ 1.906949] ACPI: EC: Look up EC in DSDT [ 1.907988] ACPI: Executed 2 blocks of module-level executable AML code [ 1.920057] ACPI: Interpreter enabled [ 1.923732] ACPI: (supports S0 S5) [ 1.927140] ACPI: Using IOAPIC for interrupt routing [ 1.932314] HEST: Table parsing has been initialized. [ 1.937368] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug [ 1.946518] ACPI: Enabled 1 GPEs in block 00 to 1F [ 1.959070] ACPI: PCI Interrupt Link [LNKA] (IRQs 4 5 7 10 11 14 15) *0 [ 1.965976] ACPI: PCI Interrupt Link [LNKB] (IRQs 4 5 7 10 11 14 15) *0 [ 1.972882] ACPI: PCI Interrupt Link [LNKC] (IRQs 4 5 7 10 11 14 15) *0 [ 1.979790] ACPI: PCI Interrupt Link [LNKD] (IRQs 4 5 7 10 11 14 15) *0 [ 1.986699] ACPI: PCI Interrupt Link [LNKE] (IRQs 4 5 7 10 11 14 15) *0 [ 1.993604] ACPI: PCI Interrupt Link [LNKF] (IRQs 4 5 7 10 11 14 15) *0 [ 2.000513] ACPI: PCI Interrupt Link [LNKG] (IRQs 4 5 7 10 11 14 15) *0 [ 2.007419] ACPI: PCI Interrupt Link [LNKH] (IRQs 4 5 7 10 11 14 15) *0 [ 2.014472] ACPI: PCI Root Bridge [PC00] (domain 0000 [bus 00-3f]) [ 2.020659] acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI] [ 2.028871] acpi PNP0A08:00: PCIe AER handled by firmware [ 2.034313] acpi PNP0A08:00: _OSC: platform does not support [SHPCHotplug] [ 2.041262] acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] [ 2.048919] acpi PNP0A08:00: FADT indicates ASPM is unsupported, using BIOS configuration [ 2.057535] PCI host bridge to bus 0000:00 [ 2.061634] pci_bus 0000:00: root bus resource [io 0x0000-0x03af window] [ 2.068419] pci_bus 0000:00: root bus resource [io 0x03e0-0x0cf7 window] [ 2.075207] pci_bus 0000:00: root bus resource [mem 0x000c0000-0x000c3fff window] [ 2.082686] pci_bus 0000:00: root bus resource [mem 0x000c4000-0x000c7fff window] [ 2.090164] pci_bus 0000:00: root bus resource [mem 0x000c8000-0x000cbfff window] [ 2.097646] pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000cffff window] [ 2.105125] pci_bus 0000:00: root bus resource [mem 0x000d0000-0x000d3fff window] [ 2.112605] pci_bus 0000:00: 
root bus resource [mem 0x000d4000-0x000d7fff window] [ 2.120083] pci_bus 0000:00: root bus resource [mem 0x000d8000-0x000dbfff window] [ 2.127563] pci_bus 0000:00: root bus resource [mem 0x000dc000-0x000dffff window] [ 2.135044] pci_bus 0000:00: root bus resource [mem 0x000e0000-0x000e3fff window] [ 2.142523] pci_bus 0000:00: root bus resource [mem 0x000e4000-0x000e7fff window] [ 2.150002] pci_bus 0000:00: root bus resource [mem 0x000e8000-0x000ebfff window] [ 2.157482] pci_bus 0000:00: root bus resource [mem 0x000ec000-0x000effff window] [ 2.164961] pci_bus 0000:00: root bus resource [mem 0x000f0000-0x000fffff window] [ 2.172440] pci_bus 0000:00: root bus resource [io 0x0d00-0x3fff window] [ 2.179228] pci_bus 0000:00: root bus resource [mem 0xe1000000-0xfebfffff window] [ 2.186706] pci_bus 0000:00: root bus resource [mem 0x10000000000-0x2bf3fffffff window] [ 2.194706] pci_bus 0000:00: root bus resource [bus 00-3f] [ 2.200198] pci 0000:00:00.0: [1022:1450] type 00 class 0x060000 [ 2.200279] pci 0000:00:00.2: [1022:1451] type 00 class 0x080600 [ 2.200367] pci 0000:00:01.0: [1022:1452] type 00 class 0x060000 [ 2.200445] pci 0000:00:02.0: [1022:1452] type 00 class 0x060000 [ 2.200519] pci 0000:00:03.0: [1022:1452] type 00 class 0x060000 [ 2.200583] pci 0000:00:03.1: [1022:1453] type 01 class 0x060400 [ 2.201299] pci 0000:00:03.1: PME# supported from D0 D3hot D3cold [ 2.201399] pci 0000:00:04.0: [1022:1452] type 00 class 0x060000 [ 2.201479] pci 0000:00:07.0: [1022:1452] type 00 class 0x060000 [ 2.201543] pci 0000:00:07.1: [1022:1454] type 01 class 0x060400 [ 2.202306] pci 0000:00:07.1: PME# supported from D0 D3hot D3cold [ 2.202385] pci 0000:00:08.0: [1022:1452] type 00 class 0x060000 [ 2.202446] pci 0000:00:08.1: [1022:1454] type 01 class 0x060400 [ 2.203288] pci 0000:00:08.1: PME# supported from D0 D3hot D3cold [ 2.203402] pci 0000:00:14.0: [1022:790b] type 00 class 0x0c0500 [ 2.203602] pci 0000:00:14.3: [1022:790e] type 00 class 0x060100 [ 2.203807] pci 0000:00:18.0: [1022:1460] type 00 class 0x060000 [ 2.203858] pci 0000:00:18.1: [1022:1461] type 00 class 0x060000 [ 2.203909] pci 0000:00:18.2: [1022:1462] type 00 class 0x060000 [ 2.203959] pci 0000:00:18.3: [1022:1463] type 00 class 0x060000 [ 2.204011] pci 0000:00:18.4: [1022:1464] type 00 class 0x060000 [ 2.204061] pci 0000:00:18.5: [1022:1465] type 00 class 0x060000 [ 2.204113] pci 0000:00:18.6: [1022:1466] type 00 class 0x060000 [ 2.204162] pci 0000:00:18.7: [1022:1467] type 00 class 0x060000 [ 2.204212] pci 0000:00:19.0: [1022:1460] type 00 class 0x060000 [ 2.204267] pci 0000:00:19.1: [1022:1461] type 00 class 0x060000 [ 2.204321] pci 0000:00:19.2: [1022:1462] type 00 class 0x060000 [ 2.204373] pci 0000:00:19.3: [1022:1463] type 00 class 0x060000 [ 2.204426] pci 0000:00:19.4: [1022:1464] type 00 class 0x060000 [ 2.204478] pci 0000:00:19.5: [1022:1465] type 00 class 0x060000 [ 2.204534] pci 0000:00:19.6: [1022:1466] type 00 class 0x060000 [ 2.204587] pci 0000:00:19.7: [1022:1467] type 00 class 0x060000 [ 2.204640] pci 0000:00:1a.0: [1022:1460] type 00 class 0x060000 [ 2.204693] pci 0000:00:1a.1: [1022:1461] type 00 class 0x060000 [ 2.204747] pci 0000:00:1a.2: [1022:1462] type 00 class 0x060000 [ 2.204799] pci 0000:00:1a.3: [1022:1463] type 00 class 0x060000 [ 2.204853] pci 0000:00:1a.4: [1022:1464] type 00 class 0x060000 [ 2.204905] pci 0000:00:1a.5: [1022:1465] type 00 class 0x060000 [ 2.204961] pci 0000:00:1a.6: [1022:1466] type 00 class 0x060000 [ 2.205014] pci 0000:00:1a.7: [1022:1467] type 00 class 0x060000 [ 2.205068] pci 
0000:00:1b.0: [1022:1460] type 00 class 0x060000 [ 2.205121] pci 0000:00:1b.1: [1022:1461] type 00 class 0x060000 [ 2.205176] pci 0000:00:1b.2: [1022:1462] type 00 class 0x060000 [ 2.205230] pci 0000:00:1b.3: [1022:1463] type 00 class 0x060000 [ 2.205285] pci 0000:00:1b.4: [1022:1464] type 00 class 0x060000 [ 2.205339] pci 0000:00:1b.5: [1022:1465] type 00 class 0x060000 [ 2.205394] pci 0000:00:1b.6: [1022:1466] type 00 class 0x060000 [ 2.205447] pci 0000:00:1b.7: [1022:1467] type 00 class 0x060000 [ 2.206334] pci 0000:01:00.0: [15b3:101b] type 00 class 0x020700 [ 2.206481] pci 0000:01:00.0: reg 0x10: [mem 0xe2000000-0xe3ffffff 64bit pref] [ 2.206717] pci 0000:01:00.0: reg 0x30: [mem 0xfff00000-0xffffffff pref] [ 2.207123] pci 0000:01:00.0: PME# supported from D3cold [ 2.207399] pci 0000:00:03.1: PCI bridge to [bus 01] [ 2.212374] pci 0000:00:03.1: bridge window [mem 0xe2000000-0xe3ffffff 64bit pref] [ 2.212450] pci 0000:02:00.0: [1022:145a] type 00 class 0x130000 [ 2.212548] pci 0000:02:00.2: [1022:1456] type 00 class 0x108000 [ 2.212565] pci 0000:02:00.2: reg 0x18: [mem 0xf7300000-0xf73fffff] [ 2.212578] pci 0000:02:00.2: reg 0x24: [mem 0xf7400000-0xf7401fff] [ 2.212656] pci 0000:02:00.3: [1022:145f] type 00 class 0x0c0330 [ 2.212668] pci 0000:02:00.3: reg 0x10: [mem 0xf7200000-0xf72fffff 64bit] [ 2.212716] pci 0000:02:00.3: PME# supported from D0 D3hot D3cold [ 2.212775] pci 0000:00:07.1: PCI bridge to [bus 02] [ 2.217745] pci 0000:00:07.1: bridge window [mem 0xf7200000-0xf74fffff] [ 2.218328] pci 0000:03:00.0: [1022:1455] type 00 class 0x130000 [ 2.218435] pci 0000:03:00.1: [1022:1468] type 00 class 0x108000 [ 2.218453] pci 0000:03:00.1: reg 0x18: [mem 0xf7000000-0xf70fffff] [ 2.218466] pci 0000:03:00.1: reg 0x24: [mem 0xf7100000-0xf7101fff] [ 2.218560] pci 0000:00:08.1: PCI bridge to [bus 03] [ 2.223526] pci 0000:00:08.1: bridge window [mem 0xf7000000-0xf71fffff] [ 2.223542] pci_bus 0000:00: on NUMA node 0 [ 2.223917] ACPI: PCI Root Bridge [PC01] (domain 0000 [bus 40-7f]) [ 2.230096] acpi PNP0A08:01: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI] [ 2.238306] acpi PNP0A08:01: PCIe AER handled by firmware [ 2.243750] acpi PNP0A08:01: _OSC: platform does not support [SHPCHotplug] [ 2.250697] acpi PNP0A08:01: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] [ 2.258348] acpi PNP0A08:01: FADT indicates ASPM is unsupported, using BIOS configuration [ 2.266808] PCI host bridge to bus 0000:40 [ 2.270906] pci_bus 0000:40: root bus resource [io 0x4000-0x7fff window] [ 2.277693] pci_bus 0000:40: root bus resource [mem 0xc6000000-0xe0ffffff window] [ 2.285172] pci_bus 0000:40: root bus resource [mem 0x2bf40000000-0x47e7fffffff window] [ 2.293172] pci_bus 0000:40: root bus resource [bus 40-7f] [ 2.298660] pci 0000:40:00.0: [1022:1450] type 00 class 0x060000 [ 2.298731] pci 0000:40:00.2: [1022:1451] type 00 class 0x080600 [ 2.298819] pci 0000:40:01.0: [1022:1452] type 00 class 0x060000 [ 2.298894] pci 0000:40:02.0: [1022:1452] type 00 class 0x060000 [ 2.298968] pci 0000:40:03.0: [1022:1452] type 00 class 0x060000 [ 2.299042] pci 0000:40:04.0: [1022:1452] type 00 class 0x060000 [ 2.299120] pci 0000:40:07.0: [1022:1452] type 00 class 0x060000 [ 2.299181] pci 0000:40:07.1: [1022:1454] type 01 class 0x060400 [ 2.299309] pci 0000:40:07.1: PME# supported from D0 D3hot D3cold [ 2.299388] pci 0000:40:08.0: [1022:1452] type 00 class 0x060000 [ 2.299451] pci 0000:40:08.1: [1022:1454] type 01 class 0x060400 [ 2.299563] pci 0000:40:08.1: PME# supported from D0 D3hot D3cold [ 2.299766] pci 
0000:41:00.0: [1022:145a] type 00 class 0x130000 [ 2.299871] pci 0000:41:00.2: [1022:1456] type 00 class 0x108000 [ 2.299890] pci 0000:41:00.2: reg 0x18: [mem 0xdb300000-0xdb3fffff] [ 2.299903] pci 0000:41:00.2: reg 0x24: [mem 0xdb400000-0xdb401fff] [ 2.299988] pci 0000:41:00.3: [1022:145f] type 00 class 0x0c0330 [ 2.300001] pci 0000:41:00.3: reg 0x10: [mem 0xdb200000-0xdb2fffff 64bit] [ 2.300055] pci 0000:41:00.3: PME# supported from D0 D3hot D3cold [ 2.300117] pci 0000:40:07.1: PCI bridge to [bus 41] [ 2.305083] pci 0000:40:07.1: bridge window [mem 0xdb200000-0xdb4fffff] [ 2.305290] pci 0000:42:00.0: [1022:1455] type 00 class 0x130000 [ 2.305406] pci 0000:42:00.1: [1022:1468] type 00 class 0x108000 [ 2.305426] pci 0000:42:00.1: reg 0x18: [mem 0xdb000000-0xdb0fffff] [ 2.305440] pci 0000:42:00.1: reg 0x24: [mem 0xdb100000-0xdb101fff] [ 2.305540] pci 0000:40:08.1: PCI bridge to [bus 42] [ 2.310508] pci 0000:40:08.1: bridge window [mem 0xdb000000-0xdb1fffff] [ 2.310520] pci_bus 0000:40: on NUMA node 1 [ 2.310708] ACPI: PCI Root Bridge [PC02] (domain 0000 [bus 80-bf]) [ 2.316886] acpi PNP0A08:02: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI] [ 2.325096] acpi PNP0A08:02: PCIe AER handled by firmware [ 2.330540] acpi PNP0A08:02: _OSC: platform does not support [SHPCHotplug] [ 2.337488] acpi PNP0A08:02: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] [ 2.345137] acpi PNP0A08:02: FADT indicates ASPM is unsupported, using BIOS configuration [ 2.353623] PCI host bridge to bus 0000:80 [ 2.357724] pci_bus 0000:80: root bus resource [io 0x03b0-0x03df window] [ 2.364509] pci_bus 0000:80: root bus resource [mem 0x000a0000-0x000bffff window] [ 2.371987] pci_bus 0000:80: root bus resource [io 0x8000-0xbfff window] [ 2.378774] pci_bus 0000:80: root bus resource [mem 0xab000000-0xc5ffffff window] [ 2.386254] pci_bus 0000:80: root bus resource [mem 0x47e80000000-0x63dbfffffff window] [ 2.394254] pci_bus 0000:80: root bus resource [bus 80-bf] [ 2.399744] pci 0000:80:00.0: [1022:1450] type 00 class 0x060000 [ 2.399815] pci 0000:80:00.2: [1022:1451] type 00 class 0x080600 [ 2.399905] pci 0000:80:01.0: [1022:1452] type 00 class 0x060000 [ 2.399966] pci 0000:80:01.1: [1022:1453] type 01 class 0x060400 [ 2.400326] pci 0000:80:01.1: PME# supported from D0 D3hot D3cold [ 2.400399] pci 0000:80:01.2: [1022:1453] type 01 class 0x060400 [ 2.400530] pci 0000:80:01.2: PME# supported from D0 D3hot D3cold [ 2.400611] pci 0000:80:02.0: [1022:1452] type 00 class 0x060000 [ 2.400687] pci 0000:80:03.0: [1022:1452] type 00 class 0x060000 [ 2.400747] pci 0000:80:03.1: [1022:1453] type 01 class 0x060400 [ 2.401334] pci 0000:80:03.1: PME# supported from D0 D3hot D3cold [ 2.401431] pci 0000:80:04.0: [1022:1452] type 00 class 0x060000 [ 2.401513] pci 0000:80:07.0: [1022:1452] type 00 class 0x060000 [ 2.401575] pci 0000:80:07.1: [1022:1454] type 01 class 0x060400 [ 2.401684] pci 0000:80:07.1: PME# supported from D0 D3hot D3cold [ 2.401762] pci 0000:80:08.0: [1022:1452] type 00 class 0x060000 [ 2.401823] pci 0000:80:08.1: [1022:1454] type 01 class 0x060400 [ 2.402332] pci 0000:80:08.1: PME# supported from D0 D3hot D3cold [ 2.402552] pci 0000:81:00.0: [14e4:165f] type 00 class 0x020000 [ 2.402577] pci 0000:81:00.0: reg 0x10: [mem 0xac230000-0xac23ffff 64bit pref] [ 2.402592] pci 0000:81:00.0: reg 0x18: [mem 0xac240000-0xac24ffff 64bit pref] [ 2.402607] pci 0000:81:00.0: reg 0x20: [mem 0xac250000-0xac25ffff 64bit pref] [ 2.402617] pci 0000:81:00.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] [ 2.402693] pci 
0000:81:00.0: PME# supported from D0 D3hot D3cold [ 2.402789] pci 0000:81:00.1: [14e4:165f] type 00 class 0x020000 [ 2.402814] pci 0000:81:00.1: reg 0x10: [mem 0xac200000-0xac20ffff 64bit pref] [ 2.402829] pci 0000:81:00.1: reg 0x18: [mem 0xac210000-0xac21ffff 64bit pref] [ 2.402844] pci 0000:81:00.1: reg 0x20: [mem 0xac220000-0xac22ffff 64bit pref] [ 2.402854] pci 0000:81:00.1: reg 0x30: [mem 0xfffc0000-0xffffffff pref] [ 2.402929] pci 0000:81:00.1: PME# supported from D0 D3hot D3cold [ 2.403019] pci 0000:80:01.1: PCI bridge to [bus 81] [ 2.407990] pci 0000:80:01.1: bridge window [mem 0xac200000-0xac2fffff 64bit pref] [ 2.408311] pci 0000:82:00.0: [1556:be00] type 01 class 0x060400 [ 2.410551] pci 0000:80:01.2: PCI bridge to [bus 82-83] [ 2.415778] pci 0000:80:01.2: bridge window [mem 0xc0000000-0xc08fffff] [ 2.415783] pci 0000:80:01.2: bridge window [mem 0xab000000-0xabffffff 64bit pref] [ 2.415830] pci 0000:83:00.0: [102b:0536] type 00 class 0x030000 [ 2.415849] pci 0000:83:00.0: reg 0x10: [mem 0xab000000-0xabffffff pref] [ 2.415860] pci 0000:83:00.0: reg 0x14: [mem 0xc0808000-0xc080bfff] [ 2.415871] pci 0000:83:00.0: reg 0x18: [mem 0xc0000000-0xc07fffff] [ 2.416011] pci 0000:82:00.0: PCI bridge to [bus 83] [ 2.420982] pci 0000:82:00.0: bridge window [mem 0xc0000000-0xc08fffff] [ 2.420989] pci 0000:82:00.0: bridge window [mem 0xab000000-0xabffffff 64bit pref] [ 2.421343] pci 0000:84:00.0: [1000:00d1] type 00 class 0x010700 [ 2.421365] pci 0000:84:00.0: reg 0x10: [mem 0xac000000-0xac0fffff 64bit pref] [ 2.421376] pci 0000:84:00.0: reg 0x18: [mem 0xac100000-0xac1fffff 64bit pref] [ 2.421383] pci 0000:84:00.0: reg 0x20: [mem 0xc0d00000-0xc0dfffff] [ 2.421390] pci 0000:84:00.0: reg 0x24: [io 0x8000-0x80ff] [ 2.421399] pci 0000:84:00.0: reg 0x30: [mem 0x00000000-0x0003ffff pref] [ 2.421450] pci 0000:84:00.0: supports D1 D2 [ 2.423549] pci 0000:80:03.1: PCI bridge to [bus 84] [ 2.428517] pci 0000:80:03.1: bridge window [io 0x8000-0x8fff] [ 2.428520] pci 0000:80:03.1: bridge window [mem 0xc0d00000-0xc0dfffff] [ 2.428523] pci 0000:80:03.1: bridge window [mem 0xac000000-0xac1fffff 64bit pref] [ 2.428602] pci 0000:85:00.0: [1022:145a] type 00 class 0x130000 [ 2.428710] pci 0000:85:00.2: [1022:1456] type 00 class 0x108000 [ 2.428729] pci 0000:85:00.2: reg 0x18: [mem 0xc0b00000-0xc0bfffff] [ 2.428742] pci 0000:85:00.2: reg 0x24: [mem 0xc0c00000-0xc0c01fff] [ 2.428833] pci 0000:80:07.1: PCI bridge to [bus 85] [ 2.433806] pci 0000:80:07.1: bridge window [mem 0xc0b00000-0xc0cfffff] [ 2.434381] pci 0000:86:00.0: [1022:1455] type 00 class 0x130000 [ 2.434498] pci 0000:86:00.1: [1022:1468] type 00 class 0x108000 [ 2.434518] pci 0000:86:00.1: reg 0x18: [mem 0xc0900000-0xc09fffff] [ 2.434532] pci 0000:86:00.1: reg 0x24: [mem 0xc0a00000-0xc0a01fff] [ 2.434624] pci 0000:86:00.2: [1022:7901] type 00 class 0x010601 [ 2.434656] pci 0000:86:00.2: reg 0x24: [mem 0xc0a02000-0xc0a02fff] [ 2.434694] pci 0000:86:00.2: PME# supported from D3hot D3cold [ 2.434759] pci 0000:80:08.1: PCI bridge to [bus 86] [ 2.439732] pci 0000:80:08.1: bridge window [mem 0xc0900000-0xc0afffff] [ 2.439757] pci_bus 0000:80: on NUMA node 2 [ 2.439926] ACPI: PCI Root Bridge [PC03] (domain 0000 [bus c0-ff]) [ 2.446110] acpi PNP0A08:03: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI] [ 2.454319] acpi PNP0A08:03: PCIe AER handled by firmware [ 2.459754] acpi PNP0A08:03: _OSC: platform does not support [SHPCHotplug] [ 2.466694] acpi PNP0A08:03: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] [ 2.474345] acpi PNP0A08:03: FADT 
indicates ASPM is unsupported, using BIOS configuration [ 2.482714] acpi PNP0A08:03: host bridge window [mem 0x63dc0000000-0xffffffffffff window] ([0x80000000000-0xffffffffffff] ignored, not CPU addressable) [ 2.496353] PCI host bridge to bus 0000:c0 [ 2.500450] pci_bus 0000:c0: root bus resource [io 0xc000-0xffff window] [ 2.507236] pci_bus 0000:c0: root bus resource [mem 0x90000000-0xaaffffff window] [ 2.514716] pci_bus 0000:c0: root bus resource [mem 0x63dc0000000-0x7ffffffffff window] [ 2.522716] pci_bus 0000:c0: root bus resource [bus c0-ff] [ 2.528205] pci 0000:c0:00.0: [1022:1450] type 00 class 0x060000 [ 2.528276] pci 0000:c0:00.2: [1022:1451] type 00 class 0x080600 [ 2.528366] pci 0000:c0:01.0: [1022:1452] type 00 class 0x060000 [ 2.528426] pci 0000:c0:01.1: [1022:1453] type 01 class 0x060400 [ 2.528585] pci 0000:c0:01.1: PME# supported from D0 D3hot D3cold [ 2.528681] pci 0000:c0:02.0: [1022:1452] type 00 class 0x060000 [ 2.528757] pci 0000:c0:03.0: [1022:1452] type 00 class 0x060000 [ 2.528830] pci 0000:c0:04.0: [1022:1452] type 00 class 0x060000 [ 2.528908] pci 0000:c0:07.0: [1022:1452] type 00 class 0x060000 [ 2.528967] pci 0000:c0:07.1: [1022:1454] type 01 class 0x060400 [ 2.529554] pci 0000:c0:07.1: PME# supported from D0 D3hot D3cold [ 2.529633] pci 0000:c0:08.0: [1022:1452] type 00 class 0x060000 [ 2.529694] pci 0000:c0:08.1: [1022:1454] type 01 class 0x060400 [ 2.529805] pci 0000:c0:08.1: PME# supported from D0 D3hot D3cold [ 2.530003] pci 0000:c1:00.0: [1000:005f] type 00 class 0x010400 [ 2.530017] pci 0000:c1:00.0: reg 0x10: [io 0xc000-0xc0ff] [ 2.530027] pci 0000:c1:00.0: reg 0x14: [mem 0xa5500000-0xa550ffff 64bit] [ 2.530036] pci 0000:c1:00.0: reg 0x1c: [mem 0xa5400000-0xa54fffff 64bit] [ 2.530049] pci 0000:c1:00.0: reg 0x30: [mem 0xfff00000-0xffffffff pref] [ 2.530097] pci 0000:c1:00.0: supports D1 D2 [ 2.530148] pci 0000:c0:01.1: PCI bridge to [bus c1] [ 2.535122] pci 0000:c0:01.1: bridge window [io 0xc000-0xcfff] [ 2.535124] pci 0000:c0:01.1: bridge window [mem 0xa5400000-0xa55fffff] [ 2.535570] pci 0000:c2:00.0: [1022:145a] type 00 class 0x130000 [ 2.535675] pci 0000:c2:00.2: [1022:1456] type 00 class 0x108000 [ 2.535693] pci 0000:c2:00.2: reg 0x18: [mem 0xa5200000-0xa52fffff] [ 2.535706] pci 0000:c2:00.2: reg 0x24: [mem 0xa5300000-0xa5301fff] [ 2.535798] pci 0000:c0:07.1: PCI bridge to [bus c2] [ 2.540773] pci 0000:c0:07.1: bridge window [mem 0xa5200000-0xa53fffff] [ 2.540867] pci 0000:c3:00.0: [1022:1455] type 00 class 0x130000 [ 2.540983] pci 0000:c3:00.1: [1022:1468] type 00 class 0x108000 [ 2.541003] pci 0000:c3:00.1: reg 0x18: [mem 0xa5000000-0xa50fffff] [ 2.541017] pci 0000:c3:00.1: reg 0x24: [mem 0xa5100000-0xa5101fff] [ 2.541113] pci 0000:c0:08.1: PCI bridge to [bus c3] [ 2.546084] pci 0000:c0:08.1: bridge window [mem 0xa5000000-0xa51fffff] [ 2.546101] pci_bus 0000:c0: on NUMA node 3 [ 2.548196] vgaarb: device added: PCI:0000:83:00.0,decodes=io+mem,owns=io+mem,locks=none [ 2.556284] vgaarb: loaded [ 2.558995] vgaarb: bridge control possible 0000:83:00.0 [ 2.564414] SCSI subsystem initialized [ 2.568190] ACPI: bus type USB registered [ 2.572217] usbcore: registered new interface driver usbfs [ 2.577713] usbcore: registered new interface driver hub [ 2.583237] usbcore: registered new device driver usb [ 2.588610] EDAC MC: Ver: 3.0.0 [ 2.592001] PCI: Using ACPI for IRQ routing [ 2.615158] PCI: pci_cache_line_size set to 64 bytes [ 2.615301] e820: reserve RAM buffer [mem 0x0008f000-0x0008ffff] [ 2.615303] e820: reserve RAM buffer [mem 0x3732d020-0x37ffffff] [ 
2.615305] e820: reserve RAM buffer [mem 0x37346020-0x37ffffff] [ 2.615307] e820: reserve RAM buffer [mem 0x3736c020-0x37ffffff] [ 2.615308] e820: reserve RAM buffer [mem 0x37375020-0x37ffffff] [ 2.615310] e820: reserve RAM buffer [mem 0x373a7020-0x37ffffff] [ 2.615311] e820: reserve RAM buffer [mem 0x373d9020-0x37ffffff] [ 2.615312] e820: reserve RAM buffer [mem 0x4f723000-0x4fffffff] [ 2.615313] e820: reserve RAM buffer [mem 0x6cacf000-0x6fffffff] [ 2.615315] e820: reserve RAM buffer [mem 0x107f380000-0x107fffffff] [ 2.615316] e820: reserve RAM buffer [mem 0x207ff80000-0x207fffffff] [ 2.615317] e820: reserve RAM buffer [mem 0x307ff80000-0x307fffffff] [ 2.615318] e820: reserve RAM buffer [mem 0x407ff80000-0x407fffffff] [ 2.615552] NetLabel: Initializing [ 2.618963] NetLabel: domain hash size = 128 [ 2.623322] NetLabel: protocols = UNLABELED CIPSOv4 [ 2.628301] NetLabel: unlabeled traffic allowed by default [ 2.634071] hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 [ 2.639050] hpet0: 3 comparators, 32-bit 14.318180 MHz counter [ 2.647058] Switched to clocksource hpet [ 2.655657] pnp: PnP ACPI init [ 2.658733] ACPI: bus type PNP registered [ 2.662918] system 00:00: [mem 0x80000000-0x8fffffff] has been reserved [ 2.669543] system 00:00: Plug and Play ACPI device, IDs PNP0c01 (active) [ 2.669597] pnp 00:01: Plug and Play ACPI device, IDs PNP0b00 (active) [ 2.669790] pnp 00:02: Plug and Play ACPI device, IDs PNP0501 (active) [ 2.669967] pnp 00:03: Plug and Play ACPI device, IDs PNP0501 (active) [ 2.670140] pnp: PnP ACPI: found 4 devices [ 2.674248] ACPI: bus type PNP unregistered [ 2.685685] pci 0000:01:00.0: can't claim BAR 6 [mem 0xfff00000-0xffffffff pref]: no compatible bridge window [ 2.695606] pci 0000:81:00.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window [ 2.705519] pci 0000:81:00.1: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window [ 2.715436] pci 0000:c1:00.0: can't claim BAR 6 [mem 0xfff00000-0xffffffff pref]: no compatible bridge window [ 2.725371] pci 0000:00:03.1: BAR 14: assigned [mem 0xe1000000-0xe10fffff] [ 2.732258] pci 0000:01:00.0: BAR 6: assigned [mem 0xe1000000-0xe10fffff pref] [ 2.739485] pci 0000:00:03.1: PCI bridge to [bus 01] [ 2.744460] pci 0000:00:03.1: bridge window [mem 0xe1000000-0xe10fffff] [ 2.751255] pci 0000:00:03.1: bridge window [mem 0xe2000000-0xe3ffffff 64bit pref] [ 2.759005] pci 0000:00:07.1: PCI bridge to [bus 02] [ 2.763977] pci 0000:00:07.1: bridge window [mem 0xf7200000-0xf74fffff] [ 2.770775] pci 0000:00:08.1: PCI bridge to [bus 03] [ 2.775748] pci 0000:00:08.1: bridge window [mem 0xf7000000-0xf71fffff] [ 2.782547] pci_bus 0000:00: resource 4 [io 0x0000-0x03af window] [ 2.782549] pci_bus 0000:00: resource 5 [io 0x03e0-0x0cf7 window] [ 2.782551] pci_bus 0000:00: resource 6 [mem 0x000c0000-0x000c3fff window] [ 2.782552] pci_bus 0000:00: resource 7 [mem 0x000c4000-0x000c7fff window] [ 2.782554] pci_bus 0000:00: resource 8 [mem 0x000c8000-0x000cbfff window] [ 2.782556] pci_bus 0000:00: resource 9 [mem 0x000cc000-0x000cffff window] [ 2.782557] pci_bus 0000:00: resource 10 [mem 0x000d0000-0x000d3fff window] [ 2.782559] pci_bus 0000:00: resource 11 [mem 0x000d4000-0x000d7fff window] [ 2.782561] pci_bus 0000:00: resource 12 [mem 0x000d8000-0x000dbfff window] [ 2.782562] pci_bus 0000:00: resource 13 [mem 0x000dc000-0x000dffff window] [ 2.782564] pci_bus 0000:00: resource 14 [mem 0x000e0000-0x000e3fff window] [ 2.782566] pci_bus 0000:00: resource 15 [mem 0x000e4000-0x000e7fff window] [ 2.782567] pci_bus 
0000:00: resource 16 [mem 0x000e8000-0x000ebfff window] [ 2.782569] pci_bus 0000:00: resource 17 [mem 0x000ec000-0x000effff window] [ 2.782571] pci_bus 0000:00: resource 18 [mem 0x000f0000-0x000fffff window] [ 2.782572] pci_bus 0000:00: resource 19 [io 0x0d00-0x3fff window] [ 2.782574] pci_bus 0000:00: resource 20 [mem 0xe1000000-0xfebfffff window] [ 2.782576] pci_bus 0000:00: resource 21 [mem 0x10000000000-0x2bf3fffffff window] [ 2.782578] pci_bus 0000:01: resource 1 [mem 0xe1000000-0xe10fffff] [ 2.782580] pci_bus 0000:01: resource 2 [mem 0xe2000000-0xe3ffffff 64bit pref] [ 2.782581] pci_bus 0000:02: resource 1 [mem 0xf7200000-0xf74fffff] [ 2.782583] pci_bus 0000:03: resource 1 [mem 0xf7000000-0xf71fffff] [ 2.782595] pci 0000:40:07.1: PCI bridge to [bus 41] [ 2.787570] pci 0000:40:07.1: bridge window [mem 0xdb200000-0xdb4fffff] [ 2.794367] pci 0000:40:08.1: PCI bridge to [bus 42] [ 2.799341] pci 0000:40:08.1: bridge window [mem 0xdb000000-0xdb1fffff] [ 2.806137] pci_bus 0000:40: resource 4 [io 0x4000-0x7fff window] [ 2.806138] pci_bus 0000:40: resource 5 [mem 0xc6000000-0xe0ffffff window] [ 2.806140] pci_bus 0000:40: resource 6 [mem 0x2bf40000000-0x47e7fffffff window] [ 2.806142] pci_bus 0000:41: resource 1 [mem 0xdb200000-0xdb4fffff] [ 2.806144] pci_bus 0000:42: resource 1 [mem 0xdb000000-0xdb1fffff] [ 2.806176] pci 0000:80:01.1: BAR 14: assigned [mem 0xac300000-0xac3fffff] [ 2.813060] pci 0000:81:00.0: BAR 6: assigned [mem 0xac300000-0xac33ffff pref] [ 2.820286] pci 0000:81:00.1: BAR 6: assigned [mem 0xac340000-0xac37ffff pref] [ 2.827514] pci 0000:80:01.1: PCI bridge to [bus 81] [ 2.832490] pci 0000:80:01.1: bridge window [mem 0xac300000-0xac3fffff] [ 2.839284] pci 0000:80:01.1: bridge window [mem 0xac200000-0xac2fffff 64bit pref] [ 2.847036] pci 0000:82:00.0: PCI bridge to [bus 83] [ 2.852012] pci 0000:82:00.0: bridge window [mem 0xc0000000-0xc08fffff] [ 2.858806] pci 0000:82:00.0: bridge window [mem 0xab000000-0xabffffff 64bit pref] [ 2.866554] pci 0000:80:01.2: PCI bridge to [bus 82-83] [ 2.871787] pci 0000:80:01.2: bridge window [mem 0xc0000000-0xc08fffff] [ 2.878582] pci 0000:80:01.2: bridge window [mem 0xab000000-0xabffffff 64bit pref] [ 2.886331] pci 0000:84:00.0: BAR 6: no space for [mem size 0x00040000 pref] [ 2.893383] pci 0000:84:00.0: BAR 6: failed to assign [mem size 0x00040000 pref] [ 2.900785] pci 0000:80:03.1: PCI bridge to [bus 84] [ 2.905760] pci 0000:80:03.1: bridge window [io 0x8000-0x8fff] [ 2.911863] pci 0000:80:03.1: bridge window [mem 0xc0d00000-0xc0dfffff] [ 2.918657] pci 0000:80:03.1: bridge window [mem 0xac000000-0xac1fffff 64bit pref] [ 2.926409] pci 0000:80:07.1: PCI bridge to [bus 85] [ 2.931382] pci 0000:80:07.1: bridge window [mem 0xc0b00000-0xc0cfffff] [ 2.938179] pci 0000:80:08.1: PCI bridge to [bus 86] [ 2.943152] pci 0000:80:08.1: bridge window [mem 0xc0900000-0xc0afffff] [ 2.949947] pci_bus 0000:80: resource 4 [io 0x03b0-0x03df window] [ 2.949949] pci_bus 0000:80: resource 5 [mem 0x000a0000-0x000bffff window] [ 2.949951] pci_bus 0000:80: resource 6 [io 0x8000-0xbfff window] [ 2.949952] pci_bus 0000:80: resource 7 [mem 0xab000000-0xc5ffffff window] [ 2.949954] pci_bus 0000:80: resource 8 [mem 0x47e80000000-0x63dbfffffff window] [ 2.949956] pci_bus 0000:81: resource 1 [mem 0xac300000-0xac3fffff] [ 2.949958] pci_bus 0000:81: resource 2 [mem 0xac200000-0xac2fffff 64bit pref] [ 2.949960] pci_bus 0000:82: resource 1 [mem 0xc0000000-0xc08fffff] [ 2.949961] pci_bus 0000:82: resource 2 [mem 0xab000000-0xabffffff 64bit pref] [ 2.949963] pci_bus 0000:83: 
resource 1 [mem 0xc0000000-0xc08fffff] [ 2.949965] pci_bus 0000:83: resource 2 [mem 0xab000000-0xabffffff 64bit pref] [ 2.949967] pci_bus 0000:84: resource 0 [io 0x8000-0x8fff] [ 2.949969] pci_bus 0000:84: resource 1 [mem 0xc0d00000-0xc0dfffff] [ 2.949970] pci_bus 0000:84: resource 2 [mem 0xac000000-0xac1fffff 64bit pref] [ 2.949972] pci_bus 0000:85: resource 1 [mem 0xc0b00000-0xc0cfffff] [ 2.949974] pci_bus 0000:86: resource 1 [mem 0xc0900000-0xc0afffff] [ 2.949989] pci 0000:c1:00.0: BAR 6: no space for [mem size 0x00100000 pref] [ 2.957043] pci 0000:c1:00.0: BAR 6: failed to assign [mem size 0x00100000 pref] [ 2.964444] pci 0000:c0:01.1: PCI bridge to [bus c1] [ 2.969420] pci 0000:c0:01.1: bridge window [io 0xc000-0xcfff] [ 2.975522] pci 0000:c0:01.1: bridge window [mem 0xa5400000-0xa55fffff] [ 2.982319] pci 0000:c0:07.1: PCI bridge to [bus c2] [ 2.987291] pci 0000:c0:07.1: bridge window [mem 0xa5200000-0xa53fffff] [ 2.994088] pci 0000:c0:08.1: PCI bridge to [bus c3] [ 2.999062] pci 0000:c0:08.1: bridge window [mem 0xa5000000-0xa51fffff] [ 3.005858] pci_bus 0000:c0: resource 4 [io 0xc000-0xffff window] [ 3.005860] pci_bus 0000:c0: resource 5 [mem 0x90000000-0xaaffffff window] [ 3.005862] pci_bus 0000:c0: resource 6 [mem 0x63dc0000000-0x7ffffffffff window] [ 3.005864] pci_bus 0000:c1: resource 0 [io 0xc000-0xcfff] [ 3.005865] pci_bus 0000:c1: resource 1 [mem 0xa5400000-0xa55fffff] [ 3.005867] pci_bus 0000:c2: resource 1 [mem 0xa5200000-0xa53fffff] [ 3.005869] pci_bus 0000:c3: resource 1 [mem 0xa5000000-0xa51fffff] [ 3.005951] NET: Registered protocol family 2 [ 3.010986] TCP established hash table entries: 524288 (order: 10, 4194304 bytes) [ 3.019130] TCP bind hash table entries: 65536 (order: 8, 1048576 bytes) [ 3.025955] TCP: Hash tables configured (established 524288 bind 65536) [ 3.032594] TCP: reno registered [ 3.035936] UDP hash table entries: 65536 (order: 9, 2097152 bytes) [ 3.042533] UDP-Lite hash table entries: 65536 (order: 9, 2097152 bytes) [ 3.049713] NET: Registered protocol family 1 [ 3.054502] pci 0000:83:00.0: Boot video device [ 3.054539] PCI: CLS 64 bytes, default 64 [ 3.054580] Unpacking initramfs... 
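
The resource listings above show where the kernel finally placed each bridge window and BAR after firmware left some of them unassigned (the earlier "can't claim BAR 6 ... no compatible bridge window" messages refer to expansion-ROM BARs parked at 0xfff00000-0xffffffff, outside any root-bus window). A minimal sketch, assuming a live system with sysfs mounted, that prints the same assignments back out of /sys; the resource file format is standard, the script itself is illustrative only:

    # List every assigned PCI resource (BARs, ROM, bridge windows) from sysfs.
    # Each line of the "resource" file is "start end flags" in hex; all-zero
    # entries are unassigned and skipped here.
    import glob, os

    for dev in sorted(glob.glob("/sys/bus/pci/devices/*")):
        bdf = os.path.basename(dev)                    # e.g. 0000:84:00.0
        with open(os.path.join(dev, "resource")) as f:
            for idx, line in enumerate(f):
                start, end, flags = (int(x, 16) for x in line.split())
                if start == 0 and end == 0:
                    continue
                print(f"{bdf} resource {idx}: {start:#x}-{end:#x} (flags {flags:#x})")
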
[ 3.408891] Freeing initrd memory: 24744k freed [ 3.416164] AMD-Vi: IOMMU performance counters supported [ 3.421544] AMD-Vi: IOMMU performance counters supported [ 3.426901] AMD-Vi: IOMMU performance counters supported [ 3.432262] AMD-Vi: IOMMU performance counters supported [ 3.438872] iommu: Adding device 0000:00:01.0 to group 0 [ 3.444872] iommu: Adding device 0000:00:02.0 to group 1 [ 3.450890] iommu: Adding device 0000:00:03.0 to group 2 [ 3.457024] iommu: Adding device 0000:00:03.1 to group 3 [ 3.463084] iommu: Adding device 0000:00:04.0 to group 4 [ 3.469125] iommu: Adding device 0000:00:07.0 to group 5 [ 3.475096] iommu: Adding device 0000:00:07.1 to group 6 [ 3.481115] iommu: Adding device 0000:00:08.0 to group 7 [ 3.487119] iommu: Adding device 0000:00:08.1 to group 8 [ 3.493128] iommu: Adding device 0000:00:14.0 to group 9 [ 3.498465] iommu: Adding device 0000:00:14.3 to group 9 [ 3.504539] iommu: Adding device 0000:00:18.0 to group 10 [ 3.509972] iommu: Adding device 0000:00:18.1 to group 10 [ 3.515393] iommu: Adding device 0000:00:18.2 to group 10 [ 3.520818] iommu: Adding device 0000:00:18.3 to group 10 [ 3.526243] iommu: Adding device 0000:00:18.4 to group 10 [ 3.531667] iommu: Adding device 0000:00:18.5 to group 10 [ 3.537094] iommu: Adding device 0000:00:18.6 to group 10 [ 3.542520] iommu: Adding device 0000:00:18.7 to group 10 [ 3.548697] iommu: Adding device 0000:00:19.0 to group 11 [ 3.554128] iommu: Adding device 0000:00:19.1 to group 11 [ 3.559552] iommu: Adding device 0000:00:19.2 to group 11 [ 3.564975] iommu: Adding device 0000:00:19.3 to group 11 [ 3.570400] iommu: Adding device 0000:00:19.4 to group 11 [ 3.575829] iommu: Adding device 0000:00:19.5 to group 11 [ 3.581254] iommu: Adding device 0000:00:19.6 to group 11 [ 3.586682] iommu: Adding device 0000:00:19.7 to group 11 [ 3.592802] iommu: Adding device 0000:00:1a.0 to group 12 [ 3.598228] iommu: Adding device 0000:00:1a.1 to group 12 [ 3.603647] iommu: Adding device 0000:00:1a.2 to group 12 [ 3.609083] iommu: Adding device 0000:00:1a.3 to group 12 [ 3.614512] iommu: Adding device 0000:00:1a.4 to group 12 [ 3.619936] iommu: Adding device 0000:00:1a.5 to group 12 [ 3.625361] iommu: Adding device 0000:00:1a.6 to group 12 [ 3.630784] iommu: Adding device 0000:00:1a.7 to group 12 [ 3.636959] iommu: Adding device 0000:00:1b.0 to group 13 [ 3.642382] iommu: Adding device 0000:00:1b.1 to group 13 [ 3.647809] iommu: Adding device 0000:00:1b.2 to group 13 [ 3.653236] iommu: Adding device 0000:00:1b.3 to group 13 [ 3.658660] iommu: Adding device 0000:00:1b.4 to group 13 [ 3.664086] iommu: Adding device 0000:00:1b.5 to group 13 [ 3.669507] iommu: Adding device 0000:00:1b.6 to group 13 [ 3.674937] iommu: Adding device 0000:00:1b.7 to group 13 [ 3.681094] iommu: Adding device 0000:01:00.0 to group 14 [ 3.687226] iommu: Adding device 0000:02:00.0 to group 15 [ 3.693321] iommu: Adding device 0000:02:00.2 to group 16 [ 3.699412] iommu: Adding device 0000:02:00.3 to group 17 [ 3.705508] iommu: Adding device 0000:03:00.0 to group 18 [ 3.711612] iommu: Adding device 0000:03:00.1 to group 19 [ 3.717720] iommu: Adding device 0000:40:01.0 to group 20 [ 3.723794] iommu: Adding device 0000:40:02.0 to group 21 [ 3.729857] iommu: Adding device 0000:40:03.0 to group 22 [ 3.735968] iommu: Adding device 0000:40:04.0 to group 23 [ 3.742055] iommu: Adding device 0000:40:07.0 to group 24 [ 3.748035] iommu: Adding device 0000:40:07.1 to group 25 [ 3.754046] iommu: Adding device 0000:40:08.0 to group 26 [ 3.760104] iommu: Adding device 
0000:40:08.1 to group 27 [ 3.766137] iommu: Adding device 0000:41:00.0 to group 28 [ 3.772196] iommu: Adding device 0000:41:00.2 to group 29 [ 3.778233] iommu: Adding device 0000:41:00.3 to group 30 [ 3.784227] iommu: Adding device 0000:42:00.0 to group 31 [ 3.790280] iommu: Adding device 0000:42:00.1 to group 32 [ 3.796357] iommu: Adding device 0000:80:01.0 to group 33 [ 3.802378] iommu: Adding device 0000:80:01.1 to group 34 [ 3.808538] iommu: Adding device 0000:80:01.2 to group 35 [ 3.814599] iommu: Adding device 0000:80:02.0 to group 36 [ 3.820660] iommu: Adding device 0000:80:03.0 to group 37 [ 3.826749] iommu: Adding device 0000:80:03.1 to group 38 [ 3.832783] iommu: Adding device 0000:80:04.0 to group 39 [ 3.838890] iommu: Adding device 0000:80:07.0 to group 40 [ 3.844911] iommu: Adding device 0000:80:07.1 to group 41 [ 3.850967] iommu: Adding device 0000:80:08.0 to group 42 [ 3.857007] iommu: Adding device 0000:80:08.1 to group 43 [ 3.863028] iommu: Adding device 0000:81:00.0 to group 44 [ 3.868475] iommu: Adding device 0000:81:00.1 to group 44 [ 3.874517] iommu: Adding device 0000:82:00.0 to group 45 [ 3.879933] iommu: Adding device 0000:83:00.0 to group 45 [ 3.885980] iommu: Adding device 0000:84:00.0 to group 46 [ 3.892034] iommu: Adding device 0000:85:00.0 to group 47 [ 3.898106] iommu: Adding device 0000:85:00.2 to group 48 [ 3.904144] iommu: Adding device 0000:86:00.0 to group 49 [ 3.910175] iommu: Adding device 0000:86:00.1 to group 50 [ 3.916243] iommu: Adding device 0000:86:00.2 to group 51 [ 3.922304] iommu: Adding device 0000:c0:01.0 to group 52 [ 3.928320] iommu: Adding device 0000:c0:01.1 to group 53 [ 3.934324] iommu: Adding device 0000:c0:02.0 to group 54 [ 3.940401] iommu: Adding device 0000:c0:03.0 to group 55 [ 3.946449] iommu: Adding device 0000:c0:04.0 to group 56 [ 3.952517] iommu: Adding device 0000:c0:07.0 to group 57 [ 3.958568] iommu: Adding device 0000:c0:07.1 to group 58 [ 3.964606] iommu: Adding device 0000:c0:08.0 to group 59 [ 3.970651] iommu: Adding device 0000:c0:08.1 to group 60 [ 3.979106] iommu: Adding device 0000:c1:00.0 to group 61 [ 3.985196] iommu: Adding device 0000:c2:00.0 to group 62 [ 3.991250] iommu: Adding device 0000:c2:00.2 to group 63 [ 3.997311] iommu: Adding device 0000:c3:00.0 to group 64 [ 4.003367] iommu: Adding device 0000:c3:00.1 to group 65 [ 4.008966] AMD-Vi: Found IOMMU at 0000:00:00.2 cap 0x40 [ 4.014290] AMD-Vi: Extended features (0xf77ef22294ada): [ 4.019611] PPR NX GT IA GA PC GA_vAPIC [ 4.023744] AMD-Vi: Found IOMMU at 0000:40:00.2 cap 0x40 [ 4.029067] AMD-Vi: Extended features (0xf77ef22294ada): [ 4.034388] PPR NX GT IA GA PC GA_vAPIC [ 4.038529] AMD-Vi: Found IOMMU at 0000:80:00.2 cap 0x40 [ 4.043852] AMD-Vi: Extended features (0xf77ef22294ada): [ 4.049172] PPR NX GT IA GA PC GA_vAPIC [ 4.053316] AMD-Vi: Found IOMMU at 0000:c0:00.2 cap 0x40 [ 4.058638] AMD-Vi: Extended features (0xf77ef22294ada): [ 4.063959] PPR NX GT IA GA PC GA_vAPIC [ 4.068102] AMD-Vi: Interrupt remapping enabled [ 4.072644] AMD-Vi: virtual APIC enabled [ 4.076632] pci 0000:00:00.2: irq 26 for MSI/MSI-X [ 4.076725] pci 0000:40:00.2: irq 27 for MSI/MSI-X [ 4.076807] pci 0000:80:00.2: irq 28 for MSI/MSI-X [ 4.076885] pci 0000:c0:00.2: irq 29 for MSI/MSI-X [ 4.076939] AMD-Vi: Lazy IO/TLB flushing enabled [ 4.083264] perf: AMD NB counters detected [ 4.087408] perf: AMD LLC counters detected [ 4.097543] sha1_ssse3: Using SHA-NI optimized SHA-1 implementation [ 4.103895] sha256_ssse3: Using SHA-256-NI optimized SHA-256 implementation [ 4.112469] futex 
hash table entries: 32768 (order: 9, 2097152 bytes) [ 4.119109] Initialise system trusted keyring [ 4.123510] audit: initializing netlink socket (disabled) [ 4.128927] type=2000 audit(1596901457.271:1): initialized [ 4.159764] HugeTLB registered 1 GB page size, pre-allocated 0 pages [ 4.166126] HugeTLB registered 2 MB page size, pre-allocated 0 pages [ 4.173728] zpool: loaded [ 4.176359] zbud: loaded [ 4.179273] VFS: Disk quotas dquot_6.6.0 [ 4.183305] Dquot-cache hash table entries: 512 (order 0, 4096 bytes) [ 4.190114] msgmni has been set to 32768 [ 4.194146] Key type big_key registered [ 4.197994] SELinux: Registering netfilter hooks [ 4.200397] NET: Registered protocol family 38 [ 4.204855] Key type asymmetric registered [ 4.208965] Asymmetric key parser 'x509' registered [ 4.213898] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 248) [ 4.221444] io scheduler noop registered [ 4.225383] io scheduler deadline registered (default) [ 4.230566] io scheduler cfq registered [ 4.234411] io scheduler mq-deadline registered [ 4.238953] io scheduler kyber registered [ 4.243260] pcieport 0000:00:03.1: irq 30 for MSI/MSI-X [ 4.244160] pcieport 0000:00:07.1: irq 31 for MSI/MSI-X [ 4.244390] pcieport 0000:00:08.1: irq 33 for MSI/MSI-X [ 4.245366] pcieport 0000:40:07.1: irq 34 for MSI/MSI-X [ 4.246158] pcieport 0000:40:08.1: irq 36 for MSI/MSI-X [ 4.246869] pcieport 0000:80:01.1: irq 37 for MSI/MSI-X [ 4.247133] pcieport 0000:80:01.2: irq 38 for MSI/MSI-X [ 4.247820] pcieport 0000:80:03.1: irq 39 for MSI/MSI-X [ 4.248120] pcieport 0000:80:07.1: irq 41 for MSI/MSI-X [ 4.248848] pcieport 0000:80:08.1: irq 43 for MSI/MSI-X [ 4.249196] pcieport 0000:c0:01.1: irq 44 for MSI/MSI-X [ 4.249886] pcieport 0000:c0:07.1: irq 46 for MSI/MSI-X [ 4.250135] pcieport 0000:c0:08.1: irq 48 for MSI/MSI-X [ 4.250248] pcieport 0000:00:03.1: Signaling PME through PCIe PME interrupt [ 4.257215] pci 0000:01:00.0: Signaling PME through PCIe PME interrupt [ 4.263751] pcie_pme 0000:00:03.1:pcie001: service driver pcie_pme loaded [ 4.263762] pcieport 0000:00:07.1: Signaling PME through PCIe PME interrupt [ 4.270725] pci 0000:02:00.0: Signaling PME through PCIe PME interrupt [ 4.277260] pci 0000:02:00.2: Signaling PME through PCIe PME interrupt [ 4.283796] pci 0000:02:00.3: Signaling PME through PCIe PME interrupt [ 4.290330] pcie_pme 0000:00:07.1:pcie001: service driver pcie_pme loaded [ 4.290345] pcieport 0000:00:08.1: Signaling PME through PCIe PME interrupt [ 4.297307] pci 0000:03:00.0: Signaling PME through PCIe PME interrupt [ 4.303840] pci 0000:03:00.1: Signaling PME through PCIe PME interrupt [ 4.310377] pcie_pme 0000:00:08.1:pcie001: service driver pcie_pme loaded [ 4.310396] pcieport 0000:40:07.1: Signaling PME through PCIe PME interrupt [ 4.317361] pci 0000:41:00.0: Signaling PME through PCIe PME interrupt [ 4.323895] pci 0000:41:00.2: Signaling PME through PCIe PME interrupt [ 4.330433] pci 0000:41:00.3: Signaling PME through PCIe PME interrupt [ 4.336968] pcie_pme 0000:40:07.1:pcie001: service driver pcie_pme loaded [ 4.336982] pcieport 0000:40:08.1: Signaling PME through PCIe PME interrupt [ 4.343951] pci 0000:42:00.0: Signaling PME through PCIe PME interrupt [ 4.350487] pci 0000:42:00.1: Signaling PME through PCIe PME interrupt [ 4.357023] pcie_pme 0000:40:08.1:pcie001: service driver pcie_pme loaded [ 4.357039] pcieport 0000:80:01.1: Signaling PME through PCIe PME interrupt [ 4.364007] pci 0000:81:00.0: Signaling PME through PCIe PME interrupt [ 4.370543] pci 0000:81:00.1: Signaling PME through 
PCIe PME interrupt [ 4.377079] pcie_pme 0000:80:01.1:pcie001: service driver pcie_pme loaded [ 4.377101] pcieport 0000:80:01.2: Signaling PME through PCIe PME interrupt [ 4.384064] pci 0000:82:00.0: Signaling PME through PCIe PME interrupt [ 4.390599] pci 0000:83:00.0: Signaling PME through PCIe PME interrupt [ 4.397135] pcie_pme 0000:80:01.2:pcie001: service driver pcie_pme loaded [ 4.397149] pcieport 0000:80:03.1: Signaling PME through PCIe PME interrupt [ 4.404117] pci 0000:84:00.0: Signaling PME through PCIe PME interrupt [ 4.410655] pcie_pme 0000:80:03.1:pcie001: service driver pcie_pme loaded [ 4.410672] pcieport 0000:80:07.1: Signaling PME through PCIe PME interrupt [ 4.417638] pci 0000:85:00.0: Signaling PME through PCIe PME interrupt [ 4.424174] pci 0000:85:00.2: Signaling PME through PCIe PME interrupt [ 4.430709] pcie_pme 0000:80:07.1:pcie001: service driver pcie_pme loaded [ 4.430726] pcieport 0000:80:08.1: Signaling PME through PCIe PME interrupt [ 4.437695] pci 0000:86:00.0: Signaling PME through PCIe PME interrupt [ 4.444229] pci 0000:86:00.1: Signaling PME through PCIe PME interrupt [ 4.450763] pci 0000:86:00.2: Signaling PME through PCIe PME interrupt [ 4.457300] pcie_pme 0000:80:08.1:pcie001: service driver pcie_pme loaded [ 4.457316] pcieport 0000:c0:01.1: Signaling PME through PCIe PME interrupt [ 4.464284] pci 0000:c1:00.0: Signaling PME through PCIe PME interrupt [ 4.470820] pcie_pme 0000:c0:01.1:pcie001: service driver pcie_pme loaded [ 4.470834] pcieport 0000:c0:07.1: Signaling PME through PCIe PME interrupt [ 4.477806] pci 0000:c2:00.0: Signaling PME through PCIe PME interrupt [ 4.484339] pci 0000:c2:00.2: Signaling PME through PCIe PME interrupt [ 4.490877] pcie_pme 0000:c0:07.1:pcie001: service driver pcie_pme loaded [ 4.490890] pcieport 0000:c0:08.1: Signaling PME through PCIe PME interrupt [ 4.497859] pci 0000:c3:00.0: Signaling PME through PCIe PME interrupt [ 4.504396] pci 0000:c3:00.1: Signaling PME through PCIe PME interrupt [ 4.510932] pcie_pme 0000:c0:08.1:pcie001: service driver pcie_pme loaded [ 4.510951] pci_hotplug: PCI Hot Plug PCI Core version: 0.5 [ 4.516532] pciehp: PCI Express Hot Plug Controller Driver version: 0.4 [ 4.523203] shpchp: Standard Hot Plug PCI Controller Driver version: 0.4 [ 4.530012] efifb: probing for efifb [ 4.533604] efifb: framebuffer at 0xab000000, mapped to 0xffffbb5659800000, using 3072k, total 3072k [ 4.542737] efifb: mode is 1024x768x32, linelength=4096, pages=1 [ 4.548752] efifb: scrolling: redraw [ 4.552341] efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 [ 4.573654] Console: switching to colour frame buffer device 128x48 [ 4.595375] fb0: EFI VGA frame buffer device [ 4.599756] input: Power Button as /devices/LNXSYSTM:00/device:00/PNP0C0C:00/input/input0 [ 4.607944] ACPI: Power Button [PWRB] [ 4.611665] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input1 [ 4.619069] ACPI: Power Button [PWRF] [ 4.623924] GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC. 
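
The "iommu: Adding device ... to group N" messages earlier in this log record how AMD-Vi partitions the PCI devices into isolation groups (groups 0-65 here), which is what later determines passthrough/VFIO granularity. A minimal sketch, assuming the same mapping is exposed under /sys/kernel/iommu_groups on the running system, that reconstructs it; illustrative only:

    # Print each IOMMU group and the PCI devices assigned to it, mirroring the
    # "Adding device ... to group N" boot messages.
    import glob, os

    for group in sorted(glob.glob("/sys/kernel/iommu_groups/*"),
                        key=lambda p: int(os.path.basename(p))):
        devices = sorted(os.listdir(os.path.join(group, "devices")))
        print(f"group {os.path.basename(group)}: {', '.join(devices)}")
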
[ 4.631403] Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled [ 4.658600] 00:02: ttyS1 at I/O 0x2f8 (irq = 3) is a 16550A [ 4.685141] 00:03: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A [ 4.691209] Non-volatile memory driver v1.3 [ 4.695432] Linux agpgart interface v0.103 [ 4.701161] crash memory driver: version 1.1 [ 4.705661] rdac: device handler registered [ 4.709905] hp_sw: device handler registered [ 4.714192] emc: device handler registered [ 4.718445] alua: device handler registered [ 4.722671] libphy: Fixed MDIO Bus: probed [ 4.726833] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver [ 4.733371] ehci-pci: EHCI PCI platform driver [ 4.737841] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver [ 4.744031] ohci-pci: OHCI PCI platform driver [ 4.748498] uhci_hcd: USB Universal Host Controller Interface driver [ 4.754969] xhci_hcd 0000:02:00.3: xHCI Host Controller [ 4.760260] xhci_hcd 0000:02:00.3: new USB bus registered, assigned bus number 1 [ 4.767767] xhci_hcd 0000:02:00.3: hcc params 0x0270f665 hci version 0x100 quirks 0x00000410 [ 4.776246] xhci_hcd 0000:02:00.3: irq 50 for MSI/MSI-X [ 4.776270] xhci_hcd 0000:02:00.3: irq 51 for MSI/MSI-X [ 4.776289] xhci_hcd 0000:02:00.3: irq 52 for MSI/MSI-X [ 4.776308] xhci_hcd 0000:02:00.3: irq 53 for MSI/MSI-X [ 4.776327] xhci_hcd 0000:02:00.3: irq 54 for MSI/MSI-X [ 4.776347] xhci_hcd 0000:02:00.3: irq 55 for MSI/MSI-X [ 4.776366] xhci_hcd 0000:02:00.3: irq 56 for MSI/MSI-X [ 4.776386] xhci_hcd 0000:02:00.3: irq 57 for MSI/MSI-X [ 4.776504] usb usb1: New USB device found, idVendor=1d6b, idProduct=0002 [ 4.783301] usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1 [ 4.790530] usb usb1: Product: xHCI Host Controller [ 4.795417] usb usb1: Manufacturer: Linux 3.10.0-957.27.2.el7_lustre.pl2.x86_64 xhci-hcd [ 4.803513] usb usb1: SerialNumber: 0000:02:00.3 [ 4.808244] hub 1-0:1.0: USB hub found [ 4.812008] hub 1-0:1.0: 2 ports detected [ 4.816259] xhci_hcd 0000:02:00.3: xHCI Host Controller [ 4.821561] xhci_hcd 0000:02:00.3: new USB bus registered, assigned bus number 2 [ 4.828980] usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
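
The repeated "irq NN for MSI/MSI-X" lines (for the IOMMUs, the PCIe root ports and now the xHCI controllers) are the kernel handing out MSI/MSI-X vectors as each driver binds. A minimal sketch, assuming the standard /proc/interrupts layout, that tallies how many such vectors each handler ended up with; purely illustrative:

    # Count MSI/MSI-X vectors per handler name from /proc/interrupts.
    # Note: some drivers (e.g. mpt3sas further down) register one uniquely
    # named handler per vector, so they appear as many single-vector entries.
    from collections import Counter

    vectors = Counter()
    with open("/proc/interrupts") as f:
        next(f)                                    # skip the per-CPU header row
        for line in f:
            fields = line.split()
            if fields and fields[0].rstrip(":").isdigit() and "PCI-MSI" in line:
                vectors[fields[-1]] += 1           # last field: handler name
    for name, count in vectors.most_common():
        print(f"{name}: {count} vector(s)")
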
[ 4.837102] usb usb2: New USB device found, idVendor=1d6b, idProduct=0003 [ 4.843900] usb usb2: New USB device strings: Mfr=3, Product=2, SerialNumber=1 [ 4.851129] usb usb2: Product: xHCI Host Controller [ 4.856017] usb usb2: Manufacturer: Linux 3.10.0-957.27.2.el7_lustre.pl2.x86_64 xhci-hcd [ 4.864111] usb usb2: SerialNumber: 0000:02:00.3 [ 4.868837] hub 2-0:1.0: USB hub found [ 4.872598] hub 2-0:1.0: 2 ports detected [ 4.876910] xhci_hcd 0000:41:00.3: xHCI Host Controller [ 4.882214] xhci_hcd 0000:41:00.3: new USB bus registered, assigned bus number 3 [ 4.889722] xhci_hcd 0000:41:00.3: hcc params 0x0270f665 hci version 0x100 quirks 0x00000410 [ 4.898208] xhci_hcd 0000:41:00.3: irq 59 for MSI/MSI-X [ 4.898228] xhci_hcd 0000:41:00.3: irq 60 for MSI/MSI-X [ 4.898248] xhci_hcd 0000:41:00.3: irq 61 for MSI/MSI-X [ 4.898267] xhci_hcd 0000:41:00.3: irq 62 for MSI/MSI-X [ 4.898288] xhci_hcd 0000:41:00.3: irq 63 for MSI/MSI-X [ 4.898313] xhci_hcd 0000:41:00.3: irq 64 for MSI/MSI-X [ 4.898331] xhci_hcd 0000:41:00.3: irq 65 for MSI/MSI-X [ 4.898351] xhci_hcd 0000:41:00.3: irq 66 for MSI/MSI-X [ 4.898502] usb usb3: New USB device found, idVendor=1d6b, idProduct=0002 [ 4.905297] usb usb3: New USB device strings: Mfr=3, Product=2, SerialNumber=1 [ 4.912524] usb usb3: Product: xHCI Host Controller [ 4.917414] usb usb3: Manufacturer: Linux 3.10.0-957.27.2.el7_lustre.pl2.x86_64 xhci-hcd [ 4.925506] usb usb3: SerialNumber: 0000:41:00.3 [ 4.930240] hub 3-0:1.0: USB hub found [ 4.934005] hub 3-0:1.0: 2 ports detected [ 4.938264] xhci_hcd 0000:41:00.3: xHCI Host Controller [ 4.943538] xhci_hcd 0000:41:00.3: new USB bus registered, assigned bus number 4 [ 4.950977] usb usb4: We don't know the algorithms for LPM for this host, disabling LPM. [ 4.959089] usb usb4: New USB device found, idVendor=1d6b, idProduct=0003 [ 4.965889] usb usb4: New USB device strings: Mfr=3, Product=2, SerialNumber=1 [ 4.973115] usb usb4: Product: xHCI Host Controller [ 4.978002] usb usb4: Manufacturer: Linux 3.10.0-957.27.2.el7_lustre.pl2.x86_64 xhci-hcd [ 4.986108] usb usb4: SerialNumber: 0000:41:00.3 [ 4.990820] hub 4-0:1.0: USB hub found [ 4.994585] hub 4-0:1.0: 2 ports detected [ 4.998855] usbcore: registered new interface driver usbserial_generic [ 5.005402] usbserial: USB Serial support registered for generic [ 5.011450] i8042: PNP: No PS/2 controller found. Probing ports directly. 
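
With all four xHCI root hubs registered, the hub and device probes that follow fill in the USB topology (the Microchip hubs appear a little further down). A minimal sketch, assuming /sys/bus/usb is available, that lists the enumerated devices with their vendor/product IDs; illustrative only:

    # List USB devices from sysfs with vendor:product IDs, similar to what the
    # "New USB device found, idVendor=..., idProduct=..." messages report.
    import glob, os

    def attr(dev, name):
        # Read an optional sysfs attribute (e.g. idVendor, product).
        try:
            with open(os.path.join(dev, name)) as f:
                return f.read().strip()
        except FileNotFoundError:
            return "?"

    for dev in sorted(glob.glob("/sys/bus/usb/devices/*")):
        if not os.path.exists(os.path.join(dev, "idVendor")):
            continue          # interface nodes such as 1-1:1.0 carry no idVendor
        print(f"{os.path.basename(dev)}: "
              f"{attr(dev, 'idVendor')}:{attr(dev, 'idProduct')} {attr(dev, 'product')}")
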
[ 5.250122] usb 3-1: new high-speed USB device number 2 using xhci_hcd [ 5.380112] usb 3-1: New USB device found, idVendor=1604, idProduct=10c0 [ 5.386821] usb 3-1: New USB device strings: Mfr=0, Product=0, SerialNumber=0 [ 5.399126] hub 3-1:1.0: USB hub found [ 5.403111] hub 3-1:1.0: 4 ports detected [ 6.049334] i8042: No controller found [ 6.053102] sched: RT throttling activated [ 6.053110] tsc: Refined TSC clocksource calibration: 1996.249 MHz [ 6.053254] mousedev: PS/2 mouse device common for all mice [ 6.053534] rtc_cmos 00:01: RTC can wake from S4 [ 6.053906] rtc_cmos 00:01: rtc core: registered rtc_cmos as rtc0 [ 6.054010] rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram, hpet irqs [ 6.054067] cpuidle: using governor menu [ 6.054363] EFI Variables Facility v0.08 2004-May-17 [ 6.078795] hidraw: raw HID events driver (C) Jiri Kosina [ 6.078919] usbcore: registered new interface driver usbhid [ 6.078920] usbhid: USB HID core driver [ 6.079051] drop_monitor: Initializing network drop monitor service [ 6.079216] TCP: cubic registered [ 6.079221] Initializing XFRM netlink socket [ 6.079437] NET: Registered protocol family 10 [ 6.079993] NET: Registered protocol family 17 [ 6.079997] mpls_gso: MPLS GSO support [ 6.081064] mce: Using 23 MCE banks [ 6.081122] microcode: CPU0: patch_level=0x08001250 [ 6.081141] microcode: CPU1: patch_level=0x08001250 [ 6.081148] microcode: CPU2: patch_level=0x08001250 [ 6.081163] microcode: CPU3: patch_level=0x08001250 [ 6.081177] microcode: CPU4: patch_level=0x08001250 [ 6.081192] microcode: CPU5: patch_level=0x08001250 [ 6.081208] microcode: CPU6: patch_level=0x08001250 [ 6.081224] microcode: CPU7: patch_level=0x08001250 [ 6.081236] microcode: CPU8: patch_level=0x08001250 [ 6.081246] microcode: CPU9: patch_level=0x08001250 [ 6.081256] microcode: CPU10: patch_level=0x08001250 [ 6.081268] microcode: CPU11: patch_level=0x08001250 [ 6.081279] microcode: CPU12: patch_level=0x08001250 [ 6.081290] microcode: CPU13: patch_level=0x08001250 [ 6.081300] microcode: CPU14: patch_level=0x08001250 [ 6.081312] microcode: CPU15: patch_level=0x08001250 [ 6.084379] microcode: CPU16: patch_level=0x08001250 [ 6.084391] microcode: CPU17: patch_level=0x08001250 [ 6.084403] microcode: CPU18: patch_level=0x08001250 [ 6.084415] microcode: CPU19: patch_level=0x08001250 [ 6.084425] microcode: CPU20: patch_level=0x08001250 [ 6.084436] microcode: CPU21: patch_level=0x08001250 [ 6.084446] microcode: CPU22: patch_level=0x08001250 [ 6.084458] microcode: CPU23: patch_level=0x08001250 [ 6.084468] microcode: CPU24: patch_level=0x08001250 [ 6.084478] microcode: CPU25: patch_level=0x08001250 [ 6.084489] microcode: CPU26: patch_level=0x08001250 [ 6.084500] microcode: CPU27: patch_level=0x08001250 [ 6.084510] microcode: CPU28: patch_level=0x08001250 [ 6.084521] microcode: CPU29: patch_level=0x08001250 [ 6.084532] microcode: CPU30: patch_level=0x08001250 [ 6.084543] microcode: CPU31: patch_level=0x08001250 [ 6.084553] microcode: CPU32: patch_level=0x08001250 [ 6.084561] microcode: CPU33: patch_level=0x08001250 [ 6.084569] microcode: CPU34: patch_level=0x08001250 [ 6.084577] microcode: CPU35: patch_level=0x08001250 [ 6.084588] microcode: CPU36: patch_level=0x08001250 [ 6.084599] microcode: CPU37: patch_level=0x08001250 [ 6.084609] microcode: CPU38: patch_level=0x08001250 [ 6.084620] microcode: CPU39: patch_level=0x08001250 [ 6.084628] microcode: CPU40: patch_level=0x08001250 [ 6.084639] microcode: CPU41: patch_level=0x08001250 [ 6.084647] microcode: CPU42: patch_level=0x08001250 [ 
6.084655] microcode: CPU43: patch_level=0x08001250 [ 6.084666] microcode: CPU44: patch_level=0x08001250 [ 6.084677] microcode: CPU45: patch_level=0x08001250 [ 6.084687] microcode: CPU46: patch_level=0x08001250 [ 6.084698] microcode: CPU47: patch_level=0x08001250 [ 6.084751] microcode: Microcode Update Driver: v2.01 , Peter Oruba [ 6.084917] PM: Hibernation image not present or could not be loaded. [ 6.084921] Loading compiled-in X.509 certificates [ 6.084946] Loaded X.509 cert 'CentOS Linux kpatch signing key: ea0413152cde1d98ebdca3fe6f0230904c9ef717' [ 6.084958] Loaded X.509 cert 'CentOS Linux Driver update signing key: 7f421ee0ab69461574bb358861dbe77762a4201b' [ 6.085139] usb 3-1.1: new high-speed USB device number 3 using xhci_hcd [ 6.085345] Loaded X.509 cert 'CentOS Linux kernel signing key: 468656045a39b52ff2152c315f6198c3e658f24d' [ 6.085359] registered taskstats version 1 [ 6.087614] Key type trusted registered [ 6.089207] Key type encrypted registered [ 6.089259] IMA: No TPM chip found, activating TPM-bypass! (rc=-19) [ 6.091593] Magic number: 0:887:739 [ 6.091779] memory memory1240: hash matches [ 6.091815] memory memory453: hash matches [ 6.098214] rtc_cmos 00:01: setting system clock to 2020-08-08 15:44:24 UTC (1596901464) [ 6.477928] Switched to clocksource tsc [ 6.482778] Freeing unused kernel memory: 1876k freed [ 6.488096] Write protecting the kernel read-only data: 12288k [ 6.489139] usb 3-1.1: New USB device found, idVendor=1604, idProduct=10c0 [ 6.489141] usb 3-1.1: New USB device strings: Mfr=0, Product=0, SerialNumber=0 [ 6.509508] Freeing unused kernel memory: 504k freed [ 6.515860] Freeing unused kernel memory: 596k freed [ 6.519159] hub 3-1.1:1.0: USB hub found [ 6.519508] hub 3-1.1:1.0: 4 ports detected [ 6.579683] systemd[1]: systemd 219 running in system mode. (+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 -SECCOMP +BLKID +ELFUTILS +KMOD +IDN) [ 6.583146] usb 3-1.4: new high-speed USB device number 4 using xhci_hcd [ 6.593145] usb 1-1: new high-speed USB device number 2 using xhci_hcd [ 6.612035] systemd[1]: Detected architecture x86-64. [ 6.617093] systemd[1]: Running in initial RAM disk. [ 6.630295] systemd[1]: Set hostname to . [ 6.657149] usb 3-1.4: New USB device found, idVendor=1604, idProduct=10c0 [ 6.664026] usb 3-1.4: New USB device strings: Mfr=0, Product=0, SerialNumber=0 [ 6.671852] systemd[1]: Reached target Timers. [ 6.679160] hub 3-1.4:1.0: USB hub found [ 6.683387] hub 3-1.4:1.0: 4 ports detected [ 6.688310] systemd[1]: Reached target Local File Systems. [ 6.699206] systemd[1]: Reached target Swap. [ 6.708406] systemd[1]: Created slice Root Slice. [ 6.719030] usb 1-1: New USB device found, idVendor=0424, idProduct=2744 [ 6.725734] usb 1-1: New USB device strings: Mfr=1, Product=2, SerialNumber=0 [ 6.732864] usb 1-1: Product: USB2734 [ 6.736531] usb 1-1: Manufacturer: Microchip Tech [ 6.741349] systemd[1]: Listening on Journal Socket. [ 6.749229] hub 1-1:1.0: USB hub found [ 6.754441] systemd[1]: Listening on udev Control Socket. [ 6.759853] hub 1-1:1.0: 4 ports detected [ 6.769258] systemd[1]: Created slice System Slice. [ 6.780198] systemd[1]: Reached target Slices. [ 6.789778] systemd[1]: Starting Create list of required static device nodes for the current kernel... [ 6.807654] systemd[1]: Starting dracut cmdline hook... [ 6.817749] systemd[1]: Starting Setup Virtual Console... [ 6.827622] systemd[1]: Starting Journal Service... 
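
Every core reports the same patch_level (0x08001250) before the microcode update driver loads. A minimal sketch, assuming the x86 "microcode" field in /proc/cpuinfo, that summarizes the revisions across CPUs the same way; illustrative only:

    # Summarize distinct microcode revisions across all CPUs from /proc/cpuinfo.
    from collections import Counter

    revisions = Counter()
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("microcode"):
                revisions[line.split(":", 1)[1].strip()] += 1
    for rev, cpus in revisions.most_common():
        print(f"microcode {rev}: {cpus} CPU(s)")
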
[ 6.829288] usb 2-1: new SuperSpeed USB device number 2 using xhci_hcd [ 6.843408] usb 2-1: New USB device found, idVendor=0424, idProduct=5744 [ 6.843409] usb 2-1: New USB device strings: Mfr=2, Product=3, SerialNumber=0 [ 6.843410] usb 2-1: Product: USB5734 [ 6.843411] usb 2-1: Manufacturer: Microchip Tech [ 6.845183] hub 2-1:1.0: USB hub found [ 6.845530] hub 2-1:1.0: 4 ports detected [ 6.846601] usb: port power management may be unreliable [ 6.893350] systemd[1]: Listening on udev Kernel Socket. [ 6.904211] systemd[1]: Reached target Sockets. [ 6.913613] systemd[1]: Starting Apply Kernel Variables... [ 6.923456] systemd[1]: Started Journal Service. [ 7.067774] pps_core: LinuxPPS API ver. 1 registered [ 7.072753] pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti [ 7.085088] PTP clock support registered [ 7.092218] mlx_compat: loading out-of-tree module taints kernel. [ 7.092233] megasas: 07.705.02.00-rh1 [ 7.092534] megaraid_sas 0000:c1:00.0: FW now in Ready state [ 7.092537] megaraid_sas 0000:c1:00.0: 64 bit DMA mask and 32 bit consistent mask [ 7.092888] megaraid_sas 0000:c1:00.0: irq 68 for MSI/MSI-X [ 7.092914] megaraid_sas 0000:c1:00.0: irq 69 for MSI/MSI-X [ 7.092942] megaraid_sas 0000:c1:00.0: irq 70 for MSI/MSI-X [ 7.092965] megaraid_sas 0000:c1:00.0: irq 71 for MSI/MSI-X [ 7.092989] megaraid_sas 0000:c1:00.0: irq 72 for MSI/MSI-X [ 7.093012] megaraid_sas 0000:c1:00.0: irq 73 for MSI/MSI-X [ 7.093036] megaraid_sas 0000:c1:00.0: irq 74 for MSI/MSI-X [ 7.093059] megaraid_sas 0000:c1:00.0: irq 75 for MSI/MSI-X [ 7.093081] megaraid_sas 0000:c1:00.0: irq 76 for MSI/MSI-X [ 7.093104] megaraid_sas 0000:c1:00.0: irq 77 for MSI/MSI-X [ 7.093126] megaraid_sas 0000:c1:00.0: irq 78 for MSI/MSI-X [ 7.093157] megaraid_sas 0000:c1:00.0: irq 79 for MSI/MSI-X [ 7.093183] megaraid_sas 0000:c1:00.0: irq 80 for MSI/MSI-X [ 7.093206] megaraid_sas 0000:c1:00.0: irq 81 for MSI/MSI-X [ 7.093229] megaraid_sas 0000:c1:00.0: irq 82 for MSI/MSI-X [ 7.093251] megaraid_sas 0000:c1:00.0: irq 83 for MSI/MSI-X [ 7.093273] megaraid_sas 0000:c1:00.0: irq 84 for MSI/MSI-X [ 7.093295] megaraid_sas 0000:c1:00.0: irq 85 for MSI/MSI-X [ 7.093318] megaraid_sas 0000:c1:00.0: irq 86 for MSI/MSI-X [ 7.093340] megaraid_sas 0000:c1:00.0: irq 87 for MSI/MSI-X [ 7.093360] megaraid_sas 0000:c1:00.0: irq 88 for MSI/MSI-X [ 7.093383] megaraid_sas 0000:c1:00.0: irq 89 for MSI/MSI-X [ 7.093405] megaraid_sas 0000:c1:00.0: irq 90 for MSI/MSI-X [ 7.093427] megaraid_sas 0000:c1:00.0: irq 91 for MSI/MSI-X [ 7.093454] megaraid_sas 0000:c1:00.0: irq 92 for MSI/MSI-X [ 7.093478] megaraid_sas 0000:c1:00.0: irq 93 for MSI/MSI-X [ 7.093499] megaraid_sas 0000:c1:00.0: irq 94 for MSI/MSI-X [ 7.093522] megaraid_sas 0000:c1:00.0: irq 95 for MSI/MSI-X [ 7.093543] megaraid_sas 0000:c1:00.0: irq 96 for MSI/MSI-X [ 7.093566] megaraid_sas 0000:c1:00.0: irq 97 for MSI/MSI-X [ 7.093588] megaraid_sas 0000:c1:00.0: irq 98 for MSI/MSI-X [ 7.093609] megaraid_sas 0000:c1:00.0: irq 99 for MSI/MSI-X [ 7.093630] megaraid_sas 0000:c1:00.0: irq 100 for MSI/MSI-X [ 7.093651] megaraid_sas 0000:c1:00.0: irq 101 for MSI/MSI-X [ 7.093674] megaraid_sas 0000:c1:00.0: irq 102 for MSI/MSI-X [ 7.093697] megaraid_sas 0000:c1:00.0: irq 103 for MSI/MSI-X [ 7.093719] megaraid_sas 0000:c1:00.0: irq 104 for MSI/MSI-X [ 7.093740] megaraid_sas 0000:c1:00.0: irq 105 for MSI/MSI-X [ 7.093761] megaraid_sas 0000:c1:00.0: irq 106 for MSI/MSI-X [ 7.093782] megaraid_sas 0000:c1:00.0: irq 107 for MSI/MSI-X [ 7.093804] megaraid_sas 0000:c1:00.0: irq 108 for MSI/MSI-X [ 
7.093824] megaraid_sas 0000:c1:00.0: irq 109 for MSI/MSI-X [ 7.093846] megaraid_sas 0000:c1:00.0: irq 110 for MSI/MSI-X [ 7.093867] megaraid_sas 0000:c1:00.0: irq 111 for MSI/MSI-X [ 7.093886] megaraid_sas 0000:c1:00.0: irq 112 for MSI/MSI-X [ 7.093908] megaraid_sas 0000:c1:00.0: irq 113 for MSI/MSI-X [ 7.093928] megaraid_sas 0000:c1:00.0: irq 114 for MSI/MSI-X [ 7.093948] megaraid_sas 0000:c1:00.0: irq 115 for MSI/MSI-X [ 7.094058] megaraid_sas 0000:c1:00.0: firmware supports msix : (96) [ 7.094059] megaraid_sas 0000:c1:00.0: current msix/online cpus : (48/48) [ 7.094061] megaraid_sas 0000:c1:00.0: RDPQ mode : (disabled) [ 7.094063] megaraid_sas 0000:c1:00.0: Current firmware supports maximum commands: 928 LDIO threshold: 237 [ 7.094344] megaraid_sas 0000:c1:00.0: Configured max firmware commands: 927 [ 7.096420] megaraid_sas 0000:c1:00.0: FW supports sync cache : No [ 7.158286] mlx_compat: module verification failed: signature and/or required key missing - tainting kernel [ 7.170242] libata version 3.00 loaded. [ 7.176125] Compat-mlnx-ofed backport release: 1c4bf42 [ 7.181351] Backport based on mlnx_ofed/mlnx-ofa_kernel-4.0.git 1c4bf42 [ 7.189363] compat.git: mlnx_ofed/mlnx-ofa_kernel-4.0.git [ 7.197885] tg3.c:v3.137 (May 11, 2014) [ 7.202619] mpt3sas version 31.00.00.00 loaded [ 7.208337] mpt3sas_cm0: 63 BIT PCI BUS DMA ADDRESSING SUPPORTED, total mem (263564436 kB) [ 7.220741] ahci 0000:86:00.2: version 3.0 [ 7.221526] ahci 0000:86:00.2: irq 119 for MSI/MSI-X [ 7.221534] ahci 0000:86:00.2: irq 120 for MSI/MSI-X [ 7.221539] ahci 0000:86:00.2: irq 121 for MSI/MSI-X [ 7.221544] ahci 0000:86:00.2: irq 122 for MSI/MSI-X [ 7.221549] ahci 0000:86:00.2: irq 123 for MSI/MSI-X [ 7.221553] ahci 0000:86:00.2: irq 124 for MSI/MSI-X [ 7.221557] ahci 0000:86:00.2: irq 125 for MSI/MSI-X [ 7.221562] ahci 0000:86:00.2: irq 126 for MSI/MSI-X [ 7.221590] ahci 0000:86:00.2: irq 127 for MSI/MSI-X [ 7.221594] ahci 0000:86:00.2: irq 128 for MSI/MSI-X [ 7.221599] ahci 0000:86:00.2: irq 129 for MSI/MSI-X [ 7.221602] ahci 0000:86:00.2: irq 130 for MSI/MSI-X [ 7.221606] ahci 0000:86:00.2: irq 131 for MSI/MSI-X [ 7.221611] ahci 0000:86:00.2: irq 132 for MSI/MSI-X [ 7.221615] ahci 0000:86:00.2: irq 133 for MSI/MSI-X [ 7.221619] ahci 0000:86:00.2: irq 134 for MSI/MSI-X [ 7.223400] ahci 0000:86:00.2: AHCI 0001.0301 32 slots 1 ports 6 Gbps 0x1 impl SATA mode [ 7.231553] ahci 0000:86:00.2: flags: 64bit ncq sntf ilck pm led clo only pmp fbs pio slum part [ 7.242174] tg3 0000:81:00.0 eth0: Tigon3 [partno(BCM95720) rev 5720000] (PCI Express) MAC address 4c:d9:8f:48:5a:c7 [ 7.252711] tg3 0000:81:00.0 eth0: attached PHY is 5720C (10/100/1000Base-T Ethernet) (WireSpeed[1], EEE[1]) [ 7.262551] tg3 0000:81:00.0 eth0: RXcsums[1] LinkChgREG[0] MIirq[0] ASF[1] TSOcap[1] [ 7.262553] tg3 0000:81:00.0 eth0: dma_rwctrl[00000001] dma_mask[64-bit] [ 7.262628] scsi host2: ahci [ 7.262772] ata1: SATA max UDMA/133 abar m4096@0xc0a02000 port 0xc0a02100 irq 119 [ 7.284761] tg3 0000:81:00.1 eth1: Tigon3 [partno(BCM95720) rev 5720000] (PCI Express) MAC address 4c:d9:8f:48:5a:c8 [ 7.284764] tg3 0000:81:00.1 eth1: attached PHY is 5720C (10/100/1000Base-T Ethernet) (WireSpeed[1], EEE[1]) [ 7.284766] tg3 0000:81:00.1 eth1: RXcsums[1] LinkChgREG[0] MIirq[0] ASF[1] TSOcap[1] [ 7.284767] tg3 0000:81:00.1 eth1: dma_rwctrl[00000001] dma_mask[64-bit] [ 7.328165] mpt3sas_cm0: IOC Number : 0 [ 7.332539] mpt3sas 0000:84:00.0: irq 136 for MSI/MSI-X [ 7.332569] mpt3sas 0000:84:00.0: irq 137 for MSI/MSI-X [ 7.332593] mpt3sas 0000:84:00.0: irq 138 for 
MSI/MSI-X [ 7.332620] mpt3sas 0000:84:00.0: irq 139 for MSI/MSI-X [ 7.332644] mpt3sas 0000:84:00.0: irq 140 for MSI/MSI-X [ 7.332668] mpt3sas 0000:84:00.0: irq 141 for MSI/MSI-X [ 7.332690] mpt3sas 0000:84:00.0: irq 142 for MSI/MSI-X [ 7.332714] mpt3sas 0000:84:00.0: irq 143 for MSI/MSI-X [ 7.332737] mpt3sas 0000:84:00.0: irq 144 for MSI/MSI-X [ 7.332761] mpt3sas 0000:84:00.0: irq 145 for MSI/MSI-X [ 7.332782] mpt3sas 0000:84:00.0: irq 146 for MSI/MSI-X [ 7.332806] mpt3sas 0000:84:00.0: irq 147 for MSI/MSI-X [ 7.332834] mpt3sas 0000:84:00.0: irq 148 for MSI/MSI-X [ 7.332857] mpt3sas 0000:84:00.0: irq 149 for MSI/MSI-X [ 7.332878] mpt3sas 0000:84:00.0: irq 150 for MSI/MSI-X [ 7.332901] mpt3sas 0000:84:00.0: irq 151 for MSI/MSI-X [ 7.332924] mpt3sas 0000:84:00.0: irq 152 for MSI/MSI-X [ 7.332954] mpt3sas 0000:84:00.0: irq 153 for MSI/MSI-X [ 7.332978] mpt3sas 0000:84:00.0: irq 154 for MSI/MSI-X [ 7.333004] mpt3sas 0000:84:00.0: irq 155 for MSI/MSI-X [ 7.333026] mpt3sas 0000:84:00.0: irq 156 for MSI/MSI-X [ 7.333049] mpt3sas 0000:84:00.0: irq 157 for MSI/MSI-X [ 7.333075] mpt3sas 0000:84:00.0: irq 158 for MSI/MSI-X [ 7.333097] mpt3sas 0000:84:00.0: irq 159 for MSI/MSI-X [ 7.333119] mpt3sas 0000:84:00.0: irq 160 for MSI/MSI-X [ 7.333141] mpt3sas 0000:84:00.0: irq 161 for MSI/MSI-X [ 7.333170] mpt3sas 0000:84:00.0: irq 162 for MSI/MSI-X [ 7.333195] mpt3sas 0000:84:00.0: irq 163 for MSI/MSI-X [ 7.333217] mpt3sas 0000:84:00.0: irq 164 for MSI/MSI-X [ 7.333240] mpt3sas 0000:84:00.0: irq 165 for MSI/MSI-X [ 7.333269] mpt3sas 0000:84:00.0: irq 166 for MSI/MSI-X [ 7.333290] mpt3sas 0000:84:00.0: irq 167 for MSI/MSI-X [ 7.333311] mpt3sas 0000:84:00.0: irq 168 for MSI/MSI-X [ 7.333335] mpt3sas 0000:84:00.0: irq 169 for MSI/MSI-X [ 7.333359] mpt3sas 0000:84:00.0: irq 170 for MSI/MSI-X [ 7.333386] mpt3sas 0000:84:00.0: irq 171 for MSI/MSI-X [ 7.333409] mpt3sas 0000:84:00.0: irq 172 for MSI/MSI-X [ 7.333432] mpt3sas 0000:84:00.0: irq 173 for MSI/MSI-X [ 7.333456] mpt3sas 0000:84:00.0: irq 174 for MSI/MSI-X [ 7.333477] mpt3sas 0000:84:00.0: irq 175 for MSI/MSI-X [ 7.333500] mpt3sas 0000:84:00.0: irq 176 for MSI/MSI-X [ 7.333523] mpt3sas 0000:84:00.0: irq 177 for MSI/MSI-X [ 7.333548] mpt3sas 0000:84:00.0: irq 179 for MSI/MSI-X [ 7.333573] mpt3sas 0000:84:00.0: irq 180 for MSI/MSI-X [ 7.333594] mpt3sas 0000:84:00.0: irq 181 for MSI/MSI-X [ 7.333616] mpt3sas 0000:84:00.0: irq 182 for MSI/MSI-X [ 7.333639] mpt3sas 0000:84:00.0: irq 183 for MSI/MSI-X [ 7.333660] mpt3sas 0000:84:00.0: irq 184 for MSI/MSI-X [ 7.333957] mlx5_core 0000:01:00.0: firmware version: 20.26.1040 [ 7.334607] mpt3sas0-msix0: PCI-MSI-X enabled: IRQ 136 [ 7.334608] mpt3sas0-msix1: PCI-MSI-X enabled: IRQ 137 [ 7.334609] mpt3sas0-msix2: PCI-MSI-X enabled: IRQ 138 [ 7.334610] mpt3sas0-msix3: PCI-MSI-X enabled: IRQ 139 [ 7.334610] mpt3sas0-msix4: PCI-MSI-X enabled: IRQ 140 [ 7.334611] mpt3sas0-msix5: PCI-MSI-X enabled: IRQ 141 [ 7.334611] mpt3sas0-msix6: PCI-MSI-X enabled: IRQ 142 [ 7.334612] mpt3sas0-msix7: PCI-MSI-X enabled: IRQ 143 [ 7.334612] mpt3sas0-msix8: PCI-MSI-X enabled: IRQ 144 [ 7.334613] mpt3sas0-msix9: PCI-MSI-X enabled: IRQ 145 [ 7.334613] mpt3sas0-msix10: PCI-MSI-X enabled: IRQ 146 [ 7.334614] mpt3sas0-msix11: PCI-MSI-X enabled: IRQ 147 [ 7.334614] mpt3sas0-msix12: PCI-MSI-X enabled: IRQ 148 [ 7.334615] mpt3sas0-msix13: PCI-MSI-X enabled: IRQ 149 [ 7.334615] mpt3sas0-msix14: PCI-MSI-X enabled: IRQ 150 [ 7.334616] mpt3sas0-msix15: PCI-MSI-X enabled: IRQ 151 [ 7.334616] mpt3sas0-msix16: PCI-MSI-X enabled: IRQ 152 [ 7.334617] 
mpt3sas0-msix17: PCI-MSI-X enabled: IRQ 153 [ 7.334617] mpt3sas0-msix18: PCI-MSI-X enabled: IRQ 154 [ 7.334617] mpt3sas0-msix19: PCI-MSI-X enabled: IRQ 155 [ 7.334618] mpt3sas0-msix20: PCI-MSI-X enabled: IRQ 156 [ 7.334618] mpt3sas0-msix21: PCI-MSI-X enabled: IRQ 157 [ 7.334619] mpt3sas0-msix22: PCI-MSI-X enabled: IRQ 158 [ 7.334619] mpt3sas0-msix23: PCI-MSI-X enabled: IRQ 159 [ 7.334620] mpt3sas0-msix24: PCI-MSI-X enabled: IRQ 160 [ 7.334620] mpt3sas0-msix25: PCI-MSI-X enabled: IRQ 161 [ 7.334621] mpt3sas0-msix26: PCI-MSI-X enabled: IRQ 162 [ 7.334621] mpt3sas0-msix27: PCI-MSI-X enabled: IRQ 163 [ 7.334622] mpt3sas0-msix28: PCI-MSI-X enabled: IRQ 164 [ 7.334622] mpt3sas0-msix29: PCI-MSI-X enabled: IRQ 165 [ 7.334623] mpt3sas0-msix30: PCI-MSI-X enabled: IRQ 166 [ 7.334623] mpt3sas0-msix31: PCI-MSI-X enabled: IRQ 167 [ 7.334624] mpt3sas0-msix32: PCI-MSI-X enabled: IRQ 168 [ 7.334624] mpt3sas0-msix33: PCI-MSI-X enabled: IRQ 169 [ 7.334625] mpt3sas0-msix34: PCI-MSI-X enabled: IRQ 170 [ 7.334625] mpt3sas0-msix35: PCI-MSI-X enabled: IRQ 171 [ 7.334626] mpt3sas0-msix36: PCI-MSI-X enabled: IRQ 172 [ 7.334626] mpt3sas0-msix37: PCI-MSI-X enabled: IRQ 173 [ 7.334626] mpt3sas0-msix38: PCI-MSI-X enabled: IRQ 174 [ 7.334627] mpt3sas0-msix39: PCI-MSI-X enabled: IRQ 175 [ 7.334627] mpt3sas0-msix40: PCI-MSI-X enabled: IRQ 176 [ 7.334628] mpt3sas0-msix41: PCI-MSI-X enabled: IRQ 177 [ 7.334628] mpt3sas0-msix42: PCI-MSI-X enabled: IRQ 179 [ 7.334629] mpt3sas0-msix43: PCI-MSI-X enabled: IRQ 180 [ 7.334629] mpt3sas0-msix44: PCI-MSI-X enabled: IRQ 181 [ 7.334630] mpt3sas0-msix45: PCI-MSI-X enabled: IRQ 182 [ 7.334630] mpt3sas0-msix46: PCI-MSI-X enabled: IRQ 183 [ 7.334631] mpt3sas0-msix47: PCI-MSI-X enabled: IRQ 184 [ 7.334632] mpt3sas_cm0: iomem(0x00000000ac000000), mapped(0xffffbb565a000000), size(1048576) [ 7.334633] mpt3sas_cm0: ioport(0x0000000000008000), size(256) [ 7.409167] mlx5_core 0000:01:00.0: 126.016 Gb/s available PCIe bandwidth, limited by 8 GT/s x16 link at 0000:00:03.1 (capable of 252.048 Gb/s with 16 GT/s x16 link) [ 7.411164] mpt3sas_cm0: IOC Number : 0 [ 7.411170] mpt3sas_cm0: sending message unit reset !! 
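
The mlx5_core message above flags that the ConnectX adapter at 0000:01:00.0, although capable of 16 GT/s, trained at 8 GT/s x16 behind root port 0000:00:03.1, leaving roughly 126 Gb/s of PCIe bandwidth available. A minimal sketch, assuming sysfs exposes the standard current_/max_link_speed and _width attributes, that reads the negotiated link back for both ends; illustrative only:

    # Report negotiated vs. maximum PCIe link speed/width for the adapter and
    # its upstream root port named in the bandwidth message above.
    def read_attr(base, name):
        with open(f"{base}/{name}") as f:
            return f.read().strip()

    def link(bdf):
        base = f"/sys/bus/pci/devices/{bdf}"
        return {name: read_attr(base, name)
                for name in ("current_link_speed", "current_link_width",
                             "max_link_speed", "max_link_width")}

    for bdf in ("0000:01:00.0", "0000:00:03.1"):
        info = link(bdf)
        print(f"{bdf}: {info['current_link_speed']} x{info['current_link_width']} "
              f"(max {info['max_link_speed']} x{info['max_link_width']})")
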
[ 7.413170] mpt3sas_cm0: message unit reset: SUCCESS [ 7.453165] megaraid_sas 0000:c1:00.0: Init cmd return status SUCCESS for SCSI host 0 [ 7.474165] megaraid_sas 0000:c1:00.0: firmware type : Legacy(64 VD) firmware [ 7.474166] megaraid_sas 0000:c1:00.0: controller type : iMR(0MB) [ 7.474168] megaraid_sas 0000:c1:00.0: Online Controller Reset(OCR) : Enabled [ 7.474168] megaraid_sas 0000:c1:00.0: Secure JBOD support : No [ 7.474169] megaraid_sas 0000:c1:00.0: NVMe passthru support : No [ 7.495692] megaraid_sas 0000:c1:00.0: INIT adapter done [ 7.495695] megaraid_sas 0000:c1:00.0: Jbod map is not supported megasas_setup_jbod_map 5146 [ 7.521942] megaraid_sas 0000:c1:00.0: pci id : (0x1000)/(0x005f)/(0x1028)/(0x1f4b) [ 7.521944] megaraid_sas 0000:c1:00.0: unevenspan support : yes [ 7.521945] megaraid_sas 0000:c1:00.0: firmware crash dump : no [ 7.521946] megaraid_sas 0000:c1:00.0: jbod sync map : no [ 7.521950] scsi host0: Avago SAS based MegaRAID driver [ 7.541115] scsi 0:2:0:0: Direct-Access DELL PERC H330 Mini 4.30 PQ: 0 ANSI: 5 [ 7.570179] ata1: SATA link down (SStatus 0 SControl 300) [ 7.573796] mpt3sas_cm0: Allocated physical memory: size(38831 kB) [ 7.573797] mpt3sas_cm0: Current Controller Queue Depth(7564), Max Controller Queue Depth(7680) [ 7.573797] mpt3sas_cm0: Scatter Gather Elements per IO(128) [ 7.663559] mlx5_core 0000:01:00.0: irq 185 for MSI/MSI-X [ 7.663579] mlx5_core 0000:01:00.0: irq 186 for MSI/MSI-X [ 7.663599] mlx5_core 0000:01:00.0: irq 187 for MSI/MSI-X [ 7.663621] mlx5_core 0000:01:00.0: irq 188 for MSI/MSI-X [ 7.663640] mlx5_core 0000:01:00.0: irq 189 for MSI/MSI-X [ 7.663659] mlx5_core 0000:01:00.0: irq 190 for MSI/MSI-X [ 7.663679] mlx5_core 0000:01:00.0: irq 191 for MSI/MSI-X [ 7.663700] mlx5_core 0000:01:00.0: irq 192 for MSI/MSI-X [ 7.663719] mlx5_core 0000:01:00.0: irq 193 for MSI/MSI-X [ 7.663737] mlx5_core 0000:01:00.0: irq 194 for MSI/MSI-X [ 7.663755] mlx5_core 0000:01:00.0: irq 195 for MSI/MSI-X [ 7.663773] mlx5_core 0000:01:00.0: irq 196 for MSI/MSI-X [ 7.663792] mlx5_core 0000:01:00.0: irq 197 for MSI/MSI-X [ 7.663811] mlx5_core 0000:01:00.0: irq 198 for MSI/MSI-X [ 7.663830] mlx5_core 0000:01:00.0: irq 199 for MSI/MSI-X [ 7.663848] mlx5_core 0000:01:00.0: irq 200 for MSI/MSI-X [ 7.663866] mlx5_core 0000:01:00.0: irq 201 for MSI/MSI-X [ 7.663883] mlx5_core 0000:01:00.0: irq 202 for MSI/MSI-X [ 7.663902] mlx5_core 0000:01:00.0: irq 203 for MSI/MSI-X [ 7.663920] mlx5_core 0000:01:00.0: irq 204 for MSI/MSI-X [ 7.663938] mlx5_core 0000:01:00.0: irq 205 for MSI/MSI-X [ 7.663958] mlx5_core 0000:01:00.0: irq 206 for MSI/MSI-X [ 7.663976] mlx5_core 0000:01:00.0: irq 207 for MSI/MSI-X [ 7.663994] mlx5_core 0000:01:00.0: irq 208 for MSI/MSI-X [ 7.664012] mlx5_core 0000:01:00.0: irq 209 for MSI/MSI-X [ 7.664030] mlx5_core 0000:01:00.0: irq 210 for MSI/MSI-X [ 7.664049] mlx5_core 0000:01:00.0: irq 211 for MSI/MSI-X [ 7.664066] mlx5_core 0000:01:00.0: irq 212 for MSI/MSI-X [ 7.664084] mlx5_core 0000:01:00.0: irq 213 for MSI/MSI-X [ 7.664103] mlx5_core 0000:01:00.0: irq 214 for MSI/MSI-X [ 7.664122] mlx5_core 0000:01:00.0: irq 215 for MSI/MSI-X [ 7.664140] mlx5_core 0000:01:00.0: irq 216 for MSI/MSI-X [ 7.664157] mlx5_core 0000:01:00.0: irq 217 for MSI/MSI-X [ 7.664183] mlx5_core 0000:01:00.0: irq 218 for MSI/MSI-X [ 7.664204] mlx5_core 0000:01:00.0: irq 219 for MSI/MSI-X [ 7.664222] mlx5_core 0000:01:00.0: irq 220 for MSI/MSI-X [ 7.664241] mlx5_core 0000:01:00.0: irq 221 for MSI/MSI-X [ 7.664260] mlx5_core 0000:01:00.0: irq 222 for MSI/MSI-X [ 7.664279] mlx5_core 
0000:01:00.0: irq 223 for MSI/MSI-X [ 7.664296] mlx5_core 0000:01:00.0: irq 224 for MSI/MSI-X [ 7.664315] mlx5_core 0000:01:00.0: irq 225 for MSI/MSI-X [ 7.664333] mlx5_core 0000:01:00.0: irq 226 for MSI/MSI-X [ 7.664351] mlx5_core 0000:01:00.0: irq 227 for MSI/MSI-X [ 7.664369] mlx5_core 0000:01:00.0: irq 228 for MSI/MSI-X [ 7.664387] mlx5_core 0000:01:00.0: irq 229 for MSI/MSI-X [ 7.664406] mlx5_core 0000:01:00.0: irq 230 for MSI/MSI-X [ 7.664425] mlx5_core 0000:01:00.0: irq 231 for MSI/MSI-X [ 7.664444] mlx5_core 0000:01:00.0: irq 232 for MSI/MSI-X [ 7.664461] mlx5_core 0000:01:00.0: irq 233 for MSI/MSI-X [ 7.665489] mlx5_core 0000:01:00.0: Port module event: module 0, Cable plugged [ 7.665740] mlx5_core 0000:01:00.0: mlx5_pcie_event:303:(pid 316): PCIe slot advertised sufficient power (27W). [ 7.673065] mlx5_core 0000:01:00.0: mlx5_fw_tracer_start:776:(pid 327): FWTracer: Ownership granted and active [ 7.724778] mpt3sas_cm0: FW Package Version(12.00.00.00) [ 7.725032] mpt3sas_cm0: SAS3616: FWVersion(12.00.00.00), ChipRevision(0x02), BiosVersion(00.00.00.00) [ 7.725037] mpt3sas_cm0: Protocol=(Initiator,Target,NVMe), Capabilities=(TLR,EEDP,Diag Trace Buffer,Task Set Full,NCQ) [ 7.725104] mpt3sas 0000:84:00.0: Enabled Extended Tags as Controller Supports [ 7.725120] mpt3sas_cm0: : host protection capabilities enabled DIF1 DIF2 DIF3 [ 7.725129] scsi host1: Fusion MPT SAS Host [ 7.725372] mpt3sas_cm0: registering trace buffer support [ 7.729666] mpt3sas_cm0: Trace buffer memory 2048 KB allocated [ 7.729667] mpt3sas_cm0: sending port enable !! [ 7.729953] mpt3sas_cm0: hba_port entry: ffff9d10f1bbed00, port: 255 is added to hba_port list [ 7.732500] mpt3sas_cm0: host_add: handle(0x0001), sas_addr(0x500605b00deb4820), phys(21) [ 7.733218] mpt3sas_cm0: detecting: handle(0x0018), sas_address(0x500a0984dfa20c24), phy(0) [ 7.733223] mpt3sas_cm0: REPORT_LUNS: handle(0x0018), retries(0) [ 7.734126] mpt3sas_cm0: TEST_UNIT_READY: handle(0x0018), lun(0) [ 7.734678] scsi 1:0:0:0: Direct-Access DELL MD34xx 0825 PQ: 0 ANSI: 5 [ 7.734761] scsi 1:0:0:0: SSP: handle(0x0018), sas_addr(0x500a0984dfa20c24), phy(0), device_name(0x500a0984dfa20c24) [ 7.734763] scsi 1:0:0:0: enclosure logical id(0x300605b00d114820), slot(13) [ 7.734764] scsi 1:0:0:0: enclosure level(0x0000), connector name( C3 ) [ 7.734765] scsi 1:0:0:0: serial_number(021825001558 ) [ 7.734767] scsi 1:0:0:0: qdepth(254), tagged(1), simple(0), ordered(0), scsi_level(6), cmd_que(1) [ 7.980696] mlx5_ib: Mellanox Connect-IB Infiniband driver v4.7-1.0.0 [ 7.985221] scsi 1:0:0:1: Direct-Access DELL MD34xx 0825 PQ: 0 ANSI: 5 [ 7.985300] scsi 1:0:0:1: SSP: handle(0x0018), sas_addr(0x500a0984dfa20c24), phy(0), device_name(0x500a0984dfa20c24) [ 7.985302] scsi 1:0:0:1: enclosure logical id(0x300605b00d114820), slot(13) [ 7.985303] scsi 1:0:0:1: enclosure level(0x0000), connector name( C3 ) [ 7.985304] scsi 1:0:0:1: serial_number(021825001558 ) [ 7.985307] scsi 1:0:0:1: qdepth(254), tagged(1), simple(0), ordered(0), scsi_level(6), cmd_que(1) [ 8.044383] scsi 1:0:0:31: Direct-Access DELL Universal Xport 0825 PQ: 0 ANSI: 5 [ 8.052635] scsi 1:0:0:31: SSP: handle(0x0018), sas_addr(0x500a0984dfa20c24), phy(0), device_name(0x500a0984dfa20c24) [ 8.063231] scsi 1:0:0:31: enclosure logical id(0x300605b00d114820), slot(13) [ 8.070452] scsi 1:0:0:31: enclosure level(0x0000), connector name( C3 ) [ 8.077260] scsi 1:0:0:31: serial_number(021825001558 ) [ 8.082753] scsi 1:0:0:31: qdepth(254), tagged(1), simple(0), ordered(0), scsi_level(6), cmd_que(1) [ 8.114442] 
mpt3sas_cm0: detecting: handle(0x0019), sas_address(0x500a0984dfa1fa10), phy(4) [ 8.122812] mpt3sas_cm0: REPORT_LUNS: handle(0x0019), retries(0) [ 8.129719] mpt3sas_cm0: TEST_UNIT_READY: handle(0x0019), lun(0) [ 8.136293] scsi 1:0:1:0: Direct-Access DELL MD34xx 0825 PQ: 0 ANSI: 5 [ 8.144469] scsi 1:0:1:0: SSP: handle(0x0019), sas_addr(0x500a0984dfa1fa10), phy(4), device_name(0x500a0984dfa1fa10) [ 8.154982] scsi 1:0:1:0: enclosure logical id(0x300605b00d114820), slot(9) [ 8.162027] scsi 1:0:1:0: enclosure level(0x0000), connector name( C2 ) [ 8.168746] scsi 1:0:1:0: serial_number(021825001369 ) [ 8.174146] scsi 1:0:1:0: qdepth(254), tagged(1), simple(0), ordered(0), scsi_level(6), cmd_que(1) [ 8.203167] scsi 1:0:1:1: Direct-Access DELL MD34xx 0825 PQ: 0 ANSI: 5 [ 8.210362] random: crng init done [ 8.214754] scsi 1:0:1:1: SSP: handle(0x0019), sas_addr(0x500a0984dfa1fa10), phy(4), device_name(0x500a0984dfa1fa10) [ 8.225270] scsi 1:0:1:1: enclosure logical id(0x300605b00d114820), slot(9) [ 8.232314] scsi 1:0:1:1: enclosure level(0x0000), connector name( C2 ) [ 8.239035] scsi 1:0:1:1: serial_number(021825001369 ) [ 8.244436] scsi 1:0:1:1: qdepth(254), tagged(1), simple(0), ordered(0), scsi_level(6), cmd_que(1) [ 8.253630] scsi 1:0:1:1: Mode parameters changed [ 8.266130] sd 0:2:0:0: [sda] 467664896 512-byte logical blocks: (239 GB/223 GiB) [ 8.273778] sd 0:2:0:0: [sda] Write Protect is off [ 8.278596] sd 0:2:0:0: [sda] Mode Sense: 1f 00 10 08 [ 8.278625] sd 0:2:0:0: [sda] Write cache: disabled, read cache: disabled, supports DPO and FUA [ 8.288440] scsi 1:0:1:31: Direct-Access DELL Universal Xport 0825 PQ: 0 ANSI: 5 [ 8.289485] sda: sda1 sda2 sda3 [ 8.289913] sd 0:2:0:0: [sda] Attached SCSI disk [ 8.304548] scsi 1:0:1:31: SSP: handle(0x0019), sas_addr(0x500a0984dfa1fa10), phy(4), device_name(0x500a0984dfa1fa10) [ 8.315145] scsi 1:0:1:31: enclosure logical id(0x300605b00d114820), slot(9) [ 8.322278] scsi 1:0:1:31: enclosure level(0x0000), connector name( C2 ) [ 8.329086] scsi 1:0:1:31: serial_number(021825001369 ) [ 8.334572] scsi 1:0:1:31: qdepth(254), tagged(1), simple(0), ordered(0), scsi_level(6), cmd_que(1) [ 8.354447] mpt3sas_cm0: detecting: handle(0x0017), sas_address(0x500a0984da0f9b24), phy(8) [ 8.362801] mpt3sas_cm0: REPORT_LUNS: handle(0x0017), retries(0) [ 8.369619] mpt3sas_cm0: TEST_UNIT_READY: handle(0x0017), lun(0) [ 8.376211] scsi 1:0:2:0: Direct-Access DELL MD34xx 0825 PQ: 0 ANSI: 5 [ 8.384382] scsi 1:0:2:0: SSP: handle(0x0017), sas_addr(0x500a0984da0f9b24), phy(8), device_name(0x500a0984da0f9b24) [ 8.394898] scsi 1:0:2:0: enclosure logical id(0x300605b00d114820), slot(5) [ 8.401944] scsi 1:0:2:0: enclosure level(0x0000), connector name( C1 ) [ 8.408666] scsi 1:0:2:0: serial_number(021812047179 ) [ 8.414066] scsi 1:0:2:0: qdepth(254), tagged(1), simple(0), ordered(0), scsi_level(6), cmd_que(1) [ 8.437114] scsi 1:0:2:1: Direct-Access DELL MD34xx 0825 PQ: 0 ANSI: 5 [ 8.445276] scsi 1:0:2:1: SSP: handle(0x0017), sas_addr(0x500a0984da0f9b24), phy(8), device_name(0x500a0984da0f9b24) [ 8.455794] scsi 1:0:2:1: enclosure logical id(0x300605b00d114820), slot(5) [ 8.462838] scsi 1:0:2:1: enclosure level(0x0000), connector name( C1 ) [ 8.469558] scsi 1:0:2:1: serial_number(021812047179 ) [ 8.474957] scsi 1:0:2:1: qdepth(254), tagged(1), simple(0), ordered(0), scsi_level(6), cmd_que(1) [ 8.497397] scsi 1:0:2:2: Direct-Access DELL MD34xx 0825 PQ: 0 ANSI: 5 [ 8.505557] scsi 1:0:2:2: SSP: handle(0x0017), sas_addr(0x500a0984da0f9b24), phy(8), device_name(0x500a0984da0f9b24) [ 8.516071] scsi 
1:0:2:2: enclosure logical id(0x300605b00d114820), slot(5) [ 8.523117] scsi 1:0:2:2: enclosure level(0x0000), connector name( C1 ) [ 8.529835] scsi 1:0:2:2: serial_number(021812047179 ) [ 8.535236] scsi 1:0:2:2: qdepth(254), tagged(1), simple(0), ordered(0), scsi_level(6), cmd_que(1) [ 8.558403] scsi 1:0:2:31: Direct-Access DELL Universal Xport 0825 PQ: 0 ANSI: 5 [ 8.566648] scsi 1:0:2:31: SSP: handle(0x0017), sas_addr(0x500a0984da0f9b24), phy(8), device_name(0x500a0984da0f9b24) [ 8.577250] scsi 1:0:2:31: enclosure logical id(0x300605b00d114820), slot(5) [ 8.584385] scsi 1:0:2:31: enclosure level(0x0000), connector name( C1 ) [ 8.591193] scsi 1:0:2:31: serial_number(021812047179 ) [ 8.596676] scsi 1:0:2:31: qdepth(254), tagged(1), simple(0), ordered(0), scsi_level(6), cmd_que(1) [ 8.619455] mpt3sas_cm0: detecting: handle(0x001a), sas_address(0x500a0984db2fa910), phy(12) [ 8.627889] mpt3sas_cm0: REPORT_LUNS: handle(0x001a), retries(0) [ 8.634926] mpt3sas_cm0: TEST_UNIT_READY: handle(0x001a), lun(0) [ 8.641480] scsi 1:0:3:0: Direct-Access DELL MD34xx 0825 PQ: 0 ANSI: 5 [ 8.649668] scsi 1:0:3:0: SSP: handle(0x001a), sas_addr(0x500a0984db2fa910), phy(12), device_name(0x500a0984db2fa910) [ 8.660264] scsi 1:0:3:0: enclosure logical id(0x300605b00d114820), slot(1) [ 8.667310] scsi 1:0:3:0: enclosure level(0x0000), connector name( C0 ) [ 8.674029] scsi 1:0:3:0: serial_number(021815000354 ) [ 8.679429] scsi 1:0:3:0: qdepth(254), tagged(1), simple(0), ordered(0), scsi_level(6), cmd_que(1) [ 8.700150] scsi 1:0:3:1: Direct-Access DELL MD34xx 0825 PQ: 0 ANSI: 5 [ 8.708314] scsi 1:0:3:1: SSP: handle(0x001a), sas_addr(0x500a0984db2fa910), phy(12), device_name(0x500a0984db2fa910) [ 8.718912] scsi 1:0:3:1: enclosure logical id(0x300605b00d114820), slot(1) [ 8.725960] scsi 1:0:3:1: enclosure level(0x0000), connector name( C0 ) [ 8.732677] scsi 1:0:3:1: serial_number(021815000354 ) [ 8.738079] scsi 1:0:3:1: qdepth(254), tagged(1), simple(0), ordered(0), scsi_level(6), cmd_que(1) [ 8.747277] scsi 1:0:3:1: Mode parameters changed [ 8.766394] scsi 1:0:3:2: Direct-Access DELL MD34xx 0825 PQ: 0 ANSI: 5 [ 8.774569] scsi 1:0:3:2: SSP: handle(0x001a), sas_addr(0x500a0984db2fa910), phy(12), device_name(0x500a0984db2fa910) [ 8.785170] scsi 1:0:3:2: enclosure logical id(0x300605b00d114820), slot(1) [ 8.792217] scsi 1:0:3:2: enclosure level(0x0000), connector name( C0 ) [ 8.798935] scsi 1:0:3:2: serial_number(021815000354 ) [ 8.804337] scsi 1:0:3:2: qdepth(254), tagged(1), simple(0), ordered(0), scsi_level(6), cmd_que(1) [ 8.813517] scsi 1:0:3:2: Mode parameters changed [ 8.829398] scsi 1:0:3:31: Direct-Access DELL Universal Xport 0825 PQ: 0 ANSI: 5 [ 8.837664] scsi 1:0:3:31: SSP: handle(0x001a), sas_addr(0x500a0984db2fa910), phy(12), device_name(0x500a0984db2fa910) [ 8.848354] scsi 1:0:3:31: enclosure logical id(0x300605b00d114820), slot(1) [ 8.855486] scsi 1:0:3:31: enclosure level(0x0000), connector name( C0 ) [ 8.862290] scsi 1:0:3:31: serial_number(021815000354 ) [ 8.867780] scsi 1:0:3:31: qdepth(254), tagged(1), simple(0), ordered(0), scsi_level(6), cmd_que(1) [ 8.891048] mpt3sas_cm0: detecting: handle(0x0011), sas_address(0x300705b00deb4820), phy(16) [ 8.899484] mpt3sas_cm0: REPORT_LUNS: handle(0x0011), retries(0) [ 8.905508] mpt3sas_cm0: TEST_UNIT_READY: handle(0x0011), lun(0) [ 8.911887] scsi 1:0:4:0: Enclosure LSI VirtualSES 03 PQ: 0 ANSI: 7 [ 8.920020] scsi 1:0:4:0: set ignore_delay_remove for handle(0x0011) [ 8.926375] scsi 1:0:4:0: SES: handle(0x0011), sas_addr(0x300705b00deb4820), phy(16), 
device_name(0x300705b00deb4820) [ 8.936973] scsi 1:0:4:0: enclosure logical id(0x300605b00d114820), slot(16) [ 8.944106] scsi 1:0:4:0: enclosure level(0x0000), connector name( C3 ) [ 8.950826] scsi 1:0:4:0: serial_number(300605B00D114820) [ 8.956227] scsi 1:0:4:0: qdepth(1), tagged(0), simple(0), ordered(0), scsi_level(8), cmd_que(0) [ 8.965029] mpt3sas_cm0: log_info(0x31200206): originator(PL), code(0x20), sub_code(0x0206) [ 8.993196] mpt3sas_cm0: port enable: SUCCESS [ 8.998019] scsi 1:0:0:0: rdac: LUN 0 (IOSHIP) (unowned) [ 9.003574] sd 1:0:0:0: [sdb] 37449707520 512-byte logical blocks: (19.1 TB/17.4 TiB) [ 9.011732] scsi 1:0:0:1: rdac: LUN 1 (IOSHIP) (owned) [ 9.016969] sd 1:0:0:0: [sdb] Write Protect is off [ 9.017084] sd 1:0:0:1: [sdc] 37449707520 512-byte logical blocks: (19.1 TB/17.4 TiB) [ 9.017478] sd 1:0:0:1: [sdc] Write Protect is off [ 9.017480] sd 1:0:0:1: [sdc] Mode Sense: 83 00 10 08 [ 9.017525] scsi 1:0:1:0: rdac: LUN 0 (IOSHIP) (owned) [ 9.017620] sd 1:0:0:1: [sdc] Write cache: enabled, read cache: enabled, supports DPO and FUA [ 9.017773] sd 1:0:1:0: [sdd] 37449707520 512-byte logical blocks: (19.1 TB/17.4 TiB) [ 9.018053] scsi 1:0:1:1: rdac: LUN 1 (IOSHIP) (unowned) [ 9.018314] sd 1:0:1:0: [sdd] Write Protect is off [ 9.018315] sd 1:0:1:0: [sdd] Mode Sense: 83 00 10 08 [ 9.018336] sd 1:0:1:1: [sde] 37449707520 512-byte logical blocks: (19.1 TB/17.4 TiB) [ 9.018489] sd 1:0:1:0: [sdd] Write cache: enabled, read cache: enabled, supports DPO and FUA [ 9.018654] scsi 1:0:2:0: rdac: LUN 0 (IOSHIP) (unowned) [ 9.018892] sd 1:0:2:0: [sdf] 926167040 512-byte logical blocks: (474 GB/441 GiB) [ 9.018894] sd 1:0:2:0: [sdf] 4096-byte physical blocks [ 9.018934] sd 1:0:1:1: [sde] Write Protect is off [ 9.018935] sd 1:0:1:1: [sde] Mode Sense: 83 00 10 08 [ 9.019096] sd 1:0:1:1: [sde] Write cache: enabled, read cache: enabled, supports DPO and FUA [ 9.019390] scsi 1:0:2:1: rdac: LUN 1 (IOSHIP) (owned) [ 9.019413] sd 1:0:2:0: [sdf] Write Protect is off [ 9.019414] sd 1:0:2:0: [sdf] Mode Sense: 83 00 10 08 [ 9.019579] sd 1:0:2:0: [sdf] Write cache: enabled, read cache: enabled, supports DPO and FUA [ 9.019687] sd 1:0:2:1: [sdg] 37449707520 512-byte logical blocks: (19.1 TB/17.4 TiB) [ 9.020092] scsi 1:0:2:2: rdac: LUN 2 (IOSHIP) (unowned) [ 9.020423] sd 1:0:2:1: [sdg] Write Protect is off [ 9.020425] sd 1:0:2:1: [sdg] Mode Sense: 83 00 10 08 [ 9.020439] sd 1:0:2:2: [sdh] 37449707520 512-byte logical blocks: (19.1 TB/17.4 TiB) [ 9.020654] sd 1:0:2:1: [sdg] Write cache: enabled, read cache: enabled, supports DPO and FUA [ 9.020906] scsi 1:0:3:0: rdac: LUN 0 (IOSHIP) (owned) [ 9.021494] sd 1:0:2:2: [sdh] Write Protect is off [ 9.021496] sd 1:0:2:2: [sdh] Mode Sense: 83 00 10 08 [ 9.021633] sd 1:0:1:0: [sdd] Attached SCSI disk [ 9.021697] sd 1:0:2:2: [sdh] Write cache: enabled, read cache: enabled, supports DPO and FUA [ 9.021936] sd 1:0:3:0: [sdi] 926167040 512-byte logical blocks: (474 GB/441 GiB) [ 9.021938] sd 1:0:3:0: [sdi] 4096-byte physical blocks [ 9.022430] scsi 1:0:3:1: rdac: LUN 1 (IOSHIP) (unowned) [ 9.022748] sd 1:0:3:0: [sdi] Write Protect is off [ 9.022749] sd 1:0:3:0: [sdi] Mode Sense: 83 00 10 08 [ 9.022908] sd 1:0:3:1: [sdj] 37449707520 512-byte logical blocks: (19.1 TB/17.4 TiB) [ 9.023291] sd 1:0:3:0: [sdi] Write cache: enabled, read cache: enabled, supports DPO and FUA [ 9.023321] sd 1:0:3:1: [sdj] Write Protect is off [ 9.023322] sd 1:0:3:1: [sdj] Mode Sense: 83 00 10 08 [ 9.023481] sd 1:0:0:1: [sdc] Attached SCSI disk [ 9.023864] sd 1:0:3:1: [sdj] Write cache: 
enabled, read cache: enabled, supports DPO and FUA [ 9.023912] scsi 1:0:3:2: rdac: LUN 2 (IOSHIP) (owned) [ 9.024133] sd 1:0:2:0: [sdf] Attached SCSI disk [ 9.024292] sd 1:0:3:2: [sdk] 37449707520 512-byte logical blocks: (19.1 TB/17.4 TiB) [ 9.024326] sd 1:0:1:1: [sde] Attached SCSI disk [ 9.025253] sd 1:0:3:2: [sdk] Write Protect is off [ 9.025255] sd 1:0:3:2: [sdk] Mode Sense: 83 00 10 08 [ 9.025314] sd 1:0:2:1: [sdg] Attached SCSI disk [ 9.025642] sd 1:0:3:2: [sdk] Write cache: enabled, read cache: enabled, supports DPO and FUA [ 9.026613] sd 1:0:2:2: [sdh] Attached SCSI disk [ 9.027974] sd 1:0:3:0: [sdi] Attached SCSI disk [ 9.029156] sd 1:0:3:1: [sdj] Attached SCSI disk [ 9.029721] sd 1:0:3:2: [sdk] Attached SCSI disk [ 9.305207] sd 1:0:0:0: [sdb] Mode Sense: 83 00 10 08 [ 9.305374] sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, supports DPO and FUA [ 9.316552] sd 1:0:0:0: [sdb] Attached SCSI disk [ 9.412032] EXT4-fs (sda2): mounted filesystem with ordered data mode. Opts: (null) [ 9.638908] systemd-journald[377]: Received SIGTERM from PID 1 (systemd). [ 9.678804] SELinux: Disabled at runtime. [ 9.683040] SELinux: Unregistering netfilter hooks [ 9.725221] type=1404 audit(1596901468.132:2): selinux=0 auid=4294967295 ses=4294967295 [ 9.751276] ip_tables: (C) 2000-2006 Netfilter Core Team [ 9.757877] systemd[1]: Inserted module 'ip_tables' [ 9.856330] EXT4-fs (sda2): re-mounted. Opts: (null) [ 9.873828] systemd-journald[4903]: Received request to flush runtime journal from PID 1 [ 9.929633] ACPI Error: No handler for Region [SYSI] (ffff9d01a9e7bb88) [IPMI] (20130517/evregion-162) [ 9.942067] ACPI Error: Region IPMI (ID=7) has no handler (20130517/exfldio-305) [ 9.953735] ACPI Error: Method parse/execution failed [\_SB_.PMI0._GHL] (Node ffff9d01a9e785a0), AE_NOT_EXIST (20130517/psparse-536) [ 9.972614] ACPI Error: Method parse/execution failed [\_SB_.PMI0._PMC] (Node ffff9d01a9e78500), AE_NOT_EXIST (20130517/psparse-536) [ 9.991551] ACPI Exception: AE_NOT_EXIST, Evaluating _PMC (20130517/power_meter-753) [ 10.004075] ipmi message handler version 39.2 [ 10.010235] piix4_smbus 0000:00:14.0: SMBus Host Controller at 0xb00, revision 0 [ 10.018079] piix4_smbus 0000:00:14.0: Using register 0x2e for SMBus port selection [ 10.029567] ipmi device interface [ 10.029874] sd 0:2:0:0: Attached scsi generic sg0 type 0 [ 10.030028] sd 1:0:0:0: Attached scsi generic sg1 type 0 [ 10.030197] sd 1:0:0:1: Attached scsi generic sg2 type 0 [ 10.030417] scsi 1:0:0:31: Attached scsi generic sg3 type 0 [ 10.030531] sd 1:0:1:0: Attached scsi generic sg4 type 0 [ 10.030700] sd 1:0:1:1: Attached scsi generic sg5 type 0 [ 10.030846] scsi 1:0:1:31: Attached scsi generic sg6 type 0 [ 10.031050] sd 1:0:2:0: Attached scsi generic sg7 type 0 [ 10.031260] sd 1:0:2:1: Attached scsi generic sg8 type 0 [ 10.031504] sd 1:0:2:2: Attached scsi generic sg9 type 0 [ 10.031647] scsi 1:0:2:31: Attached scsi generic sg10 type 0 [ 10.031783] sd 1:0:3:0: Attached scsi generic sg11 type 0 [ 10.031919] sd 1:0:3:1: Attached scsi generic sg12 type 0 [ 10.032048] sd 1:0:3:2: Attached scsi generic sg13 type 0 [ 10.032175] scsi 1:0:3:31: Attached scsi generic sg14 type 0 [ 10.032285] scsi 1:0:4:0: Attached scsi generic sg15 type 13 [ 10.033843] ccp 0000:02:00.2: 3 command queues available [ 10.033902] ccp 0000:02:00.2: irq 235 for MSI/MSI-X [ 10.033923] ccp 0000:02:00.2: irq 236 for MSI/MSI-X [ 10.033991] ccp 0000:02:00.2: Queue 2 can access 4 LSB regions [ 10.033993] ccp 0000:02:00.2: Queue 3 can access 4 LSB regions [ 
10.033995] ccp 0000:02:00.2: Queue 4 can access 4 LSB regions [ 10.033997] ccp 0000:02:00.2: Queue 0 gets LSB 4 [ 10.033998] ccp 0000:02:00.2: Queue 1 gets LSB 5 [ 10.033999] ccp 0000:02:00.2: Queue 2 gets LSB 6 [ 10.035040] ccp 0000:02:00.2: enabled [ 10.035368] ccp 0000:03:00.1: 5 command queues available [ 10.035423] ccp 0000:03:00.1: irq 238 for MSI/MSI-X [ 10.035456] ccp 0000:03:00.1: Queue 0 can access 7 LSB regions [ 10.035458] ccp 0000:03:00.1: Queue 1 can access 7 LSB regions [ 10.035460] ccp 0000:03:00.1: Queue 2 can access 7 LSB regions [ 10.035461] ccp 0000:03:00.1: Queue 3 can access 7 LSB regions [ 10.035463] ccp 0000:03:00.1: Queue 4 can access 7 LSB regions [ 10.035465] ccp 0000:03:00.1: Queue 0 gets LSB 1 [ 10.035466] ccp 0000:03:00.1: Queue 1 gets LSB 2 [ 10.035467] ccp 0000:03:00.1: Queue 2 gets LSB 3 [ 10.035468] ccp 0000:03:00.1: Queue 3 gets LSB 4 [ 10.035470] ccp 0000:03:00.1: Queue 4 gets LSB 5 [ 10.036734] ccp 0000:03:00.1: enabled [ 10.036915] ccp 0000:41:00.2: 3 command queues available [ 10.036955] ccp 0000:41:00.2: irq 240 for MSI/MSI-X [ 10.036975] ccp 0000:41:00.2: irq 241 for MSI/MSI-X [ 10.037015] ccp 0000:41:00.2: Queue 2 can access 4 LSB regions [ 10.037017] ccp 0000:41:00.2: Queue 3 can access 4 LSB regions [ 10.037019] ccp 0000:41:00.2: Queue 4 can access 4 LSB regions [ 10.037020] ccp 0000:41:00.2: Queue 0 gets LSB 4 [ 10.037021] ccp 0000:41:00.2: Queue 1 gets LSB 5 [ 10.037022] ccp 0000:41:00.2: Queue 2 gets LSB 6 [ 10.037372] ccp 0000:41:00.2: enabled [ 10.037477] ccp 0000:42:00.1: 5 command queues available [ 10.037518] ccp 0000:42:00.1: irq 243 for MSI/MSI-X [ 10.037538] ccp 0000:42:00.1: Queue 0 can access 7 LSB regions [ 10.037540] ccp 0000:42:00.1: Queue 1 can access 7 LSB regions [ 10.037542] ccp 0000:42:00.1: Queue 2 can access 7 LSB regions [ 10.037544] ccp 0000:42:00.1: Queue 3 can access 7 LSB regions [ 10.037546] ccp 0000:42:00.1: Queue 4 can access 7 LSB regions [ 10.037547] ccp 0000:42:00.1: Queue 0 gets LSB 1 [ 10.037548] ccp 0000:42:00.1: Queue 1 gets LSB 2 [ 10.037549] ccp 0000:42:00.1: Queue 2 gets LSB 3 [ 10.037550] ccp 0000:42:00.1: Queue 3 gets LSB 4 [ 10.037551] ccp 0000:42:00.1: Queue 4 gets LSB 5 [ 10.037977] ccp 0000:42:00.1: enabled [ 10.038175] ccp 0000:85:00.2: 3 command queues available [ 10.038235] ccp 0000:85:00.2: irq 245 for MSI/MSI-X [ 10.038256] ccp 0000:85:00.2: irq 246 for MSI/MSI-X [ 10.038312] ccp 0000:85:00.2: Queue 2 can access 4 LSB regions [ 10.038315] ccp 0000:85:00.2: Queue 3 can access 4 LSB regions [ 10.038318] ccp 0000:85:00.2: Queue 4 can access 4 LSB regions [ 10.038320] ccp 0000:85:00.2: Queue 0 gets LSB 4 [ 10.038322] ccp 0000:85:00.2: Queue 1 gets LSB 5 [ 10.038323] ccp 0000:85:00.2: Queue 2 gets LSB 6 [ 10.038757] ccp 0000:85:00.2: enabled [ 10.038887] ccp 0000:86:00.1: 5 command queues available [ 10.038931] ccp 0000:86:00.1: irq 248 for MSI/MSI-X [ 10.038966] ccp 0000:86:00.1: Queue 0 can access 7 LSB regions [ 10.038968] ccp 0000:86:00.1: Queue 1 can access 7 LSB regions [ 10.038970] ccp 0000:86:00.1: Queue 2 can access 7 LSB regions [ 10.038972] ccp 0000:86:00.1: Queue 3 can access 7 LSB regions [ 10.038974] ccp 0000:86:00.1: Queue 4 can access 7 LSB regions [ 10.038976] ccp 0000:86:00.1: Queue 0 gets LSB 1 [ 10.038977] ccp 0000:86:00.1: Queue 1 gets LSB 2 [ 10.038978] ccp 0000:86:00.1: Queue 2 gets LSB 3 [ 10.038979] ccp 0000:86:00.1: Queue 3 gets LSB 4 [ 10.038981] ccp 0000:86:00.1: Queue 4 gets LSB 5 [ 10.039773] ccp 0000:86:00.1: enabled [ 10.039954] ccp 0000:c2:00.2: 3 command queues 
available [ 10.039997] ccp 0000:c2:00.2: irq 250 for MSI/MSI-X [ 10.040020] ccp 0000:c2:00.2: irq 251 for MSI/MSI-X [ 10.040064] ccp 0000:c2:00.2: Queue 2 can access 4 LSB regions [ 10.040066] ccp 0000:c2:00.2: Queue 3 can access 4 LSB regions [ 10.040068] ccp 0000:c2:00.2: Queue 4 can access 4 LSB regions [ 10.040069] ccp 0000:c2:00.2: Queue 0 gets LSB 4 [ 10.040070] ccp 0000:c2:00.2: Queue 1 gets LSB 5 [ 10.040071] ccp 0000:c2:00.2: Queue 2 gets LSB 6 [ 10.040444] ccp 0000:c2:00.2: enabled [ 10.040541] ccp 0000:c3:00.1: 5 command queues available [ 10.040583] ccp 0000:c3:00.1: irq 253 for MSI/MSI-X [ 10.040604] ccp 0000:c3:00.1: Queue 0 can access 7 LSB regions [ 10.040606] ccp 0000:c3:00.1: Queue 1 can access 7 LSB regions [ 10.040608] ccp 0000:c3:00.1: Queue 2 can access 7 LSB regions [ 10.040610] ccp 0000:c3:00.1: Queue 3 can access 7 LSB regions [ 10.040612] ccp 0000:c3:00.1: Queue 4 can access 7 LSB regions [ 10.040613] ccp 0000:c3:00.1: Queue 0 gets LSB 1 [ 10.040614] ccp 0000:c3:00.1: Queue 1 gets LSB 2 [ 10.040615] ccp 0000:c3:00.1: Queue 2 gets LSB 3 [ 10.040617] ccp 0000:c3:00.1: Queue 3 gets LSB 4 [ 10.040618] ccp 0000:c3:00.1: Queue 4 gets LSB 5 [ 10.041366] ccp 0000:c3:00.1: enabled [ 10.645783] cryptd: max_cpu_qlen set to 1000 [ 10.656078] input: PC Speaker as /devices/platform/pcspkr/input/input2 [ 10.664391] IPMI System Interface driver [ 10.668777] ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS [ 10.676494] ipmi_si: SMBIOS: io 0xca8 regsize 1 spacing 4 irq 10 [ 10.683883] ipmi_si: Adding SMBIOS-specified kcs state machine [ 10.691155] ipmi_si IPI0001:00: ipmi_platform: probing via ACPI [ 10.698442] ipmi_si IPI0001:00: [io 0x0ca8] regsize 1 spacing 4 irq 10 [ 10.706438] ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI [ 10.716493] ipmi_si: Adding ACPI-specified kcs state machine [ 10.723714] ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca8, slave address 0x20, irq 10 [ 10.736231] sd 1:0:0:0: Embedded Enclosure Device [ 10.747166] device-mapper: uevent: version 1.0.3 [ 10.747860] sd 1:0:0:1: Embedded Enclosure Device [ 10.750015] scsi 1:0:0:31: Embedded Enclosure Device [ 10.752405] sd 1:0:1:0: Embedded Enclosure Device [ 10.754636] sd 1:0:1:1: Embedded Enclosure Device [ 10.757129] scsi 1:0:1:31: Embedded Enclosure Device [ 10.759648] sd 1:0:2:0: Embedded Enclosure Device [ 10.762369] sd 1:0:2:1: Embedded Enclosure Device [ 10.765561] sd 1:0:2:2: Embedded Enclosure Device [ 10.767725] scsi 1:0:2:31: Embedded Enclosure Device [ 10.769956] sd 1:0:3:0: Embedded Enclosure Device [ 10.771261] ipmi_si IPI0001:00: The BMC does not support setting the recv irq bit, compensating, but the BMC needs to be fixed. [ 10.772360] sd 1:0:3:1: Embedded Enclosure Device [ 10.774587] sd 1:0:3:2: Embedded Enclosure Device [ 10.776751] scsi 1:0:3:31: Embedded Enclosure Device [ 10.780725] ses 1:0:4:0: Attached Enclosure device [ 10.783359] ipmi_si IPI0001:00: Using irq 10 [ 10.807160] ipmi_si IPI0001:00: Found new BMC (man_id: 0x0002a2, prod_id: 0x0100, dev_id: 0x20) [ 10.858460] device-mapper: ioctl: 4.37.1-ioctl (2018-04-03) initialised: dm-devel@redhat.com [ 10.867598] AVX2 version of gcm_enc/dec engaged. 
[ 10.872252] AES CTR mode by8 optimization enabled [ 10.878030] ipmi_si IPI0001:00: IPMI kcs interface initialized [ 10.883539] alg: No test for __gcm-aes-aesni (__driver-gcm-aes-aesni) [ 10.883668] alg: No test for __generic-gcm-aes-aesni (__driver-generic-gcm-aes-aesni) [ 10.901595] dcdbas dcdbas: Dell Systems Management Base Driver (version 5.6.0-3.3) [ 10.964313] kvm: Nested Paging enabled [ 10.971585] MCE: In-kernel MCE decoding enabled. [ 10.980092] AMD64 EDAC driver v3.4.0 [ 10.983702] EDAC amd64: DRAM ECC enabled. [ 10.987719] EDAC amd64: F17h detected (node 0). [ 10.992303] EDAC MC: UMC0 chip selects: [ 10.992306] EDAC amd64: MC: 0: 0MB 1: 0MB [ 10.997018] EDAC amd64: MC: 2: 16383MB 3: 16383MB [ 10.997021] EDAC amd64: MC: 4: 0MB 5: 0MB [ 10.997022] EDAC amd64: MC: 6: 0MB 7: 0MB [ 10.997025] EDAC MC: UMC1 chip selects: [ 10.997025] EDAC amd64: MC: 0: 0MB 1: 0MB [ 10.997026] EDAC amd64: MC: 2: 16383MB 3: 16383MB [ 10.997027] EDAC amd64: MC: 4: 0MB 5: 0MB [ 10.997027] EDAC amd64: MC: 6: 0MB 7: 0MB [ 10.997028] EDAC amd64: using x8 syndromes. [ 10.997028] EDAC amd64: MCT channel count: 2 [ 10.997198] EDAC MC0: Giving out device to 'amd64_edac' 'F17h': DEV 0000:00:18.3 [ 10.997203] EDAC amd64: DRAM ECC enabled. [ 10.997204] EDAC amd64: F17h detected (node 1). [ 10.997253] EDAC MC: UMC0 chip selects: [ 10.997254] EDAC amd64: MC: 0: 0MB 1: 0MB [ 10.997255] EDAC amd64: MC: 2: 16383MB 3: 16383MB [ 10.997255] EDAC amd64: MC: 4: 0MB 5: 0MB [ 10.997256] EDAC amd64: MC: 6: 0MB 7: 0MB [ 10.997258] EDAC MC: UMC1 chip selects: [ 10.997259] EDAC amd64: MC: 0: 0MB 1: 0MB [ 10.997259] EDAC amd64: MC: 2: 16383MB 3: 16383MB [ 10.997260] EDAC amd64: MC: 4: 0MB 5: 0MB [ 10.997260] EDAC amd64: MC: 6: 0MB 7: 0MB [ 10.997261] EDAC amd64: using x8 syndromes. [ 10.997261] EDAC amd64: MCT channel count: 2 [ 10.997424] EDAC MC1: Giving out device to 'amd64_edac' 'F17h': DEV 0000:00:19.3 [ 10.997429] EDAC amd64: DRAM ECC enabled. [ 10.997430] EDAC amd64: F17h detected (node 2). [ 10.997468] EDAC MC: UMC0 chip selects: [ 10.997469] EDAC amd64: MC: 0: 0MB 1: 0MB [ 10.997470] EDAC amd64: MC: 2: 16383MB 3: 16383MB [ 10.997470] EDAC amd64: MC: 4: 0MB 5: 0MB [ 10.997471] EDAC amd64: MC: 6: 0MB 7: 0MB [ 10.997473] EDAC MC: UMC1 chip selects: [ 10.997474] EDAC amd64: MC: 0: 0MB 1: 0MB [ 10.997474] EDAC amd64: MC: 2: 16383MB 3: 16383MB [ 10.997476] EDAC amd64: MC: 4: 0MB 5: 0MB [ 10.997478] EDAC amd64: MC: 6: 0MB 7: 0MB [ 10.997479] EDAC amd64: using x8 syndromes. [ 10.997479] EDAC amd64: MCT channel count: 2 [ 10.998093] EDAC MC2: Giving out device to 'amd64_edac' 'F17h': DEV 0000:00:1a.3 [ 10.998101] EDAC amd64: DRAM ECC enabled. [ 10.998102] EDAC amd64: F17h detected (node 3). [ 10.998147] EDAC MC: UMC0 chip selects: [ 10.998148] EDAC amd64: MC: 0: 0MB 1: 0MB [ 10.998149] EDAC amd64: MC: 2: 16383MB 3: 16383MB [ 10.998150] EDAC amd64: MC: 4: 0MB 5: 0MB [ 10.998151] EDAC amd64: MC: 6: 0MB 7: 0MB [ 10.998153] EDAC MC: UMC1 chip selects: [ 10.998154] EDAC amd64: MC: 0: 0MB 1: 0MB [ 10.998155] EDAC amd64: MC: 2: 16383MB 3: 16383MB [ 10.998156] EDAC amd64: MC: 4: 0MB 5: 0MB [ 10.998156] EDAC amd64: MC: 6: 0MB 7: 0MB [ 10.998157] EDAC amd64: using x8 syndromes. 
[ 10.998157] EDAC amd64: MCT channel count: 2 [ 10.999466] EDAC MC3: Giving out device to 'amd64_edac' 'F17h': DEV 0000:00:1b.3 [ 10.999538] EDAC PCI0: Giving out device to module 'amd64_edac' controller 'EDAC PCI controller': DEV '0000:00:18.0' (POLLED) [ 32.319941] device-mapper: multipath round-robin: version 1.2.0 loaded [ 41.294705] Adding 4194300k swap on /dev/sda3. Priority:-2 extents:1 across:4194300k FS [ 41.307933] FAT-fs (sda1): Volume was not properly unmounted. Some data may be corrupt. Please run fsck. [ 41.345748] type=1305 audit(1596901499.751:3): audit_pid=10388 old=0 auid=4294967295 ses=4294967295 res=1 [ 41.367731] RPC: Registered named UNIX socket transport module. [ 41.373964] RPC: Registered udp transport module. [ 41.380034] RPC: Registered tcp transport module. [ 41.386143] RPC: Registered tcp NFSv4.1 backchannel transport module. [ 42.017659] mlx5_core 0000:01:00.0: slow_pci_heuristic:5575:(pid 10691): Max link speed = 100000, PCI BW = 126016 [ 42.027974] mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) [ 42.036246] mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) [ 42.470727] tg3 0000:81:00.0: irq 254 for MSI/MSI-X [ 42.470740] tg3 0000:81:00.0: irq 255 for MSI/MSI-X [ 42.470752] tg3 0000:81:00.0: irq 256 for MSI/MSI-X [ 42.470763] tg3 0000:81:00.0: irq 257 for MSI/MSI-X [ 42.470777] tg3 0000:81:00.0: irq 258 for MSI/MSI-X [ 42.596955] IPv6: ADDRCONF(NETDEV_UP): em1: link is not ready [ 46.173054] tg3 0000:81:00.0 em1: Link is up at 1000 Mbps, full duplex [ 46.179587] tg3 0000:81:00.0 em1: Flow control is on for TX and on for RX [ 46.186376] tg3 0000:81:00.0 em1: EEE is enabled [ 46.191013] IPv6: ADDRCONF(NETDEV_CHANGE): em1: link becomes ready [ 46.950489] IPv6: ADDRCONF(NETDEV_UP): ib0: link is not ready [ 47.248368] IPv6: ADDRCONF(NETDEV_CHANGE): ib0: link becomes ready [ 51.085678] FS-Cache: Loaded [ 51.115990] FS-Cache: Netfs 'nfs' registered for caching [ 51.125131] Key type dns_resolver registered [ 51.153397] NFS: Registering the id_resolver key type [ 51.158615] Key type id_resolver registered [ 51.164148] Key type id_legacy registered [ 71.740956] LNet: HW NUMA nodes: 4, HW CPU cores: 48, npartitions: 4 [ 71.748709] alg: No test for adler32 (adler32-zlib) [ 72.547972] Lustre: Lustre: Build Version: 2.12.5_2_g8ac362a [ 72.653153] LNet: 20083:0:(config.c:1642:lnet_inet_enumerate()) lnet: Ignoring interface em2: it's down [ 72.662950] LNet: Using FastReg for registration [ 72.678584] LNet: Added LNI 10.0.10.54@o2ib7 [8/256/0/180] [ 139.130199] LNetError: 20140:0:(o2iblnd_cb.c:3351:kiblnd_check_txs_locked()) Timed out tx: active_txs, 0 seconds [ 139.140373] LNetError: 20140:0:(o2iblnd_cb.c:3426:kiblnd_check_conns()) Timed out RDMA with 10.0.10.226@o2ib7 (32): c: 7, oc: 0, rc: 8 [ 189.131611] LNetError: 20140:0:(o2iblnd_cb.c:3351:kiblnd_check_txs_locked()) Timed out tx: active_txs, 0 seconds [ 189.141787] LNetError: 20140:0:(o2iblnd_cb.c:3426:kiblnd_check_conns()) Timed out RDMA with 10.0.10.109@o2ib7 (106): c: 8, oc: 0, rc: 8 [ 194.131746] LNetError: 20140:0:(o2iblnd_cb.c:3351:kiblnd_check_txs_locked()) Timed out tx: active_txs, 0 seconds [ 194.141914] LNetError: 20140:0:(o2iblnd_cb.c:3351:kiblnd_check_txs_locked()) Skipped 1 previous similar message [ 194.151995] LNetError: 20140:0:(o2iblnd_cb.c:3426:kiblnd_check_conns()) Timed out RDMA with 10.0.10.113@o2ib7 (106): c: 8, oc: 0, rc: 8 [ 194.164152] LNetError: 20140:0:(o2iblnd_cb.c:3426:kiblnd_check_conns()) Skipped 1 previous similar message [ 
199.131884] LNetError: 20140:0:(o2iblnd_cb.c:3351:kiblnd_check_txs_locked()) Timed out tx: active_txs, 0 seconds [ 199.142050] LNetError: 20140:0:(o2iblnd_cb.c:3426:kiblnd_check_conns()) Timed out RDMA with 10.0.10.51@o2ib7 (106): c: 8, oc: 0, rc: 8 [ 211.132228] LNetError: 20140:0:(o2iblnd_cb.c:3351:kiblnd_check_txs_locked()) Timed out tx: active_txs, 0 seconds [ 211.142394] LNetError: 20140:0:(o2iblnd_cb.c:3351:kiblnd_check_txs_locked()) Skipped 3 previous similar messages [ 211.152564] LNetError: 20140:0:(o2iblnd_cb.c:3426:kiblnd_check_conns()) Timed out RDMA with 10.0.10.101@o2ib7 (105): c: 8, oc: 0, rc: 8 [ 211.164722] LNetError: 20140:0:(o2iblnd_cb.c:3426:kiblnd_check_conns()) Skipped 3 previous similar messages [ 230.132775] LNetError: 20140:0:(o2iblnd_cb.c:3351:kiblnd_check_txs_locked()) Timed out tx: active_txs, 0 seconds [ 230.142941] LNetError: 20140:0:(o2iblnd_cb.c:3351:kiblnd_check_txs_locked()) Skipped 2 previous similar messages [ 230.153111] LNetError: 20140:0:(o2iblnd_cb.c:3426:kiblnd_check_conns()) Timed out RDMA with 10.0.10.106@o2ib7 (105): c: 8, oc: 0, rc: 8 [ 230.165267] LNetError: 20140:0:(o2iblnd_cb.c:3426:kiblnd_check_conns()) Skipped 2 previous similar messages [ 241.362214] INFO: task kiblnd_sd_00_00:20141 blocked for more than 120 seconds. [ 241.369527] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [ 241.377363] kiblnd_sd_00_00 D ffff9d00f1951040 0 20141 2 0x00000080 [ 241.384476] Call Trace: [ 241.386938] [] ? load_balance+0x1be/0x9a0 [ 241.392631] [] schedule_preempt_disabled+0x29/0x70 [ 241.399077] [] __mutex_lock_slowpath+0xc7/0x1d0 [ 241.405264] [] mutex_lock+0x1f/0x2f [ 241.410439] [] lnet_nid2peerni_locked+0x71/0x150 [lnet] [ 241.417326] [] lnet_parse+0x791/0x11f0 [lnet] [ 241.423349] [] kiblnd_handle_rx+0x213/0x6b0 [ko2iblnd] [ 241.430162] [] kiblnd_scheduler+0xf3c/0x1180 [ko2iblnd] [ 241.437059] [] ? dequeue_task_fair+0x41e/0x660 [ 241.443155] [] ? __switch_to+0xce/0x580 [ 241.448666] [] ? wake_up_state+0x20/0x20 [ 241.454250] [] ? kiblnd_cq_event+0x90/0x90 [ko2iblnd] [ 241.460958] [] kthread+0xd1/0xe0 [ 241.465860] [] ? insert_kthread_work+0x40/0x40 [ 241.471963] [] ret_from_fork_nospec_begin+0xe/0x21 [ 241.478413] [] ? insert_kthread_work+0x40/0x40 [ 241.484528] INFO: task kiblnd_sd_00_01:20142 blocked for more than 120 seconds. [ 241.491833] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [ 241.499663] kiblnd_sd_00_01 D ffff9d00f1956180 0 20142 2 0x00000080 [ 241.506800] Call Trace: [ 241.509254] [] ? load_balance+0x178/0x9a0 [ 241.514918] [] schedule_preempt_disabled+0x29/0x70 [ 241.521378] [] __mutex_lock_slowpath+0xc7/0x1d0 [ 241.527559] [] mutex_lock+0x1f/0x2f [ 241.532710] [] lnet_nid2peerni_locked+0x71/0x150 [lnet] [ 241.539612] [] lnet_parse+0x791/0x11f0 [lnet] [ 241.545625] [] kiblnd_handle_rx+0x213/0x6b0 [ko2iblnd] [ 241.552422] [] kiblnd_scheduler+0xf3c/0x1180 [ko2iblnd] [ 241.559342] [] ? dequeue_task_fair+0x41e/0x660 [ 241.565444] [] ? __switch_to+0xce/0x580 [ 241.570940] [] ? wake_up_state+0x20/0x20 [ 241.576540] [] ? kiblnd_cq_event+0x90/0x90 [ko2iblnd] [ 241.583245] [] kthread+0xd1/0xe0 [ 241.588127] [] ? insert_kthread_work+0x40/0x40 [ 241.594245] [] ret_from_fork_nospec_begin+0xe/0x21 [ 241.600691] [] ? insert_kthread_work+0x40/0x40 [ 241.606788] INFO: task kiblnd_sd_00_02:20143 blocked for more than 120 seconds. [ 241.614114] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. 
[ 241.621958] kiblnd_sd_00_02 D ffff9d11a9afb0c0 0 20143 2 0x00000080 [ 241.629064] Call Trace: [ 241.631522] [] ? __enqueue_entity+0x78/0x80 [ 241.637381] [] schedule_preempt_disabled+0x29/0x70 [ 241.643828] [] __mutex_lock_slowpath+0xc7/0x1d0 [ 241.650017] [] mutex_lock+0x1f/0x2f [ 241.655188] [] lnet_nid2peerni_locked+0x71/0x150 [lnet] [ 241.662071] [] lnet_parse+0x791/0x11f0 [lnet] [ 241.668091] [] kiblnd_handle_rx+0x213/0x6b0 [ko2iblnd] [ 241.674898] [] kiblnd_scheduler+0xf3c/0x1180 [ko2iblnd] [ 241.681786] [] ? dequeue_task_fair+0x41e/0x660 [ 241.687884] [] ? __switch_to+0xce/0x580 [ 241.693394] [] ? wake_up_state+0x20/0x20 [ 241.698980] [] ? kiblnd_cq_event+0x90/0x90 [ko2iblnd] [ 241.705686] [] kthread+0xd1/0xe0 [ 241.710586] [] ? insert_kthread_work+0x40/0x40 [ 241.716682] [] ret_from_fork_nospec_begin+0xe/0x21 [ 241.723133] [] ? insert_kthread_work+0x40/0x40 [ 241.729249] INFO: task kiblnd_sd_00_03:20144 blocked for more than 120 seconds. [ 241.736562] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [ 241.744391] kiblnd_sd_00_03 D ffff9d11a9afc100 0 20144 2 0x00000080 [ 241.751529] Call Trace: [ 241.753985] [] ? load_balance+0x178/0x9a0 [ 241.759646] [] schedule_preempt_disabled+0x29/0x70 [ 241.766111] [] __mutex_lock_slowpath+0xc7/0x1d0 [ 241.772299] [] mutex_lock+0x1f/0x2f [ 241.777454] [] lnet_nid2peerni_locked+0x71/0x150 [lnet] [ 241.784359] [] lnet_parse+0x791/0x11f0 [lnet] [ 241.790371] [] kiblnd_handle_rx+0x213/0x6b0 [ko2iblnd] [ 241.797167] [] kiblnd_scheduler+0xf3c/0x1180 [ko2iblnd] [ 241.804072] [] ? dequeue_task_fair+0x41e/0x660 [ 241.810173] [] ? __switch_to+0xce/0x580 [ 241.815669] [] ? wake_up_state+0x20/0x20 [ 241.821267] [] ? kiblnd_cq_event+0x90/0x90 [ko2iblnd] [ 241.827973] [] kthread+0xd1/0xe0 [ 241.832856] [] ? insert_kthread_work+0x40/0x40 [ 241.838972] [] ret_from_fork_nospec_begin+0xe/0x21 [ 241.845412] [] ? insert_kthread_work+0x40/0x40 [ 241.851508] INFO: task kiblnd_sd_01_00:20145 blocked for more than 120 seconds. [ 241.858835] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [ 241.866676] kiblnd_sd_01_00 D ffff9d20ed44b0c0 0 20145 2 0x00000080 [ 241.873777] Call Trace: [ 241.876247] [] ? __enqueue_entity+0x78/0x80 [ 241.882085] [] schedule_preempt_disabled+0x29/0x70 [ 241.888547] [] __mutex_lock_slowpath+0xc7/0x1d0 [ 241.894727] [] mutex_lock+0x1f/0x2f [ 241.899874] [] lnet_nid2peerni_locked+0x71/0x150 [lnet] [ 241.906782] [] lnet_parse+0x791/0x11f0 [lnet] [ 241.912792] [] kiblnd_handle_rx+0x213/0x6b0 [ko2iblnd] [ 241.919590] [] kiblnd_scheduler+0xf3c/0x1180 [ko2iblnd] [ 241.926494] [] ? dequeue_task_fair+0x41e/0x660 [ 241.932591] [] ? __switch_to+0xce/0x580 [ 241.938083] [] ? wake_up_state+0x20/0x20 [ 241.943680] [] ? kiblnd_cq_event+0x90/0x90 [ko2iblnd] [ 241.950387] [] kthread+0xd1/0xe0 [ 241.955266] [] ? insert_kthread_work+0x40/0x40 [ 241.961385] [] ret_from_fork_nospec_begin+0xe/0x21 [ 241.967824] [] ? insert_kthread_work+0x40/0x40 [ 241.973920] INFO: task kiblnd_sd_01_02:20147 blocked for more than 120 seconds. [ 241.981247] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [ 241.989094] kiblnd_sd_01_02 D ffff9d20ed449040 0 20147 2 0x00000080 [ 241.996206] Call Trace: [ 241.998663] [] ? 
__enqueue_entity+0x78/0x80 [ 242.004518] [] schedule_preempt_disabled+0x29/0x70 [ 242.010961] [] __mutex_lock_slowpath+0xc7/0x1d0 [ 242.017151] [] mutex_lock+0x1f/0x2f [ 242.022318] [] lnet_nid2peerni_locked+0x71/0x150 [lnet] [ 242.029204] [] lnet_parse+0x791/0x11f0 [lnet] [ 242.035224] [] kiblnd_handle_rx+0x213/0x6b0 [ko2iblnd] [ 242.042033] [] kiblnd_scheduler+0xf3c/0x1180 [ko2iblnd] [ 242.048928] [] ? dequeue_task_fair+0x41e/0x660 [ 242.055027] [] ? __switch_to+0xce/0x580 [ 242.060534] [] ? wake_up_state+0x20/0x20 [ 242.066111] [] ? kiblnd_cq_event+0x90/0x90 [ko2iblnd] [ 242.072819] [] kthread+0xd1/0xe0 [ 242.077721] [] ? insert_kthread_work+0x40/0x40 [ 242.083815] [] ret_from_fork_nospec_begin+0xe/0x21 [ 242.090266] [] ? insert_kthread_work+0x40/0x40 [ 242.096381] INFO: task kiblnd_sd_02_00:20149 blocked for more than 120 seconds. [ 242.103686] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [ 242.111516] kiblnd_sd_02_00 D ffff9d20f5ead140 0 20149 2 0x00000080 [ 242.118652] Call Trace: [ 242.121107] [] ? __enqueue_entity+0x78/0x80 [ 242.126943] [] schedule_preempt_disabled+0x29/0x70 [ 242.133406] [] __mutex_lock_slowpath+0xc7/0x1d0 [ 242.139588] [] mutex_lock+0x1f/0x2f [ 242.144743] [] lnet_nid2peerni_locked+0x71/0x150 [lnet] [ 242.151645] [] lnet_parse+0x791/0x11f0 [lnet] [ 242.157660] [] kiblnd_handle_rx+0x213/0x6b0 [ko2iblnd] [ 242.164457] [] kiblnd_scheduler+0xf3c/0x1180 [ko2iblnd] [ 242.171365] [] ? dequeue_task_fair+0x41e/0x660 [ 242.177461] [] ? __switch_to+0xce/0x580 [ 242.182958] [] ? wake_up_state+0x20/0x20 [ 242.188558] [] ? kiblnd_cq_event+0x90/0x90 [ko2iblnd] [ 242.195263] [] kthread+0xd1/0xe0 [ 242.200154] [] ? insert_kthread_work+0x40/0x40 [ 242.206271] [] ret_from_fork_nospec_begin+0xe/0x21 [ 242.212717] [] ? insert_kthread_work+0x40/0x40 [ 242.218813] INFO: task kiblnd_sd_02_01:20150 blocked for more than 120 seconds. [ 242.226141] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [ 242.233985] kiblnd_sd_02_01 D ffff9d20f5eac100 0 20150 2 0x00000080 [ 242.241090] Call Trace: [ 242.243548] [] ? __enqueue_entity+0x78/0x80 [ 242.249406] [] schedule_preempt_disabled+0x29/0x70 [ 242.255853] [] __mutex_lock_slowpath+0xc7/0x1d0 [ 242.262035] [] mutex_lock+0x1f/0x2f [ 242.267207] [] lnet_nid2peerni_locked+0x71/0x150 [lnet] [ 242.274087] [] lnet_parse+0x791/0x11f0 [lnet] [ 242.280102] [] kiblnd_handle_rx+0x213/0x6b0 [ko2iblnd] [ 242.286916] [] kiblnd_scheduler+0xf3c/0x1180 [ko2iblnd] [ 242.293813] [] ? dequeue_task_fair+0x41e/0x660 [ 242.299910] [] ? __switch_to+0xce/0x580 [ 242.305420] [] ? wake_up_state+0x20/0x20 [ 242.311004] [] ? kiblnd_cq_event+0x90/0x90 [ko2iblnd] [ 242.317713] [] kthread+0xd1/0xe0 [ 242.322613] [] ? insert_kthread_work+0x40/0x40 [ 242.328708] [] ret_from_fork_nospec_begin+0xe/0x21 [ 242.335150] [] ? insert_kthread_work+0x40/0x40 [ 242.341266] INFO: task kiblnd_sd_02_02:20151 blocked for more than 120 seconds. [ 242.348571] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [ 242.356402] kiblnd_sd_02_02 D ffff9d20f5eaa080 0 20151 2 0x00000080 [ 242.363538] Call Trace: [ 242.365992] [] ? 
__enqueue_entity+0x78/0x80 [ 242.371827] [] schedule_preempt_disabled+0x29/0x70 [ 242.378290] [] __mutex_lock_slowpath+0xc7/0x1d0 [ 242.384472] [] mutex_lock+0x1f/0x2f [ 242.389627] [] lnet_nid2peerni_locked+0x71/0x150 [lnet] [ 242.396532] [] lnet_parse+0x791/0x11f0 [lnet] [ 242.402545] [] kiblnd_handle_rx+0x213/0x6b0 [ko2iblnd] [ 242.409340] [] kiblnd_scheduler+0xf3c/0x1180 [ko2iblnd] [ 242.416248] [] ? dequeue_task_fair+0x41e/0x660 [ 242.422347] [] ? __switch_to+0xce/0x580 [ 242.427842] [] ? wake_up_state+0x20/0x20 [ 242.433439] [] ? kiblnd_cq_event+0x90/0x90 [ko2iblnd] [ 242.440141] [] kthread+0xd1/0xe0 [ 242.445030] [] ? insert_kthread_work+0x40/0x40 [ 242.451144] [] ret_from_fork_nospec_begin+0xe/0x21 [ 242.457586] [] ? insert_kthread_work+0x40/0x40 [ 242.463688] INFO: task kiblnd_sd_02_03:20152 blocked for more than 120 seconds. [ 242.471018] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [ 242.478858] kiblnd_sd_02_03 D ffff9d20f5ea9040 0 20152 2 0x00000080 [ 242.485968] Call Trace: [ 242.488439] [] ? load_balance+0x178/0x9a0 [ 242.494101] [] schedule_preempt_disabled+0x29/0x70 [ 242.500561] [] __mutex_lock_slowpath+0xc7/0x1d0 [ 242.506745] [] mutex_lock+0x1f/0x2f [ 242.511902] [] lnet_nid2peerni_locked+0x71/0x150 [lnet] [ 242.518806] [] lnet_parse+0x791/0x11f0 [lnet] [ 242.524821] [] kiblnd_handle_rx+0x213/0x6b0 [ko2iblnd] [ 242.531616] [] kiblnd_scheduler+0xf3c/0x1180 [ko2iblnd] [ 242.538520] [] ? dequeue_task_fair+0x41e/0x660 [ 242.544618] [] ? __switch_to+0xce/0x580 [ 242.550110] [] ? wake_up_state+0x20/0x20 [ 242.555719] [] ? kiblnd_cq_event+0x90/0x90 [ko2iblnd] [ 242.562420] [] kthread+0xd1/0xe0 [ 242.567304] [] ? insert_kthread_work+0x40/0x40 [ 242.573421] [] ret_from_fork_nospec_begin+0xe/0x21 [ 242.579867] [] ? insert_kthread_work+0x40/0x40 [ 307.134960] LNetError: 20140:0:(o2iblnd_cb.c:3351:kiblnd_check_txs_locked()) Timed out tx: active_txs, 0 seconds [ 307.145129] LNetError: 20140:0:(o2iblnd_cb.c:3351:kiblnd_check_txs_locked()) Skipped 4 previous similar messages [ 307.155298] LNetError: 20140:0:(o2iblnd_cb.c:3426:kiblnd_check_conns()) Timed out RDMA with 10.0.10.232@o2ib7 (5): c: 7, oc: 0, rc: 8 [ 307.167288] LNetError: 20140:0:(o2iblnd_cb.c:3426:kiblnd_check_conns()) Skipped 4 previous similar messages [ 308.552205] LDISKFS-fs warning (device dm-1): ldiskfs_multi_mount_protect:321: MMP interval 42 higher than expected, please wait. [ 350.566472] LDISKFS-fs (dm-1): file extents enabled, maximum tree depth=5 [ 361.307701] LDISKFS-fs (dm-1): recovery complete [ 361.312536] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,acl,no_mbcache,nodelalloc [ 362.425798] Lustre: fir-MDT0003: Imperative Recovery enabled, recovery window shrunk from 300-900 down to 150-900 [ 362.439487] Lustre: fir-MDD0003: changelog on [ 363.182445] Lustre: fir-MDT0003: Will be in recovery for at least 2:30, or until 1368 clients reconnect [ 364.191926] Lustre: fir-MDT0003: Connection restored to 780ae761-2011-4 (at 10.49.22.30@o2ib1) [ 364.200545] Lustre: Skipped 4 previous similar messages [ 364.745446] Lustre: fir-MDT0003: Connection restored to e83e30a9-1ff6-4 (at 10.49.24.25@o2ib1) [ 364.754071] Lustre: Skipped 1 previous similar message [ 366.719887] Lustre: fir-MDT0003: Connection restored to a8cd39d4-1e05-4 (at 10.49.25.12@o2ib1) [ 368.723646] Lustre: fir-MDT0003: Connection restored to 647172d5-3da4-4 (at 10.50.4.8@o2ib2) [ 368.732086] Lustre: Skipped 245 previous similar messages [ 372.891860] Lustre: fir-MDT0003: Connection restored to 977ae7db-a89d-4 (at 10.50.6.8@o2ib2) [ 372.900306] Lustre: Skipped 742 previous similar messages [ 380.914256] Lustre: fir-MDT0003: Connection restored to f2b982c8-cfe6-4 (at 10.50.12.16@o2ib2) [ 380.922876] Lustre: Skipped 344 previous similar messages [ 418.573537] Lustre: fir-MDT0003: Connection restored to fir-MDT0000-mdtlov_UUID (at 10.0.10.51@o2ib7) [ 418.582763] Lustre: Skipped 22 previous similar messages [ 451.712898] Lustre: fir-MDT0003: Connection restored to fir-MDT0003-lwp-OST0044_UUID (at 10.0.10.111@o2ib7) [ 451.722647] Lustre: Skipped 14 previous similar messages [ 523.346126] Lustre: fir-MDT0003: recovery is timed out, evict stale exports [ 523.353409] Lustre: fir-MDT0003: disconnecting 1 stale clients [ 523.359683] LustreError: 20753:0:(fld_handler.c:225:fld_local_lookup()) srv-fir-MDT0003: FLD cache range [0x0000001440000400-0x0000001480000400]:45:ost does not match requested flag 0: rc = -5 [ 528.195156] Lustre: fir-MDT0003: Recovery over after 2:45, of 1368 clients 1367 recovered and 1 was evicted. [ 728.383072] LNet: Service thread pid 20847 was inactive for 200.12s. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes: [ 728.400011] Pid: 20847, comm: mdt00_005 3.10.0-957.27.2.el7_lustre.pl2.x86_64 #1 SMP Thu Nov 7 15:26:16 PST 2019 [ 728.410183] Call Trace: [ 728.412653] [] ldlm_completion_ast+0x430/0x860 [ptlrpc] [ 728.419620] [] ldlm_cli_enqueue_fini+0x96f/0xdf0 [ptlrpc] [ 728.426752] [] ldlm_cli_enqueue+0x40e/0x920 [ptlrpc] [ 728.433456] [] osp_md_object_lock+0x162/0x2d0 [osp] [ 728.440029] [] lod_object_lock+0xf4/0x780 [lod] [ 728.446273] [] mdd_object_lock+0x3e/0xe0 [mdd] [ 728.452414] [] mdt_remote_object_lock_try+0x1e1/0x750 [mdt] [ 728.459693] [] mdt_remote_object_lock+0x2a/0x30 [mdt] [ 728.466447] [] mdt_rename_lock+0xbe/0x4b0 [mdt] [ 728.472690] [] mdt_reint_rename+0x2c5/0x2b60 [mdt] [ 728.479178] [] mdt_reint_rec+0x83/0x210 [mdt] [ 728.485250] [] mdt_reint_internal+0x6e3/0xaf0 [mdt] [ 728.491822] [] mdt_reint+0x67/0x140 [mdt] [ 728.497551] [] tgt_request_handle+0xada/0x1570 [ptlrpc] [ 728.504529] [] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc] [ 728.512286] [] ptlrpc_main+0xb34/0x1470 [ptlrpc] [ 728.518630] [] kthread+0xd1/0xe0 [ 728.523561] [] ret_from_fork_nospec_begin+0xe/0x21 [ 728.530061] [] 0xffffffffffffffff [ 728.535114] LustreError: dumping log to /tmp/lustre-log.1596902187.20847 [ 728.895077] LNet: Service thread pid 21158 was inactive for 200.44s. The thread might be hung, or it might only be slow and will resume later. 
Dumping the stack trace for debugging purposes: [ 728.912027] Pid: 21158, comm: mdt03_074 3.10.0-957.27.2.el7_lustre.pl2.x86_64 #1 SMP Thu Nov 7 15:26:16 PST 2019 [ 728.922201] Call Trace: [ 728.924670] [] ldlm_completion_ast+0x430/0x860 [ptlrpc] [ 728.931631] [] ldlm_cli_enqueue_fini+0x96f/0xdf0 [ptlrpc] [ 728.938748] [] ldlm_cli_enqueue+0x40e/0x920 [ptlrpc] [ 728.945430] [] osp_md_object_lock+0x162/0x2d0 [osp] [ 728.952003] [] lod_object_lock+0xf4/0x780 [lod] [ 728.958234] [] mdd_object_lock+0x3e/0xe0 [mdd] [ 728.964375] [] mdt_remote_object_lock_try+0x1e1/0x750 [mdt] [ 728.971640] [] mdt_remote_object_lock+0x2a/0x30 [mdt] [ 728.978381] [] mdt_rename_lock+0xbe/0x4b0 [mdt] [ 728.984604] [] mdt_reint_rename+0x2c5/0x2b60 [mdt] [ 728.991086] [] mdt_reint_rec+0x83/0x210 [mdt] [ 728.997146] [] mdt_reint_internal+0x6e3/0xaf0 [mdt] [ 729.003714] [] mdt_reint+0x67/0x140 [mdt] [ 729.009416] [] tgt_request_handle+0xada/0x1570 [ptlrpc] [ 729.016396] [] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc] [ 729.024138] [] ptlrpc_main+0xb34/0x1470 [ptlrpc] [ 729.030479] [] kthread+0xd1/0xe0 [ 729.035393] [] ret_from_fork_nospec_begin+0xe/0x21 [ 729.041867] [] 0xffffffffffffffff [ 729.407095] Pid: 21017, comm: mdt01_038 3.10.0-957.27.2.el7_lustre.pl2.x86_64 #1 SMP Thu Nov 7 15:26:16 PST 2019 [ 729.417276] Call Trace: [ 729.419742] [] ldlm_completion_ast+0x430/0x860 [ptlrpc] [ 729.426696] [] ldlm_cli_enqueue_fini+0x96f/0xdf0 [ptlrpc] [ 729.433813] [] ldlm_cli_enqueue+0x40e/0x920 [ptlrpc] [ 729.440495] [] osp_md_object_lock+0x162/0x2d0 [osp] [ 729.447054] [] lod_object_lock+0xf4/0x780 [lod] [ 729.453269] [] mdd_object_lock+0x3e/0xe0 [mdd] [ 729.459397] [] mdt_remote_object_lock_try+0x1e1/0x750 [mdt] [ 729.466660] [] mdt_remote_object_lock+0x2a/0x30 [mdt] [ 729.473412] [] mdt_rename_lock+0xbe/0x4b0 [mdt] [ 729.479636] [] mdt_reint_rename+0x2c5/0x2b60 [mdt] [ 729.486119] [] mdt_reint_rec+0x83/0x210 [mdt] [ 729.492167] [] mdt_reint_internal+0x6e3/0xaf0 [mdt] [ 729.498736] [] mdt_reint+0x67/0x140 [mdt] [ 729.504439] [] tgt_request_handle+0xada/0x1570 [ptlrpc] [ 729.511399] [] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc] [ 729.519130] [] ptlrpc_main+0xb34/0x1470 [ptlrpc] [ 729.525465] [] kthread+0xd1/0xe0 [ 729.530379] [] ret_from_fork_nospec_begin+0xe/0x21 [ 729.536856] [] 0xffffffffffffffff [ 729.541873] LustreError: dumping log to /tmp/lustre-log.1596902188.21017 [ 729.549206] Pid: 21081, comm: mdt01_053 3.10.0-957.27.2.el7_lustre.pl2.x86_64 #1 SMP Thu Nov 7 15:26:16 PST 2019 [ 729.559385] Call Trace: [ 729.561865] [] ldlm_completion_ast+0x430/0x860 [ptlrpc] [ 729.568834] [] ldlm_cli_enqueue_fini+0x96f/0xdf0 [ptlrpc] [ 729.575951] [] ldlm_cli_enqueue+0x40e/0x920 [ptlrpc] [ 729.582624] [] osp_md_object_lock+0x162/0x2d0 [osp] [ 729.589213] [] lod_object_lock+0xf4/0x780 [lod] [ 729.595442] [] mdd_object_lock+0x3e/0xe0 [mdd] [ 729.601578] [] mdt_remote_object_lock_try+0x1e1/0x750 [mdt] [ 729.608851] [] mdt_remote_object_lock+0x2a/0x30 [mdt] [ 729.615592] [] mdt_rename_lock+0xbe/0x4b0 [mdt] [ 729.621808] [] mdt_reint_rename+0x2c5/0x2b60 [mdt] [ 729.628281] [] mdt_reint_rec+0x83/0x210 [mdt] [ 729.634352] [] mdt_reint_internal+0x6e3/0xaf0 [mdt] [ 729.640917] [] mdt_reint+0x67/0x140 [mdt] [ 729.646622] [] tgt_request_handle+0xada/0x1570 [ptlrpc] [ 729.653579] [] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc] [ 729.661302] [] ptlrpc_main+0xb34/0x1470 [ptlrpc] [ 729.667631] [] kthread+0xd1/0xe0 [ 729.672544] [] ret_from_fork_nospec_begin+0xe/0x21 [ 729.679026] [] 0xffffffffffffffff [ 729.684049] Pid: 21257, 
comm: mdt01_098 3.10.0-957.27.2.el7_lustre.pl2.x86_64 #1 SMP Thu Nov 7 15:26:16 PST 2019 [ 729.694221] Call Trace: [ 729.696680] [] ldlm_completion_ast+0x430/0x860 [ptlrpc] [ 729.703623] [] ldlm_cli_enqueue_fini+0x96f/0xdf0 [ptlrpc] [ 729.710729] [] ldlm_cli_enqueue+0x40e/0x920 [ptlrpc] [ 729.717404] [] osp_md_object_lock+0x162/0x2d0 [osp] [ 729.723963] [] lod_object_lock+0xf4/0x780 [lod] [ 729.730179] [] mdd_object_lock+0x3e/0xe0 [mdd] [ 729.736308] [] mdt_remote_object_lock_try+0x1e1/0x750 [mdt] [ 729.743559] [] mdt_remote_object_lock+0x2a/0x30 [mdt] [ 729.750296] [] mdt_rename_lock+0xbe/0x4b0 [mdt] [ 729.756509] [] mdt_reint_rename+0x2c5/0x2b60 [mdt] [ 729.762993] [] mdt_reint_rec+0x83/0x210 [mdt] [ 729.769060] [] mdt_reint_internal+0x6e3/0xaf0 [mdt] [ 729.775629] [] mdt_reint+0x67/0x140 [mdt] [ 729.781331] [] tgt_request_handle+0xada/0x1570 [ptlrpc] [ 729.788294] [] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc] [ 729.796022] [] ptlrpc_main+0xb34/0x1470 [ptlrpc] [ 729.802358] [] kthread+0xd1/0xe0 [ 729.807272] [] ret_from_fork_nospec_begin+0xe/0x21 [ 729.813748] [] 0xffffffffffffffff [ 729.818784] LNet: Service thread pid 21295 was inactive for 200.49s. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one. [ 732.479187] LNet: Service thread pid 21192 was inactive for 200.76s. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one. [ 732.492051] LNet: Skipped 7 previous similar messages [ 732.497113] LustreError: dumping log to /tmp/lustre-log.1596902191.21192 [ 736.575294] LNet: Service thread pid 20945 was inactive for 200.61s. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one. [ 736.588221] LustreError: dumping log to /tmp/lustre-log.1596902195.20945 [ 738.624344] LNet: Service thread pid 21039 was inactive for 200.16s. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one. [ 738.637208] LNet: Skipped 1 previous similar message [ 738.642183] LustreError: dumping log to /tmp/lustre-log.1596902197.21039 [ 741.695438] LustreError: dumping log to /tmp/lustre-log.1596902200.21054 [ 747.327595] LNet: Service thread pid 20941 was inactive for 200.39s. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one. [ 747.340461] LNet: Skipped 1 previous similar message [ 747.345435] LustreError: dumping log to /tmp/lustre-log.1596902206.20941 [ 765.760114] LNet: Service thread pid 21162 was inactive for 200.47s. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one. 
[ 765.772979] LustreError: dumping log to /tmp/lustre-log.1596902224.21162 [ 828.261890] Lustre: fir-MDT0000-osp-MDT0003: Connection to fir-MDT0000 (at 10.0.10.51@o2ib7) was lost; in progress operations using this service will wait for recovery to complete [ 828.277893] LustreError: 20847:0:(ldlm_request.c:148:ldlm_expired_completion_wait()) ### lock timed out (enqueued at 1596901987, 300s ago), entering recovery for fir-MDT0000_UUID@10.0.10.51@o2ib7 ns: fir-MDT0000-osp-MDT0003 lock: ffff9cf0d214ec00/0xfdb7f5661d27c282 lrc: 4/0,1 mode: --/EX res: [0x200000004:0x1:0x0].0x0 bits 0x2/0x0 rrc: 30 type: IBT flags: 0x1000001000000 nid: local remote: 0x8a212c21b526e7c9 expref: -99 pid: 20847 timeout: 0 lvb_type: 0 [ 828.343409] Lustre: fir-MDT0000-osp-MDT0003: Connection restored to 10.0.10.51@o2ib7 (at 10.0.10.51@o2ib7) [ 828.353072] Lustre: Skipped 81 previous similar messages [ 828.786909] LustreError: 21257:0:(ldlm_request.c:148:ldlm_expired_completion_wait()) ### lock timed out (enqueued at 1596901987, 300s ago), entering recovery for fir-MDT0000_UUID@10.0.10.51@o2ib7 ns: fir-MDT0000-osp-MDT0003 lock: ffff9d00f02bf500/0xfdb7f5661d2b7b5b lrc: 4/0,1 mode: --/EX res: [0x200000004:0x1:0x0].0x0 bits 0x2/0x0 rrc: 30 type: IBT flags: 0x1000001000000 nid: local remote: 0x8a212c21b52789be expref: -99 pid: 21257 timeout: 0 lvb_type: 0 [ 828.827022] LustreError: 21257:0:(ldlm_request.c:148:ldlm_expired_completion_wait()) Skipped 9 previous similar messages [ 831.715001] LustreError: 21192:0:(ldlm_request.c:148:ldlm_expired_completion_wait()) ### lock timed out (enqueued at 1596901990, 300s ago), entering recovery for fir-MDT0000_UUID@10.0.10.51@o2ib7 ns: fir-MDT0000-osp-MDT0003 lock: ffff9d20f3017500/0xfdb7f5661d4d5482 lrc: 4/0,1 mode: --/EX res: [0x200000004:0x1:0x0].0x0 bits 0x2/0x0 rrc: 30 type: IBT flags: 0x1000001000000 nid: local remote: 0x8a212c21b52a360c expref: -99 pid: 21192 timeout: 0 lvb_type: 0 [ 831.755087] LustreError: 21192:0:(ldlm_request.c:148:ldlm_expired_completion_wait()) Skipped 2 previous similar messages [ 835.964114] LustreError: 20945:0:(ldlm_request.c:148:ldlm_expired_completion_wait()) ### lock timed out (enqueued at 1596901994, 300s ago), entering recovery for fir-MDT0000_UUID@10.0.10.51@o2ib7 ns: fir-MDT0000-osp-MDT0003 lock: ffff9d00f04e6e40/0xfdb7f5661d6537e2 lrc: 4/0,1 mode: --/EX res: [0x200000004:0x1:0x0].0x0 bits 0x2/0x0 rrc: 30 type: IBT flags: 0x1000001000000 nid: local remote: 0x8a212c21b52caf9a expref: -99 pid: 20945 timeout: 0 lvb_type: 0 [ 841.603281] LustreError: 21054:0:(ldlm_request.c:148:ldlm_expired_completion_wait()) ### lock timed out (enqueued at 1596902000, 300s ago), entering recovery for fir-MDT0000_UUID@10.0.10.51@o2ib7 ns: fir-MDT0000-osp-MDT0003 lock: ffff9d10747f5c40/0xfdb7f5661d925135 lrc: 4/0,1 mode: --/EX res: [0x200000004:0x1:0x0].0x0 bits 0x2/0x0 rrc: 31 type: IBT flags: 0x1000001000000 nid: local remote: 0x8a212c21b5305365 expref: -99 pid: 21054 timeout: 0 lvb_type: 0 [ 841.643365] LustreError: 21054:0:(ldlm_request.c:148:ldlm_expired_completion_wait()) Skipped 2 previous similar messages [ 850.242519] LNet: Service thread pid 20892 was inactive for 232.25s. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one. 
[ 850.255388] LustreError: dumping log to /tmp/lustre-log.1596902309.20892 [ 865.287947] LustreError: 21162:0:(ldlm_request.c:148:ldlm_expired_completion_wait()) ### lock timed out (enqueued at 1596902024, 300s ago), entering recovery for fir-MDT0000_UUID@10.0.10.51@o2ib7 ns: fir-MDT0000-osp-MDT0003 lock: ffff9d20bfefda00/0xfdb7f5661e4314a0 lrc: 4/0,1 mode: --/EX res: [0x200000004:0x1:0x0].0x0 bits 0x2/0x0 rrc: 31 type: IBT flags: 0x1000001000000 nid: local remote: 0x8a212c21b542408d expref: -99 pid: 21162 timeout: 0 lvb_type: 0 [ 865.328033] LustreError: 21162:0:(ldlm_request.c:148:ldlm_expired_completion_wait()) Skipped 1 previous similar message [ 917.994433] LustreError: 20892:0:(ldlm_request.c:148:ldlm_expired_completion_wait()) ### lock timed out (enqueued at 1596902076, 300s ago), entering recovery for fir-MDT0000_UUID@10.0.10.51@o2ib7 ns: fir-MDT0000-osp-MDT0003 lock: ffff9d1053b20fc0/0xfdb7f5661f752138 lrc: 4/0,1 mode: --/EX res: [0x200000004:0x1:0x0].0x0 bits 0x2/0x0 rrc: 33 type: IBT flags: 0x1000001000000 nid: local remote: 0x8a212c21b54eefd0 expref: -99 pid: 20892 timeout: 0 lvb_type: 0 [ 938.308935] LNet: Service thread pid 21237 was inactive for 282.11s. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one. [ 938.321798] LustreError: dumping log to /tmp/lustre-log.1596902397.21237 [ 943.429085] LustreError: dumping log to /tmp/lustre-log.1596902402.21241 [ 956.190448] LustreError: 21237:0:(ldlm_request.c:148:ldlm_expired_completion_wait()) ### lock timed out (enqueued at 1596902115, 300s ago), entering recovery for fir-MDT0000_UUID@10.0.10.51@o2ib7 ns: fir-MDT0000-osp-MDT0003 lock: ffff9d20ac647500/0xfdb7f56620684466 lrc: 4/0,1 mode: --/EX res: [0x200000004:0x1:0x0].0x0 bits 0x2/0x0 rrc: 34 type: IBT flags: 0x1000001000000 nid: local remote: 0x8a212c21b556bdee expref: -99 pid: 21237 timeout: 0 lvb_type: 0 [ 1015.111076] LNet: Service thread pid 21080 was inactive for 332.69s. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one. 
[ 1015.123948] LNet: Skipped 1 previous similar message [ 1015.128926] LustreError: dumping log to /tmp/lustre-log.1596902474.21080 [ 1033.669592] LustreError: 21157:0:(ldlm_request.c:148:ldlm_expired_completion_wait()) ### lock timed out (enqueued at 1596902192, 300s ago), entering recovery for fir-MDT0000_UUID@10.0.10.51@o2ib7 ns: fir-MDT0000-osp-MDT0003 lock: ffff9cf02321f740/0xfdb7f566225df9ee lrc: 4/0,1 mode: --/EX res: [0x200000004:0x1:0x0].0x0 bits 0x2/0x0 rrc: 35 type: IBT flags: 0x1000001000000 nid: local remote: 0x8a212c21b5668637 expref: -99 pid: 21157 timeout: 0 lvb_type: 0 [ 1033.709688] LustreError: 21157:0:(ldlm_request.c:148:ldlm_expired_completion_wait()) Skipped 2 previous similar messages [ 1123.354063] Lustre: 21214:0:(service.c:1372:ptlrpc_at_send_early_reply()) @@@ Couldn't add any time (5/5), not sending early reply req@ffff9d20f4f21f80 x1672357021695040/t0(0) o36->ec15422e-002d-4@10.50.4.7@o2ib2:577/0 lens 552/2888 e 23 to 0 dl 1596902587 ref 2 fl Interpret:/0/0 rc 0/0 [ 1123.930082] Lustre: 21312:0:(service.c:1372:ptlrpc_at_send_early_reply()) @@@ Couldn't add any time (5/5), not sending early reply req@ffff9cf0d6f9f080 x1672354262550976/t0(0) o36->647172d5-3da4-4@10.50.4.8@o2ib2:577/0 lens 552/2888 e 23 to 0 dl 1596902587 ref 2 fl Interpret:/0/0 rc 0/0 [ 1123.956999] Lustre: 21312:0:(service.c:1372:ptlrpc_at_send_early_reply()) Skipped 10 previous similar messages [ 1124.940102] Lustre: 21142:0:(service.c:1372:ptlrpc_at_send_early_reply()) @@@ Couldn't add any time (5/5), not sending early reply req@ffff9d0090321680 x1672342544130432/t0(0) o36->a43a8093-f12c-4@10.50.4.52@o2ib2:578/0 lens 592/2888 e 23 to 0 dl 1596902588 ref 2 fl Interpret:/0/0 rc 0/0 [ 1129.289162] Lustre: fir-MDT0003: Client 647172d5-3da4-4 (at 10.50.4.8@o2ib2) reconnecting [ 1129.297389] Lustre: fir-MDT0003: Connection restored to 647172d5-3da4-4 (at 10.50.4.8@o2ib2) [ 1129.819118] Lustre: fir-MDT0003: Client 8101e344-5e5b-4 (at 10.50.1.36@o2ib2) reconnecting [ 1129.827394] Lustre: Skipped 2 previous similar messages [ 1130.954283] Lustre: 21093:0:(service.c:1372:ptlrpc_at_send_early_reply()) @@@ Couldn't add any time (5/5), not sending early reply req@ffff9d0090357980 x1672660241328256/t0(0) o36->0d7cae9d-3145-4@10.50.17.47@o2ib2:584/0 lens 552/2888 e 23 to 0 dl 1596902594 ref 2 fl Interpret:/0/0 rc 0/0 [ 1130.981368] Lustre: 21093:0:(service.c:1372:ptlrpc_at_send_early_reply()) Skipped 1 previous similar message [ 1132.850407] Lustre: fir-MDT0003: Client 0d7cae9d-3145-4 (at 10.50.17.47@o2ib2) reconnecting [ 1132.858778] Lustre: Skipped 6 previous similar messages [ 1136.955430] Lustre: 21303:0:(service.c:1372:ptlrpc_at_send_early_reply()) @@@ Couldn't add any time (5/5), not sending early reply req@ffff9cf0abeac800 x1672378499430528/t0(0) o36->6a67586f-42ae-4@10.50.8.10@o2ib2:590/0 lens 536/2888 e 23 to 0 dl 1596902600 ref 2 fl Interpret:/0/0 rc 0/0 [ 1136.982438] Lustre: 21303:0:(service.c:1372:ptlrpc_at_send_early_reply()) Skipped 2 previous similar messages [ 1137.518672] Lustre: fir-MDT0003: Client 4f9d968b-f0dc-4 (at 10.50.17.48@o2ib2) reconnecting [ 1141.153550] Lustre: fir-MDT0000-osp-MDT0003: Connection to fir-MDT0000 (at 10.0.10.51@o2ib7) was lost; in progress operations using this service will wait for recovery to complete [ 1142.681498] Lustre: fir-MDT0003: Client 6a67586f-42ae-4 (at 10.50.8.10@o2ib2) reconnecting [ 1142.689772] Lustre: Skipped 1 previous similar message [ 1160.859115] Lustre: 20901:0:(service.c:1372:ptlrpc_at_send_early_reply()) @@@ Couldn't add any time (5/5), not 
sending early reply req@ffff9d109bd55100 x1672438047691456/t0(0) o36->497c76fa-304d-4@10.49.27.18@o2ib1:614/0 lens 536/2888 e 11 to 0 dl 1596902624 ref 2 fl Interpret:/0/0 rc 0/0 [ 1160.886202] Lustre: 20901:0:(service.c:1372:ptlrpc_at_send_early_reply()) Skipped 1 previous similar message [ 1166.349326] Lustre: fir-MDT0003: Client 497c76fa-304d-4 (at 10.49.27.18@o2ib1) reconnecting [ 1166.357688] Lustre: Skipped 1 previous similar message [ 1166.667264] LNet: Service thread pid 21157 was inactive for 432.99s. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes: [ 1166.684203] LNet: Skipped 3 previous similar messages [ 1166.689265] Pid: 21157, comm: mdt02_082 3.10.0-957.27.2.el7_lustre.pl2.x86_64 #1 SMP Thu Nov 7 15:26:16 PST 2019 [ 1166.699437] Call Trace: [ 1166.701910] [] ldlm_completion_ast+0x4e5/0x860 [ptlrpc] [ 1166.708863] [] ldlm_cli_enqueue_fini+0x96f/0xdf0 [ptlrpc] [ 1166.715978] [] ldlm_cli_enqueue+0x40e/0x920 [ptlrpc] [ 1166.722652] [] osp_md_object_lock+0x162/0x2d0 [osp] [ 1166.729223] [] lod_object_lock+0xf4/0x780 [lod] [ 1166.735445] [] mdd_object_lock+0x3e/0xe0 [mdd] [ 1166.741582] [] mdt_remote_object_lock_try+0x1e1/0x750 [mdt] [ 1166.748852] [] mdt_remote_object_lock+0x2a/0x30 [mdt] [ 1166.755596] [] mdt_rename_lock+0xbe/0x4b0 [mdt] [ 1166.761819] [] mdt_reint_rename+0x2c5/0x2b60 [mdt] [ 1166.768312] [] mdt_reint_rec+0x83/0x210 [mdt] [ 1166.774368] [] mdt_reint_internal+0x6e3/0xaf0 [mdt] [ 1166.780937] [] mdt_reint+0x67/0x140 [mdt] [ 1166.786658] [] tgt_request_handle+0xada/0x1570 [ptlrpc] [ 1166.793619] [] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc] [ 1166.801340] [] ptlrpc_main+0xb34/0x1470 [ptlrpc] [ 1166.807677] [] kthread+0xd1/0xe0 [ 1166.812595] [] ret_from_fork_nospec_begin+0xe/0x21 [ 1166.819073] [] 0xffffffffffffffff [ 1166.824100] LustreError: dumping log to /tmp/lustre-log.1596902625.21157 [ 1170.692377] LustreError: 20900:0:(ldlm_request.c:148:ldlm_expired_completion_wait()) ### lock timed out (enqueued at 1596902329, 300s ago), entering recovery for fir-MDT0000_UUID@10.0.10.51@o2ib7 ns: fir-MDT0000-osp-MDT0003 lock: ffff9d2094d38d80/0xfdb7f56625927bea lrc: 4/0,1 mode: --/EX res: [0x200000004:0x1:0x0].0x0 bits 0x2/0x0 rrc: 36 type: IBT flags: 0x1000001000000 nid: local remote: 0x8a212c21b5a5dec1 expref: -99 pid: 20900 timeout: 0 lvb_type: 0 [ 1170.732479] LustreError: 20900:0:(ldlm_request.c:148:ldlm_expired_completion_wait()) Skipped 5 previous similar messages [ 1173.835460] LNet: Service thread pid 21144 was inactive for 434.70s. The thread might be hung, or it might only be slow and will resume later. 
Dumping the stack trace for debugging purposes: [ 1173.852398] Pid: 21144, comm: mdt03_069 3.10.0-957.27.2.el7_lustre.pl2.x86_64 #1 SMP Thu Nov 7 15:26:16 PST 2019 [ 1173.862570] Call Trace: [ 1173.865042] [] ldlm_completion_ast+0x4e5/0x860 [ptlrpc] [ 1173.871988] [] ldlm_cli_enqueue_fini+0x96f/0xdf0 [ptlrpc] [ 1173.879110] [] ldlm_cli_enqueue+0x40e/0x920 [ptlrpc] [ 1173.885786] [] osp_md_object_lock+0x162/0x2d0 [osp] [ 1173.892357] [] lod_object_lock+0xf4/0x780 [lod] [ 1173.898587] [] mdd_object_lock+0x3e/0xe0 [mdd] [ 1173.904723] [] mdt_remote_object_lock_try+0x1e1/0x750 [mdt] [ 1173.911996] [] mdt_remote_object_lock+0x2a/0x30 [mdt] [ 1173.918754] [] mdt_rename_lock+0xbe/0x4b0 [mdt] [ 1173.924979] [] mdt_reint_rename+0x2c5/0x2b60 [mdt] [ 1173.931462] [] mdt_reint_rec+0x83/0x210 [mdt] [ 1173.937520] [] mdt_reint_internal+0x6e3/0xaf0 [mdt] [ 1173.944094] [] mdt_reint+0x67/0x140 [mdt] [ 1173.949799] [] tgt_request_handle+0xada/0x1570 [ptlrpc] [ 1173.956759] [] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc] [ 1173.964492] [] ptlrpc_main+0xb34/0x1470 [ptlrpc] [ 1173.970844] [] kthread+0xd1/0xe0 [ 1173.975760] [] ret_from_fork_nospec_begin+0xe/0x21 [ 1173.982240] [] 0xffffffffffffffff [ 1173.987273] LustreError: dumping log to /tmp/lustre-log.1596902632.21144 [ 1251.533633] Lustre: 20969:0:(service.c:1372:ptlrpc_at_send_early_reply()) @@@ Couldn't add any time (5/5), not sending early reply req@ffff9d20ada51680 x1672583873062336/t0(0) o36->966ed71b-2e25-4@10.49.27.15@o2ib1:705/0 lens 552/2888 e 3 to 0 dl 1596902715 ref 2 fl Interpret:/0/0 rc 0/0 [ 1257.251069] Lustre: fir-MDT0003: Client 966ed71b-2e25-4 (at 10.49.27.15@o2ib1) reconnecting [ 1316.175412] LNet: Service thread pid 20980 was inactive for 535.99s. The thread might be hung, or it might only be slow and will resume later. 
Dumping the stack trace for debugging purposes: [ 1316.192343] Pid: 20980, comm: mdt03_028 3.10.0-957.27.2.el7_lustre.pl2.x86_64 #1 SMP Thu Nov 7 15:26:16 PST 2019 [ 1316.202517] Call Trace: [ 1316.204989] [] ldlm_completion_ast+0x4e5/0x860 [ptlrpc] [ 1316.211944] [] ldlm_cli_enqueue_fini+0x96f/0xdf0 [ptlrpc] [ 1316.219059] [] ldlm_cli_enqueue+0x40e/0x920 [ptlrpc] [ 1316.225742] [] osp_md_object_lock+0x162/0x2d0 [osp] [ 1316.232312] [] lod_object_lock+0xf4/0x780 [lod] [ 1316.238535] [] mdd_object_lock+0x3e/0xe0 [mdd] [ 1316.244670] [] mdt_remote_object_lock_try+0x1e1/0x750 [mdt] [ 1316.251944] [] mdt_remote_object_lock+0x2a/0x30 [mdt] [ 1316.258685] [] mdt_rename_lock+0xbe/0x4b0 [mdt] [ 1316.264916] [] mdt_reint_rename+0x2c5/0x2b60 [mdt] [ 1316.271405] [] mdt_reint_rec+0x83/0x210 [mdt] [ 1316.277458] [] mdt_reint_internal+0x6e3/0xaf0 [mdt] [ 1316.284025] [] mdt_reint+0x67/0x140 [mdt] [ 1316.289728] [] tgt_request_handle+0xada/0x1570 [ptlrpc] [ 1316.296688] [] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc] [ 1316.304422] [] ptlrpc_main+0xb34/0x1470 [ptlrpc] [ 1316.310757] [] kthread+0xd1/0xe0 [ 1316.315670] [] ret_from_fork_nospec_begin+0xe/0x21 [ 1316.322145] [] 0xffffffffffffffff [ 1316.327173] LustreError: dumping log to /tmp/lustre-log.1596902775.20980 [ 1320.271525] Pid: 21026, comm: mdt02_048 3.10.0-957.27.2.el7_lustre.pl2.x86_64 #1 SMP Thu Nov 7 15:26:16 PST 2019 [ 1320.281706] Call Trace: [ 1320.284175] [] ldlm_completion_ast+0x4e5/0x860 [ptlrpc] [ 1320.291134] [] ldlm_cli_enqueue_fini+0x96f/0xdf0 [ptlrpc] [ 1320.298247] [] ldlm_cli_enqueue+0x40e/0x920 [ptlrpc] [ 1320.304931] [] osp_md_object_lock+0x162/0x2d0 [osp] [ 1320.311501] [] lod_object_lock+0xf4/0x780 [lod] [ 1320.317721] [] mdd_object_lock+0x3e/0xe0 [mdd] [ 1320.323859] [] mdt_remote_object_lock_try+0x1e1/0x750 [mdt] [ 1320.331122] [] mdt_remote_object_lock+0x2a/0x30 [mdt] [ 1320.337863] [] mdt_rename_lock+0xbe/0x4b0 [mdt] [ 1320.344086] [] mdt_reint_rename+0x2c5/0x2b60 [mdt] [ 1320.350573] [] mdt_reint_rec+0x83/0x210 [mdt] [ 1320.356630] [] mdt_reint_internal+0x6e3/0xaf0 [mdt] [ 1320.363197] [] mdt_reint+0x67/0x140 [mdt] [ 1320.368900] [] tgt_request_handle+0xada/0x1570 [ptlrpc] [ 1320.375869] [] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc] [ 1320.383591] [] ptlrpc_main+0xb34/0x1470 [ptlrpc] [ 1320.389928] [] kthread+0xd1/0xe0 [ 1320.394842] [] ret_from_fork_nospec_begin+0xe/0x21 [ 1320.401317] [] 0xffffffffffffffff [ 1320.406346] LustreError: dumping log to /tmp/lustre-log.1596902779.21026 [ 1328.975792] Lustre: 21205:0:(service.c:1372:ptlrpc_at_send_early_reply()) @@@ Couldn't add any time (5/5), not sending early reply req@ffff9d103d46a400 x1672322974962048/t0(0) o36->a17882ae-6de9-4@10.50.6.22@o2ib2:27/0 lens 520/2888 e 2 to 0 dl 1596902792 ref 2 fl Interpret:/0/0 rc 0/0 [ 1329.002618] Lustre: 21205:0:(service.c:1372:ptlrpc_at_send_early_reply()) Skipped 1 previous similar message [ 1334.726811] Lustre: fir-MDT0003: Client a17882ae-6de9-4 (at 10.50.6.22@o2ib2) reconnecting [ 1334.735086] Lustre: Skipped 1 previous similar message [ 1388.409766] Lustre: fir-MDT0003: Connection restored to 0c7d624a-f523-4 (at 10.50.10.36@o2ib2) [ 1388.418385] Lustre: Skipped 21 previous similar messages [ 1416.530215] Lustre: 21030:0:(service.c:1372:ptlrpc_at_send_early_reply()) @@@ Couldn't add any time (5/5), not sending early reply req@ffff9d0048851200 x1672352838630784/t0(0) o36->f2b7c93c-815b-4@10.50.0.61@o2ib2:115/0 lens 536/2888 e 1 to 0 dl 1596902880 ref 2 fl Interpret:/0/0 rc 0/0 [ 1416.557138] Lustre: 
21030:0:(service.c:1372:ptlrpc_at_send_early_reply()) Skipped 3 previous similar messages [ 1422.792005] Lustre: fir-MDT0003: Client f2b7c93c-815b-4 (at 10.50.0.61@o2ib2) reconnecting [ 1422.800275] Lustre: Skipped 3 previous similar messages [ 1455.443286] LNet: Service thread pid 20777 was inactive for 633.70s. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes: [ 1455.460226] LNet: Skipped 1 previous similar message [ 1455.465200] Pid: 20777, comm: mdt01_004 3.10.0-957.27.2.el7_lustre.pl2.x86_64 #1 SMP Thu Nov 7 15:26:16 PST 2019 [ 1455.475372] Call Trace: [ 1455.477845] [] ldlm_completion_ast+0x4e5/0x860 [ptlrpc] [ 1455.484789] [] ldlm_cli_enqueue_fini+0x96f/0xdf0 [ptlrpc] [ 1455.491907] [] ldlm_cli_enqueue+0x40e/0x920 [ptlrpc] [ 1455.498579] [] osp_md_object_lock+0x162/0x2d0 [osp] [ 1455.505151] [] lod_object_lock+0xf4/0x780 [lod] [ 1455.511373] [] mdd_object_lock+0x3e/0xe0 [mdd] [ 1455.517507] [] mdt_remote_object_lock_try+0x1e1/0x750 [mdt] [ 1455.524781] [] mdt_remote_object_lock+0x2a/0x30 [mdt] [ 1455.531524] [] mdt_rename_lock+0xbe/0x4b0 [mdt] [ 1455.537747] [] mdt_reint_rename+0x2c5/0x2b60 [mdt] [ 1455.544229] [] mdt_reint_rec+0x83/0x210 [mdt] [ 1455.550279] [] mdt_reint_internal+0x6e3/0xaf0 [mdt] [ 1455.556847] [] mdt_reint+0x67/0x140 [mdt] [ 1455.562559] [] tgt_request_handle+0xada/0x1570 [ptlrpc] [ 1455.569519] [] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc] [ 1455.577243] [] ptlrpc_main+0xb34/0x1470 [ptlrpc] [ 1455.583575] [] kthread+0xd1/0xe0 [ 1455.588491] [] ret_from_fork_nospec_begin+0xe/0x21 [ 1455.594964] [] 0xffffffffffffffff [ 1455.599984] LustreError: dumping log to /tmp/lustre-log.1596902914.20777 [ 1506.168698] LNetError: 20140:0:(o2iblnd_cb.c:3351:kiblnd_check_txs_locked()) Timed out tx: tx_queue, 0 seconds [ 1506.178695] LNetError: 20140:0:(o2iblnd_cb.c:3351:kiblnd_check_txs_locked()) Skipped 14 previous similar messages [ 1506.188950] LNetError: 20140:0:(o2iblnd_cb.c:3426:kiblnd_check_conns()) Timed out RDMA with 10.0.10.3@o2ib7 (11): c: 0, oc: 0, rc: 8 [ 1506.200846] LNetError: 20140:0:(o2iblnd_cb.c:3426:kiblnd_check_conns()) Skipped 14 previous similar messages [ 1506.724721] Lustre: 21169:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has timed out for sent delay: [sent 1596902958/real 0] req@ffff9d1034815580 x1674472635018880/t0(0) o104->fir-MDT0003@10.0.10.3@o2ib7:15/16 lens 296/224 e 0 to 1 dl 1596902965 ref 2 fl Rpc:X/0/ffffffff rc 0/-1 [ 1507.168730] LNet: 20140:0:(o2iblnd_cb.c:3397:kiblnd_check_conns()) Timed out tx for 10.0.10.3@o2ib7: 2 seconds [ 1508.054757] Lustre: 20894:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has timed out for sent delay: [sent 1596902959/real 0] req@ffff9d10feb85e80 x1674472635057216/t0(0) o104->fir-MDT0003@10.0.10.3@o2ib7:15/16 lens 296/224 e 0 to 1 dl 1596902966 ref 2 fl Rpc:X/0/ffffffff rc 0/-1 [ 1508.081142] Lustre: 20894:0:(client.c:2133:ptlrpc_expire_one_request()) Skipped 5 previous similar messages [ 1508.168752] LNet: 20140:0:(o2iblnd_cb.c:3397:kiblnd_check_conns()) Timed out tx for 10.0.10.3@o2ib7: 2 seconds [ 1508.178751] LNet: 20140:0:(o2iblnd_cb.c:3397:kiblnd_check_conns()) Skipped 4 previous similar messages [ 1510.017815] Lustre: 21230:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has timed out for sent delay: [sent 1596902961/real 0] req@ffff9cffefa7de80 x1674472635074752/t0(0) o104->fir-MDT0003@10.0.10.3@o2ib7:15/16 lens 296/224 e 0 to 1 dl 1596902968 ref 2 fl Rpc:X/0/ffffffff rc 0/-1 
[ 1510.044205] Lustre: 21230:0:(client.c:2133:ptlrpc_expire_one_request()) Skipped 1 previous similar message [ 1511.168842] LNet: 20140:0:(o2iblnd_cb.c:3397:kiblnd_check_conns()) Timed out tx for 10.0.10.3@o2ib7: 5 seconds [ 1511.178844] LNet: 20140:0:(o2iblnd_cb.c:3397:kiblnd_check_conns()) Skipped 5 previous similar messages [ 1513.109897] Lustre: 21222:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has timed out for sent delay: [sent 1596902964/real 0] req@ffff9d0000155a00 x1674472635112576/t0(0) o104->fir-MDT0003@10.0.10.3@o2ib7:15/16 lens 296/224 e 0 to 1 dl 1596902971 ref 2 fl Rpc:X/0/ffffffff rc 0/-1 [ 1513.136282] Lustre: 21222:0:(client.c:2133:ptlrpc_expire_one_request()) Skipped 1 previous similar message [ 1514.168924] LNet: 20140:0:(o2iblnd_cb.c:3397:kiblnd_check_conns()) Timed out tx for 10.0.10.3@o2ib7: 2 seconds [ 1514.178917] LNet: 20140:0:(o2iblnd_cb.c:3397:kiblnd_check_conns()) Skipped 15 previous similar messages [ 1518.124032] Lustre: 21129:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has timed out for sent delay: [sent 1596902970/real 0] req@ffff9cffe8318480 x1674472635164480/t0(0) o104->fir-MDT0003@10.0.10.3@o2ib7:15/16 lens 296/224 e 0 to 1 dl 1596902977 ref 2 fl Rpc:X/0/ffffffff rc 0/-1 [ 1518.150422] Lustre: 21129:0:(client.c:2133:ptlrpc_expire_one_request()) Skipped 10 previous similar messages [ 1519.169063] LNet: 20140:0:(o2iblnd_cb.c:3397:kiblnd_check_conns()) Timed out tx for 10.0.10.3@o2ib7: 0 seconds [ 1519.179066] LNet: 20140:0:(o2iblnd_cb.c:3397:kiblnd_check_conns()) Skipped 18 previous similar messages [ 1526.188256] Lustre: 21171:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1596902978/real 1596902978] req@ffff9d0f9b239b00 x1674472635126272/t0(0) o104->fir-MDT0003@10.0.10.3@o2ib7:15/16 lens 296/224 e 0 to 1 dl 1596902985 ref 2 fl Rpc:X/2/ffffffff rc 0/-1 [ 1526.215423] Lustre: 21171:0:(client.c:2133:ptlrpc_expire_one_request()) Skipped 17 previous similar messages [ 1527.125276] LNet: Service thread pid 21114 was inactive for 685.96s. The thread might be hung, or it might only be slow and will resume later. 
Dumping the stack trace for debugging purposes: [ 1527.142219] Pid: 21114, comm: mdt03_062 3.10.0-957.27.2.el7_lustre.pl2.x86_64 #1 SMP Thu Nov 7 15:26:16 PST 2019 [ 1527.152396] Call Trace: [ 1527.154865] [] ldlm_completion_ast+0x4e5/0x860 [ptlrpc] [ 1527.161822] [] ldlm_cli_enqueue_fini+0x96f/0xdf0 [ptlrpc] [ 1527.168936] [] ldlm_cli_enqueue+0x40e/0x920 [ptlrpc] [ 1527.175618] [] osp_md_object_lock+0x162/0x2d0 [osp] [ 1527.182187] [] lod_object_lock+0xf4/0x780 [lod] [ 1527.188410] [] mdd_object_lock+0x3e/0xe0 [mdd] [ 1527.194547] [] mdt_remote_object_lock_try+0x1e1/0x750 [mdt] [ 1527.201818] [] mdt_remote_object_lock+0x2a/0x30 [mdt] [ 1527.208563] [] mdt_rename_lock+0xbe/0x4b0 [mdt] [ 1527.214784] [] mdt_reint_rename+0x2c5/0x2b60 [mdt] [ 1527.221269] [] mdt_reint_rec+0x83/0x210 [mdt] [ 1527.227316] [] mdt_reint_internal+0x6e3/0xaf0 [mdt] [ 1527.233887] [] mdt_reint+0x67/0x140 [mdt] [ 1527.239590] [] tgt_request_handle+0xada/0x1570 [ptlrpc] [ 1527.246549] [] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc] [ 1527.254274] [] ptlrpc_main+0xb34/0x1470 [ptlrpc] [ 1527.260608] [] kthread+0xd1/0xe0 [ 1527.265521] [] ret_from_fork_nospec_begin+0xe/0x21 [ 1527.271994] [] 0xffffffffffffffff [ 1527.277024] LustreError: dumping log to /tmp/lustre-log.1596902986.21114 [ 1528.169309] LNet: 20140:0:(o2iblnd_cb.c:3397:kiblnd_check_conns()) Timed out tx for 10.0.10.3@o2ib7: 0 seconds [ 1528.179304] LNet: 20140:0:(o2iblnd_cb.c:3397:kiblnd_check_conns()) Skipped 30 previous similar messages [ 1543.169723] Lustre: 21171:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1596902995/real 1596902995] req@ffff9d0f9b239b00 x1674472635126272/t0(0) o104->fir-MDT0003@10.0.10.3@o2ib7:15/16 lens 296/224 e 0 to 1 dl 1596903002 ref 2 fl Rpc:X/2/ffffffff rc 0/-1 [ 1543.169725] Lustre: 21169:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1596902995/real 1596902995] req@ffff9d1034810900 x1674472635018944/t0(0) o104->fir-MDT0003@10.0.10.3@o2ib7:15/16 lens 296/224 e 0 to 1 dl 1596903002 ref 2 fl Rpc:X/2/ffffffff rc 0/-1 [ 1543.169730] Lustre: 21169:0:(client.c:2133:ptlrpc_expire_one_request()) Skipped 35 previous similar messages [ 1545.169769] LNet: 20140:0:(o2iblnd_cb.c:3397:kiblnd_check_conns()) Timed out tx for 10.0.10.3@o2ib7: 3 seconds [ 1545.179765] LNet: 20140:0:(o2iblnd_cb.c:3397:kiblnd_check_conns()) Skipped 67 previous similar messages [ 1553.366011] Lustre: 21036:0:(service.c:1372:ptlrpc_at_send_early_reply()) @@@ Couldn't add any time (5/5), not sending early reply req@ffff9d10f6e89200 x1672340512903040/t0(0) o36->b0c2b6ec-d912-4@10.50.0.62@o2ib2:252/0 lens 536/2888 e 1 to 0 dl 1596903017 ref 2 fl Interpret:/0/0 rc 0/0 [ 1553.392917] Lustre: 21036:0:(service.c:1372:ptlrpc_at_send_early_reply()) Skipped 4 previous similar messages [ 1559.304507] Lustre: fir-MDT0003: Client b0c2b6ec-d912-4 (at 10.50.0.62@o2ib2) reconnecting [ 1559.312786] Lustre: Skipped 4 previous similar messages [ 1578.170685] LNet: 20140:0:(o2iblnd_cb.c:3397:kiblnd_check_conns()) Timed out tx for 10.0.10.3@o2ib7: 1 seconds [ 1578.170703] Lustre: 20989:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1596903030/real 1596903030] req@ffff9cfff3a24800 x1674472635143936/t0(0) o104->fir-MDT0003@10.0.10.3@o2ib7:15/16 lens 296/224 e 0 to 1 dl 1596903037 ref 2 fl Rpc:X/2/ffffffff rc 0/-1 [ 1578.170706] Lustre: 20989:0:(client.c:2133:ptlrpc_expire_one_request()) Skipped 62 previous similar messages [ 
1578.217659] LNet: 20140:0:(o2iblnd_cb.c:3397:kiblnd_check_conns()) Skipped 120 previous similar messages [ 1582.316801] Lustre: fir-MDT0000-osp-MDT0003: Connection to fir-MDT0000 (at 10.0.10.51@o2ib7) was lost; in progress operations using this service will wait for recovery to complete [ 1582.332805] LustreError: 20926:0:(ldlm_request.c:148:ldlm_expired_completion_wait()) ### lock timed out (enqueued at 1596902741, 300s ago), entering recovery for fir-MDT0000_UUID@10.0.10.51@o2ib7 ns: fir-MDT0000-osp-MDT0003 lock: ffff9d001c823f00/0xfdb7f5663191990d lrc: 4/0,1 mode: --/EX res: [0x200000004:0x1:0x0].0x0 bits 0x2/0x0 rrc: 38 type: IBT flags: 0x1000001000000 nid: local remote: 0x8a212c21b5fa04f4 expref: -99 pid: 20926 timeout: 0 lvb_type: 0 [ 1582.372883] LustreError: 20926:0:(ldlm_request.c:148:ldlm_expired_completion_wait()) Skipped 4 previous similar messages [ 1601.367421] LustreError: 21169:0:(ldlm_lockd.c:681:ldlm_handle_ast_error()) ### client (nid 10.0.10.3@o2ib7) failed to reply to blocking AST (req@ffff9d1034810900 x1674472635018944 status 0 rc -110), evict it ns: mdt-fir-MDT0003_UUID lock: ffff9cefa798de80/0xfdb7f5662a722099 lrc: 4/0,0 mode: PR/PR res: [0x280041ddc:0x1fa3:0x0].0x0 bits 0x12/0x0 rrc: 7 type: IBT flags: 0x60200400000020 nid: 10.0.10.3@o2ib7 remote: 0x884867ae361b3361 expref: 362626 pid: 21137 timeout: 1675 lvb_type: 0 [ 1601.410109] LustreError: 138-a: fir-MDT0003: A client on nid 10.0.10.3@o2ib7 was evicted due to a lock blocking callback time out: rc -110 [ 1601.413371] LustreError: 20458:0:(ldlm_lockd.c:256:expired_lock_main()) ### lock callback timer expired after 101s: evicting client at 10.0.10.3@o2ib7 ns: mdt-fir-MDT0003_UUID lock: ffff9cef9d133cc0/0xfdb7f5662eaa7f9a lrc: 3/0,0 mode: PR/PR res: [0x280041f24:0x197e:0x0].0x0 bits 0x1b/0x0 rrc: 7 type: IBT flags: 0x60200400000020 nid: 10.0.10.3@o2ib7 remote: 0x884867ae36613de1 expref: 362627 pid: 21177 timeout: 0 lvb_type: 0 [ 1601.460035] LustreError: Skipped 1 previous similar message [ 1602.903369] LNet: Service thread pid 20900 was inactive for 732.19s. The thread might be hung, or it might only be slow and will resume later. 
Dumping the stack trace for debugging purposes: [ 1602.920306] Pid: 20900, comm: mdt01_011 3.10.0-957.27.2.el7_lustre.pl2.x86_64 #1 SMP Thu Nov 7 15:26:16 PST 2019 [ 1602.930480] Call Trace: [ 1602.932953] [] ldlm_completion_ast+0x4e5/0x860 [ptlrpc] [ 1602.939899] [] ldlm_cli_enqueue_fini+0x96f/0xdf0 [ptlrpc] [ 1602.947006] [] ldlm_cli_enqueue+0x40e/0x920 [ptlrpc] [ 1602.953687] [] osp_md_object_lock+0x162/0x2d0 [osp] [ 1602.960257] [] lod_object_lock+0xf4/0x780 [lod] [ 1602.966480] [] mdd_object_lock+0x3e/0xe0 [mdd] [ 1602.972617] [] mdt_remote_object_lock_try+0x1e1/0x750 [mdt] [ 1602.979891] [] mdt_remote_object_lock+0x2a/0x30 [mdt] [ 1602.986655] [] mdt_rename_lock+0xbe/0x4b0 [mdt] [ 1602.992880] [] mdt_reint_rename+0x2c5/0x2b60 [mdt] [ 1602.999362] [] mdt_reint_rec+0x83/0x210 [mdt] [ 1603.005411] [] mdt_reint_internal+0x6e3/0xaf0 [mdt] [ 1603.011980] [] mdt_reint+0x67/0x140 [mdt] [ 1603.017684] [] tgt_request_handle+0xada/0x1570 [ptlrpc] [ 1603.024637] [] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc] [ 1603.032367] [] ptlrpc_main+0xb34/0x1470 [ptlrpc] [ 1603.038701] [] kthread+0xd1/0xe0 [ 1603.043616] [] ret_from_fork_nospec_begin+0xe/0x21 [ 1603.050089] [] 0xffffffffffffffff [ 1603.055110] LustreError: dumping log to /tmp/lustre-log.1596903061.20900 [ 1606.999478] Pid: 20952, comm: mdt01_025 3.10.0-957.27.2.el7_lustre.pl2.x86_64 #1 SMP Thu Nov 7 15:26:16 PST 2019 [ 1607.009650] Call Trace: [ 1607.012127] [] ldlm_completion_ast+0x4e5/0x860 [ptlrpc] [ 1607.019068] [] ldlm_cli_enqueue_fini+0x96f/0xdf0 [ptlrpc] [ 1607.026175] [] ldlm_cli_enqueue+0x40e/0x920 [ptlrpc] [ 1607.032848] [] osp_md_object_lock+0x162/0x2d0 [osp] [ 1607.039418] [] lod_object_lock+0xf4/0x780 [lod] [ 1607.045645] [] mdd_object_lock+0x3e/0xe0 [mdd] [ 1607.051790] [] mdt_remote_object_lock_try+0x1e1/0x750 [mdt] [ 1607.059069] [] mdt_remote_object_lock+0x2a/0x30 [mdt] [ 1607.065819] [] mdt_rename_lock+0xbe/0x4b0 [mdt] [ 1607.072040] [] mdt_reint_rename+0x2c5/0x2b60 [mdt] [ 1607.078523] [] mdt_reint_rec+0x83/0x210 [mdt] [ 1607.084572] [] mdt_reint_internal+0x6e3/0xaf0 [mdt] [ 1607.091144] [] mdt_reint+0x67/0x140 [mdt] [ 1607.096855] [] tgt_request_handle+0xada/0x1570 [ptlrpc] [ 1607.103825] [] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc] [ 1607.111544] [] ptlrpc_main+0xb34/0x1470 [ptlrpc] [ 1607.117881] [] kthread+0xd1/0xe0 [ 1607.122796] [] ret_from_fork_nospec_begin+0xe/0x21 [ 1607.129271] [] 0xffffffffffffffff [ 1607.134299] LustreError: dumping log to /tmp/lustre-log.1596903066.20952 [ 1642.488586] LustreError: 11-0: fir-MDT0000-osp-MDT0003: operation mds_statfs to node 10.0.10.51@o2ib7 failed: rc = -107 [ 1642.499370] Lustre: fir-MDT0000-osp-MDT0003: Connection to fir-MDT0000 (at 10.0.10.51@o2ib7) was lost; in progress operations using this service will wait for recovery to complete [ 1643.172490] Lustre: 20815:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1596903095/real 1596903095] req@ffff9cffe93c8480 x1674472636416704/t0(0) o104->fir-MDT0003@10.0.10.3@o2ib7:15/16 lens 296/224 e 0 to 1 dl 1596903102 ref 2 fl Rpc:X/2/ffffffff rc 0/-1 [ 1643.172493] LNet: 20140:0:(o2iblnd_cb.c:3397:kiblnd_check_conns()) Timed out tx for 10.0.10.3@o2ib7: 0 seconds [ 1643.172499] LNet: 20140:0:(o2iblnd_cb.c:3397:kiblnd_check_conns()) Skipped 268 previous similar messages [ 1643.219143] Lustre: 20815:0:(client.c:2133:ptlrpc_expire_one_request()) Skipped 100 previous similar messages [ 1657.561153] Lustre: fir-MDT0000-lwp-MDT0003: Connection to fir-MDT0000 (at 
10.0.10.51@o2ib7) was lost; in progress operations using this service will wait for recovery to complete [ 1712.506047] LustreError: 137-5: fir-MDT0000_UUID: not available for connect from 10.0.10.115@o2ib7 (no target). If you are running an HA pair check that the target is mounted on the other server. [ 1712.523417] LustreError: Skipped 4 previous similar messages [ 1713.126767] LustreError: 137-5: fir-MDT0000_UUID: not available for connect from 10.50.8.25@o2ib2 (no target). If you are running an HA pair check that the target is mounted on the other server. [ 1713.144050] LustreError: Skipped 1 previous similar message [ 1714.134421] LustreError: 137-5: fir-MDT0000_UUID: not available for connect from 10.50.3.26@o2ib2 (no target). If you are running an HA pair check that the target is mounted on the other server. [ 1714.151707] LustreError: Skipped 83 previous similar messages [ 1716.152441] LustreError: 137-5: fir-MDT0000_UUID: not available for connect from 10.51.1.10@o2ib3 (no target). If you are running an HA pair check that the target is mounted on the other server. [ 1716.169757] LustreError: Skipped 165 previous similar messages [ 1720.204298] LustreError: 137-5: fir-MDT0000_UUID: not available for connect from 10.49.27.8@o2ib1 (no target). If you are running an HA pair check that the target is mounted on the other server. [ 1720.221577] LustreError: Skipped 213 previous similar messages [ 1728.175882] LustreError: 21118:0:(ldlm_lockd.c:681:ldlm_handle_ast_error()) ### client (nid 10.0.10.3@o2ib7) failed to reply to blocking AST (req@ffff9cffe93cad00 x1674472636468032 status 0 rc -110), evict it ns: mdt-fir-MDT0003_UUID lock: ffff9d0fe5398fc0/0xfdb7f5662f406cbd lrc: 4/0,0 mode: PR/PR res: [0x280041f24:0x1991:0x0].0x0 bits 0x12/0x0 rrc: 5 type: IBT flags: 0x60200400000020 nid: 10.0.10.3@o2ib7 remote: 0x884867ae366affec expref: 67523 pid: 20861 timeout: 1799 lvb_type: 0 [ 1728.218473] LustreError: 21118:0:(ldlm_lockd.c:681:ldlm_handle_ast_error()) Skipped 1 previous similar message [ 1728.228482] LustreError: 138-a: fir-MDT0003: A client on nid 10.0.10.3@o2ib7 was evicted due to a lock blocking callback time out: rc -110 [ 1728.240940] LustreError: 20458:0:(ldlm_lockd.c:256:expired_lock_main()) ### lock callback timer expired after 116s: evicting client at 10.0.10.3@o2ib7 ns: mdt-fir-MDT0003_UUID lock: ffff9d0fe5398fc0/0xfdb7f5662f406cbd lrc: 3/0,0 mode: PR/PR res: [0x280041f24:0x1991:0x0].0x0 bits 0x12/0x0 rrc: 5 type: IBT flags: 0x60200400000020 nid: 10.0.10.3@o2ib7 remote: 0x884867ae366affec expref: 67449 pid: 20861 timeout: 0 lvb_type: 0 [ 1728.248159] LustreError: 137-5: fir-MDT0000_UUID: not available for connect from 10.50.4.2@o2ib2 (no target). If you are running an HA pair check that the target is mounted on the other server. 
[ 1728.248161] LustreError: Skipped 225 previous similar messages [ 1730.174941] LustreError: 20990:0:(ldlm_lockd.c:681:ldlm_handle_ast_error()) ### client (nid 10.0.10.3@o2ib7) failed to reply to blocking AST (req@ffff9d0f97eea880 x1674472636481664 status 0 rc -110), evict it ns: mdt-fir-MDT0003_UUID lock: ffff9d001cb74ec0/0xfdb7f5662f00e961 lrc: 4/0,0 mode: PR/PR res: [0x2800422d3:0xb2:0x0].0x0 bits 0x12/0x0 rrc: 6 type: IBT flags: 0x60200400000020 nid: 10.0.10.3@o2ib7 remote: 0x884867ae36662f21 expref: 65230 pid: 21213 timeout: 1803 lvb_type: 0 [ 1730.217364] LustreError: 138-a: fir-MDT0003: A client on nid 10.0.10.3@o2ib7 was evicted due to a lock blocking callback time out: rc -110 [ 1730.229815] LustreError: 20458:0:(ldlm_lockd.c:256:expired_lock_main()) ### lock callback timer expired after 117s: evicting client at 10.0.10.3@o2ib7 ns: mdt-fir-MDT0003_UUID lock: ffff9d001cb74ec0/0xfdb7f5662f00e961 lrc: 3/0,0 mode: PR/PR res: [0x2800422d3:0xb2:0x0].0x0 bits 0x12/0x0 rrc: 6 type: IBT flags: 0x60200400000020 nid: 10.0.10.3@o2ib7 remote: 0x884867ae36662f21 expref: 65168 pid: 21213 timeout: 0 lvb_type: 0 [ 1733.175045] LustreError: 20990:0:(ldlm_lockd.c:681:ldlm_handle_ast_error()) ### client (nid 10.0.10.3@o2ib7) failed to reply to blocking AST (req@ffff9d0f97eea400 x1674472636481728 status 0 rc -110), evict it ns: mdt-fir-MDT0003_UUID lock: ffff9ce0f78b6300/0xfdb7f5662f00e9d8 lrc: 4/0,0 mode: PR/PR res: [0x2800422d3:0xb2:0x0].0x0 bits 0x1b/0x0 rrc: 6 type: IBT flags: 0x60200400000020 nid: 10.0.10.3@o2ib7 remote: 0x884867ae36662f6e expref: 61847 pid: 21303 timeout: 1803 lvb_type: 0 [ 1733.217475] LustreError: 138-a: fir-MDT0003: A client on nid 10.0.10.3@o2ib7 was evicted due to a lock blocking callback time out: rc -110 [ 1733.229921] LustreError: 20458:0:(ldlm_lockd.c:256:expired_lock_main()) ### lock callback timer expired after 120s: evicting client at 10.0.10.3@o2ib7 ns: mdt-fir-MDT0003_UUID lock: ffff9ce0f78b6300/0xfdb7f5662f00e9d8 lrc: 3/0,0 mode: PR/PR res: [0x2800422d3:0xb2:0x0].0x0 bits 0x1b/0x0 rrc: 6 type: IBT flags: 0x60200400000020 nid: 10.0.10.3@o2ib7 remote: 0x884867ae36662f6e expref: 61784 pid: 21303 timeout: 0 lvb_type: 0 [ 1748.185623] LustreError: 137-5: fir-MDT0000_UUID: not available for connect from 10.50.1.40@o2ib2 (no target). If you are running an HA pair check that the target is mounted on the other server. [ 1748.202903] LustreError: Skipped 762 previous similar messages [ 1757.915802] LustreError: 167-0: fir-MDT0000-lwp-MDT0003: This client was evicted by fir-MDT0000; in progress operations using this service will fail. [ 1758.069072] LustreError: 11-0: fir-MDT0000-lwp-MDT0003: operation quota_acquire to node 10.0.10.51@o2ib7 failed: rc = -11 [ 1758.080030] LustreError: Skipped 26 previous similar messages [ 1766.747881] LNet: Service thread pid 21130 was inactive for 834.23s. The thread might be hung, or it might only be slow and will resume later. 
Dumping the stack trace for debugging purposes: [ 1766.764818] LNet: Skipped 1 previous similar message [ 1766.769796] Pid: 21130, comm: mdt01_067 3.10.0-957.27.2.el7_lustre.pl2.x86_64 #1 SMP Thu Nov 7 15:26:16 PST 2019 [ 1766.779967] Call Trace: [ 1766.782439] [] ldlm_completion_ast+0x4e5/0x860 [ptlrpc] [ 1766.789386] [] ldlm_cli_enqueue_fini+0x96f/0xdf0 [ptlrpc] [ 1766.796492] [] ldlm_cli_enqueue+0x40e/0x920 [ptlrpc] [ 1766.803166] [] osp_md_object_lock+0x162/0x2d0 [osp] [ 1766.809747] [] lod_object_lock+0xf4/0x780 [lod] [ 1766.815967] [] mdd_object_lock+0x3e/0xe0 [mdd] [ 1766.822106] [] mdt_remote_object_lock_try+0x1e1/0x750 [mdt] [ 1766.829384] [] mdt_remote_object_lock+0x2a/0x30 [mdt] [ 1766.836127] [] mdt_rename_lock+0xbe/0x4b0 [mdt] [ 1766.842351] [] mdt_reint_rename+0x2c5/0x2b60 [mdt] [ 1766.848834] [] mdt_reint_rec+0x83/0x210 [mdt] [ 1766.854890] [] mdt_reint_internal+0x6e3/0xaf0 [mdt] [ 1766.861460] [] mdt_reint+0x67/0x140 [mdt] [ 1766.867165] [] tgt_request_handle+0xada/0x1570 [ptlrpc] [ 1766.874124] [] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc] [ 1766.881845] [] ptlrpc_main+0xb34/0x1470 [ptlrpc] [ 1766.888172] [] kthread+0xd1/0xe0 [ 1766.893102] [] ret_from_fork_nospec_begin+0xe/0x21 [ 1766.899579] [] 0xffffffffffffffff [ 1766.904596] LustreError: dumping log to /tmp/lustre-log.1596903225.21130 [ 1771.177014] Lustre: 21245:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1596903223/real 1596903223] req@ffff9d20f13f0000 x1674472637958208/t0(0) o104->fir-MDT0003@10.0.10.3@o2ib7:15/16 lens 296/224 e 0 to 1 dl 1596903230 ref 2 fl Rpc:X/2/ffffffff rc 0/-1 [ 1771.177037] LNet: 20140:0:(o2iblnd_cb.c:3397:kiblnd_check_conns()) Timed out tx for 10.0.10.3@o2ib7: 2 seconds [ 1771.177040] LNet: 20140:0:(o2iblnd_cb.c:3397:kiblnd_check_conns()) Skipped 540 previous similar messages [ 1771.223653] Lustre: 21245:0:(client.c:2133:ptlrpc_expire_one_request()) Skipped 197 previous similar messages [ 1812.860795] LustreError: 137-5: fir-MDT0000_UUID: not available for connect from 10.0.10.115@o2ib7 (no target). If you are running an HA pair check that the target is mounted on the other server. 
[ 1812.878181] LustreError: Skipped 7 previous similar messages [ 1818.110461] LustreError: 11-0: fir-MDT0000-lwp-MDT0003: operation quota_acquire to node 10.0.10.51@o2ib7 failed: rc = -11 [ 1818.121424] LustreError: Skipped 144 previous similar messages [ 1834.333761] Lustre: 20472:0:(service.c:1372:ptlrpc_at_send_early_reply()) @@@ Couldn't add any time (5/-138), not sending early reply req@ffff9d20ef3d2880 x1672413935688960/t0(0) o36->cb2b9d6c-6883-4@10.49.27.19@o2ib1:533/0 lens 536/2888 e 0 to 0 dl 1596903298 ref 2 fl Interpret:/0/0 rc 0/0 [ 1834.361019] Lustre: 20472:0:(service.c:1372:ptlrpc_at_send_early_reply()) Skipped 2 previous similar messages [ 1840.400880] Lustre: fir-MDT0003: Client cb2b9d6c-6883-4 (at 10.49.27.19@o2ib1) reconnecting [ 1840.409244] Lustre: Skipped 16 previous similar messages [ 1844.574034] Pid: 20762, comm: mdt02_003 3.10.0-957.27.2.el7_lustre.pl2.x86_64 #1 SMP Thu Nov 7 15:26:16 PST 2019 [ 1844.584209] Call Trace: [ 1844.586683] [] ldlm_completion_ast+0x4e5/0x860 [ptlrpc] [ 1844.593641] [] ldlm_cli_enqueue_fini+0x96f/0xdf0 [ptlrpc] [ 1844.600801] [] ldlm_cli_enqueue+0x40e/0x920 [ptlrpc] [ 1844.607491] [] osp_md_object_lock+0x162/0x2d0 [osp] [ 1844.614075] [] lod_object_lock+0xf4/0x780 [lod] [ 1844.620301] [] mdd_object_lock+0x3e/0xe0 [mdd] [ 1844.626436] [] mdt_remote_object_lock_try+0x1e1/0x750 [mdt] [ 1844.633699] [] mdt_remote_object_lock+0x2a/0x30 [mdt] [ 1844.640443] [] mdt_rename_lock+0xbe/0x4b0 [mdt] [ 1844.646664] [] mdt_reint_rename+0x2c5/0x2b60 [mdt] [ 1844.653148] [] mdt_reint_rec+0x83/0x210 [mdt] [ 1844.659199] [] mdt_reint_internal+0x6e3/0xaf0 [mdt] [ 1844.665791] [] mdt_reint+0x67/0x140 [mdt] [ 1844.671496] [] tgt_request_handle+0xada/0x1570 [ptlrpc] [ 1844.678497] [] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc] [ 1844.686230] [] ptlrpc_main+0xb34/0x1470 [ptlrpc] [ 1844.692566] [] kthread+0xd1/0xe0 [ 1844.697487] [] ret_from_fork_nospec_begin+0xe/0x21 [ 1844.703961] [] 0xffffffffffffffff [ 1844.708993] LustreError: dumping log to /tmp/lustre-log.1596903303.20762 [ 1878.112129] LustreError: 11-0: fir-MDT0000-lwp-MDT0003: operation quota_acquire to node 10.0.10.51@o2ib7 failed: rc = -11 [ 1878.123094] LustreError: Skipped 106 previous similar messages [ 1901.059851] Lustre: fir-MDT0000-osp-MDT0003: Connection restored to 10.0.10.51@o2ib7 (at 10.0.10.51@o2ib7) [ 1901.069509] Lustre: Skipped 28 previous similar messages [ 1901.087982] Lustre: 20762:0:(service.c:2165:ptlrpc_server_handle_request()) @@@ Request took longer than estimated (600:342s); client may timeout. req@ffff9d10f6e89200 x1672340512903040/t725906929266(0) o36->b0c2b6ec-d912-4@10.50.0.62@o2ib2:252/0 lens 536/424 e 1 to 0 dl 1596903017 ref 1 fl Complete:/0/0 rc 0/0 [ 1901.091110] LNet: Service thread pid 21130 completed after 968.57s. This indicates the system was overloaded (too many service threads, or there were not enough hardware resources). [ 1901.131753] Lustre: 20762:0:(service.c:2165:ptlrpc_server_handle_request()) Skipped 1 previous similar message [ 8784.688717] perf: interrupt took too long (2508 > 2500), lowering kernel.perf_event_max_sample_rate to 79000 [22153.291798] Lustre: fir-MDT0003: Connection restored to e12594ea-3256-4 (at 10.49.26.35@o2ib1) [22153.300422] Lustre: Skipped 2 previous similar messages [22175.661946] Lustre: fir-MDT0003: haven't heard from client e12594ea-3256-4 (at 10.49.26.35@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. 
exp ffff9d109260e000, cur 1596923634 expire 1596923484 last 1596923407 [31677.035294] perf: interrupt took too long (3139 > 3135), lowering kernel.perf_event_max_sample_rate to 63000 [98859.849927] Lustre: fir-MDT0003: Connection restored to e12594ea-3256-4 (at 10.49.26.35@o2ib1) [98898.739332] Lustre: fir-MDT0003: haven't heard from client 37ebccea-e3d0-4 (at 10.49.26.35@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9d108eb48000, cur 1597000355 expire 1597000205 last 1597000128 [109251.006699] Lustre: fir-MDT0003: haven't heard from client 0093b08b-cced-4 (at 10.50.7.60@o2ib2) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9d10929bf400, cur 1597010707 expire 1597010557 last 1597010480 [109492.624882] LustreError: 20453:0:(ldlm_lockd.c:2324:ldlm_cancel_handler()) ldlm_cancel from 10.50.7.60@o2ib2 arrived at 1597010948 with bad export cookie 18282351031258259909 [109736.791871] Lustre: fir-MDT0003: Connection restored to e12594ea-3256-4 (at 10.49.26.35@o2ib1) [109740.958654] Lustre: fir-MDT0003: Connection restored to 0093b08b-cced-4 (at 10.50.7.60@o2ib2) [109776.019036] Lustre: fir-MDT0003: haven't heard from client 02acdb4c-57b0-4 (at 10.49.26.35@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9d1034847000, cur 1597011232 expire 1597011082 last 1597011005 [109967.025184] Lustre: fir-MDT0003: haven't heard from client 0093b08b-cced-4 (at 10.50.7.60@o2ib2) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9cf94d22c000, cur 1597011423 expire 1597011273 last 1597011196 [114986.764678] Lustre: fir-MDT0003: Connection restored to 0093b08b-cced-4 (at 10.50.7.60@o2ib2) [115846.175855] Lustre: fir-MDT0003: haven't heard from client 0093b08b-cced-4 (at 10.50.7.60@o2ib2) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9d1946a77c00, cur 1597017302 expire 1597017152 last 1597017075 [116048.181128] Lustre: fir-MDT0003: Connection restored to 0093b08b-cced-4 (at 10.50.7.60@o2ib2) [116275.188020] Lustre: fir-MDT0003: haven't heard from client 0093b08b-cced-4 (at 10.50.7.60@o2ib2) in 227 seconds. I think it's dead, and I am evicting it. 
exp ffff9ce0e4e75400, cur 1597017731 expire 1597017581 last 1597017504 [120171.967943] Lustre: 21103:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [120172.740331] Lustre: 21054:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [120174.330820] Lustre: 21204:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [120174.342307] Lustre: 21204:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 1 previous similar message [120188.520743] Lustre: 21103:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [120188.532231] Lustre: 21103:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 2 previous similar messages [120192.880964] Lustre: 21306:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [120192.892441] Lustre: 21306:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 5 previous similar messages [120201.072749] Lustre: 20934:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [120201.084231] Lustre: 20934:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 10 previous similar messages [120217.162421] Lustre: 20915:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [120217.173908] Lustre: 20915:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 20 previous similar messages [120249.455351] Lustre: 20943:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [120249.466834] Lustre: 20943:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 26 previous similar messages [120313.791913] Lustre: 21033:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [120313.803402] Lustre: 21033:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 68 previous similar messages [120441.909237] Lustre: 21283:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [120441.920740] Lustre: 21283:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 142 previous similar messages [120698.455874] Lustre: 21213:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [120698.467357] Lustre: 21213:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 312 previous similar messages [121210.682476] Lustre: 21262:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [121210.693960] Lustre: 21262:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 568 previous similar messages [121811.182568] Lustre: 20940:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [121811.194056] Lustre: 20940:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 662 previous similar messages [122411.293512] Lustre: 21086:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [122411.304995] Lustre: 21086:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 691 previous similar messages [123011.840789] Lustre: 20948:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [123011.852266] Lustre: 20948:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 708 previous similar messages [123612.154393] Lustre: 
20878:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [123612.165901] Lustre: 20878:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 698 previous similar messages [124212.775515] Lustre: 21216:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [124212.787008] Lustre: 21216:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 700 previous similar messages [124813.363748] Lustre: 21177:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [124813.375234] Lustre: 21177:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 708 previous similar messages [125413.685557] Lustre: 20961:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [125413.697051] Lustre: 20961:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 703 previous similar messages [126014.303067] Lustre: 21309:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [126014.314546] Lustre: 21309:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 710 previous similar messages [126614.663337] Lustre: 21233:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [126614.674808] Lustre: 21233:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 729 previous similar messages [127215.321697] Lustre: 20996:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [127215.333182] Lustre: 20996:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 728 previous similar messages [127815.499548] Lustre: 20878:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [127815.511054] Lustre: 20878:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 717 previous similar messages [128415.822409] Lustre: 21262:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [128415.833902] Lustre: 21262:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 741 previous similar messages [129016.418192] Lustre: 20934:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [129016.429666] Lustre: 20934:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 704 previous similar messages [129617.017722] Lustre: 21133:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [129617.029197] Lustre: 21133:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 721 previous similar messages [130217.399813] Lustre: 21002:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [130217.411294] Lustre: 21002:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 723 previous similar messages [130818.062002] Lustre: 20898:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [130818.073491] Lustre: 20898:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 698 previous similar messages [131418.174951] Lustre: 21023:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [131418.186430] Lustre: 21023:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 738 previous similar messages [132018.870705] Lustre: 20972:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [132018.882189] 
Lustre: 20972:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 726 previous similar messages [132619.303870] Lustre: 21163:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [132619.315350] Lustre: 21163:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 723 previous similar messages [133219.581641] Lustre: 21301:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [133219.593122] Lustre: 21301:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 742 previous similar messages [133819.696954] Lustre: 20954:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [133819.708442] Lustre: 20954:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 717 previous similar messages [134419.961366] Lustre: 21206:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [134419.972846] Lustre: 21206:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 723 previous similar messages [135020.374598] Lustre: 20464:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [135020.386090] Lustre: 20464:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 768 previous similar messages [135620.462421] Lustre: 21285:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [135620.473901] Lustre: 21285:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 774 previous similar messages [136220.857010] Lustre: 20934:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [136220.868492] Lustre: 20934:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 805 previous similar messages [136821.169793] Lustre: 21213:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [136821.181270] Lustre: 21213:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 806 previous similar messages [137421.964312] Lustre: 21278:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [137421.975788] Lustre: 21278:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 815 previous similar messages [138022.053519] Lustre: 20465:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [138022.064995] Lustre: 20465:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 799 previous similar messages [138622.218692] Lustre: 20992:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [138622.230170] Lustre: 20992:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 814 previous similar messages [139222.593835] Lustre: 21024:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [139222.605318] Lustre: 21024:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 791 previous similar messages [139822.944298] Lustre: 21233:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [139822.955775] Lustre: 21233:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 764 previous similar messages [140422.969670] Lustre: 20847:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [140422.981152] Lustre: 20847:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 795 previous similar messages [141023.046538] Lustre: 
21309:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [141023.058019] Lustre: 21309:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 822 previous similar messages [141623.131224] Lustre: 20940:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [141623.142698] Lustre: 20940:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 830 previous similar messages [142223.226574] Lustre: 21216:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [142223.238057] Lustre: 21216:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 831 previous similar messages [142823.864658] Lustre: 21092:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [142823.876145] Lustre: 21092:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 847 previous similar messages [143424.252796] Lustre: 21103:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [143424.264270] Lustre: 21103:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 845 previous similar messages [144024.921466] Lustre: 21283:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [144024.932947] Lustre: 21283:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 831 previous similar messages [144625.335059] Lustre: 20961:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [144625.346543] Lustre: 20961:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 849 previous similar messages [145225.605137] Lustre: 21213:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [145225.616616] Lustre: 21213:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 852 previous similar messages [145826.130076] Lustre: 20891:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [145826.141556] Lustre: 20891:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 855 previous similar messages [146426.522917] Lustre: 20996:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [146426.534396] Lustre: 20996:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 851 previous similar messages [147027.004473] Lustre: 21305:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [147027.015958] Lustre: 21305:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 887 previous similar messages [147627.463865] Lustre: 21068:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [147627.475342] Lustre: 21068:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 887 previous similar messages [148227.836244] Lustre: 20929:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [148227.847726] Lustre: 20929:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 873 previous similar messages [148827.862338] Lustre: 21301:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [148827.873819] Lustre: 21301:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 864 previous similar messages [149428.113816] Lustre: 21278:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [149428.125295] 
Lustre: 21278:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 870 previous similar messages [150028.678030] Lustre: 21210:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [150028.689510] Lustre: 21210:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 891 previous similar messages [150628.813566] Lustre: 20465:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [150628.825038] Lustre: 20465:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 896 previous similar messages [151229.117360] Lustre: 20992:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [151229.128838] Lustre: 20992:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 900 previous similar messages [151829.265991] Lustre: 21111:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [151829.277465] Lustre: 21111:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 918 previous similar messages [152429.308618] Lustre: 21233:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [152429.320098] Lustre: 21233:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 911 previous similar messages [153029.721169] Lustre: 21204:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [153029.732646] Lustre: 21204:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 904 previous similar messages [153630.284939] Lustre: 21283:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [153630.296416] Lustre: 21283:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 893 previous similar messages [154230.766805] Lustre: 21232:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [154230.778285] Lustre: 21232:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 886 previous similar messages [154831.078789] Lustre: 20908:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [154831.090282] Lustre: 20908:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 919 previous similar messages [155431.264046] Lustre: 20898:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [155431.275524] Lustre: 20898:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 897 previous similar messages [156031.910917] Lustre: 21210:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [156031.922398] Lustre: 21210:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 890 previous similar messages [156632.446995] Lustre: 20861:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [156632.458476] Lustre: 20861:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 880 previous similar messages [157232.793695] Lustre: 21031:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [157232.805180] Lustre: 21031:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 871 previous similar messages [157832.967349] Lustre: 21213:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [157832.978827] Lustre: 21213:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 872 previous similar messages [158433.547033] Lustre: 
21140:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [158433.558512] Lustre: 21140:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 916 previous similar messages [159033.889066] Lustre: 20954:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [159033.900543] Lustre: 20954:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 954 previous similar messages [159633.915590] Lustre: 21177:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [159633.927071] Lustre: 21177:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 986 previous similar messages [160234.032001] Lustre: 20899:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [160234.043480] Lustre: 20899:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 1042 previous similar messages [160834.105701] Lustre: 21306:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [160834.117183] Lustre: 21306:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 998 previous similar messages [161434.565160] Lustre: 21306:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [161434.576651] Lustre: 21306:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 970 previous similar messages [162034.834600] Lustre: 21002:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [162034.846080] Lustre: 21002:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 979 previous similar messages [162635.427639] Lustre: 21031:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [162635.439121] Lustre: 21031:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 1021 previous similar messages [163235.528710] Lustre: 21302:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [163235.540194] Lustre: 21302:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 1015 previous similar messages [163835.975827] Lustre: 21137:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [163835.987307] Lustre: 21137:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 1058 previous similar messages [164436.459008] Lustre: 20915:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [164436.470487] Lustre: 20915:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 1065 previous similar messages [165036.758544] Lustre: 21140:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [165036.770025] Lustre: 21140:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 1059 previous similar messages [165637.343337] Lustre: 21033:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [165637.354821] Lustre: 21033:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 1038 previous similar messages [166237.452424] Lustre: 21096:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [166237.463913] Lustre: 21096:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 992 previous similar messages [166837.809572] Lustre: 20464:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero 
[166837.821060] Lustre: 20464:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 979 previous similar messages [167438.028642] Lustre: 20908:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [167438.040120] Lustre: 20908:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 977 previous similar messages [168038.312310] Lustre: 21193:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [168038.323789] Lustre: 21193:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 1018 previous similar messages [168638.367354] Lustre: 21213:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [168638.378842] Lustre: 21213:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 1003 previous similar messages [169238.740615] Lustre: 21177:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [169238.752120] Lustre: 21177:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 1002 previous similar messages [169838.852699] Lustre: 20927:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [169838.864200] Lustre: 20927:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 984 previous similar messages [170439.051181] Lustre: 21137:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [170439.062700] Lustre: 21137:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 978 previous similar messages [171039.605825] Lustre: 21027:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [171039.617318] Lustre: 21027:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 970 previous similar messages [171640.133079] Lustre: 21233:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [171640.144553] Lustre: 21233:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 994 previous similar messages [172240.274128] Lustre: 21124:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [172240.285605] Lustre: 21124:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 934 previous similar messages [172840.868707] Lustre: 21276:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [172840.880188] Lustre: 21276:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 964 previous similar messages [173363.204661] Lustre: fir-MDT0003: Connection restored to 40339601-8037-4 (at 10.49.27.27@o2ib1) [173441.454883] Lustre: 21096:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [173441.466371] Lustre: 21096:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 984 previous similar messages [174041.747952] Lustre: 21163:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [174041.759429] Lustre: 21163:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 956 previous similar messages [174641.875786] Lustre: 20972:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [174641.887258] Lustre: 20972:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 1048 previous similar messages [175241.952699] Lustre: 21033:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [175241.964183] 
Lustre: 21033:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 1031 previous similar messages [175842.070886] Lustre: 21250:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [175842.082373] Lustre: 21250:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 1011 previous similar messages [176442.098074] Lustre: 21148:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [176442.109557] Lustre: 21148:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 1163 previous similar messages [177042.501583] Lustre: 21206:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [177042.513061] Lustre: 21206:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 1262 previous similar messages [177642.720283] Lustre: 21309:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [177642.731797] Lustre: 21309:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 1481 previous similar messages [177987.141772] Lustre: fir-MDT0003: Connection restored to 18c2bcea-280e-4 (at 10.50.5.2@o2ib2) [178243.057121] Lustre: 20948:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [178243.068602] Lustre: 20948:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 1786 previous similar messages [178843.145037] Lustre: 21309:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [178843.156512] Lustre: 21309:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 1629 previous similar messages [179443.206941] Lustre: 21027:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [179443.218438] Lustre: 21027:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 2009 previous similar messages [179907.135515] LNet: Service thread pid 21236 was inactive for 200.39s. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes: [179907.152536] LNet: Skipped 1 previous similar message [179907.157599] Pid: 21236, comm: mdt00_083 3.10.0-957.27.2.el7_lustre.pl2.x86_64 #1 SMP Thu Nov 7 15:26:16 PST 2019 [179907.167896] Call Trace: [179907.170441] [] 0xffffffffffffffff [179907.175562] LustreError: dumping log to /tmp/lustre-log.1597081361.21236 [180056.764820] Lustre: 77311:0:(llog_cat.c:899:llog_cat_process_or_fork()) fir-MDD0003: catlog [0x24e:0xa:0x0] crosses index zero [180056.776302] Lustre: 77311:0:(llog_cat.c:899:llog_cat_process_or_fork()) Skipped 990 previous similar messages [180301.212224] Lustre: 21309:0:(service.c:1372:ptlrpc_at_send_early_reply()) @@@ Couldn't add any time (5/5), not sending early reply req@ffff9ce8fcee6300 x1674960138330496/t0(0) o46->f972f41f-af14-c427-d6f4-f019a90c69e8@10.0.10.3@o2ib7:60/0 lens 264/224 e 24 to 0 dl 1597081760 ref 2 fl Interpret:/0/0 rc 0/0 [180307.793897] Lustre: fir-MDT0003: Client f972f41f-af14-c427-d6f4-f019a90c69e8 (at 10.0.10.3@o2ib7) reconnecting [180307.803990] Lustre: Skipped 2 previous similar messages [180307.809337] Lustre: fir-MDT0003: Connection restored to 974577fb-0e05-9af2-6b92-3f2c3b3290cd (at 10.0.10.3@o2ib7) [180449.164127] LNet: Service thread pid 21236 completed after 742.41s. This indicates the system was overloaded (too many service threads, or there were not enough hardware resources). 
[180449.180378] LNet: Skipped 33 previous similar messages [185578.111761] Lustre: fir-MDT0003: haven't heard from client 4d8d442d-799e-4 (at 10.49.27.27@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9d043771f800, cur 1597087032 expire 1597086882 last 1597086805 [187097.637611] Lustre: fir-MDT0003: Connection restored to 40339601-8037-4 (at 10.49.27.27@o2ib1) [187296.654758] Lustre: fir-MDT0003: Connection restored to 18c2bcea-280e-4 (at 10.50.5.2@o2ib2) [207682.690514] Lustre: fir-MDT0003: haven't heard from client 78f17c31-27f8-4 (at 10.49.7.8@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9d1092aa0400, cur 1597109136 expire 1597108986 last 1597108909 [223631.163625] Lustre: 21103:0:(mdd_device.c:1811:mdd_changelog_clear()) fir-MDD0003: Failure to clear the changelog for user 1: -22 [223631.663435] Lustre: 20961:0:(mdd_device.c:1811:mdd_changelog_clear()) fir-MDD0003: Failure to clear the changelog for user 1: -22 [223631.675174] Lustre: 20961:0:(mdd_device.c:1811:mdd_changelog_clear()) Skipped 911 previous similar messages [223632.663680] Lustre: 20464:0:(mdd_device.c:1811:mdd_changelog_clear()) fir-MDD0003: Failure to clear the changelog for user 1: -22 [223632.675419] Lustre: 20464:0:(mdd_device.c:1811:mdd_changelog_clear()) Skipped 1567 previous similar messages [223634.663793] Lustre: 21092:0:(mdd_device.c:1811:mdd_changelog_clear()) fir-MDD0003: Failure to clear the changelog for user 1: -22 [223634.675529] Lustre: 21092:0:(mdd_device.c:1811:mdd_changelog_clear()) Skipped 2621 previous similar messages [223638.663645] Lustre: 20861:0:(mdd_device.c:1811:mdd_changelog_clear()) fir-MDD0003: Failure to clear the changelog for user 1: -22 [223638.675382] Lustre: 20861:0:(mdd_device.c:1811:mdd_changelog_clear()) Skipped 5663 previous similar messages [258565.725369] Lustre: fir-MDT0003: Connection restored to f9aa9717-0f2c-4 (at 10.49.24.23@o2ib1) [258579.919013] Lustre: fir-MDT0003: Connection restored to b0b32558-1daa-4 (at 10.50.14.3@o2ib2) [258584.074625] Lustre: fir-MDT0003: Connection restored to d71af60f-d92b-4 (at 10.50.17.33@o2ib2) [258601.052525] Lustre: fir-MDT0003: Connection restored to b394092c-af20-4 (at 10.50.10.26@o2ib2) [258601.061227] Lustre: Skipped 1 previous similar message [258611.151138] Lustre: fir-MDT0003: Connection restored to a5532ed0-3006-4 (at 10.50.5.1@o2ib2) [258611.159666] Lustre: Skipped 1 previous similar message [258642.899762] Lustre: fir-MDT0003: Connection restored to b6314ce0-8801-4 (at 10.50.9.39@o2ib2) [258642.908383] Lustre: Skipped 2 previous similar messages [258729.702272] Lustre: fir-MDT0003: Connection restored to 78f17c31-27f8-4 (at 10.49.7.8@o2ib1) [270208.851687] Lustre: fir-MDT0003: Connection restored to 4801f02b-804c-4 (at 10.50.5.4@o2ib2) [270234.821586] Lustre: fir-MDT0003: Connection restored to f9aa9717-0f2c-4 (at 10.49.24.23@o2ib1) [270295.464460] Lustre: fir-MDT0003: haven't heard from client 4801f02b-804c-4 (at 10.50.5.4@o2ib2) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9cf0d1c48000, cur 1597171747 expire 1597171597 last 1597171520 [270328.627185] Lustre: fir-MDT0003: Connection restored to 0093b08b-cced-4 (at 10.50.7.60@o2ib2) [270777.386197] Lustre: fir-MDT0003: Connection restored to 0093b08b-cced-4 (at 10.50.7.60@o2ib2) [291765.046888] Lustre: fir-MDT0003: haven't heard from client 5bfdf5fe-3d28-4 (at 10.50.3.34@o2ib2) in 227 seconds. I think it's dead, and I am evicting it. 
exp ffff9d109246a800, cur 1597193216 expire 1597193066 last 1597192989 [291765.066859] Lustre: Skipped 1 previous similar message [305153.570406] Lustre: fir-MDT0003: Connection restored to e12594ea-3256-4 (at 10.49.26.35@o2ib1) [305183.428446] Lustre: fir-MDT0003: haven't heard from client 3d6acc12-03f2-4 (at 10.49.26.35@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9d05ffa4e000, cur 1597206634 expire 1597206484 last 1597206407 [342040.554494] Lustre: fir-MDT0003: Connection restored to 01d8ddbb-050c-4 (at 10.50.5.57@o2ib2) [342047.471040] Lustre: fir-MDT0003: haven't heard from client 01d8ddbb-050c-4 (at 10.50.5.57@o2ib2) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9d1092848400, cur 1597243497 expire 1597243347 last 1597243270 [450620.753851] Lustre: fir-MDT0003: Connection restored to 4801f02b-804c-4 (at 10.50.5.4@o2ib2) [450647.290917] Lustre: fir-MDT0003: Connection restored to f9aa9717-0f2c-4 (at 10.49.24.23@o2ib1) [450714.458460] Lustre: fir-MDT0003: haven't heard from client 7a4bd836-d4ab-4 (at 10.49.24.23@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9cffaa78b800, cur 1597352161 expire 1597352011 last 1597351934 [470846.993039] Lustre: fir-MDT0003: haven't heard from client d41d5794-535b-4 (at 10.49.27.27@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9d04a01d7400, cur 1597372293 expire 1597372143 last 1597372066 [470847.013095] Lustre: Skipped 1 previous similar message [509302.035468] Lustre: fir-MDT0003: haven't heard from client 507c7243-9c69-4 (at 10.51.1.35@o2ib3) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9d109200a000, cur 1597410747 expire 1597410597 last 1597410520 [527214.505543] Lustre: fir-MDT0003: haven't heard from client e5f7b382-77c9-4 (at 10.51.1.29@o2ib3) in 227 seconds. I think it's dead, and I am evicting it. 
exp ffff9d1092817000, cur 1597428659 expire 1597428509 last 1597428432 [527361.022358] Lustre: fir-MDT0003: Connection restored to abcaecdb-9db0-4 (at 10.51.1.4@o2ib3) [528545.812221] Lustre: fir-MDT0003: Connection restored to 3bac5ea6-ee0f-4 (at 10.51.1.40@o2ib3) [528555.302433] Lustre: fir-MDT0003: Connection restored to fac09ef8-7eb1-4 (at 10.51.1.18@o2ib3) [528561.117018] Lustre: fir-MDT0003: Connection restored to 517fcb3e-a2b3-4 (at 10.51.1.10@o2ib3) [528566.041433] Lustre: fir-MDT0003: Connection restored to 0654a8a4-6545-4 (at 10.51.1.42@o2ib3) [528566.050060] Lustre: Skipped 2 previous similar messages [528586.960739] Lustre: fir-MDT0003: Connection restored to e5f7b382-77c9-4 (at 10.51.1.29@o2ib3) [528586.969365] Lustre: Skipped 1 previous similar message [528597.197547] Lustre: fir-MDT0003: Connection restored to e975d764-4b77-4 (at 10.51.1.34@o2ib3) [528597.206160] Lustre: Skipped 2 previous similar messages [528689.920328] Lustre: fir-MDT0003: Connection restored to dd65725d-2284-4 (at 10.51.1.71@o2ib3) [528733.755864] Lustre: fir-MDT0003: Connection restored to b2b7f503-100a-4 (at 10.51.1.56@o2ib3) [528814.490409] Lustre: fir-MDT0003: Connection restored to c2efcbb4-d887-4 (at 10.51.1.1@o2ib3) [528814.498953] Lustre: Skipped 54 previous similar messages [528968.738113] Lustre: fir-MDT0003: Connection restored to 0e91b1c3-bfa8-4 (at 10.51.1.60@o2ib3) [530125.282794] Lustre: fir-MDT0003: Connection restored to 44e14051-ce2d-4 (at 10.51.12.18@o2ib3) [530388.473392] Lustre: fir-MDT0003: Connection restored to 44975de7-3407-4 (at 10.51.12.17@o2ib3) [530534.259427] Lustre: fir-MDT0003: Connection restored to 075c0105-e726-4 (at 10.51.12.19@o2ib3) [531442.072554] Lustre: fir-MDT0003: Client 67703ce6-337f-4 (at 10.51.0.16@o2ib3) reconnecting [531442.080939] Lustre: fir-MDT0003: Connection restored to 67703ce6-337f-4 (at 10.51.0.16@o2ib3) [531452.576264] Lustre: fir-MDT0003: Client ef35fd6a-d6d8-4 (at 10.51.0.15@o2ib3) reconnecting [531473.032526] Lustre: fir-MDT0003: Client 67703ce6-337f-4 (at 10.51.0.16@o2ib3) reconnecting [531473.040935] Lustre: fir-MDT0003: Connection restored to 67703ce6-337f-4 (at 10.51.0.16@o2ib3) [531473.049559] Lustre: Skipped 1 previous similar message [531495.876222] LustreError: 137-5: fir-MDT0001_UUID: not available for connect from 10.51.0.15@o2ib3 (no target). If you are running an HA pair check that the target is mounted on the other server. [531495.893601] LustreError: Skipped 28 previous similar messages [531505.889087] LustreError: 137-5: fir-MDT0001_UUID: not available for connect from 10.51.12.17@o2ib3 (no target). If you are running an HA pair check that the target is mounted on the other server. [531530.977694] LustreError: 137-5: fir-MDT0000_UUID: not available for connect from 10.51.12.17@o2ib3 (no target). If you are running an HA pair check that the target is mounted on the other server. [531530.995160] LustreError: Skipped 3 previous similar messages [531546.826693] Lustre: fir-MDT0003: Client ac2d5bf7-e29f-4 (at 10.51.1.34@o2ib3) reconnecting [531546.835050] Lustre: Skipped 2 previous similar messages [531546.840397] Lustre: fir-MDT0003: Connection restored to e975d764-4b77-4 (at 10.51.1.34@o2ib3) [531546.849013] Lustre: Skipped 2 previous similar messages [536112.891431] Lustre: fir-MDT0003: Connection restored to e12594ea-3256-4 (at 10.49.26.35@o2ib1) [536112.900146] Lustre: Skipped 5 previous similar messages [536147.740515] Lustre: fir-MDT0003: haven't heard from client 730256df-1478-4 (at 10.49.26.35@o2ib1) in 227 seconds. 
I think it's dead, and I am evicting it. exp ffff9d0fa0872000, cur 1597437592 expire 1597437442 last 1597437365 [536147.760577] Lustre: Skipped 73 previous similar messages [537892.084100] Lustre: fir-MDT0003: Connection restored to e12594ea-3256-4 (at 10.49.26.35@o2ib1) [537921.787568] Lustre: fir-MDT0003: haven't heard from client 91bb9fce-8346-4 (at 10.49.26.35@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9cf39d338c00, cur 1597439366 expire 1597439216 last 1597439139 [540961.608351] Lustre: fir-MDT0003: Connection restored to e12594ea-3256-4 (at 10.49.26.35@o2ib1) [541005.869840] Lustre: fir-MDT0003: haven't heard from client 11ca10d8-060a-4 (at 10.49.26.35@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9d15131e2800, cur 1597442450 expire 1597442300 last 1597442223 [542225.546185] Lustre: fir-MDT0003: Connection restored to e12594ea-3256-4 (at 10.49.26.35@o2ib1) [542268.903621] Lustre: fir-MDT0003: haven't heard from client 6fd2559b-6944-4 (at 10.49.26.35@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9d002f90c400, cur 1597443713 expire 1597443563 last 1597443486 [543164.835637] Lustre: fir-MDT0003: Connection restored to e12594ea-3256-4 (at 10.49.26.35@o2ib1) [543206.928655] Lustre: fir-MDT0003: haven't heard from client 1b5142d2-23f4-4 (at 10.49.26.35@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9d0f2afce400, cur 1597444651 expire 1597444501 last 1597444424 [543699.816456] Lustre: fir-MDT0003: Connection restored to e12594ea-3256-4 (at 10.49.26.35@o2ib1) [543718.942284] Lustre: fir-MDT0003: haven't heard from client c55e8469-253a-4 (at 10.49.26.35@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9d05ff020000, cur 1597445163 expire 1597445013 last 1597444936 [543843.590599] Lustre: fir-MDT0003: Connection restored to a3435062-b3fc-4 (at 10.51.1.45@o2ib3) [543853.729380] Lustre: fir-MDT0003: Client 1be46ba6-6564-4 (at 10.51.1.45@o2ib3) reconnecting [543853.737736] Lustre: Skipped 5 previous similar messages [543853.743078] Lustre: fir-MDT0003: Connection restored to a3435062-b3fc-4 (at 10.51.1.45@o2ib3) [543948.165110] Lustre: fir-MDT0003: Client 0cbafe63-9c1f-4 (at 10.51.1.45@o2ib3) reconnecting [543948.173499] Lustre: fir-MDT0003: Connection restored to a3435062-b3fc-4 (at 10.51.1.45@o2ib3) [544017.972378] LustreError: 137-5: fir-MDT0002_UUID: not available for connect from 10.51.1.45@o2ib3 (no target). If you are running an HA pair check that the target is mounted on the other server. [544017.989757] LustreError: Skipped 3 previous similar messages [544103.080962] Lustre: fir-MDT0003: Client 1be46ba6-6564-4 (at 10.51.1.45@o2ib3) reconnecting [544103.089314] Lustre: Skipped 1 previous similar message [544103.094573] Lustre: fir-MDT0003: Connection restored to a3435062-b3fc-4 (at 10.51.1.45@o2ib3) [544103.103212] Lustre: Skipped 1 previous similar message [544469.728001] LustreError: 137-5: fir-MDT0000_UUID: not available for connect from 10.51.1.45@o2ib3 (no target). If you are running an HA pair check that the target is mounted on the other server. 
[544476.726483] Lustre: fir-MDT0003: Client 1be46ba6-6564-4 (at 10.51.1.45@o2ib3) reconnecting [544476.734868] Lustre: fir-MDT0003: Connection restored to a3435062-b3fc-4 (at 10.51.1.45@o2ib3) [544547.830776] Lustre: fir-MDT0003: Client 1be46ba6-6564-4 (at 10.51.1.45@o2ib3) reconnecting [544547.839157] Lustre: fir-MDT0003: Connection restored to a3435062-b3fc-4 (at 10.51.1.45@o2ib3) [544667.940520] Lustre: fir-MDT0003: Connection restored to a3435062-b3fc-4 (at 10.51.1.45@o2ib3) [545218.567820] Lustre: fir-MDT0003: Client 1be46ba6-6564-4 (at 10.51.1.45@o2ib3) reconnecting [545218.576182] Lustre: Skipped 1 previous similar message [545218.581441] Lustre: fir-MDT0003: Connection restored to a3435062-b3fc-4 (at 10.51.1.45@o2ib3) [545356.728966] LustreError: 137-5: fir-MDT0002_UUID: not available for connect from 10.51.1.45@o2ib3 (no target). If you are running an HA pair check that the target is mounted on the other server. [545509.361109] Lustre: fir-MDT0003: Client 0cbafe63-9c1f-4 (at 10.51.1.45@o2ib3) reconnecting [545509.369477] Lustre: Skipped 1 previous similar message [545509.374732] Lustre: fir-MDT0003: Connection restored to a3435062-b3fc-4 (at 10.51.1.45@o2ib3) [545509.383341] Lustre: Skipped 1 previous similar message [545595.006060] LustreError: 137-5: fir-MDT0001_UUID: not available for connect from 10.51.1.45@o2ib3 (no target). If you are running an HA pair check that the target is mounted on the other server. [546132.572989] Lustre: fir-MDT0003: Client 1be46ba6-6564-4 (at 10.51.1.45@o2ib3) reconnecting [546132.581348] Lustre: Skipped 8 previous similar messages [546132.586701] Lustre: fir-MDT0003: Connection restored to a3435062-b3fc-4 (at 10.51.1.45@o2ib3) [546132.595317] Lustre: Skipped 8 previous similar messages [546176.580797] LustreError: 137-5: fir-MDT0000_UUID: not available for connect from 10.51.1.45@o2ib3 (no target). If you are running an HA pair check that the target is mounted on the other server. [546314.148900] LustreError: 137-5: fir-MDT0001_UUID: not available for connect from 10.51.1.45@o2ib3 (no target). If you are running an HA pair check that the target is mounted on the other server. [546667.959020] LustreError: 137-5: fir-MDT0000_UUID: not available for connect from 10.51.1.45@o2ib3 (no target). If you are running an HA pair check that the target is mounted on the other server. [546746.069273] LustreError: 137-5: fir-MDT0001_UUID: not available for connect from 10.51.1.45@o2ib3 (no target). If you are running an HA pair check that the target is mounted on the other server. [546917.390714] Lustre: fir-MDT0003: Client 1be46ba6-6564-4 (at 10.51.1.45@o2ib3) reconnecting [546917.399078] Lustre: Skipped 5 previous similar messages [546917.404425] Lustre: fir-MDT0003: Connection restored to a3435062-b3fc-4 (at 10.51.1.45@o2ib3) [546917.413046] Lustre: Skipped 5 previous similar messages [547331.566595] LustreError: 137-5: fir-MDT0001_UUID: not available for connect from 10.51.1.45@o2ib3 (no target). If you are running an HA pair check that the target is mounted on the other server. 
[547658.793959] Lustre: fir-MDT0003: Client 1be46ba6-6564-4 (at 10.51.1.45@o2ib3) reconnecting [547658.802319] Lustre: Skipped 2 previous similar messages [547658.807668] Lustre: fir-MDT0003: Connection restored to a3435062-b3fc-4 (at 10.51.1.45@o2ib3) [547658.816281] Lustre: Skipped 2 previous similar messages [548598.847606] Lustre: fir-MDT0003: Client 0cbafe63-9c1f-4 (at 10.51.1.45@o2ib3) reconnecting [548598.855973] Lustre: Skipped 3 previous similar messages [548598.861322] Lustre: fir-MDT0003: Connection restored to a3435062-b3fc-4 (at 10.51.1.45@o2ib3) [548598.869934] Lustre: Skipped 3 previous similar messages [549389.403535] LustreError: 137-5: fir-MDT0001_UUID: not available for connect from 10.51.1.45@o2ib3 (no target). If you are running an HA pair check that the target is mounted on the other server. [549529.637801] LustreError: 137-5: fir-MDT0000_UUID: not available for connect from 10.51.1.45@o2ib3 (no target). If you are running an HA pair check that the target is mounted on the other server. [549665.640695] Lustre: fir-MDT0003: Client 54c13fbf-743b-4 (at 10.50.17.28@o2ib2) reconnecting [549665.649140] Lustre: Skipped 2 previous similar messages [549665.654487] Lustre: fir-MDT0003: Connection restored to 54c13fbf-743b-4 (at 10.50.17.28@o2ib2) [549665.663186] Lustre: Skipped 2 previous similar messages [550317.684011] LustreError: 137-5: fir-MDT0002_UUID: not available for connect from 10.51.1.45@o2ib3 (no target). If you are running an HA pair check that the target is mounted on the other server. [550427.361853] Lustre: fir-MDT0003: Client 1be46ba6-6564-4 (at 10.51.1.45@o2ib3) reconnecting [550427.370234] Lustre: fir-MDT0003: Connection restored to a3435062-b3fc-4 (at 10.51.1.45@o2ib3) [550655.811930] LustreError: 137-5: fir-MDT0001_UUID: not available for connect from 10.51.1.45@o2ib3 (no target). If you are running an HA pair check that the target is mounted on the other server. [551199.592638] Lustre: fir-MDT0003: Client 1be46ba6-6564-4 (at 10.51.1.45@o2ib3) reconnecting [551199.600999] Lustre: Skipped 1 previous similar message [551199.606264] Lustre: fir-MDT0003: Connection restored to a3435062-b3fc-4 (at 10.51.1.45@o2ib3) [551199.614891] Lustre: Skipped 1 previous similar message [551750.368438] LustreError: 137-5: fir-MDT0002_UUID: not available for connect from 10.51.1.45@o2ib3 (no target). If you are running an HA pair check that the target is mounted on the other server. [551839.024315] LustreError: 137-5: fir-MDT0002_UUID: not available for connect from 10.51.1.45@o2ib3 (no target). If you are running an HA pair check that the target is mounted on the other server. [551846.025502] LustreError: 137-5: fir-MDT0001_UUID: not available for connect from 10.51.1.45@o2ib3 (no target). If you are running an HA pair check that the target is mounted on the other server. [551899.333705] LustreError: 137-5: fir-MDT0000_UUID: not available for connect from 10.51.1.45@o2ib3 (no target). If you are running an HA pair check that the target is mounted on the other server. [552004.314907] LustreError: 137-5: fir-MDT0000_UUID: not available for connect from 10.51.1.45@o2ib3 (no target). If you are running an HA pair check that the target is mounted on the other server. [552055.644736] LustreError: 137-5: fir-MDT0000_UUID: not available for connect from 10.51.1.45@o2ib3 (no target). If you are running an HA pair check that the target is mounted on the other server. [552081.378820] LustreError: 137-5: fir-MDT0002_UUID: not available for connect from 10.51.1.45@o2ib3 (no target). 
If you are running an HA pair check that the target is mounted on the other server. [552207.683687] Lustre: fir-MDT0003: Client 1be46ba6-6564-4 (at 10.51.1.45@o2ib3) reconnecting [552207.692046] Lustre: Skipped 1 previous similar message [552207.697300] Lustre: fir-MDT0003: Connection restored to a3435062-b3fc-4 (at 10.51.1.45@o2ib3) [552207.705916] Lustre: Skipped 1 previous similar message [552659.448905] LustreError: 137-5: fir-MDT0001_UUID: not available for connect from 10.51.1.45@o2ib3 (no target). If you are running an HA pair check that the target is mounted on the other server. [553036.602416] Lustre: fir-MDT0003: Client 0cbafe63-9c1f-4 (at 10.51.1.45@o2ib3) reconnecting [553036.610775] Lustre: Skipped 3 previous similar messages [553036.616123] Lustre: fir-MDT0003: Connection restored to a3435062-b3fc-4 (at 10.51.1.45@o2ib3) [553036.624734] Lustre: Skipped 3 previous similar messages [553288.196816] Lustre: fir-MDT0003: haven't heard from client 1be46ba6-6564-4 (at 10.51.1.45@o2ib3) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9d08ceff8c00, cur 1597454732 expire 1597454582 last 1597454505 [554154.817506] Lustre: fir-MDT0003: Connection restored to 4801f02b-804c-4 (at 10.50.5.4@o2ib2) [554154.826033] Lustre: Skipped 1 previous similar message [567628.575500] Lustre: fir-MDT0003: haven't heard from client 850ebd4e-db74-4 (at 10.49.27.27@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9d20f5f7b000, cur 1597469072 expire 1597468922 last 1597468845 [567628.595554] Lustre: Skipped 1 previous similar message [570456.654934] Lustre: fir-MDT0003: haven't heard from client cd8b5b64-9cb9-4 (at 10.49.7.8@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9d1013e86400, cur 1597471900 expire 1597471750 last 1597471673 [570613.453822] Lustre: fir-MDT0003: Connection restored to 78f17c31-27f8-4 (at 10.49.7.8@o2ib1) [570613.462351] Lustre: Skipped 4 previous similar messages [727268.844945] Lustre: fir-MDT0003: haven't heard from client 0e93f05e-1408-4 (at 10.49.7.8@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9d0680807000, cur 1597628708 expire 1597628558 last 1597628481 [727422.479098] Lustre: fir-MDT0003: Connection restored to 78f17c31-27f8-4 (at 10.49.7.8@o2ib1) [741929.528264] Lustre: fir-MDT0003: Client 8e3fa4c3-88a1-4 (at 10.50.17.10@o2ib2) reconnecting [741929.536705] Lustre: Skipped 1 previous similar message [741929.541961] Lustre: fir-MDT0003: Connection restored to 8e3fa4c3-88a1-4 (at 10.50.17.10@o2ib2) [741953.048081] Lustre: fir-MDT0003: Connection restored to b8d724aa-c446-4 (at 10.50.8.44@o2ib2) [741953.056691] Lustre: Skipped 12 previous similar messages [741955.601294] Lustre: fir-MDT0003: Connection restored to b332ac95-7106-4 (at 10.50.2.28@o2ib2) [741955.609906] Lustre: Skipped 5 previous similar messages [741992.812962] Lustre: fir-MDT0003: Connection restored to b8a33377-6c0c-4 (at 10.50.10.20@o2ib2) [741992.821671] Lustre: Skipped 14 previous similar messages [758911.018614] Lustre: fir-MDT0003: Connection restored to 36f839d9-d30f-4 (at 10.49.18.29@o2ib1) [758911.027317] Lustre: Skipped 4 previous similar messages [758957.689484] Lustre: fir-MDT0003: haven't heard from client 36f839d9-d30f-4 (at 10.49.18.29@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. 
exp ffff9d109246d400, cur 1597660396 expire 1597660246 last 1597660169 [784416.959374] Lustre: fir-MDT0003: Connection restored to a5800472-0bd4-4 (at 10.50.1.49@o2ib2) [784418.219185] Lustre: fir-MDT0003: Connection restored to 4801f02b-804c-4 (at 10.50.5.4@o2ib2) [784444.740839] Lustre: fir-MDT0003: Connection restored to f9aa9717-0f2c-4 (at 10.49.24.23@o2ib1) [784490.383749] Lustre: fir-MDT0003: haven't heard from client a5800472-0bd4-4 (at 10.50.1.49@o2ib2) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9d1092b1ec00, cur 1597685928 expire 1597685778 last 1597685701 [790072.259445] Lustre: fir-MDT0003: Connection restored to 40339601-8037-4 (at 10.49.27.27@o2ib1) [800371.289298] Lustre: fir-MDT0003: Connection restored to b29bd073-151f-4 (at 10.49.18.33@o2ib1) [803096.148791] Lustre: fir-MDT0003: Connection restored to c3909848-c382-4 (at 10.51.12.5@o2ib3) [803096.157407] Lustre: Skipped 2 previous similar messages [811076.087027] Lustre: fir-MDT0003: haven't heard from client bfbd6eed-50c7-4 (at 10.51.12.7@o2ib3) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9d11e94acc00, cur 1597712513 expire 1597712363 last 1597712286 [811076.106997] Lustre: Skipped 2 previous similar messages [811129.998318] Lustre: fir-MDT0003: Connection restored to fb4b0116-eddd-4 (at 10.51.13.2@o2ib3) [811130.006935] Lustre: Skipped 12 previous similar messages [811190.883368] Lustre: fir-MDT0003: Connection restored to bfbd6eed-50c7-4 (at 10.51.12.7@o2ib3) [811752.105202] Lustre: fir-MDT0003: haven't heard from client 08f55ffe-14ab-4 (at 10.51.12.15@o2ib3) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9cf7c2201800, cur 1597713189 expire 1597713039 last 1597712962 [811941.110105] Lustre: fir-MDT0003: haven't heard from client 6e4d754e-69d4-4 (at 10.49.27.27@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9cfcb27b4000, cur 1597713378 expire 1597713228 last 1597713151 [811941.130161] Lustre: Skipped 7 previous similar messages [812874.135449] Lustre: fir-MDT0003: haven't heard from client 1eb9fc94-54fe-4 (at 10.51.12.7@o2ib3) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9d1379365000, cur 1597714311 expire 1597714161 last 1597714084 [813033.919229] Lustre: fir-MDT0003: Connection restored to d42895f5-d7a6-4 (at 10.51.12.16@o2ib3) [813037.625195] Lustre: fir-MDT0003: Connection restored to 9443f4b2-8d7b-4 (at 10.51.12.11@o2ib3) [813040.489199] Lustre: fir-MDT0003: Connection restored to b72ed905-1fdf-4 (at 10.51.12.8@o2ib3) [813045.182112] Lustre: fir-MDT0003: Connection restored to 8eaeec1c-de2e-4 (at 10.51.12.10@o2ib3) [813045.190821] Lustre: Skipped 1 previous similar message [813051.130023] Lustre: fir-MDT0003: Connection restored to 1537f853-9a67-4 (at 10.51.12.9@o2ib3) [813051.138684] Lustre: Skipped 1 previous similar message [813065.781218] Lustre: fir-MDT0003: Connection restored to 08f55ffe-14ab-4 (at 10.51.12.15@o2ib3) [813065.789929] Lustre: Skipped 1 previous similar message [820257.760719] Lustre: fir-MDT0003: Connection restored to bfbd6eed-50c7-4 (at 10.51.12.7@o2ib3) [865268.575723] Lustre: fir-MDT0003: haven't heard from client 583e2eda-0d03-4 (at 10.51.12.7@o2ib3) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9cfac934e000, cur 1597766704 expire 1597766554 last 1597766477 [869164.676728] Lustre: fir-MDT0003: haven't heard from client 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. 
exp ffff9d1092a2a400, cur 1597770600 expire 1597770450 last 1597770373 [869939.410307] LustreError: 20454:0:(ldlm_lockd.c:2324:ldlm_cancel_handler()) ldlm_cancel from 10.49.24.31@o2ib1 arrived at 1597771374 with bad export cookie 18282351031258260217 [869939.426389] Lustre: fir-MDT0003: Connection restored to 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) [870165.701465] Lustre: fir-MDT0003: haven't heard from client 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9d0914038000, cur 1597771601 expire 1597771451 last 1597771374 [870611.335689] Lustre: fir-MDT0003: Connection restored to bfbd6eed-50c7-4 (at 10.51.12.7@o2ib3) [878665.823981] Lustre: fir-MDT0003: Connection restored to 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) [879005.933760] Lustre: fir-MDT0003: haven't heard from client 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9d0761128800, cur 1597780441 expire 1597780291 last 1597780214 [879321.848803] Lustre: fir-MDT0003: Connection restored to 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) [879547.948304] Lustre: fir-MDT0003: haven't heard from client 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9d1b953f5c00, cur 1597780983 expire 1597780833 last 1597780756 [879949.853528] Lustre: fir-MDT0003: Connection restored to 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) [880175.965322] Lustre: fir-MDT0003: haven't heard from client 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9d0562f71400, cur 1597781611 expire 1597781461 last 1597781384 [880616.870326] Lustre: fir-MDT0003: Connection restored to 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) [880842.982976] Lustre: fir-MDT0003: haven't heard from client 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9d10fe82d000, cur 1597782278 expire 1597782128 last 1597782051 [881264.890922] Lustre: fir-MDT0003: Connection restored to 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) [881491.002212] Lustre: fir-MDT0003: haven't heard from client 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9d078091b800, cur 1597782926 expire 1597782776 last 1597782699 [882042.913966] Lustre: fir-MDT0003: Connection restored to 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) [882269.020312] Lustre: fir-MDT0003: haven't heard from client 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9d004e328800, cur 1597783704 expire 1597783554 last 1597783477 [882704.932439] Lustre: fir-MDT0003: Connection restored to 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) [882931.037231] Lustre: fir-MDT0003: haven't heard from client 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9d1242ea9c00, cur 1597784366 expire 1597784216 last 1597784139 [883421.946191] Lustre: fir-MDT0003: Connection restored to 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) [883648.056645] Lustre: fir-MDT0003: haven't heard from client 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9cfa42938000, cur 1597785083 expire 1597784933 last 1597784856 [884077.961407] Lustre: fir-MDT0003: Connection restored to 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) [884304.073872] Lustre: fir-MDT0003: haven't heard from client 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) in 227 seconds. 
I think it's dead, and I am evicting it. exp ffff9d0b1c794800, cur 1597785739 expire 1597785589 last 1597785512 [884681.967751] Lustre: fir-MDT0003: Connection restored to 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) [884908.090213] Lustre: fir-MDT0003: haven't heard from client 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9d0dcc63bc00, cur 1597786343 expire 1597786193 last 1597786116 [885380.000604] Lustre: fir-MDT0003: Connection restored to 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) [885606.108149] Lustre: fir-MDT0003: haven't heard from client 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9d20f5f7f800, cur 1597787041 expire 1597786891 last 1597786814 [886170.040451] Lustre: fir-MDT0003: Connection restored to 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) [886396.128236] Lustre: fir-MDT0003: haven't heard from client 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9cf646bb0000, cur 1597787831 expire 1597787681 last 1597787604 [886926.038431] Lustre: fir-MDT0003: Connection restored to 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) [887152.149142] Lustre: fir-MDT0003: haven't heard from client 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9d10363db000, cur 1597788587 expire 1597788437 last 1597788360 [887626.054169] Lustre: fir-MDT0003: Connection restored to 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) [887852.167780] Lustre: fir-MDT0003: haven't heard from client 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9d103487c800, cur 1597789287 expire 1597789137 last 1597789060 [888394.078707] Lustre: fir-MDT0003: Connection restored to 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) [888620.187590] Lustre: fir-MDT0003: haven't heard from client 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9d10363dc400, cur 1597790055 expire 1597789905 last 1597789828 [889117.142292] Lustre: fir-MDT0003: Connection restored to 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) [889343.207045] Lustre: fir-MDT0003: haven't heard from client 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9cf8a070b800, cur 1597790778 expire 1597790628 last 1597790551 [889876.142438] Lustre: fir-MDT0003: Connection restored to 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) [890102.227114] Lustre: fir-MDT0003: haven't heard from client 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9cf388aef800, cur 1597791537 expire 1597791387 last 1597791310 [890590.162729] Lustre: fir-MDT0003: Connection restored to 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) [890816.245991] Lustre: fir-MDT0003: haven't heard from client 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9cf646bb3000, cur 1597792251 expire 1597792101 last 1597792024 [891279.183875] Lustre: fir-MDT0003: Connection restored to 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) [891505.263257] Lustre: fir-MDT0003: haven't heard from client 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. 
exp ffff9d180d4a9c00, cur 1597792940 expire 1597792790 last 1597792713 [891978.200565] Lustre: fir-MDT0003: Connection restored to 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) [892204.283227] Lustre: fir-MDT0003: haven't heard from client 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9d07602ee800, cur 1597793639 expire 1597793489 last 1597793412 [892643.218597] Lustre: fir-MDT0003: Connection restored to 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) [892869.301578] Lustre: fir-MDT0003: haven't heard from client 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9d07f1f29800, cur 1597794304 expire 1597794154 last 1597794077 [893375.239186] Lustre: fir-MDT0003: Connection restored to 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) [893601.320624] Lustre: fir-MDT0003: haven't heard from client 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9d2026e62c00, cur 1597795036 expire 1597794886 last 1597794809 [894093.256237] Lustre: fir-MDT0003: Connection restored to 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) [894319.339563] Lustre: fir-MDT0003: haven't heard from client 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9cf8779e5000, cur 1597795754 expire 1597795604 last 1597795527 [894825.275050] Lustre: fir-MDT0003: Connection restored to 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) [895051.359330] Lustre: fir-MDT0003: haven't heard from client 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9cf47d8e0400, cur 1597796486 expire 1597796336 last 1597796259 [895584.620659] Lustre: fir-MDT0003: Connection restored to 40339601-8037-4 (at 10.49.27.27@o2ib1) [895586.289707] Lustre: fir-MDT0003: Connection restored to 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) [895812.378774] Lustre: fir-MDT0003: haven't heard from client 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9cffda122c00, cur 1597797247 expire 1597797097 last 1597797020 [896326.317251] Lustre: fir-MDT0003: Connection restored to 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) [896552.399652] Lustre: fir-MDT0003: haven't heard from client 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9cfa206a6400, cur 1597797987 expire 1597797837 last 1597797760 [896930.333832] Lustre: fir-MDT0003: Connection restored to 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) [897156.413881] Lustre: fir-MDT0003: haven't heard from client 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9cf8627a2400, cur 1597798591 expire 1597798441 last 1597798364 [897654.347730] Lustre: fir-MDT0003: Connection restored to 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) [897880.434498] Lustre: fir-MDT0003: haven't heard from client 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9d10f7bab000, cur 1597799315 expire 1597799165 last 1597799088 [898310.366613] Lustre: fir-MDT0003: Connection restored to 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) [898536.450960] Lustre: fir-MDT0003: haven't heard from client 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. 
exp ffff9d07602ee000, cur 1597799971 expire 1597799821 last 1597799744 [898958.367957] Lustre: fir-MDT0003: Connection restored to 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) [899184.469077] Lustre: fir-MDT0003: haven't heard from client 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9cf0dec18800, cur 1597800619 expire 1597800469 last 1597800392 [899641.405963] Lustre: fir-MDT0003: Connection restored to 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) [899867.487665] Lustre: fir-MDT0003: haven't heard from client 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9d0680802800, cur 1597801302 expire 1597801152 last 1597801075 [900314.416390] Lustre: fir-MDT0003: Connection restored to 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) [900540.505811] Lustre: fir-MDT0003: haven't heard from client 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9d1ba4d1dc00, cur 1597801975 expire 1597801825 last 1597801748 [900906.438578] Lustre: fir-MDT0003: Connection restored to 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) [901132.521908] Lustre: fir-MDT0003: haven't heard from client 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9d0f8398c400, cur 1597802567 expire 1597802417 last 1597802340 [901558.451552] Lustre: fir-MDT0003: Connection restored to 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) [901784.539489] Lustre: fir-MDT0003: haven't heard from client 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9d10363dc400, cur 1597803219 expire 1597803069 last 1597802992 [902201.474977] Lustre: fir-MDT0003: Connection restored to 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) [902427.555401] Lustre: fir-MDT0003: haven't heard from client 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9d10f9455c00, cur 1597803862 expire 1597803712 last 1597803635 [902822.486436] Lustre: fir-MDT0003: Connection restored to 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) [903048.572861] Lustre: fir-MDT0003: haven't heard from client 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9d1048716800, cur 1597804483 expire 1597804333 last 1597804256 [903454.503506] Lustre: fir-MDT0003: Connection restored to 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) [903680.589173] Lustre: fir-MDT0003: haven't heard from client 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9d1b65c2d000, cur 1597805115 expire 1597804965 last 1597804888 [903994.598733] Lustre: fir-MDT0003: Connection restored to 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) [904221.603387] Lustre: fir-MDT0003: haven't heard from client 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9d1697f26800, cur 1597805656 expire 1597805506 last 1597805429 [904619.625467] Lustre: fir-MDT0003: Connection restored to 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) [904846.621878] Lustre: fir-MDT0003: haven't heard from client 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. 
exp ffff9d1034844000, cur 1597806281 expire 1597806131 last 1597806054 [905291.557147] Lustre: fir-MDT0003: Connection restored to 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) [905517.639686] Lustre: fir-MDT0003: haven't heard from client 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9d0680807800, cur 1597806952 expire 1597806802 last 1597806725 [905999.573466] Lustre: fir-MDT0003: Connection restored to 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) [906225.659879] Lustre: fir-MDT0003: haven't heard from client 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9cffaa789000, cur 1597807660 expire 1597807510 last 1597807433 [906654.588293] Lustre: fir-MDT0003: Connection restored to 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) [906880.675121] Lustre: fir-MDT0003: haven't heard from client 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9d0f32f3d000, cur 1597808315 expire 1597808165 last 1597808088 [907375.605102] Lustre: fir-MDT0003: Connection restored to 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) [907818.699926] Lustre: fir-MDT0003: haven't heard from client 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9d00f0635c00, cur 1597809253 expire 1597809103 last 1597809026 [908195.628981] Lustre: fir-MDT0003: Connection restored to 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) [908447.719492] Lustre: fir-MDT0003: haven't heard from client 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9d004e331400, cur 1597809882 expire 1597809732 last 1597809655 [908577.718683] Lustre: fir-MDT0003: Connection restored to 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) [908803.726674] Lustre: fir-MDT0003: haven't heard from client 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9cf85a6d7000, cur 1597810238 expire 1597810088 last 1597810011 [908858.664846] Lustre: fir-MDT0003: Connection restored to 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) [909084.733085] Lustre: fir-MDT0003: haven't heard from client 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9d078091e400, cur 1597810519 expire 1597810369 last 1597810292 [911331.860831] Lustre: 20991:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1597812759/real 1597812759] req@ffff9d10f6e8c800 x1674503472998208/t0(0) o104->fir-MDT0003@10.49.7.8@o2ib1:15/16 lens 296/224 e 0 to 1 dl 1597812766 ref 1 fl Rpc:X/0/ffffffff rc 0/-1 [911331.888083] Lustre: 20991:0:(client.c:2133:ptlrpc_expire_one_request()) Skipped 28 previous similar messages [911352.792405] Lustre: fir-MDT0003: haven't heard from client 4ea01d5a-8a43-4 (at 10.49.7.8@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9d00ef791800, cur 1597812787 expire 1597812637 last 1597812560 [917061.947334] Lustre: fir-MDT0003: haven't heard from client 8ed30fba-4090-4 (at 10.49.27.27@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. 
exp ffff9d0dcc63fc00, cur 1597818496 expire 1597818346 last 1597818269 [917152.454782] Lustre: fir-MDT0003: Connection restored to 40339601-8037-4 (at 10.49.27.27@o2ib1) [918866.940398] Lustre: fir-MDT0003: Connection restored to 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) [919093.003115] Lustre: fir-MDT0003: haven't heard from client 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9d0639db7c00, cur 1597820527 expire 1597820377 last 1597820300 [927059.173148] Lustre: fir-MDT0003: Connection restored to 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) [927285.229052] Lustre: fir-MDT0003: haven't heard from client 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9d002f64a000, cur 1597828719 expire 1597828569 last 1597828492 [929222.287738] Lustre: fir-MDT0003: haven't heard from client 0ea6f21c-8921-4 (at 10.49.27.27@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9d187d677400, cur 1597830656 expire 1597830506 last 1597830429 [949825.802876] Lustre: fir-MDT0003: Connection restored to 40339601-8037-4 (at 10.49.27.27@o2ib1) [950019.733347] Lustre: fir-MDT0003: Connection restored to 78f17c31-27f8-4 (at 10.49.7.8@o2ib1) [950365.892795] Lustre: fir-MDT0003: haven't heard from client 04626c89-3f35-4 (at 10.49.27.27@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9d1a78b3c000, cur 1597851799 expire 1597851649 last 1597851572 [951279.918368] Lustre: fir-MDT0003: haven't heard from client 18fcd0f7-92c8-4 (at 10.51.1.23@o2ib3) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9d07602ec000, cur 1597852713 expire 1597852563 last 1597852486 [952385.216927] Lustre: fir-MDT0003: Connection restored to 9e455aea-8e9a-4 (at 10.51.1.2@o2ib3) [952401.234972] Lustre: fir-MDT0003: Connection restored to d198cbd4-28a2-4 (at 10.51.1.8@o2ib3) [952407.860581] Lustre: fir-MDT0003: Connection restored to 517fcb3e-a2b3-4 (at 10.51.1.10@o2ib3) [952410.037752] Lustre: fir-MDT0003: Connection restored to f11d4880-b41d-4 (at 10.51.1.9@o2ib3) [952410.046288] Lustre: Skipped 1 previous similar message [952415.360992] Lustre: fir-MDT0003: Connection restored to b61aaeb6-8885-4 (at 10.51.1.17@o2ib3) [952415.369606] Lustre: Skipped 2 previous similar messages [952424.275877] Lustre: fir-MDT0003: Connection restored to 925133ce-2cb7-4 (at 10.51.1.26@o2ib3) [952424.284498] Lustre: Skipped 7 previous similar messages [952443.087532] Lustre: fir-MDT0003: Connection restored to 16d84491-2e53-4 (at 10.51.1.7@o2ib3) [952443.096061] Lustre: Skipped 16 previous similar messages [952476.607730] Lustre: fir-MDT0003: Connection restored to fed13e1b-131a-4 (at 10.51.1.36@o2ib3) [952476.616348] Lustre: Skipped 18 previous similar messages [953386.404025] Lustre: fir-MDT0003: Connection restored to 3fed0d73-447f-4 (at 10.51.1.5@o2ib3) [953386.412557] Lustre: Skipped 18 previous similar messages [958723.696310] Lustre: fir-MDT0003: Connection restored to 9251d1dd-b663-4 (at 10.50.10.30@o2ib2) [958745.116979] Lustre: fir-MDT0003: haven't heard from client 9251d1dd-b663-4 (at 10.50.10.30@o2ib2) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9d1092fde800, cur 1597860178 expire 1597860028 last 1597859951 [958745.137034] Lustre: Skipped 70 previous similar messages [960702.168538] Lustre: fir-MDT0003: haven't heard from client 709d2839-c6e3-4 (at 10.50.10.5@o2ib2) in 227 seconds. I think it's dead, and I am evicting it. 
exp ffff9d109284d800, cur 1597862135 expire 1597861985 last 1597861908 [960981.816579] Lustre: fir-MDT0003: Connection restored to 709d2839-c6e3-4 (at 10.50.10.5@o2ib2) [969940.362471] Lustre: fir-MDT0003: Connection restored to 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) [970166.419534] Lustre: fir-MDT0003: haven't heard from client 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9cfcb27b4800, cur 1597871599 expire 1597871449 last 1597871372 [972108.423127] Lustre: fir-MDT0003: Connection restored to 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) [972334.477226] Lustre: fir-MDT0003: haven't heard from client 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9d103486d400, cur 1597873767 expire 1597873617 last 1597873540 [976093.600707] perf: interrupt took too long (3953 > 3923), lowering kernel.perf_event_max_sample_rate to 50000 [976641.098109] Lustre: fir-MDT0003: Connection restored to 5dd1f2f7-e94f-4 (at 10.51.0.61@o2ib3) [976641.106750] Lustre: Skipped 1 previous similar message [978045.543319] perf: interrupt took too long (4964 > 4941), lowering kernel.perf_event_max_sample_rate to 40000 [979844.687945] Lustre: fir-MDT0003: haven't heard from client 9d767d69-9f00-4 (at 10.51.12.2@o2ib3) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9ceb44399000, cur 1597881277 expire 1597881127 last 1597881050 [981256.081740] Lustre: fir-MDT0003: Connection restored to 52dee2e2-4d73-4 (at 10.51.12.3@o2ib3) [982058.084150] Lustre: fir-MDT0003: Connection restored to 9d767d69-9f00-4 (at 10.51.12.2@o2ib3) [994162.050724] Lustre: fir-MDT0003: haven't heard from client acd8b5b2-cefe-4 (at 10.51.0.13@o2ib3) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9d109284fc00, cur 1597895594 expire 1597895444 last 1597895367 [994162.070705] Lustre: Skipped 1 previous similar message [1033035.129630] Lustre: fir-MDT0003: haven't heard from client 65c59c4a-56c3-4 (at 10.49.27.8@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9d10929f3400, cur 1597934466 expire 1597934316 last 1597934239 [1044256.823116] Lustre: fir-MDT0003: Connection restored to b0b32558-1daa-4 (at 10.50.14.3@o2ib2) [1044292.435245] Lustre: fir-MDT0003: haven't heard from client b0b32558-1daa-4 (at 10.50.14.3@o2ib2) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9d207b3dd400, cur 1597945723 expire 1597945573 last 1597945496 [1053603.732990] LNetError: 20140:0:(o2iblnd_cb.c:3351:kiblnd_check_txs_locked()) Timed out tx: active_txs, 0 seconds [1053603.743334] LNetError: 20140:0:(o2iblnd_cb.c:3426:kiblnd_check_conns()) Timed out RDMA with 10.0.10.4@o2ib7 (106): c: 7, oc: 0, rc: 8 [1056170.046448] Lustre: fir-MDT0003: Connection restored to 4801f02b-804c-4 (at 10.50.5.4@o2ib2) [1056183.325100] Lustre: fir-MDT0003: Connection restored to 0f63cb1b-5592-4 (at 10.49.24.31@o2ib1) [1056190.892898] Lustre: fir-MDT0003: Connection restored to f9aa9717-0f2c-4 (at 10.49.24.23@o2ib1) [1056210.564459] Lustre: fir-MDT0003: Connection restored to 65c59c4a-56c3-4 (at 10.49.27.8@o2ib1) [1057985.900433] Lustre: fir-MDT0003: Connection restored to 5bfdf5fe-3d28-4 (at 10.50.3.34@o2ib2) [1059908.841792] Lustre: fir-MDT0003: haven't heard from client 9fa5311b-a51d-4 (at 10.49.18.24@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. 
exp ffff9d109242e400, cur 1597961339 expire 1597961189 last 1597961112 [1062784.736306] Lustre: fir-MDT0003: Connection restored to c1c05e0e-d0c4-4 (at 10.51.13.20@o2ib3) [1062787.397517] Lustre: fir-MDT0003: Connection restored to 637280a5-81a2-4 (at 10.51.13.23@o2ib3) [1062811.176958] Lustre: fir-MDT0003: Connection restored to 3f991350-bf87-4 (at 10.51.12.20@o2ib3) [1062813.972550] Lustre: fir-MDT0003: Connection restored to d6eac430-ab74-4 (at 10.51.12.22@o2ib3) [1062813.981336] Lustre: Skipped 1 previous similar message [1062822.011194] Lustre: fir-MDT0003: Connection restored to 3df53b0b-215e-4 (at 10.51.12.23@o2ib3) [1062822.019989] Lustre: Skipped 1 previous similar message [1063106.587937] Lustre: fir-MDT0003: Connection restored to 91882b8d-da82-4 (at 10.51.13.24@o2ib3) [1063106.596726] Lustre: Skipped 2 previous similar messages [1063274.394779] Lustre: fir-MDT0003: Connection restored to 6c0fb37b-721b-4 (at 10.51.13.17@o2ib3) [1063474.934403] Lustre: fir-MDT0003: haven't heard from client ddff788d-8aef-4 (at 10.50.7.61@o2ib2) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9d109284a800, cur 1597964905 expire 1597964755 last 1597964678 [1068975.701174] Lustre: fir-MDT0003: Connection restored to b0b32558-1daa-4 (at 10.50.14.3@o2ib2) [1069042.083783] Lustre: fir-MDT0003: haven't heard from client 595f178e-4b2e-4 (at 10.50.14.3@o2ib2) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9ce6037c9c00, cur 1597970472 expire 1597970322 last 1597970245 [1087690.346792] LNet: Service thread pid 21289 was inactive for 200.31s. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes: [1087690.363926] Pid: 21289, comm: mdt03_110 3.10.0-957.27.2.el7_lustre.pl2.x86_64 #1 SMP Thu Nov 7 15:26:16 PST 2019 [1087690.374293] Call Trace: [1087690.376982] [] ldlm_completion_ast+0x430/0x860 [ptlrpc] [1087690.384134] [] ldlm_cli_enqueue_local+0x231/0x830 [ptlrpc] [1087690.391520] [] mdt_object_local_lock+0x50b/0xb20 [mdt] [1087690.398542] [] mdt_object_lock_internal+0x70/0x360 [mdt] [1087690.405735] [] mdt_reint_object_lock+0x2c/0x60 [mdt] [1087690.412606] [] mdt_reint_striped_lock+0x8c/0x510 [mdt] [1087690.419647] [] mdt_attr_set+0x9e/0x770 [mdt] [1087690.425790] [] mdt_reint_setattr+0x599/0xc50 [mdt] [1087690.432473] [] mdt_reint_rec+0x83/0x210 [mdt] [1087690.438702] [] mdt_reint_internal+0x6e3/0xaf0 [mdt] [1087690.445453] [] mdt_reint+0x67/0x140 [mdt] [1087690.451337] [] tgt_request_handle+0xada/0x1570 [ptlrpc] [1087690.458481] [] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc] [1087690.466396] [] ptlrpc_main+0xb34/0x1470 [ptlrpc] [1087690.472913] [] kthread+0xd1/0xe0 [1087690.478006] [] ret_from_fork_nospec_begin+0xe/0x21 [1087690.484661] [] 0xffffffffffffffff [1087690.489865] LustreError: dumping log to /tmp/lustre-log.1597989119.21289 [1087690.503760] Pid: 21306, comm: mdt00_104 3.10.0-957.27.2.el7_lustre.pl2.x86_64 #1 SMP Thu Nov 7 15:26:16 PST 2019 [1087690.514106] Call Trace: [1087690.516749] [] ldlm_completion_ast+0x430/0x860 [ptlrpc] [1087690.523900] [] ldlm_cli_enqueue_local+0x231/0x830 [ptlrpc] [1087690.531338] [] mdt_object_local_lock+0x50b/0xb20 [mdt] [1087690.538434] [] mdt_object_lock_internal+0x70/0x360 [mdt] [1087690.545646] [] mdt_object_lock_try+0x27/0xb0 [mdt] [1087690.552337] [] mdt_getattr_name_lock+0x1277/0x1c30 [mdt] [1087690.559565] [] mdt_intent_getattr+0x2b5/0x480 [mdt] [1087690.566364] [] mdt_intent_policy+0x435/0xd80 [mdt] [1087690.573042] [] ldlm_lock_enqueue+0x356/0xa20 
[ptlrpc] [1087690.579989] [] ldlm_handle_enqueue0+0xa56/0x15f0 [ptlrpc] [1087690.587280] [] tgt_enqueue+0x62/0x210 [ptlrpc] [1087690.593633] [] tgt_request_handle+0xada/0x1570 [ptlrpc] [1087690.600756] [] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc] [1087690.608668] [] ptlrpc_main+0xb34/0x1470 [ptlrpc] [1087690.615194] [] kthread+0xd1/0xe0 [1087690.620283] [] ret_from_fork_nospec_begin+0xe/0x21 [1087690.626938] [] 0xffffffffffffffff [1087690.632145] Pid: 21204, comm: mdt00_073 3.10.0-957.27.2.el7_lustre.pl2.x86_64 #1 SMP Thu Nov 7 15:26:16 PST 2019 [1087690.642491] Call Trace: [1087690.645129] [] ldlm_completion_ast+0x430/0x860 [ptlrpc] [1087690.652248] [] ldlm_cli_enqueue_local+0x231/0x830 [ptlrpc] [1087690.659631] [] mdt_object_local_lock+0x50b/0xb20 [mdt] [1087690.666652] [] mdt_object_lock_internal+0x70/0x360 [mdt] [1087690.673837] [] mdt_reint_object_lock+0x2c/0x60 [mdt] [1087690.680675] [] mdt_reint_striped_lock+0x8c/0x510 [mdt] [1087690.687686] [] mdt_attr_set+0x9e/0x770 [mdt] [1087690.693830] [] mdt_reint_setattr+0x599/0xc50 [mdt] [1087690.700487] [] mdt_reint_rec+0x83/0x210 [mdt] [1087690.706718] [] mdt_reint_internal+0x6e3/0xaf0 [mdt] [1087690.713462] [] mdt_reint+0x67/0x140 [mdt] [1087690.719347] [] tgt_request_handle+0xada/0x1570 [ptlrpc] [1087690.726489] [] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc] [1087690.734393] [] ptlrpc_main+0xb34/0x1470 [ptlrpc] [1087690.740910] [] kthread+0xd1/0xe0 [1087690.746006] [] ret_from_fork_nospec_begin+0xe/0x21 [1087690.752664] [] 0xffffffffffffffff [1087690.757868] Pid: 20907, comm: mdt02_019 3.10.0-957.27.2.el7_lustre.pl2.x86_64 #1 SMP Thu Nov 7 15:26:16 PST 2019 [1087690.768214] Call Trace: [1087690.770851] [] ldlm_completion_ast+0x430/0x860 [ptlrpc] [1087690.777979] [] ldlm_cli_enqueue_local+0x231/0x830 [ptlrpc] [1087690.785354] [] mdt_object_local_lock+0x50b/0xb20 [mdt] [1087690.792375] [] mdt_object_lock_internal+0x70/0x360 [mdt] [1087690.799558] [] mdt_object_lock+0x20/0x30 [mdt] [1087690.805868] [] mdt_hsm_state_set+0xc9/0x830 [mdt] [1087690.812455] [] tgt_request_handle+0xada/0x1570 [ptlrpc] [1087690.819579] [] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc] [1087690.827475] [] ptlrpc_main+0xb34/0x1470 [ptlrpc] [1087690.833993] [] kthread+0xd1/0xe0 [1087690.839089] [] ret_from_fork_nospec_begin+0xe/0x21 [1087690.845744] [] 0xffffffffffffffff [1087690.850951] Pid: 21205, comm: mdt02_097 3.10.0-957.27.2.el7_lustre.pl2.x86_64 #1 SMP Thu Nov 7 15:26:16 PST 2019 [1087690.861297] Call Trace: [1087690.863936] [] ldlm_completion_ast+0x430/0x860 [ptlrpc] [1087690.871054] [] ldlm_cli_enqueue_local+0x231/0x830 [ptlrpc] [1087690.878428] [] mdt_object_local_lock+0x50b/0xb20 [mdt] [1087690.885451] [] mdt_object_lock_internal+0x70/0x360 [mdt] [1087690.892634] [] mdt_reint_object_lock+0x2c/0x60 [mdt] [1087690.899471] [] mdt_reint_striped_lock+0x8c/0x510 [mdt] [1087690.906477] [] mdt_attr_set+0x9e/0x770 [mdt] [1087690.912621] [] mdt_reint_setattr+0x599/0xc50 [mdt] [1087690.919284] [] mdt_reint_rec+0x83/0x210 [mdt] [1087690.925516] [] mdt_reint_internal+0x6e3/0xaf0 [mdt] [1087690.932267] [] mdt_reint+0x67/0x140 [mdt] [1087690.938144] [] tgt_request_handle+0xada/0x1570 [ptlrpc] [1087690.945285] [] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc] [1087690.953190] [] ptlrpc_main+0xb34/0x1470 [ptlrpc] [1087690.959708] [] kthread+0xd1/0xe0 [1087690.964802] [] ret_from_fork_nospec_begin+0xe/0x21 [1087690.971453] [] 0xffffffffffffffff [1087690.976674] LNet: Service thread pid 21030 was inactive for 200.94s. 
Watchdog stack traces are limited to 3 per 300 seconds, skipping this one. [1087701.099141] LustreError: dumping log to /tmp/lustre-log.1597989130.21133 [1087790.035661] LustreError: 21205:0:(ldlm_request.c:130:ldlm_expired_completion_wait()) ### lock timed out (enqueued at 1597988919, 300s ago); not entering recovery in server code, just going back to sleep ns: mdt-fir-MDT0003_UUID lock: ffff9d1d4cebd100/0xfdb7f58f5c8e3429 lrc: 3/0,1 mode: --/PW res: [0x280045a23:0x9c46:0x0].0x0 bits 0x2/0x0 rrc: 13 type: IBT flags: 0x40210400000020 nid: local remote: 0x0 expref: -99 pid: 21205 timeout: 0 lvb_type: 0 [1087790.035721] LustreError: dumping log to /tmp/lustre-log.1597989219.21306 [1087790.082235] LustreError: 21205:0:(ldlm_request.c:130:ldlm_expired_completion_wait()) Skipped 7 previous similar messages [1087800.885989] LustreError: 21133:0:(ldlm_request.c:130:ldlm_expired_completion_wait()) ### lock timed out (enqueued at 1597988930, 300s ago); not entering recovery in server code, just going back to sleep ns: mdt-fir-MDT0003_UUID lock: ffff9ce0fb44e0c0/0xfdb7f58f5ca693c2 lrc: 3/1,0 mode: --/PR res: [0x280045a23:0x9c46:0x0].0x0 bits 0x12/0x0 rrc: 13 type: IBT flags: 0x40210000000000 nid: local remote: 0x0 expref: -99 pid: 21133 timeout: 0 lvb_type: 0 [1088046.641024] LNetError: 20140:0:(o2iblnd_cb.c:3351:kiblnd_check_txs_locked()) Timed out tx: active_txs, 0 seconds [1088046.651374] LNetError: 20140:0:(o2iblnd_cb.c:3426:kiblnd_check_conns()) Timed out RDMA with 10.0.10.3@o2ib7 (6): c: 0, oc: 0, rc: 8 [1088048.349072] Lustre: 21023:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has timed out for sent delay: [sent 1597989470/real 0] req@ffff9cee083c8000 x1674509383538048/t0(0) o104->fir-MDT0003@10.0.10.3@o2ib7:15/16 lens 296/224 e 0 to 1 dl 1597989477 ref 2 fl Rpc:X/0/ffffffff rc 0/-1 [1088048.375642] Lustre: 21023:0:(client.c:2133:ptlrpc_expire_one_request()) Skipped 20 previous similar messages [1088052.351216] Lustre: 21308:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1597989480/real 1597989481] req@ffff9cee083cbf00 x1674509383869440/t0(0) o104->fir-MDT0003@10.0.10.3@o2ib7:15/16 lens 296/224 e 0 to 1 dl 1597989487 ref 1 fl Rpc:eX/0/ffffffff rc 0/-1 [1088052.379050] Lustre: 21308:0:(client.c:2133:ptlrpc_expire_one_request()) Skipped 75 previous similar messages [1088060.352411] Lustre: 21025:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1597989487/real 1597989489] req@ffff9cf829e52d00 x1674509383992704/t0(0) o104->fir-MDT0003@10.0.10.3@o2ib7:15/16 lens 296/224 e 0 to 1 dl 1597989494 ref 1 fl Rpc:eX/2/ffffffff rc 0/-1 [1088060.380101] Lustre: 21025:0:(client.c:2133:ptlrpc_expire_one_request()) Skipped 240 previous similar messages [1088076.354244] Lustre: 20953:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1597989503/real 1597989505] req@ffff9d1a0f81da00 x1674509383575872/t0(0) o104->fir-MDT0003@10.0.10.3@o2ib7:15/16 lens 296/224 e 0 to 1 dl 1597989510 ref 1 fl Rpc:eX/2/ffffffff rc 0/-1 [1088076.381938] Lustre: 20953:0:(client.c:2133:ptlrpc_expire_one_request()) Skipped 1031 previous similar messages [1088084.726111] Lustre: 21118:0:(service.c:1372:ptlrpc_at_send_early_reply()) @@@ Couldn't add any time (5/5), not sending early reply req@ffff9cf7dd6a5e80 x1672144875899648/t737100455425(0) o36->ddf76c19-da2f-4@10.50.10.33@o2ib2:309/0 lens 488/3152 e 24 to 0 dl 1597989519 ref 2 fl Interpret:/0/0 rc 0/0 
[1088091.044585] Lustre: fir-MDT0003: Client ddf76c19-da2f-4 (at 10.50.10.33@o2ib2) reconnecting [1088091.053124] Lustre: Skipped 38 previous similar messages [1088091.058642] Lustre: fir-MDT0003: Connection restored to ddf76c19-da2f-4 (at 10.50.10.33@o2ib2) [1088096.198443] Lustre: 21307:0:(service.c:1372:ptlrpc_at_send_early_reply()) @@@ Couldn't add any time (5/5), not sending early reply req@ffff9ce6ae48ec00 x1675196754989376/t0(0) o101->f972f41f-af14-c427-d6f4-f019a90c69e8@10.0.10.3@o2ib7:320/0 lens 576/3264 e 23 to 0 dl 1597989530 ref 2 fl Interpret:/0/0 rc 0/0 [1088096.227602] Lustre: 21307:0:(service.c:1372:ptlrpc_at_send_early_reply()) Skipped 7 previous similar messages [1088108.386247] Lustre: 21313:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1597989535/real 1597989537] req@ffff9cec272a3f00 x1674509385007168/t0(0) o104->fir-MDT0003@10.0.10.3@o2ib7:15/16 lens 296/224 e 0 to 1 dl 1597989542 ref 1 fl Rpc:eX/2/ffffffff rc 0/-1 [1088108.414021] Lustre: 21313:0:(client.c:2133:ptlrpc_expire_one_request()) Skipped 5658 previous similar messages [1088142.425704] LustreError: 21023:0:(ldlm_lockd.c:681:ldlm_handle_ast_error()) ### client (nid 10.0.10.3@o2ib7) failed to reply to blocking AST (req@ffff9cee083c8000 x1674509383538048 status 0 rc -110), evict it ns: mdt-fir-MDT0003_UUID lock: ffff9ce0cef5a880/0xfdb7f58f5c7d3c32 lrc: 4/0,0 mode: PR/PR res: [0x280044062:0x1ff03:0x0].0x0 bits 0x12/0x0 rrc: 7 type: IBT flags: 0x60200400000020 nid: 10.0.10.3@o2ib7 remote: 0x3e2ccfedb646eeb9 expref: 247885 pid: 21068 timeout: 1088210 lvb_type: 0 [1088142.468908] LustreError: 138-a: fir-MDT0003: A client on nid 10.0.10.3@o2ib7 was evicted due to a lock blocking callback time out: rc -110 [1088142.481561] LustreError: 20458:0:(ldlm_lockd.c:256:expired_lock_main()) ### lock callback timer expired after 101s: evicting client at 10.0.10.3@o2ib7 ns: mdt-fir-MDT0003_UUID lock: ffff9ce0cef5a880/0xfdb7f58f5c7d3c32 lrc: 3/0,0 mode: PR/PR res: [0x280044062:0x1ff03:0x0].0x0 bits 0x12/0x0 rrc: 8 type: IBT flags: 0x60200400000020 nid: 10.0.10.3@o2ib7 remote: 0x3e2ccfedb646eeb9 expref: 247886 pid: 21068 timeout: 0 lvb_type: 0 [1088172.426774] Lustre: 21248:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1597989599/real 1597989601] req@ffff9d151aaa9b00 x1674509387190976/t0(0) o104->fir-MDT0003@10.0.10.3@o2ib7:15/16 lens 296/224 e 0 to 1 dl 1597989606 ref 1 fl Rpc:eX/2/ffffffff rc 0/-1 [1088172.454468] Lustre: 21248:0:(client.c:2133:ptlrpc_expire_one_request()) Skipped 10979 previous similar messages [1088206.645556] LNet: 20140:0:(o2iblnd_cb.c:3397:kiblnd_check_conns()) Timed out tx for 10.0.10.3@o2ib7: 0 seconds [1088206.655731] LNet: 20140:0:(o2iblnd_cb.c:3397:kiblnd_check_conns()) Skipped 103 previous similar messages [1088239.646500] LNet: 20140:0:(o2iblnd_cb.c:3397:kiblnd_check_conns()) Timed out tx for 10.0.10.3@o2ib7: 5 seconds [1088239.656668] LNet: 20140:0:(o2iblnd_cb.c:3397:kiblnd_check_conns()) Skipped 192 previous similar messages [1088250.646980] LustreError: 21146:0:(ldlm_lockd.c:681:ldlm_handle_ast_error()) ### client (nid 10.0.10.3@o2ib7) failed to reply to blocking AST (req@ffff9d185aee2400 x1674509387131904 status 0 rc -110), evict it ns: mdt-fir-MDT0003_UUID lock: ffff9ce0eb37b600/0xfdb7f58f4a81486e lrc: 4/0,0 mode: PR/PR res: [0x280044386:0xe493:0x0].0x0 bits 0x12/0x0 rrc: 3 type: IBT flags: 0x60200400000020 nid: 10.0.10.3@o2ib7 remote: 0x3e2ccfedb5ef7ffa expref: 21766 pid: 21216 
timeout: 1088271 lvb_type: 0 [1088250.690038] LustreError: 21146:0:(ldlm_lockd.c:681:ldlm_handle_ast_error()) Skipped 1 previous similar message [1088250.700230] LustreError: 138-a: fir-MDT0003: A client on nid 10.0.10.3@o2ib7 was evicted due to a lock blocking callback time out: rc -110 [1088250.712843] LustreError: Skipped 1 previous similar message [1088250.718637] LustreError: 20458:0:(ldlm_lockd.c:256:expired_lock_main()) ### lock callback timer expired after 108s: evicting client at 10.0.10.3@o2ib7 ns: mdt-fir-MDT0003_UUID lock: ffff9ce0eb37b600/0xfdb7f58f4a81486e lrc: 3/0,0 mode: PR/PR res: [0x280044386:0xe493:0x0].0x0 bits 0x12/0x0 rrc: 3 type: IBT flags: 0x60200400000020 nid: 10.0.10.3@o2ib7 remote: 0x3e2ccfedb5ef7ffa expref: 21690 pid: 21216 timeout: 0 lvb_type: 0 [1088250.756224] LustreError: 20458:0:(ldlm_lockd.c:256:expired_lock_main()) Skipped 1 previous similar message [1088305.648358] LNet: 20140:0:(o2iblnd_cb.c:3397:kiblnd_check_conns()) Timed out tx for 10.0.10.3@o2ib7: 31 seconds [1088305.658617] LNet: 20140:0:(o2iblnd_cb.c:3397:kiblnd_check_conns()) Skipped 373 previous similar messages [1088314.648629] Lustre: 21111:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1597989737/real 1597989737] req@ffff9ce794ff7500 x1674509389520960/t0(0) o104->fir-MDT0003@10.0.10.3@o2ib7:15/16 lens 296/224 e 0 to 1 dl 1597989744 ref 2 fl Rpc:X/2/ffffffff rc 0/-1 [1088314.676003] Lustre: 21111:0:(client.c:2133:ptlrpc_expire_one_request()) Skipped 3133 previous similar messages [1088336.665290] LNetError: 97054:0:(o2iblnd_cb.c:2962:kiblnd_rejected()) 10.0.10.3@o2ib7 rejected: o2iblnd fatal error [1088337.144368] Lustre: fir-MDT0003: Connection restored to 974577fb-0e05-9af2-6b92-3f2c3b3290cd (at 10.0.10.3@o2ib7) [1088618.338783] Lustre: DEBUG MARKER: Thu Aug 20 23:07:27 2020 [1088692.084376] Lustre: fir-MDT0003: Client ddf76c19-da2f-4 (at 10.50.10.33@o2ib2) reconnecting [1088692.092938] Lustre: fir-MDT0003: Connection restored to ddf76c19-da2f-4 (at 10.50.10.33@o2ib2) [1088692.101728] Lustre: Skipped 1 previous similar message [1088711.117867] LustreError: 21006:0:(ldlm_request.c:130:ldlm_expired_completion_wait()) ### lock timed out (enqueued at 1597989840, 300s ago); not entering recovery in server code, just going back to sleep ns: mdt-fir-MDT0003_UUID lock: ffff9cfc95aac380/0xfdb7f58f6525049a lrc: 3/1,0 mode: --/PR res: [0x280045a23:0x9c46:0x0].0x0 bits 0x12/0x0 rrc: 15 type: IBT flags: 0x40210000000000 nid: local remote: 0x0 expref: -99 pid: 21006 timeout: 0 lvb_type: 0 [1089157.672481] LNetError: 20140:0:(o2iblnd_cb.c:3351:kiblnd_check_txs_locked()) Timed out tx: active_txs, 0 seconds [1089157.682834] LNetError: 20140:0:(o2iblnd_cb.c:3426:kiblnd_check_conns()) Timed out RDMA with 10.0.10.3@o2ib7 (105): c: 7, oc: 0, rc: 8 [1089161.456584] Lustre: 21111:0:(service.c:1372:ptlrpc_at_send_early_reply()) @@@ Couldn't add any time (5/-150), not sending early reply req@ffff9cefe0fd5a00 x1675613723910528/t0(0) o101->48c8dc6d-b841-ddef-34ea-b30c66ed6822@10.0.10.3@o2ib7:630/0 lens 576/3264 e 0 to 0 dl 1597990595 ref 2 fl Interpret:/0/0 rc 0/0 [1089278.616911] Lustre: fir-MDT0003: haven't heard from client 48c8dc6d-b841-ddef-34ea-b30c66ed6822 (at 10.0.10.3@o2ib7) in 227 seconds. I think it's dead, and I am evicting it. 
exp ffff9ce8d2728400, cur 1597990708 expire 1597990558 last 1597990481 [1089293.169442] Lustre: fir-MDT0003: Client ddf76c19-da2f-4 (at 10.50.10.33@o2ib2) reconnecting [1089293.178765] Lustre: fir-MDT0003: Connection restored to ddf76c19-da2f-4 (at 10.50.10.33@o2ib2) [1089611.937255] LNet: Service thread pid 21006 was inactive for 1200.79s. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes: [1089611.954487] LNet: Skipped 4 previous similar messages [1089611.959755] Pid: 21006, comm: mdt00_032 3.10.0-957.27.2.el7_lustre.pl2.x86_64 #1 SMP Thu Nov 7 15:26:16 PST 2019 [1089611.970130] Call Trace: [1089611.972812] [] ldlm_completion_ast+0x4e5/0x860 [ptlrpc] [1089611.979973] [] ldlm_cli_enqueue_local+0x231/0x830 [ptlrpc] [1089611.987358] [] mdt_object_local_lock+0x50b/0xb20 [mdt] [1089611.994380] [] mdt_object_lock_internal+0x70/0x360 [mdt] [1089612.001573] [] mdt_getattr_name_lock+0x11d/0x1c30 [mdt] [1089612.008665] [] mdt_intent_getattr+0x2b5/0x480 [mdt] [1089612.015417] [] mdt_intent_policy+0x435/0xd80 [mdt] [1089612.022080] [] ldlm_lock_enqueue+0x356/0xa20 [ptlrpc] [1089612.029038] [] ldlm_handle_enqueue0+0xa56/0x15f0 [ptlrpc] [1089612.036337] [] tgt_enqueue+0x62/0x210 [ptlrpc] [1089612.042697] [] tgt_request_handle+0xada/0x1570 [ptlrpc] [1089612.049828] [] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc] [1089612.057737] [] ptlrpc_main+0xb34/0x1470 [ptlrpc] [1089612.064262] [] kthread+0xd1/0xe0 [1089612.069358] [] ret_from_fork_nospec_begin+0xe/0x21 [1089612.076008] [] 0xffffffffffffffff [1089612.081236] LustreError: dumping log to /tmp/lustre-log.1597991041.21006 [1089894.206262] Lustre: fir-MDT0003: Client ddf76c19-da2f-4 (at 10.50.10.33@o2ib2) reconnecting [1089894.214851] Lustre: fir-MDT0003: Connection restored to ddf76c19-da2f-4 (at 10.50.10.33@o2ib2) [1090495.241266] Lustre: fir-MDT0003: Client ddf76c19-da2f-4 (at 10.50.10.33@o2ib2) reconnecting [1090495.249822] Lustre: fir-MDT0003: Connection restored to ddf76c19-da2f-4 (at 10.50.10.33@o2ib2) [1090627.215153] Lustre: fir-MDT0003: Connection restored to 974577fb-0e05-9af2-6b92-3f2c3b3290cd (at 10.0.10.3@o2ib7) [1091096.275270] Lustre: fir-MDT0003: Client ddf76c19-da2f-4 (at 10.50.10.33@o2ib2) reconnecting [1091096.283858] Lustre: fir-MDT0003: Connection restored to ddf76c19-da2f-4 (at 10.50.10.33@o2ib2) [1091395.539606] LNet: Service thread pid 21306 completed after 3905.40s. This indicates the system was overloaded (too many service threads, or there were not enough hardware resources). [1091395.539614] LNet: Service thread pid 20907 completed after 3905.40s. This indicates the system was overloaded (too many service threads, or there were not enough hardware resources). [1091395.540952] LustreError: 21133:0:(ldlm_lockd.c:1348:ldlm_handle_enqueue0()) ### lock on destroyed export ffff9cf00c282400 ns: mdt-fir-MDT0003_UUID lock: ffff9ce0fb44e0c0/0xfdb7f58f5ca693c2 lrc: 3/0,0 mode: PR/PR res: [0x280045a23:0x9c46:0x0].0x0 bits 0x12/0x0 rrc: 7 type: IBT flags: 0x50200000000000 nid: 10.0.10.3@o2ib7 remote: 0x3e2ccfedb647b1f5 expref: 2 pid: 21133 timeout: 0 lvb_type: 0 [1091395.541020] Lustre: 21133:0:(service.c:2165:ptlrpc_server_handle_request()) @@@ Request took longer than estimated (600:3294s); client may timeout. 
req@ffff9ce6ae48ec00 x1675196754989376/t0(0) o101->f972f41f-af14-c427-d6f4-f019a90c69e8@10.0.10.3@o2ib7:320/0 lens 576/880 e 23 to 0 dl 1597989530 ref 1 fl Complete:/0/0 rc -107/-107 [1091395.636453] LNet: Skipped 8 previous similar messages [1095549.794958] Lustre: fir-MDT0003: haven't heard from client 226ceaf2-fc95-4 (at 10.50.14.3@o2ib2) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9d2033b77800, cur 1597996979 expire 1597996829 last 1597996752 [1095651.016253] Lustre: fir-MDT0003: Connection restored to b0b32558-1daa-4 (at 10.50.14.3@o2ib2) [1134422.738544] Lustre: fir-MDT0003: Connection restored to acd8b5b2-cefe-4 (at 10.51.0.13@o2ib3) [1134750.861969] Lustre: fir-MDT0003: haven't heard from client 29f09f1f-cab5-4 (at 10.51.0.13@o2ib3) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9cf339f7c800, cur 1598036179 expire 1598036029 last 1598035952 [1135294.866205] Lustre: fir-MDT0003: Connection restored to acd8b5b2-cefe-4 (at 10.51.0.13@o2ib3) [1135751.887512] Lustre: fir-MDT0003: haven't heard from client f80a8319-8a1f-4 (at 10.50.7.62@o2ib2) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9d1092468400, cur 1598037180 expire 1598037030 last 1598036953 [1135827.889586] Lustre: fir-MDT0003: haven't heard from client 0e943951-085b-4 (at 10.51.0.13@o2ib3) in 156 seconds. I think it's dead, and I am evicting it. exp ffff9cf00c282000, cur 1598037256 expire 1598037106 last 1598037100 [1136099.986490] LustreError: 21228:0:(ldlm_lockd.c:2324:ldlm_cancel_handler()) ldlm_cancel from 10.50.7.62@o2ib2 arrived at 1598037528 with bad export cookie 18282351031258262177 [1136100.002221] LustreError: 21228:0:(ldlm_lockd.c:2324:ldlm_cancel_handler()) Skipped 17 previous similar messages [1136106.836159] Lustre: fir-MDT0003: Connection restored to f80a8319-8a1f-4 (at 10.50.7.62@o2ib2) [1136332.903457] Lustre: fir-MDT0003: haven't heard from client f80a8319-8a1f-4 (at 10.50.7.62@o2ib2) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9d0b1c797800, cur 1598037761 expire 1598037611 last 1598037534 [1136773.503203] Lustre: fir-MDT0003: Connection restored to f80a8319-8a1f-4 (at 10.50.7.62@o2ib2) [1137264.927421] Lustre: fir-MDT0003: haven't heard from client f80a8319-8a1f-4 (at 10.50.7.62@o2ib2) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9ce8b7dc6000, cur 1598038693 expire 1598038543 last 1598038466 [1137403.928988] Lustre: fir-MDT0003: Connection restored to f80a8319-8a1f-4 (at 10.50.7.62@o2ib2) [1137629.937041] Lustre: fir-MDT0003: haven't heard from client f80a8319-8a1f-4 (at 10.50.7.62@o2ib2) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9d07f1f2c800, cur 1598039058 expire 1598038908 last 1598038831 [1137683.675590] Lustre: fir-MDT0003: Connection restored to f80a8319-8a1f-4 (at 10.50.7.62@o2ib2) [1139026.414987] Lustre: fir-MDT0003: Connection restored to acd8b5b2-cefe-4 (at 10.51.0.13@o2ib3) [1144811.273880] Lustre: fir-MDT0003: Connection restored to 1586a130-8875-4 (at 10.51.12.21@o2ib3) [1148306.699326] Lustre: fir-MDT0003: Connection restored to e8968dd5-02b2-4 (at 10.51.13.19@o2ib3) [1219889.140910] Lustre: fir-MDT0003: haven't heard from client 230bb7f2-3a7b-4 (at 10.50.10.25@o2ib2) in 227 seconds. I think it's dead, and I am evicting it. 
exp ffff9d1092849400, cur 1598121315 expire 1598121165 last 1598121088 [1223328.197536] LustreError: 26883:0:(ldlm_lockd.c:2324:ldlm_cancel_handler()) ldlm_cancel from 10.50.10.25@o2ib2 arrived at 1598124753 with bad export cookie 18282351031258258173 [1223328.213784] Lustre: fir-MDT0003: Connection restored to 230bb7f2-3a7b-4 (at 10.50.10.25@o2ib2) [1223554.235503] Lustre: fir-MDT0003: haven't heard from client 230bb7f2-3a7b-4 (at 10.50.10.25@o2ib2) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9cf0f130f400, cur 1598124980 expire 1598124830 last 1598124753 [1240612.081314] Lustre: fir-MDT0003: Connection restored to 230bb7f2-3a7b-4 (at 10.50.10.25@o2ib2) [1240838.680768] Lustre: fir-MDT0003: haven't heard from client 230bb7f2-3a7b-4 (at 10.50.10.25@o2ib2) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9ce61ec57800, cur 1598142264 expire 1598142114 last 1598142037 [1241242.690544] Lustre: fir-MDT0003: Connection restored to 230bb7f2-3a7b-4 (at 10.50.10.25@o2ib2) [1241468.698130] Lustre: fir-MDT0003: haven't heard from client 230bb7f2-3a7b-4 (at 10.50.10.25@o2ib2) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9cef644f6400, cur 1598142894 expire 1598142744 last 1598142667 [1292189.043490] Lustre: fir-MDT0003: haven't heard from client 130bb216-3d2a-4 (at 10.50.7.37@o2ib2) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9d109242f000, cur 1598193613 expire 1598193463 last 1598193386 [1292485.000988] Lustre: fir-MDT0003: Connection restored to 130bb216-3d2a-4 (at 10.50.7.37@o2ib2) [1345830.764056] Lustre: fir-MDT0003: Client 9eccbaf8-d387-4 (at 10.49.0.64@o2ib1) reconnecting [1345830.772601] Lustre: fir-MDT0003: Connection restored to 9eccbaf8-d387-4 (at 10.49.0.64@o2ib1) [1376056.147271] Lustre: 20905:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1598277470/real 1598277470] req@ffff9d08ccaf6780 x1674527722901888/t0(0) o104->fir-MDT0003@10.49.27.20@o2ib1:15/16 lens 296/224 e 0 to 1 dl 1598277477 ref 1 fl Rpc:X/0/ffffffff rc 0/-1 [1376056.174928] Lustre: 20905:0:(client.c:2133:ptlrpc_expire_one_request()) Skipped 7 previous similar messages [1376091.186223] Lustre: 20905:0:(client.c:2133:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1598277505/real 1598277505] req@ffff9d08ccaf6780 x1674527722901888/t0(0) o104->fir-MDT0003@10.49.27.20@o2ib1:15/16 lens 296/224 e 0 to 1 dl 1598277512 ref 1 fl Rpc:X/2/ffffffff rc 0/-1 [1376091.213739] Lustre: 20905:0:(client.c:2133:ptlrpc_expire_one_request()) Skipped 4 previous similar messages [1376111.252875] Lustre: fir-MDT0003: haven't heard from client bf33897b-16ab-4 (at 10.49.27.20@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. 
exp ffff9d109268bc00, cur 1598277533 expire 1598277383 last 1598277306 [1376111.273080] LustreError: 20905:0:(ldlm_lockd.c:681:ldlm_handle_ast_error()) ### client (nid 10.49.27.20@o2ib1) failed to reply to blocking AST (req@ffff9d08ccaf6780 x1674527722901888 status 0 rc -5), evict it ns: mdt-fir-MDT0003_UUID lock: ffff9cf2054dbcc0/0xfdb7f59643efd3f4 lrc: 4/0,0 mode: PR/PR res: [0x280045ff4:0x2fb6:0x0].0x0 bits 0x1b/0x0 rrc: 6 type: IBT flags: 0x60200400000020 nid: 10.49.27.20@o2ib1 remote: 0x4bfdc1ff9f81818 expref: 43 pid: 20949 timeout: 1376167 lvb_type: 0 [1376111.315958] LustreError: 138-a: fir-MDT0003: A client on nid 10.49.27.20@o2ib1 was evicted due to a lock blocking callback time out: rc -5 [1399161.227581] Lustre: fir-MDT0003: Connection restored to e12594ea-3256-4 (at 10.49.26.35@o2ib1) [1399196.855289] Lustre: fir-MDT0003: haven't heard from client da749e7d-c275-4 (at 10.49.26.35@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9d002f8a8000, cur 1598300618 expire 1598300468 last 1598300391 [1402055.895640] Lustre: fir-MDT0003: Connection restored to 0ef9c93a-bb40-4 (at 10.49.21.21@o2ib1) [1402109.929082] Lustre: fir-MDT0003: haven't heard from client 0ef9c93a-bb40-4 (at 10.49.21.21@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9d1092bf5c00, cur 1598303531 expire 1598303381 last 1598303304 [1402675.943269] Lustre: fir-MDT0003: haven't heard from client c6a7ac7c-9570-4 (at 10.49.26.35@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9d0994e60800, cur 1598304097 expire 1598303947 last 1598303870 [1402761.529935] Lustre: fir-MDT0003: Connection restored to e12594ea-3256-4 (at 10.49.26.35@o2ib1) [1403994.943310] LNet: Service thread pid 21272 was inactive for 200.28s. The thread might be hung, or it might only be slow and will resume later. 
Dumping the stack trace for debugging purposes: [1403994.960455] Pid: 21272, comm: mdt03_104 3.10.0-957.27.2.el7_lustre.pl2.x86_64 #1 SMP Thu Nov 7 15:26:16 PST 2019 [1403994.970811] Call Trace: [1403994.973459] [] ldlm_completion_ast+0x430/0x860 [ptlrpc] [1403994.980610] [] ldlm_cli_enqueue_local+0x231/0x830 [ptlrpc] [1403994.987997] [] mdt_object_local_lock+0x50b/0xb20 [mdt] [1403994.995024] [] mdt_object_lock_internal+0x70/0x360 [mdt] [1403995.002219] [] mdt_object_lock+0x20/0x30 [mdt] [1403995.008536] [] mdt_hsm_state_set+0xc9/0x830 [mdt] [1403995.015140] [] tgt_request_handle+0xada/0x1570 [ptlrpc] [1403995.022292] [] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc] [1403995.030200] [] ptlrpc_main+0xb34/0x1470 [ptlrpc] [1403995.036721] [] kthread+0xd1/0xe0 [1403995.041826] [] ret_from_fork_nospec_begin+0xe/0x21 [1403995.048481] [] 0xffffffffffffffff [1403995.053691] LustreError: dumping log to /tmp/lustre-log.1598305416.21272 [1403995.061250] Pid: 21164, comm: mdt01_075 3.10.0-957.27.2.el7_lustre.pl2.x86_64 #1 SMP Thu Nov 7 15:26:16 PST 2019 [1403995.071606] Call Trace: [1403995.074252] [] ldlm_completion_ast+0x430/0x860 [ptlrpc] [1403995.081380] [] ldlm_cli_enqueue_local+0x231/0x830 [ptlrpc] [1403995.088777] [] mdt_object_local_lock+0x50b/0xb20 [mdt] [1403995.095806] [] mdt_object_lock_internal+0x70/0x360 [mdt] [1403995.102997] [] mdt_reint_object_lock+0x2c/0x60 [mdt] [1403995.109845] [] mdt_reint_striped_lock+0x8c/0x510 [mdt] [1403995.116872] [] mdt_attr_set+0x9e/0x770 [mdt] [1403995.123026] [] mdt_reint_setattr+0x599/0xc50 [mdt] [1403995.129698] [] mdt_reint_rec+0x83/0x210 [mdt] [1403995.135945] [] mdt_reint_internal+0x6e3/0xaf0 [mdt] [1403995.142692] [] mdt_reint+0x67/0x140 [mdt] [1403995.148575] [] tgt_request_handle+0xada/0x1570 [ptlrpc] [1403995.155724] [] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc] [1403995.163638] [] ptlrpc_main+0xb34/0x1470 [ptlrpc] [1403995.170155] [] kthread+0xd1/0xe0 [1403995.175252] [] ret_from_fork_nospec_begin+0xe/0x21 [1403995.181909] [] 0xffffffffffffffff [1403995.187123] Pid: 20861, comm: mdt00_006 3.10.0-957.27.2.el7_lustre.pl2.x86_64 #1 SMP Thu Nov 7 15:26:16 PST 2019 [1403995.197465] Call Trace: [1403995.200107] [] ldlm_completion_ast+0x430/0x860 [ptlrpc] [1403995.207224] [] ldlm_cli_enqueue_local+0x231/0x830 [ptlrpc] [1403995.214600] [] mdt_object_local_lock+0x50b/0xb20 [mdt] [1403995.221610] [] mdt_object_lock_internal+0x70/0x360 [mdt] [1403995.228798] [] mdt_reint_object_lock+0x2c/0x60 [mdt] [1403995.235633] [] mdt_reint_striped_lock+0x8c/0x510 [mdt] [1403995.242638] [] mdt_attr_set+0x9e/0x770 [mdt] [1403995.248781] [] mdt_reint_setattr+0x599/0xc50 [mdt] [1403995.255449] [] mdt_reint_rec+0x83/0x210 [mdt] [1403995.261680] [] mdt_reint_internal+0x6e3/0xaf0 [mdt] [1403995.268431] [] mdt_reint+0x67/0x140 [mdt] [1403995.274315] [] tgt_request_handle+0xada/0x1570 [ptlrpc] [1403995.281494] [] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc] [1403995.289403] [] ptlrpc_main+0xb34/0x1470 [ptlrpc] [1403995.295922] [] kthread+0xd1/0xe0 [1403995.301026] [] ret_from_fork_nospec_begin+0xe/0x21 [1403995.307686] [] 0xffffffffffffffff [1403995.312900] Pid: 21038, comm: mdt02_052 3.10.0-957.27.2.el7_lustre.pl2.x86_64 #1 SMP Thu Nov 7 15:26:16 PST 2019 [1403995.323254] Call Trace: [1403995.325897] [] ldlm_completion_ast+0x430/0x860 [ptlrpc] [1403995.333026] [] ldlm_cli_enqueue_local+0x231/0x830 [ptlrpc] [1403995.340401] [] mdt_object_local_lock+0x50b/0xb20 [mdt] [1403995.347421] [] mdt_object_lock_internal+0x70/0x360 [mdt] [1403995.354606] [] 
mdt_reint_object_lock+0x2c/0x60 [mdt] [1403995.361446] [] mdt_reint_striped_lock+0x8c/0x510 [mdt] [1403995.368467] [] mdt_attr_set+0x9e/0x770 [mdt] [1403995.374609] [] mdt_reint_setattr+0x599/0xc50 [mdt] [1403995.381284] [] mdt_reint_rec+0x83/0x210 [mdt] [1403995.387533] [] mdt_reint_internal+0x6e3/0xaf0 [mdt] [1403995.394289] [] mdt_reint+0x67/0x140 [mdt] [1403995.400177] [] tgt_request_handle+0xada/0x1570 [ptlrpc] [1403995.407335] [] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc] [1403995.415249] [] ptlrpc_main+0xb34/0x1470 [ptlrpc] [1403995.421777] [] kthread+0xd1/0xe0 [1403995.426871] [] ret_from_fork_nospec_begin+0xe/0x21 [1403995.433529] [] 0xffffffffffffffff [1403995.438745] Pid: 21057, comm: mdt01_049 3.10.0-957.27.2.el7_lustre.pl2.x86_64 #1 SMP Thu Nov 7 15:26:16 PST 2019 [1403995.449112] Call Trace: [1403995.451752] [] ldlm_completion_ast+0x430/0x860 [ptlrpc] [1403995.458888] [] ldlm_cli_enqueue_local+0x231/0x830 [ptlrpc] [1403995.466274] [] mdt_object_local_lock+0x50b/0xb20 [mdt] [1403995.473290] [] mdt_object_lock_internal+0x70/0x360 [mdt] [1403995.480483] [] mdt_object_lock_try+0x27/0xb0 [mdt] [1403995.487150] [] mdt_getattr_name_lock+0x1277/0x1c30 [mdt] [1403995.494334] [] mdt_intent_getattr+0x2b5/0x480 [mdt] [1403995.501096] [] mdt_intent_policy+0x435/0xd80 [mdt] [1403995.507762] [] ldlm_lock_enqueue+0x356/0xa20 [ptlrpc] [1403995.514719] [] ldlm_handle_enqueue0+0xa56/0x15f0 [ptlrpc] [1403995.522028] [] tgt_enqueue+0x62/0x210 [ptlrpc] [1403995.528380] [] tgt_request_handle+0xada/0x1570 [ptlrpc] [1403995.535504] [] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc] [1403995.543406] [] ptlrpc_main+0xb34/0x1470 [ptlrpc] [1403995.549924] [] kthread+0xd1/0xe0 [1403995.555020] [] ret_from_fork_nospec_begin+0xe/0x21 [1403995.561674] [] 0xffffffffffffffff [1403995.566877] LNet: Service thread pid 21200 was inactive for 200.90s. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one. [1403995.579917] LNet: Skipped 3 previous similar messages [1404005.695574] LNet: Service thread pid 21019 was inactive for 200.05s. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one. [1404005.708609] LNet: Skipped 2 previous similar messages [1404005.713844] LustreError: dumping log to /tmp/lustre-log.1598305426.21019 [1404094.609303] Lustre: fir-MDT0003: Connection restored to e12594ea-3256-4 (at 10.49.26.35@o2ib1) [1404094.659823] LustreError: 21164:0:(ldlm_request.c:130:ldlm_expired_completion_wait()) ### lock timed out (enqueued at 1598305215, 300s ago); not entering recovery in server code, just going back to sleep ns: mdt-fir-MDT0003_UUID lock: ffff9cfdb5be0000/0xfdb7f597405a512e lrc: 3/0,1 mode: --/PW res: [0x28004567a:0x19752:0x0].0x0 bits 0x2/0x0 rrc: 14 type: IBT flags: 0x40210400000020 nid: local remote: 0x0 expref: -99 pid: 21164 timeout: 0 lvb_type: 0 [1404094.699590] LustreError: 21164:0:(ldlm_request.c:130:ldlm_expired_completion_wait()) Skipped 7 previous similar messages [1404105.642109] LustreError: 21019:0:(ldlm_request.c:130:ldlm_expired_completion_wait()) ### lock timed out (enqueued at 1598305226, 300s ago); not entering recovery in server code, just going back to sleep ns: mdt-fir-MDT0003_UUID lock: ffff9ce88702e9c0/0xfdb7f5974077f15a lrc: 3/1,0 mode: --/PR res: [0x28004567a:0x19752:0x0].0x0 bits 0x12/0x0 rrc: 14 type: IBT flags: 0x40210000000000 nid: local remote: 0x0 expref: -99 pid: 21019 timeout: 0 lvb_type: 0 [1404118.980494] Lustre: fir-MDT0003: haven't heard from client 5738d9b4-c5b6-4 (at 10.49.26.35@o2ib1) in 227 seconds. 
I think it's dead, and I am evicting it. exp ffff9d108b048800, cur 1598305540 expire 1598305390 last 1598305313 [1404389.065217] Lustre: 21174:0:(service.c:1372:ptlrpc_at_send_early_reply()) @@@ Couldn't add any time (5/5), not sending early reply req@ffff9d10feb82d00 x1673510733300352/t739163010159(0) o36->34b6d393-8480-4@10.50.7.64@o2ib2:260/0 lens 488/3152 e 24 to 0 dl 1598305815 ref 2 fl Interpret:/0/0 rc 0/0 [1404389.705234] Lustre: 20957:0:(service.c:1372:ptlrpc_at_send_early_reply()) @@@ Couldn't add any time (5/5), not sending early reply req@ffff9d20f751f080 x1673510733301824/t739163010167(0) o36->34b6d393-8480-4@10.50.7.64@o2ib2:260/0 lens 488/3152 e 24 to 0 dl 1598305815 ref 2 fl Interpret:/0/0 rc 0/0 [1404389.733533] Lustre: 20957:0:(service.c:1372:ptlrpc_at_send_early_reply()) Skipped 4 previous similar messages [1404395.666650] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1404395.675112] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1404400.121489] Lustre: 20908:0:(service.c:1372:ptlrpc_at_send_early_reply()) @@@ Couldn't add any time (5/5), not sending early reply req@ffff9cf0eef93f00 x1675711072205312/t0(0) o101->f30ab81d-9508-4319-d2cd-8fed5307bd23@10.0.10.3@o2ib7:271/0 lens 576/3264 e 23 to 0 dl 1598305826 ref 2 fl Interpret:/0/0 rc 0/0 [1404400.150665] Lustre: 20908:0:(service.c:1372:ptlrpc_at_send_early_reply()) Skipped 2 previous similar messages [1404406.695699] Lustre: fir-MDT0003: Client f30ab81d-9508-4319-d2cd-8fed5307bd23 (at 10.0.10.3@o2ib7) reconnecting [1404406.705952] Lustre: fir-MDT0003: Connection restored to 974577fb-0e05-9af2-6b92-3f2c3b3290cd (at 10.0.10.3@o2ib7) [1404996.699069] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1404996.707540] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1405007.787939] Lustre: fir-MDT0003: Connection restored to 974577fb-0e05-9af2-6b92-3f2c3b3290cd (at 10.0.10.3@o2ib7) [1405597.731754] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1405597.740197] Lustre: Skipped 1 previous similar message [1405597.745609] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1406198.769788] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1406198.778233] Lustre: Skipped 1 previous similar message [1406198.783578] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1406198.792282] Lustre: Skipped 1 previous similar message [1406799.812960] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1406799.821416] Lustre: Skipped 1 previous similar message [1406799.826761] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1406799.835462] Lustre: Skipped 1 previous similar message [1407400.856474] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1407400.864934] Lustre: Skipped 1 previous similar message [1407400.870329] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1407400.879031] Lustre: Skipped 1 previous similar message [1407593.159229] Lustre: fir-MDT0003: Connection restored to e12594ea-3256-4 (at 10.49.26.35@o2ib1) [1407593.168033] Lustre: Skipped 1 previous similar message [1407633.069071] Lustre: fir-MDT0003: haven't heard from client 5cd474df-6828-4 (at 10.49.26.35@o2ib1) in 227 seconds. I think it's dead, and I am evicting it. 
exp ffff9cf8b1807c00, cur 1598309054 expire 1598308904 last 1598308827 [1407701.365079] LNet: Service thread pid 21272 completed after 3906.61s. This indicates the system was overloaded (too many service threads, or there were not enough hardware resources). [1407701.381491] LNet: Skipped 8 previous similar messages [1414451.786734] LNet: Service thread pid 20995 was inactive for 200.76s. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes: [1414451.803856] LNet: Skipped 4 previous similar messages [1414451.809094] Pid: 20995, comm: mdt01_033 3.10.0-957.27.2.el7_lustre.pl2.x86_64 #1 SMP Thu Nov 7 15:26:16 PST 2019 [1414451.819445] Call Trace: [1414451.822106] [] ldlm_completion_ast+0x430/0x860 [ptlrpc] [1414451.829245] [] ldlm_cli_enqueue_local+0x231/0x830 [ptlrpc] [1414451.836617] [] mdt_object_local_lock+0x50b/0xb20 [mdt] [1414451.843631] [] mdt_object_lock_internal+0x70/0x360 [mdt] [1414451.850809] [] mdt_object_lock_try+0x27/0xb0 [mdt] [1414451.857462] [] mdt_getattr_name_lock+0x1277/0x1c30 [mdt] [1414451.864641] [] mdt_intent_getattr+0x2b5/0x480 [mdt] [1414451.871384] [] mdt_intent_policy+0x435/0xd80 [mdt] [1414451.878048] [] ldlm_lock_enqueue+0x356/0xa20 [ptlrpc] [1414451.884982] [] ldlm_handle_enqueue0+0xa56/0x15f0 [ptlrpc] [1414451.892262] [] tgt_enqueue+0x62/0x210 [ptlrpc] [1414451.898598] [] tgt_request_handle+0xada/0x1570 [ptlrpc] [1414451.905717] [] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc] [1414451.913656] [] ptlrpc_main+0xb34/0x1470 [ptlrpc] [1414451.920201] [] kthread+0xd1/0xe0 [1414451.925297] [] ret_from_fork_nospec_begin+0xe/0x21 [1414451.931951] [] 0xffffffffffffffff [1414451.937149] LustreError: dumping log to /tmp/lustre-log.1598315872.20995 [1414451.944764] Pid: 21235, comm: mdt03_093 3.10.0-957.27.2.el7_lustre.pl2.x86_64 #1 SMP Thu Nov 7 15:26:16 PST 2019 [1414451.955108] Call Trace: [1414451.957741] [] ldlm_completion_ast+0x430/0x860 [ptlrpc] [1414451.964849] [] ldlm_cli_enqueue_local+0x231/0x830 [ptlrpc] [1414451.972215] [] mdt_object_local_lock+0x50b/0xb20 [mdt] [1414451.979218] [] mdt_object_lock_internal+0x70/0x360 [mdt] [1414451.986392] [] mdt_object_lock_try+0x27/0xb0 [mdt] [1414451.993048] [] mdt_getattr_name_lock+0x1277/0x1c30 [mdt] [1414452.000224] [] mdt_intent_getattr+0x2b5/0x480 [mdt] [1414452.006968] [] mdt_intent_policy+0x435/0xd80 [mdt] [1414452.013626] [] ldlm_lock_enqueue+0x356/0xa20 [ptlrpc] [1414452.020557] [] ldlm_handle_enqueue0+0xa56/0x15f0 [ptlrpc] [1414452.027838] [] tgt_enqueue+0x62/0x210 [ptlrpc] [1414452.034183] [] tgt_request_handle+0xada/0x1570 [ptlrpc] [1414452.041319] [] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc] [1414452.049214] [] ptlrpc_main+0xb34/0x1470 [ptlrpc] [1414452.055720] [] kthread+0xd1/0xe0 [1414452.060808] [] ret_from_fork_nospec_begin+0xe/0x21 [1414452.067446] [] 0xffffffffffffffff [1414452.072643] Pid: 20982, comm: mdt03_029 3.10.0-957.27.2.el7_lustre.pl2.x86_64 #1 SMP Thu Nov 7 15:26:16 PST 2019 [1414452.082988] Call Trace: [1414452.085619] [] ldlm_completion_ast+0x430/0x860 [ptlrpc] [1414452.092728] [] ldlm_cli_enqueue_local+0x231/0x830 [ptlrpc] [1414452.100095] [] mdt_object_local_lock+0x50b/0xb20 [mdt] [1414452.107098] [] mdt_object_lock_internal+0x70/0x360 [mdt] [1414452.114267] [] mdt_object_lock_try+0x27/0xb0 [mdt] [1414452.120922] [] mdt_getattr_name_lock+0x1277/0x1c30 [mdt] [1414452.128112] [] mdt_intent_getattr+0x2b5/0x480 [mdt] [1414452.134859] [] mdt_intent_policy+0x435/0xd80 [mdt] [1414452.141514] [] 
ldlm_lock_enqueue+0x356/0xa20 [ptlrpc] [1414452.148457] [] ldlm_handle_enqueue0+0xa56/0x15f0 [ptlrpc] [1414452.155745] [] tgt_enqueue+0x62/0x210 [ptlrpc] [1414452.162089] [] tgt_request_handle+0xada/0x1570 [ptlrpc] [1414452.169206] [] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc] [1414452.177102] [] ptlrpc_main+0xb34/0x1470 [ptlrpc] [1414452.183611] [] kthread+0xd1/0xe0 [1414452.188699] [] ret_from_fork_nospec_begin+0xe/0x21 [1414452.195345] [] 0xffffffffffffffff [1414452.200542] Pid: 21187, comm: mdt02_093 3.10.0-957.27.2.el7_lustre.pl2.x86_64 #1 SMP Thu Nov 7 15:26:16 PST 2019 [1414452.210890] Call Trace: [1414452.213530] [] ldlm_completion_ast+0x430/0x860 [ptlrpc] [1414452.220634] [] ldlm_cli_enqueue_local+0x231/0x830 [ptlrpc] [1414452.228002] [] mdt_object_local_lock+0x50b/0xb20 [mdt] [1414452.235013] [] mdt_object_lock_internal+0x70/0x360 [mdt] [1414452.242189] [] mdt_object_lock_try+0x27/0xb0 [mdt] [1414452.248850] [] mdt_getattr_name_lock+0x1277/0x1c30 [mdt] [1414452.256034] [] mdt_intent_getattr+0x2b5/0x480 [mdt] [1414452.262785] [] mdt_intent_policy+0x435/0xd80 [mdt] [1414452.269449] [] ldlm_lock_enqueue+0x356/0xa20 [ptlrpc] [1414452.276382] [] ldlm_handle_enqueue0+0xa56/0x15f0 [ptlrpc] [1414452.283670] [] tgt_enqueue+0x62/0x210 [ptlrpc] [1414452.290014] [] tgt_request_handle+0xada/0x1570 [ptlrpc] [1414452.297149] [] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc] [1414452.305035] [] ptlrpc_main+0xb34/0x1470 [ptlrpc] [1414452.311535] [] kthread+0xd1/0xe0 [1414452.316630] [] ret_from_fork_nospec_begin+0xe/0x21 [1414452.323279] [] 0xffffffffffffffff [1414452.328475] LNet: Service thread pid 21304 was inactive for 201.30s. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes: [1414452.345587] LNet: Skipped 3 previous similar messages [1414452.350830] Pid: 21304, comm: mdt00_102 3.10.0-957.27.2.el7_lustre.pl2.x86_64 #1 SMP Thu Nov 7 15:26:16 PST 2019 [1414452.361175] Call Trace: [1414452.363817] [] ldlm_completion_ast+0x430/0x860 [ptlrpc] [1414452.370937] [] ldlm_cli_enqueue_local+0x231/0x830 [ptlrpc] [1414452.378333] [] mdt_object_local_lock+0x50b/0xb20 [mdt] [1414452.385343] [] mdt_object_lock_internal+0x70/0x360 [mdt] [1414452.392520] [] mdt_reint_object_lock+0x2c/0x60 [mdt] [1414452.399356] [] mdt_reint_striped_lock+0x8c/0x510 [mdt] [1414452.406395] [] mdt_attr_set+0x9e/0x770 [mdt] [1414452.412539] [] mdt_reint_setattr+0x599/0xc50 [mdt] [1414452.419196] [] mdt_reint_rec+0x83/0x210 [mdt] [1414452.425423] [] mdt_reint_internal+0x6e3/0xaf0 [mdt] [1414452.432179] [] mdt_reint+0x67/0x140 [mdt] [1414452.438064] [] tgt_request_handle+0xada/0x1570 [ptlrpc] [1414452.445197] [] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc] [1414452.453091] [] ptlrpc_main+0xb34/0x1470 [ptlrpc] [1414452.459599] [] kthread+0xd1/0xe0 [1414452.464689] [] ret_from_fork_nospec_begin+0xe/0x21 [1414452.471344] [] 0xffffffffffffffff [1414452.476541] LNet: Service thread pid 21053 was inactive for 201.45s. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one. [1414457.419869] LNet: Service thread pid 20847 was inactive for 200.44s. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one. 
[1414457.432921] LNet: Skipped 7 previous similar messages [1414457.438168] LustreError: dumping log to /tmp/lustre-log.1598315878.20847 [1414551.027198] LustreError: 20982:0:(ldlm_request.c:130:ldlm_expired_completion_wait()) ### lock timed out (enqueued at 1598315671, 300s ago); not entering recovery in server code, just going back to sleep ns: mdt-fir-MDT0003_UUID lock: ffff9d20a5252880/0xfdb7f5979d9522c6 lrc: 3/1,0 mode: --/PR res: [0x28004567a:0x19752:0x0].0x0 bits 0x13/0x8 rrc: 25 type: IBT flags: 0x40210400000020 nid: local remote: 0x0 expref: -99 pid: 20982 timeout: 0 lvb_type: 0 [1414551.067049] LustreError: 20982:0:(ldlm_request.c:130:ldlm_expired_completion_wait()) Skipped 12 previous similar messages [1414556.978350] LustreError: 20847:0:(ldlm_request.c:130:ldlm_expired_completion_wait()) ### lock timed out (enqueued at 1598315677, 300s ago); not entering recovery in server code, just going back to sleep ns: mdt-fir-MDT0003_UUID lock: ffff9cf9d6790480/0xfdb7f5979d9d3d58 lrc: 3/1,0 mode: --/PR res: [0x28004567a:0x19752:0x0].0x0 bits 0x12/0x0 rrc: 25 type: IBT flags: 0x40210000000000 nid: local remote: 0x0 expref: -99 pid: 20847 timeout: 0 lvb_type: 0 [1414845.300641] Lustre: 20894:0:(service.c:1372:ptlrpc_at_send_early_reply()) @@@ Couldn't add any time (5/5), not sending early reply req@ffff9cf829e54800 x1673510771316608/t0(0) o101->34b6d393-8480-4@10.50.7.64@o2ib2:146/0 lens 576/3264 e 24 to 0 dl 1598316271 ref 2 fl Interpret:/0/0 rc 0/0 [1414845.908687] Lustre: 20991:0:(service.c:1372:ptlrpc_at_send_early_reply()) @@@ Couldn't add any time (5/5), not sending early reply req@ffff9d144729d580 x1673510771318528/t0(0) o101->34b6d393-8480-4@10.50.7.64@o2ib2:146/0 lens 576/3264 e 23 to 0 dl 1598316271 ref 2 fl Interpret:/0/0 rc 0/0 [1414845.936126] Lustre: 20991:0:(service.c:1372:ptlrpc_at_send_early_reply()) Skipped 10 previous similar messages [1414851.621812] Lustre: 21200:0:(service.c:1372:ptlrpc_at_send_early_reply()) @@@ Couldn't add any time (5/5), not sending early reply req@ffff9ce6f7b49680 x1675713797785472/t0(0) o101->f30ab81d-9508-4319-d2cd-8fed5307bd23@10.0.10.3@o2ib7:152/0 lens 576/3264 e 23 to 0 dl 1598316277 ref 2 fl Interpret:/0/0 rc 0/0 [1414851.650983] Lustre: 21200:0:(service.c:1372:ptlrpc_at_send_early_reply()) Skipped 1 previous similar message [1414852.034023] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1414852.042467] Lustre: Skipped 1 previous similar message [1414852.047820] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1415453.072635] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1415453.081078] Lustre: Skipped 1 previous similar message [1415453.086428] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1415453.095125] Lustre: Skipped 1 previous similar message [1416054.116283] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1416054.124721] Lustre: Skipped 1 previous similar message [1416054.130077] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1416054.138799] Lustre: Skipped 1 previous similar message [1416077.286550] Lustre: fir-MDT0003: haven't heard from client 81fe6735-08d5-4 (at 10.50.10.1@o2ib2) in 227 seconds. I think it's dead, and I am evicting it. 
exp ffff9d1092ae7800, cur 1598317498 expire 1598317348 last 1598317271 [1416655.159581] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1416655.168018] Lustre: Skipped 1 previous similar message [1416655.173373] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1416655.182076] Lustre: Skipped 1 previous similar message [1417256.202923] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1417256.211365] Lustre: Skipped 1 previous similar message [1417256.216709] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1417256.225436] Lustre: Skipped 1 previous similar message [1417857.246581] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1417857.255029] Lustre: Skipped 1 previous similar message [1417857.260549] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1417857.269248] Lustre: Skipped 1 previous similar message [1418458.330226] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1418458.338728] Lustre: Skipped 1 previous similar message [1418458.344141] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1418458.352842] Lustre: Skipped 1 previous similar message [1419059.376898] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1419059.385341] Lustre: Skipped 1 previous similar message [1419059.390693] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1419059.399392] Lustre: Skipped 1 previous similar message [1419660.420769] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1419660.429214] Lustre: Skipped 1 previous similar message [1419660.434563] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1419660.443271] Lustre: Skipped 1 previous similar message [1420261.464753] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1420261.473193] Lustre: Skipped 1 previous similar message [1420261.478553] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1420261.487256] Lustre: Skipped 1 previous similar message [1420862.508678] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1420862.517125] Lustre: Skipped 1 previous similar message [1420862.522523] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1420862.531218] Lustre: Skipped 1 previous similar message [1421463.552806] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1421463.561256] Lustre: Skipped 1 previous similar message [1421463.566636] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1421463.575372] Lustre: Skipped 1 previous similar message [1422064.596824] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1422064.605284] Lustre: Skipped 1 previous similar message [1422064.610638] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1422064.619344] Lustre: Skipped 2 previous similar messages [1422453.453446] Lustre: fir-MDT0003: haven't heard from client 81fe6735-08d5-4 (at 10.50.10.1@o2ib2) in 227 seconds. I think it's dead, and I am evicting it. 
exp ffff9ce87a5b6400, cur 1598323874 expire 1598323724 last 1598323647 [1422665.641330] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1422665.649768] Lustre: Skipped 1 previous similar message [1422665.655116] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1422665.663816] Lustre: Skipped 1 previous similar message [1423266.685582] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1423266.694028] Lustre: Skipped 1 previous similar message [1423266.699432] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1423266.708145] Lustre: Skipped 1 previous similar message [1423867.730025] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1423867.738470] Lustre: Skipped 1 previous similar message [1423867.743823] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1423867.752532] Lustre: Skipped 1 previous similar message [1424468.774611] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1424468.783059] Lustre: Skipped 1 previous similar message [1424468.788403] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1424468.797124] Lustre: Skipped 1 previous similar message [1425069.859385] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1425069.867873] Lustre: Skipped 1 previous similar message [1425069.873304] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1425069.882018] Lustre: Skipped 1 previous similar message [1425670.910984] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1425670.919427] Lustre: Skipped 1 previous similar message [1425670.924765] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1425670.933460] Lustre: Skipped 1 previous similar message [1426029.410880] LustreError: 121889:0:(ldlm_lockd.c:2324:ldlm_cancel_handler()) ldlm_cancel from 10.50.10.1@o2ib2 arrived at 1598327449 with bad export cookie 18282351244687989471 [1426255.557533] Lustre: fir-MDT0003: haven't heard from client 81fe6735-08d5-4 (at 10.50.10.1@o2ib2) in 227 seconds. I think it's dead, and I am evicting it. 
exp ffff9d20f9a6cc00, cur 1598327676 expire 1598327526 last 1598327449 [1426271.955483] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1426271.963928] Lustre: Skipped 1 previous similar message [1426271.969293] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1426271.977999] Lustre: Skipped 2 previous similar messages [1426873.000263] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1426873.008717] Lustre: Skipped 1 previous similar message [1426873.014066] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1426873.022766] Lustre: Skipped 1 previous similar message [1427474.045040] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1427474.053492] Lustre: Skipped 1 previous similar message [1427474.058854] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1427474.067587] Lustre: Skipped 1 previous similar message [1428075.089954] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1428075.098398] Lustre: Skipped 1 previous similar message [1428075.103748] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1428075.112443] Lustre: Skipped 1 previous similar message [1428676.153860] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1428676.162329] Lustre: Skipped 1 previous similar message [1428676.167715] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1428676.176450] Lustre: Skipped 1 previous similar message [1429277.198451] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1429277.206905] Lustre: Skipped 1 previous similar message [1429277.212255] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1429277.220955] Lustre: Skipped 1 previous similar message [1429878.252425] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1429878.260872] Lustre: Skipped 1 previous similar message [1429878.266225] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1429878.274936] Lustre: Skipped 1 previous similar message [1430479.297260] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1430479.305758] Lustre: Skipped 1 previous similar message [1430479.311122] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1430479.319841] Lustre: Skipped 1 previous similar message [1430999.496401] Lustre: DEBUG MARKER: Mon Aug 24 22:13:39 2020 [1431080.358458] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1431080.366929] Lustre: Skipped 1 previous similar message [1431080.372359] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1431080.381102] Lustre: Skipped 1 previous similar message [1431681.414378] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1431681.422823] Lustre: Skipped 1 previous similar message [1431681.428211] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1431681.436946] Lustre: Skipped 1 previous similar message [1431793.769432] LNetError: 20140:0:(o2iblnd_cb.c:3351:kiblnd_check_txs_locked()) Timed out tx: active_txs, 0 seconds [1431793.779784] LNetError: 20140:0:(o2iblnd_cb.c:3426:kiblnd_check_conns()) Timed out RDMA with 10.0.10.3@o2ib7 (105): c: 8, oc: 0, rc: 8 
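
The eviction records above ("haven't heard from client ... I think it's dead, and I am evicting it. exp ..., cur ... expire ... last ...") each carry three Unix epochs: the quoted "in 227 seconds" is simply cur - last, and in every record in this log expire sits 150 s before cur, which appears to be this server's dead-client window (an observation from this log, not a statement about Lustre defaults). A minimal parsing sketch, assuming this exact message layout (the helper name eviction_gap is mine, not a Lustre tool):

```python
import re
from datetime import datetime, timezone

# Tail of a Lustre eviction record as it appears in this console log:
#   "... evicting it. exp <cookie>, cur <epoch> expire <epoch> last <epoch>"
EVICT_RE = re.compile(r"cur (\d+) expire (\d+) last (\d+)")

def eviction_gap(record: str):
    """Return (cur as UTC datetime, seconds since last heard) for one record, or None."""
    m = EVICT_RE.search(record)
    if not m:
        return None
    cur, _expire, last = (int(g) for g in m.groups())
    return datetime.fromtimestamp(cur, tz=timezone.utc), cur - last

# Taken verbatim from the 10.50.10.1@o2ib2 eviction above:
print(eviction_gap("exp ffff9d20f9a6cc00, cur 1598327676 expire 1598327526 last 1598327449"))
# -> (2020-08-25 03:54:36 UTC, 227)
```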
[1431914.714992] Lustre: fir-MDT0003: haven't heard from client f30ab81d-9508-4319-d2cd-8fed5307bd23 (at 10.0.10.3@o2ib7) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9cf0fe85c400, cur 1598333335 expire 1598333185 last 1598333108
[1432282.458511] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting
[1432282.467084] Lustre: Skipped 1 previous similar message
[1432282.472447] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2)
[1432282.481152] Lustre: Skipped 1 previous similar message
[1432883.502504] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting
[1432883.510976] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2)
[1433484.535701] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting
[1433484.544173] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2)
[1434085.569000] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting
[1434085.577476] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2)
[1434085.586199] Lustre: Skipped 1 previous similar message
[1434092.635748] LNet: Service thread pid 21302 was inactive for 200.25s. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes:
[1434092.652861] Pid: 21302, comm: mdt00_100 3.10.0-957.27.2.el7_lustre.pl2.x86_64 #1 SMP Thu Nov 7 15:26:16 PST 2019
[1434092.663208] Call Trace:
[1434092.665848] [] ldlm_completion_ast+0x430/0x860 [ptlrpc]
[1434092.672960] [] ldlm_cli_enqueue_local+0x231/0x830 [ptlrpc]
[1434092.680328] [] mdt_object_local_lock+0x50b/0xb20 [mdt]
[1434092.687340] [] mdt_object_lock_internal+0x70/0x360 [mdt]
[1434092.694520] [] mdt_getattr_name_lock+0x11d/0x1c30 [mdt]
[1434092.701623] [] mdt_intent_getattr+0x2b5/0x480 [mdt]
[1434092.708365] [] mdt_intent_policy+0x435/0xd80 [mdt]
[1434092.715020] [] ldlm_lock_enqueue+0x356/0xa20 [ptlrpc]
[1434092.721955] [] ldlm_handle_enqueue0+0xa56/0x15f0 [ptlrpc]
[1434092.729235] [] tgt_enqueue+0x62/0x210 [ptlrpc]
[1434092.735571] [] tgt_request_handle+0xada/0x1570 [ptlrpc]
[1434092.742685] [] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc]
[1434092.750625] [] ptlrpc_main+0xb34/0x1470 [ptlrpc]
[1434092.757171] [] kthread+0xd1/0xe0
[1434092.762281] [] ret_from_fork_nospec_begin+0xe/0x21
[1434092.768944] [] 0xffffffffffffffff
[1434092.774202] LustreError: dumping log to /tmp/lustre-log.1598335513.21302
[1434192.385486] LustreError: 21302:0:(ldlm_request.c:130:ldlm_expired_completion_wait()) ### lock timed out (enqueued at 1598335312, 300s ago); not entering recovery in server code, just going back to sleep ns: mdt-fir-MDT0003_UUID lock: ffff9ce12a61e540/0xfdb7f5982407b876 lrc: 3/1,0 mode: --/PR res: [0x28004567a:0x19752:0x0].0x0 bits 0x12/0x0 rrc: 27 type: IBT flags: 0x40210000000000 nid: local remote: 0x0 expref: -99 pid: 21302 timeout: 0 lvb_type: 0
[1434486.994495] Lustre: 21148:0:(service.c:1372:ptlrpc_at_send_early_reply()) @@@ Couldn't add any time (5/5), not sending early reply req@ffff9ce2295bf500 x1675976027713472/t0(0) o101->696f9381-3ccb-334e-6da3-bb0cf44f216a@10.0.10.3@o2ib7:157/0 lens 576/3264 e 23 to 0 dl 1598335912 ref 2 fl Interpret:/0/0 rc 0/0
[1434686.608137] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting
[1434686.616586] Lustre: Skipped 1 previous similar message
[1434686.621938] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 
10.50.7.64@o2ib2) [1434686.630643] Lustre: Skipped 1 previous similar message [1435287.652470] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1435287.660933] Lustre: Skipped 1 previous similar message [1435287.666357] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1435287.675076] Lustre: Skipped 1 previous similar message [1435888.696506] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1435888.704954] Lustre: Skipped 1 previous similar message [1435888.710313] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1435888.719019] Lustre: Skipped 1 previous similar message [1436489.740849] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1436489.749303] Lustre: Skipped 1 previous similar message [1436489.754653] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1436489.763365] Lustre: Skipped 1 previous similar message [1436983.852382] Lustre: fir-MDT0003: haven't heard from client c4d9031c-85fb-4 (at 10.50.7.65@o2ib2) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9d10929bbc00, cur 1598338404 expire 1598338254 last 1598338177 [1437090.785436] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1437090.793882] Lustre: Skipped 1 previous similar message [1437090.799222] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1437090.807920] Lustre: Skipped 2 previous similar messages [1437691.829920] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1437691.838374] Lustre: Skipped 1 previous similar message [1437691.843724] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1437691.852453] Lustre: Skipped 1 previous similar message [1438292.874296] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1438292.882777] Lustre: Skipped 1 previous similar message [1438292.930764] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1438292.939500] Lustre: Skipped 1 previous similar message [1438894.007816] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1438894.016283] Lustre: Skipped 1 previous similar message [1438894.021978] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1438894.030700] Lustre: Skipped 1 previous similar message [1439495.091978] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1439495.100419] Lustre: Skipped 1 previous similar message [1439495.105807] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1439495.114566] Lustre: Skipped 1 previous similar message [1440096.138281] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1440096.146733] Lustre: Skipped 1 previous similar message [1440096.152103] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1440096.160807] Lustre: Skipped 1 previous similar message [1440697.182437] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1440697.190882] Lustre: Skipped 3 previous similar messages [1440697.196317] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1440697.205017] Lustre: Skipped 3 previous similar messages [1441298.226673] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) 
reconnecting [1441298.235112] Lustre: Skipped 1 previous similar message [1441298.240467] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1441298.249172] Lustre: Skipped 1 previous similar message [1441899.271212] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1441899.279702] Lustre: Skipped 1 previous similar message [1441899.285131] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1441899.293845] Lustre: Skipped 1 previous similar message [1442500.375331] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1442500.383799] Lustre: Skipped 1 previous similar message [1442500.411853] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1442500.420565] Lustre: Skipped 1 previous similar message [1443101.499766] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1443101.508212] Lustre: Skipped 1 previous similar message [1443101.546240] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1443101.554970] Lustre: Skipped 1 previous similar message [1443193.021839] Lustre: fir-MDT0003: haven't heard from client e9fcf4f3-512d-4 (at 10.50.12.13@o2ib2) in 227 seconds. I think it's dead, and I am evicting it. exp ffff9d1092b1b400, cur 1598344613 expire 1598344463 last 1598344386 [1443702.664158] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1443702.672622] Lustre: Skipped 1 previous similar message [1443702.678346] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1443702.687097] Lustre: Skipped 2 previous similar messages [1444303.713344] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1444303.721800] Lustre: Skipped 1 previous similar message [1444303.727274] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1444303.735998] Lustre: Skipped 1 previous similar message [1444904.808664] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1444904.817124] Lustre: Skipped 1 previous similar message [1444904.833302] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1444904.842011] Lustre: Skipped 1 previous similar message [1445505.956777] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1445505.965220] Lustre: Skipped 1 previous similar message [1445505.981362] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1445505.990075] Lustre: Skipped 1 previous similar message [1446107.018781] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1446107.027258] Lustre: Skipped 1 previous similar message [1446107.032736] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1446107.041496] Lustre: Skipped 1 previous similar message [1446708.122739] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1446708.131200] Lustre: Skipped 1 previous similar message [1446708.136546] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1446708.145249] Lustre: Skipped 1 previous similar message [1447309.166784] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1447309.175239] Lustre: Skipped 1 previous similar message [1447309.180622] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) 
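
Nearly all of the surrounding traffic is a single client, 34b6d393-8480-4 at 10.50.7.64@o2ib2, reconnecting and being restored roughly every 601 seconds. When triaging a console log like this one, it can help to collapse that repetition into a count and interval per NID; a rough sketch, assuming entries keep the "[<uptime>] Lustre: <target>: Client <uuid> (at <nid>) reconnecting" shape seen here (reconnect_intervals is a hypothetical helper, not a Lustre tool):

```python
import re
from collections import defaultdict
from statistics import median

# Shape of the reconnect lines in this log, e.g.
#   "[1441298.226673] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting"
RECONNECT_RE = re.compile(r"\[\s*(\d+\.\d+)\] Lustre: \S+: Client \S+ \(at ([^)]+)\) reconnecting")

def reconnect_intervals(log_text: str):
    """Count reconnect events per client NID and report the median gap in seconds."""
    stamps = defaultdict(list)
    for m in RECONNECT_RE.finditer(log_text):
        stamps[m.group(2)].append(float(m.group(1)))
    summary = {}
    for nid, ts in stamps.items():
        gaps = [b - a for a, b in zip(ts, ts[1:])]
        summary[nid] = (len(ts), round(median(gaps), 1) if gaps else None)
    return summary

# e.g. reconnect_intervals(open("console.log").read())
#      -> {'10.50.7.64@o2ib2': (<count>, ~601.0), ...}
```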
[1447309.189342] Lustre: Skipped 1 previous similar message [1447910.211018] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1447910.219496] Lustre: Skipped 1 previous similar message [1447910.265621] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1447910.274379] Lustre: Skipped 1 previous similar message [1448511.306912] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1448511.315391] Lustre: Skipped 1 previous similar message [1448511.320860] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1448511.329569] Lustre: Skipped 1 previous similar message [1449112.403813] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1449112.412298] Lustre: Skipped 1 previous similar message [1449112.428634] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1449112.437365] Lustre: Skipped 1 previous similar message [1449713.458756] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1449713.467213] Lustre: Skipped 1 previous similar message [1449713.472679] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1449713.481420] Lustre: Skipped 1 previous similar message [1450314.502533] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1450314.510977] Lustre: Skipped 1 previous similar message [1450314.516328] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1450314.525027] Lustre: Skipped 1 previous similar message [1450915.546745] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1450915.555218] Lustre: Skipped 1 previous similar message [1450915.560659] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1450915.569391] Lustre: Skipped 1 previous similar message [1451516.599785] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1451516.608255] Lustre: Skipped 1 previous similar message [1451516.615276] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1451516.624108] Lustre: Skipped 1 previous similar message [1452117.660128] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1452117.668594] Lustre: Skipped 1 previous similar message [1452117.674731] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1452117.683459] Lustre: Skipped 1 previous similar message [1452718.757190] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1452718.765633] Lustre: Skipped 1 previous similar message [1452718.784424] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1452718.793148] Lustre: Skipped 1 previous similar message [1453319.814337] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1453319.822811] Lustre: Skipped 1 previous similar message [1453319.828197] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1453319.836966] Lustre: Skipped 1 previous similar message [1453920.858674] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1453920.867143] Lustre: Skipped 1 previous similar message [1453920.885539] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1453920.894261] Lustre: Skipped 1 previous similar message [1454521.915430] Lustre: 
fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1454521.923896] Lustre: Skipped 1 previous similar message [1454521.929334] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1454521.938067] Lustre: Skipped 1 previous similar message [1455122.959842] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1455122.968289] Lustre: Skipped 1 previous similar message [1455122.986441] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1455122.995164] Lustre: Skipped 1 previous similar message [1455724.093628] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1455724.102097] Lustre: Skipped 1 previous similar message [1455724.107481] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1455724.116200] Lustre: Skipped 1 previous similar message [1456325.190762] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1456325.199221] Lustre: Skipped 1 previous similar message [1456325.204805] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1456325.213549] Lustre: Skipped 1 previous similar message [1456926.235604] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1456926.244046] Lustre: Skipped 1 previous similar message [1456926.249401] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1456926.258109] Lustre: Skipped 1 previous similar message [1457527.289619] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1457527.298093] Lustre: Skipped 1 previous similar message [1457527.304427] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1457527.313166] Lustre: Skipped 1 previous similar message [1458128.361547] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1458128.369998] Lustre: Skipped 1 previous similar message [1458128.375357] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1458128.384077] Lustre: Skipped 1 previous similar message [1458729.411770] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1458729.420228] Lustre: Skipped 1 previous similar message [1458729.425580] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1458729.434280] Lustre: Skipped 1 previous similar message [1459330.455926] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1459330.464391] Lustre: Skipped 1 previous similar message [1459330.469821] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1459330.478532] Lustre: Skipped 1 previous similar message [1459931.502746] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1459931.511194] Lustre: Skipped 1 previous similar message [1459931.523625] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1459931.532378] Lustre: Skipped 1 previous similar message [1460532.554339] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1460532.562791] Lustre: Skipped 1 previous similar message [1460532.585886] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1460532.594701] Lustre: Skipped 1 previous similar message [1461133.676540] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1461133.684996] 
Lustre: Skipped 1 previous similar message [1461133.690358] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1461133.699063] Lustre: Skipped 1 previous similar message [1461734.722778] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1461734.731263] Lustre: Skipped 1 previous similar message [1461734.736920] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1461734.745670] Lustre: Skipped 1 previous similar message [1462335.778160] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1462335.786635] Lustre: Skipped 1 previous similar message [1462335.792480] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1462335.801222] Lustre: Skipped 1 previous similar message [1462936.842007] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1462936.850457] Lustre: Skipped 1 previous similar message [1462936.855906] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1462936.864629] Lustre: Skipped 1 previous similar message [1463537.886284] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1463537.894727] Lustre: Skipped 1 previous similar message [1463537.900091] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1463537.908795] Lustre: Skipped 1 previous similar message [1464139.022507] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1464139.030966] Lustre: Skipped 1 previous similar message [1464139.036313] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1464139.045018] Lustre: Skipped 1 previous similar message [1464740.066902] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1464740.075384] Lustre: Skipped 1 previous similar message [1464740.090504] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1464740.099223] Lustre: Skipped 1 previous similar message [1465341.121962] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1465341.130424] Lustre: Skipped 1 previous similar message [1465341.145235] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1465341.153975] Lustre: Skipped 1 previous similar message [1465942.265068] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1465942.273526] Lustre: Skipped 1 previous similar message [1465942.278970] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1465942.287709] Lustre: Skipped 1 previous similar message [1466543.308904] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1466543.317354] Lustre: Skipped 1 previous similar message [1466543.322720] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1466543.331454] Lustre: Skipped 1 previous similar message [1467144.353141] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1467144.361601] Lustre: Skipped 1 previous similar message [1467144.366961] Lustre: fir-MDT0003: Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2) [1467144.375677] Lustre: Skipped 1 previous similar message [1467745.411282] Lustre: fir-MDT0003: Client 34b6d393-8480-4 (at 10.50.7.64@o2ib2) reconnecting [1467745.419743] Lustre: Skipped 1 previous similar message [1467745.457704] Lustre: fir-MDT0003: 
Connection restored to 34b6d393-8480-4 (at 10.50.7.64@o2ib2)
[1467745.466417] Lustre: Skipped 1 previous similar message
[1467848.143660] SysRq : Trigger a crash
[1467848.147415] BUG: unable to handle kernel NULL pointer dereference at (null)
[1467848.155477] IP: [] sysrq_handle_crash+0x16/0x20
[1467848.161791] PGD 3feeee4067 PUD 3e95aeb067 PMD 0
[1467848.166675] Oops: 0002 [#1] SMP
[1467848.170143] Modules linked in: osp(OE) mdd(OE) lod(OE) mdt(OE) lfsck(OE) mgc(OE) osd_ldiskfs(OE) lquota(OE) ldiskfs(OE) lustre(OE) lmv(OE) mdc(OE) osc(OE) lov(OE) fid(OE) fld(OE) ko2iblnd(OE) ptlrpc(OE) obdclass(OE) lnet(OE) libcfs(OE) rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver nfs lockd grace fscache rdma_ucm(OE) ib_ucm(OE) rdma_cm(OE) iw_cm(OE) ib_ipoib(OE) ib_cm(OE) ib_umad(OE) mlx4_en(OE) mlx4_ib(OE) mlx4_core(OE) dell_rbu sunrpc vfat fat dm_round_robin amd64_edac_mod edac_mce_amd kvm_amd kvm irqbypass crc32_pclmul ghash_clmulni_intel dm_multipath dcdbas aesni_intel lrw ses gf128mul glue_helper ablk_helper enclosure dm_mod ipmi_si pcspkr cryptd sg ipmi_devintf ccp k10temp i2c_piix4 ipmi_msghandler acpi_power_meter ip_tables ext4 mbcache jbd2 sd_mod crc_t10dif crct10dif_generic mlx5_ib(OE)
[1467848.242754] ib_uverbs(OE) ib_core(OE) i2c_algo_bit drm_kms_helper syscopyarea sysfillrect mlx5_core(OE) sysimgblt fb_sys_fops mlxfw(OE) devlink ttm ahci mpt3sas(OE) libahci crct10dif_pclmul drm tg3 mlx_compat(OE) crct10dif_common libata raid_class crc32c_intel ptp megaraid_sas scsi_transport_sas drm_panel_orientation_quirks pps_core
[1467848.271727] CPU: 26 PID: 93797 Comm: bash Kdump: loaded Tainted: G OE ------------ 3.10.0-957.27.2.el7_lustre.pl2.x86_64 #1
[1467848.284149] Hardware name: Dell Inc. PowerEdge R6415/07YXFK, BIOS 1.12.2 11/15/2019
[1467848.291994] task: ffff9d00f923a080 ti: ffff9cf44a598000 task.ti: ffff9cf44a598000
[1467848.299655] RIP: 0010:[] [] sysrq_handle_crash+0x16/0x20
[1467848.308400] RSP: 0018:ffff9cf44a59be58 EFLAGS: 00010246
[1467848.313901] RAX: ffffffff87864430 RBX: ffffffff880e4f80 RCX: 0000000000000000
[1467848.321211] RDX: 0000000000000000 RSI: ffff9d10ff793898 RDI: 0000000000000063
[1467848.328521] RBP: ffff9cf44a59be58 R08: ffffffff883e38bc R09: ffffffff8845000b
[1467848.335827] R10: 00000000000011e2 R11: 00000000000011e1 R12: 0000000000000063
[1467848.343133] R13: 0000000000000000 R14: 0000000000000007 R15: 0000000000000000
[1467848.350441] FS: 00007f9d181ef740(0000) GS:ffff9d10ff780000(0000) knlGS:0000000000000000
[1467848.358700] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[1467848.364619] CR2: 0000000000000000 CR3: 0000003545bc4000 CR4: 00000000003407e0
[1467848.371925] Call Trace:
[1467848.374558] [] __handle_sysrq+0x10d/0x170
[1467848.380394] [] write_sysrq_trigger+0x28/0x40
[1467848.386489] [] proc_reg_write+0x40/0x80
[1467848.392163] [] vfs_write+0xc0/0x1f0
[1467848.397476] [] SyS_write+0x7f/0xf0
[1467848.402705] [] system_call_fastpath+0x22/0x27
[1467848.408890] Code: eb 9b 45 01 f4 45 39 65 34 75 e5 4c 89 ef e8 e2 f7 ff ff eb db 66 66 66 66 90 55 48 89 e5 c7 05 91 31 7e 00 01 00 00 00 0f ae f8 04 25 00 00 00 00 01 5d c3 66 66 66 66 90 55 31 c0 c7 05 0e
[1467848.429593] RIP [] sysrq_handle_crash+0x16/0x20
[1467848.435973] RSP 
[1467848.439640] CR2: 0000000000000000
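
The final entries record a deliberate crash rather than a spontaneous failure: "SysRq : Trigger a crash" followed by a NULL-pointer write in sysrq_handle_crash(), reached via write_sysrq_trigger -> __handle_sysrq, with Kdump reported as loaded. That is the signature of an operator crashing the node (typically "echo c > /proc/sysrq-trigger") to capture a vmcore, presumably for the hung mdt00_100 thread and lock timeout seen earlier. A small sketch that flags this pattern when scanning console logs (classify_crash is my naming; the marker strings are simply the ones visible above):

```python
# Rough classifier for the tail of a console log like this one.
SYSRQ_MARKERS = (
    "SysRq : Trigger a crash",
    "BUG: unable to handle kernel NULL pointer dereference",
    "sysrq_handle_crash",
)

def classify_crash(log_text: str) -> str:
    """Distinguish an operator-triggered sysrq-c crash from any other oops."""
    if all(marker in log_text for marker in SYSRQ_MARKERS):
        return "operator-triggered crash (sysrq-c); expect a matching kdump vmcore"
    if "BUG: unable to handle kernel" in log_text:
        return "unhandled kernel oops"
    return "no oops found"

# e.g. classify_crash(open("console.log").read())
```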