crash 6.1.0-5.el6
Copyright (C) 2002-2012  Red Hat, Inc.
Copyright (C) 2004, 2005, 2006, 2010  IBM Corporation
Copyright (C) 1999-2006  Hewlett-Packard Co
Copyright (C) 2005, 2006, 2011, 2012  Fujitsu Limited
Copyright (C) 2006, 2007  VA Linux Systems Japan K.K.
Copyright (C) 2005, 2011  NEC Corporation
Copyright (C) 1999, 2002, 2007  Silicon Graphics, Inc.
Copyright (C) 1999, 2000, 2001, 2002  Mission Critical Linux, Inc.
This program is free software, covered by the GNU General Public License,
and you are welcome to change it and/or distribute copies of it under
certain conditions.  Enter "help copying" to see the conditions.
This program has absolutely no warranty.  Enter "help warranty" for details.

GNU gdb (GDB) 7.3.1
Copyright (C) 2011 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Type "show copying" and "show warranty" for details.
This GDB was configured as "x86_64-unknown-linux-gnu"...

please wait... (gathering kmem slab cache data)
please wait... (gathering module symbol data)
please wait... (gathering task table data)
please wait... (determining panic task)

      KERNEL: /usr/lib/debug/lib/modules/2.6.32-431.1.2.el6.Bull.44.x86_64/vmlinux
    DUMPFILE: vmcore [PARTIAL DUMP]
        CPUS: 8
        DATE: Fri Jul 4 15:41:01 2014
      UPTIME: 8 days, 07:01:02
LOAD AVERAGE: 0.19, 0.09, 0.02
       TASKS: 357
    NODENAME: lama10
     RELEASE: 2.6.32-431.1.2.el6.Bull.44.x86_64
     VERSION: #1 SMP Tue Jan 21 01:58:34 CET 2014
     MACHINE: x86_64 (2800 Mhz)
      MEMORY: 24 GB
       PANIC: "Kernel panic - not syncing: LBUG"
         PID: 12957
     COMMAND: "mount.lustre"
        TASK: ffff88063a70d540  [THREAD_INFO: ffff88062964a000]
         CPU: 2
       STATE: TASK_RUNNING (PANIC)

crash>
crash> foreach bt

PID: 0      TASK: ffffffff81a8d020  CPU: 0   COMMAND: "swapper"
 #0 [ffff880028207e90] crash_nmi_callback at ffffffff81030096
 #1 [ffff880028207ea0] notifier_call_chain at ffffffff8152e3b5
 #2 [ffff880028207ee0] atomic_notifier_call_chain at ffffffff8152e41a
 #3 [ffff880028207ef0] notify_die at ffffffff810a052e
 #4 [ffff880028207f20] do_nmi at ffffffff8152c07b
 #5 [ffff880028207f50] nmi at ffffffff8152b940
    [exception RIP: intel_idle+177]
    RIP: ffffffff812e1041  RSP: ffffffff81a01e38  RFLAGS: 00000046
    RAX: 0000000000000010  RBX: 0000000000000004  RCX: 0000000000000001
    RDX: 0000000000000000  RSI: ffffffff81a01fd8  RDI: ffffffff81a903c0
    RBP: ffffffff81a01ea8   R8: 0000000000000000   R9: 0000000000000320
    R10: 0000000000000002  R11: 0000000000000000  R12: 0000000000000010
    R13: 137db84433e58105  R14: 0000000000000002  R15: 0000000000000000
    ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
--- <NMI exception stack> ---
 #6 [ffffffff81a01e38] intel_idle at ffffffff812e1041
 #7 [ffffffff81a01eb0] cpuidle_idle_call at ffffffff81427077
 #8 [ffffffff81a01ed0] cpu_idle at ffffffff81009fc6

PID: 0      TASK: ffff88063b6cc040  CPU: 1   COMMAND: "swapper"
 #0 [ffff880028227e90] crash_nmi_callback at ffffffff81030096
 #1 [ffff880028227ea0] notifier_call_chain at ffffffff8152e3b5
 #2 [ffff880028227ee0] atomic_notifier_call_chain at ffffffff8152e41a
 #3 [ffff880028227ef0] notify_die at ffffffff810a052e
 #4 [ffff880028227f20] do_nmi at ffffffff8152c07b
 #5 [ffff880028227f50] nmi at ffffffff8152b940
    [exception RIP: intel_idle+177]
    RIP: ffffffff812e1041  RSP: ffff88033ac3de68  RFLAGS: 00000046
    RAX: 0000000000000020  RBX: 0000000000000008  RCX: 0000000000000001
    RDX: 0000000000000000  RSI: ffff88033ac3dfd8  RDI: ffffffff81a903c0
    RBP: ffff88033ac3ded8   R8: 0000000000000000   R9: 00000000000000c8
    R10: 0000000000000002  R11: 0000000000000000  R12: 0000000000000020
    R13: 137db84433e54305  R14: 0000000000000003  R15: 0000000000000001
    ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
--- <NMI exception stack> ---
 #6 [ffff88033ac3de68] intel_idle at ffffffff812e1041
 #7 [ffff88033ac3dee0] cpuidle_idle_call at ffffffff81427077
 #8 [ffff88033ac3df00] cpu_idle at ffffffff81009fc6

PID: 0      TASK: ffff88063b6cca80  CPU: 2   COMMAND: "swapper"
 #0 [ffff88033ac6fe38] schedule at ffffffff81528762
 #1 [ffff88033ac6ff00] cpu_idle at ffffffff81009ffe

PID: 0      TASK: ffff88063b6cd4c0  CPU: 3   COMMAND: "swapper"
 #0 [ffff880028267e90] crash_nmi_callback at ffffffff81030096
 #1 [ffff880028267ea0] notifier_call_chain at ffffffff8152e3b5
 #2 [ffff880028267ee0] atomic_notifier_call_chain at ffffffff8152e41a
 #3 [ffff880028267ef0] notify_die at ffffffff810a052e
 #4 [ffff880028267f20] do_nmi at ffffffff8152c07b
 #5 [ffff880028267f50] nmi at ffffffff8152b940
    [exception RIP: intel_idle+177]
    RIP: ffffffff812e1041  RSP: ffff88033ac81e68  RFLAGS: 00000046
    RAX: 0000000000000020  RBX: 0000000000000008  RCX: 0000000000000001
    RDX: 0000000000000000  RSI: ffff88033ac81fd8  RDI: ffffffff81a903c0
    RBP: ffff88033ac81ed8   R8: 0000000000000000   R9: 00000000000000c8
    R10: 00028b9e7dc91394  R11: 0000000000000000  R12: 0000000000000020
    R13: 137db84433e5f560  R14: 0000000000000003  R15: 0000000000000003
    ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
--- <NMI exception stack> ---
 #6 [ffff88033ac81e68] intel_idle at ffffffff812e1041
 #7 [ffff88033ac81ee0] cpuidle_idle_call at ffffffff81427077
 #8 [ffff88033ac81f00] cpu_idle at ffffffff81009fc6

PID: 0      TASK: ffff88063b714080  CPU: 4   COMMAND: "swapper"
 #0 [ffff88033ac8de38] schedule at ffffffff81528762
 #1 [ffff88033ac8df00] cpu_idle at ffffffff81009ffe

PID: 0      TASK: ffff88063b714ac0  CPU: 5   COMMAND: "swapper"
 #0 [ffff88034ac27e90] crash_nmi_callback at ffffffff81030096
 #1 [ffff88034ac27ea0] notifier_call_chain at ffffffff8152e3b5
 #2 [ffff88034ac27ee0] atomic_notifier_call_chain at ffffffff8152e41a
 #3 [ffff88034ac27ef0] notify_die at ffffffff810a052e
 #4 [ffff88034ac27f20] do_nmi at ffffffff8152c07b
 #5 [ffff88034ac27f50] nmi at ffffffff8152b940
    [exception RIP: intel_idle+177]
    RIP: ffffffff812e1041  RSP: ffff88033ac9be68  RFLAGS: 00000046
    RAX: 0000000000000000  RBX: 0000000000000002  RCX: 0000000000000001
    RDX: 0000000000000000  RSI: ffff88033ac9bfd8  RDI: ffffffff81a903c0
    RBP: ffff88033ac9bed8   R8: 0000000000000280   R9: 0000000000000296
    R10: 0000000000000002  R11: 0000000000000000  R12: 0000000000000000
    R13: 137db84433eae898  R14: 0000000000000001  R15: 0000000000000005
    ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
--- <NMI exception stack> ---
 #6 [ffff88033ac9be68] intel_idle at ffffffff812e1041
 #7 [ffff88033ac9bee0] cpuidle_idle_call at ffffffff81427077
 #8 [ffff88033ac9bf00] cpu_idle at ffffffff81009fc6

PID: 0      TASK: ffff88063b715500  CPU: 6   COMMAND: "swapper"
 #0 [ffff88034ac47e90] crash_nmi_callback at ffffffff81030096
 #1 [ffff88034ac47ea0] notifier_call_chain at ffffffff8152e3b5
 #2 [ffff88034ac47ee0] atomic_notifier_call_chain at ffffffff8152e41a
 #3 [ffff88034ac47ef0] notify_die at ffffffff810a052e
 #4 [ffff88034ac47f20] do_nmi at ffffffff8152c07b
 #5 [ffff88034ac47f50] nmi at ffffffff8152b940
    [exception RIP: intel_idle+177]
    RIP: ffffffff812e1041  RSP: ffff88033acc5e68  RFLAGS: 00000046
    RAX: 0000000000000000  RBX: 0000000000000002  RCX: 0000000000000001
    RDX: 0000000000000000  RSI: ffff88033acc5fd8  RDI: ffffffff81a903c0
    RBP: ffff88033acc5ed8   R8: 00000000000002a5   R9: 00000000000002ad
    R10: 0000000000000002  R11: 0000000000000000  R12: 0000000000000000
    R13: 137db84433eaa845  R14: 0000000000000001  R15: 0000000000000006
    ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
--- <NMI exception stack> ---
 #6 [ffff88033acc5e68] intel_idle at ffffffff812e1041
 #7 [ffff88033acc5ee0] cpuidle_idle_call at ffffffff81427077
 #8 [ffff88033acc5f00] cpu_idle at ffffffff81009fc6

PID: 0      TASK: ffff88063b7be0c0  CPU: 7   COMMAND: "swapper"
 #0 [ffff88034ac67e90] crash_nmi_callback at ffffffff81030096
 #1 [ffff88034ac67ea0] notifier_call_chain at ffffffff8152e3b5
 #2 [ffff88034ac67ee0] atomic_notifier_call_chain at ffffffff8152e41a
 #3 [ffff88034ac67ef0] notify_die at ffffffff810a052e
 #4 [ffff88034ac67f20] do_nmi at ffffffff8152c07b
 #5 [ffff88034ac67f50] nmi at ffffffff8152b940
    [exception RIP: intel_idle+177]
    RIP: ffffffff812e1041  RSP: ffff88033acd3e68  RFLAGS: 00000046
    RAX: 0000000000000000  RBX: 0000000000000002  RCX: 0000000000000001
    RDX: 0000000000000000  RSI: ffff88033acd3fd8  RDI: ffffffff81a903c0
    RBP: ffff88033acd3ed8   R8: 00000000000632d4   R9: 00000000000632dc
    R10: 00028b9e7cd25145  R11: 0000000000000000  R12: 0000000000000000
    R13: 137db84433eaa19f  R14: 0000000000000001  R15: 0000000000000007
    ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
--- <NMI exception stack> ---
 #6 [ffff88033acd3e68] intel_idle at ffffffff812e1041
 #7 [ffff88033acd3ee0] cpuidle_idle_call at ffffffff81427077
 #8 [ffff88033acd3f00] cpu_idle at ffffffff81009fc6

PID: 1      TASK: ffff88033b7fd4c0  CPU: 2   COMMAND: "init"
 #0 [ffff88063b6c9848] schedule at ffffffff81528762
 #1 [ffff88063b6c9910] schedule_hrtimeout_range at ffffffff8152a2bd
 #2 [ffff88063b6c99b0] poll_schedule_timeout at ffffffff811a03a9
 #3 [ffff88063b6c99d0] do_select at ffffffff811a146c
 #4 [ffff88063b6c9d70] core_sys_select at ffffffff811a173a
 #5 [ffff88063b6c9f10] sys_select at ffffffff811a1ac7
 #6 [ffff88063b6c9f80] system_call_fastpath at ffffffff8100b072
    RIP: 00007f5be59a05c3  RSP: 00007fffe47cd210  RFLAGS: 00000206
    RAX: 0000000000000017  RBX: ffffffff8100b072  RCX: 0000000000000002
    RDX: 00007fffe47cd420  RSI: 00007fffe47cd4a0  RDI: 000000000000000a
    RBP: 0000000000000000   R8: 0000000000000000   R9: 0000000000000200
    R10: 00007fffe47cd3a0  R11: 0000000000000246  R12: 00007fffe47cd4a0
    R13: 00007fffe47cd420  R14: 00007fffe47cd3a0  R15: 00007fffe47cd55f
    ORIG_RAX: 0000000000000017  CS: 0033  SS: 002b

PID: 2      TASK: ffff88033b7fca80  CPU: 3   COMMAND: "kthreadd"
 #0 [ffff88033ac01e10] schedule at ffffffff81528762
 #1 [ffff88033ac01ed8] kthreadd at ffffffff81099e15
 #2 [ffff88033ac01f48] kernel_thread at ffffffff8100c20a

PID: 3      TASK: ffff88033b7fc040  CPU: 0   COMMAND: "migration/0"
 #0 [ffff88033ac25db0] schedule at ffffffff81528762
 #1 [ffff88033ac25e78] migration_thread at ffffffff810683d5
 #2 [ffff88033ac25ee8] kthread at ffffffff81099eb6
 #3 [ffff88033ac25f48] kernel_thread at ffffffff8100c20a

PID: 4      TASK: ffff88033ac27500  CPU: 0   COMMAND: "ksoftirqd/0"
 #0 [ffff88033ac29de0] schedule at ffffffff81528762
 #1 [ffff88033ac29ea8] ksoftirqd at ffffffff81079485
 #2 [ffff88033ac29ee8] kthread at ffffffff81099eb6
 #3 [ffff88033ac29f48] kernel_thread at ffffffff8100c20a

PID: 5      TASK: ffff88033ac26ac0  CPU: 0   COMMAND: "migration/0"
 #0 [ffff88033ac2bd30] schedule at ffffffff81528762
 #1 [ffff88033ac2bdf8] cpu_stopper_thread at ffffffff810d3a55
 #2 [ffff88033ac2bee8] kthread at ffffffff81099eb6
 #3 [ffff88033ac2bf48] kernel_thread at ffffffff8100c20a

PID: 6      TASK: ffff88033ac26080  CPU: 1   COMMAND: "migration/1"
 #0 [ffff88033ac2fdb0] schedule at ffffffff81528762
 #1 [ffff88033ac2fe78] migration_thread at ffffffff810683d5
 #2 [ffff88033ac2fee8] kthread at ffffffff81099eb6
 #3 [ffff88033ac2ff48] kernel_thread at ffffffff8100c20a

PID: 7      TASK: ffff88033ac31540  CPU: 1   COMMAND: "migration/1"
 #0 [ffff88033ac33d30] schedule at ffffffff81528762
 #1 [ffff88033ac33df8]
cpu_stopper_thread at ffffffff810d3a55 #2 [ffff88033ac33ee8] kthread at ffffffff81099eb6 #3 [ffff88033ac33f48] kernel_thread at ffffffff8100c20a PID: 8 TASK: ffff88033ac30b00 CPU: 1 COMMAND: "ksoftirqd/1" #0 [ffff88033ac39de0] schedule at ffffffff81528762 #1 [ffff88033ac39ea8] ksoftirqd at ffffffff81079485 #2 [ffff88033ac39ee8] kthread at ffffffff81099eb6 #3 [ffff88033ac39f48] kernel_thread at ffffffff8100c20a PID: 9 TASK: ffff88033ac300c0 CPU: 2 COMMAND: "migration/2" #0 [ffff88033ac41db0] schedule at ffffffff81528762 #1 [ffff88033ac41e78] migration_thread at ffffffff810683d5 #2 [ffff88033ac41ee8] kthread at ffffffff81099eb6 #3 [ffff88033ac41f48] kernel_thread at ffffffff8100c20a PID: 10 TASK: ffff88033ac43580 CPU: 2 COMMAND: "migration/2" #0 [ffff88033ac45d30] schedule at ffffffff81528762 #1 [ffff88033ac45df8] cpu_stopper_thread at ffffffff810d3a55 #2 [ffff88033ac45ee8] kthread at ffffffff81099eb6 #3 [ffff88033ac45f48] kernel_thread at ffffffff8100c20a PID: 11 TASK: ffff88033ac42b40 CPU: 2 COMMAND: "ksoftirqd/2" #0 [ffff88033ac6bde0] schedule at ffffffff81528762 #1 [ffff88033ac6bea8] ksoftirqd at ffffffff81079485 #2 [ffff88033ac6bee8] kthread at ffffffff81099eb6 #3 [ffff88033ac6bf48] kernel_thread at ffffffff8100c20a PID: 12 TASK: ffff88033ac42100 CPU: 3 COMMAND: "migration/3" #0 [ffff88033ac71db0] schedule at ffffffff81528762 #1 [ffff88033ac71e78] migration_thread at ffffffff810683d5 #2 [ffff88033ac71ee8] kthread at ffffffff81099eb6 #3 [ffff88033ac71f48] kernel_thread at ffffffff8100c20a PID: 13 TASK: ffff88033ac734c0 CPU: 3 COMMAND: "migration/3" #0 [ffff88033ac75d30] schedule at ffffffff81528762 #1 [ffff88033ac75df8] cpu_stopper_thread at ffffffff810d3a55 #2 [ffff88033ac75ee8] kthread at ffffffff81099eb6 #3 [ffff88033ac75f48] kernel_thread at ffffffff8100c20a PID: 14 TASK: ffff88033ac72a80 CPU: 3 COMMAND: "ksoftirqd/3" #0 [ffff88033ac77de0] schedule at ffffffff81528762 #1 [ffff88033ac77ea8] ksoftirqd at ffffffff81079485 #2 [ffff88033ac77ee8] kthread at ffffffff81099eb6 #3 [ffff88033ac77f48] kernel_thread at ffffffff8100c20a PID: 15 TASK: ffff88033ac72040 CPU: 4 COMMAND: "migration/4" #0 [ffff88033ac83db0] schedule at ffffffff81528762 #1 [ffff88033ac83e78] migration_thread at ffffffff810683d5 #2 [ffff88033ac83ee8] kthread at ffffffff81099eb6 #3 [ffff88033ac83f48] kernel_thread at ffffffff8100c20a PID: 16 TASK: ffff88033ac85500 CPU: 4 COMMAND: "migration/4" #0 [ffff88033ac87d30] schedule at ffffffff81528762 #1 [ffff88033ac87df8] cpu_stopper_thread at ffffffff810d3a55 #2 [ffff88033ac87ee8] kthread at ffffffff81099eb6 #3 [ffff88033ac87f48] kernel_thread at ffffffff8100c20a PID: 17 TASK: ffff88033ac84ac0 CPU: 4 COMMAND: "ksoftirqd/4" #0 [ffff88033ac89de0] schedule at ffffffff81528762 #1 [ffff88033ac89ea8] ksoftirqd at ffffffff81079485 #2 [ffff88033ac89ee8] kthread at ffffffff81099eb6 #3 [ffff88033ac89f48] kernel_thread at ffffffff8100c20a PID: 18 TASK: ffff88033ac84080 CPU: 5 COMMAND: "migration/5" #0 [ffff88033ac8fdb0] schedule at ffffffff81528762 #1 [ffff88033ac8fe78] migration_thread at ffffffff810683d5 #2 [ffff88033ac8fee8] kthread at ffffffff81099eb6 #3 [ffff88033ac8ff48] kernel_thread at ffffffff8100c20a PID: 19 TASK: ffff88033ac91540 CPU: 5 COMMAND: "migration/5" #0 [ffff88033ac93d30] schedule at ffffffff81528762 #1 [ffff88033ac93df8] cpu_stopper_thread at ffffffff810d3a55 #2 [ffff88033ac93ee8] kthread at ffffffff81099eb6 #3 [ffff88033ac93f48] kernel_thread at ffffffff8100c20a PID: 20 TASK: ffff88033ac90b00 CPU: 5 COMMAND: "ksoftirqd/5" #0 [ffff88033ac97de0] schedule at 
ffffffff81528762 #1 [ffff88033ac97ea8] ksoftirqd at ffffffff81079485 #2 [ffff88033ac97ee8] kthread at ffffffff81099eb6 #3 [ffff88033ac97f48] kernel_thread at ffffffff8100c20a PID: 21 TASK: ffff88033ac900c0 CPU: 6 COMMAND: "migration/6" #0 [ffff88033ac9ddb0] schedule at ffffffff81528762 #1 [ffff88033ac9de78] migration_thread at ffffffff810683d5 #2 [ffff88033ac9dee8] kthread at ffffffff81099eb6 #3 [ffff88033ac9df48] kernel_thread at ffffffff8100c20a PID: 22 TASK: ffff88033ac9f580 CPU: 6 COMMAND: "migration/6" #0 [ffff88033aca1d30] schedule at ffffffff81528762 #1 [ffff88033aca1df8] cpu_stopper_thread at ffffffff810d3a55 #2 [ffff88033aca1ee8] kthread at ffffffff81099eb6 #3 [ffff88033aca1f48] kernel_thread at ffffffff8100c20a PID: 23 TASK: ffff88033ac9eb40 CPU: 6 COMMAND: "ksoftirqd/6" #0 [ffff88033aca3de0] schedule at ffffffff81528762 #1 [ffff88033aca3ea8] ksoftirqd at ffffffff81079485 #2 [ffff88033aca3ee8] kthread at ffffffff81099eb6 #3 [ffff88033aca3f48] kernel_thread at ffffffff8100c20a PID: 24 TASK: ffff88033ac9e100 CPU: 7 COMMAND: "migration/7" #0 [ffff88033acc7db0] schedule at ffffffff81528762 #1 [ffff88033acc7e78] migration_thread at ffffffff810683d5 #2 [ffff88033acc7ee8] kthread at ffffffff81099eb6 #3 [ffff88033acc7f48] kernel_thread at ffffffff8100c20a PID: 25 TASK: ffff88033accb4c0 CPU: 7 COMMAND: "migration/7" #0 [ffff88033accdd30] schedule at ffffffff81528762 #1 [ffff88033accddf8] cpu_stopper_thread at ffffffff810d3a55 #2 [ffff88033accdee8] kthread at ffffffff81099eb6 #3 [ffff88033accdf48] kernel_thread at ffffffff8100c20a PID: 26 TASK: ffff88033accaa80 CPU: 7 COMMAND: "ksoftirqd/7" #0 [ffff88033accfde0] schedule at ffffffff81528762 #1 [ffff88033accfea8] ksoftirqd at ffffffff81079485 #2 [ffff88033accfee8] kthread at ffffffff81099eb6 #3 [ffff88033accff48] kernel_thread at ffffffff8100c20a PID: 27 TASK: ffff88033acd5500 CPU: 0 COMMAND: "events/0" #0 [ffff88033acd7d70] schedule at ffffffff81528762 #1 [ffff88033acd7e38] worker_thread at ffffffff81093d6c #2 [ffff88033acd7ee8] kthread at ffffffff81099eb6 #3 [ffff88033acd7f48] kernel_thread at ffffffff8100c20a PID: 28 TASK: ffff88033acd4ac0 CPU: 1 COMMAND: "events/1" #0 [ffff88033ad05d70] schedule at ffffffff81528762 #1 [ffff88033ad05e38] worker_thread at ffffffff81093d6c #2 [ffff88033ad05ee8] kthread at ffffffff81099eb6 #3 [ffff88033ad05f48] kernel_thread at ffffffff8100c20a PID: 29 TASK: ffff88033acd4080 CPU: 2 COMMAND: "events/2" #0 [ffff88033ad07d70] schedule at ffffffff81528762 #1 [ffff88033ad07e38] worker_thread at ffffffff81093d6c #2 [ffff88033ad07ee8] kthread at ffffffff81099eb6 #3 [ffff88033ad07f48] kernel_thread at ffffffff8100c20a PID: 30 TASK: ffff88033ad09540 CPU: 3 COMMAND: "events/3" #0 [ffff88033ad0bd70] schedule at ffffffff81528762 #1 [ffff88033ad0be38] worker_thread at ffffffff81093d6c #2 [ffff88033ad0bee8] kthread at ffffffff81099eb6 #3 [ffff88033ad0bf48] kernel_thread at ffffffff8100c20a PID: 31 TASK: ffff88033ad08b00 CPU: 4 COMMAND: "events/4" #0 [ffff88033ad0fd70] schedule at ffffffff81528762 #1 [ffff88033ad0fe38] worker_thread at ffffffff81093d6c #2 [ffff88033ad0fee8] kthread at ffffffff81099eb6 #3 [ffff88033ad0ff48] kernel_thread at ffffffff8100c20a PID: 32 TASK: ffff88033ad080c0 CPU: 5 COMMAND: "events/5" #0 [ffff88033ad11d70] schedule at ffffffff81528762 #1 [ffff88033ad11e38] worker_thread at ffffffff81093d6c #2 [ffff88033ad11ee8] kthread at ffffffff81099eb6 #3 [ffff88033ad11f48] kernel_thread at ffffffff8100c20a PID: 33 TASK: ffff88033ad13580 CPU: 6 COMMAND: "events/6" #0 [ffff88033ad15d70] schedule at 
ffffffff81528762 #1 [ffff88033ad15e38] worker_thread at ffffffff81093d6c #2 [ffff88033ad15ee8] kthread at ffffffff81099eb6 #3 [ffff88033ad15f48] kernel_thread at ffffffff8100c20a PID: 34 TASK: ffff88033ad12b40 CPU: 7 COMMAND: "events/7" #0 [ffff88033ad19d70] schedule at ffffffff81528762 #1 [ffff88033ad19e38] worker_thread at ffffffff81093d6c #2 [ffff88033ad19ee8] kthread at ffffffff81099eb6 #3 [ffff88033ad19f48] kernel_thread at ffffffff8100c20a PID: 35 TASK: ffff88033ad12100 CPU: 5 COMMAND: "cgroup" #0 [ffff88033ad1dd70] schedule at ffffffff81528762 #1 [ffff88033ad1de38] worker_thread at ffffffff81093d6c #2 [ffff88033ad1dee8] kthread at ffffffff81099eb6 #3 [ffff88033ad1df48] kernel_thread at ffffffff8100c20a PID: 36 TASK: ffff88033ad1f4c0 CPU: 4 COMMAND: "khelper" #0 [ffff88033ad21d70] schedule at ffffffff81528762 #1 [ffff88033ad21e38] worker_thread at ffffffff81093d6c #2 [ffff88033ad21ee8] kthread at ffffffff81099eb6 #3 [ffff88033ad21f48] kernel_thread at ffffffff8100c20a PID: 37 TASK: ffff88033ad1ea80 CPU: 5 COMMAND: "netns" #0 [ffff88033ad53d70] schedule at ffffffff81528762 #1 [ffff88033ad53e38] worker_thread at ffffffff81093d6c #2 [ffff88033ad53ee8] kthread at ffffffff81099eb6 #3 [ffff88033ad53f48] kernel_thread at ffffffff8100c20a PID: 38 TASK: ffff88033ad1e040 CPU: 1 COMMAND: "async/mgr" #0 [ffff88033ad55db0] schedule at ffffffff81528762 #1 [ffff88033ad55e78] async_manager_thread at ffffffff810a2513 #2 [ffff88033ad55ee8] kthread at ffffffff81099eb6 #3 [ffff88033ad55f48] kernel_thread at ffffffff8100c20a PID: 39 TASK: ffff88033ad57500 CPU: 5 COMMAND: "pm" #0 [ffff88033ad59d70] schedule at ffffffff81528762 #1 [ffff88033ad59e38] worker_thread at ffffffff81093d6c #2 [ffff88033ad59ee8] kthread at ffffffff81099eb6 #3 [ffff88033ad59f48] kernel_thread at ffffffff8100c20a PID: 40 TASK: ffff88033ad56ac0 CPU: 0 COMMAND: "sync_supers" #0 [ffff88033addbe00] schedule at ffffffff81528762 #1 [ffff88033addbec8] bdi_sync_supers at ffffffff81142b8b #2 [ffff88033addbee8] kthread at ffffffff81099eb6 #3 [ffff88033addbf48] kernel_thread at ffffffff8100c20a PID: 41 TASK: ffff88033ad56080 CPU: 0 COMMAND: "bdi-default" #0 [ffff88033adddd10] schedule at ffffffff81528762 #1 [ffff88033addddd8] schedule_timeout at ffffffff815295d2 #2 [ffff88033addde88] bdi_forker_task at ffffffff811439b3 #3 [ffff88033adddee8] kthread at ffffffff81099eb6 #4 [ffff88033adddf48] kernel_thread at ffffffff8100c20a PID: 42 TASK: ffff88033ade1540 CPU: 0 COMMAND: "kintegrityd/0" #0 [ffff88033ade3d70] schedule at ffffffff81528762 #1 [ffff88033ade3e38] worker_thread at ffffffff81093d6c #2 [ffff88033ade3ee8] kthread at ffffffff81099eb6 #3 [ffff88033ade3f48] kernel_thread at ffffffff8100c20a PID: 43 TASK: ffff88033ade0b00 CPU: 1 COMMAND: "kintegrityd/1" #0 [ffff88033ade7d70] schedule at ffffffff81528762 #1 [ffff88033ade7e38] worker_thread at ffffffff81093d6c #2 [ffff88033ade7ee8] kthread at ffffffff81099eb6 #3 [ffff88033ade7f48] kernel_thread at ffffffff8100c20a PID: 44 TASK: ffff88033ade00c0 CPU: 2 COMMAND: "kintegrityd/2" #0 [ffff88033ade9d70] schedule at ffffffff81528762 #1 [ffff88033ade9e38] worker_thread at ffffffff81093d6c #2 [ffff88033ade9ee8] kthread at ffffffff81099eb6 #3 [ffff88033ade9f48] kernel_thread at ffffffff8100c20a PID: 45 TASK: ffff88033adeb580 CPU: 3 COMMAND: "kintegrityd/3" #0 [ffff88033adedd70] schedule at ffffffff81528762 #1 [ffff88033adede38] worker_thread at ffffffff81093d6c #2 [ffff88033adedee8] kthread at ffffffff81099eb6 #3 [ffff88033adedf48] kernel_thread at ffffffff8100c20a PID: 46 TASK: ffff88033adeab40 CPU: 4 
COMMAND: "kintegrityd/4" #0 [ffff88033adf1d70] schedule at ffffffff81528762 #1 [ffff88033adf1e38] worker_thread at ffffffff81093d6c #2 [ffff88033adf1ee8] kthread at ffffffff81099eb6 #3 [ffff88033adf1f48] kernel_thread at ffffffff8100c20a PID: 47 TASK: ffff88033adea100 CPU: 5 COMMAND: "kintegrityd/5" #0 [ffff88033adf3d70] schedule at ffffffff81528762 #1 [ffff88033adf3e38] worker_thread at ffffffff81093d6c #2 [ffff88033adf3ee8] kthread at ffffffff81099eb6 #3 [ffff88033adf3f48] kernel_thread at ffffffff8100c20a PID: 48 TASK: ffff88033adf54c0 CPU: 6 COMMAND: "kintegrityd/6" #0 [ffff88033adf7d70] schedule at ffffffff81528762 #1 [ffff88033adf7e38] worker_thread at ffffffff81093d6c #2 [ffff88033adf7ee8] kthread at ffffffff81099eb6 #3 [ffff88033adf7f48] kernel_thread at ffffffff8100c20a PID: 49 TASK: ffff88033adf4a80 CPU: 7 COMMAND: "kintegrityd/7" #0 [ffff88033adfdd70] schedule at ffffffff81528762 #1 [ffff88033adfde38] worker_thread at ffffffff81093d6c #2 [ffff88033adfdee8] kthread at ffffffff81099eb6 #3 [ffff88033adfdf48] kernel_thread at ffffffff8100c20a PID: 50 TASK: ffff88033adf4040 CPU: 0 COMMAND: "kblockd/0" #0 [ffff88033aebfd70] schedule at ffffffff81528762 #1 [ffff88033aebfe38] worker_thread at ffffffff81093d6c #2 [ffff88033aebfee8] kthread at ffffffff81099eb6 #3 [ffff88033aebff48] kernel_thread at ffffffff8100c20a PID: 51 TASK: ffff88033aec1500 CPU: 1 COMMAND: "kblockd/1" #0 [ffff88033aec3d70] schedule at ffffffff81528762 #1 [ffff88033aec3e38] worker_thread at ffffffff81093d6c #2 [ffff88033aec3ee8] kthread at ffffffff81099eb6 #3 [ffff88033aec3f48] kernel_thread at ffffffff8100c20a PID: 52 TASK: ffff88033aec0ac0 CPU: 2 COMMAND: "kblockd/2" #0 [ffff88033aec7d70] schedule at ffffffff81528762 #1 [ffff88033aec7e38] worker_thread at ffffffff81093d6c #2 [ffff88033aec7ee8] kthread at ffffffff81099eb6 #3 [ffff88033aec7f48] kernel_thread at ffffffff8100c20a PID: 53 TASK: ffff88033aec0080 CPU: 3 COMMAND: "kblockd/3" #0 [ffff88033aec9d70] schedule at ffffffff81528762 #1 [ffff88033aec9e38] worker_thread at ffffffff81093d6c #2 [ffff88033aec9ee8] kthread at ffffffff81099eb6 #3 [ffff88033aec9f48] kernel_thread at ffffffff8100c20a PID: 54 TASK: ffff88033aecb540 CPU: 4 COMMAND: "kblockd/4" #0 [ffff88033aecdd70] schedule at ffffffff81528762 #1 [ffff88033aecde38] worker_thread at ffffffff81093d6c #2 [ffff88033aecdee8] kthread at ffffffff81099eb6 #3 [ffff88033aecdf48] kernel_thread at ffffffff8100c20a PID: 55 TASK: ffff88033aecab00 CPU: 5 COMMAND: "kblockd/5" #0 [ffff88033aed1d70] schedule at ffffffff81528762 #1 [ffff88033aed1e38] worker_thread at ffffffff81093d6c #2 [ffff88033aed1ee8] kthread at ffffffff81099eb6 #3 [ffff88033aed1f48] kernel_thread at ffffffff8100c20a PID: 56 TASK: ffff88033aeca0c0 CPU: 6 COMMAND: "kblockd/6" #0 [ffff88033aed5d70] schedule at ffffffff81528762 #1 [ffff88033aed5e38] worker_thread at ffffffff81093d6c #2 [ffff88033aed5ee8] kthread at ffffffff81099eb6 #3 [ffff88033aed5f48] kernel_thread at ffffffff8100c20a PID: 57 TASK: ffff88033aed7580 CPU: 7 COMMAND: "kblockd/7" #0 [ffff88033aed9d70] schedule at ffffffff81528762 #1 [ffff88033aed9e38] worker_thread at ffffffff81093d6c #2 [ffff88033aed9ee8] kthread at ffffffff81099eb6 #3 [ffff88033aed9f48] kernel_thread at ffffffff8100c20a PID: 58 TASK: ffff88033aed6b40 CPU: 0 COMMAND: "kacpid" #0 [ffff88033aeddd70] schedule at ffffffff81528762 #1 [ffff88033aedde38] worker_thread at ffffffff81093d6c #2 [ffff88033aeddee8] kthread at ffffffff81099eb6 #3 [ffff88033aeddf48] kernel_thread at ffffffff8100c20a PID: 59 TASK: ffff88033aed6100 CPU: 0 
COMMAND: "kacpi_notify" #0 [ffff88033aedfd70] schedule at ffffffff81528762 #1 [ffff88033aedfe38] worker_thread at ffffffff81093d6c #2 [ffff88033aedfee8] kthread at ffffffff81099eb6 #3 [ffff88033aedff48] kernel_thread at ffffffff8100c20a PID: 60 TASK: ffff88033af014c0 CPU: 0 COMMAND: "kacpi_hotplug" #0 [ffff88033af03d70] schedule at ffffffff81528762 #1 [ffff88033af03e38] worker_thread at ffffffff81093d6c #2 [ffff88033af03ee8] kthread at ffffffff81099eb6 #3 [ffff88033af03f48] kernel_thread at ffffffff8100c20a PID: 61 TASK: ffff88033af00a80 CPU: 5 COMMAND: "ata_aux" #0 [ffff88033af85d70] schedule at ffffffff81528762 #1 [ffff88033af85e38] worker_thread at ffffffff81093d6c #2 [ffff88033af85ee8] kthread at ffffffff81099eb6 #3 [ffff88033af85f48] kernel_thread at ffffffff8100c20a PID: 62 TASK: ffff88033af00040 CPU: 0 COMMAND: "ata_sff/0" #0 [ffff88033af87d70] schedule at ffffffff81528762 #1 [ffff88033af87e38] worker_thread at ffffffff81093d6c #2 [ffff88033af87ee8] kthread at ffffffff81099eb6 #3 [ffff88033af87f48] kernel_thread at ffffffff8100c20a PID: 63 TASK: ffff88033af8b500 CPU: 1 COMMAND: "ata_sff/1" #0 [ffff88033af8dd70] schedule at ffffffff81528762 #1 [ffff88033af8de38] worker_thread at ffffffff81093d6c #2 [ffff88033af8dee8] kthread at ffffffff81099eb6 #3 [ffff88033af8df48] kernel_thread at ffffffff8100c20a PID: 64 TASK: ffff88033af8aac0 CPU: 2 COMMAND: "ata_sff/2" #0 [ffff88033af91d70] schedule at ffffffff81528762 #1 [ffff88033af91e38] worker_thread at ffffffff81093d6c #2 [ffff88033af91ee8] kthread at ffffffff81099eb6 #3 [ffff88033af91f48] kernel_thread at ffffffff8100c20a PID: 65 TASK: ffff88033af8a080 CPU: 3 COMMAND: "ata_sff/3" #0 [ffff88033af93d70] schedule at ffffffff81528762 #1 [ffff88033af93e38] worker_thread at ffffffff81093d6c #2 [ffff88033af93ee8] kthread at ffffffff81099eb6 #3 [ffff88033af93f48] kernel_thread at ffffffff8100c20a PID: 66 TASK: ffff88033af95540 CPU: 4 COMMAND: "ata_sff/4" #0 [ffff88033af97d70] schedule at ffffffff81528762 #1 [ffff88033af97e38] worker_thread at ffffffff81093d6c #2 [ffff88033af97ee8] kthread at ffffffff81099eb6 #3 [ffff88033af97f48] kernel_thread at ffffffff8100c20a PID: 67 TASK: ffff88033af94b00 CPU: 5 COMMAND: "ata_sff/5" #0 [ffff88033af9bd70] schedule at ffffffff81528762 #1 [ffff88033af9be38] worker_thread at ffffffff81093d6c #2 [ffff88033af9bee8] kthread at ffffffff81099eb6 #3 [ffff88033af9bf48] kernel_thread at ffffffff8100c20a PID: 68 TASK: ffff88033af940c0 CPU: 6 COMMAND: "ata_sff/6" #0 [ffff88033af9dd70] schedule at ffffffff81528762 #1 [ffff88033af9de38] worker_thread at ffffffff81093d6c #2 [ffff88033af9dee8] kthread at ffffffff81099eb6 #3 [ffff88033af9df48] kernel_thread at ffffffff8100c20a PID: 69 TASK: ffff88033af9f580 CPU: 7 COMMAND: "ata_sff/7" #0 [ffff88033afa1d70] schedule at ffffffff81528762 #1 [ffff88033afa1e38] worker_thread at ffffffff81093d6c #2 [ffff88033afa1ee8] kthread at ffffffff81099eb6 #3 [ffff88033afa1f48] kernel_thread at ffffffff8100c20a PID: 70 TASK: ffff88033af9eb40 CPU: 1 COMMAND: "ksuspend_usbd" #0 [ffff88033afa7d70] schedule at ffffffff81528762 #1 [ffff88033afa7e38] worker_thread at ffffffff81093d6c #2 [ffff88033afa7ee8] kthread at ffffffff81099eb6 #3 [ffff88033afa7f48] kernel_thread at ffffffff8100c20a PID: 71 TASK: ffff88033af9e100 CPU: 1 COMMAND: "khubd" #0 [ffff88033afa9ca0] schedule at ffffffff81528762 #1 [ffff88033afa9d68] hub_thread at ffffffff813c29f9 #2 [ffff88033afa9ee8] kthread at ffffffff81099eb6 #3 [ffff88033afa9f48] kernel_thread at ffffffff8100c20a PID: 72 TASK: ffff88033afab4c0 CPU: 5 COMMAND: 
"kseriod" #0 [ffff88033afadd90] schedule at ffffffff81528762 #1 [ffff88033afade58] serio_thread at ffffffff813f0a6a #2 [ffff88033afadee8] kthread at ffffffff81099eb6 #3 [ffff88033afadf48] kernel_thread at ffffffff8100c20a PID: 73 TASK: ffff88033afaaa80 CPU: 0 COMMAND: "md/0" #0 [ffff88033afb1d70] schedule at ffffffff81528762 #1 [ffff88033afb1e38] worker_thread at ffffffff81093d6c #2 [ffff88033afb1ee8] kthread at ffffffff81099eb6 #3 [ffff88033afb1f48] kernel_thread at ffffffff8100c20a PID: 74 TASK: ffff88033afaa040 CPU: 1 COMMAND: "md/1" #0 [ffff88033afb3d70] schedule at ffffffff81528762 #1 [ffff88033afb3e38] worker_thread at ffffffff81093d6c #2 [ffff88033afb3ee8] kthread at ffffffff81099eb6 #3 [ffff88033afb3f48] kernel_thread at ffffffff8100c20a PID: 75 TASK: ffff88033afb5500 CPU: 2 COMMAND: "md/2" #0 [ffff88033afb7d70] schedule at ffffffff81528762 #1 [ffff88033afb7e38] worker_thread at ffffffff81093d6c #2 [ffff88033afb7ee8] kthread at ffffffff81099eb6 #3 [ffff88033afb7f48] kernel_thread at ffffffff8100c20a PID: 76 TASK: ffff88033afb4ac0 CPU: 3 COMMAND: "md/3" #0 [ffff88033afbbd70] schedule at ffffffff81528762 #1 [ffff88033afbbe38] worker_thread at ffffffff81093d6c #2 [ffff88033afbbee8] kthread at ffffffff81099eb6 #3 [ffff88033afbbf48] kernel_thread at ffffffff8100c20a PID: 77 TASK: ffff88033afb4080 CPU: 4 COMMAND: "md/4" #0 [ffff88033afbfd70] schedule at ffffffff81528762 #1 [ffff88033afbfe38] worker_thread at ffffffff81093d6c #2 [ffff88033afbfee8] kthread at ffffffff81099eb6 #3 [ffff88033afbff48] kernel_thread at ffffffff8100c20a PID: 78 TASK: ffff88033afc1540 CPU: 5 COMMAND: "md/5" #0 [ffff88033afc3d70] schedule at ffffffff81528762 #1 [ffff88033afc3e38] worker_thread at ffffffff81093d6c #2 [ffff88033afc3ee8] kthread at ffffffff81099eb6 #3 [ffff88033afc3f48] kernel_thread at ffffffff8100c20a PID: 79 TASK: ffff88033afc0b00 CPU: 6 COMMAND: "md/6" #0 [ffff88033afc7d70] schedule at ffffffff81528762 #1 [ffff88033afc7e38] worker_thread at ffffffff81093d6c #2 [ffff88033afc7ee8] kthread at ffffffff81099eb6 #3 [ffff88033afc7f48] kernel_thread at ffffffff8100c20a PID: 80 TASK: ffff88033afc00c0 CPU: 7 COMMAND: "md/7" #0 [ffff88033afc9d70] schedule at ffffffff81528762 #1 [ffff88033afc9e38] worker_thread at ffffffff81093d6c #2 [ffff88033afc9ee8] kthread at ffffffff81099eb6 #3 [ffff88033afc9f48] kernel_thread at ffffffff8100c20a PID: 81 TASK: ffff88033afcb580 CPU: 0 COMMAND: "md_misc/0" #0 [ffff88033afcdd70] schedule at ffffffff81528762 #1 [ffff88033afcde38] worker_thread at ffffffff81093d6c #2 [ffff88033afcdee8] kthread at ffffffff81099eb6 #3 [ffff88033afcdf48] kernel_thread at ffffffff8100c20a PID: 82 TASK: ffff88033afcab40 CPU: 1 COMMAND: "md_misc/1" #0 [ffff88033afd1d70] schedule at ffffffff81528762 #1 [ffff88033afd1e38] worker_thread at ffffffff81093d6c #2 [ffff88033afd1ee8] kthread at ffffffff81099eb6 #3 [ffff88033afd1f48] kernel_thread at ffffffff8100c20a PID: 83 TASK: ffff88033afca100 CPU: 2 COMMAND: "md_misc/2" #0 [ffff88033afd3d70] schedule at ffffffff81528762 #1 [ffff88033afd3e38] worker_thread at ffffffff81093d6c #2 [ffff88033afd3ee8] kthread at ffffffff81099eb6 #3 [ffff88033afd3f48] kernel_thread at ffffffff8100c20a PID: 84 TASK: ffff88033afd74c0 CPU: 3 COMMAND: "md_misc/3" #0 [ffff88033afd9d70] schedule at ffffffff81528762 #1 [ffff88033afd9e38] worker_thread at ffffffff81093d6c #2 [ffff88033afd9ee8] kthread at ffffffff81099eb6 #3 [ffff88033afd9f48] kernel_thread at ffffffff8100c20a PID: 85 TASK: ffff88033afd6a80 CPU: 4 COMMAND: "md_misc/4" #0 [ffff88033afddd70] schedule at 
ffffffff81528762 #1 [ffff88033afdde38] worker_thread at ffffffff81093d6c #2 [ffff88033afddee8] kthread at ffffffff81099eb6 #3 [ffff88033afddf48] kernel_thread at ffffffff8100c20a PID: 86 TASK: ffff88033afd6040 CPU: 5 COMMAND: "md_misc/5" #0 [ffff88033afdfd70] schedule at ffffffff81528762 #1 [ffff88033afdfe38] worker_thread at ffffffff81093d6c #2 [ffff88033afdfee8] kthread at ffffffff81099eb6 #3 [ffff88033afdff48] kernel_thread at ffffffff8100c20a PID: 87 TASK: ffff88033afe1500 CPU: 6 COMMAND: "md_misc/6" #0 [ffff88033afe3d70] schedule at ffffffff81528762 #1 [ffff88033afe3e38] worker_thread at ffffffff81093d6c #2 [ffff88033afe3ee8] kthread at ffffffff81099eb6 #3 [ffff88033afe3f48] kernel_thread at ffffffff8100c20a PID: 88 TASK: ffff88033afe0ac0 CPU: 7 COMMAND: "md_misc/7" #0 [ffff88033afe7d70] schedule at ffffffff81528762 #1 [ffff88033afe7e38] worker_thread at ffffffff81093d6c #2 [ffff88033afe7ee8] kthread at ffffffff81099eb6 #3 [ffff88033afe7f48] kernel_thread at ffffffff8100c20a PID: 89 TASK: ffff88033afe0080 CPU: 1 COMMAND: "linkwatch" #0 [ffff88033afe9d70] schedule at ffffffff81528762 #1 [ffff88033afe9e38] worker_thread at ffffffff81093d6c #2 [ffff88033afe9ee8] kthread at ffffffff81099eb6 #3 [ffff88033afe9f48] kernel_thread at ffffffff8100c20a PID: 90 TASK: ffff880339edb540 CPU: 0 COMMAND: "khungtaskd" #0 [ffff880339eddd10] schedule at ffffffff81528762 #1 [ffff880339edddd8] schedule_timeout at ffffffff815295d2 #2 [ffff880339edde88] schedule_timeout_interruptible at ffffffff8152977e #3 [ffff880339edde98] watchdog at ffffffff810e6212 #4 [ffff880339eddee8] kthread at ffffffff81099eb6 #5 [ffff880339eddf48] kernel_thread at ffffffff8100c20a PID: 91 TASK: ffff880339edab00 CPU: 0 COMMAND: "kswapd0" #0 [ffff880339f33d60] schedule at ffffffff81528762 #1 [ffff880339f33e28] kswapd at ffffffff8113c579 #2 [ffff880339f33ee8] kthread at ffffffff81099eb6 #3 [ffff880339f33f48] kernel_thread at ffffffff8100c20a PID: 92 TASK: ffff880339eda0c0 CPU: 5 COMMAND: "kswapd1" #0 [ffff880339f35d60] schedule at ffffffff81528762 #1 [ffff880339f35e28] kswapd at ffffffff8113c579 #2 [ffff880339f35ee8] kthread at ffffffff81099eb6 #3 [ffff880339f35f48] kernel_thread at ffffffff8100c20a PID: 93 TASK: ffff880339f37580 CPU: 5 COMMAND: "ksmd" #0 [ffff880339f39d50] schedule at ffffffff81528762 #1 [ffff880339f39e18] ksm_scan_thread at ffffffff8116df1b #2 [ffff880339f39ee8] kthread at ffffffff81099eb6 #3 [ffff880339f39f48] kernel_thread at ffffffff8100c20a PID: 94 TASK: ffff880339f36b40 CPU: 0 COMMAND: "aio/0" #0 [ffff880339f6bd70] schedule at ffffffff81528762 #1 [ffff880339f6be38] worker_thread at ffffffff81093d6c #2 [ffff880339f6bee8] kthread at ffffffff81099eb6 #3 [ffff880339f6bf48] kernel_thread at ffffffff8100c20a PID: 95 TASK: ffff880339f36100 CPU: 1 COMMAND: "aio/1" #0 [ffff880339f6dd70] schedule at ffffffff81528762 #1 [ffff880339f6de38] worker_thread at ffffffff81093d6c #2 [ffff880339f6dee8] kthread at ffffffff81099eb6 #3 [ffff880339f6df48] kernel_thread at ffffffff8100c20a PID: 96 TASK: ffff880339f6f4c0 CPU: 2 COMMAND: "aio/2" #0 [ffff880339f71d70] schedule at ffffffff81528762 #1 [ffff880339f71e38] worker_thread at ffffffff81093d6c #2 [ffff880339f71ee8] kthread at ffffffff81099eb6 #3 [ffff880339f71f48] kernel_thread at ffffffff8100c20a PID: 97 TASK: ffff880339f6ea80 CPU: 3 COMMAND: "aio/3" #0 [ffff880339f75d70] schedule at ffffffff81528762 #1 [ffff880339f75e38] worker_thread at ffffffff81093d6c #2 [ffff880339f75ee8] kthread at ffffffff81099eb6 #3 [ffff880339f75f48] kernel_thread at ffffffff8100c20a PID: 98 TASK: 
ffff880339f6e040 CPU: 4 COMMAND: "aio/4" #0 [ffff880339f79d70] schedule at ffffffff81528762 #1 [ffff880339f79e38] worker_thread at ffffffff81093d6c #2 [ffff880339f79ee8] kthread at ffffffff81099eb6 #3 [ffff880339f79f48] kernel_thread at ffffffff8100c20a PID: 99 TASK: ffff880339f7b500 CPU: 5 COMMAND: "aio/5" #0 [ffff880339f7dd70] schedule at ffffffff81528762 #1 [ffff880339f7de38] worker_thread at ffffffff81093d6c #2 [ffff880339f7dee8] kthread at ffffffff81099eb6 #3 [ffff880339f7df48] kernel_thread at ffffffff8100c20a PID: 100 TASK: ffff880339f7aac0 CPU: 6 COMMAND: "aio/6" #0 [ffff880339f81d70] schedule at ffffffff81528762 #1 [ffff880339f81e38] worker_thread at ffffffff81093d6c #2 [ffff880339f81ee8] kthread at ffffffff81099eb6 #3 [ffff880339f81f48] kernel_thread at ffffffff8100c20a PID: 101 TASK: ffff880339f7a080 CPU: 7 COMMAND: "aio/7" #0 [ffff880339f83d70] schedule at ffffffff81528762 #1 [ffff880339f83e38] worker_thread at ffffffff81093d6c #2 [ffff880339f83ee8] kthread at ffffffff81099eb6 #3 [ffff880339f83f48] kernel_thread at ffffffff8100c20a PID: 102 TASK: ffff880339f85540 CPU: 0 COMMAND: "crypto/0" #0 [ffff880339f87d70] schedule at ffffffff81528762 #1 [ffff880339f87e38] worker_thread at ffffffff81093d6c #2 [ffff880339f87ee8] kthread at ffffffff81099eb6 #3 [ffff880339f87f48] kernel_thread at ffffffff8100c20a PID: 103 TASK: ffff880339f84b00 CPU: 1 COMMAND: "crypto/1" #0 [ffff880339f8bd70] schedule at ffffffff81528762 #1 [ffff880339f8be38] worker_thread at ffffffff81093d6c #2 [ffff880339f8bee8] kthread at ffffffff81099eb6 #3 [ffff880339f8bf48] kernel_thread at ffffffff8100c20a PID: 104 TASK: ffff880339f840c0 CPU: 2 COMMAND: "crypto/2" #0 [ffff880339f8dd70] schedule at ffffffff81528762 #1 [ffff880339f8de38] worker_thread at ffffffff81093d6c #2 [ffff880339f8dee8] kthread at ffffffff81099eb6 #3 [ffff880339f8df48] kernel_thread at ffffffff8100c20a PID: 105 TASK: ffff880339fb1580 CPU: 3 COMMAND: "crypto/3" #0 [ffff880339fb3d70] schedule at ffffffff81528762 #1 [ffff880339fb3e38] worker_thread at ffffffff81093d6c #2 [ffff880339fb3ee8] kthread at ffffffff81099eb6 #3 [ffff880339fb3f48] kernel_thread at ffffffff8100c20a PID: 106 TASK: ffff880339fb0b40 CPU: 4 COMMAND: "crypto/4" #0 [ffff880339fb7d70] schedule at ffffffff81528762 #1 [ffff880339fb7e38] worker_thread at ffffffff81093d6c #2 [ffff880339fb7ee8] kthread at ffffffff81099eb6 #3 [ffff880339fb7f48] kernel_thread at ffffffff8100c20a PID: 107 TASK: ffff880339fb0100 CPU: 5 COMMAND: "crypto/5" #0 [ffff880339fb9d70] schedule at ffffffff81528762 #1 [ffff880339fb9e38] worker_thread at ffffffff81093d6c #2 [ffff880339fb9ee8] kthread at ffffffff81099eb6 #3 [ffff880339fb9f48] kernel_thread at ffffffff8100c20a PID: 108 TASK: ffff880339fbb4c0 CPU: 6 COMMAND: "crypto/6" #0 [ffff880339fbdd70] schedule at ffffffff81528762 #1 [ffff880339fbde38] worker_thread at ffffffff81093d6c #2 [ffff880339fbdee8] kthread at ffffffff81099eb6 #3 [ffff880339fbdf48] kernel_thread at ffffffff8100c20a PID: 109 TASK: ffff880339fbaa80 CPU: 7 COMMAND: "crypto/7" #0 [ffff880339fc1d70] schedule at ffffffff81528762 #1 [ffff880339fc1e38] worker_thread at ffffffff81093d6c #2 [ffff880339fc1ee8] kthread at ffffffff81099eb6 #3 [ffff880339fc1f48] kernel_thread at ffffffff8100c20a PID: 114 TASK: ffff880339fe1540 CPU: 0 COMMAND: "kthrotld/0" #0 [ffff880339fe3d70] schedule at ffffffff81528762 #1 [ffff880339fe3e38] worker_thread at ffffffff81093d6c #2 [ffff880339fe3ee8] kthread at ffffffff81099eb6 #3 [ffff880339fe3f48] kernel_thread at ffffffff8100c20a PID: 115 TASK: ffff880339fe0b00 CPU: 1 
COMMAND: "kthrotld/1" #0 [ffff880339fe7d70] schedule at ffffffff81528762 #1 [ffff880339fe7e38] worker_thread at ffffffff81093d6c #2 [ffff880339fe7ee8] kthread at ffffffff81099eb6 #3 [ffff880339fe7f48] kernel_thread at ffffffff8100c20a PID: 116 TASK: ffff880339fe00c0 CPU: 2 COMMAND: "kthrotld/2" #0 [ffff880339fe9d70] schedule at ffffffff81528762 #1 [ffff880339fe9e38] worker_thread at ffffffff81093d6c #2 [ffff880339fe9ee8] kthread at ffffffff81099eb6 #3 [ffff880339fe9f48] kernel_thread at ffffffff8100c20a PID: 117 TASK: ffff880339feb580 CPU: 3 COMMAND: "kthrotld/3" #0 [ffff880339fedd70] schedule at ffffffff81528762 #1 [ffff880339fede38] worker_thread at ffffffff81093d6c #2 [ffff880339fedee8] kthread at ffffffff81099eb6 #3 [ffff880339fedf48] kernel_thread at ffffffff8100c20a PID: 118 TASK: ffff880339feab40 CPU: 4 COMMAND: "kthrotld/4" #0 [ffff880339ff1d70] schedule at ffffffff81528762 #1 [ffff880339ff1e38] worker_thread at ffffffff81093d6c #2 [ffff880339ff1ee8] kthread at ffffffff81099eb6 #3 [ffff880339ff1f48] kernel_thread at ffffffff8100c20a PID: 119 TASK: ffff880339fea100 CPU: 5 COMMAND: "kthrotld/5" #0 [ffff880339ff5d70] schedule at ffffffff81528762 #1 [ffff880339ff5e38] worker_thread at ffffffff81093d6c #2 [ffff880339ff5ee8] kthread at ffffffff81099eb6 #3 [ffff880339ff5f48] kernel_thread at ffffffff8100c20a PID: 120 TASK: ffff880339ff74c0 CPU: 6 COMMAND: "kthrotld/6" #0 [ffff880339ff9d70] schedule at ffffffff81528762 #1 [ffff880339ff9e38] worker_thread at ffffffff81093d6c #2 [ffff880339ff9ee8] kthread at ffffffff81099eb6 #3 [ffff880339ff9f48] kernel_thread at ffffffff8100c20a PID: 121 TASK: ffff880339ff6a80 CPU: 7 COMMAND: "kthrotld/7" #0 [ffff880339ffdd70] schedule at ffffffff81528762 #1 [ffff880339ffde38] worker_thread at ffffffff81093d6c #2 [ffff880339ffdee8] kthread at ffffffff81099eb6 #3 [ffff880339ffdf48] kernel_thread at ffffffff8100c20a PID: 122 TASK: ffff880339ff6040 CPU: 0 COMMAND: "kipmi0" #0 [ffff880339fc7cf0] schedule at ffffffff81528762 #1 [ffff880339fc7db8] schedule_timeout at ffffffff815295d2 #2 [ffff880339fc7e68] schedule_timeout_interruptible at ffffffff8152977e #3 [ffff880339fc7e78] ipmi_thread at ffffffff812e7f7a #4 [ffff880339fc7ee8] kthread at ffffffff81099eb6 #5 [ffff880339fc7f48] kernel_thread at ffffffff8100c20a PID: 124 TASK: ffff88033983eac0 CPU: 6 COMMAND: "kpsmoused" #0 [ffff88033992bd70] schedule at ffffffff81528762 #1 [ffff88033992be38] worker_thread at ffffffff81093d6c #2 [ffff88033992bee8] kthread at ffffffff81099eb6 #3 [ffff88033992bf48] kernel_thread at ffffffff8100c20a PID: 125 TASK: ffff88033983e080 CPU: 5 COMMAND: "usbhid_resumer" #0 [ffff88033992dd70] schedule at ffffffff81528762 #1 [ffff88033992de38] worker_thread at ffffffff81093d6c #2 [ffff88033992dee8] kthread at ffffffff81099eb6 #3 [ffff88033992df48] kernel_thread at ffffffff8100c20a PID: 249 TASK: ffff880339107500 CPU: 0 COMMAND: "scsi_eh_0" #0 [ffff8803391c9d60] schedule at ffffffff81528762 #1 [ffff8803391c9e28] scsi_error_handler at ffffffff813874e9 #2 [ffff8803391c9ee8] kthread at ffffffff81099eb6 #3 [ffff8803391c9f48] kernel_thread at ffffffff8100c20a PID: 250 TASK: ffff880339a1d4c0 CPU: 5 COMMAND: "scsi_eh_1" #0 [ffff8803391cdd60] schedule at ffffffff81528762 #1 [ffff8803391cde28] scsi_error_handler at ffffffff813874e9 #2 [ffff8803391cdee8] kthread at ffffffff81099eb6 #3 [ffff8803391cdf48] kernel_thread at ffffffff8100c20a PID: 251 TASK: ffff8803390c6040 CPU: 5 COMMAND: "scsi_eh_2" #0 [ffff8803391cfd60] schedule at ffffffff81528762 #1 [ffff8803391cfe28] scsi_error_handler at 
ffffffff813874e9 #2 [ffff8803391cfee8] kthread at ffffffff81099eb6 #3 [ffff8803391cff48] kernel_thread at ffffffff8100c20a PID: 252 TASK: ffff880339040b40 CPU: 5 COMMAND: "scsi_eh_3" #0 [ffff880339281d60] schedule at ffffffff81528762 #1 [ffff880339281e28] scsi_error_handler at ffffffff813874e9 #2 [ffff880339281ee8] kthread at ffffffff81099eb6 #3 [ffff880339281f48] kernel_thread at ffffffff8100c20a PID: 253 TASK: ffff880339041580 CPU: 5 COMMAND: "scsi_eh_4" #0 [ffff880339285d60] schedule at ffffffff81528762 #1 [ffff880339285e28] scsi_error_handler at ffffffff813874e9 #2 [ffff880339285ee8] kthread at ffffffff81099eb6 #3 [ffff880339285f48] kernel_thread at ffffffff8100c20a PID: 254 TASK: ffff880339fba040 CPU: 5 COMMAND: "scsi_eh_5" #0 [ffff880339287d60] schedule at ffffffff81528762 #1 [ffff880339287e28] scsi_error_handler at ffffffff813874e9 #2 [ffff880339287ee8] kthread at ffffffff81099eb6 #3 [ffff880339287f48] kernel_thread at ffffffff8100c20a PID: 373 TASK: ffff8803399d6b00 CPU: 5 COMMAND: "jbd2/sda1-8" #0 [ffff880339a07da0] schedule at ffffffff81528762 #1 [ffff880339a07e68] kjournald2 at ffffffffa004cb8a [jbd2] #2 [ffff880339a07ee8] kthread at ffffffff81099eb6 #3 [ffff880339a07f48] kernel_thread at ffffffff8100c20a PID: 374 TASK: ffff8803399d60c0 CPU: 5 COMMAND: "ext4-dio-unwrit" #0 [ffff8803393c3d70] schedule at ffffffff81528762 #1 [ffff8803393c3e38] worker_thread at ffffffff81093d6c #2 [ffff8803393c3ee8] kthread at ffffffff81099eb6 #3 [ffff8803393c3f48] kernel_thread at ffffffff8100c20a PID: 461 TASK: ffff88033996c080 CPU: 3 COMMAND: "udevd" #0 [ffff88033a031998] schedule at ffffffff81528762 #1 [ffff88033a031a60] schedule_hrtimeout_range at ffffffff8152a2bd #2 [ffff88033a031b00] poll_schedule_timeout at ffffffff811a03a9 #3 [ffff88033a031b20] do_sys_poll at ffffffff811a0b47 #4 [ffff88033a031f40] sys_poll at ffffffff811a0e01 #5 [ffff88033a031f80] system_call_fastpath at ffffffff8100b072 RIP: 00007f67ab081308 RSP: 00007fff340e15d8 RFLAGS: 00010246 RAX: 0000000000000007 RBX: ffffffff8100b072 RCX: 0000000000000000 RDX: ffffffffffffffff RSI: 0000000000000005 RDI: 00007f67ab996020 RBP: 00007f67accb9e30 R8: 0000000000000000 R9: 0000000000000000 R10: 0000000000000040 R11: 0000000000000246 R12: 00007f67ab996140 R13: 00007f67ab996150 R14: 00007f67accb2010 R15: 00007f67accb9e30 ORIG_RAX: 0000000000000007 CS: 0033 SS: 002b PID: 690 TASK: ffff8803390420c0 CPU: 0 COMMAND: "scsi_tgtd/0" #0 [ffff880339e6dd70] schedule at ffffffff81528762 #1 [ffff880339e6de38] worker_thread at ffffffff81093d6c #2 [ffff880339e6dee8] kthread at ffffffff81099eb6 #3 [ffff880339e6df48] kernel_thread at ffffffff8100c20a PID: 691 TASK: ffff880339016b00 CPU: 1 COMMAND: "scsi_tgtd/1" #0 [ffff88033a12fd70] schedule at ffffffff81528762 #1 [ffff88033a12fe38] worker_thread at ffffffff81093d6c #2 [ffff88033a12fee8] kthread at ffffffff81099eb6 #3 [ffff88033a12ff48] kernel_thread at ffffffff8100c20a PID: 692 TASK: ffff880339bce080 CPU: 2 COMMAND: "scsi_tgtd/2" #0 [ffff880339bbdd70] schedule at ffffffff81528762 #1 [ffff880339bbde38] worker_thread at ffffffff81093d6c #2 [ffff880339bbdee8] kthread at ffffffff81099eb6 #3 [ffff880339bbdf48] kernel_thread at ffffffff8100c20a PID: 693 TASK: ffff88033996d500 CPU: 3 COMMAND: "scsi_tgtd/3" #0 [ffff880339a13d70] schedule at ffffffff81528762 #1 [ffff880339a13e38] worker_thread at ffffffff81093d6c #2 [ffff880339a13ee8] kthread at ffffffff81099eb6 #3 [ffff880339a13f48] kernel_thread at ffffffff8100c20a PID: 694 TASK: ffff88033910cb40 CPU: 4 COMMAND: "scsi_tgtd/4" #0 [ffff88033a007d70] schedule at 
ffffffff81528762 #1 [ffff88033a007e38] worker_thread at ffffffff81093d6c #2 [ffff88033a007ee8] kthread at ffffffff81099eb6 #3 [ffff88033a007f48] kernel_thread at ffffffff8100c20a PID: 695 TASK: ffff8803392c94c0 CPU: 5 COMMAND: "scsi_tgtd/5" #0 [ffff88033a3f7d70] schedule at ffffffff81528762 #1 [ffff88033a3f7e38] worker_thread at ffffffff81093d6c #2 [ffff88033a3f7ee8] kthread at ffffffff81099eb6 #3 [ffff88033a3f7f48] kernel_thread at ffffffff8100c20a PID: 696 TASK: ffff8803399ce040 CPU: 6 COMMAND: "scsi_tgtd/6" #0 [ffff88033a7fbd70] schedule at ffffffff81528762 #1 [ffff88033a7fbe38] worker_thread at ffffffff81093d6c #2 [ffff88033a7fbee8] kthread at ffffffff81099eb6 #3 [ffff88033a7fbf48] kernel_thread at ffffffff8100c20a PID: 697 TASK: ffff880339043540 CPU: 7 COMMAND: "scsi_tgtd/7" #0 [ffff880339e43d70] schedule at ffffffff81528762 #1 [ffff880339e43e38] worker_thread at ffffffff81093d6c #2 [ffff880339e43ee8] kthread at ffffffff81099eb6 #3 [ffff880339e43f48] kernel_thread at ffffffff8100c20a PID: 698 TASK: ffff880339042b00 CPU: 0 COMMAND: "lpfc_worker_0" #0 [ffff880338cd1ca0] schedule at ffffffff81528762 #1 [ffff880338cd1d68] lpfc_do_work at ffffffffa018dafe [lpfc] #2 [ffff880338cd1ee8] kthread at ffffffff81099eb6 #3 [ffff880338cd1f48] kernel_thread at ffffffff8100c20a PID: 699 TASK: ffff8803399d7540 CPU: 5 COMMAND: "scsi_eh_6" #0 [ffff880338cd3d60] schedule at ffffffff81528762 #1 [ffff880338cd3e28] scsi_error_handler at ffffffff813874e9 #2 [ffff880338cd3ee8] kthread at ffffffff81099eb6 #3 [ffff880338cd3f48] kernel_thread at ffffffff8100c20a PID: 700 TASK: ffff880339961500 CPU: 0 COMMAND: "scsi_wq_6" #0 [ffff880339329d70] schedule at ffffffff81528762 #1 [ffff880339329e38] worker_thread at ffffffff81093d6c #2 [ffff880339329ee8] kthread at ffffffff81099eb6 #3 [ffff880339329f48] kernel_thread at ffffffff8100c20a PID: 701 TASK: ffff880339118a80 CPU: 0 COMMAND: "fc_wq_6" #0 [ffff88033932bd70] schedule at ffffffff81528762 #1 [ffff88033932be38] worker_thread at ffffffff81093d6c #2 [ffff88033932bee8] kthread at ffffffff81099eb6 #3 [ffff88033932bf48] kernel_thread at ffffffff8100c20a PID: 702 TASK: ffff880339bcf500 CPU: 0 COMMAND: "fc_dl_6" #0 [ffff88033a7fdd70] schedule at ffffffff81528762 #1 [ffff88033a7fde38] worker_thread at ffffffff81093d6c #2 [ffff88033a7fdee8] kthread at ffffffff81099eb6 #3 [ffff88033a7fdf48] kernel_thread at ffffffff8100c20a PID: 703 TASK: ffff88033983f500 CPU: 1 COMMAND: "lpfc_worker_1" #0 [ffff880339115ca0] schedule at ffffffff81528762 #1 [ffff880339115d68] lpfc_do_work at ffffffffa018dafe [lpfc] #2 [ffff880339115ee8] kthread at ffffffff81099eb6 #3 [ffff880339115f48] kernel_thread at ffffffff8100c20a PID: 704 TASK: ffff8803392a4ac0 CPU: 5 COMMAND: "scsi_eh_7" #0 [ffff880339b0bd60] schedule at ffffffff81528762 #1 [ffff880339b0be28] scsi_error_handler at ffffffff813874e9 #2 [ffff880339b0bee8] kthread at ffffffff81099eb6 #3 [ffff880339b0bf48] kernel_thread at ffffffff8100c20a PID: 705 TASK: ffff880339040100 CPU: 0 COMMAND: "scsi_wq_7" #0 [ffff88033a035d70] schedule at ffffffff81528762 #1 [ffff88033a035e38] worker_thread at ffffffff81093d6c #2 [ffff88033a035ee8] kthread at ffffffff81099eb6 #3 [ffff88033a035f48] kernel_thread at ffffffff8100c20a PID: 706 TASK: ffff8803399ee0c0 CPU: 0 COMMAND: "fc_wq_7" #0 [ffff8803392ebd70] schedule at ffffffff81528762 #1 [ffff8803392ebe38] worker_thread at ffffffff81093d6c #2 [ffff8803392ebee8] kthread at ffffffff81099eb6 #3 [ffff8803392ebf48] kernel_thread at ffffffff8100c20a PID: 707 TASK: ffff8803392c8040 CPU: 0 COMMAND: "fc_dl_7" #0 
[ffff88033a025d70] schedule at ffffffff81528762 #1 [ffff88033a025e38] worker_thread at ffffffff81093d6c #2 [ffff88033a025ee8] kthread at ffffffff81099eb6 #3 [ffff88033a025f48] kernel_thread at ffffffff8100c20a PID: 2058 TASK: ffff8806385fcb40 CPU: 1 COMMAND: "kstriped" #0 [ffff880638565d70] schedule at ffffffff81528762 #1 [ffff880638565e38] worker_thread at ffffffff81093d6c #2 [ffff880638565ee8] kthread at ffffffff81099eb6 #3 [ffff880638565f48] kernel_thread at ffffffff8100c20a PID: 2060 TASK: ffff880639d66ac0 CPU: 0 COMMAND: "kmpathd/0" #0 [ffff8806387abd70] schedule at ffffffff81528762 #1 [ffff8806387abe38] worker_thread at ffffffff81093d6c #2 [ffff8806387abee8] kthread at ffffffff81099eb6 #3 [ffff8806387abf48] kernel_thread at ffffffff8100c20a PID: 2061 TASK: ffff8806385fd580 CPU: 1 COMMAND: "kmpathd/1" #0 [ffff8806387b3d70] schedule at ffffffff81528762 #1 [ffff8806387b3e38] worker_thread at ffffffff81093d6c #2 [ffff8806387b3ee8] kthread at ffffffff81099eb6 #3 [ffff8806387b3f48] kernel_thread at ffffffff8100c20a PID: 2062 TASK: ffff8806399574c0 CPU: 2 COMMAND: "kmpathd/2" #0 [ffff880638751d70] schedule at ffffffff81528762 #1 [ffff880638751e38] worker_thread at ffffffff81093d6c #2 [ffff880638751ee8] kthread at ffffffff81099eb6 #3 [ffff880638751f48] kernel_thread at ffffffff8100c20a PID: 2063 TASK: ffff880639a8b540 CPU: 3 COMMAND: "kmpathd/3" #0 [ffff880639b2dd70] schedule at ffffffff81528762 #1 [ffff880639b2de38] worker_thread at ffffffff81093d6c #2 [ffff880639b2dee8] kthread at ffffffff81099eb6 #3 [ffff880639b2df48] kernel_thread at ffffffff8100c20a PID: 2064 TASK: ffff880639956a80 CPU: 4 COMMAND: "kmpathd/4" #0 [ffff88063a76dd70] schedule at ffffffff81528762 #1 [ffff88063a76de38] worker_thread at ffffffff81093d6c #2 [ffff88063a76dee8] kthread at ffffffff81099eb6 #3 [ffff88063a76df48] kernel_thread at ffffffff8100c20a PID: 2065 TASK: ffff880639b8cb40 CPU: 5 COMMAND: "kmpathd/5" #0 [ffff880639d23d70] schedule at ffffffff81528762 #1 [ffff880639d23e38] worker_thread at ffffffff81093d6c #2 [ffff880639d23ee8] kthread at ffffffff81099eb6 #3 [ffff880639d23f48] kernel_thread at ffffffff8100c20a PID: 2066 TASK: ffff8806399d6b40 CPU: 6 COMMAND: "kmpathd/6" #0 [ffff8806398c9d70] schedule at ffffffff81528762 #1 [ffff8806398c9e38] worker_thread at ffffffff81093d6c #2 [ffff8806398c9ee8] kthread at ffffffff81099eb6 #3 [ffff8806398c9f48] kernel_thread at ffffffff8100c20a PID: 2067 TASK: ffff880639b8d580 CPU: 7 COMMAND: "kmpathd/7" #0 [ffff880639d5dd70] schedule at ffffffff81528762 #1 [ffff880639d5de38] worker_thread at ffffffff81093d6c #2 [ffff880639d5dee8] kthread at ffffffff81099eb6 #3 [ffff880639d5df48] kernel_thread at ffffffff8100c20a PID: 2068 TASK: ffff8806398c20c0 CPU: 2 COMMAND: "kmpath_handlerd" #0 [ffff88063a2e1d70] schedule at ffffffff81528762 #1 [ffff88063a2e1e38] worker_thread at ffffffff81093d6c #2 [ffff88063a2e1ee8] kthread at ffffffff81099eb6 #3 [ffff88063a2e1f48] kernel_thread at ffffffff8100c20a PID: 2084 TASK: ffff880639d20080 CPU: 6 COMMAND: "kdmflush" #0 [ffff8806385f1d70] schedule at ffffffff81528762 #1 [ffff8806385f1e38] worker_thread at ffffffff81093d6c #2 [ffff8806385f1ee8] kthread at ffffffff81099eb6 #3 [ffff8806385f1f48] kernel_thread at ffffffff8100c20a PID: 2092 TASK: ffff8806387a1540 CPU: 6 COMMAND: "kdmflush" #0 [ffff88063a6cdd70] schedule at ffffffff81528762 #1 [ffff88063a6cde38] worker_thread at ffffffff81093d6c #2 [ffff88063a6cdee8] kthread at ffffffff81099eb6 #3 [ffff88063a6cdf48] kernel_thread at ffffffff8100c20a PID: 2097 TASK: ffff88063a2b4b40 CPU: 5 COMMAND: 
"kdmflush" #0 [ffff8806385f5d70] schedule at ffffffff81528762 #1 [ffff8806385f5e38] worker_thread at ffffffff81093d6c #2 [ffff8806385f5ee8] kthread at ffffffff81099eb6 #3 [ffff8806385f5f48] kernel_thread at ffffffff8100c20a PID: 2102 TASK: ffff880639c08a80 CPU: 5 COMMAND: "kdmflush" #0 [ffff8806387b7d70] schedule at ffffffff81528762 #1 [ffff8806387b7e38] worker_thread at ffffffff81093d6c #2 [ffff8806387b7ee8] kthread at ffffffff81099eb6 #3 [ffff8806387b7f48] kernel_thread at ffffffff8100c20a PID: 2107 TASK: ffff88063a37b4c0 CPU: 6 COMMAND: "kdmflush" #0 [ffff88063a7f3d70] schedule at ffffffff81528762 #1 [ffff88063a7f3e38] worker_thread at ffffffff81093d6c #2 [ffff88063a7f3ee8] kthread at ffffffff81099eb6 #3 [ffff88063a7f3f48] kernel_thread at ffffffff8100c20a PID: 2112 TASK: ffff880639c494c0 CPU: 5 COMMAND: "kdmflush" #0 [ffff880639c51d70] schedule at ffffffff81528762 #1 [ffff880639c51e38] worker_thread at ffffffff81093d6c #2 [ffff880639c51ee8] kthread at ffffffff81099eb6 #3 [ffff880639c51f48] kernel_thread at ffffffff8100c20a PID: 2117 TASK: ffff880639d66080 CPU: 6 COMMAND: "kdmflush" #0 [ffff88063b625d70] schedule at ffffffff81528762 #1 [ffff88063b625e38] worker_thread at ffffffff81093d6c #2 [ffff88063b625ee8] kthread at ffffffff81099eb6 #3 [ffff88063b625f48] kernel_thread at ffffffff8100c20a PID: 2901 TASK: ffff8806399ae0c0 CPU: 0 COMMAND: "kauditd" #0 [ffff88063863ddb0] schedule at ffffffff81528762 #1 [ffff88063863de78] kauditd_thread at ffffffff810d8e99 #2 [ffff88063863dee8] kthread at ffffffff81099eb6 #3 [ffff88063863df48] kernel_thread at ffffffff8100c20a PID: 2946 TASK: ffff8806399d9500 CPU: 1 COMMAND: "mthcacatas" #0 [ffff880637c91d70] schedule at ffffffff81528762 #1 [ffff880637c91e38] worker_thread at ffffffff81093d6c #2 [ffff880637c91ee8] kthread at ffffffff81099eb6 #3 [ffff880637c91f48] kernel_thread at ffffffff8100c20a PID: 2954 TASK: ffff880639c37500 CPU: 1 COMMAND: "mlx4" #0 [ffff8806387d9d70] schedule at ffffffff81528762 #1 [ffff8806387d9e38] worker_thread at ffffffff81093d6c #2 [ffff8806387d9ee8] kthread at ffffffff81099eb6 #3 [ffff8806387d9f48] kernel_thread at ffffffff8100c20a PID: 2955 TASK: ffff88063a2b5580 CPU: 0 COMMAND: "mlx4_sense" #0 [ffff880632801d70] schedule at ffffffff81528762 #1 [ffff880632801e38] worker_thread at ffffffff81093d6c #2 [ffff880632801ee8] kthread at ffffffff81099eb6 #3 [ffff880632801f48] kernel_thread at ffffffff8100c20a PID: 2958 TASK: ffff880639882040 CPU: 1 COMMAND: "mlx4_ib" #0 [ffff880637c93d70] schedule at ffffffff81528762 #1 [ffff880637c93e38] worker_thread at ffffffff81093d6c #2 [ffff880637c93ee8] kthread at ffffffff81099eb6 #3 [ffff880637c93f48] kernel_thread at ffffffff8100c20a PID: 2959 TASK: ffff880639d20ac0 CPU: 7 COMMAND: "ib_mad1" #0 [ffff88063a343d70] schedule at ffffffff81528762 #1 [ffff88063a343e38] worker_thread at ffffffff81093d6c #2 [ffff88063a343ee8] kthread at ffffffff81099eb6 #3 [ffff88063a343f48] kernel_thread at ffffffff8100c20a PID: 2967 TASK: ffff880639882a80 CPU: 1 COMMAND: "ib_mcast" #0 [ffff88063856fd70] schedule at ffffffff81528762 #1 [ffff88063856fe38] worker_thread at ffffffff81093d6c #2 [ffff88063856fee8] kthread at ffffffff81099eb6 #3 [ffff88063856ff48] kernel_thread at ffffffff8100c20a PID: 2968 TASK: ffff880639d2d540 CPU: 1 COMMAND: "ib_inform" #0 [ffff880637c5fd70] schedule at ffffffff81528762 #1 [ffff880637c5fe38] worker_thread at ffffffff81093d6c #2 [ffff880637c5fee8] kthread at ffffffff81099eb6 #3 [ffff880637c5ff48] kernel_thread at ffffffff8100c20a PID: 2969 TASK: ffff880639c48040 CPU: 0 COMMAND: 
"local_sa" #0 [ffff88063acf5d70] schedule at ffffffff81528762 #1 [ffff88063acf5e38] worker_thread at ffffffff81093d6c #2 [ffff88063acf5ee8] kthread at ffffffff81099eb6 #3 [ffff88063acf5f48] kernel_thread at ffffffff8100c20a PID: 2970 TASK: ffff880639c92b00 CPU: 0 COMMAND: "ib_cm/0" #0 [ffff88063a015d70] schedule at ffffffff81528762 #1 [ffff88063a015e38] worker_thread at ffffffff81093d6c #2 [ffff88063a015ee8] kthread at ffffffff81099eb6 #3 [ffff88063a015f48] kernel_thread at ffffffff8100c20a PID: 2971 TASK: ffff8806399d7580 CPU: 1 COMMAND: "ib_cm/1" #0 [ffff880637d99d70] schedule at ffffffff81528762 #1 [ffff880637d99e38] worker_thread at ffffffff81093d6c #2 [ffff880637d99ee8] kthread at ffffffff81099eb6 #3 [ffff880637d99f48] kernel_thread at ffffffff8100c20a PID: 2972 TASK: ffff8806387a6080 CPU: 2 COMMAND: "ib_cm/2" #0 [ffff880637d93d70] schedule at ffffffff81528762 #1 [ffff880637d93e38] worker_thread at ffffffff81093d6c #2 [ffff880637d93ee8] kthread at ffffffff81099eb6 #3 [ffff880637d93f48] kernel_thread at ffffffff8100c20a PID: 2973 TASK: ffff880639834080 CPU: 3 COMMAND: "ib_cm/3" #0 [ffff880637c4dd70] schedule at ffffffff81528762 #1 [ffff880637c4de38] worker_thread at ffffffff81093d6c #2 [ffff880637c4dee8] kthread at ffffffff81099eb6 #3 [ffff880637c4df48] kernel_thread at ffffffff8100c20a PID: 2974 TASK: ffff8806387a0b00 CPU: 4 COMMAND: "ib_cm/4" #0 [ffff88063a79dd70] schedule at ffffffff81528762 #1 [ffff88063a79de38] worker_thread at ffffffff81093d6c #2 [ffff88063a79dee8] kthread at ffffffff81099eb6 #3 [ffff88063a79df48] kernel_thread at ffffffff8100c20a PID: 2975 TASK: ffff88063a70cb00 CPU: 5 COMMAND: "ib_cm/5" #0 [ffff88063a067d70] schedule at ffffffff81528762 #1 [ffff88063a067e38] worker_thread at ffffffff81093d6c #2 [ffff88063a067ee8] kthread at ffffffff81099eb6 #3 [ffff88063a067f48] kernel_thread at ffffffff8100c20a PID: 2976 TASK: ffff88063a6d0080 CPU: 6 COMMAND: "ib_cm/6" #0 [ffff88063a71fd70] schedule at ffffffff81528762 #1 [ffff88063a71fe38] worker_thread at ffffffff81093d6c #2 [ffff88063a71fee8] kthread at ffffffff81099eb6 #3 [ffff88063a71ff48] kernel_thread at ffffffff8100c20a PID: 2977 TASK: ffff880639a86100 CPU: 7 COMMAND: "ib_cm/7" #0 [ffff8806385b5d70] schedule at ffffffff81528762 #1 [ffff8806385b5e38] worker_thread at ffffffff81093d6c #2 [ffff8806385b5ee8] kthread at ffffffff81099eb6 #3 [ffff8806385b5f48] kernel_thread at ffffffff8100c20a PID: 2978 TASK: ffff8806398c2b00 CPU: 3 COMMAND: "ipoib" #0 [ffff880638619d70] schedule at ffffffff81528762 #1 [ffff880638619e38] worker_thread at ffffffff81093d6c #2 [ffff880638619ee8] kthread at ffffffff81099eb6 #3 [ffff880638619f48] kernel_thread at ffffffff8100c20a PID: 3057 TASK: ffff88063a37a040 CPU: 2 COMMAND: "ib_addr" #0 [ffff880639829d70] schedule at ffffffff81528762 #1 [ffff880639829e38] worker_thread at ffffffff81093d6c #2 [ffff880639829ee8] kthread at ffffffff81099eb6 #3 [ffff880639829f48] kernel_thread at ffffffff8100c20a PID: 3058 TASK: ffff880639d2cb00 CPU: 1 COMMAND: "iw_cm_wq" #0 [ffff88063988dd70] schedule at ffffffff81528762 #1 [ffff88063988de38] worker_thread at ffffffff81093d6c #2 [ffff88063988dee8] kthread at ffffffff81099eb6 #3 [ffff88063988df48] kernel_thread at ffffffff8100c20a PID: 3059 TASK: ffff880639d67500 CPU: 7 COMMAND: "rdma_cm" #0 [ffff8806328f5d70] schedule at ffffffff81528762 #1 [ffff8806328f5e38] worker_thread at ffffffff81093d6c #2 [ffff8806328f5ee8] kthread at ffffffff81099eb6 #3 [ffff8806328f5f48] kernel_thread at ffffffff8100c20a PID: 3060 TASK: ffff8806387a00c0 CPU: 0 COMMAND: "rx_comp_wq/0" #0 
[ffff880632ac1d70] schedule at ffffffff81528762 #1 [ffff880632ac1e38] worker_thread at ffffffff81093d6c #2 [ffff880632ac1ee8] kthread at ffffffff81099eb6 #3 [ffff880632ac1f48] kernel_thread at ffffffff8100c20a PID: 3061 TASK: ffff880639b8c100 CPU: 1 COMMAND: "rx_comp_wq/1" #0 [ffff8806328f7d70] schedule at ffffffff81528762 #1 [ffff8806328f7e38] worker_thread at ffffffff81093d6c #2 [ffff8806328f7ee8] kthread at ffffffff81099eb6 #3 [ffff8806328f7f48] kernel_thread at ffffffff8100c20a PID: 3062 TASK: ffff88063a64cb40 CPU: 2 COMMAND: "rx_comp_wq/2" #0 [ffff8806328d7d70] schedule at ffffffff81528762 #1 [ffff8806328d7e38] worker_thread at ffffffff81093d6c #2 [ffff8806328d7ee8] kthread at ffffffff81099eb6 #3 [ffff8806328d7f48] kernel_thread at ffffffff8100c20a PID: 3063 TASK: ffff880639834ac0 CPU: 3 COMMAND: "rx_comp_wq/3" #0 [ffff880632875d70] schedule at ffffffff81528762 #1 [ffff880632875e38] worker_thread at ffffffff81093d6c #2 [ffff880632875ee8] kthread at ffffffff81099eb6 #3 [ffff880632875f48] kernel_thread at ffffffff8100c20a PID: 3064 TASK: ffff88063a6d1500 CPU: 4 COMMAND: "rx_comp_wq/4" #0 [ffff8806328ddd70] schedule at ffffffff81528762 #1 [ffff8806328dde38] worker_thread at ffffffff81093d6c #2 [ffff8806328ddee8] kthread at ffffffff81099eb6 #3 [ffff8806328ddf48] kernel_thread at ffffffff8100c20a PID: 3065 TASK: ffff880639c36080 CPU: 5 COMMAND: "rx_comp_wq/5" #0 [ffff88063286dd70] schedule at ffffffff81528762 #1 [ffff88063286de38] worker_thread at ffffffff81093d6c #2 [ffff88063286dee8] kthread at ffffffff81099eb6 #3 [ffff88063286df48] kernel_thread at ffffffff8100c20a PID: 3066 TASK: ffff8806399d8080 CPU: 6 COMMAND: "rx_comp_wq/6" #0 [ffff880632873d70] schedule at ffffffff81528762 #1 [ffff880632873e38] worker_thread at ffffffff81093d6c #2 [ffff880632873ee8] kthread at ffffffff81099eb6 #3 [ffff880632873f48] kernel_thread at ffffffff8100c20a PID: 3067 TASK: ffff88063a006b40 CPU: 7 COMMAND: "rx_comp_wq/7" #0 [ffff8806328e3d70] schedule at ffffffff81528762 #1 [ffff8806328e3e38] worker_thread at ffffffff81093d6c #2 [ffff8806328e3ee8] kthread at ffffffff81099eb6 #3 [ffff8806328e3f48] kernel_thread at ffffffff8100c20a PID: 3068 TASK: ffff88063a006100 CPU: 1 COMMAND: "sdp_wq" #0 [ffff88063281dd70] schedule at ffffffff81528762 #1 [ffff88063281de38] worker_thread at ffffffff81093d6c #2 [ffff88063281dee8] kthread at ffffffff81099eb6 #3 [ffff88063281df48] kernel_thread at ffffffff8100c20a PID: 3069 TASK: ffff88063a70c0c0 CPU: 1 COMMAND: "ib_fmr(mlx4_0)" #0 [ffff88063876ddd0] schedule at ffffffff81528762 #1 [ffff88063876de98] ib_fmr_cleanup_thread at ffffffffa025282a [ib_core] #2 [ffff88063876dee8] kthread at ffffffff81099eb6 #3 [ffff88063876df48] kernel_thread at ffffffff8100c20a PID: 3084 TASK: ffff880339af4ac0 CPU: 4 COMMAND: "multipathd" #0 [ffff88033527db38] schedule at ffffffff81528762 #1 [ffff88033527dc00] futex_wait_queue_me at ffffffff810ae9a9 #2 [ffff88033527dc40] futex_wait at ffffffff810afab8 #3 [ffff88033527ddc0] do_futex at ffffffff810b1221 #4 [ffff88033527def0] sys_futex at ffffffff810b1cdb #5 [ffff88033527df80] system_call_fastpath at ffffffff8100b072 RIP: 0000003bb120b5bc RSP: 00007fff17a1afa0 RFLAGS: 00010202 RAX: 00000000000000ca RBX: ffffffff8100b072 RCX: 0000003bb0ee8b31 RDX: 0000000000000001 RSI: 0000000000000080 RDI: 0000000000610a44 RBP: 000000000040cc97 R8: 0000000000610a00 R9: 0000000000000000 R10: 0000000000000000 R11: 0000000000000246 R12: 00007fff17a1b080 R13: 0000000000610b28 R14: 0000000000610b68 R15: 0000000000610b20 ORIG_RAX: 00000000000000ca CS: 0033 SS: 002b PID: 3090 
TASK: ffff880639838100 CPU: 4 COMMAND: "multipathd" #0 [ffff8806328cdb38] schedule at ffffffff81528762 #1 [ffff8806328cdc00] futex_wait_queue_me at ffffffff810ae9a9 #2 [ffff8806328cdc40] futex_wait at ffffffff810afab8 #3 [ffff8806328cddc0] do_futex at ffffffff810b1221 #4 [ffff8806328cdef0] sys_futex at ffffffff810b1cdb #5 [ffff8806328cdf80] system_call_fastpath at ffffffff8100b072 RIP: 0000003bb120b5bc RSP: 00007fced37e3b78 RFLAGS: 00010202 RAX: 00000000000000ca RBX: ffffffff8100b072 RCX: 0000000000000000 RDX: 000000000000001f RSI: 0000000000000080 RDI: 00000000025871d4 RBP: 0000000000610ae0 R8: 0000000002587100 R9: 000000000000000f R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000003 R13: 0000000000000000 R14: 00007fced37e49c0 R15: 00007fff17a1b120 ORIG_RAX: 00000000000000ca CS: 0033 SS: 002b PID: 3091 TASK: ffff88063a6d0ac0 CPU: 4 COMMAND: "multipathd" #0 [ffff88063289d998] schedule at ffffffff81528762 #1 [ffff88063289da60] schedule_hrtimeout_range at ffffffff8152a248 #2 [ffff88063289db00] poll_schedule_timeout at ffffffff811a03a9 #3 [ffff88063289db20] do_sys_poll at ffffffff811a0b47 #4 [ffff88063289df40] sys_poll at ffffffff811a0e01 #5 [ffff88063289df80] system_call_fastpath at ffffffff8100b072 RIP: 0000003bb0edf343 RSP: 00007fced35c8d10 RFLAGS: 00000293 RAX: 0000000000000007 RBX: ffffffff8100b072 RCX: 0000003bb0edf343 RDX: 0000000000001388 RSI: 0000000000000001 RDI: 00007fcecc001cd0 RBP: 0000000000000010 R8: 0000000000000000 R9: 00007fcecc000070 R10: 000000000001f170 R11: 0000000000000293 R12: 0000000000000008 R13: 0000000000610b90 R14: 0000000000000000 R15: 0000000000000001 ORIG_RAX: 0000000000000007 CS: 0033 SS: 002b PID: 3092 TASK: ffff880639c154c0 CPU: 5 COMMAND: "multipathd" #0 [ffff88063b63b8b8] schedule at ffffffff81528762 #1 [ffff88063b63b980] schedule_timeout at ffffffff81529655 #2 [ffff88063b63ba30] __skb_recv_datagram at ffffffff814551c7 #3 [ffff88063b63bae0] skb_recv_datagram at ffffffff81455244 #4 [ffff88063b63bb00] unix_dgram_recvmsg at ffffffff814f600f #5 [ffff88063b63bbc0] sock_recvmsg at ffffffff8144a573 #6 [ffff88063b63bd80] __sys_recvmsg at ffffffff814484fc #7 [ffff88063b63bf10] sys_recvmsg at ffffffff81449039 #8 [ffff88063b63bf80] system_call_fastpath at ffffffff8100b072 RIP: 0000003bb120ec3d RSP: 00007fced31b4b60 RFLAGS: 00010202 RAX: 000000000000002f RBX: ffffffff8100b072 RCX: 00007fcec8003370 RDX: 0000000000000000 RSI: 00007fced31b4cc0 RDI: 0000000000000007 RBP: 0000000000610b68 R8: 0000003ee0e42ea0 R9: 0000000004000001 R10: 0000000000000001 R11: 0000000000000293 R12: 00007fced31b4d00 R13: 00007fced31b4cc0 R14: 00007fcec80008c0 R15: 00007fced31b4cc0 ORIG_RAX: 000000000000002f CS: 0033 SS: 002b PID: 3140 TASK: ffff88063a7214c0 CPU: 1 COMMAND: "multipathd" #0 [ffff880632a69b08] schedule at ffffffff81528762 #1 [ffff880632a69bd0] dm_wait_event at ffffffffa0213b21 [dm_mod] #2 [ffff880632a69c50] dev_wait at ffffffffa021bb3c [dm_mod] #3 [ffff880632a69c90] ctl_ioctl at ffffffffa021ca84 [dm_mod] #4 [ffff880632a69e50] dm_ctl_ioctl at ffffffffa021ccd3 [dm_mod] #5 [ffff880632a69e60] vfs_ioctl at ffffffff8119e202 #6 [ffff880632a69ea0] do_vfs_ioctl at ffffffff8119e3a4 #7 [ffff880632a69f30] sys_ioctl at ffffffff8119e921 #8 [ffff880632a69f80] system_call_fastpath at ffffffff8100b072 RIP: 0000003bb0ee0b37 RSP: 00007fced2d9e9b8 RFLAGS: 00010206 RAX: 0000000000000010 RBX: ffffffff8100b072 RCX: 0000000000000000 RDX: 00007fceb80009b0 RSI: 00000000c138fd08 RDI: 0000000000000003 RBP: 0000003ab3a32434 R8: 0000003ab3a32586 R9: 0000003ab3a32434 R10: 0000003ab3a32434 R11: 
0000000000000246 R12: 00007fceb80009e0 R13: 00007fceb80009b0 R14: 0000003ab3a32434 R15: 00007fceb80008c0 ORIG_RAX: 0000000000000010 CS: 0033 SS: 002b PID: 3142 TASK: ffff8806399d6100 CPU: 5 COMMAND: "multipathd" #0 [ffff880632a6bb08] schedule at ffffffff81528762 #1 [ffff880632a6bbd0] dm_wait_event at ffffffffa0213b21 [dm_mod] #2 [ffff880632a6bc50] dev_wait at ffffffffa021bb3c [dm_mod] #3 [ffff880632a6bc90] ctl_ioctl at ffffffffa021ca84 [dm_mod] #4 [ffff880632a6be50] dm_ctl_ioctl at ffffffffa021ccd3 [dm_mod] #5 [ffff880632a6be60] vfs_ioctl at ffffffff8119e202 #6 [ffff880632a6bea0] do_vfs_ioctl at ffffffff8119e3a4 #7 [ffff880632a6bf30] sys_ioctl at ffffffff8119e921 #8 [ffff880632a6bf80] system_call_fastpath at ffffffff8100b072 RIP: 0000003bb0ee0b37 RSP: 00007fced2d969b8 RFLAGS: 00010206 RAX: 0000000000000010 RBX: ffffffff8100b072 RCX: 0000000000000000 RDX: 00007fcebc0009b0 RSI: 00000000c138fd08 RDI: 0000000000000003 RBP: 0000003ab3a32434 R8: 0000003ab3a32586 R9: 0000003ab3a32434 R10: 0000003ab3a32434 R11: 0000000000000246 R12: 00007fcebc0009e0 R13: 00007fcebc0009b0 R14: 0000003ab3a32434 R15: 00007fcebc0008c0 ORIG_RAX: 0000000000000010 CS: 0033 SS: 002b PID: 3144 TASK: ffff880639c14a80 CPU: 2 COMMAND: "multipathd" #0 [ffff880632911b08] schedule at ffffffff81528762 #1 [ffff880632911bd0] dm_wait_event at ffffffffa0213b21 [dm_mod] #2 [ffff880632911c50] dev_wait at ffffffffa021bb3c [dm_mod] #3 [ffff880632911c90] ctl_ioctl at ffffffffa021ca84 [dm_mod] #4 [ffff880632911e50] dm_ctl_ioctl at ffffffffa021ccd3 [dm_mod] #5 [ffff880632911e60] vfs_ioctl at ffffffff8119e202 #6 [ffff880632911ea0] do_vfs_ioctl at ffffffff8119e3a4 #7 [ffff880632911f30] sys_ioctl at ffffffff8119e921 #8 [ffff880632911f80] system_call_fastpath at ffffffff8100b072 RIP: 0000003bb0ee0b37 RSP: 00007fced2d8e9b8 RFLAGS: 00010206 RAX: 0000000000000010 RBX: ffffffff8100b072 RCX: 0000000000000000 RDX: 00007fceb40009b0 RSI: 00000000c138fd08 RDI: 0000000000000003 RBP: 0000003ab3a32434 R8: 0000003ab3a32586 R9: 0000003ab3a32434 R10: 0000003ab3a32434 R11: 0000000000000246 R12: 00007fceb40009e0 R13: 00007fceb40009b0 R14: 0000003ab3a32434 R15: 00007fceb40008c0 ORIG_RAX: 0000000000000010 CS: 0033 SS: 002b PID: 3146 TASK: ffff88063989eb00 CPU: 4 COMMAND: "multipathd" #0 [ffff880632913b08] schedule at ffffffff81528762 #1 [ffff880632913bd0] dm_wait_event at ffffffffa0213b21 [dm_mod] #2 [ffff880632913c50] dev_wait at ffffffffa021bb3c [dm_mod] #3 [ffff880632913c90] ctl_ioctl at ffffffffa021ca84 [dm_mod] #4 [ffff880632913e50] dm_ctl_ioctl at ffffffffa021ccd3 [dm_mod] #5 [ffff880632913e60] vfs_ioctl at ffffffff8119e202 #6 [ffff880632913ea0] do_vfs_ioctl at ffffffff8119e3a4 #7 [ffff880632913f30] sys_ioctl at ffffffff8119e921 #8 [ffff880632913f80] system_call_fastpath at ffffffff8100b072 RIP: 0000003bb0ee0b37 RSP: 00007fced2d869b8 RFLAGS: 00010206 RAX: 0000000000000010 RBX: ffffffff8100b072 RCX: 0000000000000000 RDX: 00007fcec00009b0 RSI: 00000000c138fd08 RDI: 0000000000000003 RBP: 0000003ab3a32434 R8: 0000003ab3a32586 R9: 0000003ab3a32434 R10: 0000003ab3a32434 R11: 0000000000000246 R12: 00007fcec00009e0 R13: 00007fcec00009b0 R14: 0000003ab3a32434 R15: 00007fcec00008c0 ORIG_RAX: 0000000000000010 CS: 0033 SS: 002b PID: 3147 TASK: ffff880639bab580 CPU: 1 COMMAND: "multipathd" #0 [ffff880632915b08] schedule at ffffffff81528762 #1 [ffff880632915bd0] dm_wait_event at ffffffffa0213b21 [dm_mod] #2 [ffff880632915c50] dev_wait at ffffffffa021bb3c [dm_mod] #3 [ffff880632915c90] ctl_ioctl at ffffffffa021ca84 [dm_mod] #4 [ffff880632915e50] dm_ctl_ioctl at 
ffffffffa021ccd3 [dm_mod] #5 [ffff880632915e60] vfs_ioctl at ffffffff8119e202 #6 [ffff880632915ea0] do_vfs_ioctl at ffffffff8119e3a4 #7 [ffff880632915f30] sys_ioctl at ffffffff8119e921 #8 [ffff880632915f80] system_call_fastpath at ffffffff8100b072 RIP: 0000003bb0ee0b37 RSP: 00007fced2d7e9b8 RFLAGS: 00010206 RAX: 0000000000000010 RBX: ffffffff8100b072 RCX: 0000000000000000 RDX: 00007fcea80009b0 RSI: 00000000c138fd08 RDI: 0000000000000003 RBP: 0000003ab3a32434 R8: 0000003ab3a32586 R9: 0000003ab3a32434 R10: 0000003ab3a32434 R11: 0000000000000246 R12: 00007fcea80009e0 R13: 00007fcea80009b0 R14: 0000003ab3a32434 R15: 00007fcea80008c0 ORIG_RAX: 0000000000000010 CS: 0033 SS: 002b PID: 3148 TASK: ffff880639baab40 CPU: 6 COMMAND: "multipathd" #0 [ffff880632917b08] schedule at ffffffff81528762 #1 [ffff880632917bd0] dm_wait_event at ffffffffa0213b21 [dm_mod] #2 [ffff880632917c50] dev_wait at ffffffffa021bb3c [dm_mod] #3 [ffff880632917c90] ctl_ioctl at ffffffffa021ca84 [dm_mod] #4 [ffff880632917e50] dm_ctl_ioctl at ffffffffa021ccd3 [dm_mod] #5 [ffff880632917e60] vfs_ioctl at ffffffff8119e202 #6 [ffff880632917ea0] do_vfs_ioctl at ffffffff8119e3a4 #7 [ffff880632917f30] sys_ioctl at ffffffff8119e921 #8 [ffff880632917f80] system_call_fastpath at ffffffff8100b072 RIP: 0000003bb0ee0b37 RSP: 00007fced2d769b8 RFLAGS: 00010206 RAX: 0000000000000010 RBX: ffffffff8100b072 RCX: 0000000000000000 RDX: 00007fceb00009b0 RSI: 00000000c138fd08 RDI: 0000000000000003 RBP: 0000003ab3a32434 R8: 0000003ab3a32586 R9: 0000003ab3a32434 R10: 0000003ab3a32434 R11: 0000000000000246 R12: 00007fceb00009e0 R13: 00007fceb00009b0 R14: 0000003ab3a32434 R15: 00007fceb00008c0 ORIG_RAX: 0000000000000010 CS: 0033 SS: 002b PID: 3149 TASK: ffff8806387a6ac0 CPU: 2 COMMAND: "multipathd" #0 [ffff880632919b08] schedule at ffffffff81528762 #1 [ffff880632919bd0] dm_wait_event at ffffffffa0213b21 [dm_mod] #2 [ffff880632919c50] dev_wait at ffffffffa021bb3c [dm_mod] #3 [ffff880632919c90] ctl_ioctl at ffffffffa021ca84 [dm_mod] #4 [ffff880632919e50] dm_ctl_ioctl at ffffffffa021ccd3 [dm_mod] #5 [ffff880632919e60] vfs_ioctl at ffffffff8119e202 #6 [ffff880632919ea0] do_vfs_ioctl at ffffffff8119e3a4 #7 [ffff880632919f30] sys_ioctl at ffffffff8119e921 #8 [ffff880632919f80] system_call_fastpath at ffffffff8100b072 RIP: 0000003bb0ee0b37 RSP: 00007fced2d6e9b8 RFLAGS: 00010206 RAX: 0000000000000010 RBX: ffffffff8100b072 RCX: 0000000000000000 RDX: 00007fcea00009b0 RSI: 00000000c138fd08 RDI: 0000000000000003 RBP: 0000003ab3a32434 R8: 0000003ab3a32586 R9: 0000003ab3a32434 R10: 0000003ab3a32434 R11: 0000000000000246 R12: 00007fcea00009e0 R13: 00007fcea00009b0 R14: 0000003ab3a32434 R15: 00007fcea00008c0 ORIG_RAX: 0000000000000010 CS: 0033 SS: 002b PID: 3150 TASK: ffff88063989f540 CPU: 5 COMMAND: "multipathd" #0 [ffff88063291bda8] schedule at ffffffff81528762 #1 [ffff88063291be70] do_nanosleep at ffffffff8152a38b #2 [ffff88063291bea0] hrtimer_nanosleep at ffffffff8109f5f4 #3 [ffff88063291bf50] sys_nanosleep at ffffffff8109f71e #4 [ffff88063291bf80] system_call_fastpath at ffffffff8100b072 RIP: 0000003bb0eaccdd RSP: 00007fced2d66ba8 RFLAGS: 00010202 RAX: 0000000000000023 RBX: ffffffff8100b072 RCX: 0000000000000008 RDX: 0000000000000000 RSI: 00007fced2d66d10 RDI: 00007fced2d66d10 RBP: 00007fced2d66c10 R8: 00007fced2d66b70 R9: 000000000259e360 R10: 0000000000000008 R11: 0000000000000293 R12: 00000000ffffffff R13: 00007fced2d66c90 R14: 0000000000000000 R15: 0000000000000001 ORIG_RAX: 0000000000000023 CS: 0033 SS: 002b PID: 3151 TASK: ffff880639d21500 CPU: 0 COMMAND: 
"multipathd" #0 [ffff88063291db38] schedule at ffffffff81528762 #1 [ffff88063291dc00] futex_wait_queue_me at ffffffff810ae9a9 #2 [ffff88063291dc40] futex_wait at ffffffff810afab8 #3 [ffff88063291ddc0] do_futex at ffffffff810b1221 #4 [ffff88063291def0] sys_futex at ffffffff810b1cdb #5 [ffff88063291df80] system_call_fastpath at ffffffff8100b072 RIP: 0000003bb120b5bc RSP: 00007fced2d558c0 RFLAGS: 00000202 RAX: 00000000000000ca RBX: ffffffff8100b072 RCX: 0000000000000001 RDX: 0000000000000039 RSI: 0000000000000080 RDI: 0000003ee0e42ea4 RBP: 0000003ee0e43518 R8: 0000003ee0e42e00 R9: 000000000000001c R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000003 R13: 0000000000000000 R14: 00007fced2d569c0 R15: 0000003ee0e41788 ORIG_RAX: 00000000000000ca CS: 0033 SS: 002b PID: 3541 TASK: ffff880639820b40 CPU: 4 COMMAND: "auditd" #0 [ffff880632a47ce8] schedule at ffffffff81528762 #1 [ffff880632a47db0] schedule_hrtimeout_range at ffffffff8152a248 #2 [ffff880632a47e50] ep_poll at ffffffff811d201d #3 [ffff880632a47f40] sys_epoll_wait at ffffffff811d2165 #4 [ffff880632a47f80] system_call_fastpath at ffffffff8100b072 RIP: 00007fc2707f5163 RSP: 00007fffdd709f00 RFLAGS: 00000293 RAX: 00000000000000e8 RBX: ffffffff8100b072 RCX: 00007fc270aaebd3 RDX: 0000000000000040 RSI: 00007fc272121b60 RDI: 0000000000000006 RBP: 00007fc2719ba460 R8: 0000000000000000 R9: 00007fc2717a4dea R10: 000000000000e95f R11: 0000000000000293 R12: 0000000000000000 R13: 00007fc2719ba4c8 R14: 404ddf1a9fbe76c9 R15: 0000000000000000 ORIG_RAX: 00000000000000e8 CS: 0033 SS: 002b PID: 3542 TASK: ffff880338c9a080 CPU: 5 COMMAND: "auditd" #0 [ffff88033525bb38] schedule at ffffffff81528762 #1 [ffff88033525bc00] futex_wait_queue_me at ffffffff810ae9a9 #2 [ffff88033525bc40] futex_wait at ffffffff810afab8 #3 [ffff88033525bdc0] do_futex at ffffffff810b1221 #4 [ffff88033525bef0] sys_futex at ffffffff810b1cdb #5 [ffff88033525bf80] system_call_fastpath at ffffffff8100b072 RIP: 00007fc270aab5bc RSP: 00007fc2704fcd60 RFLAGS: 00000246 RAX: 00000000000000ca RBX: ffffffff8100b072 RCX: 00000000015d0665 RDX: 0000000000013d57 RSI: 0000000000000080 RDI: 00007fc2719ba294 RBP: 00007fc2719ba268 R8: 00007fc2719ba200 R9: 0000000000009eab R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000003 R13: 0000000000000000 R14: 00007fc272126b00 R15: 00007fc2719ba290 ORIG_RAX: 00000000000000ca CS: 0033 SS: 002b PID: 4342 TASK: ffff880631c14a80 CPU: 0 COMMAND: "irqbalance" #0 [ffff880639a03da8] schedule at ffffffff81528762 #1 [ffff880639a03e70] do_nanosleep at ffffffff8152a38b #2 [ffff880639a03ea0] hrtimer_nanosleep at ffffffff8109f5f4 #3 [ffff880639a03f50] sys_nanosleep at ffffffff8109f71e #4 [ffff880639a03f80] system_call_fastpath at ffffffff8100b072 RIP: 00007f9b0cee8cc0 RSP: 00007fff5b73c740 RFLAGS: 00010206 RAX: 0000000000000023 RBX: ffffffff8100b072 RCX: 00007f9b0da7f008 RDX: 0000000000000009 RSI: 0000000000000000 RDI: 00007fff5b73f5c0 RBP: 00007f9b0dc8ed34 R8: 00007f9b0da6a720 R9: 0000000000000000 R10: 00000000ffffffff R11: 0000000000000246 R12: 00007f9b0dc8ed3c R13: 0000000000000008 R14: 000000003b83b3d8 R15: 0000000000000009 ORIG_RAX: 0000000000000023 CS: 0033 SS: 002b PID: 4356 TASK: ffff88063297b500 CPU: 3 COMMAND: "rpcbind" #0 [ffff8806329c7998] schedule at ffffffff81528762 #1 [ffff8806329c7a60] schedule_hrtimeout_range at ffffffff8152a248 #2 [ffff8806329c7b00] poll_schedule_timeout at ffffffff811a03a9 #3 [ffff8806329c7b20] do_sys_poll at ffffffff811a0b47 #4 [ffff8806329c7f40] sys_poll at ffffffff811a0e01 #5 [ffff8806329c7f80] system_call_fastpath 
at ffffffff8100b072 RIP: 00007f56f798e308 RSP: 00007ffffed05d28 RFLAGS: 00000246 RAX: 0000000000000007 RBX: ffffffff8100b072 RCX: 0000000000000000 RDX: 0000000000007530 RSI: 0000000000000007 RDI: 00007ffffed05e50 RBP: 00007f56f7c3fd60 R8: 0000000000000000 R9: 0000000000000000 R10: 00007f56f86ad38c R11: 0000000000000246 R12: 00007f56f86adb20 R13: 00007f56f8ce7c60 R14: 0000000000000000 R15: 0000000000000007 ORIG_RAX: 0000000000000007 CS: 0033 SS: 002b PID: 4379 TASK: ffff880632928040 CPU: 0 COMMAND: "rpciod/0" #0 [ffff880632a21d70] schedule at ffffffff81528762 #1 [ffff880632a21e38] worker_thread at ffffffff81093d6c #2 [ffff880632a21ee8] kthread at ffffffff81099eb6 #3 [ffff880632a21f48] kernel_thread at ffffffff8100c20a PID: 4380 TASK: ffff880632932040 CPU: 1 COMMAND: "rpciod/1" #0 [ffff880632bc5d70] schedule at ffffffff81528762 #1 [ffff880632bc5e38] worker_thread at ffffffff81093d6c #2 [ffff880632bc5ee8] kthread at ffffffff81099eb6 #3 [ffff880632bc5f48] kernel_thread at ffffffff8100c20a PID: 4381 TASK: ffff880632a4eb00 CPU: 2 COMMAND: "rpciod/2" #0 [ffff8806329a7d70] schedule at ffffffff81528762 #1 [ffff8806329a7e38] worker_thread at ffffffff81093d6c #2 [ffff8806329a7ee8] kthread at ffffffff81099eb6 #3 [ffff8806329a7f48] kernel_thread at ffffffff8100c20a PID: 4382 TASK: ffff880639a8ab00 CPU: 3 COMMAND: "rpciod/3" #0 [ffff880632a07d70] schedule at ffffffff81528762 #1 [ffff880632a07e38] worker_thread at ffffffff81093d6c #2 [ffff880632a07ee8] kthread at ffffffff81099eb6 #3 [ffff880632a07f48] kernel_thread at ffffffff8100c20a PID: 4383 TASK: ffff880632948b40 CPU: 4 COMMAND: "rpciod/4" #0 [ffff880632aabd70] schedule at ffffffff81528762 #1 [ffff880632aabe38] worker_thread at ffffffff81093d6c #2 [ffff880632aabee8] kthread at ffffffff81099eb6 #3 [ffff880632aabf48] kernel_thread at ffffffff8100c20a PID: 4384 TASK: ffff880632ae4b00 CPU: 5 COMMAND: "rpciod/5" #0 [ffff880632a53d70] schedule at ffffffff81528762 #1 [ffff880632a53e38] worker_thread at ffffffff81093d6c #2 [ffff880632a53ee8] kthread at ffffffff81099eb6 #3 [ffff880632a53f48] kernel_thread at ffffffff8100c20a PID: 4385 TASK: ffff880639956040 CPU: 6 COMMAND: "rpciod/6" #0 [ffff880632a29d70] schedule at ffffffff81528762 #1 [ffff880632a29e38] worker_thread at ffffffff81093d6c #2 [ffff880632a29ee8] kthread at ffffffff81099eb6 #3 [ffff880632a29f48] kernel_thread at ffffffff8100c20a PID: 4386 TASK: ffff880631c14040 CPU: 7 COMMAND: "rpciod/7" #0 [ffff880632a77d70] schedule at ffffffff81528762 #1 [ffff880632a77e38] worker_thread at ffffffff81093d6c #2 [ffff880632a77ee8] kthread at ffffffff81099eb6 #3 [ffff880632a77f48] kernel_thread at ffffffff8100c20a PID: 4390 TASK: ffff88033a00d4c0 CPU: 6 COMMAND: "rpc.idmapd" #0 [ffff88033389dce8] schedule at ffffffff81528762 #1 [ffff88033389ddb0] schedule_hrtimeout_range at ffffffff8152a2bd #2 [ffff88033389de50] ep_poll at ffffffff811d201d #3 [ffff88033389df40] sys_epoll_wait at ffffffff811d2165 #4 [ffff88033389df80] system_call_fastpath at ffffffff8100b072 RIP: 00007f27cd61d143 RSP: 00007fff7e791768 RFLAGS: 00000246 RAX: 00000000000000e8 RBX: ffffffff8100b072 RCX: ffffffffffffffff RDX: 0000000000000020 RSI: 00007f27cf8a5c30 RDI: 0000000000000003 RBP: 00007f27cf8a5c30 R8: 0000000000000000 R9: 0000000000000001 R10: 00000000ffffffff R11: 0000000000000246 R12: 00007f27cf8a5c00 R13: 0000000000000000 R14: 00007fff7e7917e0 R15: 00007f27cf8a8660 ORIG_RAX: 00000000000000e8 CS: 0033 SS: 002b PID: 4526 TASK: ffff88033910d580 CPU: 6 COMMAND: "dbus-daemon" #0 [ffff8803352c1998] schedule at ffffffff81528762 #1 
[ffff8803352c1a60] schedule_hrtimeout_range at ffffffff8152a2bd #2 [ffff8803352c1b00] poll_schedule_timeout at ffffffff811a03a9 #3 [ffff8803352c1b20] do_sys_poll at ffffffff811a0b47 #4 [ffff8803352c1f40] sys_poll at ffffffff811a0e01 #5 [ffff8803352c1f80] system_call_fastpath at ffffffff8100b072 RIP: 00007f7faa857308 RSP: 00007fff8275aa60 RFLAGS: 00000206 RAX: 0000000000000007 RBX: ffffffff8100b072 RCX: 00007f7fab9e54e0 RDX: ffffffffffffffff RSI: 0000000000000007 RDI: 00007fff8275b140 RBP: 0000000000000000 R8: ffffffffffffffff R9: 00007f7faab07ed0 R10: 00007f7faab07ed0 R11: 0000000000000246 R12: 0000000000000000 R13: 00007fff8275b140 R14: 0000000000000007 R15: 00007f7fad788180 ORIG_RAX: 0000000000000007 CS: 0033 SS: 002b PID: 4545 TASK: ffff88033388eac0 CPU: 5 COMMAND: "rpc.statd" #0 [ffff88033392f848] schedule at ffffffff81528762 #1 [ffff88033392f910] schedule_hrtimeout_range at ffffffff8152a2bd #2 [ffff88033392f9b0] poll_schedule_timeout at ffffffff811a03a9 #3 [ffff88033392f9d0] do_select at ffffffff811a146c #4 [ffff88033392fd70] core_sys_select at ffffffff811a173a #5 [ffff88033392ff10] sys_select at ffffffff811a1ac7 #6 [ffff88033392ff80] system_call_fastpath at ffffffff8100b072 RIP: 00007f705f9995c3 RSP: 00007fff89e92048 RFLAGS: 00010246 RAX: 0000000000000017 RBX: ffffffff8100b072 RCX: 0000000000000000 RDX: 0000000000000000 RSI: 00007fff89e934f0 RDI: 0000000000000400 RBP: 00007fff89e93588 R8: 0000000000000000 R9: 0000000000000000 R10: 0000000000000000 R11: 0000000000000246 R12: 00007f7060eff260 R13: 0000000000000001 R14: 00007fff89e93570 R15: 00007f7060cf4c99 ORIG_RAX: 0000000000000017 CS: 0033 SS: 002b PID: 4567 TASK: ffff88033995cb40 CPU: 0 COMMAND: "ypbind" #0 [ffff8803350bf998] schedule at ffffffff81528762 #1 [ffff8803350bfa60] schedule_hrtimeout_range at ffffffff8152a2bd #2 [ffff8803350bfb00] poll_schedule_timeout at ffffffff811a03a9 #3 [ffff8803350bfb20] do_sys_poll at ffffffff811a0b47 #4 [ffff8803350bff40] sys_poll at ffffffff811a0e01 #5 [ffff8803350bff80] system_call_fastpath at ffffffff8100b072 RIP: 0000003bb0edf343 RSP: 00007fffb7f90f98 RFLAGS: 00010283 RAX: 0000000000000007 RBX: ffffffff8100b072 RCX: 0000000000000030 RDX: ffffffffffffffff RSI: 0000000000000002 RDI: 0000000001826270 RBP: 0000000000000002 R8: 0000000000000000 R9: 00000000000011d7 R10: 0000000000000040 R11: 0000000000000293 R12: 0000000000000000 R13: 0000003bb11937c8 R14: 0000000000000002 R15: 0000000001826270 ORIG_RAX: 0000000000000007 CS: 0033 SS: 002b PID: 4569 TASK: ffff8806399ed500 CPU: 3 COMMAND: "ypbind" #0 [ffff880631c2fd08] schedule at ffffffff81528762 #1 [ffff880631c2fdd0] schedule_timeout at ffffffff81529655 #2 [ffff880631c2fe80] schedule_timeout_interruptible at ffffffff8152977e #3 [ffff880631c2fe90] sys_rt_sigtimedwait at ffffffff8108bdc6 #4 [ffff880631c2ff80] system_call_fastpath at ffffffff8100b072 RIP: 0000003bb120f4b5 RSP: 00007fabeca21da8 RFLAGS: 00010246 RAX: 0000000000000080 RBX: ffffffff8100b072 RCX: 0000000000000000 RDX: 0000000000000000 RSI: 0000000000000000 RDI: 00007fabeca21db0 RBP: 00007fabeca21e7c R8: 0000000000000000 R9: 0000000000000080 R10: 0000000000000008 R11: 0000000000000246 R12: 0000000000000003 R13: 0000000000000000 R14: 00007fabeca229c0 R15: 00007fabeca21e7c ORIG_RAX: 0000000000000080 CS: 0033 SS: 002b PID: 4573 TASK: ffff88063280aa80 CPU: 4 COMMAND: "ypbind" #0 [ffff8806385a3da8] schedule at ffffffff81528762 #1 [ffff8806385a3e70] do_nanosleep at ffffffff8152a38b #2 [ffff8806385a3ea0] hrtimer_nanosleep at ffffffff8109f5f4 #3 [ffff8806385a3f50] sys_nanosleep at ffffffff8109f71e 
#4 [ffff8806385a3f80] system_call_fastpath at ffffffff8100b072 RIP: 0000003bb0eaccdd RSP: 00007fabeb61fbb8 RFLAGS: 00010246 RAX: 0000000000000023 RBX: ffffffff8100b072 RCX: 0000000000000000 RDX: 00007fabeb61fd60 RSI: 00007fabeb61fe60 RDI: 00007fabeb61fe60 RBP: 00007fabeb61fd60 R8: 000000000060aa30 R9: 0000000000000000 R10: 0000000000000008 R11: 0000000000000293 R12: 00000000ffffffff R13: 00007fabeb61fde0 R14: 0000000000000000 R15: 0000000000000014 ORIG_RAX: 0000000000000023 CS: 0033 SS: 002b PID: 4595 TASK: ffff880339af5500 CPU: 1 COMMAND: "cupsd" #0 [ffff880335235ce8] schedule at ffffffff81528762 #1 [ffff880335235db0] schedule_hrtimeout_range at ffffffff8152a2bd #2 [ffff880335235e50] ep_poll at ffffffff811d201d #3 [ffff880335235f40] sys_epoll_wait at ffffffff811d2165 #4 [ffff880335235f80] system_call_fastpath at ffffffff8100b072 RIP: 00007fa18cf1f143 RSP: 00007fffcb0547f8 RFLAGS: 00010246 RAX: 00000000000000e8 RBX: ffffffff8100b072 RCX: 20c49ba5e353f7cf RDX: 0000000000001000 RSI: 00007fa190d4bb10 RDI: 0000000000000004 RBP: 0000000000015180 R8: 0000000000000000 R9: 0000000000000000 R10: 00000000ffffffff R11: 0000000000000246 R12: 00007fa19049c2e0 R13: 0000000000000000 R14: 0000000000000001 R15: 00007fffcb054d40 ORIG_RAX: 00000000000000e8 CS: 0033 SS: 002b PID: 4616 TASK: ffff880632901540 CPU: 1 COMMAND: "kslowd000" #0 [ffff8806385e7da0] schedule at ffffffff81528762 #1 [ffff8806385e7e68] slow_work_thread at ffffffff811142eb #2 [ffff8806385e7ee8] kthread at ffffffff81099eb6 #3 [ffff8806385e7f48] kernel_thread at ffffffff8100c20a PID: 4617 TASK: ffff880632a2b500 CPU: 1 COMMAND: "kslowd001" #0 [ffff880639987da0] schedule at ffffffff81528762 #1 [ffff880639987e68] slow_work_thread at ffffffff811142eb #2 [ffff880639987ee8] kthread at ffffffff81099eb6 #3 [ffff880639987f48] kernel_thread at ffffffff8100c20a PID: 4618 TASK: ffff880632a8ab40 CPU: 1 COMMAND: "nfsiod" #0 [ffff880639d47d70] schedule at ffffffff81528762 #1 [ffff880639d47e38] worker_thread at ffffffff81093d6c #2 [ffff880639d47ee8] kthread at ffffffff81099eb6 #3 [ffff880639d47f48] kernel_thread at ffffffff8100c20a PID: 4625 TASK: ffff88033a1814c0 CPU: 5 COMMAND: "lockd" #0 [ffff880333865c50] schedule at ffffffff81528762 #1 [ffff880333865d18] schedule_timeout at ffffffff81529655 #2 [ffff880333865dc8] svc_recv at ffffffffa039c265 [sunrpc] #3 [ffff880333865e58] lockd at ffffffffa03fc0a1 [lockd] #4 [ffff880333865ee8] kthread at ffffffff81099eb6 #5 [ffff880333865f48] kernel_thread at ffffffff8100c20a PID: 4630 TASK: ffff8803338634c0 CPU: 5 COMMAND: "nfsv4.0-svc" #0 [ffff88033998fcb0] schedule at ffffffff81528762 #1 [ffff88033998fd78] schedule_timeout at ffffffff81529655 #2 [ffff88033998fe28] svc_recv at ffffffffa039c265 [sunrpc] #3 [ffff88033998feb8] nfs4_callback_svc at ffffffffa044f0a3 [nfs] #4 [ffff88033998fee8] kthread at ffffffff81099eb6 #5 [ffff88033998ff48] kernel_thread at ffffffff8100c20a PID: 4642 TASK: ffff8803392aab40 CPU: 4 COMMAND: "acpid" #0 [ffff8803352a9998] schedule at ffffffff81528762 #1 [ffff8803352a9a60] schedule_hrtimeout_range at ffffffff8152a2bd #2 [ffff8803352a9b00] poll_schedule_timeout at ffffffff811a03a9 #3 [ffff8803352a9b20] do_sys_poll at ffffffff811a0b47 #4 [ffff8803352a9f40] sys_poll at ffffffff811a0e01 #5 [ffff8803352a9f80] system_call_fastpath at ffffffff8100b072 RIP: 00007fc780139308 RSP: 00007fff90c1c780 RFLAGS: 00010206 RAX: 0000000000000007 RBX: ffffffff8100b072 RCX: 0000000000001000 RDX: ffffffffffffffff RSI: 0000000000000002 RDI: 00007fff90c1d7d0 RBP: 00007fc7805f86a8 R8: 0000000000000002 R9: 
00007fc78094bcd4 R10: 00000000ffffffff R11: 0000000000000246 R12: 0000000000000000 R13: 0000000000000001 R14: 0000000000000005 R15: 0000000000000003 ORIG_RAX: 0000000000000007 CS: 0033 SS: 002b PID: 4651 TASK: ffff880338dd4080 CPU: 0 COMMAND: "hald" #0 [ffff880336cd5998] schedule at ffffffff81528762 #1 [ffff880336cd5a60] schedule_hrtimeout_range at ffffffff8152a248 #2 [ffff880336cd5b00] poll_schedule_timeout at ffffffff811a03a9 #3 [ffff880336cd5b20] do_sys_poll at ffffffff811a0b47 #4 [ffff880336cd5f40] sys_poll at ffffffff811a0e01 #5 [ffff880336cd5f80] system_call_fastpath at ffffffff8100b072 RIP: 0000003bb0edf343 RSP: 00007fff8e9be548 RFLAGS: 00000246 RAX: 0000000000000007 RBX: ffffffff8100b072 RCX: 000000000858ae3e RDX: 0000000000005e25 RSI: 000000000000000c RDI: 0000000000c1d9f0 RBP: 0000000000c1d9f0 R8: 0000000000000000 R9: 000000000000122b R10: 0000000000000001 R11: 0000000000000293 R12: 000000000000000c R13: 0000003bb2504360 R14: 0000003bb2250ba0 R15: 0000000000b56e80 ORIG_RAX: 0000000000000007 CS: 0033 SS: 002b PID: 4652 TASK: ffff880632a2e0c0 CPU: 0 COMMAND: "hald-runner" #0 [ffff88063298f998] schedule at ffffffff81528762 #1 [ffff88063298fa60] schedule_hrtimeout_range at ffffffff8152a2bd #2 [ffff88063298fb00] poll_schedule_timeout at ffffffff811a03a9 #3 [ffff88063298fb20] do_sys_poll at ffffffff811a0b47 #4 [ffff88063298ff40] sys_poll at ffffffff811a0e01 #5 [ffff88063298ff80] system_call_fastpath at ffffffff8100b072 RIP: 0000003bb0edf308 RSP: 00007fff000f0710 RFLAGS: 00010206 RAX: 0000000000000007 RBX: ffffffff8100b072 RCX: 0000000000000000 RDX: ffffffffffffffff RSI: 0000000000000001 RDI: 00000000022fb4a0 RBP: 00000000022fb4a0 R8: 0000000000000001 R9: 0000000000000000 R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000001 R13: 0000003bb2504360 R14: 0000003bb2250ba0 R15: 00000000022fa6a0 ORIG_RAX: 0000000000000007 CS: 0033 SS: 002b PID: 4653 TASK: ffff880632a2a080 CPU: 1 COMMAND: "hald" #0 [ffff8806328c1be8] schedule at ffffffff81528762 #1 [ffff8806328c1cb0] pipe_wait at ffffffff8119408b #2 [ffff8806328c1d00] pipe_read at ffffffff81194b36 #3 [ffff8806328c1dc0] do_sync_read at ffffffff8118941a #4 [ffff8806328c1ef0] vfs_read at ffffffff81189d05 #5 [ffff8806328c1f30] sys_read at ffffffff81189e41 #6 [ffff8806328c1f80] system_call_fastpath at ffffffff8100b072 RIP: 0000003bb120e75d RSP: 00007f4e5c386e08 RFLAGS: 00010246 RAX: 0000000000000000 RBX: ffffffff8100b072 RCX: 0000000000000001 RDX: 0000000000000014 RSI: 00007f4e5c386e30 RDI: 000000000000000c RBP: 0000000000000001 R8: 0000000000b56e88 R9: 0000000000000000 R10: 0000000000000001 R11: 0000000000000293 R12: 0000003bb2503cc0 R13: 0000003bb2503c88 R14: 0000003bb2504360 R15: 0000003bb2503c88 ORIG_RAX: 0000000000000000 CS: 0033 SS: 002b PID: 4705 TASK: ffff880339a1b580 CPU: 1 COMMAND: "hald-addon-inpu" #0 [ffff88033211f998] schedule at ffffffff81528762 #1 [ffff88033211fa60] schedule_hrtimeout_range at ffffffff8152a2bd #2 [ffff88033211fb00] poll_schedule_timeout at ffffffff811a03a9 #3 [ffff88033211fb20] do_sys_poll at ffffffff811a0b47 #4 [ffff88033211ff40] sys_poll at ffffffff811a0e01 #5 [ffff88033211ff80] system_call_fastpath at ffffffff8100b072 RIP: 0000003bb0edf308 RSP: 00007fff3d11b978 RFLAGS: 00010206 RAX: 0000000000000007 RBX: ffffffff8100b072 RCX: 00000000004035d3 RDX: ffffffffffffffff RSI: 0000000000000004 RDI: 0000000001347910 RBP: 0000000001347910 R8: 0000000000000001 R9: 0000000000000001 R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000004 R13: 0000003bb2504360 R14: 0000003bb2250ba0 R15: 00000000013482d0 
ORIG_RAX: 0000000000000007 CS: 0033 SS: 002b PID: 4715 TASK: ffff8803392c8a80 CPU: 0 COMMAND: "hald-addon-acpi" #0 [ffff880333a15a18] schedule at ffffffff81528762 #1 [ffff880333a15ae0] schedule_timeout at ffffffff81529655 #2 [ffff880333a15b90] unix_stream_recvmsg at ffffffff814f5ca9 #3 [ffff880333a15cf0] sock_aio_read at ffffffff81447651 #4 [ffff880333a15dc0] do_sync_read at ffffffff8118941a #5 [ffff880333a15ef0] vfs_read at ffffffff81189dd1 #6 [ffff880333a15f30] sys_read at ffffffff81189e41 #7 [ffff880333a15f80] system_call_fastpath at ffffffff8100b072 RIP: 0000003bb0edb730 RSP: 00007fff6ac63c68 RFLAGS: 00010202 RAX: 0000000000000000 RBX: ffffffff8100b072 RCX: 0000000000000000 RDX: 0000000000001000 RSI: 00007f79fb482000 RDI: 0000000000000004 RBP: 00000000000000ff R8: 0000000000000001 R9: 0000000000000000 R10: 000000000000000d R11: 0000000000000246 R12: 0000000000000000 R13: 000000000000000a R14: 00000000013cbf80 R15: 000000000000000a ORIG_RAX: 0000000000000000 CS: 0033 SS: 002b PID: 4751 TASK: ffff880632a2f540 CPU: 1 COMMAND: "rpc.rquotad" #0 [ffff880632bcb998] schedule at ffffffff81528762 #1 [ffff880632bcba60] schedule_hrtimeout_range at ffffffff8152a2bd #2 [ffff880632bcbb00] poll_schedule_timeout at ffffffff811a03a9 #3 [ffff880632bcbb20] do_sys_poll at ffffffff811a0b47 #4 [ffff880632bcbf40] sys_poll at ffffffff811a0e01 #5 [ffff880632bcbf80] system_call_fastpath at ffffffff8100b072 RIP: 00007f479de22308 RSP: 00007fffbc4bfbe8 RFLAGS: 00010246 RAX: 0000000000000007 RBX: ffffffff8100b072 RCX: 00000000000000c3 RDX: ffffffffffffffff RSI: 0000000000000002 RDI: 00007f479f1ea2f0 RBP: 0000000000000002 R8: 0000000000000000 R9: 0000000000000000 R10: 0000000000000010 R11: 0000000000000246 R12: 00007fffbc4bfcb8 R13: 00007f479e0d67c8 R14: 0000000000000002 R15: 00007f479f1ea2f0 ORIG_RAX: 0000000000000007 CS: 0033 SS: 002b PID: 4755 TASK: ffff880632a2eb00 CPU: 1 COMMAND: "rpc.mountd" #0 [ffff88063a27b848] schedule at ffffffff81528762 #1 [ffff88063a27b910] schedule_hrtimeout_range at ffffffff8152a2bd #2 [ffff88063a27b9b0] poll_schedule_timeout at ffffffff811a03a9 #3 [ffff88063a27b9d0] do_select at ffffffff811a146c #4 [ffff88063a27bd70] core_sys_select at ffffffff811a173a #5 [ffff88063a27bf10] sys_select at ffffffff811a1ac7 #6 [ffff88063a27bf80] system_call_fastpath at ffffffff8100b072 RIP: 00007eff5b39b5c3 RSP: 00007fffe2369868 RFLAGS: 00010202 RAX: 0000000000000017 RBX: ffffffff8100b072 RCX: 0000000000000000 RDX: 0000000000000000 RSI: 00007fffe2369890 RDI: 0000000000000400 RBP: 00007eff5c0b8b20 R8: 0000000000000000 R9: 0000000000000001 R10: 0000000000000000 R11: 0000000000000246 R12: 00007eff5c71a8e3 R13: 00007eff5c6eb6e8 R14: 00007fffe2369890 R15: 0000000000000001 ORIG_RAX: 0000000000000017 CS: 0033 SS: 002b PID: 4760 TASK: ffff88033a180a80 CPU: 4 COMMAND: "nfsd4" #0 [ffff880335373d70] schedule at ffffffff81528762 #1 [ffff880335373e38] worker_thread at ffffffff81093d6c #2 [ffff880335373ee8] kthread at ffffffff81099eb6 #3 [ffff880335373f48] kernel_thread at ffffffff8100c20a PID: 4761 TASK: ffff8803399cea80 CPU: 5 COMMAND: "nfsd4_callbacks" #0 [ffff8803347a5d70] schedule at ffffffff81528762 #1 [ffff8803347a5e38] worker_thread at ffffffff81093d6c #2 [ffff8803347a5ee8] kthread at ffffffff81099eb6 #3 [ffff8803347a5f48] kernel_thread at ffffffff8100c20a PID: 4762 TASK: ffff880335097500 CPU: 4 COMMAND: "nfsd" #0 [ffff88033479bcb0] schedule at ffffffff81528762 #1 [ffff88033479bd78] schedule_timeout at ffffffff815295d2 #2 [ffff88033479be28] svc_recv at ffffffffa039c265 [sunrpc] #3 [ffff88033479beb8] nfsd at 
ffffffffa049bb35 [nfsd] #4 [ffff88033479bee8] kthread at ffffffff81099eb6 #5 [ffff88033479bf48] kernel_thread at ffffffff8100c20a PID: 4763 TASK: ffff880339b5eb40 CPU: 6 COMMAND: "nfsd" #0 [ffff8803347e9cb0] schedule at ffffffff81528762 #1 [ffff8803347e9d78] schedule_timeout at ffffffff815295d2 #2 [ffff8803347e9e28] svc_recv at ffffffffa039c265 [sunrpc] #3 [ffff8803347e9eb8] nfsd at ffffffffa049bb35 [nfsd] #4 [ffff8803347e9ee8] kthread at ffffffff81099eb6 #5 [ffff8803347e9f48] kernel_thread at ffffffff8100c20a PID: 4764 TASK: ffff8803392a4080 CPU: 4 COMMAND: "nfsd" #0 [ffff880335323cb0] schedule at ffffffff81528762 #1 [ffff880335323d78] schedule_timeout at ffffffff815295d2 #2 [ffff880335323e28] svc_recv at ffffffffa039c265 [sunrpc] #3 [ffff880335323eb8] nfsd at ffffffffa049bb35 [nfsd] #4 [ffff880335323ee8] kthread at ffffffff81099eb6 #5 [ffff880335323f48] kernel_thread at ffffffff8100c20a PID: 4765 TASK: ffff880339af4080 CPU: 4 COMMAND: "nfsd" #0 [ffff8803351fdcb0] schedule at ffffffff81528762 #1 [ffff8803351fdd78] schedule_timeout at ffffffff815295d2 #2 [ffff8803351fde28] svc_recv at ffffffffa039c265 [sunrpc] #3 [ffff8803351fdeb8] nfsd at ffffffffa049bb35 [nfsd] #4 [ffff8803351fdee8] kthread at ffffffff81099eb6 #5 [ffff8803351fdf48] kernel_thread at ffffffff8100c20a PID: 4766 TASK: ffff88033a180040 CPU: 4 COMMAND: "nfsd" #0 [ffff8803338d9cb0] schedule at ffffffff81528762 #1 [ffff8803338d9d78] schedule_timeout at ffffffff815295d2 #2 [ffff8803338d9e28] svc_recv at ffffffffa039c265 [sunrpc] #3 [ffff8803338d9eb8] nfsd at ffffffffa049bb35 [nfsd] #4 [ffff8803338d9ee8] kthread at ffffffff81099eb6 #5 [ffff8803338d9f48] kernel_thread at ffffffff8100c20a PID: 4767 TASK: ffff880339bceac0 CPU: 4 COMMAND: "nfsd" #0 [ffff8803361d3cb0] schedule at ffffffff81528762 #1 [ffff8803361d3d78] schedule_timeout at ffffffff815295d2 #2 [ffff8803361d3e28] svc_recv at ffffffffa039c265 [sunrpc] #3 [ffff8803361d3eb8] nfsd at ffffffffa049bb35 [nfsd] #4 [ffff8803361d3ee8] kthread at ffffffff81099eb6 #5 [ffff8803361d3f48] kernel_thread at ffffffff8100c20a PID: 4768 TASK: ffff8803338a6100 CPU: 4 COMMAND: "nfsd" #0 [ffff880336c95cb0] schedule at ffffffff81528762 #1 [ffff880336c95d78] schedule_timeout at ffffffff815295d2 #2 [ffff880336c95e28] svc_recv at ffffffffa039c265 [sunrpc] #3 [ffff880336c95eb8] nfsd at ffffffffa049bb35 [nfsd] #4 [ffff880336c95ee8] kthread at ffffffff81099eb6 #5 [ffff880336c95f48] kernel_thread at ffffffff8100c20a PID: 4769 TASK: ffff8803338eb500 CPU: 4 COMMAND: "nfsd" #0 [ffff88033a2f7cb0] schedule at ffffffff81528762 #1 [ffff88033a2f7d78] schedule_timeout at ffffffff815295d2 #2 [ffff88033a2f7e28] svc_recv at ffffffffa039c265 [sunrpc] #3 [ffff88033a2f7eb8] nfsd at ffffffffa049bb35 [nfsd] #4 [ffff88033a2f7ee8] kthread at ffffffff81099eb6 #5 [ffff88033a2f7f48] kernel_thread at ffffffff8100c20a PID: 4791 TASK: ffff880631c00100 CPU: 4 COMMAND: "nscd" #0 [ffff880632afdce8] schedule at ffffffff81528762 #1 [ffff880632afddb0] schedule_hrtimeout_range at ffffffff8152a248 #2 [ffff880632afde50] ep_poll at ffffffff811d201d #3 [ffff880632afdf40] sys_epoll_wait at ffffffff811d2165 #4 [ffff880632afdf80] system_call_fastpath at ffffffff8100b072 RIP: 00007f3f3b443163 RSP: 00007fff89d25820 RFLAGS: 00000293 RAX: 00000000000000e8 RBX: ffffffff8100b072 RCX: 00007f3f3b443163 RDX: 0000000000000064 RSI: 00007fff89d25860 RDI: 000000000000000f RBP: 00007fff89d25d50 R8: 00007f3f3c7b18a0 R9: 0000000004000001 R10: 0000000000007518 R11: 0000000000000293 R12: 0000000053b6aecf R13: 0000000000000000 R14: 0000000000000001 R15: 
00007fff89d25870 ORIG_RAX: 00000000000000e8 CS: 0033 SS: 002b PID: 4803 TASK: ffff88033a3a5540 CPU: 6 COMMAND: "nscd" #0 [ffff88033999db38] schedule at ffffffff81528762 #1 [ffff88033999dc00] futex_wait_queue_me at ffffffff810ae9a9 #2 [ffff88033999dc40] futex_wait at ffffffff810afab8 #3 [ffff88033999ddc0] do_futex at ffffffff810b1221 #4 [ffff88033999def0] sys_futex at ffffffff810b1cdb #5 [ffff88033999df80] system_call_fastpath at ffffffff8100b072 RIP: 00007f3f3bf5198e RSP: 00007f3f2aca6300 RFLAGS: 00010246 RAX: 00000000000000ca RBX: ffffffff8100b072 RCX: 0000000000000000 RDX: 00000000000016ab RSI: 0000000000000089 RDI: 00007f3f3c7b10dc RBP: 0000000053b6ae6a R8: 00007f3f3c7b1108 R9: 00000000ffffffff R10: 00007f3f2aca6dd0 R11: 0000000000000202 R12: 00007f3f3c7b1100 R13: ffffffffffffff92 R14: 00007f3f2aca6dd0 R15: 00000000000016ab ORIG_RAX: 00000000000000ca CS: 0033 SS: 002b PID: 4804 TASK: ffff880339d09580 CPU: 4 COMMAND: "nscd" #0 [ffff880339cc1b38] schedule at ffffffff81528762 #1 [ffff880339cc1c00] futex_wait_queue_me at ffffffff810ae9a9 #2 [ffff880339cc1c40] futex_wait at ffffffff810afab8 #3 [ffff880339cc1dc0] do_futex at ffffffff810b1221 #4 [ffff880339cc1ef0] sys_futex at ffffffff810b1cdb #5 [ffff880339cc1f80] system_call_fastpath at ffffffff8100b072 RIP: 00007f3f3bf5198e RSP: 00007f3f2aaa5890 RFLAGS: 00010246 RAX: 00000000000000ca RBX: ffffffff8100b072 RCX: 0000000000000031 RDX: 0000000000000515 RSI: 0000000000000089 RDI: 00007f3f3c7b1254 RBP: 0000000053b6a317 R8: 00007f3f3c7b1280 R9: 00000000ffffffff R10: 00007f3f2aaa5dd0 R11: 0000000000000202 R12: 00007f3f3c7b1200 R13: ffffffffffffff92 R14: 00007f3f2aaa5dd0 R15: 0000000000000515 ORIG_RAX: 00000000000000ca CS: 0033 SS: 002b PID: 4805 TASK: ffff88033a38a040 CPU: 7 COMMAND: "nscd" #0 [ffff880335103b38] schedule at ffffffff81528762 #1 [ffff880335103c00] futex_wait_queue_me at ffffffff810ae9a9 #2 [ffff880335103c40] futex_wait at ffffffff810afab8 #3 [ffff880335103dc0] do_futex at ffffffff810b1221 #4 [ffff880335103ef0] sys_futex at ffffffff810b1cdb #5 [ffff880335103f80] system_call_fastpath at ffffffff8100b072 RIP: 00007f3f3bf5198e RSP: 00007f3f2a8a4320 RFLAGS: 00010246 RAX: 00000000000000ca RBX: ffffffff8100b072 RCX: 0000000000000000 RDX: 00000000000009ab RSI: 0000000000000089 RDI: 00007f3f3c7b13cc RBP: 0000000053b6ad84 R8: 00007f3f3c7b13f8 R9: 00000000ffffffff R10: 00007f3f2a8a4dd0 R11: 0000000000000206 R12: 00007f3f3c7b1400 R13: ffffffffffffff92 R14: 00007f3f2a8a4dd0 R15: 00000000000009ab ORIG_RAX: 00000000000000ca CS: 0033 SS: 002b PID: 4806 TASK: ffff88033a3a40c0 CPU: 4 COMMAND: "nscd" #0 [ffff88033528db38] schedule at ffffffff81528762 #1 [ffff88033528dc00] futex_wait_queue_me at ffffffff810ae9a9 #2 [ffff88033528dc40] futex_wait at ffffffff810afab8 #3 [ffff88033528ddc0] do_futex at ffffffff810b1221 #4 [ffff88033528def0] sys_futex at ffffffff810b1cdb #5 [ffff88033528df80] system_call_fastpath at ffffffff8100b072 RIP: 00007f3f3bf5198e RSP: 00007f3f2a6a33b0 RFLAGS: 00010246 RAX: 00000000000000ca RBX: ffffffff8100b072 RCX: 0000000000000000 RDX: 0000000000000057 RSI: 0000000000000089 RDI: 00007f3f3c7b1544 RBP: 0000000053b6694f R8: 00007f3f3c7b1570 R9: 00000000ffffffff R10: 00007f3f2a6a3dd0 R11: 0000000000000206 R12: 00007f3f3c7b1500 R13: ffffffffffffff92 R14: 00007f3f2a6a3dd0 R15: 0000000000000057 ORIG_RAX: 00000000000000ca CS: 0033 SS: 002b PID: 4807 TASK: ffff880339d08100 CPU: 4 COMMAND: "nscd" #0 [ffff88033a3e9b38] schedule at ffffffff81528762 #1 [ffff88033a3e9c00] futex_wait_queue_me at ffffffff810ae9a9 #2 [ffff88033a3e9c40] futex_wait at 
ffffffff810afab8 #3 [ffff88033a3e9dc0] do_futex at ffffffff810b1221 #4 [ffff88033a3e9ef0] sys_futex at ffffffff810b1cdb #5 [ffff88033a3e9f80] system_call_fastpath at ffffffff8100b072 RIP: 00007f3f3bf5198e RSP: 00007f3f2a4a2d80 RFLAGS: 00000206 RAX: 00000000000000ca RBX: ffffffff8100b072 RCX: 00007f3f3c5a1dd6 RDX: 0000000000000013 RSI: 0000000000000089 RDI: 00007f3f3c7b16bc RBP: 0000000053b64c81 R8: 00007f3f3c7b16e8 R9: 00000000ffffffff R10: 00007f3f2a4a2dd0 R11: 0000000000000202 R12: 00007f3f3c7b1700 R13: ffffffffffffff92 R14: 00007f3f2a4a2dd0 R15: 0000000000000013 ORIG_RAX: 00000000000000ca CS: 0033 SS: 002b PID: 4808 TASK: ffff88033a00ca80 CPU: 0 COMMAND: "nscd" #0 [ffff880333923b38] schedule at ffffffff81528762 #1 [ffff880333923c00] futex_wait_queue_me at ffffffff810ae9a9 #2 [ffff880333923c40] futex_wait at ffffffff810afab8 #3 [ffff880333923dc0] do_futex at ffffffff810b1221 #4 [ffff880333923ef0] sys_futex at ffffffff810b1cdb #5 [ffff880333923f80] system_call_fastpath at ffffffff8100b072 RIP: 00007f3f3bf515bc RSP: 00007f3f2a2a07a0 RFLAGS: 00000246 RAX: 00000000000000ca RBX: ffffffff8100b072 RCX: 00007f3f3bf515bc RDX: 000000000000ced3 RSI: 0000000000000080 RDI: 00007f3f3c7b18a4 RBP: 00007f3f2a2a08a0 R8: 00007f3f3c7b1800 R9: 0000000000006767 R10: 0000000000000000 R11: 0000000000000246 R12: 00007f3f3c7b10a0 R13: 0000000000000002 R14: 00007f3f3c7b10a0 R15: 00007f3f2a2a08b0 ORIG_RAX: 00000000000000ca CS: 0033 SS: 002b PID: 4809 TASK: ffff880339a1c040 CPU: 1 COMMAND: "nscd" #0 [ffff880335285b38] schedule at ffffffff81528762 #1 [ffff880335285c00] futex_wait_queue_me at ffffffff810ae9a9 #2 [ffff880335285c40] futex_wait at ffffffff810afab8 #3 [ffff880335285dc0] do_futex at ffffffff810b1221 #4 [ffff880335285ef0] sys_futex at ffffffff810b1cdb #5 [ffff880335285f80] system_call_fastpath at ffffffff8100b072 RIP: 00007f3f3bf515bc RSP: 00007f3f2a09f020 RFLAGS: 00010283 RAX: 00000000000000ca RBX: ffffffff8100b072 RCX: 0000000000034c00 RDX: 000000000000cecf RSI: 0000000000000080 RDI: 00007f3f3c7b18a4 RBP: 00007f3f2a09f8a0 R8: 00007f3f3c7b1800 R9: 0000000000006765 R10: 0000000000000000 R11: 0000000000000246 R12: 00007f3f3c7b10a0 R13: 0000000000000002 R14: 00007f3f3c7b10a0 R15: 00007f3f2a09f8b0 ORIG_RAX: 00000000000000ca CS: 0033 SS: 002b PID: 4810 TASK: ffff8803392a5500 CPU: 0 COMMAND: "nscd" #0 [ffff880336f85b38] schedule at ffffffff81528762 #1 [ffff880336f85c00] futex_wait_queue_me at ffffffff810ae9a9 #2 [ffff880336f85c40] futex_wait at ffffffff810afab8 #3 [ffff880336f85dc0] do_futex at ffffffff810b1221 #4 [ffff880336f85ef0] sys_futex at ffffffff810b1cdb #5 [ffff880336f85f80] system_call_fastpath at ffffffff8100b072 RIP: 00007f3f3bf515bc RSP: 00007f3f29e9e040 RFLAGS: 00010287 RAX: 00000000000000ca RBX: ffffffff8100b072 RCX: 0000000000000010 RDX: 000000000000cecd RSI: 0000000000000080 RDI: 00007f3f3c7b18a4 RBP: 00007f3f29e9e8a0 R8: 00007f3f3c7b1800 R9: 0000000000006764 R10: 0000000000000000 R11: 0000000000000246 R12: 00007f3f3c7b10a0 R13: 0000000000000002 R14: 00007f3f3c7b10a0 R15: 00007f3f29e9e8b0 ORIG_RAX: 00000000000000ca CS: 0033 SS: 002b PID: 4811 TASK: ffff8803351ec100 CPU: 7 COMMAND: "nscd" #0 [ffff88033a0dbb38] schedule at ffffffff81528762 #1 [ffff88033a0dbc00] futex_wait_queue_me at ffffffff810ae9a9 #2 [ffff88033a0dbc40] futex_wait at ffffffff810afab8 #3 [ffff88033a0dbdc0] do_futex at ffffffff810b1221 #4 [ffff88033a0dbef0] sys_futex at ffffffff810b1cdb #5 [ffff88033a0dbf80] system_call_fastpath at ffffffff8100b072 RIP: 00007f3f3bf515bc RSP: 00007f3f29c9d040 RFLAGS: 00010287 RAX: 00000000000000ca 
RBX: ffffffff8100b072 RCX: 0000000000034c00 RDX: 000000000000ced5 RSI: 0000000000000080 RDI: 00007f3f3c7b18a4 RBP: 00007f3f29c9d8a0 R8: 00007f3f3c7b1800 R9: 0000000000006768 R10: 0000000000000000 R11: 0000000000000246 R12: 00007f3f3c7b10a0 R13: 0000000000000002 R14: 00007f3f3c7b10a0 R15: 00007f3f29c9d8b0 ORIG_RAX: 00000000000000ca CS: 0033 SS: 002b PID: 4812 TASK: ffff88033a3a4b00 CPU: 7 COMMAND: "nscd" #0 [ffff88033392db38] schedule at ffffffff81528762 #1 [ffff88033392dc00] futex_wait_queue_me at ffffffff810ae9a9 #2 [ffff88033392dc40] futex_wait at ffffffff810afab8 #3 [ffff88033392ddc0] do_futex at ffffffff810b1221 #4 [ffff88033392def0] sys_futex at ffffffff810b1cdb #5 [ffff88033392df80] system_call_fastpath at ffffffff8100b072 RIP: 00007f3f3bf515bc RSP: 00007f3f29a9c158 RFLAGS: 00010216 RAX: 00000000000000ca RBX: ffffffff8100b072 RCX: 00007f3f3b43574d RDX: 000000000000ced1 RSI: 0000000000000080 RDI: 00007f3f3c7b18a4 RBP: 00007f3f29a9c8a0 R8: 00007f3f3c7b1800 R9: 0000000000006766 R10: 0000000000000000 R11: 0000000000000246 R12: 00007f3f3c7b10a0 R13: 0000000000000002 R14: 00007f3f3c7b10a0 R15: 00007f3f29a9c8b0 ORIG_RAX: 00000000000000ca CS: 0033 SS: 002b PID: 4819 TASK: ffff880339b5e100 CPU: 5 COMMAND: "mcelog" #0 [ffff880333bf9948] schedule at ffffffff81528762 #1 [ffff880333bf9a10] schedule_hrtimeout_range at ffffffff8152a2bd #2 [ffff880333bf9ab0] poll_schedule_timeout at ffffffff811a03a9 #3 [ffff880333bf9ad0] do_sys_poll at ffffffff811a0b47 #4 [ffff880333bf9ef0] sys_ppoll at ffffffff811a0c5c #5 [ffff880333bf9f80] system_call_fastpath at ffffffff8100b072 RIP: 0000003bb0edf407 RSP: 00007fff4a965678 RFLAGS: 00010246 RAX: 000000000000010f RBX: ffffffff8100b072 RCX: 0000003bb0edf407 RDX: 0000000000000000 RSI: 0000000000000002 RDI: 000000000061d960 RBP: 0000000000000002 R8: 0000000000000008 R9: 0000000000000000 R10: 000000000061dac0 R11: 0000000000000246 R12: 0000000000000000 R13: 0000000000000000 R14: 000000000061dac0 R15: 000000000061d960 ORIG_RAX: 000000000000010f CS: 0033 SS: 002b PID: 4827 TASK: ffff880333817540 CPU: 6 COMMAND: "snmpd" #0 [ffff8803338bb848] schedule at ffffffff81528762 #1 [ffff8803338bb910] schedule_hrtimeout_range at ffffffff8152a248 #2 [ffff8803338bb9b0] poll_schedule_timeout at ffffffff811a03a9 #3 [ffff8803338bb9d0] do_select at ffffffff811a146c #4 [ffff8803338bbd70] core_sys_select at ffffffff811a173a #5 [ffff8803338bbf10] sys_select at ffffffff811a1ac7 #6 [ffff8803338bbf80] system_call_fastpath at ffffffff8100b072 RIP: 00007fbd7be4a5c3 RSP: 00007fff76e93080 RFLAGS: 00000246 RAX: 0000000000000017 RBX: ffffffff8100b072 RCX: 0000000000000000 RDX: 00007fff76e93d80 RSI: 00007fff76e93e10 RDI: 0000000000000009 RBP: 00007fff76e95eec R8: 00007fff76e93e90 R9: 0000000000000100 R10: 00007fff76e93cf0 R11: 0000000000000246 R12: 00007fff76e93eb8 R13: 00007fbd7ebdc598 R14: 00007fbd7e76ad70 R15: 00007fff76e93e00 ORIG_RAX: 0000000000000017 CS: 0033 SS: 002b PID: 4838 TASK: ffff880632949580 CPU: 1 COMMAND: "syslog-ng" #0 [ffff88063163ddb8] schedule at ffffffff81528762 #1 [ffff88063163de80] do_wait at ffffffff81075464 #2 [ffff88063163dee0] sys_wait4 at ffffffff81075563 #3 [ffff88063163df80] system_call_fastpath at ffffffff8100b072 RIP: 0000003bb120f26e RSP: 00007fff4ff206d8 RFLAGS: 00010246 RAX: 000000000000003d RBX: ffffffff8100b072 RCX: 0000003bb120e7a0 RDX: 0000000000000000 RSI: 00007fff4ff2071c RDI: 00000000000012e7 RBP: 0000000000000000 R8: 0000000000000001 R9: 0000000000000000 R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000431f71 R13: 00000000000012e7 R14: 
000000000000000f R15: 0000000000000009 ORIG_RAX: 000000000000003d CS: 0033 SS: 002b PID: 4839 TASK: ffff8803351ecb40 CPU: 2 COMMAND: "syslog-ng" #0 [ffff8803338dd998] schedule at ffffffff81528762 #1 [ffff8803338dda60] schedule_hrtimeout_range at ffffffff8152a248 #2 [ffff8803338ddb00] poll_schedule_timeout at ffffffff811a03a9 #3 [ffff8803338ddb20] do_sys_poll at ffffffff811a0b47 #4 [ffff8803338ddf40] sys_poll at ffffffff811a0e01 #5 [ffff8803338ddf80] system_call_fastpath at ffffffff8100b072 RIP: 0000003bb0edf308 RSP: 00007fff4ff20978 RFLAGS: 00000202 RAX: 0000000000000007 RBX: ffffffff8100b072 RCX: 0000000001536820 RDX: 00000000001b7b43 RSI: 0000000000000011 RDI: 000000000154dbd0 RBP: 000000000154dbd0 R8: 00000000015033a8 R9: 00000000000012e7 R10: 0000000000000001 R11: 0000000000000246 R12: 0000000000000011 R13: 0000000000647f40 R14: 0000003bb2250ba0 R15: 00000000015033a0 ORIG_RAX: 0000000000000007 CS: 0033 SS: 002b PID: 4853 TASK: ffff88063b7beb00 CPU: 1 COMMAND: "sshd" #0 [ffff880632925848] schedule at ffffffff81528762 #1 [ffff880632925910] schedule_hrtimeout_range at ffffffff8152a2bd #2 [ffff8806329259b0] poll_schedule_timeout at ffffffff811a03a9 #3 [ffff8806329259d0] do_select at ffffffff811a146c #4 [ffff880632925d70] core_sys_select at ffffffff811a173a #5 [ffff880632925f10] sys_select at ffffffff811a1ac7 #6 [ffff880632925f80] system_call_fastpath at ffffffff8100b072 RIP: 00007ff266ca55c3 RSP: 00007fff2bb961a8 RFLAGS: 00000246 RAX: 0000000000000017 RBX: ffffffff8100b072 RCX: ffffffffffffffff RDX: 0000000000000000 RSI: 00007ff26b2d77c0 RDI: 0000000000000009 RBP: 00000000ffffffff R8: 0000000000000000 R9: 0000000000000001 R10: 0000000000000000 R11: 0000000000000246 R12: 00007fff2bb96360 R13: 0000000000000008 R14: 0000000000000002 R15: 0000000000000001 ORIG_RAX: 0000000000000017 CS: 0033 SS: 002b PID: 4863 TASK: ffff880639c36ac0 CPU: 0 COMMAND: "ntpd" #0 [ffff880631671848] schedule at ffffffff81528762 #1 [ffff880631671910] schedule_hrtimeout_range at ffffffff8152a248 #2 [ffff8806316719b0] poll_schedule_timeout at ffffffff811a03a9 #3 [ffff8806316719d0] do_select at ffffffff811a146c #4 [ffff880631671d70] core_sys_select at ffffffff811a173a #5 [ffff880631671f10] sys_select at ffffffff811a1ac7 #6 [ffff880631671f80] system_call_fastpath at ffffffff8100b072 RIP: 00007f627581a5c3 RSP: 00007fff7010df90 RFLAGS: 00010206 RAX: 0000000000000017 RBX: ffffffff8100b072 RCX: 0000000000000001 RDX: 0000000000000000 RSI: 00007fff7010eb60 RDI: 000000000000001c RBP: 00007fff7010eb60 R8: 00007fff7010ebe0 R9: 0000000000010000 R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000 R13: 0000000000000000 R14: 0000000000000001 R15: 00007fff7010ebe0 ORIG_RAX: 0000000000000017 CS: 0033 SS: 002b PID: 4898 TASK: ffff8806329b6a80 CPU: 4 COMMAND: "abrt-dump-oops" #0 [ffff8806316a9d68] schedule at ffffffff81528762 #1 [ffff8806316a9e30] inotify_read at ffffffff811cfba9 #2 [ffff8806316a9ef0] vfs_read at ffffffff81189d05 #3 [ffff8806316a9f30] sys_read at ffffffff81189e41 #4 [ffff8806316a9f80] system_call_fastpath at ffffffff8100b072 RIP: 0000003bb0edb730 RSP: 00007fffdd357fd8 RFLAGS: 00010202 RAX: 0000000000000000 RBX: ffffffff8100b072 RCX: 0000000000000225 RDX: 0000000000001000 RSI: 00007fffdd358290 RDI: 0000000000000003 RBP: 0000000000000004 R8: 00007fffdd357e00 R9: 0000000000100000 R10: 0000000000000008 R11: 0000000000000246 R12: 0000000000402aab R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000 ORIG_RAX: 0000000000000000 CS: 0033 SS: 002b PID: 5049 TASK: ffff8806329960c0 CPU: 0 COMMAND: "crond" 
#0 [ffff880631675da8] schedule at ffffffff81528762 #1 [ffff880631675e70] do_nanosleep at ffffffff8152a38b #2 [ffff880631675ea0] hrtimer_nanosleep at ffffffff8109f5f4 #3 [ffff880631675f50] sys_nanosleep at ffffffff8109f71e #4 [ffff880631675f80] system_call_fastpath at ffffffff8100b072 RIP: 00007f9f848f8cc0 RSP: 00007fff708f7658 RFLAGS: 00010202 RAX: 0000000000000023 RBX: ffffffff8100b072 RCX: 00000000746f6f72 RDX: 0000000000000000 RSI: 00007fff708f7820 RDI: 00007fff708f7820 RBP: 00007fff708f7720 R8: 00007fff708f7680 R9: 0000000000000000 R10: 0000000000000008 R11: 0000000000000246 R12: 00000000ffffffff R13: 00007fff708f77a0 R14: 0000000000000000 R15: 000000000000003c ORIG_RAX: 0000000000000023 CS: 0033 SS: 002b PID: 5062 TASK: ffff8806329b6040 CPU: 5 COMMAND: "rhsmcertd" #0 [ffff880631685998] schedule at ffffffff81528762 #1 [ffff880631685a60] schedule_hrtimeout_range at ffffffff8152a248 #2 [ffff880631685b00] poll_schedule_timeout at ffffffff811a03a9 #3 [ffff880631685b20] do_sys_poll at ffffffff811a0b47 #4 [ffff880631685f40] sys_poll at ffffffff811a0e01 #5 [ffff880631685f80] system_call_fastpath at ffffffff8100b072 RIP: 0000003bb0edf308 RSP: 00007fff3ebc2060 RFLAGS: 00010202 RAX: 0000000000000007 RBX: ffffffff8100b072 RCX: 000000000233d030 RDX: 0000000000dbb987 RSI: 0000000000000000 RDI: 0000000000000000 RBP: 0000000000000000 R8: 0000000000000001 R9: 0000000000000000 R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000 R13: 0000003bb2504360 R14: 0000003bb2250ba0 R15: 0000000002341800 ORIG_RAX: 0000000000000007 CS: 0033 SS: 002b PID: 5075 TASK: ffff8806329a4a80 CPU: 1 COMMAND: "certmonger" #0 [ffff880632967ce8] schedule at ffffffff81528762 #1 [ffff880632967db0] schedule_hrtimeout_range at ffffffff8152a248 #2 [ffff880632967e50] ep_poll at ffffffff811d201d #3 [ffff880632967f40] sys_epoll_wait at ffffffff811d2165 #4 [ffff880632967f80] system_call_fastpath at ffffffff8100b072 RIP: 0000003bb0ee9143 RSP: 00007fffc666db58 RFLAGS: 00000202 RAX: 00000000000000e8 RBX: ffffffff8100b072 RCX: 0000000000b506b0 RDX: 0000000000000001 RSI: 00007fffc666db80 RDI: 0000000000000003 RBP: 00007fffc666db80 R8: 00007fffc666da90 R9: 00007fffc665da8c R10: 0000000000007530 R11: 0000000000000246 R12: 0000000000b50420 R13: 0000000000000000 R14: 00007fffc666fedd R15: 0000000000000000 ORIG_RAX: 00000000000000e8 CS: 0033 SS: 002b PID: 5097 TASK: ffff880631c154c0 CPU: 1 COMMAND: "collectdmon" #0 [ffff8806315eddb8] schedule at ffffffff81528762 #1 [ffff8806315ede80] do_wait at ffffffff81075464 #2 [ffff8806315edee0] sys_wait4 at ffffffff81075563 #3 [ffff8806315edf80] system_call_fastpath at ffffffff8100b072 RIP: 0000003bb0eac8be RSP: 00007fffea5beaa0 RFLAGS: 00010206 RAX: 000000000000003d RBX: ffffffff8100b072 RCX: 0000003bb0eacdbd RDX: 0000000000000000 RSI: 00007fffea5beb5c RDI: 00000000000013eb RBP: 00007fffea5beb5c R8: 00007f2a583b9700 R9: 00000000000013e9 R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000004 R13: 000000000040180d R14: 0000000000401795 R15: 0000000001e86010 ORIG_RAX: 000000000000003d CS: 0033 SS: 002b PID: 5099 TASK: ffff88033997e0c0 CPU: 2 COMMAND: "collectd" #0 [ffff8803350cfda8] schedule at ffffffff81528762 #1 [ffff8803350cfe70] do_nanosleep at ffffffff8152a38b #2 [ffff8803350cfea0] hrtimer_nanosleep at ffffffff8109f5f4 #3 [ffff8803350cff50] sys_nanosleep at ffffffff8109f71e #4 [ffff8803350cff80] system_call_fastpath at ffffffff8100b072 RIP: 0000003bb120ef3d RSP: 00007fffc7b405a0 RFLAGS: 00000246 RAX: 0000000000000023 RBX: ffffffff8100b072 RCX: 0000000053b6a584 RDX: 00000000000f4239 
RSI: 00007fffc7b40d30 RDI: 00007fffc7b40d30 RBP: 00007fffc7b40d30 R8: 00007fffc7b40d40 R9: 0000000053b6aee4 R10: 431bde82d7b634db R11: 0000000000000293 R12: 00007fffc7b40d50 R13: 0000000000000000 R14: 000000003b9aaea8 R15: 000000000000012b ORIG_RAX: 0000000000000023 CS: 0033 SS: 002b PID: 5140 TASK: ffff88063a36c0c0 CPU: 0 COMMAND: "collectd" #0 [ffff880631689b38] schedule at ffffffff81528762 #1 [ffff880631689c00] futex_wait_queue_me at ffffffff810ae9a9 #2 [ffff880631689c40] futex_wait at ffffffff810afab8 #3 [ffff880631689dc0] do_futex at ffffffff810b1221 #4 [ffff880631689ef0] sys_futex at ffffffff810b1cdb #5 [ffff880631689f80] system_call_fastpath at ffffffff8100b072 RIP: 0000003bb120b98e RSP: 00007feca35266e0 RFLAGS: 00000246 RAX: 00000000000000ca RBX: ffffffff8100b072 RCX: 0000000000001000 RDX: 0000000000002553 RSI: 0000000000000189 RDI: 0000000000623884 RBP: 00007feca3528e50 R8: 0000000000623840 R9: 00000000ffffffff R10: 0000000000f58340 R11: 0000000000000206 R12: 0000000000000000 R13: ffffffffffffff92 R14: 0000000000f58340 R15: 0000000000002553 ORIG_RAX: 00000000000000ca CS: 0033 SS: 002b PID: 5142 TASK: ffff8806329334c0 CPU: 5 COMMAND: "smartd" #0 [ffff880631605da8] schedule at ffffffff81528762 #1 [ffff880631605e70] do_nanosleep at ffffffff8152a38b #2 [ffff880631605ea0] hrtimer_nanosleep at ffffffff8109f5f4 #3 [ffff880631605f50] sys_nanosleep at ffffffff8109f71e #4 [ffff880631605f80] system_call_fastpath at ffffffff8100b072 RIP: 00007f6fdc59dcc0 RSP: 00007fffaa417d28 RFLAGS: 00010206 RAX: 0000000000000023 RBX: ffffffff8100b072 RCX: 0000000000000033 RDX: 0000000000000000 RSI: 00007fffaa4183d0 RDI: 00007fffaa4183d0 RBP: 00007fffaa4182d0 R8: 00007fffaa418230 R9: 0000000000000000 R10: 0000000000000008 R11: 0000000000000246 R12: 00000000ffffffff R13: 00007fffaa418350 R14: 0000000000000000 R15: 0000000000000708 ORIG_RAX: 0000000000000023 CS: 0033 SS: 002b PID: 5151 TASK: ffff880339b874c0 CPU: 4 COMMAND: "mingetty" #0 [ffff880339af1c08] schedule at ffffffff81528762 #1 [ffff880339af1cd0] schedule_timeout at ffffffff81529655 #2 [ffff880339af1d80] n_tty_read at ffffffff81337b57 #3 [ffff880339af1ea0] tty_read at ffffffff81332496 #4 [ffff880339af1ef0] vfs_read at ffffffff81189d05 #5 [ffff880339af1f30] sys_read at ffffffff81189e41 #6 [ffff880339af1f80] system_call_fastpath at ffffffff8100b072 RIP: 0000003bb0edb730 RSP: 00007fff0f541728 RFLAGS: 00010246 RAX: 0000000000000000 RBX: ffffffff8100b072 RCX: 0000000000000000 RDX: 0000000000000001 RSI: 00007fff0f541f5f RDI: 0000000000000000 RBP: 00000000006f41a0 R8: 00000000ffffffff R9: 00007f6e8d7e9700 R10: 0000000000000000 R11: 0000000000000246 R12: 00000000000000ff R13: 00007fff0f541f5f R14: 00007f6e8d7e96a8 R15: 00007fff0f541f5f ORIG_RAX: 0000000000000000 CS: 0033 SS: 002b PID: 5153 TASK: ffff880335366040 CPU: 0 COMMAND: "mingetty" #0 [ffff880339af7c08] schedule at ffffffff81528762 #1 [ffff880339af7cd0] schedule_timeout at ffffffff81529655 #2 [ffff880339af7d80] n_tty_read at ffffffff81337b57 #3 [ffff880339af7ea0] tty_read at ffffffff81332496 #4 [ffff880339af7ef0] vfs_read at ffffffff81189d05 #5 [ffff880339af7f30] sys_read at ffffffff81189e41 #6 [ffff880339af7f80] system_call_fastpath at ffffffff8100b072 RIP: 0000003bb0edb730 RSP: 00007fff70bff9c8 RFLAGS: 00010246 RAX: 0000000000000000 RBX: ffffffff8100b072 RCX: 0000000000000000 RDX: 0000000000000001 RSI: 00007fff70c001ff RDI: 0000000000000000 RBP: 00000000024431a0 R8: 00000000ffffffff R9: 00007fe0aa3a6700 R10: 0000000000000000 R11: 0000000000000246 R12: 00000000000000ff R13: 00007fff70c001ff R14: 
00007fe0aa3a66a8 R15: 00007fff70c001ff ORIG_RAX: 0000000000000000 CS: 0033 SS: 002b PID: 5155 TASK: ffff8803338eaac0 CPU: 6 COMMAND: "mingetty" #0 [ffff88033a3fbc08] schedule at ffffffff81528762 #1 [ffff88033a3fbcd0] schedule_timeout at ffffffff81529655 #2 [ffff88033a3fbd80] n_tty_read at ffffffff81337b57 #3 [ffff88033a3fbea0] tty_read at ffffffff81332496 #4 [ffff88033a3fbef0] vfs_read at ffffffff81189d05 #5 [ffff88033a3fbf30] sys_read at ffffffff81189e41 #6 [ffff88033a3fbf80] system_call_fastpath at ffffffff8100b072 RIP: 0000003bb0edb730 RSP: 00007fff702ad158 RFLAGS: 00010246 RAX: 0000000000000000 RBX: ffffffff8100b072 RCX: 0000000000000000 RDX: 0000000000000001 RSI: 00007fff702ad98f RDI: 0000000000000000 RBP: 0000000001a791a0 R8: 00000000ffffffff R9: 00007f9aff204700 R10: 0000000000000000 R11: 0000000000000246 R12: 00000000000000ff R13: 00007fff702ad98f R14: 00007f9aff2046a8 R15: 00007fff702ad98f ORIG_RAX: 0000000000000000 CS: 0033 SS: 002b PID: 5157 TASK: ffff880339b86040 CPU: 3 COMMAND: "mingetty" #0 [ffff88033a3fdc08] schedule at ffffffff81528762 #1 [ffff88033a3fdcd0] schedule_timeout at ffffffff81529655 #2 [ffff88033a3fdd80] n_tty_read at ffffffff81337b57 #3 [ffff88033a3fdea0] tty_read at ffffffff81332496 #4 [ffff88033a3fdef0] vfs_read at ffffffff81189d05 #5 [ffff88033a3fdf30] sys_read at ffffffff81189e41 #6 [ffff88033a3fdf80] system_call_fastpath at ffffffff8100b072 RIP: 0000003bb0edb730 RSP: 00007fffc855d238 RFLAGS: 00010246 RAX: 0000000000000000 RBX: ffffffff8100b072 RCX: 0000000000000000 RDX: 0000000000000001 RSI: 00007fffc855da6f RDI: 0000000000000000 RBP: 0000000000a391a0 R8: 00000000ffffffff R9: 00007ff8cff08700 R10: 0000000000000000 R11: 0000000000000246 R12: 00000000000000ff R13: 00007fffc855da6f R14: 00007ff8cff086a8 R15: 00007fffc855da6f ORIG_RAX: 0000000000000000 CS: 0033 SS: 002b PID: 5159 TASK: ffff880335096ac0 CPU: 7 COMMAND: "mingetty" #0 [ffff88033534dc08] schedule at ffffffff81528762 #1 [ffff88033534dcd0] schedule_timeout at ffffffff81529655 #2 [ffff88033534dd80] n_tty_read at ffffffff81337b57 #3 [ffff88033534dea0] tty_read at ffffffff81332496 #4 [ffff88033534def0] vfs_read at ffffffff81189d05 #5 [ffff88033534df30] sys_read at ffffffff81189e41 #6 [ffff88033534df80] system_call_fastpath at ffffffff8100b072 RIP: 0000003bb0edb730 RSP: 00007fff2dd3a448 RFLAGS: 00010246 RAX: 0000000000000000 RBX: ffffffff8100b072 RCX: 0000000000000000 RDX: 0000000000000001 RSI: 00007fff2dd3ac7f RDI: 0000000000000000 RBP: 00000000013d71a0 R8: 00000000ffffffff R9: 00007f042c05a700 R10: 0000000000000000 R11: 0000000000000246 R12: 00000000000000ff R13: 00007fff2dd3ac7f R14: 00007f042c05a6a8 R15: 00007fff2dd3ac7f ORIG_RAX: 0000000000000000 CS: 0033 SS: 002b PID: 5161 TASK: ffff8803338ea080 CPU: 1 COMMAND: "mingetty" #0 [ffff88033a177c08] schedule at ffffffff81528762 #1 [ffff88033a177c90] schedule_timeout at ffffffff81529655 #2 [ffff88033a177d80] n_tty_read at ffffffff81337b57 #3 [ffff88033a177ea0] tty_read at ffffffff81332496 #4 [ffff88033a177ef0] vfs_read at ffffffff81189d05 #5 [ffff88033a177f30] sys_read at ffffffff81189e41 #6 [ffff88033a177f80] system_call_fastpath at ffffffff8100b072 RIP: 0000003bb0edb730 RSP: 00007fff0b38e908 RFLAGS: 00010246 RAX: 0000000000000000 RBX: ffffffff8100b072 RCX: 0000000000000000 RDX: 0000000000000001 RSI: 00007fff0b38f13f RDI: 0000000000000000 RBP: 00000000023c51a0 R8: 00000000ffffffff R9: 00007f7ff1a84700 R10: 0000000000000000 R11: 0000000000000246 R12: 00000000000000ff R13: 00007fff0b38f13f R14: 00007f7ff1a846a8 R15: 00007fff0b38f13f ORIG_RAX: 
0000000000000000 CS: 0033 SS: 002b PID: 5162 TASK: ffff880339017540 CPU: 0 COMMAND: "agetty" #0 [ffff88033a393c08] schedule at ffffffff81528762 #1 [ffff88033a393cd0] schedule_timeout at ffffffff81529655 #2 [ffff88033a393d80] n_tty_read at ffffffff81337b57 #3 [ffff88033a393ea0] tty_read at ffffffff81332496 #4 [ffff88033a393ef0] vfs_read at ffffffff81189d05 #5 [ffff88033a393f30] sys_read at ffffffff81189e41 #6 [ffff88033a393f80] system_call_fastpath at ffffffff8100b072 RIP: 0000003bb0edb730 RSP: 00007fff6c3bb048 RFLAGS: 00010206 RAX: 0000000000000000 RBX: ffffffff8100b072 RCX: 0000000000000011 RDX: 0000000000000001 RSI: 00007fff6c3bb57f RDI: 0000000000000000 RBP: 00000000006055a0 R8: 00007f7e0e00f700 R9: 0000000000000000 R10: 00007fff6c3bb1c0 R11: 0000000000000246 R12: 00007fff6c3bb5f0 R13: 00000000006056a0 R14: 00007fff6c3bb5d0 R15: 00000000006056a0 ORIG_RAX: 0000000000000000 CS: 0033 SS: 002b PID: 5166 TASK: ffff88033928c0c0 CPU: 2 COMMAND: "udevd" #0 [ffff88033a3ff948] schedule at ffffffff81528762 #1 [ffff88033a3ffa10] schedule_hrtimeout_range at ffffffff8152a2bd #2 [ffff88033a3ffab0] poll_schedule_timeout at ffffffff811a03a9 #3 [ffff88033a3ffad0] do_sys_poll at ffffffff811a0b47 #4 [ffff88033a3ffef0] sys_ppoll at ffffffff811a0c5c #5 [ffff88033a3fff80] system_call_fastpath at ffffffff8100b072 RIP: 00007f67ab081407 RSP: 00007fff340deaa8 RFLAGS: 00000287 RAX: 000000000000010f RBX: ffffffff8100b072 RCX: 0000000000000000 RDX: 0000000000000000 RSI: 0000000000000001 RDI: 00007fff340e3b20 RBP: 0000000000000001 R8: 0000000000000008 R9: 0000000000000000 R10: 00007fff340e3a90 R11: 0000000000000246 R12: 00007f67accb9d70 R13: 00007f67acccb0d0 R14: 00007fff340e3a90 R15: 00007fff340e3b20 ORIG_RAX: 000000000000010f CS: 0033 SS: 002b PID: 5167 TASK: ffff8803399ef540 CPU: 5 COMMAND: "udevd" #0 [ffff880333b45948] schedule at ffffffff81528762 #1 [ffff880333b45a10] schedule_hrtimeout_range at ffffffff8152a2bd #2 [ffff880333b45ab0] poll_schedule_timeout at ffffffff811a03a9 #3 [ffff880333b45ad0] do_sys_poll at ffffffff811a0b47 #4 [ffff880333b45ef0] sys_ppoll at ffffffff811a0c5c #5 [ffff880333b45f80] system_call_fastpath at ffffffff8100b072 RIP: 00007f67ab081407 RSP: 00007fff340e39b8 RFLAGS: 00010246 RAX: 000000000000010f RBX: ffffffff8100b072 RCX: ffffffffffffff00 RDX: 0000000000000000 RSI: 0000000000000001 RDI: 00007fff340e3b20 RBP: 0000000000000001 R8: 0000000000000008 R9: 0000000000000000 R10: 00007fff340e3a90 R11: 0000000000000246 R12: 00007f67accba490 R13: 00007f67acccb0d0 R14: 00007fff340e3a90 R15: 00007fff340e3b20 ORIG_RAX: 000000000000010f CS: 0033 SS: 002b PID: 5955 TASK: ffff8803392ab580 CPU: 3 COMMAND: "sshd" #0 [ffff88032e1e1848] schedule at ffffffff81528762 #1 [ffff88032e1e1910] schedule_hrtimeout_range at ffffffff8152a2bd #2 [ffff88032e1e19b0] poll_schedule_timeout at ffffffff811a03a9 #3 [ffff88032e1e19d0] do_select at ffffffff811a146c #4 [ffff88032e1e1d70] core_sys_select at ffffffff811a173a #5 [ffff88032e1e1f10] sys_select at ffffffff811a1ac7 #6 [ffff88032e1e1f80] system_call_fastpath at ffffffff8100b072 RIP: 00007f95a5ac05c3 RSP: 00007fff96b12f78 RFLAGS: 00000282 RAX: 0000000000000017 RBX: ffffffff8100b072 RCX: 00000000fffffd51 RDX: 00007f95aa5641a0 RSI: 00007f95aa5641c0 RDI: 000000000000000a RBP: 0000000000000000 R8: 0000000000000000 R9: 0101010101010101 R10: 0000000000000000 R11: 0000000000000246 R12: 00007fff96b131b0 R13: 00007fff96b131b4 R14: 00007f95a897073c R15: 00007fff96b131a0 ORIG_RAX: 0000000000000017 CS: 0033 SS: 002b PID: 5957 TASK: ffff880339960ac0 CPU: 4 COMMAND: "bash" #0 
[ffff88032e259db8] schedule at ffffffff81528762 #1 [ffff88032e259e80] do_wait at ffffffff81075464 #2 [ffff88032e259ee0] sys_wait4 at ffffffff81075563 #3 [ffff88032e259f80] system_call_fastpath at ffffffff8100b072 RIP: 0000003bb0eac8be RSP: 00007fff16d73630 RFLAGS: 00010246 RAX: 000000000000003d RBX: ffffffff8100b072 RCX: 0000003bb118fe88 RDX: 000000000000000a RSI: 00007fff16d7354c RDI: ffffffffffffffff RBP: 00000000ffffffff R8: 0000000001871210 R9: 0000000001876288 R10: 0000000000000000 R11: 0000000000000246 R12: 000000000000000a R13: 0000000000000000 R14: 0000000000000001 R15: 0000000000003200 ORIG_RAX: 000000000000003d CS: 0033 SS: 002b PID: 11955 TASK: ffff880339960080 CPU: 5 COMMAND: "cfs_rh_00" #0 [ffff880339be3dc0] schedule at ffffffff81528762 #1 [ffff880339be3e88] cfs_wi_scheduler at ffffffffa07b8103 [libcfs] #2 [ffff880339be3f48] kernel_thread at ffffffff8100c20a PID: 11956 TASK: ffff880339d08b40 CPU: 5 COMMAND: "cfs_rh_01" #0 [ffff88033530bdc0] schedule at ffffffff81528762 #1 [ffff88033530be88] cfs_wi_scheduler at ffffffffa07b8103 [libcfs] #2 [ffff88033530bf48] kernel_thread at ffffffff8100c20a PID: 11957 TASK: ffff88032a7f54c0 CPU: 5 COMMAND: "cfs_rh_02" #0 [ffff880325357dc0] schedule at ffffffff81528762 #1 [ffff880325357e88] cfs_wi_scheduler at ffffffffa07b8103 [libcfs] #2 [ffff880325357f48] kernel_thread at ffffffff8100c20a PID: 11958 TASK: ffff880339b86a80 CPU: 5 COMMAND: "cfs_rh_03" #0 [ffff880333b9fdc0] schedule at ffffffff81528762 #1 [ffff880333b9fe88] cfs_wi_scheduler at ffffffffa07b8103 [libcfs] #2 [ffff880333b9ff48] kernel_thread at ffffffff8100c20a PID: 11988 TASK: ffff880639a87580 CPU: 1 COMMAND: "obd_zombid" #0 [ffff8806244bddf0] schedule at ffffffff81528762 #1 [ffff8806244bdeb8] cfs_waitq_wait at ffffffffa079d6fe [libcfs] #2 [ffff8806244bdec8] obd_zombie_impexp_thread at ffffffffa09c7745 [obdclass] #3 [ffff8806244bdf48] kernel_thread at ffffffff8100c20a PID: 11989 TASK: ffff8806398834c0 CPU: 1 COMMAND: "ptlrpc_hr00_000" #0 [ffff88062cdabd70] schedule at ffffffff81528762 #1 [ffff88062cdabe38] cfs_waitq_wait at ffffffffa079d6fe [libcfs] #2 [ffff88062cdabe48] ptlrpc_hr_main at ffffffffa10c7c3a [ptlrpc] #3 [ffff88062cdabf48] kernel_thread at ffffffff8100c20a PID: 11990 TASK: ffff880639c48a80 CPU: 2 COMMAND: "ptlrpc_hr00_001" #0 [ffff88062bd89d70] schedule at ffffffff81528762 #1 [ffff88062bd89e38] cfs_waitq_wait at ffffffffa079d6fe [libcfs] #2 [ffff88062bd89e48] ptlrpc_hr_main at ffffffffa10c7c3a [ptlrpc] #3 [ffff88062bd89f48] kernel_thread at ffffffff8100c20a PID: 11991 TASK: ffff880639820100 CPU: 0 COMMAND: "ptlrpc_hr00_002" #0 [ffff88062bc6dd70] schedule at ffffffff81528762 #1 [ffff88062bc6de38] cfs_waitq_wait at ffffffffa079d6fe [libcfs] #2 [ffff88062bc6de48] ptlrpc_hr_main at ffffffffa10c7c3a [ptlrpc] #3 [ffff88062bc6df48] kernel_thread at ffffffff8100c20a PID: 11992 TASK: ffff880639821580 CPU: 3 COMMAND: "ptlrpc_hr00_003" #0 [ffff88062d337d70] schedule at ffffffff81528762 #1 [ffff88062d337e38] cfs_waitq_wait at ffffffffa079d6fe [libcfs] #2 [ffff88062d337e48] ptlrpc_hr_main at ffffffffa10c7c3a [ptlrpc] #3 [ffff88062d337f48] kernel_thread at ffffffff8100c20a PID: 11993 TASK: ffff88063a36d540 CPU: 4 COMMAND: "ptlrpc_hr01_000" #0 [ffff88062c675d70] schedule at ffffffff81528762 #1 [ffff88062c675e38] cfs_waitq_wait at ffffffffa079d6fe [libcfs] #2 [ffff88062c675e48] ptlrpc_hr_main at ffffffffa10c7c3a [ptlrpc] #3 [ffff88062c675f48] kernel_thread at ffffffff8100c20a PID: 11994 TASK: ffff88063a36cb00 CPU: 4 COMMAND: "ptlrpc_hr01_001" #0 [ffff88062cb67d70] schedule at 
ffffffff81528762 #1 [ffff88062cb67e38] cfs_waitq_wait at ffffffffa079d6fe [libcfs] #2 [ffff88062cb67e48] ptlrpc_hr_main at ffffffffa10c7c3a [ptlrpc] #3 [ffff88062cb67f48] kernel_thread at ffffffff8100c20a PID: 11995 TASK: ffff880632900b00 CPU: 7 COMMAND: "ptlrpc_hr01_002" #0 [ffff88062bd27d70] schedule at ffffffff81528762 #1 [ffff88062bd27e38] cfs_waitq_wait at ffffffffa079d6fe [libcfs] #2 [ffff88062bd27e48] ptlrpc_hr_main at ffffffffa10c7c3a [ptlrpc] #3 [ffff88062bd27f48] kernel_thread at ffffffff8100c20a PID: 11996 TASK: ffff880639835500 CPU: 4 COMMAND: "ptlrpc_hr01_003" #0 [ffff88063a179d70] schedule at ffffffff81528762 #1 [ffff88063a179e38] cfs_waitq_wait at ffffffffa079d6fe [libcfs] #2 [ffff88063a179e48] ptlrpc_hr_main at ffffffffa10c7c3a [ptlrpc] #3 [ffff88063a179f48] kernel_thread at ffffffff8100c20a PID: 11999 TASK: ffff88062dc20080 CPU: 5 COMMAND: "kiblnd_connd" #0 [ffff88063171dd40] schedule at ffffffff81528762 #1 [ffff88063171de08] schedule_timeout at ffffffff815295d2 #2 [ffff88063171deb8] cfs_waitq_timedwait at ffffffffa079d6d1 [libcfs] #3 [ffff88063171dec8] kiblnd_connd at ffffffffa0dc7ee1 [ko2iblnd] #4 [ffff88063171df48] kernel_thread at ffffffff8100c20a PID: 12000 TASK: ffff8806398c3540 CPU: 1 COMMAND: "kiblnd_sd_00_00" #0 [ffff8806227b3d80] schedule at ffffffff81528762 #1 [ffff8806227b3e48] cfs_waitq_wait at ffffffffa079d6fe [libcfs] #2 [ffff8806227b3e58] kiblnd_scheduler at ffffffffa0dc891f [ko2iblnd] #3 [ffff8806227b3f48] kernel_thread at ffffffff8100c20a PID: 12001 TASK: ffff880639c14040 CPU: 0 COMMAND: "kiblnd_sd_00_01" #0 [ffff88062eb63d80] schedule at ffffffff81528762 #1 [ffff88062eb63e48] cfs_waitq_wait at ffffffffa079d6fe [libcfs] #2 [ffff88062eb63e58] kiblnd_scheduler at ffffffffa0dc891f [ko2iblnd] #3 [ffff88062eb63f48] kernel_thread at ffffffff8100c20a PID: 12002 TASK: ffff88063a007580 CPU: 6 COMMAND: "kiblnd_sd_01_00" #0 [ffff8806227a9d80] schedule at ffffffff81528762 #1 [ffff8806227a9e48] cfs_waitq_wait at ffffffffa079d6fe [libcfs] #2 [ffff8806227a9e58] kiblnd_scheduler at ffffffffa0dc891f [ko2iblnd] #3 [ffff8806227a9f48] kernel_thread at ffffffff8100c20a PID: 12003 TASK: ffff880628cd9540 CPU: 5 COMMAND: "kiblnd_sd_01_01" #0 [ffff880625045d80] schedule at ffffffff81528762 #1 [ffff880625045e48] cfs_waitq_wait at ffffffffa079d6fe [libcfs] #2 [ffff880625045e58] kiblnd_scheduler at ffffffffa0dc891f [ko2iblnd] #3 [ffff880625045f48] kernel_thread at ffffffff8100c20a PID: 12004 TASK: ffff88062e0b20c0 CPU: 6 COMMAND: "router_checker" #0 [ffff880623301d10] schedule at ffffffff81528762 #1 [ffff880623301dd8] schedule_timeout at ffffffff815295d2 #2 [ffff880623301e88] cfs_schedule_timeout_and_set_state at ffffffffa079d6bd [libcfs] #3 [ffff880623301e98] lnet_router_checker at ffffffffa08b3bc3 [lnet] #4 [ffff880623301f48] kernel_thread at ffffffff8100c20a PID: 12005 TASK: ffff880632948100 CPU: 6 COMMAND: "ptlrpcd_rcv" #0 [ffff880623303ce0] schedule at ffffffff81528762 #1 [ffff880623303da8] schedule_timeout at ffffffff815295d2 #2 [ffff880623303e58] cfs_waitq_timedwait at ffffffffa079d6d1 [libcfs] #3 [ffff880623303e68] ptlrpcd at ffffffffa10deb89 [ptlrpc] #4 [ffff880623303f48] kernel_thread at ffffffff8100c20a PID: 12006 TASK: ffff88063a37aa80 CPU: 2 COMMAND: "ptlrpcd_0" #0 [ffff880623307ce0] schedule at ffffffff81528762 #1 [ffff880623307da8] schedule_timeout at ffffffff815295d2 #2 [ffff880623307e58] cfs_waitq_timedwait at ffffffffa079d6d1 [libcfs] #3 [ffff880623307e68] ptlrpcd at ffffffffa10deb89 [ptlrpc] #4 [ffff880623307f48] kernel_thread at ffffffff8100c20a PID: 12007 TASK: 
ffff880632ae5540 CPU: 2 COMMAND: "ptlrpcd_1" #0 [ffff880623321ce0] schedule at ffffffff81528762 #1 [ffff880623321da8] schedule_timeout at ffffffff815295d2 #2 [ffff880623321e58] cfs_waitq_timedwait at ffffffffa079d6d1 [libcfs] #3 [ffff880623321e68] ptlrpcd at ffffffffa10deb89 [ptlrpc] #4 [ffff880623321f48] kernel_thread at ffffffff8100c20a PID: 12008 TASK: ffff8806329a54c0 CPU: 2 COMMAND: "ptlrpcd_2" #0 [ffff880623325ce0] schedule at ffffffff81528762 #1 [ffff880623325da8] schedule_timeout at ffffffff815295d2 #2 [ffff880623325e58] cfs_waitq_timedwait at ffffffffa079d6d1 [libcfs] #3 [ffff880623325e68] ptlrpcd at ffffffffa10deb89 [ptlrpc] #4 [ffff880623325f48] kernel_thread at ffffffff8100c20a PID: 12009 TASK: ffff88063280b4c0 CPU: 2 COMMAND: "ptlrpcd_3" #0 [ffff880623327ce0] schedule at ffffffff81528762 #1 [ffff880623327da8] schedule_timeout at ffffffff815295d2 #2 [ffff880623327e58] cfs_waitq_timedwait at ffffffffa079d6d1 [libcfs] #3 [ffff880623327e68] ptlrpcd at ffffffffa10deb89 [ptlrpc] #4 [ffff880623327f48] kernel_thread at ffffffff8100c20a PID: 12010 TASK: ffff88063a720040 CPU: 6 COMMAND: "ptlrpcd_4" #0 [ffff880626cd9ce0] schedule at ffffffff81528762 #1 [ffff880626cd9da8] schedule_timeout at ffffffff815295d2 #2 [ffff880626cd9e58] cfs_waitq_timedwait at ffffffffa079d6d1 [libcfs] #3 [ffff880626cd9e68] ptlrpcd at ffffffffa10deb89 [ptlrpc] #4 [ffff880626cd9f48] kernel_thread at ffffffff8100c20a PID: 12011 TASK: ffff88063b7bf540 CPU: 4 COMMAND: "ptlrpcd_5" #0 [ffff880626cddce0] schedule at ffffffff81528762 #1 [ffff880626cddda8] schedule_timeout at ffffffff815295d2 #2 [ffff880626cdde58] cfs_waitq_timedwait at ffffffffa079d6d1 [libcfs] #3 [ffff880626cdde68] ptlrpcd at ffffffffa10deb89 [ptlrpc] #4 [ffff880626cddf48] kernel_thread at ffffffff8100c20a PID: 12012 TASK: ffff880632a8b580 CPU: 5 COMMAND: "ptlrpcd_6" #0 [ffff880626cdfce0] schedule at ffffffff81528762 #1 [ffff880626cdfda8] schedule_timeout at ffffffff815295d2 #2 [ffff880626cdfe58] cfs_waitq_timedwait at ffffffffa079d6d1 [libcfs] #3 [ffff880626cdfe68] ptlrpcd at ffffffffa10deb89 [ptlrpc] #4 [ffff880626cdff48] kernel_thread at ffffffff8100c20a PID: 12013 TASK: ffff880632a8a100 CPU: 5 COMMAND: "ptlrpcd_7" #0 [ffff8806260b1ce0] schedule at ffffffff81528762 #1 [ffff8806260b1da8] schedule_timeout at ffffffff815295d2 #2 [ffff8806260b1e58] cfs_waitq_timedwait at ffffffffa079d6d1 [libcfs] #3 [ffff8806260b1e68] ptlrpcd at ffffffffa10deb89 [ptlrpc] #4 [ffff8806260b1f48] kernel_thread at ffffffff8100c20a PID: 12014 TASK: ffff880639baa100 CPU: 0 COMMAND: "ll_ping" #0 [ffff8806260b5cf0] schedule at ffffffff81528762 #1 [ffff8806260b5db8] schedule_timeout at ffffffff815295d2 #2 [ffff8806260b5e68] cfs_waitq_timedwait at ffffffffa079d6d1 [libcfs] #3 [ffff8806260b5e78] ptlrpc_pinger_main at ffffffffa10d189e [ptlrpc] #4 [ffff8806260b5f48] kernel_thread at ffffffff8100c20a PID: 12015 TASK: ffff88063a2b4100 CPU: 1 COMMAND: "sptlrpc_gc" #0 [ffff8806260b7d30] schedule at ffffffff81528762 #1 [ffff8806260b7df8] schedule_timeout at ffffffff815295d2 #2 [ffff8806260b7ea8] cfs_waitq_timedwait at ffffffffa079d6d1 [libcfs] #3 [ffff8806260b7eb8] sec_gc_main at ffffffffa10f1cfb [ptlrpc] #4 [ffff8806260b7f48] kernel_thread at ffffffff8100c20a PID: 12024 TASK: ffff880639a86b40 CPU: 1 COMMAND: "ll_capa" #0 [ffff880629053de0] schedule at ffffffff81528762 #1 [ffff880629053ea8] cfs_waitq_wait at ffffffffa079d6fe [libcfs] #2 [ffff880629053eb8] capa_thread_main at ffffffffa14b2a70 [lustre] #3 [ffff880629053f48] kernel_thread at ffffffff8100c20a PID: 12943 TASK: ffff88063989e0c0 
CPU: 6 COMMAND: "dd" #0 [ffff880627371af8] schedule at ffffffff81528762 #1 [ffff880627371bc0] io_schedule at ffffffff81528f43 #2 [ffff880627371be0] sync_page at ffffffff8111fa2d #3 [ffff880627371bf0] sync_page_killable at ffffffff8111fa4e #4 [ffff880627371c00] __wait_on_bit_lock at ffffffff815297da #5 [ffff880627371c50] __lock_page_killable at ffffffff8111f957 #6 [ffff880627371cb0] generic_file_aio_read at ffffffff81121684 #7 [ffff880627371d90] blkdev_aio_read at ffffffff811c4eb1 #8 [ffff880627371dc0] do_sync_read at ffffffff8118941a #9 [ffff880627371ef0] vfs_read at ffffffff81189d05 #10 [ffff880627371f30] sys_read at ffffffff81189e41 #11 [ffff880627371f80] system_call_fastpath at ffffffff8100b072 RIP: 0000003bb0edb730 RSP: 00007fff49adc1b0 RFLAGS: 00000217 RAX: 0000000000000000 RBX: ffffffff8100b072 RCX: 0000003bb0edb730 RDX: 0000000000001000 RSI: 0000000000afd000 RDI: 0000000000000000 RBP: 0000000000afd000 R8: 0000003bb118fee8 R9: 0000000000000001 R10: 0000000000003003 R11: 0000000000000246 R12: 0000000000afcfff R13: 0000000000000000 R14: 0000000000001000 R15: 0000000000000000 ORIG_RAX: 0000000000000000 CS: 0033 SS: 002b PID: 12944 TASK: ffff880639c920c0 CPU: 5 COMMAND: "dd" #0 [ffff88062d329af8] schedule at ffffffff81528762 #1 [ffff88062d329bc0] io_schedule at ffffffff81528f43 #2 [ffff88062d329be0] sync_page at ffffffff8111fa2d #3 [ffff88062d329bf0] sync_page_killable at ffffffff8111fa4e #4 [ffff88062d329c00] __wait_on_bit_lock at ffffffff815297da #5 [ffff88062d329c50] __lock_page_killable at ffffffff8111f957 #6 [ffff88062d329cb0] generic_file_aio_read at ffffffff81121684 #7 [ffff88062d329d90] blkdev_aio_read at ffffffff811c4eb1 #8 [ffff88062d329dc0] do_sync_read at ffffffff8118941a #9 [ffff88062d329ef0] vfs_read at ffffffff81189d05 #10 [ffff88062d329f30] sys_read at ffffffff81189e41 #11 [ffff88062d329f80] system_call_fastpath at ffffffff8100b072 RIP: 0000003bb0edb730 RSP: 00007fffc44120c0 RFLAGS: 00000206 RAX: 0000000000000000 RBX: ffffffff8100b072 RCX: 0000003bb0edb730 RDX: 0000000000001000 RSI: 0000000001550000 RDI: 0000000000000000 RBP: 0000000001550000 R8: 0000003bb118fee8 R9: 0000000000000001 R10: 0000000000003003 R11: 0000000000000246 R12: 000000000154ffff R13: 0000000000000000 R14: 0000000000001000 R15: 0000000000000000 ORIG_RAX: 0000000000000000 CS: 0033 SS: 002b PID: 12945 TASK: ffff8806399af540 CPU: 6 COMMAND: "dd" #0 [ffff880631ca5af8] schedule at ffffffff81528762 #1 [ffff880631ca5bc0] io_schedule at ffffffff81528f43 #2 [ffff880631ca5be0] sync_page at ffffffff8111fa2d #3 [ffff880631ca5bf0] sync_page_killable at ffffffff8111fa4e #4 [ffff880631ca5c00] __wait_on_bit_lock at ffffffff815297da #5 [ffff880631ca5c50] __lock_page_killable at ffffffff8111f957 #6 [ffff880631ca5cb0] generic_file_aio_read at ffffffff81121684 #7 [ffff880631ca5d90] blkdev_aio_read at ffffffff811c4eb1 #8 [ffff880631ca5dc0] do_sync_read at ffffffff8118941a #9 [ffff880631ca5ef0] vfs_read at ffffffff81189d05 #10 [ffff880631ca5f30] sys_read at ffffffff81189e41 #11 [ffff880631ca5f80] system_call_fastpath at ffffffff8100b072 RIP: 0000003bb0edb730 RSP: 00007fff91613e10 RFLAGS: 00000217 RAX: 0000000000000000 RBX: ffffffff8100b072 RCX: 0000003bb0edb730 RDX: 0000000000001000 RSI: 0000000000d11000 RDI: 0000000000000000 RBP: 0000000000d11000 R8: 0000003bb118fee8 R9: 0000000000000001 R10: 0000000000003003 R11: 0000000000000246 R12: 0000000000d10fff R13: 0000000000000000 R14: 0000000000001000 R15: 0000000000000000 ORIG_RAX: 0000000000000000 CS: 0033 SS: 002b PID: 12946 TASK: ffff88062dc21500 CPU: 5 COMMAND: "dd" #0 
[ffff88062331faf8] schedule at ffffffff81528762 #1 [ffff88062331fbc0] io_schedule at ffffffff81528f43 #2 [ffff88062331fbe0] sync_page at ffffffff8111fa2d #3 [ffff88062331fbf0] sync_page_killable at ffffffff8111fa4e #4 [ffff88062331fc00] __wait_on_bit_lock at ffffffff815297da #5 [ffff88062331fc50] __lock_page_killable at ffffffff8111f957 #6 [ffff88062331fcb0] generic_file_aio_read at ffffffff81121684 #7 [ffff88062331fd90] blkdev_aio_read at ffffffff811c4eb1 #8 [ffff88062331fdc0] do_sync_read at ffffffff8118941a #9 [ffff88062331fef0] vfs_read at ffffffff81189d05 #10 [ffff88062331ff30] sys_read at ffffffff81189e41 #11 [ffff88062331ff80] system_call_fastpath at ffffffff8100b072 RIP: 0000003bb0edb730 RSP: 00007fff26c5c840 RFLAGS: 00000206 RAX: 0000000000000000 RBX: ffffffff8100b072 RCX: 0000003bb0edb730 RDX: 0000000000001000 RSI: 0000000001975000 RDI: 0000000000000000 RBP: 0000000001975000 R8: 0000003bb118fee8 R9: 0000000000000001 R10: 0000000000003003 R11: 0000000000000246 R12: 0000000001974fff R13: 0000000000000000 R14: 0000000000001000 R15: 0000000000000000 ORIG_RAX: 0000000000000000 CS: 0033 SS: 002b PID: 12947 TASK: ffff880639d2c0c0 CPU: 4 COMMAND: "dd" #0 [ffff88034ac07e90] crash_nmi_callback at ffffffff81030096 #1 [ffff88034ac07ea0] notifier_call_chain at ffffffff8152e3b5 #2 [ffff88034ac07ee0] atomic_notifier_call_chain at ffffffff8152e41a #3 [ffff88034ac07ef0] notify_die at ffffffff810a052e #4 [ffff88034ac07f20] do_nmi at ffffffff8152c07b #5 [ffff88034ac07f50] nmi at ffffffff8152b940 [exception RIP: block_read_full_page+549] RIP: ffffffff811c11f5 RSP: ffff880622411bd8 RFLAGS: 00000246 RAX: 0000000000000000 RBX: ffff880625cf3678 RCX: 0000000000000000 RDX: 0000000000000000 RSI: 000000000000938f RDI: ffff88033b395140 RBP: ffff880622411c98 R8: ffffea0015834f18 R9: 0000000000000400 R10: ffff88033b395218 R11: 0000000072a4c600 R12: ffff880622411c28 R13: 0000000000000004 R14: 0000000000000002 R15: ffff880622411c38 ORIG_RAX: ffffffffffffffff CS: 0010 SS: 0018 --- --- #6 [ffff880622411bd8] block_read_full_page at ffffffff811c11f5 #7 [ffff880622411ca0] blkdev_readpage at ffffffff811c5be8 #8 [ffff880622411cb0] generic_file_aio_read at ffffffff811213cc #9 [ffff880622411d90] blkdev_aio_read at ffffffff811c4eb1 #10 [ffff880622411dc0] do_sync_read at ffffffff8118941a #11 [ffff880622411ef0] vfs_read at ffffffff81189d05 #12 [ffff880622411f30] sys_read at ffffffff81189e41 #13 [ffff880622411f80] system_call_fastpath at ffffffff8100b072 RIP: 0000003bb0edb730 RSP: 00007fffc9a0cc60 RFLAGS: 00000202 RAX: 0000000000000000 RBX: ffffffff8100b072 RCX: 0000000000000010 RDX: 0000000000001000 RSI: 00000000013a6000 RDI: 0000000000000000 RBP: 00000000013a6000 R8: 0000003bb118fee8 R9: 0000000000000001 R10: 0000000000003003 R11: 0000000000000246 R12: 00000000013a5fff R13: 0000000000000000 R14: 0000000000001000 R15: 0000000000000000 ORIG_RAX: 0000000000000000 CS: 0033 SS: 002b PID: 12948 TASK: ffff88063a64c100 CPU: 5 COMMAND: "dd" #0 [ffff88062d28baf8] schedule at ffffffff81528762 #1 [ffff88062d28bbc0] io_schedule at ffffffff81528f43 #2 [ffff88062d28bbe0] sync_page at ffffffff8111fa2d #3 [ffff88062d28bbf0] sync_page_killable at ffffffff8111fa4e #4 [ffff88062d28bc00] __wait_on_bit_lock at ffffffff815297da #5 [ffff88062d28bc50] __lock_page_killable at ffffffff8111f957 #6 [ffff88062d28bcb0] generic_file_aio_read at ffffffff811216a8 #7 [ffff88062d28bd90] blkdev_aio_read at ffffffff811c4eb1 #8 [ffff88062d28bdc0] do_sync_read at ffffffff8118941a #9 [ffff88062d28bef0] vfs_read at ffffffff81189d05 #10 [ffff88062d28bf30] sys_read 
at ffffffff81189e41 #11 [ffff88062d28bf80] system_call_fastpath at ffffffff8100b072 RIP: 0000003bb0edb730 RSP: 00007fff48cbacc8 RFLAGS: 00000206 RAX: 0000000000000000 RBX: ffffffff8100b072 RCX: 0000003bb0edb730 RDX: 0000000000001000 RSI: 0000000001f29000 RDI: 0000000000000000 RBP: 0000000001f29000 R8: 0000003bb118fee8 R9: 0000000000000001 R10: 0000000000003003 R11: 0000000000000246 R12: 0000000001f28fff R13: 0000000000000000 R14: 0000000000001000 R15: 0000000000000000 ORIG_RAX: 0000000000000000 CS: 0033 SS: 002b PID: 12949 TASK: ffff88063a64d580 CPU: 5 COMMAND: "dd" #0 [ffff88062d067af8] schedule at ffffffff81528762 #1 [ffff88062d067bc0] io_schedule at ffffffff81528f43 #2 [ffff88062d067be0] sync_page at ffffffff8111fa2d #3 [ffff88062d067bf0] sync_page_killable at ffffffff8111fa4e #4 [ffff88062d067c00] __wait_on_bit_lock at ffffffff815297da #5 [ffff88062d067c50] __lock_page_killable at ffffffff8111f957 #6 [ffff88062d067cb0] generic_file_aio_read at ffffffff81121684 #7 [ffff88062d067d90] blkdev_aio_read at ffffffff811c4eb1 #8 [ffff88062d067dc0] do_sync_read at ffffffff8118941a #9 [ffff88062d067ef0] vfs_read at ffffffff81189d05 #10 [ffff88062d067f30] sys_read at ffffffff81189e41 #11 [ffff88062d067f80] system_call_fastpath at ffffffff8100b072 RIP: 0000003bb0edb730 RSP: 00007fff74cf8530 RFLAGS: 00000206 RAX: 0000000000000000 RBX: ffffffff8100b072 RCX: 0000003bb0edb730 RDX: 0000000000001000 RSI: 0000000001bdf000 RDI: 0000000000000000 RBP: 0000000001bdf000 R8: 0000003bb118fee8 R9: 0000000000000001 R10: 0000000000003003 R11: 0000000000000246 R12: 0000000001bdefff R13: 0000000000000000 R14: 0000000000001000 R15: 0000000000000000 ORIG_RAX: 0000000000000000 CS: 0033 SS: 002b PID: 12950 TASK: ffff8806329000c0 CPU: 7 COMMAND: "dd" #0 [ffff88062cb99af8] schedule at ffffffff81528762 #1 [ffff88062cb99bc0] io_schedule at ffffffff81528f43 #2 [ffff88062cb99be0] sync_page at ffffffff8111fa2d #3 [ffff88062cb99bf0] sync_page_killable at ffffffff8111fa4e #4 [ffff88062cb99c00] __wait_on_bit_lock at ffffffff815297da #5 [ffff88062cb99c50] __lock_page_killable at ffffffff8111f957 #6 [ffff88062cb99cb0] generic_file_aio_read at ffffffff81121684 #7 [ffff88062cb99d90] blkdev_aio_read at ffffffff811c4eb1 #8 [ffff88062cb99dc0] do_sync_read at ffffffff8118941a #9 [ffff88062cb99ef0] vfs_read at ffffffff81189d05 #10 [ffff88062cb99f30] sys_read at ffffffff81189e41 #11 [ffff88062cb99f80] system_call_fastpath at ffffffff8100b072 RIP: 0000003bb0edb730 RSP: 00007fff92737b90 RFLAGS: 00000206 RAX: 0000000000000000 RBX: ffffffff8100b072 RCX: 0000003bb0edb730 RDX: 0000000000001000 RSI: 00000000011f3000 RDI: 0000000000000000 RBP: 00000000011f3000 R8: 0000003bb118fee8 R9: 0000000000000001 R10: 0000000000003003 R11: 0000000000000246 R12: 00000000011f2fff R13: 0000000000000000 R14: 0000000000001000 R15: 0000000000000000 ORIG_RAX: 0000000000000000 CS: 0033 SS: 002b PID: 12951 TASK: ffff880628cd8b00 CPU: 7 COMMAND: "dd" #0 [ffff88062d5c7af8] schedule at ffffffff81528762 #1 [ffff88062d5c7bc0] io_schedule at ffffffff81528f43 #2 [ffff88062d5c7be0] sync_page at ffffffff8111fa2d #3 [ffff88062d5c7bf0] sync_page_killable at ffffffff8111fa4e #4 [ffff88062d5c7c00] __wait_on_bit_lock at ffffffff815297da #5 [ffff88062d5c7c50] __lock_page_killable at ffffffff8111f957 #6 [ffff88062d5c7cb0] generic_file_aio_read at ffffffff81121684 #7 [ffff88062d5c7d90] blkdev_aio_read at ffffffff811c4eb1 #8 [ffff88062d5c7dc0] do_sync_read at ffffffff8118941a #9 [ffff88062d5c7ef0] vfs_read at ffffffff81189d05 #10 [ffff88062d5c7f30] sys_read at ffffffff81189e41 #11 
[ffff88062d5c7f80] system_call_fastpath at ffffffff8100b072 RIP: 0000003bb0edb730 RSP: 00007fffe5a9e5a8 RFLAGS: 00000206 RAX: 0000000000000000 RBX: ffffffff8100b072 RCX: 0000003bb0edb730 RDX: 0000000000001000 RSI: 00000000009fe000 RDI: 0000000000000000 RBP: 00000000009fe000 R8: 0000003bb118fee8 R9: 0000000000000001 R10: 0000000000003003 R11: 0000000000000246 R12: 00000000009fdfff R13: 0000000000000000 R14: 0000000000001000 R15: 0000000000000000 ORIG_RAX: 0000000000000000 CS: 0033 SS: 002b PID: 12952 TASK: ffff8806387a7500 CPU: 7 COMMAND: "dd" #0 [ffff8806252b3af8] schedule at ffffffff81528762 #1 [ffff8806252b3bc0] io_schedule at ffffffff81528f43 #2 [ffff8806252b3be0] sync_page at ffffffff8111fa2d #3 [ffff8806252b3bf0] sync_page_killable at ffffffff8111fa4e #4 [ffff8806252b3c00] __wait_on_bit_lock at ffffffff815297da #5 [ffff8806252b3c50] __lock_page_killable at ffffffff8111f957 #6 [ffff8806252b3cb0] generic_file_aio_read at ffffffff81121684 #7 [ffff8806252b3d90] blkdev_aio_read at ffffffff811c4eb1 #8 [ffff8806252b3dc0] do_sync_read at ffffffff8118941a #9 [ffff8806252b3ef0] vfs_read at ffffffff81189d05 #10 [ffff8806252b3f30] sys_read at ffffffff81189e41 #11 [ffff8806252b3f80] system_call_fastpath at ffffffff8100b072 RIP: 0000003bb0edb730 RSP: 00007fff9a316990 RFLAGS: 00000206 RAX: 0000000000000000 RBX: ffffffff8100b072 RCX: 0000003bb0edb790 RDX: 0000000000001000 RSI: 0000000001523000 RDI: 0000000000000000 RBP: 0000000001523000 R8: 0000003bb118fee8 R9: 0000000000000001 R10: 0000000000003003 R11: 0000000000000246 R12: 0000000001522fff R13: 0000000000000000 R14: 0000000000001000 R15: 0000000000000000 ORIG_RAX: 0000000000000000 CS: 0033 SS: 002b PID: 12955 TASK: ffff88062bc634c0 CPU: 6 COMMAND: "mount" #0 [ffff880623609db8] schedule at ffffffff81528762 #1 [ffff880623609e80] do_wait at ffffffff81075464 #2 [ffff880623609ee0] sys_wait4 at ffffffff81075563 #3 [ffff880623609f80] system_call_fastpath at ffffffff8100b072 RIP: 00007f89ab49c824 RSP: 00007fff0bb4beb0 RFLAGS: 00010246 RAX: 000000000000003d RBX: ffffffff8100b072 RCX: 0000000000000000 RDX: 0000000000000000 RSI: 00007fff0bb4c06c RDI: ffffffffffffffff RBP: 00007f89ad69c970 R8: 00007f89ac20c7e0 R9: 0000000000000000 R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000 R13: 00007f89ad69ca10 R14: 00007f89ad69c990 R15: 00007fff0bb4c3ec ORIG_RAX: 000000000000003d CS: 0033 SS: 002b PID: 12956 TASK: ffff880639c08040 CPU: 3 COMMAND: "mount" #0 [ffff880620989db8] schedule at ffffffff81528762 #1 [ffff880620989e80] do_wait at ffffffff81075464 #2 [ffff880620989ee0] sys_wait4 at ffffffff81075563 #3 [ffff880620989f80] system_call_fastpath at ffffffff8100b072 RIP: 00007f672bb56824 RSP: 00007fff267325c0 RFLAGS: 00010246 RAX: 000000000000003d RBX: ffffffff8100b072 RCX: 0000000000000000 RDX: 0000000000000000 RSI: 00007fff2673277c RDI: ffffffffffffffff RBP: 00007f672e9ee970 R8: 00007f672c8c67e0 R9: 0000000000000000 R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000 R13: 00007f672e9eea10 R14: 00007f672e9ee990 R15: 00007fff26732afc ORIG_RAX: 000000000000003d CS: 0033 SS: 002b PID: 12957 TASK: ffff88063a70d540 CPU: 2 COMMAND: "mount.lustre" #0 [ffff88062964bac0] machine_kexec at ffffffff8103915b #1 [ffff88062964bb20] crash_kexec at ffffffff810c5e62 #2 [ffff88062964bbf0] panic at ffffffff815280aa #3 [ffff88062964bc70] lbug_with_loc at ffffffffa079ceeb [libcfs] #4 [ffff88062964bc90] server_fill_super at ffffffffa0a2d913 [obdclass] #5 [ffff88062964bd70] lustre_fill_super at ffffffffa09fd998 [obdclass] #6 [ffff88062964bda0] get_sb_nodev at 
ffffffff8118c7ff #7 [ffff88062964bde0] lustre_get_sb at ffffffffa09f5175 [obdclass] #8 [ffff88062964be00] vfs_kern_mount at ffffffff8118be5b #9 [ffff88062964be50] do_kern_mount at ffffffff8118c002 #10 [ffff88062964bea0] do_mount at ffffffff811ad00b #11 [ffff88062964bf20] sys_mount at ffffffff811ad6d0 #12 [ffff88062964bf80] system_call_fastpath at ffffffff8100b072 RIP: 0000003bb0ee92fa RSP: 00007fff8643e448 RFLAGS: 00010206 RAX: 00000000000000a5 RBX: ffffffff8100b072 RCX: 0000000001000000 RDX: 0000000000408b5f RSI: 00007fff864414b8 RDI: 00000000025b6170 RBP: 0000000000000000 R8: 00000000025b6190 R9: 0000000000000000 R10: 0000000001000000 R11: 0000000000000206 R12: 000000000060db18 R13: 000000000060db10 R14: 00000000025b6190 R15: 0000000000000000 ORIG_RAX: 00000000000000a5 CS: 0033 SS: 002b PID: 12958 TASK: ffff880333816b00 CPU: 6 COMMAND: "mount.lustre" #0 [ffff8803289d13c8] schedule at ffffffff81528762 #1 [ffff8803289d1490] io_schedule at ffffffff81528f43 #2 [ffff8803289d14b0] sync_page at ffffffff8111fa2d #3 [ffff8803289d14c0] __wait_on_bit_lock at ffffffff815297da #4 [ffff8803289d1510] __lock_page at ffffffff8111f9c7 #5 [ffff8803289d1570] truncate_inode_pages_range at ffffffff81137bc3 #6 [ffff8803289d1660] truncate_inode_pages at ffffffff81137c75 #7 [ffff8803289d1670] kill_bdev at ffffffff811c50ea #8 [ffff8803289d1690] set_blocksize at ffffffff811c6bae #9 [ffff8803289d16c0] sb_set_blocksize at ffffffff811c6bdd #10 [ffff8803289d16e0] sb_min_blocksize at ffffffff811c6c61 #11 [ffff8803289d16f0] ldiskfs_fill_super at ffffffffa057a37a [ldiskfs] #12 [ffff8803289d1810] get_sb_bdev at ffffffff8118c9ce #13 [ffff8803289d18a0] ldiskfs_get_sb at ffffffffa0575018 [ldiskfs] #14 [ffff8803289d18b0] vfs_kern_mount at ffffffff8118be5b #15 [ffff8803289d1900] osd_mount at ffffffffa05c980b [osd_ldiskfs] #16 [ffff8803289d1960] osd_device_alloc at ffffffffa05ca8c2 [osd_ldiskfs] #17 [ffff8803289d19b0] obd_setup at ffffffffa09ea867 [obdclass] #18 [ffff8803289d1a70] class_setup at ffffffffa09eab78 [obdclass] #19 [ffff8803289d1ac0] class_process_config at ffffffffa09f208c [obdclass] #20 [ffff8803289d1b50] do_lcfg at ffffffffa09f7719 [obdclass] #21 [ffff8803289d1c30] lustre_start_simple at ffffffffa09f7ae4 [obdclass] #22 [ffff8803289d1c90] server_fill_super at ffffffffa0a2cfbd [obdclass] #23 [ffff8803289d1d70] lustre_fill_super at ffffffffa09fd998 [obdclass] #24 [ffff8803289d1da0] get_sb_nodev at ffffffff8118c7ff #25 [ffff8803289d1de0] lustre_get_sb at ffffffffa09f5175 [obdclass] #26 [ffff8803289d1e00] vfs_kern_mount at ffffffff8118be5b #27 [ffff8803289d1e50] do_kern_mount at ffffffff8118c002 #28 [ffff8803289d1ea0] do_mount at ffffffff811ad00b #29 [ffff8803289d1f20] sys_mount at ffffffff811ad6d0 #30 [ffff8803289d1f80] system_call_fastpath at ffffffff8100b072 RIP: 0000003bb0ee92fa RSP: 00007fffa6593608 RFLAGS: 00010206 RAX: 00000000000000a5 RBX: ffffffff8100b072 RCX: 0000000001000000 RDX: 0000000000408b5f RSI: 00007fffa6596678 RDI: 000000000227d170 RBP: 0000000000000000 R8: 000000000227d190 R9: 0000000000000000 R10: 0000000001000000 R11: 0000000000000206 R12: 000000000060db18 R13: 000000000060db10 R14: 000000000227d190 R15: 0000000000000000 ORIG_RAX: 00000000000000a5 CS: 0033 SS: 002b PID: 15672 TASK: ffff880325652a80 CPU: 0 COMMAND: "flush-8:0" #0 [ffff88032732fce0] schedule at ffffffff81528762 #1 [ffff88032732fda8] schedule_timeout at ffffffff815295d2 #2 [ffff88032732fe58] schedule_timeout_interruptible at ffffffff8152977e #3 [ffff88032732fe68] bdi_writeback_task at ffffffff811b6410 #4 [ffff88032732feb8] 
bdi_start_fn at ffffffff81143ae6 #5 [ffff88032732fee8] kthread at ffffffff81099eb6 #6 [ffff88032732ff48] kernel_thread at ffffffff8100c20a PID: 25335 TASK: ffff880339106080 CPU: 0 COMMAND: "flush-253:2" #0 [ffff88032f50fce0] schedule at ffffffff81528762 #1 [ffff88032f50fda8] schedule_timeout at ffffffff815295d2 #2 [ffff88032f50fe58] schedule_timeout_interruptible at ffffffff8152977e #3 [ffff88032f50fe68] bdi_writeback_task at ffffffff811b6410 #4 [ffff88032f50feb8] bdi_start_fn at ffffffff81143ae6 #5 [ffff88032f50fee8] kthread at ffffffff81099eb6 #6 [ffff88032f50ff48] kernel_thread at ffffffff8100c20a PID: 25911 TASK: ffff880639839580 CPU: 2 COMMAND: "iomonitor" #0 [ffff88062057dbe8] schedule at ffffffff81528762 #1 [ffff88062057dcb0] pipe_wait at ffffffff8119408b #2 [ffff88062057dd00] pipe_read at ffffffff81194b36 #3 [ffff88062057ddc0] do_sync_read at ffffffff8118941a #4 [ffff88062057def0] vfs_read at ffffffff81189d05 #5 [ffff88062057df30] sys_read at ffffffff81189e41 #6 [ffff88062057df80] system_call_fastpath at ffffffff8100b072 RIP: 0000003bb120e740 RSP: 00007fffc6db9268 RFLAGS: 00010202 RAX: 0000000000000000 RBX: ffffffff8100b072 RCX: 0000000000000000 RDX: 0000000000001000 RSI: 000000000149e640 RDI: 0000000000000000 RBP: 000000000149e640 R8: 0000000000000000 R9: 0000000000100000 R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000001000 R13: 0000000000f1d830 R14: 0000000000f03010 R15: 0000000000f1d830 ORIG_RAX: 0000000000000000 CS: 0033 SS: 002b PID: 26964 TASK: ffff880622d3d580 CPU: 4 COMMAND: "corosync" #0 [ffff8806227f3998] schedule at ffffffff81528762 #1 [ffff8806227f3a60] schedule_hrtimeout_range at ffffffff8152a248 #2 [ffff8806227f3b00] poll_schedule_timeout at ffffffff811a03a9 #3 [ffff8806227f3b20] do_sys_poll at ffffffff811a0b47 #4 [ffff8806227f3f40] sys_poll at ffffffff811a0e01 #5 [ffff8806227f3f80] system_call_fastpath at ffffffff8100b072 RIP: 0000003bb0edf343 RSP: 00007fff9122dd90 RFLAGS: 00000217 RAX: 0000000000000007 RBX: ffffffff8100b072 RCX: 0000000000000002 RDX: 00000000000000d1 RSI: 000000000000000d RDI: 00000000010d2cb0 RBP: 00000000000000d1 R8: 0000000000000000 R9: 00000000000aeeae R10: 000000000000000a R11: 0000000000000293 R12: 00007fff9122de00 R13: 0000000001064070 R14: 0000000000000000 R15: 00000000010d2b70 ORIG_RAX: 0000000000000007 CS: 0033 SS: 002b PID: 26965 TASK: ffff880622d3cb40 CPU: 3 COMMAND: "corosync" #0 [ffff880622d03b38] schedule at ffffffff81528762 #1 [ffff880622d03c00] futex_wait_queue_me at ffffffff810ae9a9 #2 [ffff880622d03c40] futex_wait at ffffffff810afab8 #3 [ffff880622d03dc0] do_futex at ffffffff810b1221 #4 [ffff880622d03ef0] sys_futex at ffffffff810b1cdb #5 [ffff880622d03f80] system_call_fastpath at ffffffff8100b072 RIP: 0000003bb120d930 RSP: 00007f8374fb1e20 RFLAGS: 00000206 RAX: 00000000000000ca RBX: ffffffff8100b072 RCX: 0000003bb118fe98 RDX: 0000000000000000 RSI: 0000000000000080 RDI: 0000003ab40085e0 RBP: 00000000010ec1e0 R8: 0000000000000000 R9: 0000000000000000 R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000003 R13: 0000000000000000 R14: 0000000000000000 R15: 00007f8374fb1e8c ORIG_RAX: 00000000000000ca CS: 0033 SS: 002b PID: 26967 TASK: ffff880622d3c100 CPU: 3 COMMAND: "corosync" #0 [ffff88062ebbb998] schedule at ffffffff81528762 #1 [ffff88062ebbba60] schedule_hrtimeout_range at ffffffff8152a248 #2 [ffff88062ebbbb00] poll_schedule_timeout at ffffffff811a03a9 #3 [ffff88062ebbbb20] do_sys_poll at ffffffff811a0b47 #4 [ffff88062ebbbf40] sys_poll at ffffffff811a0e01 #5 [ffff88062ebbbf80] system_call_fastpath at 
ffffffff8100b072 RIP: 0000003bb0edf343 RSP: 00007f83745b0428 RFLAGS: 00000282 RAX: 0000000000000007 RBX: ffffffff8100b072 RCX: ffffffffffffffff RDX: 00000000000005e5 RSI: 0000000000000000 RDI: 0000000000000000 RBP: 00007f83745b04f0 R8: 0000000000000000 R9: 0000000000006957 R10: 00000000000aeead R11: 0000000000000293 R12: ffffffff00000000 R13: fffffffefffffffe R14: 00007f83745b04e0 R15: 0000000000618da0 ORIG_RAX: 0000000000000007 CS: 0033 SS: 002b PID: 26968 TASK: ffff880622d074c0 CPU: 4 COMMAND: "corosync" #0 [ffff880622d09b38] schedule at ffffffff81528762 #1 [ffff880622d09c00] futex_wait_queue_me at ffffffff810ae9a9 #2 [ffff880622d09c40] futex_wait at ffffffff810afab8 #3 [ffff880622d09dc0] do_futex at ffffffff810b1221 #4 [ffff880622d09ef0] sys_futex at ffffffff810b1cdb #5 [ffff880622d09f80] system_call_fastpath at ffffffff8100b072 RIP: 0000003bb120d930 RSP: 00007f836fffeff0 RFLAGS: 00000206 RAX: 00000000000000ca RBX: ffffffff8100b072 RCX: ffffffffffffffff RDX: 0000000000000000 RSI: 0000000000000080 RDI: 0000000000616680 RBP: 0000000000000000 R8: 0000000000000000 R9: 00007f836ffff700 R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000003 R13: 0000000000000000 R14: 00007f836ffff9c0 R15: 0000003bb141c360 ORIG_RAX: 00000000000000ca CS: 0033 SS: 002b PID: 26969 TASK: ffff880622d06a80 CPU: 5 COMMAND: "corosync" #0 [ffff880622d0dda8] schedule at ffffffff81528762 #1 [ffff880622d0de70] do_nanosleep at ffffffff8152a38b #2 [ffff880622d0dea0] hrtimer_nanosleep at ffffffff8109f5f4 #3 [ffff880622d0df50] sys_nanosleep at ffffffff8109f71e #4 [ffff880622d0df80] system_call_fastpath at ffffffff8100b072 RIP: 0000003bb120ef3d RSP: 00007f836ed4de20 RFLAGS: 00000293 RAX: 0000000000000023 RBX: ffffffff8100b072 RCX: 0000000000000010 RDX: 0000000000000001 RSI: 0000000000000000 RDI: 00007f836ed4de60 RBP: 00007f8374499b00 R8: 00007f836ed4e700 R9: 00007f836ed4e700 R10: 0000000000000000 R11: 0000000000000293 R12: 00007f836ed4de74 R13: 00007f8374499b00 R14: 0000000000000000 R15: 0000000000000001 ORIG_RAX: 0000000000000023 CS: 0033 SS: 002b PID: 26970 TASK: ffff880622d06040 CPU: 7 COMMAND: "cib" #0 [ffff880622d0f998] schedule at ffffffff81528762 #1 [ffff880622d0fa60] schedule_hrtimeout_range at ffffffff8152a248 #2 [ffff880622d0fb00] poll_schedule_timeout at ffffffff811a03a9 #3 [ffff880622d0fb20] do_sys_poll at ffffffff811a0b47 #4 [ffff880622d0ff40] sys_poll at ffffffff811a0e01 #5 [ffff880622d0ff80] system_call_fastpath at ffffffff8100b072 RIP: 0000003bb0edf308 RSP: 00007fffdcf84750 RFLAGS: 00000246 RAX: 0000000000000007 RBX: ffffffff8100b072 RCX: 000000001ddd27b6 RDX: 00000000000001f4 RSI: 000000000000000a RDI: 0000000001dfc4a0 RBP: 0000000001dfc4a0 R8: 0000000000000001 R9: 0000000000000000 R10: 0000000000000040 R11: 0000000000000246 R12: 000000000000000a R13: 0000003bb2504360 R14: 0000003bb2250ba0 R15: 0000000001c2c740 ORIG_RAX: 0000000000000007 CS: 0033 SS: 002b PID: 26971 TASK: ffff88062270f500 CPU: 5 COMMAND: "stonithd" #0 [ffff880622711998] schedule at ffffffff81528762 #1 [ffff880622711a60] schedule_hrtimeout_range at ffffffff8152a248 #2 [ffff880622711b00] poll_schedule_timeout at ffffffff811a03a9 #3 [ffff880622711b20] do_sys_poll at ffffffff811a0b47 #4 [ffff880622711f40] sys_poll at ffffffff811a0e01 #5 [ffff880622711f80] system_call_fastpath at ffffffff8100b072 RIP: 0000003bb0edf308 RSP: 00007ffff95f6e80 RFLAGS: 00000202 RAX: 0000000000000007 RBX: ffffffff8100b072 RCX: 0000000000ab95d0 RDX: 00000000000001f4 RSI: 0000000000000006 RDI: 0000000000abe430 RBP: 0000000000abe430 R8: 0000000000000001 R9: 
0000000000000000 R10: 0000000000000040 R11: 0000000000000246 R12: 0000000000000006 R13: 0000003bb2504360 R14: 0000003bb2250ba0 R15: 0000000000ab2080 ORIG_RAX: 0000000000000007 CS: 0033 SS: 002b PID: 26972 TASK: ffff88062270eac0 CPU: 5 COMMAND: "lrmd" #0 [ffff880622753998] schedule at ffffffff81528762 #1 [ffff880622753a60] schedule_hrtimeout_range at ffffffff8152a248 #2 [ffff880622753b00] poll_schedule_timeout at ffffffff811a03a9 #3 [ffff880622753b20] do_sys_poll at ffffffff811a0b47 #4 [ffff880622753f40] sys_poll at ffffffff811a0e01 #5 [ffff880622753f80] system_call_fastpath at ffffffff8100b072 RIP: 0000003bb0edf308 RSP: 00007fff9203d690 RFLAGS: 00000246 RAX: 0000000000000007 RBX: ffffffff8100b072 RCX: 0000000000000000 RDX: 00000000000003e8 RSI: 0000000000000004 RDI: 000000000073e0b0 RBP: 000000000073e0b0 R8: 0000000000000001 R9: 0000000000000000 R10: 0000000000000040 R11: 0000000000000246 R12: 0000000000000004 R13: 0000003bb2504360 R14: 0000003bb2250ba0 R15: 000000000071b0d0 ORIG_RAX: 0000000000000007 CS: 0033 SS: 002b PID: 26973 TASK: ffff88062270e080 CPU: 0 COMMAND: "attrd" #0 [ffff880625b13998] schedule at ffffffff81528762 #1 [ffff880625b13a60] schedule_hrtimeout_range at ffffffff8152a248 #2 [ffff880625b13b00] poll_schedule_timeout at ffffffff811a03a9 #3 [ffff880625b13b20] do_sys_poll at ffffffff811a0b47 #4 [ffff880625b13f40] sys_poll at ffffffff811a0e01 #5 [ffff880625b13f80] system_call_fastpath at ffffffff8100b072 RIP: 0000003bb0edf308 RSP: 00007fff130eaa60 RFLAGS: 00000246 RAX: 0000000000000007 RBX: ffffffff8100b072 RCX: 0000003bb0edf308 RDX: 00000000000001f4 RSI: 0000000000000004 RDI: 0000000000957420 RBP: 0000000000957420 R8: 0000000000000001 R9: 0000000000000000 R10: 0000000000000040 R11: 0000000000000246 R12: 0000000000000004 R13: 0000003bb2504360 R14: 0000003bb2250ba0 R15: 00000000009528e0 ORIG_RAX: 0000000000000007 CS: 0033 SS: 002b PID: 26974 TASK: ffff88062e0b3540 CPU: 0 COMMAND: "pengine" #0 [ffff88062e0b5998] schedule at ffffffff81528762 #1 [ffff88062e0b5a60] schedule_hrtimeout_range at ffffffff8152a248 #2 [ffff88062e0b5b00] poll_schedule_timeout at ffffffff811a03a9 #3 [ffff88062e0b5b20] do_sys_poll at ffffffff811a0b47 #4 [ffff88062e0b5f40] sys_poll at ffffffff811a0e01 #5 [ffff88062e0b5f80] system_call_fastpath at ffffffff8100b072 RIP: 0000003bb0edf308 RSP: 00007fff37fd3810 RFLAGS: 00000246 RAX: 0000000000000007 RBX: ffffffff8100b072 RCX: 000000000257b410 RDX: 00000000000001f4 RSI: 0000000000000001 RDI: 000000000257b290 RBP: 000000000257b290 R8: 0000000000000001 R9: 0000000000000000 R10: 0000000000000001 R11: 0000000000000246 R12: 0000000000000001 R13: 0000003bb2504360 R14: 0000003bb2250ba0 R15: 00000000025770d0 ORIG_RAX: 0000000000000007 CS: 0033 SS: 002b PID: 26975 TASK: ffff88062e0b2b00 CPU: 2 COMMAND: "crmd" #0 [ffff88062ebd7998] schedule at ffffffff81528762 #1 [ffff88062ebd7a60] schedule_hrtimeout_range at ffffffff8152a248 #2 [ffff88062ebd7b00] poll_schedule_timeout at ffffffff811a03a9 #3 [ffff88062ebd7b20] do_sys_poll at ffffffff811a0b47 #4 [ffff88062ebd7f40] sys_poll at ffffffff811a0e01 #5 [ffff88062ebd7f80] system_call_fastpath at ffffffff8100b072 RIP: 0000003bb0edf308 RSP: 00007fff79534a90 RFLAGS: 00000246 RAX: 0000000000000007 RBX: ffffffff8100b072 RCX: 000000000115c700 RDX: 00000000000001f4 RSI: 0000000000000006 RDI: 000000000115a330 RBP: 000000000115a330 R8: 0000000000000001 R9: 0000000000000000 R10: 0000000000000040 R11: 0000000000000246 R12: 0000000000000006 R13: 0000003bb2504360 R14: 0000003bb2250ba0 R15: 00000000011565a0 ORIG_RAX: 0000000000000007 CS: 0033 
SS: 002b PID: 26976 TASK: ffff8806399d8ac0 CPU: 6 COMMAND: "corosync" #0 [ffff88062ebf1b38] schedule at ffffffff81528762 #1 [ffff88062ebf1c00] futex_wait_queue_me at ffffffff810ae9a9 #2 [ffff88062ebf1c40] futex_wait at ffffffff810afab8 #3 [ffff88062ebf1dc0] do_futex at ffffffff810b1221 #4 [ffff88062ebf1ef0] sys_futex at ffffffff810b1cdb #5 [ffff88062ebf1f80] system_call_fastpath at ffffffff8100b072 RIP: 0000003bb120d930 RSP: 00007f836d13fa48 RFLAGS: 00010202 RAX: 00000000000000ca RBX: ffffffff8100b072 RCX: 0000000000000018 RDX: 0000000000000000 RSI: 0000000000000000 RDI: 00007f8375bd6010 RBP: 00000000010d3848 R8: 0000000000000000 R9: 0000000000006960 R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000003 R13: 00000000010d3848 R14: 00007f8375bd6010 R15: 00007f836d13fba0 ORIG_RAX: 00000000000000ca CS: 0033 SS: 002b PID: 26977 TASK: ffff88062dc20ac0 CPU: 3 COMMAND: "corosync" #0 [ffff88062ebd5b38] schedule at ffffffff81528762 #1 [ffff88062ebd5c00] futex_wait_queue_me at ffffffff810ae9a9 #2 [ffff88062ebd5c40] futex_wait at ffffffff810afab8 #3 [ffff88062ebd5dc0] do_futex at ffffffff810b1221 #4 [ffff88062ebd5ef0] sys_futex at ffffffff810b1cdb #5 [ffff88062ebd5f80] system_call_fastpath at ffffffff8100b072 RIP: 0000003bb120d930 RSP: 00007f836cd0cad8 RFLAGS: 00010202 RAX: 00000000000000ca RBX: ffffffff8100b072 RCX: 000000000000052a RDX: 0000000000000000 RSI: 0000000000000000 RDI: 00007f836d10e010 RBP: 00000000010d7ba8 R8: 0000000000000000 R9: 0000000000006961 R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000003 R13: 00000000010d7ba8 R14: 00007f836d10e010 R15: 00007f836cd0cba0 ORIG_RAX: 00000000000000ca CS: 0033 SS: 002b PID: 26978 TASK: ffff880639c93540 CPU: 6 COMMAND: "corosync" #0 [ffff88062ebe3b38] schedule at ffffffff81528762 #1 [ffff88062ebe3c00] futex_wait_queue_me at ffffffff810ae9a9 #2 [ffff88062ebe3c40] futex_wait at ffffffff810afab8 #3 [ffff88062ebe3dc0] do_futex at ffffffff810b1221 #4 [ffff88062ebe3ef0] sys_futex at ffffffff810b1cdb #5 [ffff88062ebe3f80] system_call_fastpath at ffffffff8100b072 RIP: 0000003bb120d930 RSP: 00007f836c8dba58 RFLAGS: 00000246 RAX: 00000000000000ca RBX: ffffffff8100b072 RCX: 0000000000000061 RDX: 0000000000000000 RSI: 0000000000000000 RDI: 00007f8375bd4010 RBP: 00000000010dca98 R8: 0000000000000000 R9: 0000000000006962 R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000003 R13: 00000000010dca98 R14: 00007f8375bd4010 R15: 00007f836c8dbba0 ORIG_RAX: 00000000000000ca CS: 0033 SS: 002b PID: 26980 TASK: ffff880628cd80c0 CPU: 6 COMMAND: "corosync" #0 [ffff88062276bb38] schedule at ffffffff81528762 #1 [ffff88062276bc00] futex_wait_queue_me at ffffffff810ae9a9 #2 [ffff88062276bc40] futex_wait at ffffffff810afab8 #3 [ffff88062276bdc0] do_futex at ffffffff810b1221 #4 [ffff88062276bef0] sys_futex at ffffffff810b1cdb #5 [ffff88062276bf80] system_call_fastpath at ffffffff8100b072 RIP: 0000003bb120d930 RSP: 00007f836c3a7ad8 RFLAGS: 00010202 RAX: 00000000000000ca RBX: ffffffff8100b072 RCX: 000000000000052a RDX: 0000000000000000 RSI: 0000000000000000 RDI: 00007f836c7a9010 RBP: 00000000010e2088 R8: 0000000000000000 R9: 0000000000006964 R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000003 R13: 00000000010e2088 R14: 00007f836c7a9010 R15: 00007f836c3a7ba0 ORIG_RAX: 00000000000000ca CS: 0033 SS: 002b crash> crash> crash> ps | grep mount 4755 1 1 ffff880632a2eb00 IN 0.0 21656 1004 rpc.mountd 12955 5957 6 ffff88062bc634c0 IN 0.0 111740 864 mount 12956 5957 3 ffff880639c08040 IN 0.0 111740 868 mount > 12957 12955 2 ffff88063a70d540 RU 
0.0 4140 784 mount.lustre 12958 12956 6 ffff880333816b00 UN 0.0 4140 784 mount.lustre crash> bt -f 12957 PID: 12957 TASK: ffff88063a70d540 CPU: 2 COMMAND: "mount.lustre" #0 [ffff88062964bac0] machine_kexec at ffffffff8103915b ffff88062964bac8: 0000000003091000 ffff880003091000 ffff88062964bad8: 0000000003090000 0000000000000000 ffff88062964bae8: 8800000000000000 ffff88032cebffff ffff88062964baf8: 0000000000000000 ffff88062964bb28 ffff88062964bb08: ffff88032cebbc00 ffff8803250f00b8 ffff88062964bb18: ffff88062964bbe8 ffffffff810c5e62 #1 [ffff88062964bb20] crash_kexec at ffffffff810c5e62 ffff88062964bb28: ffff8803251bc434 ffff8803250f00b8 ffff88062964bb38: ffff88032cebbc00 00000000ffffffed ffff88062964bb48: ffff88062964bbe8 0000000000000000 ffff88062964bb58: 0000000000000000 0000000000000001 ffff88062964bb68: ffffffff81645ba0 0000000000000000 ffff88062964bb78: 0000000000000001 0000000000002af6 ffff88062964bb88: 0000000000000000 0000000000000001 ffff88062964bb98: 0000000000000002 ffffffff81010f15 ffff88062964bba8: ffffffff810c5eef 0000000000000010 ffff88062964bbb8: 0000000000000046 ffff88062964bb28 ffff88062964bbc8: 0000000000000018 0000000000000004 ffff88062964bbd8: ffffffffa07bf599 00000000ffffffed ffff88062964bbe8: ffff88062964bc68 ffffffff815280aa #2 [ffff88062964bbf0] panic at ffffffff815280aa ffff88062964bbf8: ffffffffa07cc260 ffffffffa07bf599 ffff88062964bc08: ffffffff00000008 ffff88062964bc78 ffff88062964bc18: ffff88062964bc28 0000000000000000 ffff88062964bc28: 0000000031353631 0000000000000000 ffff88062964bc38: ffffffffa07be141 0000000000000000 ffff88062964bc48: 0000000000000073 ffffffffa0a40aa0 ffff88062964bc58: ffffffffa0a6f480 ffffffffa0a6f480 ffff88062964bc68: ffff88062964bc88 ffffffffa079ceeb #3 [ffff88062964bc70] lbug_with_loc at ffffffffa079ceeb [libcfs] ffff88062964bc78: 0000000000000000 ffff8803251bc400 ffff88062964bc88: ffff88062964bd68 ffffffffa0a2d913 #4 [ffff88062964bc90] server_fill_super at ffffffffa0a2d913 [obdclass] ffff88062964bc98: ffff88062964bd38 ffff880327284e00 ffff88062964bca8: ffff880327284e00 ffff88062964bd18 ffff88062964bcb8: ffff880333963b40 ffff8803251bc474 ffff88062964bcc8: 0000000000000073 00000000fffffffe ffff88062964bcd8: ffffffffa0a654d0 ffff88032cebbc00 ffff88062964bce8: ffff88062964bde8 ffff880333963b40 ffff88062964bcf8: ffff88062964bde8 ffff88032cebbc00 ffff88062964bd08: ffff88062964bd68 00000004a07ad2d1 ffff88062964bd18: ffff880300303a30 ffff88062964bd78 ffff88062964bd28: ffff88062964bd38 00000000f4157f8a ffff88062964bd38: ffff88062964bd68 ffff88032cebbc00 ffff88062964bd48: ffff88062964bde8 ffff880333963b40 ffff88062964bd58: ffff88062964bde8 ffff88032cebbc00 ffff88062964bd68: ffff88062964bd98 ffffffffa09fd998 #5 [ffff88062964bd70] lustre_fill_super at ffffffffa09fd998 [obdclass] ffff88062964bd78: 0000000000000000 ffff88062964bde8 ffff88062964bd88: ffffffffa09fd7c0 ffff88032c4f8e80 ffff88062964bd98: ffff88062964bdd8 ffffffff8118c7ff #6 [ffff88062964bda0] get_sb_nodev at ffffffff8118c7ff ffff88062964bda8: ffff880327284960 ffff88032c4f8e80 ffff88062964bdb8: ffffffffa0a654a0 0000000000000000 ffff88062964bdc8: ffff880327284960 0000000000000000 ffff88062964bdd8: ffff88062964bdf8 ffffffffa09f5175 #7 [ffff88062964bde0] lustre_get_sb at ffffffffa09f5175 [obdclass] ffff88062964bde8: ffff880339375000 ffff88032c4f8e80 ffff88062964bdf8: ffff88062964be48 ffffffff8118be5b #8 [ffff88062964be00] vfs_kern_mount at ffffffff8118be5b ffff88062964be08: ffff88032e480960 ffff880339375000 ffff88062964be18: ffff88062964be48 ffffffffa0a654a0 ffff88062964be28: 
ffff88032e480960 0000000000000000 ffff88062964be38: ffff880327284960 ffff880339375000 ffff88062964be48: ffff88062964be98 ffffffff8118c002 #9 [ffff88062964be50] do_kern_mount at ffffffff8118c002 ffff88062964be58: ffffffff81aaac00 0000000000000286 ffff88062964be68: ffff88062964be78 0000000001000000 ffff88062964be78: 0000000000000000 ffff880339375000 ffff88062964be88: ffff880327284960 ffff88032e480960 ffff88062964be98: ffff88062964bf18 ffffffff811ad00b #10 [ffff88062964bea0] do_mount at ffffffff811ad00b ffff88062964bea8: ffff880600000000 00000000811680fa ffff88062964beb8: ffff88062964bf30 00000000025b6190 ffff88062964bec8: 0000000001000000 00000000025b6190 ffff88062964bed8: ffff88033b71a080 ffff880337c92140 ffff88062964bee8: ffff88062964bf18 ffff880333d46000 ffff88062964bef8: 00000000025b6170 0000000001000000 ffff88062964bf08: 00000000025b6190 0000000000000000 ffff88062964bf18: ffff88062964bf78 ffffffff811ad6d0 #11 [ffff88062964bf20] sys_mount at ffffffff811ad6d0 ffff88062964bf28: 0000000000000000 ffff880339375000 ffff88062964bf38: ffff880327284960 ffff88032e480960 ffff88062964bf48: 0000000000000000 00007fff8644054f ffff88062964bf58: 0000000000000000 00000000025b6190 ffff88062964bf68: 000000000060db10 000000000060db18 ffff88062964bf78: 0000000000000000 ffffffff8100b072 #12 [ffff88062964bf80] system_call_fastpath at ffffffff8100b072 RIP: 0000003bb0ee92fa RSP: 00007fff8643e448 RFLAGS: 00010206 RAX: 00000000000000a5 RBX: ffffffff8100b072 RCX: 0000000001000000 RDX: 0000000000408b5f RSI: 00007fff864414b8 RDI: 00000000025b6170 RBP: 0000000000000000 R8: 00000000025b6190 R9: 0000000000000000 R10: 0000000001000000 R11: 0000000000000206 R12: 000000000060db18 R13: 000000000060db10 R14: 00000000025b6190 R15: 0000000000000000 ORIG_RAX: 00000000000000a5 CS: 0033 SS: 002b crash> bt -f 12958 PID: 12958 TASK: ffff880333816b00 CPU: 6 COMMAND: "mount.lustre" #0 [ffff8803289d13c8] schedule at ffffffff81528762 ffff8803289d13d0: 0000000000000082 ffff8803289d13e8 ffff8803289d13e0: 0000000000000082 ffff8803289d1458 ffff8803289d13f0: ffffffff8105e7ac ffff8803289d1438 ffff8803289d1400: 0000000000000286 ffff8803338170a0 ffff8803289d1410: ffff8803289d1fd8 000000000000fc08 ffff8803289d1420: ffff8803338170a0 ffff88063b715500 ffff8803289d1430: ffff880333816b00 ffff8803289d1478 ffff8803289d1440: ffffffff810a6871 ffff8803289d1538 ffff8803289d1450: 0000000000016cc0 ffff88034ac40000 ffff8803289d1460: ffff880333816b00 ffff88034ac56cc0 ffff8803289d1470: ffff8803289d1528 0000000000000002 ffff8803289d1480: ffffffff8111f9f0 ffff8803289d14a8 ffff8803289d1490: ffffffff81528f43 #1 [ffff8803289d1490] io_schedule at ffffffff81528f43 ffff8803289d1498: ffff8803289d1518 ffff88034aa0afc8 ffff8803289d14a8: ffff8803289d14b8 ffffffff8111fa2d #2 [ffff8803289d14b0] sync_page at ffffffff8111fa2d ffff8803289d14b8: ffff8803289d1508 ffffffff815297da #3 [ffff8803289d14c0] __wait_on_bit_lock at ffffffff815297da ffff8803289d14c8: 00000000000024e3 000000000000000e ffff8803289d14d8: ffff8803289d15a8 ffff8803289d1518 ffff8803289d14e8: ffffffffffffffff ffff88033b395338 ffff8803289d14f8: ffff8803289d15a8 0000000000000000 ffff8803289d1508: ffff8803289d1568 ffffffff8111f9c7 #4 [ffff8803289d1510] __lock_page at ffffffff8111f9c7 ffff8803289d1518: ffffea0015834f18 0000000000000000 ffff8803289d1528: 0000000000000001 ffff880333816b00 ffff8803289d1538: ffffffff8109a2e0 ffff88034aa0afd0 ffff8803289d1548: ffff88062d5c7c80 ffffffff81135d02 ffff8803289d1558: 00000000000024e2 00000000000024e3 ffff8803289d1568: ffff8803289d1658 ffffffff81137bc3 #5 [ffff8803289d1570] 
truncate_inode_pages_range at ffffffff81137bc3 ffff8803289d1578: ffffea0015834f18 ffffffff00000002 ffff8803289d1588: ffff8803289d15d0 ffff8803289d15c0 ffff8803289d1598: ffff8803289d15c0 ffff880300000000 ffff8803289d15a8: 0000000000000001 0000000000000000 ffff8803289d15b8: ffffea0015834f18 ffffea001590c580 ffff8803289d15c8: ffffea001578f8a8 ffffea001587cbc0 ffff8803289d15d8: ffffea001585c908 ffffea00157f9b78 ffff8803289d15e8: ffffea0015871010 ffffea00157da8b8 ffff8803289d15f8: ffffea00157a08c8 ffffea0015c56960 ffff8803289d1608: ffffea00158dc968 ffffea0015746738 ffff8803289d1618: ffffea00157462a0 ffffea001593b4e8 ffff8803289d1628: ffff8803289d1658 ffff88033b395140 ffff8803289d1638: ffff880333a08c00 ffff880339e47000 ffff8803289d1648: 0000000000146578 0000000000000001 ffff8803289d1658: ffff8803289d1668 ffffffff81137c75 #6 [ffff8803289d1660] truncate_inode_pages at ffffffff81137c75 ffff8803289d1668: ffff8803289d1688 ffffffff811c50ea #7 [ffff8803289d1670] kill_bdev at ffffffff811c50ea ffff8803289d1678: ffff8803289d1688 ffff88033b395140 ffff8803289d1688: ffff8803289d16b8 ffffffff811c6bae #8 [ffff8803289d1690] set_blocksize at ffffffff811c6bae ffff8803289d1698: ffff8803289d16e8 ffffffff00000400 ffff8803289d16a8: ffffffff817a211e 0000000000000400 ffff8803289d16b8: ffff8803289d16d8 ffffffff811c6bdd #9 [ffff8803289d16c0] sb_set_blocksize at ffffffff811c6bdd ffff8803289d16c8: ffff880333a08c00 ffff880328450c00 ffff8803289d16d8: ffff8803289d16e8 ffffffff811c6c61 #10 [ffff8803289d16e0] sb_min_blocksize at ffffffff811c6c61 ffff8803289d16e8: ffff8803289d1808 ffffffffa057a37a #11 [ffff8803289d16f0] ldiskfs_fill_super at ffffffffa057a37a [ldiskfs] ffff8803289d16f8: 00000002000000fd 289d183800000004 ffff8803289d1708: 0000000000000020 ffff8803289d1838 ffff8803289d1718: 0000000000000004 0000000affffffff ffff8803289d1728: ffffffffffffffff ffff88030000000e ffff8803289d1738: ffff8803289d1758 ffff8803289d1838 ffff8803289d1748: ffff880339a2e800 0000000000000000 ffff8803289d1758: ffff88033b3951f8 ffff880300000000 ffff8803289d1768: ffff8803289d17c8 ffffffff8128cdc4 ffff8803289d1778: ffff880333a08e70 ffff8803289d17d8 ffff8803289d1788: ffff8803289d1798 ffff880339e47000 ffff8803289d1798: 0000000000000000 0000400333a08c00 ffff8803289d17a8: ffff8803289d1808 ffff880339a2e80c ffff8803289d17b8: ffffffffa0594030 ffffe8ffffd618b4 ffff8803289d17c8: ffff8803289d17f8 000000009d4b73a1 ffff8803289d17d8: 0000000000000000 0000000000000000 ffff8803289d17e8: 0000000000000003 ffff88033b395140 ffff8803289d17f8: ffff88033b3951f8 ffff880333a08c00 ffff8803289d1808: ffff8803289d1898 ffffffff8118c9ce #12 [ffff8803289d1810] get_sb_bdev at ffffffff8118c9ce ffff8803289d1818: ffff880000045600 ffffffffa057a1f0 ffff8803289d1828: ffff880339e47000 ffff88033b71a480 ffff8803289d1838: 00000000322d6d64 ffffffff81ac4ee0 ffff8803289d1848: 0000000000000000 ffffea000b4a9f88 ffff8803289d1858: ffff8803289d1888 000000009d4b73a1 ffff8803289d1868: ffff88033b71a480 ffff88033b71a480 ffff8803289d1878: ffffffffa0594000 0000000000000000 ffff8803289d1888: ffff880333d2ac48 ffff8803321a5000 ffff8803289d1898: ffff8803289d18a8 ffffffffa0575018 #13 [ffff8803289d18a0] ldiskfs_get_sb at ffffffffa0575018 [ldiskfs] ffff8803289d18a8: ffff8803289d18f8 ffffffff8118be5b #14 [ffff8803289d18b0] vfs_kern_mount at ffffffff8118be5b ffff8803289d18b8: ffff880333d2ac00 ffff880339e47000 ffff8803289d18c8: ffff8803289d18f8 ffff88032dbd2000 ffff8803289d18d8: ffffffffa0594000 ffff880333d2ac48 ffff8803289d18e8: ffff880333d2ac38 ffffea000b4a9f88 ffff8803289d18f8: ffff8803289d1958 ffffffffa05c980b 
#15 [ffff8803289d1900] osd_mount at ffffffffa05c980b [osd_ldiskfs] ffff8803289d1908: ffff8803289d1938 ffff880339e47000 ffff8803289d1918: 0000000000000000 0000000000000000 ffff8803289d1928: ffff880333d2ac00 ffff88032dbd2000 ffff8803289d1938: ffff8803289d19c8 0000000000000000 ffff8803289d1948: ffff880333d2ac00 ffff8803289d19c8 ffff8803289d1958: ffff8803289d19a8 ffffffffa05ca8c2 #16 [ffff8803289d1960] osd_device_alloc at ffffffffa05ca8c2 [osd_ldiskfs] ffff8803289d1968: ffffffffa0604800 ffff8803289d19c8 ffff8803289d1978: ffff8803289d19a8 ffff8803289d1a08 ffff8803289d1988: ffff8803250f00b8 0000000000000000 ffff8803289d1998: ffffffffa0604800 ffff8803289d19c8 ffff8803289d19a8: ffff8803289d1a68 ffffffffa09ea867 #17 [ffff8803289d19b0] obd_setup at ffffffffa09ea867 [obdclass] ffff8803289d19b8: ffff880333d44000 ffff880333d2ac00 ffff8803289d19c8: 0000000210000080 0000000000000000 ffff8803289d19d8: ffff880334ba4a00 ffff8803289d19e0 ffff8803289d19e8: ffff8803289d19e0 000000000000002b ffff8803289d19f8: ffff8803289d1a08 ffff88032b412c00 ffff8803289d1a08: 0000000200000010 0000000000000000 ffff8803289d1a18: ffff880334ba4c00 ffff8803289d1a20 ffff8803289d1a28: ffff8803289d1a20 000000000000002b ffff8803289d1a38: ffff88032e480880 ffff8803250f00b8 ffff8803289d1a48: ffff8803250f0218 ffff880333d2ac00 ffff8803289d1a58: ffff8803250f0144 ffff880333d2ac20 ffff8803289d1a68: ffff8803289d1ab8 ffffffffa09eab78 #18 [ffff8803289d1a70] class_setup at ffffffffa09eab78 [obdclass] ffff8803289d1a78: ffff880300000800 ffffffffa0a64b60 ffff8803289d1a88: ffff880300000284 ffff880333d2ac00 ffff8803289d1a98: ffff880333d2ac00 ffff8803250f00b8 ffff8803289d1aa8: ffff8803289d1d18 ffff88032e4808c0 ffff8803289d1ab8: ffff8803289d1b48 ffffffffa09f208c #19 [ffff8803289d1ac0] class_process_config at ffffffffa09f208c [obdclass] ffff8803289d1ac8: 00000000000cf003 ffff88033a7e4c34 ffff8803289d1ad8: ffff8803289d1b08 ffff8803289d1b88 ffff8803289d1ae8: ffff8803289d1bc8 ffff880333d2ac00 ffff8803289d1af8: 00000000000cf003 0000000000000005 ffff8803289d1b08: ffff8803289d1b48 ffffffffa09f7143 ffff8803289d1b18: 0000000000000068 ffff880333d2ac00 ffff8803289d1b28: ffff88032e480880 ffff8803289d1d18 ffff8803289d1b38: ffff88032e4808c0 ffff880333d2ac20 ffff8803289d1b48: ffff8803289d1c28 ffffffffa09f7719 #20 [ffff8803289d1b50] do_lcfg at ffffffffa09f7719 [obdclass] ffff8803289d1b58: ffff8803289d1bf8 ffff8803289d1d1a ffff8803289d1b68: ffffffffa0a47404 ffff8803289d1b88 ffff8803289d1b78: 0000000000000000 000cf003a89d1d17 ffff8803289d1b88: ffff88033a7e4c74 ffff88032e4808c0 ffff8803289d1b98: ffff8803289d1d18 ffff88032e480880 ffff8803289d1ba8: ffff88033a7e4c34 0000000000000000 ffff8803289d1bb8: 0000000000000000 0000000000000000 ffff8803289d1bc8: 0000001300000010 0000001f00000004 ffff8803289d1bd8: 000000000000000c 0000000000000000 ffff8803289d1be8: 0000000000000005 0000000000000000 ffff8803289d1bf8: ffff8803289d1c38 0000000000000000 ffff8803289d1c08: ffff88033a7e4c74 ffff88033a7e4cb4 ffff8803289d1c18: ffff88032e480880 ffff8803289d1d18 ffff8803289d1c28: ffff8803289d1c88 ffffffffa09f7ae4 #21 [ffff8803289d1c30] lustre_start_simple at ffffffffa09f7ae4 [obdclass] ffff8803289d1c38: ffff88033a7e4c34 ffff8803289d1c98 ffff8803289d1c48: ffff8803289d1c78 ffff88032e4808c0 ffff8803289d1c58: 0000329e00000000 ffff88033a7e4c00 ffff8803289d1c68: ffff88033a7e4cb4 ffff88032ef65c00 ffff8803289d1c78: 0000000000000000 ffff88033a7e4c34 ffff8803289d1c88: ffff8803289d1d68 ffffffffa0a2cfbd #22 [ffff8803289d1c90] server_fill_super at ffffffffa0a2cfbd [obdclass] ffff8803289d1c98: ffff88033a7e4c34 
ffff88032e480880 ffff8803289d1ca8: ffff88032e480880 ffff8803289d1d18 ffff8803289d1cb8: ffff880333963bc0 ffff88033a7e4c74 ffff8803289d1cc8: 0000000000000073 00000000fffffffe ffff8803289d1cd8: ffffffffa0a654d0 ffff88032ef65c00 ffff8803289d1ce8: ffff8803289d1de8 ffff880333963bc0 ffff8803289d1cf8: ffff8803289d1de8 ffff88032ef65c00 ffff8803289d1d08: ffff8803289d1d68 00000004a07ad2d1 ffff8803289d1d18: ffff880300303a30 ffff8803289d1d78 ffff8803289d1d28: ffff8803289d1d38 000000009d4b73a1 ffff8803289d1d38: ffff8803289d1d68 ffff88032ef65c00 ffff8803289d1d48: ffff8803289d1de8 ffff880333963bc0 ffff8803289d1d58: ffff8803289d1de8 ffff88032ef65c00 ffff8803289d1d68: ffff8803289d1d98 ffffffffa09fd998 #23 [ffff8803289d1d70] lustre_fill_super at ffffffffa09fd998 [obdclass] ffff8803289d1d78: 0000000000000000 ffff8803289d1de8 ffff8803289d1d88: ffffffffa09fd7c0 ffff88033b71a780 ffff8803289d1d98: ffff8803289d1dd8 ffffffff8118c7ff #24 [ffff8803289d1da0] get_sb_nodev at ffffffff8118c7ff ffff8803289d1da8: ffff88032e480920 ffff88033b71a780 ffff8803289d1db8: ffffffffa0a654a0 0000000000000000 ffff8803289d1dc8: ffff88032e480920 0000000000000000 ffff8803289d1dd8: ffff8803289d1df8 ffffffffa09f5175 #25 [ffff8803289d1de0] lustre_get_sb at ffffffffa09f5175 [obdclass] ffff8803289d1de8: ffff88032568b000 ffff88033b71a780 ffff8803289d1df8: ffff8803289d1e48 ffffffff8118be5b #26 [ffff8803289d1e00] vfs_kern_mount at ffffffff8118be5b ffff8803289d1e08: ffff88032e480940 ffff88032568b000 ffff8803289d1e18: ffff8803289d1e48 ffffffffa0a654a0 ffff8803289d1e28: ffff88032e480940 0000000000000000 ffff8803289d1e38: ffff88032e480920 ffff88032568b000 ffff8803289d1e48: ffff8803289d1e98 ffffffff8118c002 #27 [ffff8803289d1e50] do_kern_mount at ffffffff8118c002 ffff8803289d1e58: ffffffff81aaac00 0000000000000286 ffff8803289d1e68: ffff8803289d1e78 0000000001000000 ffff8803289d1e78: 0000000000000000 ffff88032568b000 ffff8803289d1e88: ffff88032e480920 ffff88032e480940 ffff8803289d1e98: ffff8803289d1f18 ffffffff811ad00b #28 [ffff8803289d1ea0] do_mount at ffffffff811ad00b ffff8803289d1ea8: ffff880300000000 00000000811680fa ffff8803289d1eb8: ffff8803289d1f30 000000000227d190 ffff8803289d1ec8: 0000000001000000 000000000227d190 ffff8803289d1ed8: ffff88033b71a080 ffff880337c92140 ffff8803289d1ee8: ffff8803289d1f18 ffff8803342ed000 ffff8803289d1ef8: 000000000227d170 0000000001000000 ffff8803289d1f08: 000000000227d190 0000000000000000 ffff8803289d1f18: ffff8803289d1f78 ffffffff811ad6d0 #29 [ffff8803289d1f20] sys_mount at ffffffff811ad6d0 ffff8803289d1f28: 0000000000000000 ffff88032568b000 ffff8803289d1f38: ffff88032e480920 ffff88032e480940 ffff8803289d1f48: 0000000000000000 00007fffa659570f ffff8803289d1f58: 0000000000000000 000000000227d190 ffff8803289d1f68: 000000000060db10 000000000060db18 ffff8803289d1f78: 0000000000000000 ffffffff8100b072 #30 [ffff8803289d1f80] system_call_fastpath at ffffffff8100b072 RIP: 0000003bb0ee92fa RSP: 00007fffa6593608 RFLAGS: 00010206 RAX: 00000000000000a5 RBX: ffffffff8100b072 RCX: 0000000001000000 RDX: 0000000000408b5f RSI: 00007fffa6596678 RDI: 000000000227d170 RBP: 0000000000000000 R8: 000000000227d190 R9: 0000000000000000 R10: 0000000001000000 R11: 0000000000000206 R12: 000000000060db18 R13: 000000000060db10 R14: 000000000227d190 R15: 0000000000000000 ORIG_RAX: 00000000000000a5 CS: 0033 SS: 002b crash> dmesg <6>Initializing cgroup subsys cpuset <6>Initializing cgroup subsys cpu <5>Linux version 2.6.32-431.1.2.el6.Bull.44.x86_64 (hpcdelivery@atlas.frec.bull.fr) (gcc version 4.4.7 20120313 (Red Hat 4.4.7-3) (GCC) ) #1 SMP 
Tue Jan 21 01:58:34 CET 2014 <6>Command line: ro root=UUID=e1ccaa08-a3cc-4179-9304-b5739591216f rd_NO_LUKS KEYBOARDTYPE=pc KEYTABLE=us LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=512M-96G:128M,96G-:320M console=tty0 console=ttyS1,115200 rdloaddriver=megaraid_sas rd_NO_LVM rd_NO_DM nmi_watchdog=0 rdblacklist=mpt2sas log_buf_len=2M rdblacklist=lpfc rdblacklist=nouveau pciehp.pciehp_disable selinux=0 transparent_hugepage=never rdblacklist=dm_mod <6>KERNEL supported cpus: <6> Intel GenuineIntel <6> AMD AuthenticAMD <6> Centaur CentaurHauls <6>BIOS-provided physical RAM map: <6> BIOS-e820: 0000000000000000 - 000000000009a800 (usable) <6> BIOS-e820: 000000000009a800 - 00000000000a0000 (reserved) <6> BIOS-e820: 00000000000e6000 - 0000000000100000 (reserved) <6> BIOS-e820: 0000000000100000 - 00000000bf760000 (usable) <6> BIOS-e820: 00000000bf76e000 - 00000000bf770000 type 9 <6> BIOS-e820: 00000000bf770000 - 00000000bf77e000 (ACPI data) <6> BIOS-e820: 00000000bf77e000 - 00000000bf7d0000 (ACPI NVS) <6> BIOS-e820: 00000000bf7d0000 - 00000000bf7e0000 (reserved) <6> BIOS-e820: 00000000bf7ec000 - 00000000c0000000 (reserved) <6> BIOS-e820: 00000000e0000000 - 00000000f0000000 (reserved) <6> BIOS-e820: 00000000fee00000 - 00000000fee01000 (reserved) <6> BIOS-e820: 00000000ffc00000 - 0000000100000000 (reserved) <6> BIOS-e820: 0000000100000000 - 0000000640000000 (usable) <6>DMI present. <4>SMBIOS version 2.6 @ 0xFAE70 <7>DMI: Bull SAS bullx/X8DTT, BIOS R4222X80 05/20/2010 <5>AMI BIOS detected: BIOS may corrupt low RAM, working around it. <7>e820 update range: 0000000000000000 - 0000000000010000 (usable) ==> (reserved) <7>e820 update range: 0000000000000000 - 0000000000001000 (usable) ==> (reserved) <7>e820 remove range: 00000000000a0000 - 0000000000100000 (usable) <6>last_pfn = 0x640000 max_arch_pfn = 0x400000000 <7>MTRR default type: uncachable <7>MTRR fixed ranges enabled: <7> 00000-9FFFF write-back <7> A0000-BFFFF uncachable <7> C0000-CFFFF write-protect <7> D0000-DFFFF uncachable <7> E0000-E3FFF write-protect <7> E4000-EBFFF write-through <7> EC000-FFFFF write-protect <7>MTRR variable ranges enabled: <7> 0 base 0000000000 mask FC00000000 write-back <7> 1 base 0400000000 mask FE00000000 write-back <7> 2 base 0600000000 mask FFC0000000 write-back <7> 3 base 00C0000000 mask FFC0000000 uncachable <7> 4 base 00BF800000 mask FFFF800000 uncachable <7> 5 disabled <7> 6 disabled <7> 7 disabled <6>x86 PAT enabled: cpu 0, old 0x7040600070406, new 0x7010600070106 <7>original variable MTRRs <7>reg 0, base: 0GB, range: 16GB, type WB <7>reg 1, base: 16GB, range: 8GB, type WB <7>reg 2, base: 24GB, range: 1GB, type WB <7>reg 3, base: 3GB, range: 1GB, type UC <7>reg 4, base: 3064MB, range: 8MB, type UC <6>total RAM covered: 24568M <6>Found optimal setting for mtrr clean up <6> gran_size: 64K chunk_size: 16M num_reg: 7 lose cover RAM: 0G <7>New variable MTRRs <7>reg 0, base: 0GB, range: 2GB, type WB <7>reg 1, base: 2GB, range: 1GB, type WB <7>reg 2, base: 3064MB, range: 8MB, type UC <7>reg 3, base: 4GB, range: 4GB, type WB <7>reg 4, base: 8GB, range: 8GB, type WB <7>reg 5, base: 16GB, range: 8GB, type WB <7>reg 6, base: 24GB, range: 1GB, type WB <7>e820 update range: 00000000bf800000 - 0000000100000000 (usable) ==> (reserved) <6>last_pfn = 0xbf760 max_arch_pfn = 0x400000000 <7>initial memory mapped : 0 - 20000000 <6>init_memory_mapping: 0000000000000000-00000000bf760000 <7> 0000000000 - 00bf600000 page 2M <7> 00bf600000 - 00bf760000 page 4k <7>kernel direct mapping tables up to bf760000 @ 10000-15000 
<7>Use unified mapping for non-reserved e820 regions. <6>init_memory_mapping: 0000000100000000-0000000640000000 <7> 0100000000 - 0640000000 page 2M <7>kernel direct mapping tables up to 640000000 @ 13000-29000 <6>log_buf_len: 2097152 <6>early log buf free: 520287(99%) <6>RAMDISK: 37aba000 - 37fef8b6 <4>ACPI: RSDP 00000000000f9f60 00024 (v02 ACPIAM) <4>ACPI: XSDT 00000000bf770100 0008C (v01 052010 XSDT1538 20100520 MSFT 00000097) <4>ACPI: FACP 00000000bf770290 000F4 (v04 052010 FACP1538 20100520 MSFT 00000097) <4>ACPI: DSDT 00000000bf770520 059D7 (v02 10007 10007000 00000000 INTL 20051117) <4>ACPI: FACS 00000000bf77e000 00040 <4>ACPI: APIC 00000000bf770390 00112 (v02 052010 APIC1538 20100520 MSFT 00000097) <4>ACPI: MCFG 00000000bf7704b0 0003C (v01 052010 OEMMCFG 20100520 MSFT 00000097) <4>ACPI: SLIT 00000000bf7704f0 00030 (v01 052010 OEMSLIT 20100520 MSFT 00000097) <4>ACPI: OEMB 00000000bf77e040 00082 (v01 052010 OEMB1538 20100520 MSFT 00000097) <4>ACPI: SRAT 00000000bf77a520 00150 (v02 052010 OEMSRAT 00000001 INTL 00000001) <4>ACPI: HPET 00000000bf77a670 00038 (v01 052010 OEMHPET 20100520 MSFT 00000097) <4>ACPI: DMAR 00000000bf77e0d0 00118 (v01 AMI OEMDMAR 00000001 MSFT 00000097) <4>ACPI: SSDT 00000000bf780690 012C9 (v01 DpgPmm CpuPm 00000012 INTL 20051117) <4>ACPI: EINJ 00000000bf77a6b0 00130 (v01 AMIER AMI_EINJ 20100520 MSFT 00000097) <4>ACPI: BERT 00000000bf77a840 00030 (v01 AMIER AMI_BERT 20100520 MSFT 00000097) <4>ACPI: ERST 00000000bf77a870 001B0 (v01 AMIER AMI_ERST 20100520 MSFT 00000097) <4>ACPI: HEST 00000000bf77aa20 000A8 (v01 AMIER ABC_HEST 20100520 MSFT 00000097) <7>ACPI: Local APIC address 0xfee00000 <6>Setting APIC routing to flat. <6>SRAT: PXM 0 -> APIC 0 -> Node 0 <6>SRAT: PXM 0 -> APIC 2 -> Node 0 <6>SRAT: PXM 0 -> APIC 4 -> Node 0 <6>SRAT: PXM 0 -> APIC 6 -> Node 0 <6>SRAT: PXM 1 -> APIC 16 -> Node 1 <6>SRAT: PXM 1 -> APIC 18 -> Node 1 <6>SRAT: PXM 1 -> APIC 20 -> Node 1 <6>SRAT: PXM 1 -> APIC 22 -> Node 1 <6>SRAT: Node 0 PXM 0 0-a0000 <6>SRAT: Node 0 PXM 0 100000-c0000000 <6>SRAT: Node 0 PXM 0 100000000-340000000 <6>SRAT: Node 1 PXM 1 340000000-640000000 <7>NUMA: Allocated memnodemap from 28040 - 34880 <7>NUMA: Using 20 for the hash shift. 
<6>Bootmem setup node 0 0000000000000000-0000000340000000 <6> NODE_DATA [0000000000034880 - 000000000006887f] <6> bootmap [0000000000100000 - 0000000000167fff] pages 68 <6>(11 early reservations) ==> bootmem [0000000000 - 0340000000] <6> #0 [0000000000 - 0000001000] BIOS data page ==> [0000000000 - 0000001000] <6> #1 [0000006000 - 0000008000] TRAMPOLINE ==> [0000006000 - 0000008000] <6> #2 [0001000000 - 00020226a4] TEXT DATA BSS ==> [0001000000 - 00020226a4] <6> #3 [0037aba000 - 0037fef8b6] RAMDISK ==> [0037aba000 - 0037fef8b6] <6> #4 [0000085c00 - 0000100000] BIOS reserved ==> [0000085c00 - 0000100000] <6> #5 [0002023000 - 00020231c0] BRK ==> [0002023000 - 00020231c0] <6> #6 [0000010000 - 0000013000] PGTABLE ==> [0000010000 - 0000013000] <6> #7 [0000013000 - 0000028000] PGTABLE ==> [0000013000 - 0000028000] <6> #8 [00bf560000 - 00bf760000] LOG BUF ==> [00bf560000 - 00bf760000] <6> #9 [0000028000 - 0000028030] ACPI SLIT ==> [0000028000 - 0000028030] <6> #10 [0000028040 - 0000034880] MEMNODEMAP ==> [0000028040 - 0000034880] <6>Bootmem setup node 1 0000000340000000-0000000640000000 <6> NODE_DATA [0000000340000040 - 000000034003403f] <6> bootmap [0000000340035000 - 0000000340094fff] pages 60 <6>(11 early reservations) ==> bootmem [0340000000 - 0640000000] <6> #0 [0000000000 - 0000001000] BIOS data page <6> #1 [0000006000 - 0000008000] TRAMPOLINE <6> #2 [0001000000 - 00020226a4] TEXT DATA BSS <6> #3 [0037aba000 - 0037fef8b6] RAMDISK <6> #4 [0000085c00 - 0000100000] BIOS reserved <6> #5 [0002023000 - 00020231c0] BRK <6> #6 [0000010000 - 0000013000] PGTABLE <6> #7 [0000013000 - 0000028000] PGTABLE <6> #8 [00bf560000 - 00bf760000] LOG BUF <6> #9 [0000028000 - 0000028030] ACPI SLIT <6> #10 [0000028040 - 0000034880] MEMNODEMAP <6>found SMP MP-table at [ffff8800000ff780] ff780 <6>Reserving 128MB of memory at 48MB for crashkernel (System RAM: 25600MB) <7> [ffffea0000000000-ffffea000b5fffff] PMD -> [ffff880028600000-ffff880032dfffff] on node 0 <7> [ffffea000b600000-ffffea0015dfffff] PMD -> [ffff880340200000-ffff88034a9fffff] on node 1 <4>Zone PFN ranges: <4> DMA 0x00000010 -> 0x00001000 <4> DMA32 0x00001000 -> 0x00100000 <4> Normal 0x00100000 -> 0x00640000 <4>Movable zone start PFN for each node <4>early_node_map[4] active PFN ranges <4> 0: 0x00000010 -> 0x0000009a <4> 0: 0x00000100 -> 0x000bf760 <4> 0: 0x00100000 -> 0x00340000 <4> 1: 0x00340000 -> 0x00640000 <7>On node 0 totalpages: 3143402 <7> DMA zone: 56 pages used for memmap <7> DMA zone: 161 pages reserved <7> DMA zone: 3761 pages, LIFO batch:0 <7> DMA32 zone: 14280 pages used for memmap <7> DMA32 zone: 765848 pages, LIFO batch:31 <7> Normal zone: 32256 pages used for memmap <7> Normal zone: 2327040 pages, LIFO batch:31 <7>On node 1 totalpages: 3145728 <7> Normal zone: 43008 pages used for memmap <7> Normal zone: 3102720 pages, LIFO batch:31 <6>ACPI: PM-Timer IO Port: 0x808 <7>ACPI: Local APIC address 0xfee00000 <6>Setting APIC routing to flat. 
<6>ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled) <6>ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled) <6>ACPI: LAPIC (acpi_id[0x03] lapic_id[0x04] enabled) <6>ACPI: LAPIC (acpi_id[0x04] lapic_id[0x06] enabled) <6>ACPI: LAPIC (acpi_id[0x05] lapic_id[0x10] enabled) <6>ACPI: LAPIC (acpi_id[0x06] lapic_id[0x12] enabled) <6>ACPI: LAPIC (acpi_id[0x07] lapic_id[0x14] enabled) <6>ACPI: LAPIC (acpi_id[0x08] lapic_id[0x16] enabled) <6>ACPI: LAPIC (acpi_id[0x09] lapic_id[0x88] disabled) <6>ACPI: LAPIC (acpi_id[0x0a] lapic_id[0x89] disabled) <6>ACPI: LAPIC (acpi_id[0x0b] lapic_id[0x8a] disabled) <6>ACPI: LAPIC (acpi_id[0x0c] lapic_id[0x8b] disabled) <6>ACPI: LAPIC (acpi_id[0x0d] lapic_id[0x8c] disabled) <6>ACPI: LAPIC (acpi_id[0x0e] lapic_id[0x8d] disabled) <6>ACPI: LAPIC (acpi_id[0x0f] lapic_id[0x8e] disabled) <6>ACPI: LAPIC (acpi_id[0x10] lapic_id[0x8f] disabled) <6>ACPI: LAPIC (acpi_id[0x11] lapic_id[0x90] disabled) <6>ACPI: LAPIC (acpi_id[0x12] lapic_id[0x91] disabled) <6>ACPI: LAPIC (acpi_id[0x13] lapic_id[0x92] disabled) <6>ACPI: LAPIC (acpi_id[0x14] lapic_id[0x93] disabled) <6>ACPI: LAPIC (acpi_id[0x15] lapic_id[0x94] disabled) <6>ACPI: LAPIC (acpi_id[0x16] lapic_id[0x95] disabled) <6>ACPI: LAPIC (acpi_id[0x17] lapic_id[0x96] disabled) <6>ACPI: LAPIC (acpi_id[0x18] lapic_id[0x97] disabled) <6>ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) <6>ACPI: IOAPIC (id[0x01] address[0xfec00000] gsi_base[0]) <6>IOAPIC[0]: apic_id 1, version 32, address 0xfec00000, GSI 0-23 <6>ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) <6>ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 20 low level) <7>ACPI: IRQ0 used by override. <7>ACPI: IRQ2 used by override. <6>Using ACPI (MADT) for SMP configuration information <6>ACPI: HPET id: 0x8086a301 base: 0xfed00000 <6>SMP: Allowing 24 CPUs, 16 hotplug CPUs <7>nr_irqs_gsi: 24 <6>PM: Registered nosave memory: 000000000009a000 - 000000000009b000 <6>PM: Registered nosave memory: 000000000009b000 - 00000000000a0000 <6>PM: Registered nosave memory: 00000000000a0000 - 00000000000e6000 <6>PM: Registered nosave memory: 00000000000e6000 - 0000000000100000 <6>PM: Registered nosave memory: 00000000bf760000 - 00000000bf76e000 <6>PM: Registered nosave memory: 00000000bf76e000 - 00000000bf770000 <6>PM: Registered nosave memory: 00000000bf770000 - 00000000bf77e000 <6>PM: Registered nosave memory: 00000000bf77e000 - 00000000bf7d0000 <6>PM: Registered nosave memory: 00000000bf7d0000 - 00000000bf7e0000 <6>PM: Registered nosave memory: 00000000bf7e0000 - 00000000bf7ec000 <6>PM: Registered nosave memory: 00000000bf7ec000 - 00000000c0000000 <6>PM: Registered nosave memory: 00000000c0000000 - 00000000e0000000 <6>PM: Registered nosave memory: 00000000e0000000 - 00000000f0000000 <6>PM: Registered nosave memory: 00000000f0000000 - 00000000fee00000 <6>PM: Registered nosave memory: 00000000fee00000 - 00000000fee01000 <6>PM: Registered nosave memory: 00000000fee01000 - 00000000ffc00000 <6>PM: Registered nosave memory: 00000000ffc00000 - 0000000100000000 <6>Allocating PCI resources starting at c0000000 (gap: c0000000:20000000) <6>Booting paravirtualized kernel on bare hardware <6>NR_CPUS:4096 nr_cpumask_bits:24 nr_cpu_ids:24 nr_node_ids:2 <6>PERCPU: Embedded 31 pages/cpu @ffff880028200000 s95960 r8192 d22824 u131072 <6>pcpu-alloc: s95960 r8192 d22824 u131072 alloc=1*2097152 <6>pcpu-alloc: [0] 00 01 02 03 08 10 12 14 16 18 20 22 -- -- -- -- <6>pcpu-alloc: [1] 04 05 06 07 09 11 13 15 17 19 21 23 -- -- -- -- <4>Built 2 zonelists in Zone order, mobility grouping on. 
Total pages: 6199369 <4>Policy zone: Normal <5>Kernel command line: ro root=UUID=e1ccaa08-a3cc-4179-9304-b5739591216f rd_NO_LUKS KEYBOARDTYPE=pc KEYTABLE=us LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=512M-96G:128M,96G-:320M console=tty0 console=ttyS1,115200 rdloaddriver=megaraid_sas rd_NO_LVM rd_NO_DM nmi_watchdog=0 rdblacklist=mpt2sas log_buf_len=2M rdblacklist=lpfc rdblacklist=nouveau pciehp.pciehp_disable selinux=0 transparent_hugepage=never rdblacklist=dm_mod <6>PID hash table entries: 4096 (order: 3, 32768 bytes) <6>Tick synchro disabled. <6>Checking aperture... <6>No AGP bridge found <6>PCI-DMA: Using software bounce buffering for IO (SWIOTLB) <6>Placing 64MB software IO TLB between ffff880020000000 - ffff880024000000 <6>software IO TLB at phys 0x20000000 - 0x24000000 <6>Memory: 24587380k/26214400k available (5329k kernel code, 1057880k absent, 569140k reserved, 7014k data, 1280k init) <6>Hierarchical RCU implementation. <6>NR_IRQS:33024 nr_irqs:600 <6>Extended CMOS year: 2000 <4>Console: colour VGA+ 80x25 <6>console [tty0] enabled <6>console [ttyS1] enabled <6>allocated 100663296 bytes of page_cgroup <6>please try 'cgroup_disable=memory' option if you don't want memory cgroups <4>HPET: enabling legacy interrupts <4>Wrote HPET irq cfg 3 <7>hpet clockevent registered <4>Fast TSC calibration using PIT <4>Detected 2800.168 MHz processor. <6>Calibrating delay loop (skipped), value calculated using timer frequency.. 5600.33 BogoMIPS (lpj=2800168) <6>pid_max: default: 32768 minimum: 301 <6>Security Framework initialized <6>SELinux: Disabled at boot. <6>Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes) <6>Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes) <4>Mount-cache hash table entries: 256 <6>Initializing cgroup subsys ns <6>Initializing cgroup subsys cpuacct <6>Initializing cgroup subsys memory <6>Initializing cgroup subsys devices <6>Initializing cgroup subsys freezer <6>Initializing cgroup subsys net_cls <6>Initializing cgroup subsys blkio <6>Initializing cgroup subsys perf_event <6>Initializing cgroup subsys net_prio <6>CPU: Physical Processor ID: 0 <6>CPU: Processor Core ID: 0 <6>mce: CPU supports 9 MCE banks <6>CPU0: Thermal monitoring enabled (TM1) <6>using mwait in idle threads. <6>ACPI: Core revision 20090903 <6>ftrace: converting mcount calls to 0f 1f 44 00 00 <6>ftrace: allocating 21798 entries in 86 pages <6>dmar: Host address width 40 <6>dmar: DRHD base: 0x000000fbffe000 flags: 0x1 <6>dmar: IOMMU 0: reg_base_addr fbffe000 ver 1:0 cap c90780106f0462 ecap f020f6 <6>dmar: RMRR base: 0x000000000e6000 end: 0x000000000e9fff <6>dmar: RMRR base: 0x000000bf7ec000 end: 0x000000bf7fffff <6>dmar: ATSR flags: 0x0 <6>APIC routing finalized to physical flat. <7> alloc irq_desc for 20 on node 0 <7> alloc kstat_irqs on node 0 <6>..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 <6>CPU0: Intel(R) Xeon(R) CPU X5560 @ 2.80GHz stepping 05 <6>Performance Events: PEBS fmt1+, 16-deep LBR, Nehalem events, Intel PMU driver. <6>CPU erratum AAJ80 worked around <4>CPUID marked event: 'bus cycles' unavailable <6>... version: 3 <6>... bit width: 48 <6>... generic registers: 4 <6>... value mask: 0000ffffffffffff <6>... max period: 000000007fffffff <6>... fixed-purpose events: 3 <6>... event mask: 000000070000000f <4>synchro_early_init() <6>Booting Node 0, Processors #1 #2 #3 Ok. <6>Booting Node 1, Processors #4 #5 #6 #7 <6>Brought up 8 CPUs <6>Total of 8 processors activated (44798.06 BogoMIPS). 
<7>sizeof(vma)=200 bytes <7>sizeof(page)=56 bytes <7>sizeof(inode)=592 bytes <7>sizeof(dentry)=192 bytes <7>sizeof(ext3inode)=800 bytes <7>sizeof(buffer_head)=104 bytes <7>sizeof(skbuff)=232 bytes <7>sizeof(task_struct)=2624 bytes <6>devtmpfs: initialized <6>PM: Registering ACPI NVS region at bf77e000 (335872 bytes) <6>regulator: core version 0.5 <6>NET: Registered protocol family 16 <6>ACPI: bus type pci registered <5>PCI: MCFG configuration 0: base e0000000 segment 0 buses 0 - 255 <5>PCI: MCFG area at e0000000 reserved in E820 <6>PCI: Using MMCONFIG at e0000000 - efffffff <6>PCI: Using configuration type 1 for base access <4>bio: create slab at 0 <7>ACPI: EC: Look up EC in DSDT <4>ACPI Warning for \_SB_._OSC: Return type mismatch - found Integer, expected Buffer (20090903/nspredef-1018) <7>\_SB_:_OSC evaluation returned wrong type <7>_OSC request data:1 1f <4>ACPI: Executed 1 blocks of module-level executable AML code <6>ACPI: Interpreter enabled <6>ACPI: (supports S0 S1 S4 S5) <6>ACPI: Using IOAPIC for interrupt routing <4>ACPI Warning: Incorrect checksum in table [OEMB] - 90, should be 8D (20090903/tbutils-314) <6>ACPI: No dock devices found. <6>HEST: Table parsing has been initialized. <6>PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug <6>ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) <6>pci_root PNP0A08:00: host bridge window [io 0x0000-0x0cf7] <6>pci_root PNP0A08:00: host bridge window [io 0x0d00-0xffff] <6>pci_root PNP0A08:00: host bridge window [mem 0x000a0000-0x000bffff] <6>pci_root PNP0A08:00: host bridge window [mem 0x000d0000-0x000dffff] <6>pci_root PNP0A08:00: host bridge window [mem 0xc0000000-0xdfffffff] <6>pci_root PNP0A08:00: host bridge window [mem 0xf0000000-0xfed8ffff] <6>PCI host bridge to bus 0000:00 <6>pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7] <6>pci_bus 0000:00: root bus resource [io 0x0d00-0xffff] <6>pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff] <6>pci_bus 0000:00: root bus resource [mem 0x000d0000-0x000dffff] <6>pci_bus 0000:00: root bus resource [mem 0xc0000000-0xdfffffff] <6>pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfed8ffff] <6>pci 0000:00:00.0: PME# supported from D0 D3hot D3cold <6>pci 0000:00:00.0: PME# disabled <6>pci 0000:00:01.0: PME# supported from D0 D3hot D3cold <6>pci 0000:00:01.0: PME# disabled <6>pci 0000:00:03.0: PME# supported from D0 D3hot D3cold <6>pci 0000:00:03.0: PME# disabled <6>pci 0000:00:05.0: PME# supported from D0 D3hot D3cold <6>pci 0000:00:05.0: PME# disabled <6>pci 0000:00:07.0: PME# supported from D0 D3hot D3cold <6>pci 0000:00:07.0: PME# disabled <7>pci 0000:00:16.0: reg 10: [mem 0xfacdc000-0xfacdffff 64bit] <7>pci 0000:00:16.1: reg 10: [mem 0xface0000-0xface3fff 64bit] <7>pci 0000:00:16.2: reg 10: [mem 0xface4000-0xface7fff 64bit] <7>pci 0000:00:16.3: reg 10: [mem 0xface8000-0xfacebfff 64bit] <7>pci 0000:00:16.4: reg 10: [mem 0xfacec000-0xfaceffff 64bit] <7>pci 0000:00:16.5: reg 10: [mem 0xfacf0000-0xfacf3fff 64bit] <7>pci 0000:00:16.6: reg 10: [mem 0xfacf4000-0xfacf7fff 64bit] <7>pci 0000:00:16.7: reg 10: [mem 0xfacf8000-0xfacfbfff 64bit] <7>pci 0000:00:1a.0: reg 20: [io 0xb880-0xb89f] <7>pci 0000:00:1a.1: reg 20: [io 0xbc00-0xbc1f] <7>pci 0000:00:1a.2: reg 20: [io 0xc000-0xc01f] <7>pci 0000:00:1a.7: reg 10: [mem 0xfacda000-0xfacda3ff] <6>pci 0000:00:1a.7: PME# supported from D0 D3hot D3cold <6>pci 0000:00:1a.7: PME# disabled <7>pci 0000:00:1d.0: reg 20: [io 0xb400-0xb41f] <7>pci 0000:00:1d.1: reg 20: [io 0xb480-0xb49f] <7>pci 0000:00:1d.2: 
reg 20: [io 0xb800-0xb81f] <7>pci 0000:00:1d.7: reg 10: [mem 0xfacd8000-0xfacd83ff] <6>pci 0000:00:1d.7: PME# supported from D0 D3hot D3cold <6>pci 0000:00:1d.7: PME# disabled <6>pci 0000:00:1f.0: quirk: [io 0x0800-0x087f] claimed by ICH6 ACPI/GPIO/TCO <6>pci 0000:00:1f.0: quirk: [io 0x0500-0x053f] claimed by ICH6 GPIO <6>pci 0000:00:1f.0: ICH7 LPC Generic IO decode 2 PIO at 1640 (mask 000f) <6>pci 0000:00:1f.0: ICH7 LPC Generic IO decode 3 PIO at 0290 (mask 001f) <6>pci 0000:00:1f.0: ICH7 LPC Generic IO decode 4 PIO at 0ca0 (mask 000f) <7>pci 0000:00:1f.2: reg 10: [io 0xc400-0xc407] <7>pci 0000:00:1f.2: reg 14: [io 0xcc00-0xcc03] <7>pci 0000:00:1f.2: reg 18: [io 0xc880-0xc887] <7>pci 0000:00:1f.2: reg 1c: [io 0xc800-0xc803] <7>pci 0000:00:1f.2: reg 20: [io 0xc480-0xc49f] <7>pci 0000:00:1f.2: reg 24: [mem 0xfacfe000-0xfacfe7ff] <6>pci 0000:00:1f.2: PME# supported from D3hot <6>pci 0000:00:1f.2: PME# disabled <7>pci 0000:00:1f.3: reg 10: [mem 0xfacfc000-0xfacfc0ff 64bit] <7>pci 0000:00:1f.3: reg 20: [io 0x0400-0x041f] <7>pci 0000:01:00.0: reg 10: [mem 0xfad60000-0xfad7ffff] <7>pci 0000:01:00.0: reg 14: [mem 0xfad40000-0xfad5ffff] <7>pci 0000:01:00.0: reg 18: [io 0xd880-0xd89f] <7>pci 0000:01:00.0: reg 1c: [mem 0xfad98000-0xfad9bfff] <7>pci 0000:01:00.0: reg 30: [mem 0xfad20000-0xfad3ffff pref] <6>pci 0000:01:00.0: PME# supported from D0 D3hot D3cold <6>pci 0000:01:00.0: PME# disabled <7>pci 0000:01:00.0: reg 184: [mem 0x00000000-0x00003fff 64bit] <7>pci 0000:01:00.0: reg 190: [mem 0x00000000-0x00003fff 64bit] <7>pci 0000:01:00.1: reg 10: [mem 0xfade0000-0xfadfffff] <7>pci 0000:01:00.1: reg 14: [mem 0xfadc0000-0xfaddffff] <7>pci 0000:01:00.1: reg 18: [io 0xdc00-0xdc1f] <7>pci 0000:01:00.1: reg 1c: [mem 0xfad9c000-0xfad9ffff] <7>pci 0000:01:00.1: reg 30: [mem 0xfada0000-0xfadbffff pref] <6>pci 0000:01:00.1: PME# supported from D0 D3hot D3cold <6>pci 0000:01:00.1: PME# disabled <7>pci 0000:01:00.1: reg 184: [mem 0x00000000-0x00003fff 64bit] <7>pci 0000:01:00.1: reg 190: [mem 0x00000000-0x00003fff 64bit] <6>pci 0000:00:01.0: PCI bridge to [bus 01-01] <7>pci 0000:00:01.0: bridge window [io 0xd000-0xdfff] <7>pci 0000:00:01.0: bridge window [mem 0xfad00000-0xfadfffff] <7>pci 0000:00:01.0: bridge window [mem 0xfff00000-0x000fffff pref] (disabled) <7>pci 0000:02:00.0: reg 10: [mem 0xfae00000-0xfaefffff 64bit] <7>pci 0000:02:00.0: reg 18: [mem 0xf8800000-0xf8ffffff 64bit pref] <6>pci 0000:00:03.0: PCI bridge to [bus 02-02] <7>pci 0000:00:03.0: bridge window [io 0xf000-0x0000] (disabled) <7>pci 0000:00:03.0: bridge window [mem 0xfae00000-0xfaefffff] <7>pci 0000:00:03.0: bridge window [mem 0xf8800000-0xf8ffffff 64bit pref] <6>pci 0000:00:05.0: PCI bridge to [bus 03-03] <7>pci 0000:00:05.0: bridge window [io 0xf000-0x0000] (disabled) <7>pci 0000:00:05.0: bridge window [mem 0xfff00000-0x000fffff] (disabled) <7>pci 0000:00:05.0: bridge window [mem 0xfff00000-0x000fffff pref] (disabled) <7>pci 0000:04:00.0: reg 10: [mem 0xfafba000-0xfafbafff 64bit] <7>pci 0000:04:00.0: reg 18: [mem 0xfafb4000-0xfafb7fff 64bit] <7>pci 0000:04:00.0: reg 20: [io 0xe400-0xe4ff] <7>pci 0000:04:00.0: reg 30: [mem 0xfaf40000-0xfaf7ffff pref] <7>pci 0000:04:00.1: reg 10: [mem 0xfafbb000-0xfafbbfff 64bit] <7>pci 0000:04:00.1: reg 18: [mem 0xfafbc000-0xfafbffff 64bit] <7>pci 0000:04:00.1: reg 20: [io 0xe800-0xe8ff] <7>pci 0000:04:00.1: reg 30: [mem 0xfafc0000-0xfaffffff pref] <6>pci 0000:00:07.0: PCI bridge to [bus 04-04] <7>pci 0000:00:07.0: bridge window [io 0xe000-0xefff] <7>pci 0000:00:07.0: bridge window [mem 
0xfaf00000-0xfaffffff] <7>pci 0000:00:07.0: bridge window [mem 0xfff00000-0x000fffff pref] (disabled) <7>pci 0000:05:01.0: reg 10: [mem 0xf9000000-0xf9ffffff pref] <7>pci 0000:05:01.0: reg 14: [mem 0xfbefc000-0xfbefffff] <7>pci 0000:05:01.0: reg 18: [mem 0xfb000000-0xfb7fffff] <6>pci 0000:00:1e.0: PCI bridge to [bus 05-05] (subtractive decode) <7>pci 0000:00:1e.0: bridge window [io 0xf000-0x0000] (disabled) <7>pci 0000:00:1e.0: bridge window [mem 0xfb000000-0xfbefffff] <7>pci 0000:00:1e.0: bridge window [mem 0xf9000000-0xf9ffffff 64bit pref] <7>pci 0000:00:1e.0: bridge window [io 0x0000-0x0cf7] (subtractive decode) <7>pci 0000:00:1e.0: bridge window [io 0x0d00-0xffff] (subtractive decode) <7>pci 0000:00:1e.0: bridge window [mem 0x000a0000-0x000bffff] (subtractive decode) <7>pci 0000:00:1e.0: bridge window [mem 0x000d0000-0x000dffff] (subtractive decode) <7>pci 0000:00:1e.0: bridge window [mem 0xc0000000-0xdfffffff] (subtractive decode) <7>pci 0000:00:1e.0: bridge window [mem 0xf0000000-0xfed8ffff] (subtractive decode) <7>ACPI: PCI Interrupt Routing Table [\_SB_.PCI0._PRT] <7>ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.P0P1._PRT] <7>ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.NPE7._PRT] <7>ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.NPE1._PRT] <7>ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.NPE3._PRT] <6> pci0000:00: Requesting ACPI _OSC control (0x1d) <6>Unable to assume _OSC PCIe control. Disabling ASPM <6>ACPI: PCI Interrupt Link [LNKA] (IRQs 3 4 5 6 7 10 11 12 14 *15) <6>ACPI: PCI Interrupt Link [LNKB] (IRQs 3 4 *5 6 7 10 11 12 14 15) <6>ACPI: PCI Interrupt Link [LNKC] (IRQs 3 4 5 6 7 *10 11 12 14 15) <6>ACPI: PCI Interrupt Link [LNKD] (IRQs 3 4 5 6 7 10 *11 12 14 15) <6>ACPI: PCI Interrupt Link [LNKE] (IRQs 3 4 5 6 7 10 11 12 14 15) *0, disabled. <6>ACPI: PCI Interrupt Link [LNKF] (IRQs 3 4 5 6 7 10 11 12 *14 15) <6>ACPI: PCI Interrupt Link [LNKG] (IRQs 3 4 5 6 7 10 11 12 14 15) *0, disabled. <6>ACPI: PCI Interrupt Link [LNKH] (IRQs 3 4 5 6 *7 10 11 12 14 15) <6>vgaarb: device added: PCI:0000:05:01.0,decodes=io+mem,owns=io+mem,locks=none <6>vgaarb: loaded <6>vgaarb: bridge control possible 0000:05:01.0 <5>SCSI subsystem initialized <7>libata version 3.00 loaded. <6>usbcore: registered new interface driver usbfs <6>usbcore: registered new interface driver hub <6>usbcore: registered new device driver usb <6>PCI: Using ACPI for IRQ routing <7>PCI: old code would have set cacheline size to 32 bytes, but clflush_size = 64 <7>PCI: pci_cache_line_size set to 64 bytes <5>lo: Dropping TSO features since no CSUM feature. 
<6>NetLabel: Initializing <6>NetLabel: domain hash size = 128 <6>NetLabel: protocols = UNLABELED CIPSOv4 <6>NetLabel: unlabeled traffic allowed by default <6>HPET: 4 timers in total, 0 timers will be used for per-cpu timer <6>hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0 <6>hpet0: 4 comparators, 64-bit 14.318180 MHz counter <6>Switching to clocksource hpet <4>hrtimer_switch_to_hres cpu 0 <4>hrtimer_switch_to_hres cpu 4 <4>hrtimer_switch_to_hres cpu 2 <4>hrtimer_switch_to_hres cpu 7 <4>hrtimer_switch_to_hres cpu 6 <4>hrtimer_switch_to_hres cpu 1 <4>hrtimer_switch_to_hres cpu 5 <4>hrtimer_switch_to_hres cpu 3 <6>pnp: PnP ACPI init <6>ACPI: bus type pnp registered <7>pnp 00:00: [io 0x0cf8-0x0cff] <7>pnp 00:00: Plug and Play ACPI device, IDs PNP0a08 PNP0a03 (active) <7>pnp 00:01: [mem 0xfbf00000-0xfbffffff] <7>pnp 00:01: [mem 0xfc000000-0xfcffffff] <7>pnp 00:01: [mem 0xfd000000-0xfdffffff] <7>pnp 00:01: [mem 0xfe000000-0xfebfffff] <7>pnp 00:01: [mem 0xfec8a000-0xfec8afff] <7>pnp 00:01: [mem 0xfed10000-0xfed10fff] <7>pnp 00:01: Plug and Play ACPI device, IDs PNP0c01 (active) <7>pnp 00:02: [dma 4] <7>pnp 00:02: [io 0x0000-0x000f] <7>pnp 00:02: [io 0x0081-0x0083] <7>pnp 00:02: [io 0x0087] <7>pnp 00:02: [io 0x0089-0x008b] <7>pnp 00:02: [io 0x008f] <7>pnp 00:02: [io 0x00c0-0x00df] <7>pnp 00:02: Plug and Play ACPI device, IDs PNP0200 (active) <7>pnp 00:03: [io 0x0070-0x0071] <7>pnp 00:03: [irq 8] <7>pnp 00:03: Plug and Play ACPI device, IDs PNP0b00 (active) <7>pnp 00:04: [io 0x0061] <7>pnp 00:04: Plug and Play ACPI device, IDs PNP0800 (active) <7>pnp 00:05: [io 0x00f0-0x00ff] <7>pnp 00:05: [irq 13] <7>pnp 00:05: Plug and Play ACPI device, IDs PNP0c04 (active) <7>pnp 00:06: [io 0x03f8-0x03ff] <7>pnp 00:06: [irq 4] <7>pnp 00:06: [dma 0 disabled] <7>pnp 00:06: Plug and Play ACPI device, IDs PNP0501 (active) <7>pnp 00:07: [io 0x02f8-0x02ff] <7>pnp 00:07: [irq 3] <7>pnp 00:07: [dma 0 disabled] <7>pnp 00:07: Plug and Play ACPI device, IDs PNP0501 (active) <7>pnp 00:08: [io 0x0000-0xffffffffffffffff disabled] <7>pnp 00:08: [io 0x0a00-0x0a0f] <7>pnp 00:08: Plug and Play ACPI device, IDs PNP0c02 (active) <7>pnp 00:09: [io 0x0010-0x001f] <7>pnp 00:09: [io 0x0022-0x003f] <7>pnp 00:09: [io 0x0044-0x005f] <7>pnp 00:09: [io 0x0062-0x0063] <7>pnp 00:09: [io 0x0065-0x006f] <7>pnp 00:09: [io 0x0072-0x007f] <7>pnp 00:09: [io 0x0080] <7>pnp 00:09: [io 0x0084-0x0086] <7>pnp 00:09: [io 0x0088] <7>pnp 00:09: [io 0x008c-0x008e] <7>pnp 00:09: [io 0x0090-0x009f] <7>pnp 00:09: [io 0x00a2-0x00bf] <7>pnp 00:09: [io 0x00e0-0x00ef] <7>pnp 00:09: [io 0x04d0-0x04d1] <7>pnp 00:09: [io 0x0800-0x087f] <7>pnp 00:09: [io 0x0000-0xffffffffffffffff disabled] <7>pnp 00:09: [io 0x0500-0x057f] <7>pnp 00:09: [mem 0xfed1c000-0xfed1ffff] <7>pnp 00:09: [mem 0xfed20000-0xfed3ffff] <7>pnp 00:09: [mem 0xfed40000-0xfed8ffff] <7>pnp 00:09: Plug and Play ACPI device, IDs PNP0c02 (active) <7>pnp 00:0a: [mem 0xfed00000-0xfed003ff] <7>pnp 00:0a: Plug and Play ACPI device, IDs PNP0103 (active) <7>pnp 00:0b: [io 0x0060] <7>pnp 00:0b: [io 0x0064] <7>pnp 00:0b: [mem 0xfec00000-0xfec00fff] <7>pnp 00:0b: [mem 0xfee00000-0xfee00fff] <7>pnp 00:0b: Plug and Play ACPI device, IDs PNP0c02 (active) <7>pnp 00:0c: [io 0x0ca2-0x0ca3] <7>pnp 00:0c: Plug and Play ACPI device, IDs PNP0c02 (active) <7>pnp 00:0d: [mem 0xe0000000-0xefffffff] <7>pnp 00:0d: Plug and Play ACPI device, IDs PNP0c02 (active) <7>pnp 00:0e: [mem 0x00000000-0x0009ffff] <7>pnp 00:0e: [mem 0x000c0000-0x000cffff] <7>pnp 00:0e: [mem 0x000e0000-0x000fffff] <7>pnp 00:0e: [mem 0x00100000-0xbfffffff] <7>pnp 
00:0e: [mem 0xfed90000-0xffffffff] <7>pnp 00:0e: Plug and Play ACPI device, IDs PNP0c01 (active) <6>pnp: PnP ACPI: found 15 devices <6>ACPI: ACPI bus type pnp unregistered <6>system 00:01: [mem 0xfbf00000-0xfbffffff] could not be reserved <6>system 00:01: [mem 0xfc000000-0xfcffffff] has been reserved <6>system 00:01: [mem 0xfd000000-0xfdffffff] has been reserved <6>system 00:01: [mem 0xfe000000-0xfebfffff] has been reserved <6>system 00:01: [mem 0xfec8a000-0xfec8afff] has been reserved <6>system 00:01: [mem 0xfed10000-0xfed10fff] has been reserved <6>system 00:08: [io 0x0a00-0x0a0f] has been reserved <6>system 00:09: [io 0x04d0-0x04d1] has been reserved <6>system 00:09: [io 0x0800-0x087f] has been reserved <6>system 00:09: [io 0x0500-0x057f] could not be reserved <6>system 00:09: [mem 0xfed1c000-0xfed1ffff] has been reserved <6>system 00:09: [mem 0xfed20000-0xfed3ffff] has been reserved <6>system 00:09: [mem 0xfed40000-0xfed8ffff] has been reserved <6>system 00:0b: [mem 0xfec00000-0xfec00fff] could not be reserved <6>system 00:0b: [mem 0xfee00000-0xfee00fff] has been reserved <6>system 00:0c: [io 0x0ca2-0x0ca3] has been reserved <6>system 00:0d: [mem 0xe0000000-0xefffffff] has been reserved <6>system 00:0e: [mem 0x00000000-0x0009ffff] could not be reserved <6>system 00:0e: [mem 0x000c0000-0x000cffff] could not be reserved <6>system 00:0e: [mem 0x000e0000-0x000fffff] could not be reserved <6>system 00:0e: [mem 0x00100000-0xbfffffff] could not be reserved <6>system 00:0e: [mem 0xfed90000-0xffffffff] could not be reserved <7>PCI: max bus depth: 1 pci_try_num: 2 <7>pci 0000:01:00.0: reg 184: [mem 0x00000000-0x00003fff 64bit] <7>pci 0000:01:00.0: reg 190: [mem 0x00000000-0x00003fff 64bit] <7>pci 0000:01:00.0: reg 184: [mem 0x00000000-0x00003fff 64bit] <7>pci 0000:01:00.1: reg 184: [mem 0x00000000-0x00003fff 64bit] <7>pci 0000:01:00.0: reg 184: [mem 0x00000000-0x00003fff 64bit] <7>pci 0000:01:00.0: reg 190: [mem 0x00000000-0x00003fff 64bit] <7>pci 0000:01:00.1: reg 190: [mem 0x00000000-0x00003fff 64bit] <7>pci 0000:01:00.0: reg 184: [mem 0x00000000-0x00003fff 64bit] <7>pci 0000:01:00.0: reg 190: [mem 0x00000000-0x00003fff 64bit] <7>pci 0000:01:00.1: reg 184: [mem 0x00000000-0x00003fff 64bit] <7>pci 0000:01:00.0: reg 184: [mem 0x00000000-0x00003fff 64bit] <7>pci 0000:01:00.0: reg 184: [mem 0x00000000-0x00003fff 64bit] <6>pci 0000:01:00.0: BAR 7: assigned [mem 0xfad00000-0xfad1ffff 64bit] <6>pci 0000:01:00.0: BAR 7: set to [mem 0xfad00000-0xfad1ffff 64bit] (PCI address [0xfad00000-0xfad1ffff] <7>pci 0000:01:00.0: reg 190: [mem 0x00000000-0x00003fff 64bit] <7>pci 0000:01:00.0: reg 190: [mem 0x00000000-0x00003fff 64bit] <6>pci 0000:01:00.0: BAR 10: can't assign mem (size 0x20000) <7>pci 0000:01:00.1: reg 184: [mem 0x00000000-0x00003fff 64bit] <7>pci 0000:01:00.1: reg 184: [mem 0x00000000-0x00003fff 64bit] <6>pci 0000:01:00.1: BAR 7: can't assign mem (size 0x20000) <7>pci 0000:01:00.1: reg 190: [mem 0x00000000-0x00003fff 64bit] <7>pci 0000:01:00.1: reg 190: [mem 0x00000000-0x00003fff 64bit] <6>pci 0000:01:00.1: BAR 10: can't assign mem (size 0x20000) <6>pci 0000:00:01.0: PCI bridge to [bus 01-01] <6>pci 0000:00:01.0: PCI bridge to [bus 01-01] <6>pci 0000:00:01.0: bridge window [io 0xd000-0xdfff] <6>pci 0000:00:01.0: bridge window [mem 0xfad00000-0xfadfffff] <6>pci 0000:00:01.0: bridge window [mem pref disabled] <6>pci 0000:00:03.0: PCI bridge to [bus 02-02] <6>pci 0000:00:03.0: PCI bridge to [bus 02-02] <6>pci 0000:00:03.0: bridge window [io disabled] <6>pci 0000:00:03.0: bridge window [mem 
0xfae00000-0xfaefffff] <6>pci 0000:00:03.0: bridge window [mem 0xf8800000-0xf8ffffff 64bit pref] <6>pci 0000:00:05.0: PCI bridge to [bus 03-03] <6>pci 0000:00:05.0: PCI bridge to [bus 03-03] <6>pci 0000:00:05.0: bridge window [io disabled] <6>pci 0000:00:05.0: bridge window [mem disabled] <6>pci 0000:00:05.0: bridge window [mem pref disabled] <6>pci 0000:00:07.0: PCI bridge to [bus 04-04] <6>pci 0000:00:07.0: PCI bridge to [bus 04-04] <6>pci 0000:00:07.0: bridge window [io 0xe000-0xefff] <6>pci 0000:00:07.0: bridge window [mem 0xfaf00000-0xfaffffff] <6>pci 0000:00:07.0: bridge window [mem pref disabled] <6>pci 0000:00:1e.0: PCI bridge to [bus 05-05] <6>pci 0000:00:1e.0: PCI bridge to [bus 05-05] <6>pci 0000:00:1e.0: bridge window [io disabled] <6>pci 0000:00:1e.0: bridge window [mem 0xfb000000-0xfbefffff] <6>pci 0000:00:1e.0: bridge window [mem 0xf9000000-0xf9ffffff 64bit pref] <7>pci 0000:00:01.0: setting latency timer to 64 <7>pci 0000:00:03.0: setting latency timer to 64 <7>pci 0000:00:05.0: setting latency timer to 64 <7>pci 0000:00:07.0: setting latency timer to 64 <7>pci 0000:00:1e.0: setting latency timer to 64 <7>pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7] <7>pci_bus 0000:00: resource 5 [io 0x0d00-0xffff] <7>pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff] <7>pci_bus 0000:00: resource 7 [mem 0x000d0000-0x000dffff] <7>pci_bus 0000:00: resource 8 [mem 0xc0000000-0xdfffffff] <7>pci_bus 0000:00: resource 9 [mem 0xf0000000-0xfed8ffff] <7>pci_bus 0000:01: resource 0 [io 0xd000-0xdfff] <7>pci_bus 0000:01: resource 1 [mem 0xfad00000-0xfadfffff] <7>pci_bus 0000:02: resource 1 [mem 0xfae00000-0xfaefffff] <7>pci_bus 0000:02: resource 2 [mem 0xf8800000-0xf8ffffff 64bit pref] <7>pci_bus 0000:04: resource 0 [io 0xe000-0xefff] <7>pci_bus 0000:04: resource 1 [mem 0xfaf00000-0xfaffffff] <7>pci_bus 0000:05: resource 1 [mem 0xfb000000-0xfbefffff] <7>pci_bus 0000:05: resource 2 [mem 0xf9000000-0xf9ffffff 64bit pref] <7>pci_bus 0000:05: resource 4 [io 0x0000-0x0cf7] <7>pci_bus 0000:05: resource 5 [io 0x0d00-0xffff] <7>pci_bus 0000:05: resource 6 [mem 0x000a0000-0x000bffff] <7>pci_bus 0000:05: resource 7 [mem 0x000d0000-0x000dffff] <7>pci_bus 0000:05: resource 8 [mem 0xc0000000-0xdfffffff] <7>pci_bus 0000:05: resource 9 [mem 0xf0000000-0xfed8ffff] <6>NET: Registered protocol family 2 <6>IP route cache hash table entries: 524288 (order: 10, 4194304 bytes) <6>TCP established hash table entries: 524288 (order: 11, 8388608 bytes) <6>TCP bind hash table entries: 65536 (order: 8, 1048576 bytes) <6>TCP: Hash tables configured (established 524288 bind 65536) <6>TCP reno registered <6>NET: Registered protocol family 1 <7> alloc irq_desc for 16 on node -1 <7> alloc kstat_irqs on node -1 <6>pci 0000:00:1a.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 <6>pci 0000:00:1a.0: PCI INT A disabled <7> alloc irq_desc for 21 on node -1 <7> alloc kstat_irqs on node -1 <6>pci 0000:00:1a.1: PCI INT B -> GSI 21 (level, low) -> IRQ 21 <6>pci 0000:00:1a.1: PCI INT B disabled <7> alloc irq_desc for 19 on node -1 <7> alloc kstat_irqs on node -1 <6>pci 0000:00:1a.2: PCI INT D -> GSI 19 (level, low) -> IRQ 19 <6>pci 0000:00:1a.2: PCI INT D disabled <7> alloc irq_desc for 18 on node -1 <7> alloc kstat_irqs on node -1 <6>pci 0000:00:1a.7: PCI INT C -> GSI 18 (level, low) -> IRQ 18 <6>pci 0000:00:1a.7: PCI INT C disabled <7> alloc irq_desc for 23 on node -1 <7> alloc kstat_irqs on node -1 <6>pci 0000:00:1d.0: PCI INT A -> GSI 23 (level, low) -> IRQ 23 <6>pci 0000:00:1d.0: PCI INT A disabled <6>pci 0000:00:1d.1: PCI INT B -> 
GSI 19 (level, low) -> IRQ 19 <6>pci 0000:00:1d.1: PCI INT B disabled <6>pci 0000:00:1d.2: PCI INT C -> GSI 18 (level, low) -> IRQ 18 <6>pci 0000:00:1d.2: PCI INT C disabled <6>pci 0000:00:1d.7: PCI INT A -> GSI 23 (level, low) -> IRQ 23 <6>pci 0000:00:1d.7: PCI INT A disabled <7>pci 0000:05:01.0: Boot video device <6>Trying to unpack rootfs image as initramfs... <6>Freeing initrd memory: 5334k freed <6>audit: initializing netlink socket (disabled) <5>type=2000 audit(1403764797.887:1): initialized <6>HugeTLB registered 2 MB page size, pre-allocated 0 pages <5>VFS: Disk quotas dquot_6.5.2 <4>Dquot-cache hash table entries: 512 (order 0, 4096 bytes) <6>msgmni has been set to 32768 <6>alg: No test for stdrng (krng) <4>ksign: Installing public key data <4>Loading keyring <4>- Added public key 68BDBC6B5D1E0ABC <4>- User ID: Bull S.A.S (Kernel Module GPG key) <6>Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) <6>io scheduler noop registered <6>io scheduler anticipatory registered <6>io scheduler deadline registered <6>io scheduler cfq registered (default) <6>pci_hotplug: PCI Hot Plug PCI Core version: 0.5 <6>pciehp: PCI Express Hot Plug Controller Driver version: 0.4 <6>acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 <7>intel_idle: MWAIT substates: 0x1120 <7>intel_idle: v0.4 model 0x1A <7>intel_idle: lapic_timer_reliable_states 0x2 <6>ipmi message handler version 39.2 <6>IPMI System Interface driver. <6>ipmi_si: probing via SMBIOS <6>ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 <6>ipmi_si: Adding SMBIOS-specified kcs state machine <6>ipmi_si: Trying SMBIOS-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 <6>ipmi_si ipmi_si.0: Found new BMC (man_id: 0x00b980, prod_id: 0xaabb, dev_id: 0x20) <6>ipmi_si ipmi_si.0: IPMI kcs interface initialized <6>input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 <6>ACPI: Power Button [PWRB] <6>input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input1 <6>ACPI: Power Button [PWRF] <7>ACPI: acpi_idle yielding to intel_idleACPI: SSDT 00000000bf77e1f0 01E1C (v01 DpgPmm P001Ist 00000011 INTL 20051117) <4>ACPI: SSDT 00000000bf780010 00676 (v01 PmRef P001Cst 00003001 INTL 20051117) <4>ACPI Exception: AE_NOT_FOUND, No or invalid critical threshold (20090903/thermal-386) <3>ERST: Failed to get Error Log Address Range. <4>[Firmware Warn]: GHES: Poll interval is 0 for generic hardware error source: 1, disabled. <6>GHES: APEI firmware first mode is enabled by WHEA _OSC. 
<6>Non-volatile memory driver v1.3 <6>Linux agpgart interface v0.103 <6>crash memory driver: version 1.1 <6>Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled <6>serial8250: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A <6>serial8250: ttyS1 at I/O 0x2f8 (irq = 3) is a 16550A <6>00:06: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A <6>00:07: ttyS1 at I/O 0x2f8 (irq = 3) is a 16550A <6>brd: module loaded <6>loop: module loaded <6>input: Macintosh mouse button emulation as /devices/virtual/input/input2 <6>Fixed MDIO Bus: probed <6>ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver <6>ehci_hcd 0000:00:1a.7: PCI INT C -> GSI 18 (level, low) -> IRQ 18 <7>ehci_hcd 0000:00:1a.7: setting latency timer to 64 <6>ehci_hcd 0000:00:1a.7: EHCI Host Controller <6>ehci_hcd 0000:00:1a.7: new USB bus registered, assigned bus number 1 <6>ehci_hcd 0000:00:1a.7: debug port 1 <7>ehci_hcd 0000:00:1a.7: cache line size of 64 is not supported <6>ehci_hcd 0000:00:1a.7: irq 18, io mem 0xfacda000 <6>ehci_hcd 0000:00:1a.7: USB 2.0 started, EHCI 1.00 <6>usb usb1: New USB device found, idVendor=1d6b, idProduct=0002 <6>usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1 <6>usb usb1: Product: EHCI Host Controller <6>usb usb1: Manufacturer: Linux 2.6.32-431.1.2.el6.Bull.44.x86_64 ehci_hcd <6>usb usb1: SerialNumber: 0000:00:1a.7 <6>usb usb1: configuration #1 chosen from 1 choice <6>hub 1-0:1.0: USB hub found <6>hub 1-0:1.0: 6 ports detected <6>ehci_hcd 0000:00:1d.7: PCI INT A -> GSI 23 (level, low) -> IRQ 23 <7>ehci_hcd 0000:00:1d.7: setting latency timer to 64 <6>ehci_hcd 0000:00:1d.7: EHCI Host Controller <6>ehci_hcd 0000:00:1d.7: new USB bus registered, assigned bus number 2 <6>ehci_hcd 0000:00:1d.7: debug port 1 <7>ehci_hcd 0000:00:1d.7: cache line size of 64 is not supported <6>ehci_hcd 0000:00:1d.7: irq 23, io mem 0xfacd8000 <6>ehci_hcd 0000:00:1d.7: USB 2.0 started, EHCI 1.00 <6>usb usb2: New USB device found, idVendor=1d6b, idProduct=0002 <6>usb usb2: New USB device strings: Mfr=3, Product=2, SerialNumber=1 <6>usb usb2: Product: EHCI Host Controller <6>usb usb2: Manufacturer: Linux 2.6.32-431.1.2.el6.Bull.44.x86_64 ehci_hcd <6>usb usb2: SerialNumber: 0000:00:1d.7 <6>usb usb2: configuration #1 chosen from 1 choice <6>hub 2-0:1.0: USB hub found <6>hub 2-0:1.0: 6 ports detected <6>ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver <6>uhci_hcd: USB Universal Host Controller Interface driver <6>Refined TSC clocksource calibration: 2800.099 MHz. 
<6>Switching to clocksource tsc <6>uhci_hcd 0000:00:1a.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 <7>uhci_hcd 0000:00:1a.0: setting latency timer to 64 <6>uhci_hcd 0000:00:1a.0: UHCI Host Controller <6>uhci_hcd 0000:00:1a.0: new USB bus registered, assigned bus number 3 <6>uhci_hcd 0000:00:1a.0: irq 16, io base 0x0000b880 <6>usb usb3: New USB device found, idVendor=1d6b, idProduct=0001 <6>usb usb3: New USB device strings: Mfr=3, Product=2, SerialNumber=1 <6>usb usb3: Product: UHCI Host Controller <6>usb usb3: Manufacturer: Linux 2.6.32-431.1.2.el6.Bull.44.x86_64 uhci_hcd <6>usb usb3: SerialNumber: 0000:00:1a.0 <6>usb usb3: configuration #1 chosen from 1 choice <6>hub 3-0:1.0: USB hub found <6>hub 3-0:1.0: 2 ports detected <6>uhci_hcd 0000:00:1a.1: PCI INT B -> GSI 21 (level, low) -> IRQ 21 <7>uhci_hcd 0000:00:1a.1: setting latency timer to 64 <6>uhci_hcd 0000:00:1a.1: UHCI Host Controller <6>uhci_hcd 0000:00:1a.1: new USB bus registered, assigned bus number 4 <6>uhci_hcd 0000:00:1a.1: irq 21, io base 0x0000bc00 <6>usb usb4: New USB device found, idVendor=1d6b, idProduct=0001 <6>usb usb4: New USB device strings: Mfr=3, Product=2, SerialNumber=1 <6>usb usb4: Product: UHCI Host Controller <6>usb usb4: Manufacturer: Linux 2.6.32-431.1.2.el6.Bull.44.x86_64 uhci_hcd <6>usb usb4: SerialNumber: 0000:00:1a.1 <6>usb usb4: configuration #1 chosen from 1 choice <6>hub 4-0:1.0: USB hub found <6>hub 4-0:1.0: 2 ports detected <6>uhci_hcd 0000:00:1a.2: PCI INT D -> GSI 19 (level, low) -> IRQ 19 <7>uhci_hcd 0000:00:1a.2: setting latency timer to 64 <6>uhci_hcd 0000:00:1a.2: UHCI Host Controller <6>uhci_hcd 0000:00:1a.2: new USB bus registered, assigned bus number 5 <6>uhci_hcd 0000:00:1a.2: irq 19, io base 0x0000c000 <6>usb usb5: New USB device found, idVendor=1d6b, idProduct=0001 <6>usb usb5: New USB device strings: Mfr=3, Product=2, SerialNumber=1 <6>usb usb5: Product: UHCI Host Controller <6>usb usb5: Manufacturer: Linux 2.6.32-431.1.2.el6.Bull.44.x86_64 uhci_hcd <6>usb usb5: SerialNumber: 0000:00:1a.2 <6>usb usb5: configuration #1 chosen from 1 choice <6>hub 5-0:1.0: USB hub found <6>hub 5-0:1.0: 2 ports detected <6>uhci_hcd 0000:00:1d.0: PCI INT A -> GSI 23 (level, low) -> IRQ 23 <7>uhci_hcd 0000:00:1d.0: setting latency timer to 64 <6>uhci_hcd 0000:00:1d.0: UHCI Host Controller <6>uhci_hcd 0000:00:1d.0: new USB bus registered, assigned bus number 6 <6>uhci_hcd 0000:00:1d.0: irq 23, io base 0x0000b400 <6>usb usb6: New USB device found, idVendor=1d6b, idProduct=0001 <6>usb usb6: New USB device strings: Mfr=3, Product=2, SerialNumber=1 <6>usb usb6: Product: UHCI Host Controller <6>usb usb6: Manufacturer: Linux 2.6.32-431.1.2.el6.Bull.44.x86_64 uhci_hcd <6>usb usb6: SerialNumber: 0000:00:1d.0 <6>usb usb6: configuration #1 chosen from 1 choice <6>hub 6-0:1.0: USB hub found <6>hub 6-0:1.0: 2 ports detected <6>uhci_hcd 0000:00:1d.1: PCI INT B -> GSI 19 (level, low) -> IRQ 19 <7>uhci_hcd 0000:00:1d.1: setting latency timer to 64 <6>uhci_hcd 0000:00:1d.1: UHCI Host Controller <6>uhci_hcd 0000:00:1d.1: new USB bus registered, assigned bus number 7 <6>uhci_hcd 0000:00:1d.1: irq 19, io base 0x0000b480 <6>usb usb7: New USB device found, idVendor=1d6b, idProduct=0001 <6>usb usb7: New USB device strings: Mfr=3, Product=2, SerialNumber=1 <6>usb usb7: Product: UHCI Host Controller <6>usb usb7: Manufacturer: Linux 2.6.32-431.1.2.el6.Bull.44.x86_64 uhci_hcd <6>usb usb7: SerialNumber: 0000:00:1d.1 <6>usb usb7: configuration #1 chosen from 1 choice <6>hub 7-0:1.0: USB hub found <6>hub 7-0:1.0: 2 ports detected 
<6>uhci_hcd 0000:00:1d.2: PCI INT C -> GSI 18 (level, low) -> IRQ 18 <7>uhci_hcd 0000:00:1d.2: setting latency timer to 64 <6>uhci_hcd 0000:00:1d.2: UHCI Host Controller <6>uhci_hcd 0000:00:1d.2: new USB bus registered, assigned bus number 8 <6>uhci_hcd 0000:00:1d.2: irq 18, io base 0x0000b800 <6>usb usb8: New USB device found, idVendor=1d6b, idProduct=0001 <6>usb usb8: New USB device strings: Mfr=3, Product=2, SerialNumber=1 <6>usb usb8: Product: UHCI Host Controller <6>usb usb8: Manufacturer: Linux 2.6.32-431.1.2.el6.Bull.44.x86_64 uhci_hcd <6>usb usb8: SerialNumber: 0000:00:1d.2 <6>usb usb8: configuration #1 chosen from 1 choice <6>hub 8-0:1.0: USB hub found <6>hub 8-0:1.0: 2 ports detected <6>PNP: No PS/2 controller found. Probing ports directly. <6>serio: i8042 KBD port at 0x60,0x64 irq 1 <6>serio: i8042 AUX port at 0x60,0x64 irq 12 <6>mice: PS/2 mouse device common for all mice <6>rtc_cmos 00:03: RTC can wake from S4 <6>rtc_cmos 00:03: rtc core: registered rtc_cmos as rtc0 <6>rtc0: alarms up to one month, y3k, 114 bytes nvram, hpet irqs <6>cpuidle: using governor ladder <6>cpuidle: using governor menu <6>EFI Variables Facility v0.08 2004-May-17 <6>usbcore: registered new interface driver hiddev <6>usbcore: registered new interface driver usbhid <6>usbhid: v2.6:USB HID core driver <6>GRE over IPv4 demultiplexor driver <6>TCP cubic registered <6>Initializing XFRM netlink socket <6>NET: Registered protocol family 17 <4>registered taskstats version 1 <6>rtc_cmos 00:03: setting system clock to 2014-06-26 06:40:01 UTC (1403764801) <6>Initalizing network drop monitor service <6>Freeing unused kernel memory: 1280k freed <6>Write protecting the kernel read-only data: 10240k <6>usb 4-1: new full speed USB device number 2 using uhci_hcd <6>Freeing unused kernel memory: 796k freed <6>Freeing unused kernel memory: 1584k freed <6>megasas: 06.700.06.00-rh1 Sat. Aug. 31 17:00:00 PDT 2013 <6>dracut: dracut-004-336.el6_5.2 <6>dracut: rd_NO_LUKS: removing cryptoluks activation <6>udev: starting version 147 <6>dracut: Starting plymouth daemon <7>ahci 0000:00:1f.2: version 3.0 <6>ahci 0000:00:1f.2: PCI INT B -> GSI 19 (level, low) -> IRQ 19 <7> alloc irq_desc for 24 on node -1 <7> alloc kstat_irqs on node -1 <7>ahci 0000:00:1f.2: irq 24 for MSI/MSI-X <6>ahci: SSS flag set, parallel bus scan disabled <6>ahci 0000:00:1f.2: AHCI 0001.0200 32 slots 6 ports 3 Gbps 0x3f impl SATA mode <6>usb 4-1: New USB device found, idVendor=046b, idProduct=ff10 <6>usb 4-1: New USB device strings: Mfr=1, Product=2, SerialNumber=3 <6>usb 4-1: Product: Virtual Keyboard and Mouse <6>usb 4-1: Manufacturer: American Megatrends Inc. <6>usb 4-1: SerialNumber: serial <6>usb 4-1: configuration #1 chosen from 1 choice <6>ahci 0000:00:1f.2: flags: 64bit ncq sntf stag pm led clo pio slum part ccc ems sxs <7>ahci 0000:00:1f.2: setting latency timer to 64 <6>input: American Megatrends Inc. Virtual Keyboard and Mouse as /devices/pci0000:00/0000:00:1a.1/usb4/4-1/4-1:1.0/input/input3 <6>scsi0 : ahci <6>generic-usb 0003:046B:FF10.0001: input,hidraw0: USB HID v1.10 Keyboard [American Megatrends Inc. 
Virtual Keyboard and Mouse] on usb-0000:00:1a.1-1/input0 <6>scsi1 : ahci <6>scsi2 : ahci <6>scsi3 : ahci <6>scsi4 : ahci <6>scsi5 : ahci <6>ata1: SATA max UDMA/133 abar m2048@0xfacfe000 port 0xfacfe100 irq 24 <6>ata2: SATA max UDMA/133 abar m2048@0xfacfe000 port 0xfacfe180 irq 24 <6>ata3: SATA max UDMA/133 abar m2048@0xfacfe000 port 0xfacfe200 irq 24 <6>ata4: SATA max UDMA/133 abar m2048@0xfacfe000 port 0xfacfe280 irq 24 <6>ata5: SATA max UDMA/133 abar m2048@0xfacfe000 port 0xfacfe300 irq 24 <6>ata6: SATA max UDMA/133 abar m2048@0xfacfe000 port 0xfacfe380 irq 24 <6>input: American Megatrends Inc. Virtual Keyboard and Mouse as /devices/pci0000:00/0000:00:1a.1/usb4/4-1/4-1:1.1/input/input4 <6>generic-usb 0003:046B:FF10.0002: input,hidraw1: USB HID v1.10 Mouse [American Megatrends Inc. Virtual Keyboard and Mouse] on usb-0000:00:1a.1-1/input1 <6>ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 300) <6>ata1.00: ATA-7: WDC WD1600YS-01SHB1, 20.06C06, max UDMA/133 <6>ata1.00: 321672960 sectors, multi 0: LBA48 NCQ (depth 31/32), AA <6>ata1.00: configured for UDMA/133 <5>scsi 0:0:0:0: Direct-Access ATA WDC WD1600YS-01S 20.0 PQ: 0 ANSI: 5 <6>ata2: SATA link down (SStatus 0 SControl 300) <6>ata3: SATA link down (SStatus 0 SControl 300) <6>ata4: SATA link down (SStatus 0 SControl 300) <6>ata5: SATA link down (SStatus 0 SControl 300) <6>ata6: SATA link down (SStatus 0 SControl 300) <5>sd 0:0:0:0: [sda] 321672960 512-byte logical blocks: (164 GB/153 GiB) <5>sd 0:0:0:0: [sda] Write Protect is off <7>sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 <5>sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA <6> sda: sda1 sda2 <5>sd 0:0:0:0: [sda] Attached SCSI disk <6>EXT4-fs (sda1): mounted filesystem with ordered data mode. Opts: <6>dracut: Mounted root filesystem /dev/sda1 <6>dracut: Switching root <6>udev: starting version 147 <6>shpchp: Standard Hot Plug PCI Controller Driver version: 0.4 <6>dca service started, version 1.12.1 <6>ioatdma: Intel(R) QuickData Technology Driver 4.00 <6>ioatdma 0000:00:16.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 <7>ioatdma 0000:00:16.0: setting latency timer to 64 <7> alloc irq_desc for 25 on node -1 <7> alloc kstat_irqs on node -1 <7>ioatdma 0000:00:16.0: irq 25 for MSI/MSI-X <7> alloc irq_desc for 17 on node -1 <7> alloc kstat_irqs on node -1 <6>ioatdma 0000:00:16.1: PCI INT B -> GSI 17 (level, low) -> IRQ 17 <7>ioatdma 0000:00:16.1: setting latency timer to 64 <7> alloc irq_desc for 26 on node -1 <7> alloc kstat_irqs on node -1 <7>ioatdma 0000:00:16.1: irq 26 for MSI/MSI-X <6>ioatdma 0000:00:16.2: PCI INT C -> GSI 18 (level, low) -> IRQ 18 <7>ioatdma 0000:00:16.2: setting latency timer to 64 <7> alloc irq_desc for 27 on node -1 <7> alloc kstat_irqs on node -1 <7>ioatdma 0000:00:16.2: irq 27 for MSI/MSI-X <6>ioatdma 0000:00:16.3: PCI INT D -> GSI 19 (level, low) -> IRQ 19 <7>ioatdma 0000:00:16.3: setting latency timer to 64 <7> alloc irq_desc for 28 on node -1 <7> alloc kstat_irqs on node -1 <7>ioatdma 0000:00:16.3: irq 28 for MSI/MSI-X <6>ioatdma 0000:00:16.4: PCI INT A -> GSI 16 (level, low) -> IRQ 16 <7>ioatdma 0000:00:16.4: setting latency timer to 64 <7> alloc irq_desc for 29 on node -1 <7> alloc kstat_irqs on node -1 <7>ioatdma 0000:00:16.4: irq 29 for MSI/MSI-X <6>ioatdma 0000:00:16.5: PCI INT B -> GSI 17 (level, low) -> IRQ 17 <7>ioatdma 0000:00:16.5: setting latency timer to 64 <7> alloc irq_desc for 30 on node -1 <7> alloc kstat_irqs on node -1 <7>ioatdma 0000:00:16.5: irq 30 for MSI/MSI-X <6>ioatdma 0000:00:16.6: PCI INT C -> GSI 
18 (level, low) -> IRQ 18 <7>ioatdma 0000:00:16.6: setting latency timer to 64 <7> alloc irq_desc for 31 on node -1 <7> alloc kstat_irqs on node -1 <7>ioatdma 0000:00:16.6: irq 31 for MSI/MSI-X <6>ioatdma 0000:00:16.7: PCI INT D -> GSI 19 (level, low) -> IRQ 19 <7>ioatdma 0000:00:16.7: setting latency timer to 64 <7> alloc irq_desc for 32 on node -1 <7> alloc kstat_irqs on node -1 <7>ioatdma 0000:00:16.7: irq 32 for MSI/MSI-X <4>ACPI: resource (null) [io 0x0828-0x082f] conflicts with ACPI region PMRG [io 0x800-0x84f] <6>ACPI: If an ACPI driver is available for this device, you should use it instead of the native driver <4>lpc_ich: Resource conflict(s) found affecting gpio_ich <6>pps_core: LinuxPPS API ver. 1 registered <6>pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <6>PTP clock support registered <6>igb: Intel(R) Gigabit Ethernet Network Driver - version 5.0.5-k <6>igb: Copyright (c) 2007-2013 Intel Corporation. <6>igb 0000:01:00.0: power state changed by ACPI to D0 <6>igb 0000:01:00.0: power state changed by ACPI to D0 <6>igb 0000:01:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 <7>igb 0000:01:00.0: setting latency timer to 64 <7> alloc irq_desc for 33 on node -1 <7> alloc kstat_irqs on node -1 <7>igb 0000:01:00.0: irq 33 for MSI/MSI-X <7> alloc irq_desc for 34 on node -1 <7> alloc kstat_irqs on node -1 <7>igb 0000:01:00.0: irq 34 for MSI/MSI-X <7> alloc irq_desc for 35 on node -1 <7> alloc kstat_irqs on node -1 <7>igb 0000:01:00.0: irq 35 for MSI/MSI-X <7> alloc irq_desc for 36 on node -1 <7> alloc kstat_irqs on node -1 <7>igb 0000:01:00.0: irq 36 for MSI/MSI-X <7> alloc irq_desc for 37 on node -1 <7> alloc kstat_irqs on node -1 <7>igb 0000:01:00.0: irq 37 for MSI/MSI-X <7> alloc irq_desc for 38 on node -1 <7> alloc kstat_irqs on node -1 <7>igb 0000:01:00.0: irq 38 for MSI/MSI-X <7> alloc irq_desc for 39 on node -1 <7> alloc kstat_irqs on node -1 <7>igb 0000:01:00.0: irq 39 for MSI/MSI-X <7> alloc irq_desc for 40 on node -1 <7> alloc kstat_irqs on node -1 <7>igb 0000:01:00.0: irq 40 for MSI/MSI-X <7> alloc irq_desc for 41 on node -1 <7> alloc kstat_irqs on node -1 <7>igb 0000:01:00.0: irq 41 for MSI/MSI-X <7>igb 0000:01:00.0: irq 33 for MSI/MSI-X <7>igb 0000:01:00.0: irq 34 for MSI/MSI-X <7>igb 0000:01:00.0: irq 35 for MSI/MSI-X <7>igb 0000:01:00.0: irq 36 for MSI/MSI-X <7>igb 0000:01:00.0: irq 37 for MSI/MSI-X <7>igb 0000:01:00.0: irq 38 for MSI/MSI-X <7>igb 0000:01:00.0: irq 39 for MSI/MSI-X <7>igb 0000:01:00.0: irq 40 for MSI/MSI-X <7>igb 0000:01:00.0: irq 41 for MSI/MSI-X <6>igb 0000:01:00.0: DCA enabled <6>igb 0000:01:00.0: added PHC on eth0 <6>igb 0000:01:00.0: Intel(R) Gigabit Ethernet Network Connection <6>igb 0000:01:00.0: eth0: (PCIe:2.5Gb/s:Width x4) 00:30:48:c8:72:24 <6>igb 0000:01:00.0: eth0: PBA No: Unknown <6>igb 0000:01:00.0: Using MSI-X interrupts. 
8 rx queue(s), 8 tx queue(s) <6>igb 0000:01:00.1: power state changed by ACPI to D0 <6>igb 0000:01:00.1: power state changed by ACPI to D0 <6>igb 0000:01:00.1: PCI INT B -> GSI 17 (level, low) -> IRQ 17 <7>igb 0000:01:00.1: setting latency timer to 64 <7> alloc irq_desc for 42 on node -1 <7> alloc kstat_irqs on node -1 <7>igb 0000:01:00.1: irq 42 for MSI/MSI-X <7> alloc irq_desc for 43 on node -1 <7> alloc kstat_irqs on node -1 <7>igb 0000:01:00.1: irq 43 for MSI/MSI-X <7> alloc irq_desc for 44 on node -1 <7> alloc kstat_irqs on node -1 <7>igb 0000:01:00.1: irq 44 for MSI/MSI-X <7> alloc irq_desc for 45 on node -1 <7> alloc kstat_irqs on node -1 <7>igb 0000:01:00.1: irq 45 for MSI/MSI-X <7> alloc irq_desc for 46 on node -1 <7> alloc kstat_irqs on node -1 <7>igb 0000:01:00.1: irq 46 for MSI/MSI-X <7> alloc irq_desc for 47 on node -1 <7> alloc kstat_irqs on node -1 <7>igb 0000:01:00.1: irq 47 for MSI/MSI-X <7> alloc irq_desc for 48 on node -1 <7> alloc kstat_irqs on node -1 <7>igb 0000:01:00.1: irq 48 for MSI/MSI-X <7> alloc irq_desc for 49 on node -1 <7> alloc kstat_irqs on node -1 <7>igb 0000:01:00.1: irq 49 for MSI/MSI-X <7> alloc irq_desc for 50 on node -1 <7> alloc kstat_irqs on node -1 <7>igb 0000:01:00.1: irq 50 for MSI/MSI-X <7>igb 0000:01:00.1: irq 42 for MSI/MSI-X <7>igb 0000:01:00.1: irq 43 for MSI/MSI-X <7>igb 0000:01:00.1: irq 44 for MSI/MSI-X <7>igb 0000:01:00.1: irq 45 for MSI/MSI-X <7>igb 0000:01:00.1: irq 46 for MSI/MSI-X <7>igb 0000:01:00.1: irq 47 for MSI/MSI-X <7>igb 0000:01:00.1: irq 48 for MSI/MSI-X <7>igb 0000:01:00.1: irq 49 for MSI/MSI-X <7>igb 0000:01:00.1: irq 50 for MSI/MSI-X <6>igb 0000:01:00.1: DCA enabled <6>igb 0000:01:00.1: added PHC on eth1 <6>igb 0000:01:00.1: Intel(R) Gigabit Ethernet Network Connection <6>igb 0000:01:00.1: eth1: (PCIe:2.5Gb/s:Width x4) 00:30:48:c8:72:25 <6>igb 0000:01:00.1: eth1: PBA No: Unknown <6>igb 0000:01:00.1: Using MSI-X interrupts. 8 rx queue(s), 8 tx queue(s) <4>Emulex LightPulse Fibre Channel SCSI driver 8.3.7.21.4p <4>Copyright(c) 2004-2013 Emulex. All rights reserved. 
<6>lpfc 0000:04:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 <7>lpfc 0000:04:00.0: setting latency timer to 64 <6>scsi6 : Emulex LPe12000 PCIe Fibre Channel Adapter on PCI bus 04 device 00 irq 16 <7> alloc irq_desc for 51 on node -1 <7> alloc kstat_irqs on node -1 <7>lpfc 0000:04:00.0: irq 51 for MSI/MSI-X <7> alloc irq_desc for 52 on node -1 <7> alloc kstat_irqs on node -1 <7>lpfc 0000:04:00.0: irq 52 for MSI/MSI-X <6>lpfc 0000:04:00.1: PCI INT B -> GSI 17 (level, low) -> IRQ 17 <7>lpfc 0000:04:00.1: setting latency timer to 64 <6>scsi7 : Emulex LPe12000 PCIe Fibre Channel Adapter on PCI bus 04 device 01 irq 17 <3>lpfc 0000:04:00.0: 0:1303 Link Up Event x1 received Data: x1 x1 x10 x2 x0 x0 0 <3>lpfc 0000:04:00.0: 0:1309 Link Up Event npiv not supported in loop topology <3>lpfc 0000:04:00.0: 0:(0):2858 FLOGI failure Status:x3/x18 TMO:x0 <3>lpfc 0000:04:00.0: 0:(0):2858 FLOGI failure Status:x3/x18 TMO:x0 <3>lpfc 0000:04:00.0: 0:(0):2858 FLOGI failure Status:x3/x18 TMO:x0 <3>lpfc 0000:04:00.0: 0:(0):0100 FLOGI failure Status:x3/x18 TMO:x0 <5>scsi 6:0:0:0: Direct-Access DGC VRAID 0430 PQ: 0 ANSI: 4 <5>scsi 6:0:0:1: Direct-Access DGC VRAID 0430 PQ: 0 ANSI: 4 <5>sd 6:0:0:0: [sdb] 3586894848 512-byte logical blocks: (1.83 TB/1.66 TiB) <5>scsi 6:0:0:2: Direct-Access DGC VRAID 0430 PQ: 0 ANSI: 4 <5>scsi 6:0:0:3: Direct-Access DGC VRAID 0430 PQ: 0 ANSI: 4 <5>sd 6:0:0:1: [sdc] 3586894848 512-byte logical blocks: (1.83 TB/1.66 TiB) <5>sd 6:0:0:1: [sdc] Write Protect is off <7>sd 6:0:0:1: [sdc] Mode Sense: 87 00 00 08 <5>sd 6:0:0:2: [sdd] 3586894848 512-byte logical blocks: (1.83 TB/1.66 TiB) <5>scsi 6:0:0:4: Direct-Access DGC VRAID 0430 PQ: 0 ANSI: 4 <5>sd 6:0:0:1: [sdc] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA <5>sd 6:0:0:2: [sdd] Write Protect is off <7>sd 6:0:0:2: [sdd] Mode Sense: 87 00 00 08 <5>sd 6:0:0:2: [sdd] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA <5>sd 6:0:0:3: [sde] 3846802432 512-byte logical blocks: (1.96 TB/1.79 TiB) <5>scsi 6:0:0:5: Direct-Access DGC VRAID 0430 PQ: 0 ANSI: 4 <5>sd 6:0:0:3: [sde] Write Protect is off <7>sd 6:0:0:3: [sde] Mode Sense: 87 00 00 08 <6> sdc: <6> sdd: <5>sd 6:0:0:4: [sdf] 3846802432 512-byte logical blocks: (1.96 TB/1.79 TiB) <5>scsi 6:0:0:6: Direct-Access DGC VRAID 0430 PQ: 0 ANSI: 4 <5>sd 6:0:0:3: [sde] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA <5>sd 6:0:0:4: [sdf] Write Protect is off <7>sd 6:0:0:4: [sdf] Mode Sense: 87 00 00 08 <5>sd 6:0:0:5: [sdg] 3846802432 512-byte logical blocks: (1.96 TB/1.79 TiB) <5>sd 6:0:0:4: [sdf] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA <5>sd 6:0:0:5: [sdg] Write Protect is off <7>sd 6:0:0:5: [sdg] Mode Sense: 87 00 00 08 <6> sde: <5>sd 6:0:0:5: [sdg] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA <5>sd 6:0:0:6: [sdh] 3846802432 512-byte logical blocks: (1.96 TB/1.79 TiB) <5>sd 6:0:0:6: [sdh] Write Protect is off <7>sd 6:0:0:6: [sdh] Mode Sense: 87 00 00 08 <6> sdf: <6> sdg: <5>sd 6:0:0:6: [sdh] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA <6> sdh: unknown partition table <5>sd 6:0:0:0: [sdb] Write Protect is off <4> unknown partition table <7>sd 6:0:0:0: [sdb] Mode Sense: 87 00 00 08 <5>sd 6:0:0:0: [sdb] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA <5>sd 6:0:0:5: [sdg] Attached SCSI disk <4> unknown partition table <5>sd 6:0:0:4: [sdf] Attached SCSI disk <4> unknown partition table <4> unknown partition table <5>sd 6:0:0:6: [sdh] 
Attached SCSI disk <5>sd 6:0:0:1: [sdc] Attached SCSI disk <4> unknown partition table <5>sd 6:0:0:3: [sde] Attached SCSI disk <6> sdb: <5>sd 6:0:0:2: [sdd] Attached SCSI disk <4> unknown partition table <5>sd 6:0:0:0: [sdb] Attached SCSI disk <7> alloc irq_desc for 53 on node -1 <7> alloc kstat_irqs on node -1 <7>lpfc 0000:04:00.1: irq 53 for MSI/MSI-X <7> alloc irq_desc for 54 on node -1 <7> alloc kstat_irqs on node -1 <7>lpfc 0000:04:00.1: irq 54 for MSI/MSI-X <5>sd 0:0:0:0: Attached scsi generic sg0 type 0 <5>sd 6:0:0:0: Attached scsi generic sg1 type 0 <5>sd 6:0:0:1: Attached scsi generic sg2 type 0 <5>sd 6:0:0:2: Attached scsi generic sg3 type 0 <5>sd 6:0:0:3: Attached scsi generic sg4 type 0 <5>sd 6:0:0:4: Attached scsi generic sg5 type 0 <5>sd 6:0:0:5: Attached scsi generic sg6 type 0 <5>sd 6:0:0:6: Attached scsi generic sg7 type 0 <3>lpfc 0000:04:00.1: 1:1303 Link Up Event x1 received Data: x1 x1 x10 x2 x0 x0 0 <3>lpfc 0000:04:00.1: 1:1309 Link Up Event npiv not supported in loop topology <3>lpfc 0000:04:00.1: 1:(0):2858 FLOGI failure Status:x3/x18 TMO:x0 <3>lpfc 0000:04:00.1: 1:(0):2858 FLOGI failure Status:x3/x18 TMO:x0 <3>lpfc 0000:04:00.1: 1:(0):2858 FLOGI failure Status:x3/x18 TMO:x0 <3>lpfc 0000:04:00.1: 1:(0):0100 FLOGI failure Status:x3/x18 TMO:x0 <5>scsi 7:0:0:0: Direct-Access DGC VRAID 0430 PQ: 0 ANSI: 4 <5>sd 7:0:0:0: Attached scsi generic sg8 type 0 <5>sd 7:0:0:0: [sdi] 3586894848 512-byte logical blocks: (1.83 TB/1.66 TiB) <5>sd 7:0:0:0: [sdi] Write Protect is off <7>sd 7:0:0:0: [sdi] Mode Sense: 87 00 00 08 <5>scsi 7:0:0:1: Direct-Access DGC VRAID 0430 PQ: 0 ANSI: 4 <5>sd 7:0:0:0: [sdi] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA <5>sd 7:0:0:1: Attached scsi generic sg9 type 0 <5>scsi 7:0:0:2: Direct-Access DGC VRAID 0430 PQ: 0 ANSI: 4 <5>sd 7:0:0:2: Attached scsi generic sg10 type 0 <5>scsi 7:0:0:3: Direct-Access DGC VRAID 0430 PQ: 0 ANSI: 4 <5>sd 7:0:0:2: [sdk] 3586894848 512-byte logical blocks: (1.83 TB/1.66 TiB) <5>sd 7:0:0:1: [sdj] 3586894848 512-byte logical blocks: (1.83 TB/1.66 TiB) <5>sd 7:0:0:3: Attached scsi generic sg11 type 0 <5>sd 7:0:0:1: [sdj] Write Protect is off <7>sd 7:0:0:1: [sdj] Mode Sense: 87 00 00 08 <5>sd 7:0:0:2: [sdk] Write Protect is off <7>sd 7:0:0:2: [sdk] Mode Sense: 87 00 00 08 <5>scsi 7:0:0:4: Direct-Access DGC VRAID 0430 PQ: 0 ANSI: 4 <5>sd 7:0:0:2: [sdk] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA <5>sd 7:0:0:4: Attached scsi generic sg12 type 0 <5>sd 7:0:0:1: [sdj] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA <5>scsi 7:0:0:5: Direct-Access DGC VRAID 0430 PQ: 0 ANSI: 4 <5>sd 7:0:0:5: Attached scsi generic sg13 type 0 <5>sd 7:0:0:3: [sdl] 3846802432 512-byte logical blocks: (1.96 TB/1.79 TiB) <5>sd 7:0:0:5: [sdn] 3846802432 512-byte logical blocks: (1.96 TB/1.79 TiB) <5>sd 7:0:0:3: [sdl] Write Protect is off <7>sd 7:0:0:3: [sdl] Mode Sense: 87 00 00 08 <5>sd 7:0:0:5: [sdn] Write Protect is off <7>sd 7:0:0:5: [sdn] Mode Sense: 87 00 00 08 <5>scsi 7:0:0:6: Direct-Access DGC VRAID 0430 PQ: 0 ANSI: 4 <5>sd 7:0:0:6: Attached scsi generic sg14 type 0 <5>sd 7:0:0:4: [sdm] 3846802432 512-byte logical blocks: (1.96 TB/1.79 TiB) <5>sd 7:0:0:4: [sdm] Write Protect is off <7>sd 7:0:0:4: [sdm] Mode Sense: 87 00 00 08 <5>sd 7:0:0:3: [sdl] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA <5>sd 7:0:0:5: [sdn] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA <6> sdj: <6> sdk: <5>sd 7:0:0:6: [sdo] 3846802432 512-byte logical 
blocks: (1.96 TB/1.79 TiB) <5>sd 7:0:0:4: [sdm] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA <5>sd 7:0:0:6: [sdo] Write Protect is off <7>sd 7:0:0:6: [sdo] Mode Sense: 87 00 00 08 <5>sd 7:0:0:6: [sdo] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA <6> sdn: <6> sdl: unknown partition table <4> unknown partition table <4> unknown partition table <4> unknown partition table <5>sd 7:0:0:3: [sdl] Attached SCSI disk <5>sd 7:0:0:1: [sdj] Attached SCSI disk <6> sdm: <5>sd 7:0:0:2: [sdk] Attached SCSI disk <6> sdo: <5>sd 7:0:0:5: [sdn] Attached SCSI disk <6> sdi: unknown partition table <4> unknown partition table <4> unknown partition table <5>sd 7:0:0:6: [sdo] Attached SCSI disk <5>sd 7:0:0:0: [sdi] Attached SCSI disk <5>sd 7:0:0:4: [sdm] Attached SCSI disk <6>device-mapper: uevent: version 1.0.3 <6>device-mapper: ioctl: 4.24.6-ioctl (2013-01-15) initialised: dm-devel@redhat.com <6>device-mapper: multipath: version 1.5.0 loaded <6>emc: device handler registered <6>device-mapper: multipath round-robin: version 1.0.0 loaded <6>sd 7:0:0:1: emc: detected Clariion CX4-120, flags 0 <5>sd 7:0:0:1: emc: ALUA failover mode detected <6>sd 7:0:0:1: emc: connected to SP B Port 5 (owned, default SP B) <6>sd 6:0:0:1: emc: detected Clariion CX4-120, flags 0 <5>sd 6:0:0:1: emc: ALUA failover mode detected <6>sd 6:0:0:1: emc: connected to SP A Port 5 (bound, default SP B) <6>sd 6:0:0:2: emc: detected Clariion CX4-120, flags 0 <5>sd 6:0:0:2: emc: ALUA failover mode detected <6>sd 6:0:0:2: emc: connected to SP A Port 5 (owned, default SP A) <6>sd 7:0:0:2: emc: detected Clariion CX4-120, flags 0 <5>sd 7:0:0:2: emc: ALUA failover mode detected <6>sd 7:0:0:2: emc: connected to SP B Port 5 (bound, default SP A) <6>sd 7:0:0:3: emc: detected Clariion CX4-120, flags 0 <5>sd 7:0:0:3: emc: ALUA failover mode detected <6>sd 7:0:0:3: emc: connected to SP B Port 5 (owned, default SP B) <6>sd 6:0:0:3: emc: detected Clariion CX4-120, flags 0 <5>sd 6:0:0:3: emc: ALUA failover mode detected <6>sd 6:0:0:3: emc: connected to SP A Port 5 (bound, default SP B) <6>sd 6:0:0:4: emc: detected Clariion CX4-120, flags 0 <5>sd 6:0:0:4: emc: ALUA failover mode detected <6>sd 6:0:0:4: emc: connected to SP A Port 5 (owned, default SP A) <6>sd 7:0:0:4: emc: detected Clariion CX4-120, flags 0 <5>sd 7:0:0:4: emc: ALUA failover mode detected <6>sd 7:0:0:4: emc: connected to SP B Port 5 (bound, default SP A) <6>sd 7:0:0:5: emc: detected Clariion CX4-120, flags 0 <5>sd 7:0:0:5: emc: ALUA failover mode detected <6>sd 7:0:0:5: emc: connected to SP B Port 5 (owned, default SP B) <6>sd 6:0:0:5: emc: detected Clariion CX4-120, flags 0 <5>sd 6:0:0:5: emc: ALUA failover mode detected <6>sd 6:0:0:5: emc: connected to SP A Port 5 (bound, default SP B) <6>sd 6:0:0:6: emc: detected Clariion CX4-120, flags 0 <5>sd 6:0:0:6: emc: ALUA failover mode detected <6>sd 6:0:0:6: emc: connected to SP A Port 5 (owned, default SP A) <6>sd 7:0:0:6: emc: detected Clariion CX4-120, flags 0 <5>sd 7:0:0:6: emc: ALUA failover mode detected <6>sd 7:0:0:6: emc: connected to SP B Port 5 (bound, default SP A) <6>sd 6:0:0:0: emc: detected Clariion CX4-120, flags 0 <5>sd 6:0:0:0: emc: ALUA failover mode detected <6>sd 6:0:0:0: emc: connected to SP A Port 5 (owned, default SP A) <6>sd 7:0:0:0: emc: detected Clariion CX4-120, flags 0 <5>sd 7:0:0:0: emc: ALUA failover mode detected <6>sd 7:0:0:0: emc: connected to SP B Port 5 (bound, default SP A) <5>sd 7:0:0:1: emc: ALUA failover mode detected <6>sd 7:0:0:1: emc: at SP B Port 5 (owned, 
default SP B) <5>sd 6:0:0:2: emc: ALUA failover mode detected <6>sd 6:0:0:2: emc: at SP A Port 5 (owned, default SP A) <5>sd 7:0:0:3: emc: ALUA failover mode detected <6>sd 7:0:0:3: emc: at SP B Port 5 (owned, default SP B) <5>sd 6:0:0:4: emc: ALUA failover mode detected <6>sd 6:0:0:4: emc: at SP A Port 5 (owned, default SP A) <5>sd 7:0:0:5: emc: ALUA failover mode detected <6>sd 7:0:0:5: emc: at SP B Port 5 (owned, default SP B) <5>sd 6:0:0:6: emc: ALUA failover mode detected <6>sd 6:0:0:6: emc: at SP A Port 5 (owned, default SP A) <5>sd 6:0:0:0: emc: ALUA failover mode detected <6>sd 6:0:0:0: emc: at SP A Port 5 (owned, default SP A) <6>Adding 1023428k swap on /dev/sda2. Priority:-1 extents:1 across:1023428k <6>mlx4_core: Mellanox ConnectX core driver v1.0-ofed1.5.4 (November 10, 2011) <6>mlx4_core: Initializing 0000:02:00.0 <6>mlx4_core 0000:02:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 <7>mlx4_core 0000:02:00.0: setting latency timer to 64 <7>mlx4_core 0000:02:00.0: vpd r/w failed. This is likely a firmware bug on this device. Contact the card vendor for a firmware update. <7> alloc irq_desc for 55 on node -1 <7> alloc kstat_irqs on node -1 <7>mlx4_core 0000:02:00.0: irq 55 for MSI/MSI-X <7> alloc irq_desc for 56 on node -1 <7> alloc kstat_irqs on node -1 <7>mlx4_core 0000:02:00.0: irq 56 for MSI/MSI-X <7> alloc irq_desc for 57 on node -1 <7> alloc kstat_irqs on node -1 <7>mlx4_core 0000:02:00.0: irq 57 for MSI/MSI-X <7> alloc irq_desc for 58 on node -1 <7> alloc kstat_irqs on node -1 <7>mlx4_core 0000:02:00.0: irq 58 for MSI/MSI-X <7> alloc irq_desc for 59 on node -1 <7> alloc kstat_irqs on node -1 <7>mlx4_core 0000:02:00.0: irq 59 for MSI/MSI-X <7> alloc irq_desc for 60 on node -1 <7> alloc kstat_irqs on node -1 <7>mlx4_core 0000:02:00.0: irq 60 for MSI/MSI-X <7> alloc irq_desc for 61 on node -1 <7> alloc kstat_irqs on node -1 <7>mlx4_core 0000:02:00.0: irq 61 for MSI/MSI-X <7> alloc irq_desc for 62 on node -1 <7> alloc kstat_irqs on node -1 <7>mlx4_core 0000:02:00.0: irq 62 for MSI/MSI-X <7> alloc irq_desc for 63 on node -1 <7> alloc kstat_irqs on node -1 <7>mlx4_core 0000:02:00.0: irq 63 for MSI/MSI-X <7> alloc irq_desc for 64 on node -1 <7> alloc kstat_irqs on node -1 <7>mlx4_core 0000:02:00.0: irq 64 for MSI/MSI-X <7> alloc irq_desc for 65 on node -1 <7> alloc kstat_irqs on node -1 <7>mlx4_core 0000:02:00.0: irq 65 for MSI/MSI-X <7> alloc irq_desc for 66 on node -1 <7> alloc kstat_irqs on node -1 <7>mlx4_core 0000:02:00.0: irq 66 for MSI/MSI-X <7> alloc irq_desc for 67 on node -1 <7> alloc kstat_irqs on node -1 <7>mlx4_core 0000:02:00.0: irq 67 for MSI/MSI-X <7> alloc irq_desc for 68 on node -1 <7> alloc kstat_irqs on node -1 <7>mlx4_core 0000:02:00.0: irq 68 for MSI/MSI-X <7> alloc irq_desc for 69 on node -1 <7> alloc kstat_irqs on node -1 <7>mlx4_core 0000:02:00.0: irq 69 for MSI/MSI-X <7> alloc irq_desc for 70 on node -1 <7> alloc kstat_irqs on node -1 <7>mlx4_core 0000:02:00.0: irq 70 for MSI/MSI-X <7> alloc irq_desc for 71 on node -1 <7> alloc kstat_irqs on node -1 <7>mlx4_core 0000:02:00.0: irq 71 for MSI/MSI-X <7> alloc irq_desc for 72 on node -1 <7> alloc kstat_irqs on node -1 <7>mlx4_core 0000:02:00.0: irq 72 for MSI/MSI-X <7> alloc irq_desc for 73 on node -1 <7> alloc kstat_irqs on node -1 <7>mlx4_core 0000:02:00.0: irq 73 for MSI/MSI-X <7> alloc irq_desc for 74 on node -1 <7> alloc kstat_irqs on node -1 <7>mlx4_core 0000:02:00.0: irq 74 for MSI/MSI-X <7> alloc irq_desc for 75 on node -1 <7> alloc kstat_irqs on node -1 <7>mlx4_core 0000:02:00.0: irq 75 for 
MSI/MSI-X <7> alloc irq_desc for 76 on node -1 <7> alloc kstat_irqs on node -1 <7>mlx4_core 0000:02:00.0: irq 76 for MSI/MSI-X <7> alloc irq_desc for 77 on node -1 <7> alloc kstat_irqs on node -1 <7>mlx4_core 0000:02:00.0: irq 77 for MSI/MSI-X <7> alloc irq_desc for 78 on node -1 <7> alloc kstat_irqs on node -1 <7>mlx4_core 0000:02:00.0: irq 78 for MSI/MSI-X <7> alloc irq_desc for 79 on node -1 <7> alloc kstat_irqs on node -1 <7>mlx4_core 0000:02:00.0: irq 79 for MSI/MSI-X <6>mlx4_ib: Mellanox ConnectX InfiniBand driver v1.0-ofed1.5.4 (November 10, 2011) <6>NET: Registered protocol family 10 <6>lo: Disabled Privacy Extensions <4>ib0: multicast join failed for ff12:401b:ffff:0000:0000:0000:ffff:ffff, status -22 <6>ADDRCONF(NETDEV_UP): ib0: link is not ready <4>ib0: multicast join failed for ff12:401b:ffff:0000:0000:0000:ffff:ffff, status -22 <4>ib0: enabling connected mode will cause multicast packet drops <4>ib0: mtu > 2044 will cause multicast packet drops. <4>ib0: mtu > 2044 will cause multicast packet drops. <6>NET: Registered protocol family 27 <6>NET: Registered protocol family 28 <5>sd 7:0:0:1: emc: ALUA failover mode detected <6>sd 7:0:0:1: emc: at SP B Port 5 (owned, default SP B) <5>sd 6:0:0:2: emc: ALUA failover mode detected <6>sd 6:0:0:2: emc: at SP A Port 5 (owned, default SP A) <5>sd 7:0:0:3: emc: ALUA failover mode detected <6>sd 7:0:0:3: emc: at SP B Port 5 (owned, default SP B) <5>sd 6:0:0:4: emc: ALUA failover mode detected <6>sd 6:0:0:4: emc: at SP A Port 5 (owned, default SP A) <5>sd 7:0:0:5: emc: ALUA failover mode detected <6>sd 7:0:0:5: emc: at SP B Port 5 (owned, default SP B) <5>sd 6:0:0:6: emc: ALUA failover mode detected <6>sd 6:0:0:6: emc: at SP A Port 5 (owned, default SP A) <5>sd 6:0:0:0: emc: ALUA failover mode detected <6>sd 6:0:0:0: emc: at SP A Port 5 (owned, default SP A) <6>ADDRCONF(NETDEV_UP): eth0: link is not ready <6>ADDRCONF(NETDEV_CHANGE): ib0: link becomes ready <6>igb: eth0 NIC Link is Up 100 Mbps Full Duplex, Flow Control: RX <6>ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready <6>ADDRCONF(NETDEV_UP): eth1: link is not ready <6>igb: eth1 NIC Link is Up 100 Mbps Full Duplex, Flow Control: RX <6>ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready <4>ib0: Unicast, no dst: type 0030, QPN 200800 1404:0002:8000:0048:fe80:0000:0000:0000 <4>ACPI Error (psargs-0359): [PSTE] Namespace lookup failure, AE_NOT_FOUND <4>ACPI Error (psparse-0537): Method parse/execution failed [\_PR_.P001._PPC] (Node ffff88033b7c42b8), AE_NOT_FOUND <4>ACPI Error (psargs-0359): [PSTE] Namespace lookup failure, AE_NOT_FOUND <4>ACPI Error (psparse-0537): Method parse/execution failed [\_PR_.P002._PPC] (Node ffff88033b7c41a0), AE_NOT_FOUND <4>ACPI Error (psargs-0359): [PSTE] Namespace lookup failure, AE_NOT_FOUND <4>ACPI Error (psparse-0537): Method parse/execution failed [\_PR_.P003._PPC] (Node ffff88033b7c4f88), AE_NOT_FOUND <4>ACPI Error (psargs-0359): [PSTE] Namespace lookup failure, AE_NOT_FOUND <4>ACPI Error (psparse-0537): Method parse/execution failed [\_PR_.P004._PPC] (Node ffff88033b7c4f10), AE_NOT_FOUND <4>ACPI Error (psargs-0359): [PSTE] Namespace lookup failure, AE_NOT_FOUND <4>ACPI Error (psparse-0537): Method parse/execution failed [\_PR_.P005._PPC] (Node ffff88033b7c4e98), AE_NOT_FOUND <4>ACPI Error (psargs-0359): [PSTE] Namespace lookup failure, AE_NOT_FOUND <4>ACPI Error (psparse-0537): Method parse/execution failed [\_PR_.P006._PPC] (Node ffff88033b7c4e20), AE_NOT_FOUND <4>ACPI Error (psargs-0359): [PSTE] Namespace lookup failure, AE_NOT_FOUND <4>ACPI Error 
(psparse-0537): Method parse/execution failed [\_PR_.P007._PPC] (Node ffff88033b7c4da8), AE_NOT_FOUND <4>ACPI Error (psargs-0359): [PSTE] Namespace lookup failure, AE_NOT_FOUND <4>ACPI Error (psparse-0537): Method parse/execution failed [\_PR_.P008._PPC] (Node ffff88033b7c4d30), AE_NOT_FOUND <6>ipmi device interface <6>RPC: Registered named UNIX socket transport module. <6>RPC: Registered udp transport module. <6>RPC: Registered tcp transport module. <6>RPC: Registered tcp NFSv4.1 backchannel transport module. <4>ib0: Unicast, no dst: type 0030, QPN 200800 1404:0001:8000:0048:fe80:0000:0000:0000 <5>Slow work thread pool: Starting up <5>Slow work thread pool: Ready <5>FS-Cache: Loaded <5>NFS: Registering the id_resolver key type <5>FS-Cache: Netfs 'nfs' registered for caching <6>Installing knfsd (copyright (C) 1996 okir@monad.swb.de). <4>NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory <6>NFSD: starting 90-second grace period <7>ib0: no IPv6 routers present <7>eth0: no IPv6 routers present <7>eth1: no IPv6 routers present <6>LNet: HW CPU cores: 8, npartitions: 2 <6>alg: No test for crc32 (crc32-table) <6>alg: No test for adler32 (adler32-zlib) <5>padlock: VIA PadLock Hash Engine not detected. <6>Lustre: Lustre: Build Version: T-bullpatches-2.4.2.0-FIX_17287_AER4-g9608be4-CHANGED-2.6.32-431.1.2.el6.Bull.44.x86_64 <6>LNet: Added LNI 10.3.1.10@o2ib [8/256/0/180] <6>LNet: Added LNI 10.4.1.10@o2ib1 [8/256/0/180] <4>LDISKFS-fs warning (device dm-2): ldiskfs_multi_mount_protect: MMP interval 42 higher than expected, please wait. <4> <6>LDISKFS-fs (dm-2): barriers disabled <6>LDISKFS-fs (dm-2): recovery complete <6>LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. quota=on. Opts: <6>Lustre: fs1-OST0004: Imperative Recovery enabled, recovery window shrunk from 300-900 down to 150-450 <4>Lustre: 5734:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1403764949/real 1403764949] req@ffff88062c4c7800 x1471954189025336/t0(0) o38->fs1-MDT0000-lwp-OST0004@10.3.1.6@o2ib:12/10 lens 400/544 e 0 to 1 dl 1403764954 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <4>Lustre: 5734:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1403764974/real 1403764974] req@ffff88063b7aa800 x1471954189025348/t0(0) o38->fs1-MDT0000-lwp-OST0004@10.3.1.5@o2ib:12/10 lens 400/544 e 0 to 1 dl 1403764979 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <4>Lustre: fs1-OST0004: Will be in recovery for at least 2:30, or until 1 client reconnects <4>Lustre: fs1-OST0004: Denying connection for new client 87f0d3a8-856f-9961-44be-3341777b2176 (at 10.3.1.15@o2ib), waiting for all 1 known clients (0 recovered, 0 in progress, and 0 evicted) to recover in 2:29 <4>Lustre: fs1-OST0004: Denying connection for new client 80a04dbc-1141-4281-e15c-d15d757864b0 (at 10.3.1.11@o2ib), waiting for all 1 known clients (0 recovered, 0 in progress, and 0 evicted) to recover in 2:29 <4>Lustre: Skipped 4 previous similar messages <4>Lustre: fs1-OST0004: Denying connection for new client e9330d9f-e747-6d2b-8c04-5e431a0cd8a1 (at 10.3.1.27@o2ib), waiting for all 1 known clients (0 recovered, 0 in progress, and 0 evicted) to recover in 2:29 <4>Lustre: Skipped 5 previous similar messages <6>Lustre: fs1-OST0004: Recovery over after 0:05, of 1 clients 1 recovered and 0 were evicted. 
<3>LustreError: 137-5: fs1-OST0003_UUID: not available for connect from 10.3.1.6@o2ib (no target) <3>LustreError: 137-5: fs1-OST0003_UUID: not available for connect from 10.3.1.15@o2ib (no target) <3>LustreError: 137-5: fs1-OST0003_UUID: not available for connect from 10.3.1.27@o2ib (no target) <3>LustreError: Skipped 12 previous similar messages <3>LustreError: 137-5: fs1-OST0003_UUID: not available for connect from 10.3.1.6@o2ib (no target) <3>LustreError: Skipped 1 previous similar message <3>LustreError: 137-5: fs1-OST0003_UUID: not available for connect from 10.3.1.15@o2ib (no target) <3>LustreError: 137-5: fs1-OST0003_UUID: not available for connect from 10.3.1.6@o2ib (no target) <3>LustreError: Skipped 14 previous similar messages <3>LustreError: 137-5: fs1-OST0003_UUID: not available for connect from 10.3.1.15@o2ib (no target) <3>LustreError: 137-5: fs1-OST0003_UUID: not available for connect from 10.3.1.6@o2ib (no target) <3>LustreError: Skipped 14 previous similar messages <3>LustreError: 137-5: fs1-OST0003_UUID: not available for connect from 10.3.1.6@o2ib (no target) <3>LustreError: Skipped 15 previous similar messages <3>LustreError: 137-5: fs1-OST0003_UUID: not available for connect from 10.3.1.6@o2ib (no target) <3>LustreError: Skipped 15 previous similar messages <3>LustreError: 137-5: fs1-OST0003_UUID: not available for connect from 10.3.1.6@o2ib (no target) <3>LustreError: Skipped 15 previous similar messages <4>Lustre: 5741:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1403770024/real 1403770024] req@ffff88032f49e800 x1471954189026964/t0(0) o400->fs1-MDT0000-lwp-OST0004@10.3.1.6@o2ib:12/10 lens 224/224 e 0 to 1 dl 1403770031 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <4>Lustre: fs1-MDT0000-lwp-OST0004: Connection to fs1-MDT0000 (at 10.3.1.6@o2ib) was lost; in progress operations using this service will wait for recovery to complete <4>Lustre: Failing over fs1-OST0004 <4>Lustre: server umount fs1-OST0004 complete <6>LDISKFS-fs (dm-2): barriers disabled <6>LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. quota=on. Opts: <6>Lustre: fs1-OST0004: Imperative Recovery enabled, recovery window shrunk from 300-900 down to 150-450 <4>Lustre: 5734:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1403770068/real 1403770068] req@ffff88062bf5f000 x1471954189027032/t0(0) o38->fs1-MDT0000-lwp-OST0004@10.3.1.6@o2ib:12/10 lens 400/544 e 0 to 1 dl 1403770073 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <4>Lustre: 5734:0:(client.c:1868:ptlrpc_expire_one_request()) Skipped 1 previous similar message <4>Lustre: fs1-OST0004: Will be in recovery for at least 2:30, or until 15 clients reconnect <6>Lustre: fs1-OST0004: Recovery over after 0:04, of 15 clients 15 recovered and 0 were evicted. 
<3>LustreError: 137-5: fs1-OST0002_UUID: not available for connect from 10.3.1.11@o2ib (no target) <3>LustreError: Skipped 18 previous similar messages <4>Lustre: 5734:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1403770093/real 1403770093] req@ffff8803233f1000 x1471954189027044/t0(0) o38->fs1-MDT0000-lwp-OST0004@10.3.1.5@o2ib:12/10 lens 400/544 e 0 to 1 dl 1403770098 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <3>LustreError: 137-5: fs1-OST0003_UUID: not available for connect from 10.3.1.18@o2ib (no target) <3>LustreError: Skipped 26 previous similar messages <4>Lustre: 16432:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1403770563/real 1403770563] req@ffff88063986d800 x1471954189027192/t0(0) o39->fs1-MDT0000-lwp-OST0004@10.3.1.6@o2ib:12/10 lens 224/224 e 0 to 1 dl 1403770573 ref 2 fl Rpc:XN/0/ffffffff rc 0/-1 <3>LustreError: 16432:0:(obd_mount_server.c:1443:server_put_super()) fs1-OST0004: failed to disconnect lwp. (rc=-110) <4>Lustre: Failing over fs1-OST0004 <4>Lustre: server umount fs1-OST0004 complete <6>LDISKFS-fs (dm-2): barriers disabled <6>LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. quota=on. Opts: <6>Lustre: fs1-OST0004: Imperative Recovery enabled, recovery window shrunk from 300-900 down to 150-450 <4>Lustre: 5734:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1403771364/real 1403771364] req@ffff880336cd6400 x1471954189027256/t0(0) o38->fs1-MDT0000-lwp-OST0004@10.3.1.6@o2ib:12/10 lens 400/544 e 0 to 1 dl 1403771369 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <3>LustreError: 137-5: fs1-OST0001_UUID: not available for connect from 10.4.1.15@o2ib1 (no target) <3>LustreError: Skipped 12 previous similar messages <4>Lustre: fs1-OST0004: Will be in recovery for at least 2:30, or until 15 clients reconnect <4>Lustre: fs1-OST0004: Denying connection for new client fs1-MDT0000-mdtlov_UUID (at 10.3.1.6@o2ib), waiting for all 15 known clients (0 recovered, 0 in progress, and 0 evicted) to recover in 2:29 <4>Lustre: Skipped 1 previous similar message <4>Lustre: 5734:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1403771389/real 1403771389] req@ffff8803320a1c00 x1471954189027268/t0(0) o38->fs1-MDT0000-lwp-OST0004@10.3.1.5@o2ib:12/10 lens 400/544 e 0 to 1 dl 1403771394 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <6>Lustre: fs1-OST0004: Recovery over after 0:24, of 15 clients 15 recovered and 0 were evicted. <3>LustreError: 137-5: fs1-OST0003_UUID: not available for connect from 10.3.1.6@o2ib (no target) <3>LustreError: 137-5: fs1-OST0003_UUID: not available for connect from 10.3.1.20@o2ib (no target) <4>Lustre: 19558:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1403771721/real 1403771721] req@ffff880621b7bc00 x1471954189027376/t0(0) o39->fs1-MDT0000-lwp-OST0004@10.3.1.6@o2ib:12/10 lens 224/224 e 0 to 1 dl 1403771731 ref 2 fl Rpc:XN/0/ffffffff rc 0/-1 <3>LustreError: 19558:0:(obd_mount_server.c:1443:server_put_super()) fs1-OST0004: failed to disconnect lwp. 
(rc=-110) <4>Lustre: Failing over fs1-OST0004 <4>Lustre: server umount fs1-OST0004 complete <6>LNet: Removed LNI 10.4.1.10@o2ib1 <6>LNet: Removed LNI 10.3.1.10@o2ib <6>LNet: HW CPU cores: 8, npartitions: 2 <6>alg: No test for crc32 (crc32-table) <6>alg: No test for adler32 (adler32-zlib) <6>Lustre: Lustre: Build Version: T-bullpatches-2.4.2.0-FIX_17287_AER4-g9608be4-CHANGED-2.6.32-431.1.2.el6.Bull.44.x86_64 <6>LNet: Added LNI 10.3.1.10@o2ib [8/256/0/180] <6>LNet: Added LNI 10.4.1.10@o2ib1 [8/256/0/180] <6>LDISKFS-fs (dm-2): barriers disabled <6>LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. quota=on. Opts: <6>Lustre: fs1-OST0004: Imperative Recovery enabled, recovery window shrunk from 300-900 down to 150-450 <4>Lustre: 19787:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1403771748/real 1403771748] req@ffff88033a3d6400 x1471961363382328/t0(0) o38->fs1-MDT0000-lwp-OST0004@10.3.1.6@o2ib:12/10 lens 400/544 e 0 to 1 dl 1403771753 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <4>Lustre: fs1-OST0004: Will be in recovery for at least 2:30, or until 15 clients reconnect <6>Lustre: fs1-OST0004: Recovery over after 0:05, of 15 clients 15 recovered and 0 were evicted. <4>Lustre: 19787:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1403771773/real 1403771773] req@ffff880627c8b000 x1471961363382340/t0(0) o38->fs1-MDT0000-lwp-OST0004@10.3.1.5@o2ib:12/10 lens 400/544 e 0 to 1 dl 1403771778 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <4>IPMI message handler: Event queue full, discarding incoming events <3>LustreError: 137-5: fs1-OST0003_UUID: not available for connect from 10.3.1.20@o2ib (no target) <3>LustreError: 137-5: fs1-OST0003_UUID: not available for connect from 10.3.1.16@o2ib (no target) <3>LustreError: Skipped 1 previous similar message <3>LustreError: 137-5: fs1-OST0003_UUID: not available for connect from 10.3.1.27@o2ib (no target) <3>LustreError: Skipped 10 previous similar messages <4>Lustre: 19792:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1403779298/real 1403779298] req@ffff880324f29000 x1471961363384748/t0(0) o400->fs1-MDT0000-lwp-OST0004@10.3.1.6@o2ib:12/10 lens 224/224 e 0 to 1 dl 1403779305 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <4>Lustre: fs1-MDT0000-lwp-OST0004: Connection to fs1-MDT0000 (at 10.3.1.6@o2ib) was lost; in progress operations using this service will wait for recovery to complete <4>Lustre: Failing over fs1-OST0004 <4>Lustre: server umount fs1-OST0004 complete <6>LDISKFS-fs (dm-2): barriers disabled <6>LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. quota=on. Opts: <6>Lustre: fs1-OST0004: Imperative Recovery enabled, recovery window shrunk from 300-900 down to 150-450 <4>Lustre: 19787:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1403779342/real 1403779342] req@ffff880324f12400 x1471961363384816/t0(0) o38->fs1-MDT0000-lwp-OST0004@10.3.1.6@o2ib:12/10 lens 400/544 e 0 to 1 dl 1403779347 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <4>Lustre: 19787:0:(client.c:1868:ptlrpc_expire_one_request()) Skipped 1 previous similar message <4>Lustre: fs1-OST0004: Will be in recovery for at least 2:30, or until 15 clients reconnect <6>Lustre: fs1-OST0004: Recovery over after 0:04, of 15 clients 15 recovered and 0 were evicted. 
<4>Lustre: 19787:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1403779367/real 1403779367] req@ffff88032f461c00 x1471961363384828/t0(0) o38->fs1-MDT0000-lwp-OST0004@10.3.1.5@o2ib:12/10 lens 400/544 e 0 to 1 dl 1403779372 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <4>Lustre: Failing over fs1-OST0004 <4>Lustre: server umount fs1-OST0004 complete <6>LNet: Removed LNI 10.4.1.10@o2ib1 <6>LNet: Removed LNI 10.3.1.10@o2ib <6>LNet: HW CPU cores: 8, npartitions: 2 <6>alg: No test for crc32 (crc32-table) <6>alg: No test for adler32 (adler32-zlib) <6>Lustre: Lustre: Build Version: T-bullpatches-2.4.2.0-FIX_17287_AER4-g9608be4-CHANGED-2.6.32-431.1.2.el6.Bull.44.x86_64 <6>LNet: Added LNI 10.3.1.10@o2ib [8/256/0/180] <6>LNet: Added LNI 10.4.1.10@o2ib1 [8/256/0/180] <6>LDISKFS-fs (dm-2): barriers disabled <6>LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. quota=on. Opts: <6>Lustre: fs1-OST0004: Imperative Recovery enabled, recovery window shrunk from 300-900 down to 150-450 <4>Lustre: 14640:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1403782512/real 1403782512] req@ffff88033a2bc800 x1471972649205816/t0(0) o38->fs1-MDT0000-lwp-OST0004@10.3.1.6@o2ib:12/10 lens 400/544 e 0 to 1 dl 1403782517 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <4>Lustre: fs1-OST0004: Will be in recovery for at least 2:30, or until 16 clients reconnect <3>LustreError: 137-5: fs1-OST0002_UUID: not available for connect from 10.3.1.20@o2ib (no target) <3>LustreError: 137-5: fs1-OST0001_UUID: not available for connect from 10.4.1.20@o2ib1 (no target) <3>LustreError: 137-5: fs1-OST0002_UUID: not available for connect from 10.3.1.27@o2ib (no target) <3>LustreError: Skipped 25 previous similar messages <4>Lustre: 14640:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1403782537/real 1403782537] req@ffff880631c51c00 x1471972649205828/t0(0) o38->fs1-MDT0000-lwp-OST0004@10.3.1.5@o2ib:12/10 lens 400/544 e 0 to 1 dl 1403782542 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <4>Lustre: 14640:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1403782562/real 1403782562] req@ffff880623db8800 x1471972649205836/t0(0) o38->fs1-MDT0000-lwp-OST0004@10.3.1.6@o2ib:12/10 lens 400/544 e 0 to 1 dl 1403782572 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <4>Lustre: 14640:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1403782587/real 1403782587] req@ffff88063b767c00 x1471972649205844/t0(0) o38->fs1-MDT0000-lwp-OST0004@10.3.1.5@o2ib:12/10 lens 400/544 e 0 to 1 dl 1403782597 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <6>Lustre: fs1-OST0004: Recovery over after 1:44, of 16 clients 16 recovered and 0 were evicted. <4>Lustre: Failing over fs1-OST0004 <4>Lustre: server umount fs1-OST0004 complete <6>LNet: Removed LNI 10.4.1.10@o2ib1 <6>LNet: Removed LNI 10.3.1.10@o2ib <6>LNet: HW CPU cores: 8, npartitions: 2 <6>alg: No test for crc32 (crc32-table) <6>alg: No test for adler32 (adler32-zlib) <6>Lustre: Lustre: Build Version: T-bullpatches-2.4.2.0-FIX_17287_AER4-g9608be4-CHANGED-2.6.32-431.1.2.el6.Bull.44.x86_64 <6>LNet: Added LNI 10.3.1.10@o2ib [8/256/0/180] <6>LNet: Added LNI 10.4.1.10@o2ib1 [8/256/0/180] <6>LDISKFS-fs (dm-2): barriers disabled <6>LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. quota=on. 
Opts: <6>Lustre: fs1-OST0004: Imperative Recovery enabled, recovery window shrunk from 300-900 down to 150-450 <4>Lustre: fs1-OST0004: Will be in recovery for at least 2:30, or until 16 clients reconnect <6>Lustre: fs1-OST0004: Recovery over after 0:11, of 16 clients 16 recovered and 0 were evicted. <4>Lustre: Failing over fs1-OST0004 <4>Lustre: server umount fs1-OST0004 complete <6>LNet: Removed LNI 10.4.1.10@o2ib1 <6>LNet: Removed LNI 10.3.1.10@o2ib <6>LNet: HW CPU cores: 8, npartitions: 2 <6>alg: No test for crc32 (crc32-table) <6>alg: No test for adler32 (adler32-zlib) <6>Lustre: Lustre: Build Version: T-bullpatches-2.4.2.0-FIX_17287_AER4-g9608be4-CHANGED-2.6.32-431.1.2.el6.Bull.44.x86_64 <6>LNet: Added LNI 10.3.1.10@o2ib [8/256/0/180] <6>LNet: Added LNI 10.4.1.10@o2ib1 [8/256/0/180] <6>LDISKFS-fs (dm-2): barriers disabled <6>LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. quota=on. Opts: <6>Lustre: fs1-OST0004: Imperative Recovery enabled, recovery window shrunk from 300-900 down to 150-450 <4>Lustre: 19901:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1403783865/real 1403783865] req@ffff88062811b800 x1471974068977720/t0(0) o38->fs1-MDT0000-lwp-OST0004@10.3.1.6@o2ib:12/10 lens 400/544 e 0 to 1 dl 1403783870 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <4>Lustre: fs1-OST0004: Will be in recovery for at least 2:30, or until 16 clients reconnect <6>Lustre: fs1-OST0004: Recovery over after 0:05, of 16 clients 16 recovered and 0 were evicted. <4>Lustre: 19901:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1403783890/real 1403783890] req@ffff880339260400 x1471974068977732/t0(0) o38->fs1-MDT0000-lwp-OST0004@10.3.1.5@o2ib:12/10 lens 400/544 e 0 to 1 dl 1403783895 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <4>Lustre: Failing over fs1-OST0004 <4>Lustre: server umount fs1-OST0004 complete <6>LNet: Removed LNI 10.4.1.10@o2ib1 <3>LNetError: 2976:0:(o2iblnd_cb.c:2267:kiblnd_passive_connect()) Can't accept 10.3.1.6@o2ib on NA (ib0:1:10.3.1.10): bad dst nid 10.3.1.10@o2ib <3>LNetError: 2976:0:(o2iblnd_cb.c:2267:kiblnd_passive_connect()) Can't accept 10.3.1.6@o2ib on NA (ib0:1:10.3.1.10): bad dst nid 10.3.1.10@o2ib <6>LNet: Removed LNI 10.3.1.10@o2ib <6>LNet: HW CPU cores: 8, npartitions: 2 <6>alg: No test for crc32 (crc32-table) <6>alg: No test for adler32 (adler32-zlib) <6>Lustre: Lustre: Build Version: T-bullpatches-2.4.2.0-FIX_17287_AER4-g9608be4-CHANGED-2.6.32-431.1.2.el6.Bull.44.x86_64 <6>LNet: Added LNI 10.3.1.10@o2ib [8/256/0/180] <6>LNet: Added LNI 10.4.1.10@o2ib1 [8/256/0/180] <6>LDISKFS-fs (dm-2): barriers disabled <6>LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. quota=on. 
Opts: <6>Lustre: fs1-OST0004: Imperative Recovery enabled, recovery window shrunk from 300-900 down to 150-450 <4>Lustre: 23835:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1403785724/real 1403785724] req@ffff88062c977c00 x1471975950123064/t0(0) o38->fs1-MDT0000-lwp-OST0004@10.3.1.6@o2ib:12/10 lens 400/544 e 0 to 1 dl 1403785729 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <3>LustreError: 137-5: fs1-OST0003_UUID: not available for connect from 10.3.1.15@o2ib (no target) <3>LustreError: 137-5: fs1-OST0003_UUID: not available for connect from 10.3.1.20@o2ib (no target) <3>LustreError: 137-5: fs1-OST0003_UUID: not available for connect from 10.3.1.21@o2ib (no target) <3>LustreError: Skipped 4 previous similar messages <4>Lustre: fs1-OST0004: Will be in recovery for at least 2:30, or until 16 clients reconnect <4>Lustre: 23835:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1403785749/real 1403785749] req@ffff88032c47e800 x1471975950123076/t0(0) o38->fs1-MDT0000-lwp-OST0004@10.3.1.5@o2ib:12/10 lens 400/544 e 0 to 1 dl 1403785754 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <3>LustreError: 137-5: fs1-OST0002_UUID: not available for connect from 10.3.1.12@o2ib (no target) <3>LustreError: Skipped 7 previous similar messages <3>LustreError: 137-5: fs1-OST0002_UUID: not available for connect from 10.3.1.22@o2ib (no target) <4>Lustre: 23835:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1403785774/real 1403785774] req@ffff880333d87c00 x1471975950123084/t0(0) o38->fs1-MDT0000-lwp-OST0004@10.3.1.6@o2ib:12/10 lens 400/544 e 0 to 1 dl 1403785784 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <3>LustreError: 137-5: fs1-OST0002_UUID: not available for connect from 10.3.1.14@o2ib (no target) <3>LustreError: Skipped 2 previous similar messages <4>Lustre: 23835:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1403785799/real 1403785799] req@ffff88033707a000 x1471975950123092/t0(0) o38->fs1-MDT0000-lwp-OST0004@10.3.1.5@o2ib:12/10 lens 400/544 e 0 to 1 dl 1403785809 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <6>Lustre: fs1-OST0004: Recovery over after 1:49, of 16 clients 16 recovered and 0 were evicted. <4>Lustre: Failing over fs1-OST0004 <4>Lustre: server umount fs1-OST0004 complete <6>LNet: Removed LNI 10.4.1.10@o2ib1 <6>LNet: Removed LNI 10.3.1.10@o2ib <6>LDISKFS-fs (dm-2): barriers disabled <6>LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. quota=on. Opts: <6>LDISKFS-fs (dm-2): barriers disabled <6>LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. quota=on. Opts: <6>LNet: HW CPU cores: 8, npartitions: 2 <6>alg: No test for crc32 (crc32-table) <6>alg: No test for adler32 (adler32-zlib) <6>Lustre: Lustre: Build Version: T-bullpatches-2.4.2.0-FIX_17287_AER4-g9608be4-CHANGED-2.6.32-431.1.2.el6.Bull.44.x86_64 <6>LNet: Added LNI 10.3.1.10@o2ib [8/256/0/180] <6>LNet: Added LNI 10.4.1.10@o2ib1 [8/256/0/180] <6>LDISKFS-fs (dm-2): barriers disabled <6>LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. quota=on. Opts: <6>LDISKFS-fs (dm-2): barriers disabled <6>LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. quota=on. 
Opts: <6>Lustre: fs1-OST0004: Imperative Recovery enabled, recovery window shrunk from 300-900 down to 150-450 <4>Lustre: 28054:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1403787356/real 1403787356] req@ffff88032f6a4c00 x1471977728507960/t0(0) o38->fs1-MDT0000-lwp-OST0004@10.3.1.6@o2ib:12/10 lens 400/544 e 0 to 1 dl 1403787361 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <3>LustreError: 137-5: fs1-OST0000_UUID: not available for connect from 10.4.1.14@o2ib1 (no target) <3>LustreError: 137-5: fs1-OST0000_UUID: not available for connect from 10.4.1.21@o2ib1 (no target) <3>LustreError: 137-5: fs1-OST0000_UUID: not available for connect from 10.4.1.26@o2ib1 (no target) <3>LustreError: Skipped 4 previous similar messages <4>Lustre: fs1-OST0004: Will be in recovery for at least 2:30, or until 16 clients reconnect <4>Lustre: 28054:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1403787381/real 1403787381] req@ffff88032ce1a000 x1471977728507972/t0(0) o38->fs1-MDT0000-lwp-OST0004@10.3.1.5@o2ib:12/10 lens 400/544 e 0 to 1 dl 1403787386 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <6>Lustre: fs1-OST0004: Recovery over after 0:44, of 16 clients 16 recovered and 0 were evicted. <4>Lustre: Failing over fs1-OST0004 <3>LustreError: 137-5: fs1-OST0004_UUID: not available for connect from 10.3.1.14@o2ib (no target) <3>LustreError: Skipped 6 previous similar messages <4>Lustre: server umount fs1-OST0004 complete <3>LNetError: 28052:0:(lib-move.c:1931:lnet_parse()) 10.3.1.6@o2ib, src 10.3.1.6@o2ib: Dropping PUT (error -108 looking up sender) <6>LNet: Removed LNI 10.4.1.10@o2ib1 <6>LNet: Removed LNI 10.3.1.10@o2ib <6>LDISKFS-fs (dm-2): barriers disabled <6>LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. quota=on. Opts: <6>LNet: HW CPU cores: 8, npartitions: 2 <6>alg: No test for crc32 (crc32-table) <6>alg: No test for adler32 (adler32-zlib) <6>Lustre: Lustre: Build Version: T-bullpatches-2.4.2.0-FIX_17287_AER4-g9608be4-CHANGED-2.6.32-431.1.2.el6.Bull.44.x86_64 <6>LNet: Added LNI 10.3.1.10@o2ib [8/256/0/180] <6>LNet: Added LNI 10.4.1.10@o2ib1 [8/256/0/180] <6>LDISKFS-fs (dm-2): barriers disabled <6>LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. quota=on. Opts: <6>LDISKFS-fs (dm-2): barriers disabled <6>LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. quota=on. 
Opts: <6>Lustre: fs1-OST0004: Imperative Recovery enabled, recovery window shrunk from 300-900 down to 150-450 <4>Lustre: 2575:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1403791158/real 1403791158] req@ffff88032be20c00 x1471981661716536/t0(0) o38->fs1-MDT0000-lwp-OST0004@10.3.1.6@o2ib:12/10 lens 400/544 e 0 to 1 dl 1403791163 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <3>LustreError: 137-5: fs1-OST0003_UUID: not available for connect from 10.3.1.21@o2ib (no target) <4>Lustre: 2575:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1403791183/real 1403791183] req@ffff88032723f800 x1471981661716548/t0(0) o38->fs1-MDT0000-lwp-OST0004@10.3.1.5@o2ib:12/10 lens 400/544 e 0 to 1 dl 1403791188 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <3>LustreError: 137-5: fs1-OST0001_UUID: not available for connect from 10.4.1.27@o2ib1 (no target) <3>LustreError: 137-5: fs1-OST0002_UUID: not available for connect from 10.3.1.17@o2ib (no target) <3>LustreError: 137-5: fs1-OST0002_UUID: not available for connect from 10.3.1.19@o2ib (no target) <3>LustreError: Skipped 13 previous similar messages <4>Lustre: fs1-OST0004: Will be in recovery for at least 2:30, or until 16 clients reconnect <4>Lustre: 2575:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1403791208/real 1403791208] req@ffff880339bc4c00 x1471981661716556/t0(0) o38->fs1-MDT0000-lwp-OST0004@10.3.1.6@o2ib:12/10 lens 400/544 e 0 to 1 dl 1403791218 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <3>LustreError: 137-5: fs1-OST0002_UUID: not available for connect from 10.3.1.22@o2ib (no target) <3>LustreError: Skipped 7 previous similar messages <3>LustreError: 137-5: fs1-OST0002_UUID: not available for connect from 10.3.1.14@o2ib (no target) <4>Lustre: 2575:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1403791233/real 1403791233] req@ffff8803260da400 x1471981661716564/t0(0) o38->fs1-MDT0000-lwp-OST0004@10.3.1.5@o2ib:12/10 lens 400/544 e 0 to 1 dl 1403791243 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <3>LustreError: 137-5: fs1-OST0002_UUID: not available for connect from 10.3.1.27@o2ib (no target) <3>LustreError: Skipped 1 previous similar message <3>LustreError: 137-5: fs1-OST0002_UUID: not available for connect from 10.3.1.21@o2ib (no target) <6>Lustre: fs1-OST0004: Recovery over after 1:11, of 16 clients 16 recovered and 0 were evicted. <4>Lustre: Failing over fs1-OST0004 <4>Lustre: server umount fs1-OST0004 complete <6>LNet: Removed LNI 10.4.1.10@o2ib1 <3>LNetError: 2977:0:(o2iblnd_cb.c:2267:kiblnd_passive_connect()) Can't accept 10.3.1.14@o2ib on NA (ib0:1:10.3.1.10): bad dst nid 10.3.1.10@o2ib <3>LNetError: 2977:0:(o2iblnd_cb.c:2267:kiblnd_passive_connect()) Can't accept 10.3.1.14@o2ib on NA (ib0:1:10.3.1.10): bad dst nid 10.3.1.10@o2ib <6>LNet: Removed LNI 10.3.1.10@o2ib <6>LDISKFS-fs (dm-2): barriers disabled <6>LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. quota=on. Opts: <6>LNet: HW CPU cores: 8, npartitions: 2 <6>alg: No test for crc32 (crc32-table) <6>alg: No test for adler32 (adler32-zlib) <6>Lustre: Lustre: Build Version: T-bullpatches-2.4.2.0-FIX_17287_AER4-g9608be4-CHANGED-2.6.32-431.1.2.el6.Bull.44.x86_64 <6>LNet: Added LNI 10.3.1.10@o2ib [8/256/0/180] <6>LNet: Added LNI 10.4.1.10@o2ib1 [8/256/0/180] <6>LDISKFS-fs (dm-2): barriers disabled <6>LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. quota=on. 
Opts: <6>LDISKFS-fs (dm-2): barriers disabled <6>LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. quota=on. Opts: <6>Lustre: fs1-OST0004: Imperative Recovery enabled, recovery window shrunk from 300-900 down to 150-450 <4>Lustre: 6528:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1403792562/real 1403792562] req@ffff88032e2f1800 x1471983187394616/t0(0) o38->fs1-MDT0000-lwp-OST0004@10.3.1.6@o2ib:12/10 lens 400/544 e 0 to 1 dl 1403792567 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <4>Lustre: fs1-OST0004: Will be in recovery for at least 2:30, or until 16 clients reconnect <4>Lustre: 6528:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1403792587/real 1403792587] req@ffff88063b72f000 x1471983187394628/t0(0) o38->fs1-MDT0000-lwp-OST0004@10.3.1.5@o2ib:12/10 lens 400/544 e 0 to 1 dl 1403792592 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <3>LustreError: 137-5: fs1-OST0002_UUID: not available for connect from 10.3.1.21@o2ib (no target) <3>LustreError: 137-5: fs1-OST0001_UUID: not available for connect from 10.4.1.21@o2ib1 (no target) <3>LustreError: 137-5: fs1-OST0002_UUID: not available for connect from 10.3.1.17@o2ib (no target) <3>LustreError: 137-5: fs1-OST0002_UUID: not available for connect from 10.3.1.19@o2ib (no target) <3>LustreError: Skipped 16 previous similar messages <4>Lustre: 6528:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1403792612/real 1403792612] req@ffff88062cf88800 x1471983187394636/t0(0) o38->fs1-MDT0000-lwp-OST0004@10.3.1.6@o2ib:12/10 lens 400/544 e 0 to 1 dl 1403792622 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <3>LustreError: 137-5: fs1-OST0002_UUID: not available for connect from 10.3.1.22@o2ib (no target) <3>LustreError: Skipped 9 previous similar messages <4>Lustre: 6528:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1403792637/real 1403792637] req@ffff880625b51400 x1471983187394644/t0(0) o38->fs1-MDT0000-lwp-OST0004@10.3.1.5@o2ib:12/10 lens 400/544 e 0 to 1 dl 1403792647 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <6>Lustre: fs1-OST0004: Recovery over after 1:44, of 16 clients 16 recovered and 0 were evicted. <4>Lustre: Failing over fs1-OST0004 <4>Lustre: server umount fs1-OST0004 complete <6>LNet: Removed LNI 10.4.1.10@o2ib1 <6>LNet: Removed LNI 10.3.1.10@o2ib <6>LNet: HW CPU cores: 8, npartitions: 2 <6>alg: No test for crc32 (crc32-table) <6>alg: No test for adler32 (adler32-zlib) <6>Lustre: Lustre: Build Version: T-bullpatches-2.4.2.0-FIX_17287_AER4-g9608be4-CHANGED-2.6.32-431.1.2.el6.Bull.44.x86_64 <6>LNet: Added LNI 10.3.1.10@o2ib [8/256/0/180] <6>LNet: Added LNI 10.4.1.10@o2ib1 [8/256/0/180] <6>LDISKFS-fs (dm-2): barriers disabled <6>LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. quota=on. Opts: <6>Lustre: fs1-OST0004: Imperative Recovery enabled, recovery window shrunk from 300-900 down to 150-450 <4>Lustre: fs1-OST0004: Will be in recovery for at least 2:30, or until 16 clients reconnect <6>Lustre: fs1-OST0004: Recovery over after 0:05, of 16 clients 16 recovered and 0 were evicted. 
<6>Lustre: fs1-OST0004: deleting orphan objects from 0x0:418 to 0x0:449 <4>Lustre: Failing over fs1-OST0004 <3>LustreError: 137-5: fs1-OST0004_UUID: not available for connect from 10.3.1.6@o2ib (no target) <4>Lustre: server umount fs1-OST0004 complete <6>LNet: Removed LNI 10.4.1.10@o2ib1 <6>LNet: Removed LNI 10.3.1.10@o2ib <6>LNet: HW CPU cores: 8, npartitions: 2 <6>alg: No test for crc32 (crc32-table) <6>alg: No test for adler32 (adler32-zlib) <6>Lustre: Lustre: Build Version: T-bullpatches-2.4.2.0-FIX_17287_AER4-g9608be4-CHANGED-2.6.32-431.1.2.el6.Bull.44.x86_64 <6>LNet: Added LNI 10.3.1.10@o2ib [8/256/0/180] <6>LNet: Added LNI 10.4.1.10@o2ib1 [8/256/0/180] <6>LDISKFS-fs (dm-2): barriers disabled <6>LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. quota=on. Opts: <6>Lustre: fs1-OST0004: Imperative Recovery enabled, recovery window shrunk from 300-900 down to 150-450 <4>Lustre: fs1-OST0004: Will be in recovery for at least 2:30, or until 16 clients reconnect <6>Lustre: fs1-OST0004: Recovery over after 0:03, of 16 clients 16 recovered and 0 were evicted. <6>Lustre: fs1-OST0004: deleting orphan objects from 0x0:418 to 0x0:481 <4>Lustre: Failing over fs1-OST0004 <4>Lustre: server umount fs1-OST0004 complete <6>LNet: Removed LNI 10.4.1.10@o2ib1 <6>LNet: Removed LNI 10.3.1.10@o2ib <6>LNet: HW CPU cores: 8, npartitions: 2 <6>alg: No test for crc32 (crc32-table) <6>alg: No test for adler32 (adler32-zlib) <6>Lustre: Lustre: Build Version: T-bullpatches-2.4.2.0-FIX_17287_AER4-g9608be4-CHANGED-2.6.32-431.1.2.el6.Bull.44.x86_64 <6>LNet: Added LNI 10.3.1.10@o2ib [8/256/0/180] <6>LNet: Added LNI 10.4.1.10@o2ib1 [8/256/0/180] <6>LDISKFS-fs (dm-2): barriers disabled <6>LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. quota=on. Opts: <6>Lustre: fs1-OST0004: Imperative Recovery enabled, recovery window shrunk from 300-900 down to 150-450 <4>Lustre: 11712:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1403793683/real 1403793683] req@ffff88062bd1d000 x1471984363896888/t0(0) o38->fs1-MDT0000-lwp-OST0004@10.3.1.6@o2ib:12/10 lens 400/544 e 0 to 1 dl 1403793688 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <4>Lustre: fs1-OST0004: Will be in recovery for at least 2:30, or until 16 clients reconnect <4>Lustre: 11712:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1403793708/real 1403793708] req@ffff880325881400 x1471984363896900/t0(0) o38->fs1-MDT0000-lwp-OST0004@10.3.1.5@o2ib:12/10 lens 400/544 e 0 to 1 dl 1403793713 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <3>LustreError: 137-5: fs1-OST0002_UUID: not available for connect from 10.3.1.22@o2ib (no target) <3>LustreError: 137-5: fs1-OST0001_UUID: not available for connect from 10.4.1.22@o2ib1 (no target) <3>LustreError: 137-5: fs1-OST0002_UUID: not available for connect from 10.3.1.21@o2ib (no target) <3>LustreError: Skipped 1 previous similar message <3>LustreError: 137-5: fs1-OST0002_UUID: not available for connect from 10.3.1.17@o2ib (no target) <3>LustreError: 137-5: fs1-OST0002_UUID: not available for connect from 10.3.1.12@o2ib (no target) <3>LustreError: Skipped 24 previous similar messages <4>Lustre: 11712:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1403793733/real 1403793733] req@ffff880329672c00 x1471984363896908/t0(0) o38->fs1-MDT0000-lwp-OST0004@10.3.1.6@o2ib:12/10 lens 400/544 e 0 to 1 dl 1403793743 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <4>Lustre: 
11712:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1403793758/real 1403793758] req@ffff880325088000 x1471984363896916/t0(0) o38->fs1-MDT0000-lwp-OST0004@10.3.1.5@o2ib:12/10 lens 400/544 e 0 to 1 dl 1403793768 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <6>Lustre: fs1-OST0004: Recovery over after 1:44, of 16 clients 16 recovered and 0 were evicted. <4>Lustre: Failing over fs1-OST0004 <4>Lustre: server umount fs1-OST0004 complete <6>LNet: Removed LNI 10.4.1.10@o2ib1 <6>LNet: Removed LNI 10.3.1.10@o2ib <6>LDISKFS-fs (dm-2): barriers disabled <6>LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. quota=on. Opts: <6>LNet: HW CPU cores: 8, npartitions: 2 <6>alg: No test for crc32 (crc32-table) <6>alg: No test for adler32 (adler32-zlib) <6>Lustre: Lustre: Build Version: T-bullpatches-2.4.2.0-FIX_17287_AER4-g9608be4-CHANGED-2.6.32-431.1.2.el6.Bull.44.x86_64 <6>LNet: Added LNI 10.3.1.10@o2ib [8/256/0/180] <6>LNet: Added LNI 10.4.1.10@o2ib1 [8/256/0/180] <6>LDISKFS-fs (dm-2): barriers disabled <6>LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. quota=on. Opts: <6>LDISKFS-fs (dm-2): barriers disabled <6>LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. quota=on. Opts: <6>Lustre: fs1-OST0004: Imperative Recovery enabled, recovery window shrunk from 300-900 down to 150-450 <4>Lustre: 13612:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1403794010/real 1403794010] req@ffff880627903800 x1471984705732668/t0(0) o38->fs1-MDT0000-lwp-OST0004@10.3.1.6@o2ib:12/10 lens 400/544 e 0 to 1 dl 1403794015 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <4>Lustre: fs1-OST0004: Will be in recovery for at least 2:30, or until 16 clients reconnect <4>Lustre: 13612:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1403794035/real 1403794035] req@ffff880333c37000 x1471984705732680/t0(0) o38->fs1-MDT0000-lwp-OST0004@10.3.1.5@o2ib:12/10 lens 400/544 e 0 to 1 dl 1403794040 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <3>LustreError: 137-5: fs1-OST0002_UUID: not available for connect from 10.3.1.21@o2ib (no target) <3>LustreError: 137-5: fs1-OST0001_UUID: not available for connect from 10.4.1.21@o2ib1 (no target) <3>LustreError: 137-5: fs1-OST0002_UUID: not available for connect from 10.3.1.17@o2ib (no target) <3>LustreError: 137-5: fs1-OST0002_UUID: not available for connect from 10.3.1.19@o2ib (no target) <3>LustreError: Skipped 15 previous similar messages <4>Lustre: 13612:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1403794060/real 1403794060] req@ffff880324a58800 x1471984705732688/t0(0) o38->fs1-MDT0000-lwp-OST0004@10.3.1.6@o2ib:12/10 lens 400/544 e 0 to 1 dl 1403794070 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <3>LustreError: 137-5: fs1-OST0002_UUID: not available for connect from 10.3.1.22@o2ib (no target) <3>LustreError: Skipped 8 previous similar messages <4>Lustre: 13612:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1403794085/real 1403794085] req@ffff880325733800 x1471984705732696/t0(0) o38->fs1-MDT0000-lwp-OST0004@10.3.1.5@o2ib:12/10 lens 400/544 e 0 to 1 dl 1403794095 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <6>Lustre: fs1-OST0004: Recovery over after 1:44, of 16 clients 16 recovered and 0 were evicted. 
<4>Lustre: Failing over fs1-OST0004 <4>Lustre: server umount fs1-OST0004 complete <6>LNet: Removed LNI 10.4.1.10@o2ib1 <6>LNet: Removed LNI 10.3.1.10@o2ib <6>LDISKFS-fs (dm-2): barriers disabled <6>LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. quota=on. Opts: <6>LDISKFS-fs (dm-2): barriers disabled <6>LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. quota=on. Opts: <6>LNet: HW CPU cores: 8, npartitions: 2 <6>alg: No test for crc32 (crc32-table) <6>alg: No test for adler32 (adler32-zlib) <6>Lustre: Lustre: Build Version: T-bullpatches-2.4.2.0-FIX_17287_AER4-g9608be4-CHANGED-2.6.32-431.1.2.el6.Bull.44.x86_64 <6>LNet: Added LNI 10.3.1.10@o2ib [8/256/0/180] <6>LNet: Added LNI 10.4.1.10@o2ib1 [8/256/0/180] <6>LDISKFS-fs (dm-2): barriers disabled <6>LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. quota=on. Opts: <6>LDISKFS-fs (dm-2): barriers disabled <6>LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. quota=on. Opts: <6>Lustre: fs1-OST0004: Imperative Recovery enabled, recovery window shrunk from 300-900 down to 150-450 <4>Lustre: 16362:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1403794728/real 1403794728] req@ffff880629b80000 x1471985459658812/t0(0) o38->fs1-MDT0000-lwp-OST0004@10.3.1.6@o2ib:12/10 lens 400/544 e 0 to 1 dl 1403794733 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <4>Lustre: fs1-OST0004: Will be in recovery for at least 2:30, or until 16 clients reconnect <4>Lustre: 16362:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1403794753/real 1403794753] req@ffff88032adf2c00 x1471985459658824/t0(0) o38->fs1-MDT0000-lwp-OST0004@10.3.1.5@o2ib:12/10 lens 400/544 e 0 to 1 dl 1403794758 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <6>Lustre: fs1-OST0004: Recovery over after 1:08, of 16 clients 16 recovered and 0 were evicted. <4>Lustre: 19638:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1403796287/real 1403796287] req@ffff880326e9b800 x1471985459659316/t0(0) o39->fs1-MDT0000-lwp-OST0004@10.3.1.6@o2ib:12/10 lens 224/224 e 0 to 1 dl 1403796293 ref 2 fl Rpc:XN/0/ffffffff rc 0/-1 <3>LustreError: 19638:0:(obd_mount_server.c:1443:server_put_super()) fs1-OST0004: failed to disconnect lwp. (rc=-110) <4>Lustre: Failing over fs1-OST0004 <4>Lustre: server umount fs1-OST0004 complete <6>LNet: Removed LNI 10.4.1.10@o2ib1 <6>LNet: Removed LNI 10.3.1.10@o2ib <6>LNet: HW CPU cores: 8, npartitions: 2 <6>alg: No test for crc32 (crc32-table) <6>alg: No test for adler32 (adler32-zlib) <6>Lustre: Lustre: Build Version: T-bullpatches-2.4.2.0-FIX_17287_AER4-g9608be4-CHANGED-2.6.32-431.1.2.el6.Bull.44.x86_64 <6>LNet: Added LNI 10.3.1.10@o2ib [8/256/0/180] <6>LNet: Added LNI 10.4.1.10@o2ib1 [8/256/0/180] <6>LDISKFS-fs (dm-2): barriers disabled <6>LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. quota=on. Opts: <6>Lustre: fs1-OST0004: Imperative Recovery enabled, recovery window shrunk from 300-900 down to 150-450 <4>Lustre: 19931:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1403796349/real 1403796349] req@ffff8803290ba000 x1471987114311740/t0(0) o38->fs1-MDT0000-lwp-OST0004@10.3.1.6@o2ib:12/10 lens 400/544 e 0 to 1 dl 1403796354 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <4>Lustre: fs1-OST0004: Will be in recovery for at least 2:30, or until 15 clients reconnect <6>Lustre: fs1-OST0004: Recovery over after 0:05, of 15 clients 15 recovered and 0 were evicted. 
<4>Lustre: 19931:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1403796374/real 1403796374] req@ffff88032a05cc00 x1471987114311752/t0(0) o38->fs1-MDT0000-lwp-OST0004@10.3.1.5@o2ib:12/10 lens 400/544 e 0 to 1 dl 1403796379 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <3>LustreError: 137-5: fs1-OST0003_UUID: not available for connect from 10.3.1.6@o2ib (no target) <3>LustreError: 137-5: fs1-OST0003_UUID: not available for connect from 10.3.1.15@o2ib (no target) <3>LustreError: 137-5: fs1-OST0003_UUID: not available for connect from 10.3.1.18@o2ib (no target) <3>LustreError: Skipped 1 previous similar message <4>Lustre: Failing over fs1-OST0004 <4>Lustre: server umount fs1-OST0004 complete <6>LNet: Removed LNI 10.4.1.10@o2ib1 <6>LNet: Removed LNI 10.3.1.10@o2ib <6>LDISKFS-fs (dm-2): barriers disabled <6>LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. quota=on. Opts: <6>LNet: HW CPU cores: 8, npartitions: 2 <6>alg: No test for crc32 (crc32-table) <6>alg: No test for adler32 (adler32-zlib) <6>Lustre: Lustre: Build Version: T-bullpatches-2.4.2.0-FIX_17287_AER4-g9608be4-CHANGED-2.6.32-431.1.2.el6.Bull.44.x86_64 <6>LNet: Added LNI 10.3.1.10@o2ib [8/256/0/180] <6>LNet: Added LNI 10.4.1.10@o2ib1 [8/256/0/180] <6>LDISKFS-fs (dm-2): barriers disabled <6>LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. quota=on. Opts: <6>LDISKFS-fs (dm-2): barriers disabled <6>LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. quota=on. Opts: <6>Lustre: fs1-OST0004: Imperative Recovery enabled, recovery window shrunk from 300-900 down to 150-450 <3>LustreError: 137-5: fs1-OST0002_UUID: not available for connect from 10.3.1.6@o2ib (no target) <3>LustreError: 137-5: fs1-OST0001_UUID: not available for connect from 10.4.1.6@o2ib1 (no target) <3>LustreError: 137-5: fs1-OST0000_UUID: not available for connect from 10.4.1.6@o2ib1 (no target) <4>Lustre: fs1-OST0004: Will be in recovery for at least 2:30, or until 1 client reconnects <6>Lustre: fs1-OST0004: Recovery over after 0:01, of 1 clients 1 recovered and 0 were evicted. <3>LustreError: 137-5: fs1-OST0003_UUID: not available for connect from 10.3.1.6@o2ib (no target) <3>LustreError: 137-5: fs1-OST0003_UUID: not available for connect from 10.3.1.21@o2ib (no target) <3>LustreError: 137-5: fs1-OST0003_UUID: not available for connect from 10.3.1.25@o2ib (no target) <3>LustreError: 137-5: fs1-OST0003_UUID: not available for connect from 10.3.1.18@o2ib (no target) <3>LustreError: Skipped 5 previous similar messages <3>LustreError: 137-5: fs1-OST0003_UUID: not available for connect from 10.3.1.15@o2ib (no target) <3>LustreError: Skipped 4 previous similar messages <3>LustreError: 11-0: fs1-MDT0000-lwp-OST0004: Communicating with 10.3.1.6@o2ib, operation obd_ping failed with -107. 
<4>Lustre: fs1-MDT0000-lwp-OST0004: Connection to fs1-MDT0000 (at 10.3.1.6@o2ib) was lost; in progress operations using this service will wait for recovery to complete <4>Lustre: Failing over fs1-OST0004 <4>Lustre: server umount fs1-OST0004 complete <6>LNet: Removed LNI 10.4.1.10@o2ib1 <6>LNet: Removed LNI 10.3.1.10@o2ib <6>LNet: HW CPU cores: 8, npartitions: 2 <6>alg: No test for crc32 (crc32-table) <6>alg: No test for adler32 (adler32-zlib) <6>Lustre: Lustre: Build Version: T-bullpatches-2.4.2.0-FIX_17287_AER4-g9608be4-CHANGED-2.6.32-431.1.2.el6.Bull.44.x86_64 <6>LNet: Added LNI 10.3.1.10@o2ib [8/256/0/180] <6>LNet: Added LNI 10.4.1.10@o2ib1 [8/256/0/180] <6>LDISKFS-fs (dm-2): barriers disabled <6>LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. quota=on. Opts: <6>Lustre: fs1-OST0004: Imperative Recovery enabled, recovery window shrunk from 300-900 down to 150-450 <4>Lustre: 1150:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1403851724/real 1403851724] req@ffff88063b6f2800 x1472045224296504/t0(0) o38->fs1-MDT0000-lwp-OST0004@10.3.1.6@o2ib:12/10 lens 400/544 e 0 to 1 dl 1403851729 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <4>Lustre: fs1-OST0004: Will be in recovery for at least 2:30, or until 15 clients reconnect <6>Lustre: fs1-OST0004: Recovery over after 0:04, of 15 clients 15 recovered and 0 were evicted. <3>LustreError: 137-5: fs1-OST0002_UUID: not available for connect from 10.3.1.22@o2ib (no target) <3>LustreError: 137-5: fs1-OST0001_UUID: not available for connect from 10.4.1.22@o2ib1 (no target) <4>Lustre: 1150:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1403851749/real 1403851749] req@ffff8803251a8800 x1472045224296516/t0(0) o38->fs1-MDT0000-lwp-OST0004@10.3.1.5@o2ib:12/10 lens 400/544 e 0 to 1 dl 1403851754 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <3>LustreError: 137-5: fs1-OST0002_UUID: not available for connect from 10.3.1.18@o2ib (no target) <3>LustreError: Skipped 1 previous similar message <3>LustreError: 137-5: fs1-OST0002_UUID: not available for connect from 10.3.1.15@o2ib (no target) <3>LustreError: Skipped 7 previous similar messages <3>LustreError: 137-5: fs1-OST0002_UUID: not available for connect from 10.3.1.21@o2ib (no target) <3>LustreError: Skipped 7 previous similar messages <4>Lustre: 1150:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1403851774/real 1403851774] req@ffff880327ffe000 x1472045224296524/t0(0) o38->fs1-MDT0000-lwp-OST0004@10.3.1.6@o2ib:12/10 lens 400/544 e 0 to 1 dl 1403851784 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <4>Lustre: 1150:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1403851799/real 1403851799] req@ffff880327ce1c00 x1472045224296532/t0(0) o38->fs1-MDT0000-lwp-OST0004@10.3.1.5@o2ib:12/10 lens 400/544 e 0 to 1 dl 1403851809 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <4>Lustre: Failing over fs1-OST0004 <4>Lustre: server umount fs1-OST0004 complete <6>LNet: Removed LNI 10.4.1.10@o2ib1 <6>LNet: Removed LNI 10.3.1.10@o2ib <6>LDISKFS-fs (dm-2): barriers disabled <6>LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. quota=on. Opts: <6>LDISKFS-fs (dm-2): barriers disabled <6>LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. quota=on. Opts: <6>LDISKFS-fs (dm-2): barriers disabled <6>LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. quota=on. 
Opts: <6>LNet: HW CPU cores: 8, npartitions: 2 <6>alg: No test for crc32 (crc32-table) <6>alg: No test for adler32 (adler32-zlib) <6>Lustre: Lustre: Build Version: T-bullpatches-2.4.2.0-FIX_17287_AER4-g9608be4-CHANGED-2.6.32-431.1.2.el6.Bull.44.x86_64 <6>LNet: Added LNI 10.3.1.10@o2ib [8/256/0/180] <6>LNet: Added LNI 10.4.1.10@o2ib1 [8/256/0/180] <6>LDISKFS-fs (dm-2): barriers disabled <6>LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. quota=on. Opts: <6>LDISKFS-fs (dm-2): barriers disabled <6>LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. quota=on. Opts: <6>Lustre: fs1-OST0004: Imperative Recovery enabled, recovery window shrunk from 300-900 down to 150-450 <4>Lustre: fs1-OST0004: Will be in recovery for at least 2:30, or until 16 clients reconnect <3>LustreError: 137-5: fs1-OST0003_UUID: not available for connect from 10.3.1.18@o2ib (no target) <3>LustreError: 137-5: fs1-OST0003_UUID: not available for connect from 10.3.1.26@o2ib (no target) <3>LustreError: 137-5: fs1-OST0003_UUID: not available for connect from 10.3.1.11@o2ib (no target) <3>LustreError: Skipped 1 previous similar message <3>LustreError: 137-5: fs1-OST0003_UUID: not available for connect from 10.3.1.21@o2ib (no target) <3>LustreError: Skipped 4 previous similar messages <6>Lustre: fs1-OST0004: Recovery over after 1:46, of 16 clients 16 recovered and 0 were evicted. <3>LustreError: 137-5: fs1-OST0003_UUID: not available for connect from 10.3.1.18@o2ib (no target) <3>LustreError: Skipped 6 previous similar messages <3>LustreError: 137-5: fs1-OST0003_UUID: not available for connect from 10.3.1.21@o2ib (no target) <3>LustreError: Skipped 7 previous similar messages <3>LustreError: 137-5: fs1-OST0003_UUID: not available for connect from 10.3.1.18@o2ib (no target) <3>LustreError: Skipped 6 previous similar messages <3>LustreError: 137-5: fs1-OST0003_UUID: not available for connect from 10.3.1.18@o2ib (no target) <3>LustreError: Skipped 14 previous similar messages <4>Lustre: 4605:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1403853002/real 1403853002] req@ffff8803353a6400 x1472045934182696/t0(0) o400->fs1-MDT0000-lwp-OST0004@10.3.1.6@o2ib:12/10 lens 224/224 e 0 to 1 dl 1403853045 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <4>Lustre: fs1-MDT0000-lwp-OST0004: Connection to fs1-MDT0000 (at 10.3.1.6@o2ib) was lost; in progress operations using this service will wait for recovery to complete <4>Lustre: 4601:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1403853045/real 1403853045] req@ffff8803353a6400 x1472045934182708/t0(0) o38->fs1-MDT0000-lwp-OST0004@10.3.1.6@o2ib:12/10 lens 400/544 e 0 to 1 dl 1403853051 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <4>Lustre: 4607:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1403853027/real 1403853027] req@ffff880328e6dc00 x1472045934182704/t0(0) o400->fs1-MDT0000-lwp-OST0004@10.3.1.6@o2ib:12/10 lens 224/224 e 0 to 1 dl 1403853070 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <4>Lustre: 4601:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1403853070/real 1403853070] req@ffff8803353a6400 x1472045934182716/t0(0) o38->fs1-MDT0000-lwp-OST0004@10.3.1.5@o2ib:12/10 lens 400/544 e 0 to 1 dl 1403853076 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <3>LustreError: 137-5: fs1-OST0003_UUID: not available for connect from 10.3.1.18@o2ib (no target) <3>LustreError: Skipped 14 previous similar messages <4>Lustre: 
4601:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for sent delay: [sent 1403853095/real 0] req@ffff880339b4d000 x1472045934182724/t0(0) o38->fs1-MDT0000-lwp-OST0004@10.3.1.6@o2ib:12/10 lens 400/544 e 0 to 1 dl 1403853106 ref 2 fl Rpc:XN/0/ffffffff rc 0/-1 <4>Lustre: 4601:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1403853120/real 1403853120] req@ffff88032863b000 x1472045934182732/t0(0) o38->fs1-MDT0000-lwp-OST0004@10.3.1.5@o2ib:12/10 lens 400/544 e 0 to 1 dl 1403853131 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <4>Lustre: 4601:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1403853145/real 1403853145] req@ffff8803370ab000 x1472045934182740/t0(0) o38->fs1-MDT0000-lwp-OST0004@10.3.1.6@o2ib:12/10 lens 400/544 e 0 to 1 dl 1403853161 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <4>Lustre: 4601:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1403853170/real 1403853170] req@ffff88032db4c000 x1472045934182748/t0(0) o38->fs1-MDT0000-lwp-OST0004@10.3.1.5@o2ib:12/10 lens 400/544 e 0 to 1 dl 1403853186 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <3>LustreError: 137-5: fs1-OST0003_UUID: not available for connect from 10.3.1.18@o2ib (no target) <3>LustreError: Skipped 14 previous similar messages <4>Lustre: 4601:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1403853220/real 1403853220] req@ffff8803289c8000 x1472045934182764/t0(0) o38->fs1-MDT0000-lwp-OST0004@10.3.1.5@o2ib:12/10 lens 400/544 e 0 to 1 dl 1403853241 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <4>Lustre: 4601:0:(client.c:1868:ptlrpc_expire_one_request()) Skipped 1 previous similar message <4>Lustre: 4601:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1403853320/real 1403853320] req@ffff88032b071400 x1472045934182792/t0(0) o38->fs1-MDT0000-lwp-OST0004@10.3.1.6@o2ib:12/10 lens 400/544 e 0 to 1 dl 1403853351 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <4>Lustre: 4601:0:(client.c:1868:ptlrpc_expire_one_request()) Skipped 2 previous similar messages <3>LustreError: 137-5: fs1-OST0003_UUID: not available for connect from 10.3.1.18@o2ib (no target) <3>LustreError: Skipped 29 previous similar messages <4>Lustre: 4601:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1403853420/real 1403853420] req@ffff8803352e4c00 x1472045934182820/t0(0) o38->fs1-MDT0000-lwp-OST0004@10.3.1.5@o2ib:12/10 lens 400/544 e 0 to 1 dl 1403853456 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <4>Lustre: 4601:0:(client.c:1868:ptlrpc_expire_one_request()) Skipped 2 previous similar messages <4>Lustre: 4601:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1403853720/real 1403853720] req@ffff880339c65400 x1472045934182896/t0(0) o38->fs1-MDT0000-lwp-OST0004@10.3.1.6@o2ib:12/10 lens 400/544 e 0 to 1 dl 1403853775 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <4>Lustre: 4601:0:(client.c:1868:ptlrpc_expire_one_request()) Skipped 6 previous similar messages <4>Lustre: Failing over fs1-OST0004 <4>Lustre: server umount fs1-OST0004 complete <6>LNet: Removed LNI 10.4.1.10@o2ib1 <6>LNet: Removed LNI 10.3.1.10@o2ib <6>LNet: HW CPU cores: 8, npartitions: 2 <6>alg: No test for crc32 (crc32-table) <6>alg: No test for adler32 (adler32-zlib) <6>Lustre: Lustre: Build Version: T-bullpatches-2.4.2.0-FIX_17287_AER4-g9608be4-CHANGED-2.6.32-431.1.2.el6.Bull.44.x86_64 
<6>LNet: Added LNI 10.3.1.10@o2ib [8/256/0/180] <6>LNet: Added LNI 10.4.1.10@o2ib1 [8/256/0/180] <6>LDISKFS-fs (dm-2): barriers disabled <6>LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. quota=on. Opts: <6>Lustre: fs1-OST0004: Imperative Recovery enabled, recovery window shrunk from 300-900 down to 150-450 <4>Lustre: 12061:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1403853769/real 1403853769] req@ffff8806292c6800 x1472047367585848/t0(0) o38->fs1-MDT0000-lwp-OST0004@10.3.1.6@o2ib:12/10 lens 400/544 e 0 to 1 dl 1403853774 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <4>Lustre: fs1-OST0004: Will be in recovery for at least 2:30, or until 15 clients reconnect <6>Lustre: fs1-OST0004: Recovery over after 0:04, of 15 clients 15 recovered and 0 were evicted. <3>LustreError: 137-5: fs1-OST0003_UUID: not available for connect from 10.3.1.18@o2ib (no target) <3>LustreError: 137-5: fs1-OST0003_UUID: not available for connect from 10.3.1.26@o2ib (no target) <3>LustreError: 137-5: fs1-OST0003_UUID: not available for connect from 10.3.1.11@o2ib (no target) <3>LustreError: Skipped 1 previous similar message <3>LustreError: 137-5: fs1-OST0003_UUID: not available for connect from 10.3.1.21@o2ib (no target) <3>LustreError: Skipped 4 previous similar messages <4>Lustre: Failing over fs1-OST0004 <4>Lustre: server umount fs1-OST0004 complete <6>LNet: Removed LNI 10.4.1.10@o2ib1 <6>LNet: Removed LNI 10.3.1.10@o2ib <6>LDISKFS-fs (dm-2): barriers disabled <6>LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. quota=on. Opts: <6>LNet: HW CPU cores: 8, npartitions: 2 <6>alg: No test for crc32 (crc32-table) <6>alg: No test for adler32 (adler32-zlib) <6>Lustre: Lustre: Build Version: T-bullpatches-2.4.2.0-FIX_17287_AER4-g9608be4-CHANGED-2.6.32-431.1.2.el6.Bull.44.x86_64 <6>LNet: Added LNI 10.3.1.10@o2ib [8/256/0/180] <6>LNet: Added LNI 10.4.1.10@o2ib1 [8/256/0/180] <6>LDISKFS-fs (dm-2): barriers disabled <6>LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. quota=on. Opts: <6>LDISKFS-fs (dm-2): barriers disabled <6>LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. quota=on. Opts: <6>Lustre: fs1-OST0004: Imperative Recovery enabled, recovery window shrunk from 300-900 down to 150-450 <3>LustreError: 137-5: fs1-OST0001_UUID: not available for connect from 10.4.1.18@o2ib1 (no target) <3>LustreError: 137-5: fs1-OST0002_UUID: not available for connect from 10.3.1.18@o2ib (no target) <4>Lustre: fs1-OST0004: Will be in recovery for at least 2:30, or until 15 clients reconnect <3>LustreError: 137-5: fs1-OST0001_UUID: not available for connect from 10.4.1.20@o2ib1 (no target) <3>LustreError: Skipped 1 previous similar message <4>Lustre: fs1-OST0004: Denying connection for new client fs1-MDT0000-mdtlov_UUID (at 10.3.1.6@o2ib), waiting for all 15 known clients (13 recovered, 0 in progress, and 0 evicted) to recover in 2:23 <6>Lustre: fs1-OST0004: Recovery over after 0:08, of 15 clients 15 recovered and 0 were evicted. 
<3>LustreError: 137-5: fs1-OST0003_UUID: not available for connect from 10.3.1.6@o2ib (no target) <3>LustreError: Skipped 10 previous similar messages <4>Lustre: 20183:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1403858022/real 1403858022] req@ffff88032b84f400 x1472047455667612/t0(0) o39->fs1-MDT0000-lwp-OST0004@10.3.1.6@o2ib:12/10 lens 224/224 e 0 to 1 dl 1403858028 ref 2 fl Rpc:XN/0/ffffffff rc 0/-1 <3>LustreError: 20183:0:(obd_mount_server.c:1443:server_put_super()) fs1-OST0004: failed to disconnect lwp. (rc=-110) <4>Lustre: Failing over fs1-OST0004 <4>Lustre: server umount fs1-OST0004 complete <6>LNet: Removed LNI 10.4.1.10@o2ib1 <6>LNet: Removed LNI 10.3.1.10@o2ib <6>LNet: HW CPU cores: 8, npartitions: 2 <6>alg: No test for crc32 (crc32-table) <6>alg: No test for adler32 (adler32-zlib) <6>Lustre: Lustre: Build Version: T-bullpatches-2.4.2.0-FIX_17287_AER4-g9608be4-CHANGED-2.6.32-431.1.2.el6.Bull.44.x86_64 <6>LNet: Added LNI 10.3.1.10@o2ib [8/256/0/180] <6>LNet: Added LNI 10.4.1.10@o2ib1 [8/256/0/180] <6>LDISKFS-fs (dm-2): barriers disabled <6>LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. quota=on. Opts: <6>Lustre: fs1-OST0004: Imperative Recovery enabled, recovery window shrunk from 300-900 down to 150-450 <4>Lustre: 24620:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1403861042/real 1403861042] req@ffff880333d84800 x1472054993879096/t0(0) o38->fs1-MDT0000-lwp-OST0004@10.3.1.6@o2ib:12/10 lens 400/544 e 0 to 1 dl 1403861047 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <4>Lustre: fs1-OST0004: Will be in recovery for at least 2:30, or until 15 clients reconnect <6>Lustre: fs1-OST0004: Recovery over after 0:05, of 15 clients 15 recovered and 0 were evicted. 
<4>Lustre: 24620:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1403861067/real 1403861067] req@ffff88033991e000 x1472054993879108/t0(0) o38->fs1-MDT0000-lwp-OST0004@10.3.1.5@o2ib:12/10 lens 400/544 e 0 to 1 dl 1403861072 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <3>LustreError: 137-5: fs1-OST0002_UUID: not available for connect from 10.3.1.18@o2ib (no target) <3>LustreError: 137-5: fs1-OST0001_UUID: not available for connect from 10.4.1.18@o2ib1 (no target) <3>LustreError: 137-5: fs1-OST0002_UUID: not available for connect from 10.3.1.20@o2ib (no target) <3>LustreError: Skipped 2 previous similar messages <3>LustreError: 137-5: fs1-OST0002_UUID: not available for connect from 10.3.1.21@o2ib (no target) <3>LustreError: Skipped 10 previous similar messages <3>LustreError: 137-5: fs1-OST0002_UUID: not available for connect from 10.3.1.22@o2ib (no target) <3>LustreError: Skipped 3 previous similar messages <4>Lustre: 24620:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1403861092/real 1403861092] req@ffff88032ea9c800 x1472054993879116/t0(0) o38->fs1-MDT0000-lwp-OST0004@10.3.1.6@o2ib:12/10 lens 400/544 e 0 to 1 dl 1403861102 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <4>Lustre: 24620:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1403861117/real 1403861117] req@ffff88032b596800 x1472054993879124/t0(0) o38->fs1-MDT0000-lwp-OST0004@10.3.1.5@o2ib:12/10 lens 400/544 e 0 to 1 dl 1403861127 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <4>Lustre: 25950:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1403861420/real 1403861420] req@ffff88063ad88c00 x1472054993879224/t0(0) o39->fs1-MDT0000-lwp-OST0004@10.3.1.6@o2ib:12/10 lens 224/224 e 0 to 1 dl 1403861435 ref 2 fl Rpc:XN/0/ffffffff rc 0/-1 <3>LustreError: 25950:0:(obd_mount_server.c:1443:server_put_super()) fs1-OST0004: failed to disconnect lwp. (rc=-110) <4>Lustre: Failing over fs1-OST0004 <4>Lustre: server umount fs1-OST0004 complete <6>LNet: Removed LNI 10.4.1.10@o2ib1 <6>LNet: Removed LNI 10.3.1.10@o2ib <6>LDISKFS-fs (dm-2): barriers disabled <6>LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. quota=on. Opts: <6>LNet: HW CPU cores: 8, npartitions: 2 <6>alg: No test for crc32 (crc32-table) <6>alg: No test for adler32 (adler32-zlib) <6>Lustre: Lustre: Build Version: T-bullpatches-2.4.2.0-FIX_17287_AER4-g9608be4-CHANGED-2.6.32-431.1.2.el6.Bull.44.x86_64 <6>LNet: Added LNI 10.3.1.10@o2ib [8/256/0/180] <6>LNet: Added LNI 10.4.1.10@o2ib1 [8/256/0/180] <6>LDISKFS-fs (dm-2): barriers disabled <6>LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. quota=on. Opts: <6>LDISKFS-fs (dm-2): barriers disabled <6>LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. quota=on. Opts: <6>Lustre: fs1-OST0004: Imperative Recovery enabled, recovery window shrunk from 300-900 down to 150-450 <4>Lustre: fs1-OST0004: Will be in recovery for at least 2:30, or until 15 clients reconnect <4>Lustre: fs1-OST0004: Denying connection for new client fs1-MDT0000-mdtlov_UUID (at 10.3.1.6@o2ib), waiting for all 15 known clients (11 recovered, 0 in progress, and 0 evicted) to recover in 2:25 <6>Lustre: fs1-OST0004: Recovery over after 0:05, of 15 clients 15 recovered and 0 were evicted. 
<3>LustreError: 137-5: fs1-OST0003_UUID: not available for connect from 10.3.1.6@o2ib (no target) <4>Lustre: 1412:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1403865424/real 1403865424] req@ffff88063997cc00 x1472055429039496/t0(0) o39->fs1-MDT0000-lwp-OST0004@10.3.1.6@o2ib:12/10 lens 224/224 e 0 to 1 dl 1403865430 ref 2 fl Rpc:XN/0/ffffffff rc 0/-1 <3>LustreError: 1412:0:(obd_mount_server.c:1443:server_put_super()) fs1-OST0004: failed to disconnect lwp. (rc=-110) <4>Lustre: Failing over fs1-OST0004 <4>Lustre: server umount fs1-OST0004 complete <6>LNet: Removed LNI 10.4.1.10@o2ib1 <6>LNet: Removed LNI 10.3.1.10@o2ib <6>LNet: HW CPU cores: 8, npartitions: 2 <6>alg: No test for crc32 (crc32-table) <6>alg: No test for adler32 (adler32-zlib) <6>Lustre: Lustre: Build Version: T-bullpatches-2.4.2.0-FIX_17287_AER4-g9608be4-CHANGED-2.6.32-431.1.2.el6.Bull.44.x86_64 <3>LNetError: 2970:0:(o2iblnd_cb.c:2267:kiblnd_passive_connect()) Can't accept 10.3.1.13@o2ib on NA (ib0:0:10.3.1.10): bad dst nid 10.3.1.10@o2ib <6>LNet: Added LNI 10.3.1.10@o2ib [8/256/0/180] <6>LNet: Added LNI 10.4.1.10@o2ib1 [8/256/0/180] <6>LDISKFS-fs (dm-2): barriers disabled <6>LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. quota=on. Opts: <6>Lustre: fs1-OST0004: Imperative Recovery enabled, recovery window shrunk from 300-900 down to 150-450 <4>Lustre: 1991:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1403865660/real 1403865660] req@ffff88032634a400 x1472059836203064/t0(0) o38->fs1-MDT0000-lwp-OST0004@10.3.1.6@o2ib:12/10 lens 400/544 e 0 to 1 dl 1403865665 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <4>Lustre: fs1-OST0004: Will be in recovery for at least 2:30, or until 15 clients reconnect <4>Lustre: fs1-OST0004: Denying connection for new client fs1-MDT0000-mdtlov_UUID (at 10.3.1.6@o2ib), waiting for all 15 known clients (14 recovered, 0 in progress, and 0 evicted) to recover in 2:25 <6>Lustre: fs1-OST0004: Recovery over after 0:09, of 15 clients 15 recovered and 0 were evicted. <4>Lustre: 1991:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1403865685/real 1403865685] req@ffff88062d467000 x1472059836203076/t0(0) o38->fs1-MDT0000-lwp-OST0004@10.3.1.5@o2ib:12/10 lens 400/544 e 0 to 1 dl 1403865690 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <3>LustreError: 137-5: fs1-OST0003_UUID: not available for connect from 10.3.1.6@o2ib (no target) <4>Lustre: 1997:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1404119460/real 1404119460] req@ffff88032f00c800 x1472059836284284/t0(0) o400->fs1-MDT0000-lwp-OST0004@10.3.1.6@o2ib:12/10 lens 224/224 e 0 to 1 dl 1404119467 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <4>Lustre: fs1-MDT0000-lwp-OST0004: Connection to fs1-MDT0000 (at 10.3.1.6@o2ib) was lost; in progress operations using this service will wait for recovery to complete <4>Lustre: Failing over fs1-OST0004 <4>Lustre: server umount fs1-OST0004 complete <6>LNet: Removed LNI 10.4.1.10@o2ib1 <6>LNet: Removed LNI 10.3.1.10@o2ib <6>LDISKFS-fs (dm-2): barriers disabled <6>LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. quota=on. 
Opts: <6>LNet: HW CPU cores: 8, npartitions: 2 <6>alg: No test for crc32 (crc32-table) <6>alg: No test for adler32 (adler32-zlib) <6>Lustre: Lustre: Build Version: T-bullpatches-2.4.2.0-FIX_17287_AER4-g9608be4-CHANGED-2.6.32-431.1.2.el6.Bull.44.x86_64 <6>LNet: Added LNI 10.3.1.10@o2ib [8/256/0/180] <6>LNet: Added LNI 10.4.1.10@o2ib1 [8/256/0/180] <6>LDISKFS-fs (dm-2): barriers disabled <6>LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. quota=on. Opts: <6>LDISKFS-fs (dm-2): barriers disabled <6>LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. quota=on. Opts: <6>Lustre: fs1-OST0004: Imperative Recovery enabled, recovery window shrunk from 300-900 down to 150-450 <3>LustreError: 137-5: fs1-OST0003_UUID: not available for connect from 10.3.1.6@o2ib (no target) <4>Lustre: 6605:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1404120149/real 1404120149] req@ffff880639acfc00 x1472326016172428/t0(0) o39->fs1-MDT0000-lwp-OST0004@10.3.1.6@o2ib:12/10 lens 224/224 e 0 to 1 dl 1404120155 ref 2 fl Rpc:XN/0/ffffffff rc 0/-1 <3>LustreError: 6605:0:(obd_mount_server.c:1443:server_put_super()) fs1-OST0004: failed to disconnect lwp. (rc=-110) <4>Lustre: Failing over fs1-OST0004 <4>Lustre: server umount fs1-OST0004 complete <6>LNet: Removed LNI 10.4.1.10@o2ib1 <6>LNet: Removed LNI 10.3.1.10@o2ib <6>LDISKFS-fs (dm-2): barriers disabled <6>LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. quota=on. Opts: <6>LNet: HW CPU cores: 8, npartitions: 2 <6>alg: No test for crc32 (crc32-table) <6>alg: No test for adler32 (adler32-zlib) <6>Lustre: Lustre: Build Version: T-bullpatches-2.4.2.0-FIX_17287_AER4-g9608be4-CHANGED-2.6.32-431.1.2.el6.Bull.44.x86_64 <6>LNet: Added LNI 10.3.1.10@o2ib [8/256/0/180] <6>LNet: Added LNI 10.4.1.10@o2ib1 [8/256/0/180] <6>LDISKFS-fs (dm-2): barriers disabled <6>LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. quota=on. Opts: <6>LDISKFS-fs (dm-2): barriers disabled <6>LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. quota=on. Opts: <6>Lustre: fs1-OST0004: Imperative Recovery enabled, recovery window shrunk from 300-900 down to 150-450 <4>Lustre: fs1-OST0004: Will be in recovery for at least 2:30, or until 15 clients reconnect <6>Lustre: fs1-OST0004: Recovery over after 0:06, of 15 clients 15 recovered and 0 were evicted. 
<4>Lustre: 7091:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1404202144/real 1404202144] req@ffff88032ce6b000 x1472326735521492/t0(0) o400->MGC10.3.1.5@o2ib@10.3.1.5@o2ib:26/25 lens 224/224 e 0 to 1 dl 1404202151 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <3>LustreError: 166-1: MGC10.3.1.5@o2ib: Connection to MGS (at 10.3.1.5@o2ib) was lost; in progress operations using this service will fail <4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1404202144/real 1404202144] req@ffff880327313400 x1472326735521500/t0(0) o250->MGC10.3.1.5@o2ib@10.3.1.5@o2ib:26/25 lens 400/544 e 0 to 1 dl 1404202150 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1404202169/real 1404202169] req@ffff88032dcecc00 x1472326735521504/t0(0) o250->MGC10.3.1.5@o2ib@10.4.1.5@o2ib1:26/25 lens 400/544 e 0 to 1 dl 1404202175 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1404202194/real 1404202194] req@ffff88033764b000 x1472326735521512/t0(0) o250->MGC10.3.1.5@o2ib@10.3.1.6@o2ib:26/25 lens 400/544 e 0 to 1 dl 1404202200 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1404202219/real 1404202219] req@ffff880339012800 x1472326735521520/t0(0) o250->MGC10.3.1.5@o2ib@10.4.1.6@o2ib1:26/25 lens 400/544 e 0 to 1 dl 1404202225 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1404202244/real 1404202244] req@ffff880326540000 x1472326735521528/t0(0) o250->MGC10.3.1.5@o2ib@10.3.1.5@o2ib:26/25 lens 400/544 e 0 to 1 dl 1404202255 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1404202269/real 1404202269] req@ffff8803248f3000 x1472326735521536/t0(0) o250->MGC10.3.1.5@o2ib@10.4.1.5@o2ib1:26/25 lens 400/544 e 0 to 1 dl 1404202280 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1404202294/real 1404202294] req@ffff88032e248c00 x1472326735521544/t0(0) o250->MGC10.3.1.5@o2ib@10.3.1.6@o2ib:26/25 lens 400/544 e 0 to 1 dl 1404202305 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1404202344/real 1404202344] req@ffff88032ce6b400 x1472326735521560/t0(0) o250->MGC10.3.1.5@o2ib@10.3.1.5@o2ib:26/25 lens 400/544 e 0 to 1 dl 1404202360 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) Skipped 1 previous similar message <4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1404202394/real 1404202394] req@ffff8803286fe400 x1472326735521576/t0(0) o250->MGC10.3.1.5@o2ib@10.3.1.6@o2ib:26/25 lens 400/544 e 0 to 1 dl 1404202410 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) Skipped 1 previous similar message <4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1404202519/real 1404202519] req@ffff88032944e800 
x1472326735521616/t0(0) o250->MGC10.3.1.5@o2ib@10.4.1.6@o2ib1:26/25 lens 400/544 e 0 to 1 dl 1404202540 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) Skipped 4 previous similar messages <4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1404202794/real 1404202794] req@ffff88032a760c00 x1472326735521692/t0(0) o250->MGC10.3.1.5@o2ib@10.4.1.6@o2ib1:26/25 lens 400/544 e 0 to 1 dl 1404202825 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) Skipped 7 previous similar messages <4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1404203344/real 1404203344] req@ffff88032f44d000 x1472326735521840/t0(0) o250->MGC10.3.1.5@o2ib@10.3.1.6@o2ib:26/25 lens 400/544 e 0 to 1 dl 1404203395 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) Skipped 14 previous similar messages <4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1404203944/real 1404203944] req@ffff8803347ed000 x1472326735521984/t0(0) o250->MGC10.3.1.5@o2ib@10.3.1.6@o2ib:26/25 lens 400/544 e 0 to 1 dl 1404203999 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) Skipped 11 previous similar messages <4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1404204619/real 1404204619] req@ffff880333ace400 x1472326735522144/t0(0) o250->MGC10.3.1.5@o2ib@10.4.1.6@o2ib1:26/25 lens 400/544 e 0 to 1 dl 1404204674 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) Skipped 12 previous similar messages <4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1404205294/real 1404205294] req@ffff88032aa12800 x1472326735522304/t0(0) o250->MGC10.3.1.5@o2ib@10.3.1.5@o2ib:26/25 lens 400/544 e 0 to 1 dl 1404205349 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) Skipped 12 previous similar messages <4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1404205919/real 1404205919] req@ffff88032ce6b000 x1472326735522456/t0(0) o250->MGC10.3.1.5@o2ib@10.4.1.5@o2ib1:26/25 lens 400/544 e 0 to 1 dl 1404205974 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) Skipped 12 previous similar messages <4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1404206544/real 1404206544] req@ffff88032e11d000 x1472326735522608/t0(0) o250->MGC10.3.1.5@o2ib@10.3.1.6@o2ib:26/25 lens 400/544 e 0 to 1 dl 1404206599 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) Skipped 12 previous similar messages <4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1404207219/real 1404207219] req@ffff8803248f3000 x1472326735522768/t0(0) o250->MGC10.3.1.5@o2ib@10.4.1.6@o2ib1:26/25 lens 400/544 e 0 to 1 dl 1404207274 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) Skipped 12 previous similar messages <4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has failed due to 
network error: [sent 1404207894/real 1404207894] req@ffff8803255ca400 x1472326735522928/t0(0) o250->MGC10.3.1.5@o2ib@10.3.1.5@o2ib:26/25 lens 400/544 e 0 to 1 dl 1404207949 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) Skipped 12 previous similar messages <4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1404208519/real 1404208519] req@ffff8803342e7400 x1472326735523080/t0(0) o250->MGC10.3.1.5@o2ib@10.4.1.5@o2ib1:26/25 lens 400/544 e 0 to 1 dl 1404208574 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) Skipped 12 previous similar messages <4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1404209144/real 1404209144] req@ffff88033764b000 x1472326735523232/t0(0) o250->MGC10.3.1.5@o2ib@10.3.1.6@o2ib:26/25 lens 400/544 e 0 to 1 dl 1404209199 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) Skipped 12 previous similar messages <4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1404209819/real 1404209819] req@ffff88032de59800 x1472326735523392/t0(0) o250->MGC10.3.1.5@o2ib@10.4.1.6@o2ib1:26/25 lens 400/544 e 0 to 1 dl 1404209874 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) Skipped 12 previous similar messages <4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1404210494/real 1404210494] req@ffff88032515ec00 x1472326735523552/t0(0) o250->MGC10.3.1.5@o2ib@10.3.1.5@o2ib:26/25 lens 400/544 e 0 to 1 dl 1404210549 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) Skipped 12 previous similar messages <4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1404211119/real 1404211119] req@ffff880326687400 x1472326735523704/t0(0) o250->MGC10.3.1.5@o2ib@10.4.1.5@o2ib1:26/25 lens 400/544 e 0 to 1 dl 1404211174 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) Skipped 12 previous similar messages <4>Lustre: Evicted from MGS (at 10.3.1.5@o2ib) after server handle changed from 0x70580d322f2b1410 to 0xa32acc55e2895b33 <6>Lustre: MGC10.3.1.5@o2ib: Connection restored to MGS (at 10.3.1.5@o2ib) <4>Lustre: 7091:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for sent delay: [sent 1404212544/real 0] req@ffff880328e2e800 x1472326735524156/t0(0) o400->MGC10.3.1.5@o2ib@10.3.1.5@o2ib:26/25 lens 224/224 e 0 to 1 dl 1404212551 ref 2 fl Rpc:XN/0/ffffffff rc 0/-1 <4>Lustre: 7091:0:(client.c:1868:ptlrpc_expire_one_request()) Skipped 6 previous similar messages <3>LustreError: 166-1: MGC10.3.1.5@o2ib: Connection to MGS (at 10.3.1.5@o2ib) was lost; in progress operations using this service will fail <4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1404212627/real 1404212627] req@ffff8803286e8400 x1472326735524184/t0(0) o250->MGC10.3.1.5@o2ib@10.4.1.6@o2ib1:26/25 lens 400/544 e 0 to 1 dl 1404212633 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) Skipped 3 previous similar messages <4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow 
reply: [sent 1404212802/real 1404212802] req@ffff880326543800 x1472326735524240/t0(0) o250->MGC10.3.1.5@o2ib@10.3.1.6@o2ib:26/25 lens 400/544 e 0 to 1 dl 1404212818 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) Skipped 6 previous similar messages <4>Lustre: Evicted from MGS (at 10.3.1.5@o2ib) after server handle changed from 0xa32acc55e2895b33 to 0x776a0c70401137d4 <6>Lustre: MGC10.3.1.5@o2ib: Connection restored to MGS (at 10.3.1.5@o2ib) <4>Lustre: 7090:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1404213177/real 1404213177] req@ffff88032f7ee400 x1472326735524380/t0(0) o400->MGC10.3.1.5@o2ib@10.3.1.5@o2ib:26/25 lens 224/224 e 0 to 1 dl 1404213209 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <4>Lustre: 7090:0:(client.c:1868:ptlrpc_expire_one_request()) Skipped 9 previous similar messages <3>LustreError: 166-1: MGC10.3.1.5@o2ib: Connection to MGS (at 10.3.1.5@o2ib) was lost; in progress operations using this service will fail <4>Lustre: Evicted from MGS (at 10.3.1.5@o2ib) after server handle changed from 0x776a0c70401137d4 to 0x71041d6c621e1979 <6>Lustre: MGC10.3.1.5@o2ib: Connection restored to MGS (at 10.3.1.5@o2ib) <3>LustreError: 166-1: MGC10.3.1.5@o2ib: Connection to MGS (at 10.3.1.5@o2ib) was lost; in progress operations using this service will fail <4>Lustre: Evicted from MGS (at MGC10.3.1.5@o2ib_1) after server handle changed from 0x71041d6c621e1979 to 0x4e0f484c36ec8c9 <6>Lustre: MGC10.3.1.5@o2ib: Connection restored to MGS (at 10.4.1.5@o2ib1) <4>Lustre: 7092:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1404217552/real 1404217552] req@ffff88033a2d5400 x1472326735525836/t0(0) o400->MGC10.3.1.5@o2ib@10.4.1.5@o2ib1:26/25 lens 224/224 e 0 to 1 dl 1404217559 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <4>Lustre: 7092:0:(client.c:1868:ptlrpc_expire_one_request()) Skipped 6 previous similar messages <3>LustreError: 166-1: MGC10.3.1.5@o2ib: Connection to MGS (at 10.4.1.5@o2ib1) was lost; in progress operations using this service will fail <4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1404217634/real 1404217634] req@ffff88063171bc00 x1472326735525864/t0(0) o250->MGC10.3.1.5@o2ib@10.4.1.6@o2ib1:26/25 lens 400/544 e 0 to 1 dl 1404217640 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) Skipped 3 previous similar messages <4>Lustre: Evicted from MGS (at MGC10.3.1.5@o2ib_1) after server handle changed from 0x4e0f484c36ec8c9 to 0xdd08a0fb695e7b9a <6>Lustre: MGC10.3.1.5@o2ib: Connection restored to MGS (at 10.4.1.5@o2ib1) <3>LustreError: 166-1: MGC10.3.1.5@o2ib: Connection to MGS (at 10.4.1.5@o2ib1) was lost; in progress operations using this service will fail <4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1404217784/real 1404217784] req@ffff880628cc8c00 x1472326735525944/t0(0) o250->MGC10.3.1.5@o2ib@10.4.1.6@o2ib1:26/25 lens 400/544 e 0 to 1 dl 1404217795 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) Skipped 4 previous similar messages <4>Lustre: Evicted from MGS (at MGC10.3.1.5@o2ib_1) after server handle changed from 0xdd08a0fb695e7b9a to 0xbb475160d0960d22 <6>Lustre: MGC10.3.1.5@o2ib: Connection restored to MGS (at 10.4.1.5@o2ib1) <3>LustreError: 166-1: MGC10.3.1.5@o2ib: Connection to MGS (at 10.4.1.5@o2ib1) was lost; in 
progress operations using this service will fail <4>Lustre: Evicted from MGS (at 10.3.1.5@o2ib) after server handle changed from 0xbb475160d0960d22 to 0x6d46982f460598d3 <6>Lustre: MGC10.3.1.5@o2ib: Connection restored to MGS (at 10.3.1.5@o2ib) <4>Lustre: 7087:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1404218084/real 1404218084] req@ffff8803248f3c00 x1472326735526100/t0(0) o400->MGC10.3.1.5@o2ib@10.3.1.5@o2ib:26/25 lens 224/224 e 0 to 1 dl 1404218101 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <4>Lustre: 7087:0:(client.c:1868:ptlrpc_expire_one_request()) Skipped 2 previous similar messages <3>LustreError: 166-1: MGC10.3.1.5@o2ib: Connection to MGS (at 10.3.1.5@o2ib) was lost; in progress operations using this service will fail <4>Lustre: Evicted from MGS (at MGC10.3.1.5@o2ib_1) after server handle changed from 0x6d46982f460598d3 to 0xe417c2ec532f7f99 <6>Lustre: MGC10.3.1.5@o2ib: Connection restored to MGS (at 10.4.1.5@o2ib1) <4>Lustre: 7087:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1404314276/real 1404314276] req@ffff8803286fe400 x1472326735556892/t0(0) o400->MGC10.3.1.5@o2ib@10.4.1.5@o2ib1:26/25 lens 224/224 e 0 to 1 dl 1404314283 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <4>Lustre: 7087:0:(client.c:1868:ptlrpc_expire_one_request()) Skipped 17 previous similar messages <3>LustreError: 166-1: MGC10.3.1.5@o2ib: Connection to MGS (at 10.4.1.5@o2ib1) was lost; in progress operations using this service will fail <4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1404314358/real 1404314358] req@ffff880329228c00 x1472326735556920/t0(0) o250->MGC10.3.1.5@o2ib@10.4.1.6@o2ib1:26/25 lens 400/544 e 0 to 1 dl 1404314364 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) Skipped 3 previous similar messages <4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for sent delay: [sent 1404314508/real 0] req@ffff880328450800 x1472326735556968/t0(0) o250->MGC10.3.1.5@o2ib@10.3.1.5@o2ib:26/25 lens 400/544 e 0 to 1 dl 1404314524 ref 2 fl Rpc:XN/0/ffffffff rc 0/-1 <4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) Skipped 5 previous similar messages <4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1404314833/real 1404314833] req@ffff880327313800 x1472326735557064/t0(0) o250->MGC10.3.1.5@o2ib@10.4.1.5@o2ib1:26/25 lens 400/544 e 0 to 1 dl 1404314864 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) Skipped 10 previous similar messages <4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1404315458/real 1404315458] req@ffff880326540000 x1472326735557232/t0(0) o250->MGC10.3.1.5@o2ib@10.3.1.5@o2ib:26/25 lens 400/544 e 0 to 1 dl 1404315509 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) Skipped 16 previous similar messages <4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1404316083/real 1404316083] req@ffff880326543800 x1472326735557384/t0(0) o250->MGC10.3.1.5@o2ib@10.3.1.6@o2ib:26/25 lens 400/544 e 0 to 1 dl 1404316138 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1 <4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) Skipped 12 previous similar messages <4>Lustre: 
7084:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1404316758/real 1404316758] req@ffff88032862cc00 x1472326735557544/t0(0) o250->MGC10.3.1.5@o2ib@10.4.1.6@o2ib1:26/25 lens 400/544 e 0 to 1 dl 1404316813 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1
<4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) Skipped 12 previous similar messages
<4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1404317433/real 1404317433] req@ffff880326543800 x1472326735557704/t0(0) o250->MGC10.3.1.5@o2ib@10.4.1.5@o2ib1:26/25 lens 400/544 e 0 to 1 dl 1404317488 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1
<4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) Skipped 12 previous similar messages
<4>Lustre: Evicted from MGS (at MGC10.3.1.5@o2ib_1) after server handle changed from 0xe417c2ec532f7f99 to 0xccf39fa31ffde66a
<6>Lustre: MGC10.3.1.5@o2ib: Connection restored to MGS (at 10.4.1.5@o2ib1)
<4>Lustre: 7087:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for sent delay: [sent 1404318383/real 0] req@ffff8803338b8000 x1472326735558012/t0(0) o400->MGC10.3.1.5@o2ib@10.4.1.5@o2ib1:26/25 lens 224/224 e 0 to 1 dl 1404318390 ref 2 fl Rpc:XN/0/ffffffff rc 0/-1
<4>Lustre: 7087:0:(client.c:1868:ptlrpc_expire_one_request()) Skipped 3 previous similar messages
<3>LustreError: 166-1: MGC10.3.1.5@o2ib: Connection to MGS (at 10.4.1.5@o2ib1) was lost; in progress operations using this service will fail
<4>Lustre: Evicted from MGS (at MGC10.3.1.5@o2ib_1) after server handle changed from 0xccf39fa31ffde66a to 0xca82efdb6c6463fe
<6>Lustre: MGC10.3.1.5@o2ib: Connection restored to MGS (at 10.4.1.5@o2ib1)
<4>Lustre: 7086:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for sent delay: [sent 1404399765/real 0] req@ffff88032de3f400 x1472326735584084/t0(0) o400->MGC10.3.1.5@o2ib@10.4.1.5@o2ib1:26/25 lens 224/224 e 0 to 1 dl 1404399772 ref 2 fl Rpc:XN/0/ffffffff rc 0/-1
<4>Lustre: 7086:0:(client.c:1868:ptlrpc_expire_one_request()) Skipped 12 previous similar messages
<3>LustreError: 166-1: MGC10.3.1.5@o2ib: Connection to MGS (at 10.4.1.5@o2ib1) was lost; in progress operations using this service will fail
<4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1404399847/real 1404399847] req@ffff880327313400 x1472326735584112/t0(0) o250->MGC10.3.1.5@o2ib@10.4.1.6@o2ib1:26/25 lens 400/544 e 0 to 1 dl 1404399853 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1
<4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) Skipped 3 previous similar messages
<4>Lustre: Evicted from MGS (at 10.3.1.5@o2ib) after server handle changed from 0xca82efdb6c6463fe to 0x6e4bc0f0d944d7f2
<6>Lustre: MGC10.3.1.5@o2ib: Connection restored to MGS (at 10.3.1.5@o2ib)
<4>Lustre: 7091:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for sent delay: [sent 1404400647/real 0] req@ffff88032c8e7400 x1472326735584396/t0(0) o400->MGC10.3.1.5@o2ib@10.3.1.5@o2ib:26/25 lens 224/224 e 0 to 1 dl 1404400654 ref 2 fl Rpc:XN/0/ffffffff rc 0/-1
<4>Lustre: 7091:0:(client.c:1868:ptlrpc_expire_one_request()) Skipped 5 previous similar messages
<3>LustreError: 166-1: MGC10.3.1.5@o2ib: Connection to MGS (at 10.3.1.5@o2ib) was lost; in progress operations using this service will fail
<4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1404400679/real 1404400679] req@ffff88033201b000 x1472326735584408/t0(0) o250->MGC10.3.1.5@o2ib@10.4.1.5@o2ib1:26/25 lens 400/544 e 0 to 1 dl 1404400685 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1
<4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) Skipped 1 previous similar message
<4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1404400729/real 1404400729] req@ffff88032c48c800 x1472326735584424/t0(0) o250->MGC10.3.1.5@o2ib@10.4.1.6@o2ib1:26/25 lens 400/544 e 0 to 1 dl 1404400735 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1
<4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) Skipped 1 previous similar message
<4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1404400804/real 1404400804] req@ffff880334bb9400 x1472326735584448/t0(0) o250->MGC10.3.1.5@o2ib@10.3.1.6@o2ib:26/25 lens 400/544 e 0 to 1 dl 1404400815 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1
<4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) Skipped 2 previous similar messages
<4>Lustre: Evicted from MGS (at MGC10.3.1.5@o2ib_1) after server handle changed from 0x6e4bc0f0d944d7f2 to 0x59e0cc94842eea0f
<6>Lustre: MGC10.3.1.5@o2ib: Connection restored to MGS (at 10.4.1.5@o2ib1)
<4>Lustre: 7092:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for sent delay: [sent 1404401404/real 0] req@ffff88032c48cc00 x1472326735584668/t0(0) o400->MGC10.3.1.5@o2ib@10.4.1.5@o2ib1:26/25 lens 224/224 e 0 to 1 dl 1404401421 ref 2 fl Rpc:XN/0/ffffffff rc 0/-1
<4>Lustre: 7092:0:(client.c:1868:ptlrpc_expire_one_request()) Skipped 2 previous similar messages
<3>LustreError: 166-1: MGC10.3.1.5@o2ib: Connection to MGS (at 10.4.1.5@o2ib1) was lost; in progress operations using this service will fail
<4>Lustre: Evicted from MGS (at 10.3.1.5@o2ib) after server handle changed from 0x59e0cc94842eea0f to 0xecee58c7c12da7c
<6>Lustre: MGC10.3.1.5@o2ib: Connection restored to MGS (at 10.3.1.5@o2ib)
<4>Lustre: 7091:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for sent delay: [sent 1404458121/real 0] req@ffff88032e230c00 x1472326735602844/t0(0) o400->MGC10.3.1.5@o2ib@10.3.1.5@o2ib:26/25 lens 224/224 e 0 to 1 dl 1404458128 ref 2 fl Rpc:XN/0/ffffffff rc 0/-1
<4>Lustre: 7091:0:(client.c:1868:ptlrpc_expire_one_request()) Skipped 9 previous similar messages
<3>LustreError: 166-1: MGC10.3.1.5@o2ib: Connection to MGS (at 10.3.1.5@o2ib) was lost; in progress operations using this service will fail
<4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1404458178/real 1404458178] req@ffff880334bb9400 x1472326735602864/t0(0) o250->MGC10.3.1.5@o2ib@10.3.1.6@o2ib:26/25 lens 400/544 e 0 to 1 dl 1404458184 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1
<4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) Skipped 2 previous similar messages
<4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1404458278/real 1404458278] req@ffff880339b4a800 x1472326735602896/t0(0) o250->MGC10.3.1.5@o2ib@10.3.1.6@o2ib:26/25 lens 400/544 e 0 to 1 dl 1404458289 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1
<4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) Skipped 3 previous similar messages
<4>Lustre: Evicted from MGS (at 10.3.1.5@o2ib) after server handle changed from 0xecee58c7c12da7c to 0xb2368126cba4ce2
<6>Lustre: MGC10.3.1.5@o2ib: Connection restored to MGS (at 10.3.1.5@o2ib)
<4>Lustre: 7091:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1404459328/real 1404459328] req@ffff88033a2d5400 x1472326735603260/t0(0) o400->MGC10.3.1.5@o2ib@10.3.1.5@o2ib:26/25 lens 224/224 e 0 to 1 dl 1404459335 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1
<4>Lustre: 7091:0:(client.c:1868:ptlrpc_expire_one_request()) Skipped 5 previous similar messages
<3>LustreError: 166-1: MGC10.3.1.5@o2ib: Connection to MGS (at 10.3.1.5@o2ib) was lost; in progress operations using this service will fail
<4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1404459360/real 1404459360] req@ffff880328e0a000 x1472326735603272/t0(0) o250->MGC10.3.1.5@o2ib@10.4.1.5@o2ib1:26/25 lens 400/544 e 0 to 1 dl 1404459366 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1
<4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) Skipped 1 previous similar message
<4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1404459410/real 1404459410] req@ffff88033a206000 x1472326735603288/t0(0) o250->MGC10.3.1.5@o2ib@10.4.1.6@o2ib1:26/25 lens 400/544 e 0 to 1 dl 1404459416 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1
<4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) Skipped 1 previous similar message
<4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1404459485/real 1404459485] req@ffff880325016c00 x1472326735603312/t0(0) o250->MGC10.3.1.5@o2ib@10.3.1.6@o2ib:26/25 lens 400/544 e 0 to 1 dl 1404459496 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1
<4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) Skipped 2 previous similar messages
<4>Lustre: Evicted from MGS (at 10.3.1.5@o2ib) after server handle changed from 0xb2368126cba4ce2 to 0x1a801459090c81d
<6>Lustre: MGC10.3.1.5@o2ib: Connection restored to MGS (at 10.3.1.5@o2ib)
<4>Lustre: 7091:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for sent delay: [sent 1404460960/real 0] req@ffff880326efd000 x1472326735603812/t0(0) o400->MGC10.3.1.5@o2ib@10.3.1.5@o2ib:26/25 lens 224/224 e 0 to 1 dl 1404460967 ref 2 fl Rpc:XN/0/ffffffff rc 0/-1
<4>Lustre: 7091:0:(client.c:1868:ptlrpc_expire_one_request()) Skipped 5 previous similar messages
<3>LustreError: 166-1: MGC10.3.1.5@o2ib: Connection to MGS (at 10.3.1.5@o2ib) was lost; in progress operations using this service will fail
<4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1404460992/real 1404460992] req@ffff8803255ca400 x1472326735603824/t0(0) o250->MGC10.3.1.5@o2ib@10.4.1.5@o2ib1:26/25 lens 400/544 e 0 to 1 dl 1404460998 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1
<4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) Skipped 1 previous similar message
<4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1404461042/real 1404461042] req@ffff88033a2d5400 x1472326735603840/t0(0) o250->MGC10.3.1.5@o2ib@10.4.1.6@o2ib1:26/25 lens 400/544 e 0 to 1 dl 1404461048 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1
<4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) Skipped 1 previous similar message
<4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1404461117/real 1404461117] req@ffff88032bda2c00 x1472326735603864/t0(0) o250->MGC10.3.1.5@o2ib@10.3.1.6@o2ib:26/25 lens 400/544 e 0 to 1 dl 1404461128 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1
<4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) Skipped 2 previous similar messages
<4>Lustre: Evicted from MGS (at 10.3.1.5@o2ib) after server handle changed from 0x1a801459090c81d to 0x2d000126d3693572
<6>Lustre: MGC10.3.1.5@o2ib: Connection restored to MGS (at 10.3.1.5@o2ib)
<4>Lustre: 7089:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for sent delay: [sent 1404462817/real 0] req@ffff88032c48cc00 x1472326735604436/t0(0) o400->MGC10.3.1.5@o2ib@10.3.1.5@o2ib:26/25 lens 224/224 e 0 to 1 dl 1404462824 ref 2 fl Rpc:XN/0/ffffffff rc 0/-1
<4>Lustre: 7089:0:(client.c:1868:ptlrpc_expire_one_request()) Skipped 5 previous similar messages
<3>LustreError: 166-1: MGC10.3.1.5@o2ib: Connection to MGS (at 10.3.1.5@o2ib) was lost; in progress operations using this service will fail
<4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1404462849/real 1404462849] req@ffff880325339800 x1472326735604448/t0(0) o250->MGC10.3.1.5@o2ib@10.4.1.5@o2ib1:26/25 lens 400/544 e 0 to 1 dl 1404462855 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1
<4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) Skipped 1 previous similar message
<4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1404462899/real 1404462899] req@ffff880327313c00 x1472326735604464/t0(0) o250->MGC10.3.1.5@o2ib@10.4.1.6@o2ib1:26/25 lens 400/544 e 0 to 1 dl 1404462905 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1
<4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) Skipped 1 previous similar message
<4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1404462974/real 1404462974] req@ffff8803372f3000 x1472326735604488/t0(0) o250->MGC10.3.1.5@o2ib@10.3.1.6@o2ib:26/25 lens 400/544 e 0 to 1 dl 1404462985 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1
<4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) Skipped 2 previous similar messages
<4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1404463149/real 1404463149] req@ffff88032a8f4800 x1472326735604544/t0(0) o250->MGC10.3.1.5@o2ib@10.4.1.5@o2ib1:26/25 lens 400/544 e 0 to 1 dl 1404463170 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1
<4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) Skipped 6 previous similar messages
<4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1404463424/real 1404463424] req@ffff88032cebbc00 x1472326735604624/t0(0) o250->MGC10.3.1.5@o2ib@10.3.1.6@o2ib:26/25 lens 400/544 e 0 to 1 dl 1404463455 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1
<4>Lustre: 7084:0:(client.c:1868:ptlrpc_expire_one_request()) Skipped 8 previous similar messages
<4>Lustre: Evicted from MGS (at 10.3.1.5@o2ib) after server handle changed from 0x2d000126d3693572 to 0x6887c1fd983a1697
<6>Lustre: MGC10.3.1.5@o2ib: Connection restored to MGS (at 10.3.1.5@o2ib)
<4>Lustre: Failing over fs1-OST0004
<4>Lustre: server umount fs1-OST0004 complete
<6>LNet: Removed LNI 10.4.1.10@o2ib1
<6>LNet: Removed LNI 10.3.1.10@o2ib
<6>LNet: HW CPU cores: 8, npartitions: 2
<6>alg: No test for crc32 (crc32-table)
<6>alg: No test for adler32 (adler32-zlib)
<6>Lustre: Lustre: Build Version: T-bullpatches-2.4.2.0-FIX_17287_AER4-g9608be4-CHANGED-2.6.32-431.1.2.el6.Bull.44.x86_64
<6>LNet: Added LNI 10.3.1.10@o2ib [8/256/0/180]
<6>LNet: Added LNI 10.4.1.10@o2ib1 [8/256/0/180]
<6>LDISKFS-fs (dm-2): barriers disabled
<6>LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. quota=on. Opts:
<6>Lustre: fs1-OST0004: Imperative Recovery enabled, recovery window shrunk from 300-900 down to 150-450
<4>Lustre: fs1-OST0004: Will be in recovery for at least 2:30, or until 1 client reconnects
<6>Lustre: fs1-OST0004: Recovery over after 0:01, of 1 clients 1 recovered and 0 were evicted.
<6>Lustre: fs1-OST0004: deleting orphan objects from 0x0:930 to 0x0:961
<4>Lustre: Failing over fs1-OST0004
<4>Lustre: server umount fs1-OST0004 complete
<6>LNet: Removed LNI 10.4.1.10@o2ib1
<6>LNet: Removed LNI 10.3.1.10@o2ib
<6>LNet: HW CPU cores: 8, npartitions: 2
<6>alg: No test for crc32 (crc32-table)
<6>alg: No test for adler32 (adler32-zlib)
<6>Lustre: Lustre: Build Version: T-bullpatches-2.4.2.0-FIX_17287_AER4-g9608be4-CHANGED-2.6.32-431.1.2.el6.Bull.44.x86_64
<6>LNet: Added LNI 10.3.1.10@o2ib [8/256/0/180]
<6>LNet: Added LNI 10.4.1.10@o2ib1 [8/256/0/180]
<6>LDISKFS-fs (dm-2): barriers disabled
<6>LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. quota=on. Opts:
<6>Lustre: fs1-OST0004: Imperative Recovery enabled, recovery window shrunk from 300-900 down to 150-450
<4>Lustre: fs1-OST0004: Will be in recovery for at least 2:30, or until 1 client reconnects
<6>Lustre: fs1-OST0004: Recovery over after 0:01, of 1 clients 1 recovered and 0 were evicted.
<6>Lustre: fs1-OST0004: deleting orphan objects from 0x0:930 to 0x0:993
<3>LustreError: 161-6: The target named fs1-OST0004 is already running. Double-mount may have compromised the disk journal.
<3>LustreError: 8207:0:(obd_mount.c:1289:lustre_fill_super()) Unable to mount (-114)
<3>LustreError: 161-6: The target named fs1-OST0004 is already running. Double-mount may have compromised the disk journal.
<3>LustreError: 8291:0:(obd_mount.c:1289:lustre_fill_super()) Unable to mount (-114)
<4>Lustre: Failing over fs1-OST0004
<4>Lustre: server umount fs1-OST0004 complete
<6>LNet: Removed LNI 10.4.1.10@o2ib1
<6>LNet: Removed LNI 10.3.1.10@o2ib
<6>LNet: HW CPU cores: 8, npartitions: 2
<6>alg: No test for crc32 (crc32-table)
<6>alg: No test for adler32 (adler32-zlib)
<6>Lustre: Lustre: Build Version: T-bullpatches-2.4.2.0-FIX_17287_AER4-g9608be4-CHANGED-2.6.32-431.1.2.el6.Bull.44.x86_64
<6>LNet: Added LNI 10.3.1.10@o2ib [8/256/0/180]
<6>LNet: Added LNI 10.4.1.10@o2ib1 [8/256/0/180]
<6>LDISKFS-fs (dm-2): barriers disabled
<6>LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. quota=on. Opts:
<6>Lustre: fs1-OST0004: Imperative Recovery enabled, recovery window shrunk from 300-900 down to 150-450
<4>Lustre: fs1-OST0004: Will be in recovery for at least 2:30, or until 1 client reconnects
<6>Lustre: fs1-OST0004: Recovery over after 0:01, of 1 clients 1 recovered and 0 were evicted.
<6>Lustre: fs1-OST0004: deleting orphan objects from 0x0:930 to 0x0:1025
<4>Lustre: Failing over fs1-OST0004
<4>Lustre: server umount fs1-OST0004 complete
<6>LNet: Removed LNI 10.4.1.10@o2ib1
<3>LNetError: 2977:0:(o2iblnd_cb.c:2267:kiblnd_passive_connect()) Can't accept 10.3.1.6@o2ib on NA (ib0:1:10.3.1.10): bad dst nid 10.3.1.10@o2ib
<6>LNet: Removed LNI 10.3.1.10@o2ib
<6>LNet: HW CPU cores: 8, npartitions: 2
<6>alg: No test for crc32 (crc32-table)
<6>alg: No test for adler32 (adler32-zlib)
<6>Lustre: Lustre: Build Version: T-bullpatches-2.4.2.0-FIX_17287_AER4-g9608be4-CHANGED-2.6.32-431.1.2.el6.Bull.44.x86_64
<6>LNet: Added LNI 10.3.1.10@o2ib [8/256/0/180]
<6>LNet: Added LNI 10.4.1.10@o2ib1 [8/256/0/180]
<6>LDISKFS-fs (dm-2): barriers disabled
<6>LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. quota=on. Opts:
<3>LustreError: 12837:0:(genops.c:320:class_newdev()) Device fs1-OST0004-osd already exists at 0, won't add
<3>LustreError: 12837:0:(obd_config.c:374:class_attach()) Cannot create device fs1-OST0004-osd of type osd-ldiskfs : -17
<3>LustreError: 12837:0:(obd_mount.c:196:lustre_start_simple()) fs1-OST0004-osd attach error -17
<3>LustreError: 12837:0:(obd_mount_server.c:1682:server_fill_super()) Unable to start osd on /dev/mapper/mpathe: -17
<3>LustreError: 12837:0:(obd_mount.c:1289:lustre_fill_super()) Unable to mount (-17)
<6>Lustre: fs1-OST0004: Imperative Recovery enabled, recovery window shrunk from 300-900 down to 150-450
<4>Lustre: fs1-OST0004: Will be in recovery for at least 2:30, or until 1 client reconnects
<6>Lustre: fs1-OST0004: Recovery over after 0:01, of 1 clients 1 recovered and 0 were evicted.
<6>Lustre: fs1-OST0004: deleting orphan objects from 0x0:930 to 0x0:1057
<4>Lustre: Failing over fs1-OST0004
<4>Lustre: server umount fs1-OST0004 complete
<3>LustreError: 12957:0:(obd_class.h:1008:obd_connect()) Device 0 not setup
<3>LustreError: 12957:0:(obd_config.c:619:class_cleanup()) Device 0 not setup
<0>LustreError: 12957:0:(obd_mount_server.c:1651:osd_start()) ASSERTION( obd->obd_lu_dev ) failed:
<0>LustreError: 12957:0:(obd_mount_server.c:1651:osd_start()) LBUG
<4>Pid: 12957, comm: mount.lustre
<4>
<4>Call Trace:
<4> [] libcfs_debug_dumpstack+0x55/0x80 [libcfs]
<4> [] lbug_with_loc+0x47/0xb0 [libcfs]
<4> [] server_fill_super+0x14b3/0x1580 [obdclass]
<4> [] lustre_fill_super+0x1d8/0x530 [obdclass]
<4> [] ? lustre_fill_super+0x0/0x530 [obdclass]
<4> [] get_sb_nodev+0x5f/0xa0
<4> [] lustre_get_sb+0x25/0x30 [obdclass]
<4> [] vfs_kern_mount+0x7b/0x1b0
<4> [] do_kern_mount+0x52/0x130
<4> [] do_mount+0x2fb/0x930
<4> [] sys_mount+0x90/0xe0
<4> [] system_call_fastpath+0x16/0x1b
<4>
<0>Kernel panic - not syncing: LBUG
<4>Pid: 12957, comm: mount.lustre Not tainted 2.6.32-431.1.2.el6.Bull.44.x86_64 #1
<4>Call Trace:
<4> [] ? panic+0xa7/0x16f
<4> [] ? lbug_with_loc+0x9b/0xb0 [libcfs]
<4> [] ? server_fill_super+0x14b3/0x1580 [obdclass]
<4> [] ? lustre_fill_super+0x1d8/0x530 [obdclass]
<4> [] ? lustre_fill_super+0x0/0x530 [obdclass]
<4> [] ? get_sb_nodev+0x5f/0xa0
<4> [] ? lustre_get_sb+0x25/0x30 [obdclass]
<4> [] ? vfs_kern_mount+0x7b/0x1b0
<4> [] ? do_kern_mount+0x52/0x130
<4> [] ? do_mount+0x2fb/0x930
<4> [] ? sys_mount+0x90/0xe0
<4> [] ? system_call_fastpath+0x16/0x1b
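
Note on the panic path above: after the earlier attach of fs1-OST0004-osd fails with -17, the next mount attempt (pid 12957) reaches osd_start() with obd->obd_lu_dev still unset, so the libcfs assertion fires, LBUG is raised, and the node panics (the "Kernel panic - not syncing: LBUG" line indicates the panic-on-LBUG behaviour). The C program below is a minimal, self-contained sketch of that assertion flow only; sketch_panic, SKETCH_LBUG and SKETCH_LASSERT are hypothetical stand-ins, not the actual libcfs macros.

/*
 * Sketch of the libcfs-style assertion flow seen in the log:
 * a failed assertion prints the expression, reports LBUG, then panics.
 * Illustration only, NOT Lustre/libcfs source.
 */
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical stand-in for the kernel's panic(); never returns. */
static void sketch_panic(const char *msg)
{
        fprintf(stderr, "Kernel panic - not syncing: %s\n", msg);
        abort();
}

/* Simplified LBUG: report the location, then panic (as with panic-on-LBUG). */
#define SKETCH_LBUG(file, line)                                             \
        do {                                                                \
                fprintf(stderr, "LustreError: (%s:%d) LBUG\n", file, line); \
                sketch_panic("LBUG");                                       \
        } while (0)

/* Simplified LASSERT: on a false condition, print the expression and LBUG. */
#define SKETCH_LASSERT(cond)                                                \
        do {                                                                \
                if (!(cond)) {                                              \
                        fprintf(stderr,                                     \
                                "LustreError: ASSERTION( %s ) failed:\n",   \
                                #cond);                                     \
                        SKETCH_LBUG(__FILE__, __LINE__);                    \
                }                                                           \
        } while (0)

/* Hypothetical cut-down obd_device with just the field the assertion checks. */
struct obd_device_sketch {
        void *obd_lu_dev;
};

int main(void)
{
        /* obd_lu_dev stays NULL, mirroring the failed osd attach, so the
         * assertion fires and the sketch "panics". */
        struct obd_device_sketch obd = { .obd_lu_dev = NULL };

        SKETCH_LASSERT(obd.obd_lu_dev);
        return 0;
}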