[ 0.304773] random: fast init done
[ 0.306711] random: crng init done
[ 0.315173] brd: module loaded
[ 0.332642] loop: module loaded
[ 0.334170] virtio_blk virtio3: [vda] 1526344 512-byte logical blocks (781 MB/745 MiB)
[ 0.334363] vda: detected capacity change from 0 to 781488128
[ 0.339547] virtio_blk virtio4: [vdb] 78328 512-byte logical blocks (40.1 MB/38.2 MiB)
[ 0.339856] vdb: detected capacity change from 0 to 40103936
[ 0.354046] i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
[ 0.358582] serio: i8042 KBD port at 0x60,0x64 irq 1
[ 0.358710] serio: i8042 AUX port at 0x60,0x64 irq 12
[ 0.359156] device-mapper: uevent: version 1.0.3
[ 0.359639] device-mapper: ioctl: 4.43.0-ioctl (2020-10-01) initialised: dm-devel@redhat.com
[ 0.360771] NET: Registered protocol family 10
[ 0.369499] Segment Routing with IPv6
[ 0.369707] NET: Registered protocol family 17
[ 0.372236] sched_clock: Marking stable (370046523, 0)->(770194448, -400147925)
[ 0.377835] registered taskstats version 1
[ 0.384183] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[ 0.410089] Sending DHCP requests ., OK
[ 0.464722] IP-Config: Got DHCP answer from 192.168.120.1, my address is 192.168.125.30
[ 0.464779] IP-Config: Complete:
[ 0.464811] device=eth0, hwaddr=3a:f2:ed:d1:2e:f7, ipaddr=192.168.125.30, mask=255.255.248.0, gw=192.168.120.1
[ 0.464872] host=192.168.125.30, domain=, nis-domain=(none)
[ 0.464915] bootserver=192.168.120.1, rootserver=192.168.120.1, rootpath=
[ 0.464916] nameserver0=192.168.120.1
[ 0.470410] VFS: Mounted root (squashfs filesystem) readonly on device 254:0.
[ 0.473160] devtmpfs: mounted
[ 0.473249] debug: unmapping init [mem 0xffffffff90c03000-0xffffffff90dfffff]
[ 0.473406] debug: unmapping init [mem 0xffffffff901d0000-0xffffffff904bdfff]
[ 0.500241] Write protecting the kernel read-only data: 14336k
[ 0.505727] debug: unmapping init [mem 0xffff8b770d808000-0xffff8b770d9fffff]
[ 0.505950] debug: unmapping init [mem 0xffff8b770dc5b000-0xffff8b770ddfffff]
[ 0.821958] systemd[1]: /etc/systemd/system.conf:69: Invalid log level'error': Invalid argument
[ 0.822730] systemd[1]: systemd 239 (239-51.el8_5.2) running in system mode. (+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=legacy)
[ 0.822953] systemd[1]: Detected virtualization kvm.
[ 0.823004] systemd[1]: Detected architecture x86-64.
Welcome to CentOS Linux 8!
[ 0.824006] systemd[1]: Set hostname to .
[ 1.244242] systemd[1]: Reached target Network is Online.
[ OK ] Reached target Network is Online.
[ 1.260932] systemd[1]: Reached target Timers.
[ OK ] Reached target Timers.
[ 1.262853] systemd[1]: Listening on RPCbind Server Activation Socket.
[ OK ] Listening on RPCbind Server Activation Socket.
[ 1.272698] systemd[1]: Reached target RPC Port Mapper.
[ OK ] Reached target RPC Port Mapper.
[ 1.273537] systemd[1]: Listening on udev Kernel Socket.
[ OK ] Listening on udev Kernel Socket.
[ 1.274450] systemd[1]: Reached target Slices.
[ OK ] Reached target Slices.
[ OK ] Listening on udev Control Socket.
[ OK ] Listening on Process Core Dump Socket.
[ OK ] Listening on Journal Socket.
Starting Create list of required st…ce nodes for the current kernel...
Mounting Kernel Debug File System...
[ OK ] Listening on Journal Socket (/dev/log).
Starting Journal Service...
[ OK ] Created slice system-sshd\x2dkeygen.slice.
[ OK ] Set up automount Arbitrary Executab…rmats File System Automount Point.
[ OK ] Reached target Paths.
[ OK ] Listening on initctl Compatibility Named Pipe.
Starting udev Coldplug all Devices...
[ 1.368131] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Starting Configure read-only root support...
Starting Load Kernel Modules...
Mounting /tmp...
[ OK ] Started Create list of required sta…vice nodes for the current kernel.
[ OK ] Mounted Kernel Debug File System.
[ OK ] Started Load Kernel Modules.
[ OK ] Started Journal Service.
[ 1.704219] systemd[1]: Mounted /tmp.
[ OK ] Mounted /tmp.
[ 1.789544] systemd[1]: Starting Apply Kernel Variables...
Starting Apply Kernel Variables...
[ 1.877490] systemd[1]: Starting Create Static Device Nodes in /dev...
[ 1.881213] systemd-sysctl[773]: Couldn't write '0' to 'kernel/yama/ptrace_scope', ignoring: No such file or directory
[ 1.883552] systemd-sysctl[773]: Couldn't write 'fq_codel' to 'net/core/default_qdisc', ignoring: No such file or directory
Starting Create Static Device Nodes in /dev...
[ 1.939224] systemd[1]: Started Apply Kernel Variables.
[ OK ] Started Apply Kernel Variables.
[ 1.988145] systemd[1]: Started udev Coldplug all Devices.
[ OK ] Started udev Coldplug all Devices.
[ OK ] Started Create Static Device Nodes in /dev.
[ 1.996162] systemd[1]: Started Create Static Device Nodes in /dev.
[ 2.000838] systemd[1]: Starting udev Kernel Device Manager...
Starting udev Kernel Device Manager...
[ 2.146021] systemd-udevd[1035]: Specified user 'tss' unknown
[ 2.148016] systemd-udevd[1035]: Specified group 'tss' unknown
[ OK ] Started udev Kernel Device Manager.
[ 2.163928] systemd[1]: Started udev Kernel Device Manager.
[ 2.673802] systemd-udevd[1070]: Using default interface naming scheme 'rhel-8.0'.
[ 2.687520] systemd-udevd[1070]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
[ 2.688507] systemd-udevd[1070]: Error changing net interface name 'eth0' to 'enp0s2': Device or resource busy
[ 2.688818] systemd-udevd[1070]: could not rename interface '2' from 'eth0' to 'enp0s2': Device or resource busy
[ 2.775593] systemd-udevd[1073]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
[ 2.992719] systemd[1]: Started Configure read-only root support.
[ OK ] Started Configure read-only root support.
[ OK ] Reached target Local File Systems.
[ 3.607871] systemd[1]: Reached target Local File Systems.
[ 3.637302] systemd[1]: Starting Mark the need to relabel after reboot...
Starting Mark the need to relabel after reboot...
[ 3.644485] systemd[1]: Starting Create Volatile Files and Directories...
Starting Create Volatile Files and Directories...
[ 3.676226] touch[1144]: touch: cannot touch '/.autorelabel': Read-only file system
[ OK ] Started Mark the need to relabel after reboot.
[ 3.720739] [ OK ] Started Create Volatile Files and Directories.
[ OK ] Reached target System Initialization.
[ OK ] Listening on D-Bus System Message Bus Socket.
[ OK ] Reached target Sockets.
[ OK ] Reached target Basic System.
Starting Permit User Sessions...
systemd[1]: Started Mark the need to relabel after reboot.
[ 3.774150] systemd-tmpfiles[1145]: /var/spool does not exist and cannot be created as the file system is read-only.
[ 3.774668] systemd[1]: Started Create Volatile Files and Directories.
[ 3.775112] systemd[1]: Reached target System Initialization.
[ 3.775307] systemd[1]: Listening on D-Bus System Message Bus Socket.
[ 3.775482] systemd[1]: Reached target Sockets.
[ 3.775653] systemd[1]: Reached target Basic System.
[ 3.775823] systemd[1]: Starting Permit User Sessions...
[ 3.775994] systemd[1]: Started D-Bus System Message Bus.
[ OK ] Started D-Bus System Message Bus.
[ 3.779272] systemd[1]: Starting /etc/rc.d/rc.local Compatibility...
Starting /etc/rc.d/rc.local Compatibility...
[ 3.781196] systemd[1]: Reached target sshd-keygen.target.
[ OK ] Reached target sshd-keygen.target.
[ 3.783398] systemd[1]: Starting OpenSSH server daemon...
Starting OpenSSH server daemon...
[ 3.787218] systemd[1]: Starting RPC Bind...
Starting RPC Bind...
[ 3.789654] systemd[1]: Started Permit User Sessions.
[ OK ] Started Permit User Sessions.
[ OK ] Started /etc/rc.d/rc.local Compatibility.
[ 3.807951] systemd[1]: Started /etc/rc.d/rc.local Compatibility.
[ OK ] Started OpenSSH server daemon.
[ OK ] Reached target Multi-User System.
[ 3.894923] systemd[1]: Started OpenSSH server daemon.
[ 3.898567] systemd[1]: Reached target Multi-User System.
[ 3.898742] systemd[1]: Started RPC Bind.
[ OK ] Started RPC Bind.
[ 3.930417] systemd[1]: Startup finished in 758ms (kernel) + 3.171s (userspace) = 3.930s.
[ 7.714198] /dev/vdb: Can't open blockdev
[ 13.190065] libcfs: loading out-of-tree module taints kernel.
[ 13.197058] systemd-udevd[1035]: Specified user 'tss' unknown
[ 13.198983] LNet: HW NUMA nodes: 1, HW CPU cores: 2, npartitions: 1
[ 13.207436] systemd-udevd[1035]: Specified group 'tss' unknown
[ 14.073369] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing check_logdir /tmp/ltest-logs
[ 14.779948] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing yml_node
[ 16.203478] Lustre: DEBUG MARKER: Client: 2.15.54
[ 16.434455] Lustre: DEBUG MARKER: MDS: 2.15.54
[ 16.720494] Lustre: DEBUG MARKER: OSS: 2.15.54
[ 16.851882] Lustre: DEBUG MARKER: excepting tests: 32 53 63 102 115 119 123F 32newtarball 110
[ 16.904614] Lustre: DEBUG MARKER: skipping tests SLOW=no: 45 69 106 111 114
[ 18.721319] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing set_hostid
[ 19.003143] systemd-udevd[1035]: Specified user 'tss' unknown
[ 19.055911] systemd-udevd[1035]: Specified group 'tss' unknown
[ 19.080771] systemd-udevd[2787]: Using default interface naming scheme 'rhel-8.0'.
[ 19.684381] Lustre: Lustre: Build Version: 2.15.54
[ 19.923151] LNet: Added LNI 192.168.125.30@tcp [8/256/0/180]
[ 19.924346] LNet: Accept secure, port 988
[ 20.513127] Lustre: Echo OBD driver; http://www.lustre.org/
[ 22.084522] systemd[1]: usr-sbin-mount.lustre.mount: Succeeded.
[ 26.542874] ZFS: Loaded module v2.1.2-1, ZFS pool version 5000, ZFS filesystem version 5
[ 27.081262] blk_update_request: operation not supported error, dev loop0, sector 399872 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 27.086319] blk_update_request: operation not supported error, dev loop0, sector 208 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 27.086915] blk_update_request: operation not supported error, dev loop0, sector 160232 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 27.087422] blk_update_request: operation not supported error, dev loop0, sector 160240 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 27.087749] blk_update_request: operation not supported error, dev loop0, sector 160248 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 27.088080] blk_update_request: operation not supported error, dev loop0, sector 160256 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 27.088405] blk_update_request: operation not supported error, dev loop0, sector 160264 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 27.088724] blk_update_request: operation not supported error, dev loop0, sector 160272 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 27.089047] blk_update_request: operation not supported error, dev loop0, sector 160280 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 27.089375] blk_update_request: operation not supported error, dev loop0, sector 160288 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 28.128134] LDISKFS-fs (loop0): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 28.154428] systemd[1]: tmp-mntrEaaEI.mount: Succeeded.
[ 31.710963] LDISKFS-fs (loop0): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 31.722457] systemd[1]: tmp-mnt2QhFz4.mount: Succeeded.
[ 33.321109] print_req_error: 8188 callbacks suppressed
[ 33.321112] blk_update_request: operation not supported error, dev loop0, sector 399872 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 33.331471] blk_update_request: operation not supported error, dev loop0, sector 8224 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 33.331978] blk_update_request: operation not supported error, dev loop0, sector 58264 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 33.338929] blk_update_request: operation not supported error, dev loop0, sector 58272 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 33.518839] LDISKFS-fs (loop0): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 36.041129] blk_update_request: operation not supported error, dev loop0, sector 399872 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 36.041507] blk_update_request: operation not supported error, dev loop0, sector 8224 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 36.041996] blk_update_request: operation not supported error, dev loop0, sector 58264 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 36.062576] blk_update_request: operation not supported error, dev loop0, sector 58272 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 36.245946] LDISKFS-fs (loop0): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 36.272466] systemd[1]: tmp-mntFuT222.mount: Succeeded.
[ 37.724758] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 37.737918] systemd[1]: tmp-mntl3NMBN.mount: Succeeded.
[ 37.773605] Lustre: lustre-MDT0000: mounting server target with '-t lustre' deprecated, use '-t lustre_tgt'
[ 37.783927] ------------[ cut here ]------------
[ 37.784044] DEBUG_LOCKS_WARN_ON(!lockdep_enabled())
[ 37.784062] WARNING: CPU: 0 PID: 4756 at kernel/locking/lockdep.c:4263 lockdep_init_map_waits+0x1bd/0x210
[ 37.784249] Modules linked in: zfs(O) zunicode(O) zzstd(O) zlua(O) zcommon(O) znvpair(O) zavl(O) icp(O) spl(O) lustre(O) ofd(O) osp(O) lod(O) ost(O) mdt(O) mdd(O) mgs(O) osd_ldiskfs(O) ldiskfs(O) lquota(O) lfsck(O) obdecho(O) mgc(O) mdc(O) lov(O) osc(O) lmv(O) fid(O) fld(O) ptlrpc(O) obdclass(O) ksocklnd(O) lnet(O) libcfs(O)
[ 37.784603] CPU: 0 PID: 4756 Comm: mount.lustre Tainted: G O --------- - - 4.18.0 #2
[ 37.784718] Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011
[ 37.784804] RIP: 0010:lockdep_init_map_waits+0x1bd/0x210
[ 37.784873] Code: 00 85 c0 0f 84 41 ff ff ff 8b 05 96 fe e7 00 85 c0 0f 85 33 ff ff ff 48 c7 c6 bb 06 bb 8f 48 c7 c7 44 dd b9 8f e8 29 0a fc ff <0f> 0b e9 19 ff ff ff e8 d7 57 4a 00 85 c0 74 0c 44 8b 1d 64 fe e7
[ 37.785107] RSP: 0018:ffff8b7862743830 EFLAGS: 00010286
[ 37.785174] RAX: 0000000000000027 RBX: ffff8b7858c1c828 RCX: 0000000000000000
[ 37.785273] RDX: 0000000000000007 RSI: ffffffff8f1007b0 RDI: 0000000000000246
[ 37.785372] RBP: ffffffffc0e33d30 R08: ffffffff908fce60 R09: 0000000000000027
[ 37.785470] R10: 0000000000000000 R11: 0000000000001294 R12: 0000000000000002
[ 37.785569] R13: 0000000000000001 R14: 0000000000000000 R15: ffff8b7858c1c828
[ 37.785668] FS: 00007f3e1ee508c0(0000) GS:ffff8b7871000000(0000) knlGS:0000000000000000
[ 37.785768] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 37.785851] CR2: 000055d358194c20 CR3: 000000012efe8000 CR4: 00000000000006b0
[ 37.789087] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 37.789237] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 37.789385] Call Trace:
[ 37.789453] ldiskfs_enable_quotas+0x133/0x240 [ldiskfs]
[ 37.789566] ldiskfs_fill_super+0x2827/0x3560 [ldiskfs]
[ 37.789689] ? ldiskfs_calculate_overhead+0x470/0x470 [ldiskfs]
[ 37.789820] mount_bdev+0x178/0x1b0
[ 37.789899] legacy_get_tree+0x28/0x50
[ 37.789985] vfs_get_tree+0x18/0x90
[ 37.790106] fc_mount+0x9/0x40
[ 37.790184] vfs_kern_mount.part.12+0x57/0x80
[ 37.790301] osd_mount+0x482/0xca0 [osd_ldiskfs]
[ 37.790416] osd_device_alloc+0x37d/0xb80 [osd_ldiskfs]
[ 37.790558] class_setup+0x690/0xad0 [obdclass]
[ 37.790668] ? lockdep_init_map_waits+0x4b/0x210
[ 37.790789] class_process_config+0x14ad/0x2da0 [obdclass]
[ 37.790915] ? do_lcfg+0x15a/0x4b0 [obdclass]
[ 37.791047] do_lcfg+0x223/0x4b0 [obdclass]
[ 37.791148] lustre_start_simple+0x72/0x1c0 [obdclass]
[ 37.791319] osd_start+0x549/0x790 [ptlrpc]
[ 37.792705] ? simple_strtoull+0x2b/0x50
[ 37.792805] ? target_name2index+0x8d/0xb0 [obdclass]
[ 37.792966] server_fill_super+0x3a4/0x10e0 [ptlrpc]
[ 37.793102] lustre_fill_super+0x38f/0x480 [lustre]
[ 37.793221] ? lustre_mount+0x10/0x10 [lustre]
[ 37.793328] mount_nodev+0x41/0x90
[ 37.793407] legacy_get_tree+0x28/0x50
[ 37.793485] vfs_get_tree+0x18/0x90
[ 37.793565] ? ns_capable_common+0x26/0x40
[ 37.793643] do_mount+0x80e/0x9e0
[ 37.793723] ksys_mount+0xb1/0xd0
[ 37.793802] __x64_sys_mount+0x1c/0x20
[ 37.793881] do_syscall_64+0x43/0x120
[ 37.793964] entry_SYSCALL_64_after_hwframe+0x65/0xca
[ 37.794068] RIP: 0033:0x7f3e1b98592e
[ 37.794147] Code: 48 8b 0d 5d 15 2c 00 f7 d8 64 89 01 48 83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 90 f3 0f 1e fa 49 89 ca b8 a5 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 2a 15 2c 00 f7 d8 64 89 01 48
[ 37.794489] RSP: 002b:00007ffc5c729178 EFLAGS: 00000286 ORIG_RAX: 00000000000000a5
[ 37.794638] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f3e1b98592e
[ 37.794786] RDX: 000000000040cee4 RSI: 00007ffc5c72f808 RDI: 00000000006259a0
[ 37.794934] RBP: 00007ffc5c72e800 R08: 00000000006259c0 R09: 0000000000000004
[ 37.795089] R10: 0000000001000000 R11: 0000000000000286 R12: 00000000006259c0
[ 37.795236] R13: 000000000040cee4 R14: 00000000fffffff5 R15: 00007ffc5c72f808
[ 37.795385] ---[ end trace d0f0e7702700102e ]---
[ 37.795614] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 38.962586] Lustre: Setting parameter lustre-MDT0000.mdt.identity_upcall in log lustre-MDT0000
[ 38.976974] Lustre: ctl-lustre-MDT0000: No data found on store. Initialize space: rc = -61
[ 39.060696] Lustre: lustre-MDT0000: new disk, initializing
[ 39.114326] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180
[ 39.120529] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000200000400-0x0000000240000400]:0:mdt
[ 41.311754] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 41.318210] systemd[1]: tmp-mnt7L1PPs.mount: Succeeded.
[ 41.363887] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 41.388533] Lustre: Setting parameter lustre-MDT0001.mdt.identity_upcall in log lustre-MDT0001
[ 41.400716] Lustre: srv-lustre-MDT0001: No data found on store. Initialize space: rc = -61
[ 41.400969] Lustre: Skipped 1 previous similar message
[ 41.433795] Lustre: lustre-MDT0001: new disk, initializing
[ 41.475586] Lustre: lustre-MDT0001: Imperative Recovery not enabled, recovery window 60-180
[ 41.484421] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000240000400-0x0000000280000400]:1:mdt
[ 41.497032] Lustre: cli-ctl-lustre-MDT0001: Allocated super-sequence [0x0000000240000400-0x0000000280000400]:1:mdt]
[ 43.595999] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 44.091325] ------------[ cut here ]------------
[ 44.091479] do not call blocking ops when !TASK_RUNNING; state=1 set at [<00000000a9275031>] prepare_to_wait_event+0x76/0x100
[ 44.091659] WARNING: CPU: 1 PID: 5283 at kernel/sched/core.c:6700 __might_sleep+0x63/0x70
[ 44.091763] Modules linked in: zfs(O) zunicode(O) zzstd(O) zlua(O) zcommon(O) znvpair(O) zavl(O) icp(O) spl(O) lustre(O) ofd(O) osp(O) lod(O) ost(O) mdt(O) mdd(O) mgs(O) osd_ldiskfs(O) ldiskfs(O) lquota(O) lfsck(O) obdecho(O) mgc(O) mdc(O) lov(O) osc(O) lmv(O) fid(O) fld(O) ptlrpc(O) obdclass(O) ksocklnd(O) lnet(O) libcfs(O)
[ 44.092126] CPU: 1 PID: 5283 Comm: lod0000_rec0001 Tainted: G W O --------- - - 4.18.0 #2
[ 44.092242] Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011
[ 44.092330] RIP: 0010:__might_sleep+0x63/0x70
[ 44.092399] Code: 5b 5d 41 5c e9 4e ff ff ff 48 8b 90 48 1d 00 00 48 c7 c7 18 fb ba 8f c6 05 17 11 e9 00 01 48 8b 70 10 48 89 d1 e8 23 f3 fd ff <0f> 0b eb ca 66 0f 1f 84 00 00 00 00 00 85 ff 75 0a 65 48 8b 04 25
[ 44.092627] RSP: 0018:ffff8b783e2f7990 EFLAGS: 00010286
[ 44.092696] RAX: 0000000000000071 RBX: ffffffff8fbbe474 RCX: 0000000000000007
[ 44.092795] RDX: 0000000000000007 RSI: ffffffff8f0ff165 RDI: ffff8b78713e5450
[ 44.092899] RBP: 0000000000000202 R08: 0000000000000000 R09: 0000000000000000
[ 44.093001] R10: 0000000000000000 R11: ffff8b783e2f7855 R12: 0000000000000000
[ 44.093100] R13: 0000000000000000 R14: ffff8b786510c800 R15: ffff8b783e938a80
[ 44.093199] FS: 0000000000000000(0000) GS:ffff8b7871200000(0000) knlGS:0000000000000000
[ 44.093298] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 44.093382] CR2: 000055c0ac263730 CR3: 000000000de12000 CR4: 00000000000006a0
[ 44.093484] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 44.093586] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 44.093686] Call Trace:
[ 44.093790] ? null_alloc_repbuf+0x137/0x2d0 [ptlrpc]
[ 44.093869] __kmalloc+0xfd/0x1b0
[ 44.093964] null_alloc_repbuf+0x137/0x2d0 [ptlrpc]
[ 44.094075] sptlrpc_cli_alloc_repbuf+0x146/0x1e0 [ptlrpc]
[ 44.094180] ptl_send_rpc+0x757/0x1180 [ptlrpc]
[ 44.094284] ? ptlrpc_import_delay_req+0xbb/0x420 [ptlrpc]
[ 44.094388] ptlrpc_check_set+0x2023/0x3180 [ptlrpc]
[ 44.094467] ? _raw_spin_lock_irqsave+0x46/0x80
[ 44.094566] ptlrpc_set_wait+0x45c/0x760 [ptlrpc]
[ 44.094643] ? wait_woken+0xa0/0xa0
[ 44.094726] ptlrpc_queue_wait+0x7f/0x230 [ptlrpc]
[ 44.094813] osp_remote_sync+0x134/0x1b0 [osp]
[ 44.094896] osp_attr_get+0x56c/0x810 [osp]
[ 44.094960] osp_object_init+0x1a0/0x2d0 [osp]
[ 44.095070] lu_object_start.isra.8+0x66/0xf0 [obdclass]
[ 44.095166] lu_object_find_at+0x4e8/0xb20 [obdclass]
[ 44.095263] dt_locate_at+0x13/0xa0 [obdclass]
[ 44.095356] llog_osd_get_cat_list+0xe0/0xde0 [obdclass]
[ 44.095451] lod_sub_prep_llog+0x13d/0x7cf [lod]
[ 44.095533] ? lod_sub_cancel_llog+0x8d0/0x8d0 [lod]
[ 44.095616] ? lod_sub_cancel_llog+0x8d0/0x8d0 [lod]
[ 44.095697] lod_sub_recovery_thread+0xd8/0xb10 [lod]
[ 44.095772] ? __schedule+0x2a5/0x670
[ 44.095828] ? _raw_spin_lock_irqsave+0x46/0x80
[ 44.095910] ? lod_sub_cancel_llog+0x8d0/0x8d0 [lod]
[ 44.095987] kthread+0x129/0x140
[ 44.096042] ? kthread_flush_work_fn+0x10/0x10
[ 44.096113] ret_from_fork+0x1f/0x30
[ 44.096167] ---[ end trace d0f0e7702700102f ]---
[ 44.246847] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
[ 45.278857] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 45.287758] systemd[1]: tmp-mnt20iRjD.mount: Succeeded.
[ 45.332424] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[ 45.435217] Lustre: lustre-OST0000: new disk, initializing
[ 45.438091] Lustre: srv-lustre-OST0000: No data found on store. Initialize space: rc = -61
[ 45.468916] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180
[ 47.363258] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
[ 47.896486] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 40
[ 50.497381] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000280000400-0x00000002c0000400]:0:ost
[ 50.497825] Lustre: cli-lustre-OST0000-super: Allocated super-sequence [0x0000000280000400-0x00000002c0000400]:0:ost]
[ 50.507279] Lustre: lustre-OST0000-osc-MDT0000: update sequence from 0x100000000 to 0x280000401
[ 51.208766] Lustre: DEBUG MARKER: os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 3 sec
[ 51.993775] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 40
[ 52.144817] Lustre: DEBUG MARKER: os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec
[ 52.238636] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded.
[ 55.520379] LustreError: 11-0: lustre-OST0000-osc-MDT0001: operation ost_statfs to node 0@lo failed: rc = -107
[ 55.520628] Lustre: lustre-OST0000-osc-MDT0001: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[ 55.522129] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping)
[ 58.457136] Lustre: server umount lustre-OST0000 complete
[ 58.721465] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded.
[ 60.560780] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[ 60.565960] Lustre: Skipped 1 previous similar message
[ 60.569134] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping)
[ 60.573341] Lustre: Skipped 2 previous similar messages
[ 64.910646] Lustre: server umount lustre-MDT0000 complete
[ 65.206880] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded.
[ 65.257875] LustreError: 4764:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679930393 with bad export cookie 11572197365976411600 [ 65.263013] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 65.819273] Lustre: DEBUG MARKER: == conf-sanity test 0: single mount setup ================ 15:19:54 (1679930394) [ 66.056000] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 66.212100] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 66.227591] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 66.841277] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 67.621586] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 68.182077] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 68.522612] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 69.408468] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 71.637837] Lustre: Mounted lustre-client [ 78.120488] systemd[1]: mnt-lustre.mount: Succeeded. [ 78.195610] Lustre: Unmounted lustre-client [ 78.285152] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 81.680410] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107 [ 81.688748] LustreError: Skipped 1 previous similar message [ 81.688836] Lustre: lustre-OST0000-osc-MDT0000: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 81.689050] Lustre: Skipped 1 previous similar message [ 81.694509] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 82.722024] LustreError: 11-0: lustre-OST0000-osc-MDT0001: operation ost_statfs to node 0@lo failed: rc = -107 [ 84.519896] Lustre: server umount lustre-OST0000 complete [ 84.521275] Lustre: Skipped 1 previous similar message [ 84.686031] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 87.120356] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 87.120651] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 87.120745] Lustre: Skipped 1 previous similar message [ 87.120945] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 87.120998] Lustre: Skipped 2 previous similar messages [ 90.909423] Lustre: server umount lustre-MDT0000 complete [ 91.254606] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. 
[ 91.283573] LustreError: 6192:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679930419 with bad export cookie 11572197365976412440 [ 91.287683] LustreError: 6192:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) Skipped 4 previous similar messages [ 91.288111] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 94.780763] LNet: 7431:0:(lib-ptl.c:956:lnet_clear_lazy_portal()) Active lazy portal 0 on exit [ 94.781218] LNet: Removed LNI 192.168.125.30@tcp [ 95.792723] systemd-udevd[1035]: Specified user 'tss' unknown [ 95.793178] systemd-udevd[1035]: Specified group 'tss' unknown [ 95.868272] systemd-udevd[7773]: Using default interface naming scheme 'rhel-8.0'. [ 95.917912] systemd[1]: usr-sbin-mount.lustre.mount: Succeeded. [ 96.726863] LNet: HW NUMA nodes: 1, HW CPU cores: 2, npartitions: 1 [ 97.560919] Lustre: DEBUG MARKER: == conf-sanity test 1: start up ost twice (should return errors) ========================================================== 15:20:25 (1679930425) [ 97.717707] systemd-udevd[1035]: Specified user 'tss' unknown [ 97.718698] systemd-udevd[1035]: Specified group 'tss' unknown [ 97.739330] systemd-udevd[8118]: Using default interface naming scheme 'rhel-8.0'. [ 98.047550] Lustre: Lustre: Build Version: 2.15.54 [ 98.097984] LNet: Added LNI 192.168.125.30@tcp [8/256/0/180] [ 98.098185] LNet: Accept secure, port 988 [ 98.434775] Lustre: Echo OBD driver; http://www.lustre.org/ [ 99.620400] Lustre: lustre-MDT0000: mounting server target with '-t lustre' deprecated, use '-t lustre_tgt' [ 99.622975] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 100.736489] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 100.757761] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 101.108785] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 101.606845] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 102.020521] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 102.307981] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 102.428070] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 102.428171] Lustre: Skipped 1 previous similar message [ 102.979998] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 106.338480] Lustre: lustre-OST0000: deleting orphan objects from 0x280000401:35 to 0x280000401:65 [ 111.368634] Lustre: Mounted lustre-client [ 111.615527] systemd[1]: mnt-lustre.mount: Succeeded. [ 111.672773] Lustre: Unmounted lustre-client [ 111.726114] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. 
[ 116.400362] LustreError: 11-0: lustre-OST0000-osc-MDT0001: operation ost_statfs to node 0@lo failed: rc = -107 [ 116.401379] Lustre: lustre-OST0000-osc-MDT0000: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 116.410240] Lustre: Skipped 1 previous similar message [ 116.411118] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 117.898322] Lustre: server umount lustre-OST0000 complete [ 118.097687] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 121.360374] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 121.360511] LustreError: Skipped 1 previous similar message [ 121.360555] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 121.360865] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 121.360928] Lustre: Skipped 1 previous similar message [ 126.480442] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 126.483272] Lustre: Skipped 2 previous similar messages [ 131.520892] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 131.521364] Lustre: Skipped 1 previous similar message [ 132.320043] Lustre: lustre-MDT0000 is waiting for obd_unlinked_exports more than 8 seconds. The obd refcount = 2. Is it stuck? [ 132.384059] Lustre: server umount lustre-MDT0000 complete [ 132.558310] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 132.560705] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 132.561252] LustreError: Skipped 1 previous similar message [ 132.597377] LustreError: 8785:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679930460 with bad export cookie 13515422054573257917 [ 132.597647] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 135.180326] LNet: 10110:0:(lib-ptl.c:956:lnet_clear_lazy_portal()) Active lazy portal 0 on exit [ 135.180666] LNet: Removed LNI 192.168.125.30@tcp [ 136.156489] systemd-udevd[1035]: Specified user 'tss' unknown [ 136.160037] systemd-udevd[1035]: Specified group 'tss' unknown [ 136.207815] systemd-udevd[10459]: Using default interface naming scheme 'rhel-8.0'. [ 136.265262] systemd[1]: usr-sbin-mount.lustre.mount: Succeeded. [ 136.635610] LNet: HW NUMA nodes: 1, HW CPU cores: 2, npartitions: 1 [ 137.422212] Lustre: DEBUG MARKER: == conf-sanity test 2: start up mds twice (should return err) ========================================================== 15:21:04 (1679930464) [ 137.571639] systemd-udevd[1035]: Specified user 'tss' unknown [ 137.578654] systemd-udevd[1035]: Specified group 'tss' unknown [ 137.608159] systemd-udevd[10901]: Using default interface naming scheme 'rhel-8.0'. [ 137.892702] Lustre: Lustre: Build Version: 2.15.54 [ 137.945877] LNet: Added LNI 192.168.125.30@tcp [8/256/0/180] [ 137.946038] LNet: Accept secure, port 988 [ 138.249857] Lustre: Echo OBD driver; http://www.lustre.org/ [ 138.979694] Lustre: lustre-MDT0000: mounting server target with '-t lustre' deprecated, use '-t lustre_tgt' [ 138.982422] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 140.070298] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 140.079429] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 140.393971] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 140.888661] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 141.175546] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 141.542390] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 141.612446] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 141.612583] Lustre: Skipped 1 previous similar message [ 142.023636] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 145.126315] Lustre: lustre-OST0000: deleting orphan objects from 0x280000401:67 to 0x280000401:97 [ 150.168200] Lustre: Mounted lustre-client [ 150.433131] systemd[1]: mnt-lustre.mount: Succeeded. [ 150.495988] Lustre: Unmounted lustre-client [ 150.555476] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 155.200590] LustreError: 11-0: lustre-OST0000-osc-MDT0001: operation ost_statfs to node 0@lo failed: rc = -107 [ 155.210115] Lustre: lustre-OST0000-osc-MDT0001: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 155.210462] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 156.759914] Lustre: server umount lustre-OST0000 complete [ 156.948730] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 160.240702] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 160.241094] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 160.246145] Lustre: Skipped 2 previous similar messages [ 160.246299] Lustre: Skipped 1 previous similar message [ 163.149171] Lustre: server umount lustre-MDT0000 complete [ 163.301563] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 163.351007] LustreError: 11464:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679930491 with bad export cookie 12614087704758879991 [ 163.356693] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 166.090387] LNet: 12789:0:(lib-ptl.c:956:lnet_clear_lazy_portal()) Active lazy portal 0 on exit [ 166.090726] LNet: Removed LNI 192.168.125.30@tcp [ 167.014948] systemd-udevd[1035]: Specified user 'tss' unknown [ 167.015899] systemd-udevd[1035]: Specified group 'tss' unknown [ 167.060523] systemd-udevd[13134]: Using default interface naming scheme 'rhel-8.0'. [ 167.195440] systemd[1]: usr-sbin-mount.lustre.mount: Succeeded. 
[ 167.479839] LNet: HW NUMA nodes: 1, HW CPU cores: 2, npartitions: 1 [ 168.259050] Lustre: DEBUG MARKER: == conf-sanity test 3: mount client twice (should return err) ========================================================== 15:21:35 (1679930495) [ 168.414144] systemd-udevd[1035]: Specified user 'tss' unknown [ 168.415376] systemd-udevd[1035]: Specified group 'tss' unknown [ 168.455248] systemd-udevd[13493]: Using default interface naming scheme 'rhel-8.0'. [ 168.745575] Lustre: Lustre: Build Version: 2.15.54 [ 168.791797] LNet: Added LNI 192.168.125.30@tcp [8/256/0/180] [ 168.791971] LNet: Accept secure, port 988 [ 169.086885] Lustre: Echo OBD driver; http://www.lustre.org/ [ 170.276453] Lustre: lustre-MDT0000: mounting server target with '-t lustre' deprecated, use '-t lustre_tgt' [ 170.278010] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 171.411757] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 171.428428] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 171.745993] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 172.291772] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 172.592106] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 172.799886] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 172.879221] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 172.879422] Lustre: Skipped 1 previous similar message [ 173.328099] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 176.484112] Lustre: lustre-OST0000: deleting orphan objects from 0x280000401:99 to 0x280000401:129 [ 177.528447] Lustre: Mounted lustre-client [ 182.803424] systemd[1]: mnt-lustre.mount: Succeeded. [ 182.881072] Lustre: Unmounted lustre-client [ 182.921287] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 186.560614] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107 [ 186.560753] Lustre: lustre-OST0000-osc-MDT0000: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 186.561060] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 187.600369] LustreError: 11-0: lustre-OST0000-osc-MDT0001: operation ost_statfs to node 0@lo failed: rc = -107 [ 187.601846] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 187.607481] Lustre: lustre-OST0000-osc-MDT0001: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 187.612196] Lustre: Skipped 1 previous similar message [ 189.103955] Lustre: server umount lustre-OST0000 complete [ 189.378427] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. 
[ 192.000424] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 192.000563] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 192.000873] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 197.680760] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 197.680997] Lustre: Skipped 2 previous similar messages [ 202.720445] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 202.733685] Lustre: Skipped 1 previous similar message [ 204.000079] Lustre: lustre-MDT0000 is waiting for obd_unlinked_exports more than 8 seconds. The obd refcount = 2. Is it stuck? [ 204.054720] Lustre: server umount lustre-MDT0000 complete [ 204.212075] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 204.261172] LustreError: 14144:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679930532 with bad export cookie 15457683429043636183 [ 204.261453] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 206.900359] LNet: 15394:0:(lib-ptl.c:956:lnet_clear_lazy_portal()) Active lazy portal 0 on exit [ 206.900889] LNet: Removed LNI 192.168.125.30@tcp [ 207.874051] systemd-udevd[1035]: Specified user 'tss' unknown [ 207.875096] systemd-udevd[1035]: Specified group 'tss' unknown [ 207.926186] systemd-udevd[15617]: Using default interface naming scheme 'rhel-8.0'. [ 207.999403] systemd[1]: usr-sbin-mount.lustre.mount: Succeeded. [ 208.294818] LNet: HW NUMA nodes: 1, HW CPU cores: 2, npartitions: 1 [ 209.068036] Lustre: DEBUG MARKER: == conf-sanity test 4: force cleanup ost, then cleanup === 15:22:16 (1679930536) [ 209.233256] systemd-udevd[1035]: Specified user 'tss' unknown [ 209.234235] systemd-udevd[1035]: Specified group 'tss' unknown [ 209.289089] systemd-udevd[16086]: Using default interface naming scheme 'rhel-8.0'. [ 209.586577] Lustre: Lustre: Build Version: 2.15.54 [ 209.635464] LNet: Added LNI 192.168.125.30@tcp [8/256/0/180] [ 209.636787] LNet: Accept secure, port 988 [ 210.071243] Lustre: Echo OBD driver; http://www.lustre.org/ [ 210.899393] Lustre: lustre-MDT0000: mounting server target with '-t lustre' deprecated, use '-t lustre_tgt' [ 210.900998] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 211.982310] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 211.992855] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 212.286809] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 212.762672] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 213.066948] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 213.272677] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. 
Opts: errors=remount-ro,no_mbcache,nodelalloc [ 213.342218] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 213.342338] Lustre: Skipped 1 previous similar message [ 213.799331] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 216.883630] Lustre: lustre-OST0000: deleting orphan objects from 0x280000401:131 to 0x280000401:161 [ 221.928864] Lustre: Mounted lustre-client [ 223.124697] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 226.961837] LustreError: 11-0: lustre-OST0000-osc-MDT0001: operation ost_statfs to node 0@lo failed: rc = -107 [ 226.969509] Lustre: lustre-OST0000-osc-MDT0001: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 226.977912] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 226.981825] Lustre: Skipped 2 previous similar messages [ 232.000829] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 232.000943] Lustre: Skipped 1 previous similar message [ 237.040881] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 237.045367] Lustre: Skipped 3 previous similar messages [ 237.920127] Lustre: lustre-OST0000 is waiting for obd_unlinked_exports more than 8 seconds. The obd refcount = 2. Is it stuck? [ 237.971965] Lustre: server umount lustre-OST0000 complete [ 238.205587] Lustre: setting import lustre-MDT0000_UUID INACTIVE by administrator request [ 248.644343] systemd[1]: mnt-lustre.mount: Succeeded. [ 248.702332] Lustre: Unmounted lustre-client [ 248.889741] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 248.945727] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 248.945944] Lustre: Skipped 2 previous similar messages [ 248.946381] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 254.000495] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 254.000636] Lustre: Skipped 2 previous similar messages [ 255.164351] Lustre: server umount lustre-MDT0000 complete [ 255.337615] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 255.384350] LustreError: 16796:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679930583 with bad export cookie 10216740037880527309 [ 255.385681] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 269.920048] Lustre: lustre-MDT0001 is waiting for obd_unlinked_exports more than 8 seconds. The obd refcount = 2. Is it stuck? [ 269.988917] Lustre: server umount lustre-MDT0001 complete [ 272.680336] LNet: 18004:0:(lib-ptl.c:956:lnet_clear_lazy_portal()) Active lazy portal 0 on exit [ 272.693384] LNet: Removed LNI 192.168.125.30@tcp [ 273.636782] systemd-udevd[1035]: Specified user 'tss' unknown [ 273.640041] systemd-udevd[1035]: Specified group 'tss' unknown [ 273.703878] systemd-udevd[18353]: Using default interface naming scheme 'rhel-8.0'. [ 273.855473] systemd[1]: usr-sbin-mount.lustre.mount: Succeeded. 
[ 274.110446] LNet: HW NUMA nodes: 1, HW CPU cores: 2, npartitions: 1 [ 274.895589] Lustre: DEBUG MARKER: == conf-sanity test 5a: force cleanup mds, then cleanup == 15:23:22 (1679930602) [ 275.051747] systemd-udevd[1035]: Specified user 'tss' unknown [ 275.059014] systemd-udevd[1035]: Specified group 'tss' unknown [ 275.101183] systemd-udevd[18788]: Using default interface naming scheme 'rhel-8.0'. [ 275.366801] Lustre: Lustre: Build Version: 2.15.54 [ 275.418519] LNet: Added LNI 192.168.125.30@tcp [8/256/0/180] [ 275.419381] LNet: Accept secure, port 988 [ 275.716559] Lustre: Echo OBD driver; http://www.lustre.org/ [ 276.559519] Lustre: lustre-MDT0000: mounting server target with '-t lustre' deprecated, use '-t lustre_tgt' [ 276.562386] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 277.667427] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 277.680886] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 278.003653] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 278.447623] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 278.754068] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 278.974414] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 279.049152] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 279.049250] Lustre: Skipped 1 previous similar message [ 279.431623] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 283.522894] Lustre: lustre-OST0000: deleting orphan objects from 0x280000401:163 to 0x280000401:193 [ 288.568930] Lustre: Mounted lustre-client [ 289.763350] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 293.200405] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 293.207058] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 293.214079] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 295.975250] Lustre: server umount lustre-MDT0000 complete [ 296.166727] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 296.213832] LustreError: 19356:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679930624 with bad export cookie 144316741302993924 [ 296.214123] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 298.640563] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. 
[ 298.640866] Lustre: lustre-MDT0001-lwp-OST0000: Connection to lustre-MDT0001 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 298.645373] LustreError: Skipped 1 previous similar message [ 298.645508] Lustre: Skipped 3 previous similar messages [ 298.645971] Lustre: lustre-MDT0001: Not available for connect from 0@lo (stopping) [ 298.646045] Lustre: Skipped 4 previous similar messages [ 301.680713] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 301.680942] Lustre: lustre-MDT0001: Not available for connect from 0@lo (stopping) [ 301.683816] Lustre: Skipped 1 previous similar message [ 307.760851] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 307.760995] LustreError: Skipped 3 previous similar messages [ 307.761048] Lustre: lustre-MDT0001: Not available for connect from 0@lo (stopping) [ 307.761131] Lustre: Skipped 2 previous similar messages [ 310.880039] Lustre: lustre-MDT0001 is waiting for obd_unlinked_exports more than 8 seconds. The obd refcount = 2. Is it stuck? [ 310.965257] Lustre: server umount lustre-MDT0001 complete [ 311.074794] Lustre: setting import lustre-MDT0000_UUID INACTIVE by administrator request [ 321.522295] systemd[1]: mnt-lustre.mount: Succeeded. [ 321.591057] Lustre: Unmounted lustre-client [ 321.682231] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 327.896957] Lustre: server umount lustre-OST0000 complete [ 330.620544] LNet: 20657:0:(lib-ptl.c:956:lnet_clear_lazy_portal()) Active lazy portal 0 on exit [ 330.637779] LNet: Removed LNI 192.168.125.30@tcp [ 331.490477] systemd-udevd[1035]: Specified user 'tss' unknown [ 331.496837] systemd-udevd[1035]: Specified group 'tss' unknown [ 331.539863] systemd-udevd[20933]: Using default interface naming scheme 'rhel-8.0'. [ 331.714102] systemd[1]: usr-sbin-mount.lustre.mount: Succeeded. [ 333.075279] LNet: HW NUMA nodes: 1, HW CPU cores: 2, npartitions: 1 [ 333.882815] Lustre: DEBUG MARKER: == conf-sanity test 5b: Try to start a client with no MGS (should return errs) ========================================================== 15:24:21 (1679930661) [ 334.180273] systemd-udevd[1035]: Specified user 'tss' unknown [ 334.184939] systemd-udevd[1035]: Specified group 'tss' unknown [ 334.245016] systemd-udevd[21461]: Using default interface naming scheme 'rhel-8.0'. [ 334.548693] Lustre: Lustre: Build Version: 2.15.54 [ 334.604233] LNet: Added LNI 192.168.125.30@tcp [8/256/0/180] [ 334.606717] LNet: Accept secure, port 988 [ 335.068175] Lustre: Echo OBD driver; http://www.lustre.org/ [ 336.025164] Lustre: lustre-OST0000: mounting server target with '-t lustre' deprecated, use '-t lustre_tgt' [ 336.026699] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. 
Opts: errors=remount-ro,no_mbcache,nodelalloc [ 353.760256] LustreError: 22012:0:(mgc_request.c:252:do_config_log_add()) MGC192.168.125.30@tcp: failed processing log, type 1: rc = -5 [ 384.960075] LustreError: 22012:0:(mgc_request.c:252:do_config_log_add()) MGC192.168.125.30@tcp: failed processing log, type 4: rc = -110 [ 414.080079] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 414.732089] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 425.200320] LustreError: 15c-8: MGC192.168.125.30@tcp: Confguration from log lustre-client failed from MGS -5. Communication error between node & MGS, a bad configuration, or other errors. See syslog for more info [ 425.205561] Lustre: Unmounted lustre-client [ 425.205670] LustreError: 22221:0:(super25.c:188:lustre_fill_super()) llite: Unable to mount : rc = -5 [ 425.281141] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 425.363682] Lustre: server umount lustre-OST0000 complete [ 428.240210] LNet: 22645:0:(lib-ptl.c:956:lnet_clear_lazy_portal()) Active lazy portal 0 on exit [ 428.240555] LNet: Removed LNI 192.168.125.30@tcp [ 429.109663] systemd-udevd[1035]: Specified user 'tss' unknown [ 429.116069] systemd-udevd[1035]: Specified group 'tss' unknown [ 429.152020] systemd-udevd[22981]: Using default interface naming scheme 'rhel-8.0'. [ 429.269091] systemd[1]: usr-sbin-mount.lustre.mount: Succeeded. [ 429.575710] LNet: HW NUMA nodes: 1, HW CPU cores: 2, npartitions: 1 [ 430.388329] Lustre: DEBUG MARKER: == conf-sanity test 5c: cleanup after failed mount (bug 2712) (should return errs) ========================================================== 15:25:57 (1679930757) [ 430.603768] systemd-udevd[1035]: Specified user 'tss' unknown [ 430.604656] systemd-udevd[1035]: Specified group 'tss' unknown [ 430.655409] systemd-udevd[23446]: Using default interface naming scheme 'rhel-8.0'. [ 430.995827] Lustre: Lustre: Build Version: 2.15.54 [ 431.059528] LNet: Added LNI 192.168.125.30@tcp [8/256/0/180] [ 431.059800] LNet: Accept secure, port 988 [ 431.616146] Lustre: Echo OBD driver; http://www.lustre.org/ [ 433.225169] Lustre: lustre-MDT0000: mounting server target with '-t lustre' deprecated, use '-t lustre_tgt' [ 433.242876] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 434.410735] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 434.420575] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 434.768748] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 435.281045] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 435.640728] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 435.911866] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. 
Opts: errors=remount-ro,no_mbcache,nodelalloc [ 435.986369] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 435.986558] Lustre: Skipped 1 previous similar message [ 436.430791] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 436.528036] LustreError: 24739:0:(llite_lib.c:1376:ll_fill_super()) wrong.lustre: fsname longer than 8 characters: rc = -36 [ 436.528251] Lustre: Unmounted wrong.lustre-client [ 436.528338] LustreError: 24739:0:(super25.c:188:lustre_fill_super()) llite: Unable to mount : rc = -36 [ 436.616699] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 436.700510] Lustre: server umount lustre-OST0000 complete [ 436.862039] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 436.925556] LustreError: 24068:0:(osp_object.c:637:osp_attr_get()) lustre-MDT0001-osp-MDT0000: osp_attr_get update error [0x200000009:0x1:0x0]: rc = -5 [ 436.926851] Lustre: lustre-MDT0000-lwp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 436.927618] LustreError: 24068:0:(lod_sub_object.c:932:lod_sub_prep_llog()) lustre-MDT0000-mdtlov: can't get id from catalogs: rc = -5 [ 436.927622] LustreError: 24068:0:(lod_dev.c:525:lod_sub_recovery_thread()) lustre-MDT0001-osp-MDT0000: get update log duration 3, retries 0, failed: rc = -5 [ 436.937046] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 440.000916] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 440.001187] Lustre: Skipped 2 previous similar messages [ 443.096769] Lustre: server umount lustre-MDT0000 complete [ 443.273649] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 443.333541] LustreError: 24015:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679930771 with bad export cookie 11459200346544521610 [ 443.334373] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 446.010245] LNet: 25172:0:(lib-ptl.c:956:lnet_clear_lazy_portal()) Active lazy portal 0 on exit [ 446.023733] LNet: Removed LNI 192.168.125.30@tcp [ 446.995698] systemd-udevd[1035]: Specified user 'tss' unknown [ 446.995905] systemd-udevd[1035]: Specified group 'tss' unknown [ 447.052440] systemd-udevd[25506]: Using default interface naming scheme 'rhel-8.0'. [ 447.370213] systemd[1]: usr-sbin-mount.lustre.mount: Succeeded. [ 447.636134] LNet: HW NUMA nodes: 1, HW CPU cores: 2, npartitions: 1 [ 448.417367] Lustre: DEBUG MARKER: == conf-sanity test 5d: mount with ost down ============== 15:26:15 (1679930775) [ 448.585584] systemd-udevd[1035]: Specified user 'tss' unknown [ 448.586553] systemd-udevd[1035]: Specified group 'tss' unknown [ 448.627977] systemd-udevd[25926]: Using default interface naming scheme 'rhel-8.0'. [ 448.888872] Lustre: Lustre: Build Version: 2.15.54 [ 448.935909] LNet: Added LNI 192.168.125.30@tcp [8/256/0/180] [ 448.936073] LNet: Accept secure, port 988 [ 449.229904] Lustre: Echo OBD driver; http://www.lustre.org/ [ 449.967576] Lustre: lustre-OST0000: mounting server target with '-t lustre' deprecated, use '-t lustre_tgt' [ 449.968936] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. 
Opts: errors=remount-ro,no_mbcache,nodelalloc [ 467.680295] LustreError: 26518:0:(mgc_request.c:252:do_config_log_add()) MGC192.168.125.30@tcp: failed processing log, type 1: rc = -5 [ 498.880143] LustreError: 26518:0:(mgc_request.c:252:do_config_log_add()) MGC192.168.125.30@tcp: failed processing log, type 4: rc = -110 [ 528.000090] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 528.466418] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 528.726762] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 550.690934] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 550.704688] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 550.722736] Lustre: lustre-OST0000: deleting orphan objects from 0x280000401:195 to 0x280000401:225 [ 551.036735] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 551.608219] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 551.955983] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 552.073963] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 555.761052] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107 [ 555.761227] Lustre: lustre-OST0000-osc-MDT0000: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 555.761532] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 558.296695] Lustre: server umount lustre-OST0000 complete [ 558.460738] Lustre: Mounted lustre-client [ 558.637731] Lustre: setting import lustre-MDT0000_UUID INACTIVE by administrator request [ 569.043941] systemd[1]: mnt-lustre.mount: Succeeded. [ 569.098629] Lustre: Unmounted lustre-client [ 569.253849] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 571.280564] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 571.280820] LustreError: Skipped 1 previous similar message [ 571.292504] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 571.292667] Lustre: Skipped 1 previous similar message [ 571.301598] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 571.301799] Lustre: Skipped 2 previous similar messages [ 573.760351] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 573.760478] Lustre: lustre-MDT0000-lwp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 575.467194] Lustre: server umount lustre-MDT0000 complete [ 575.677351] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. 
[ 575.718597] LustreError: 26526:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679930904 with bad export cookie 1850148685420388578 [ 575.721251] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 575.721573] LustreError: 26526:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) Skipped 1 previous similar message [ 582.006377] Lustre: server umount lustre-MDT0001 complete [ 584.720801] LNet: 27775:0:(lib-ptl.c:956:lnet_clear_lazy_portal()) Active lazy portal 0 on exit [ 584.721208] LNet: Removed LNI 192.168.125.30@tcp [ 585.787745] systemd-udevd[1035]: Specified user 'tss' unknown [ 585.797371] systemd-udevd[1035]: Specified group 'tss' unknown [ 585.799086] systemd-udevd[27921]: Using default interface naming scheme 'rhel-8.0'. [ 586.021412] systemd[1]: usr-sbin-mount.lustre.mount: Succeeded. [ 586.286641] LNet: HW NUMA nodes: 1, HW CPU cores: 2, npartitions: 1 [ 587.105357] Lustre: DEBUG MARKER: == conf-sanity test 5e: delayed connect, don't crash (bug 10268) ========================================================== 15:28:34 (1679930914) [ 587.356366] systemd-udevd[1035]: Specified user 'tss' unknown [ 587.360552] systemd-udevd[1035]: Specified group 'tss' unknown [ 587.423379] systemd-udevd[28504]: Using default interface naming scheme 'rhel-8.0'. [ 587.773440] Lustre: Lustre: Build Version: 2.15.54 [ 587.829200] LNet: Added LNI 192.168.125.30@tcp [8/256/0/180] [ 587.829370] LNet: Accept secure, port 988 [ 588.139682] Lustre: Echo OBD driver; http://www.lustre.org/ [ 589.107881] Lustre: lustre-MDT0000: mounting server target with '-t lustre' deprecated, use '-t lustre_tgt' [ 589.109306] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 590.259979] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 590.269613] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 590.658079] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 591.364893] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 591.840753] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 592.052239] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 592.132782] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 592.132879] Lustre: Skipped 1 previous similar message [ 592.602690] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 592.686089] LustreError: 29861:0:(fail.c:138:__cfs_fail_timeout_set()) cfs_fail_timeout id 506 sleeping for 11000ms [ 595.203721] Lustre: lustre-OST0000: deleting orphan objects from 0x280000401:195 to 0x280000401:257 [ 603.790050] LustreError: 29861:0:(fail.c:149:__cfs_fail_timeout_set()) cfs_fail_timeout id 506 awake [ 603.834581] Lustre: Mounted lustre-client [ 604.101405] systemd[1]: mnt-lustre.mount: Succeeded. [ 604.171401] Lustre: Unmounted lustre-client [ 604.226945] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. 
[ 605.280464] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107 [ 605.280625] Lustre: lustre-OST0000-osc-MDT0000: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 605.280963] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 606.320628] LustreError: 11-0: lustre-OST0000-osc-MDT0001: operation ost_statfs to node 0@lo failed: rc = -107 [ 606.331243] Lustre: lustre-OST0000-osc-MDT0001: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 606.337261] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 608.880543] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 610.552962] Lustre: server umount lustre-OST0000 complete [ 610.819578] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 610.880343] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 610.880572] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 613.920578] Lustre: lustre-MDT0000-lwp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 613.920945] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 613.932729] Lustre: Skipped 3 previous similar messages [ 617.069345] Lustre: server umount lustre-MDT0000 complete [ 617.385625] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 617.427006] LustreError: 29131:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679930945 with bad export cookie 14272215258723199443 [ 617.427410] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 617.427615] LustreError: 29131:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) Skipped 1 previous similar message [ 621.050745] LNet: 30361:0:(lib-ptl.c:956:lnet_clear_lazy_portal()) Active lazy portal 0 on exit [ 621.051177] LNet: Removed LNI 192.168.125.30@tcp [ 622.328918] systemd-udevd[1035]: Specified user 'tss' unknown [ 622.343025] systemd-udevd[1035]: Specified group 'tss' unknown [ 622.496199] systemd-udevd[30703]: Using default interface naming scheme 'rhel-8.0'. [ 622.545333] systemd[1]: usr-sbin-mount.lustre.mount: Succeeded. [ 623.619297] LNet: HW NUMA nodes: 1, HW CPU cores: 2, npartitions: 1 [ 624.450215] Lustre: DEBUG MARKER: == conf-sanity test 5f: mds down, cleanup after failed mount (bug 2712) ========================================================== 15:29:11 (1679930951) [ 624.560844] Lustre: DEBUG MARKER: SKIP: conf-sanity test_5f needs separate mgs and mds [ 624.788038] Lustre: DEBUG MARKER: == conf-sanity test 5g: handle missing debugfs =========== 15:29:13 (1679930953) [ 624.884358] systemd[1]: sys-kernel-debug.mount: Succeeded. [ 625.297080] Lustre: DEBUG MARKER: == conf-sanity test 5h: start mdt failure at mdt_fs_setup() ========================================================== 15:29:13 (1679930953) [ 625.567487] systemd-udevd[1035]: Specified user 'tss' unknown [ 625.583326] systemd-udevd[1035]: Specified group 'tss' unknown [ 625.627638] systemd-udevd[31321]: Using default interface naming scheme 'rhel-8.0'. 
[ 626.304692] Lustre: Lustre: Build Version: 2.15.54 [ 626.389374] LNet: Added LNI 192.168.125.30@tcp [8/256/0/180] [ 626.389598] LNet: Accept secure, port 988 [ 627.112409] Lustre: Echo OBD driver; http://www.lustre.org/ [ 629.021680] Lustre: lustre-MDT0000: mounting server target with '-t lustre' deprecated, use '-t lustre_tgt' [ 629.023306] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 630.316514] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 630.336179] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 631.202496] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 631.387948] Lustre: lustre-MDT0001: Imperative Recovery not enabled, recovery window 60-180 [ 632.524025] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 633.449684] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 633.866982] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 634.008078] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 634.665550] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 635.845130] Lustre: lustre-OST0000: deleting orphan objects from 0x280000401:195 to 0x280000401:289 [ 640.911273] Lustre: Mounted lustre-client [ 642.119642] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 642.165420] Lustre: Failing over lustre-MDT0000 [ 642.295236] Lustre: server umount lustre-MDT0000 complete [ 642.782984] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 642.817297] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 642.818238] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 642.819013] LustreError: 11-0: MGC192.168.125.30@tcp: operation mgs_target_reg to node 0@lo failed: rc = -107 [ 642.819170] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 642.820413] Lustre: Evicted from MGS (at 192.168.125.30@tcp) after server handle changed from 0x5f5e842db2ec215b to 0x5f5e842db2ec256e [ 642.821299] Lustre: MGC192.168.125.30@tcp: Connection restored to 192.168.125.30@tcp (at 0@lo) [ 642.880255] Lustre: *** cfs_fail_loc=135, val=0*** [ 642.950105] LustreError: 32792:0:(obd_config.c:776:class_setup()) setup lustre-MDT0000 failed (-2) [ 642.950634] LustreError: 32792:0:(obd_config.c:2004:class_config_llog_handler()) MGC192.168.125.30@tcp: cfg command failed: rc = -2 [ 642.950890] Lustre: cmd=cf003 0:lustre-MDT0000 1:lustre-MDT0000_UUID 2:0 3:lustre-MDT0000-mdtlov 4:f [ 642.950890] [ 642.951824] LustreError: 15c-8: MGC192.168.125.30@tcp: Confguration from log lustre-MDT0000 failed from MGS -2. Communication error between node & MGS, a bad configuration, or other errors. 
See syslog for more info [ 642.952075] LustreError: 32779:0:(tgt_mount.c:1444:server_start_targets()) failed to start server lustre-MDT0000: -2 [ 642.952292] LustreError: 32779:0:(tgt_mount.c:2081:server_fill_super()) Unable to start targets: -2 [ 642.952562] LustreError: 32779:0:(obd_config.c:829:class_cleanup()) Device 5 not setup [ 642.953523] LustreError: 31882:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679930971 with bad export cookie 6872075413224170862 [ 642.957583] Lustre: server umount lustre-MDT0000 complete [ 642.957699] LustreError: 32779:0:(super25.c:188:lustre_fill_super()) llite: Unable to mount : rc = -2 [ 643.403352] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 647.841366] Lustre: lustre-MDT0000-mdc-ffff8b7836bb4000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 647.841716] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 647.842032] Lustre: Skipped 2 previous similar messages [ 647.849055] LustreError: Skipped 5 previous similar messages [ 648.882711] Lustre: Evicted from MGS (at 192.168.125.30@tcp) after server handle changed from 0x5f5e842db2ec256e to 0x5f5e842db2ec25f3 [ 648.890400] Lustre: MGC192.168.125.30@tcp: Connection restored to 192.168.125.30@tcp (at 0@lo) [ 649.044292] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 649.059628] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 654.081422] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 2 clients reconnect [ 654.082710] Lustre: lustre-MDT0000-lwp-MDT0001: Connection restored to (at 0@lo) [ 654.111741] Lustre: lustre-MDT0000: Recovery over after 0:01, of 2 clients 2 recovered and 0 were evicted. [ 654.136015] Lustre: lustre-OST0000: deleting orphan objects from 0x280000401:195 to 0x280000401:321 [ 654.265375] systemd[1]: mnt-lustre.mount: Succeeded. [ 654.329740] Lustre: Unmounted lustre-client [ 654.421856] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 656.000407] LustreError: 11-0: lustre-OST0000-osc-MDT0001: operation ost_statfs to node 0@lo failed: rc = -107 [ 656.005332] Lustre: lustre-OST0000-osc-MDT0001: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 656.006897] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 659.121243] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 659.121725] Lustre: lustre-OST0000-osc-MDT0000: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 660.629988] Lustre: server umount lustre-OST0000 complete [ 660.920768] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. 
[ 664.160798] Lustre: lustre-MDT0000-lwp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 664.161232] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 664.161356] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 664.165773] Lustre: Skipped 1 previous similar message [ 667.160381] Lustre: server umount lustre-MDT0000 complete [ 667.423868] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 667.455926] LustreError: 31883:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679930995 with bad export cookie 6872075413224170995 [ 667.458007] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 667.458225] LustreError: 31883:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) Skipped 4 previous similar messages [ 667.458947] LustreError: Skipped 1 previous similar message [ 671.050625] LNet: 33415:0:(lib-ptl.c:956:lnet_clear_lazy_portal()) Active lazy portal 0 on exit [ 671.051055] LNet: Removed LNI 192.168.125.30@tcp [ 672.238258] systemd-udevd[1035]: Specified user 'tss' unknown [ 672.250052] systemd-udevd[1035]: Specified group 'tss' unknown [ 672.431758] systemd-udevd[33760]: Using default interface naming scheme 'rhel-8.0'. [ 672.537534] systemd[1]: usr-sbin-mount.lustre.mount: Succeeded. [ 673.307680] LNet: HW NUMA nodes: 1, HW CPU cores: 2, npartitions: 1 [ 674.130071] Lustre: DEBUG MARKER: == conf-sanity test 5i: start mdt failure at mdt_quota_init() ========================================================== 15:30:01 (1679931001) [ 674.553536] systemd-udevd[1035]: Specified user 'tss' unknown [ 674.561297] systemd-udevd[1035]: Specified group 'tss' unknown [ 674.706689] systemd-udevd[34207]: Using default interface naming scheme 'rhel-8.0'. [ 675.897310] Lustre: Lustre: Build Version: 2.15.54 [ 676.095679] LNet: Added LNI 192.168.125.30@tcp [8/256/0/180] [ 676.095967] LNet: Accept secure, port 988 [ 676.980189] Lustre: Echo OBD driver; http://www.lustre.org/ [ 679.057028] Lustre: lustre-MDT0000: mounting server target with '-t lustre' deprecated, use '-t lustre_tgt' [ 679.066664] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 680.342532] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 680.371279] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 681.407408] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 681.573926] Lustre: lustre-MDT0001: Imperative Recovery not enabled, recovery window 60-180 [ 682.832404] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 683.952316] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 684.653595] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. 
Opts: errors=remount-ro,no_mbcache,nodelalloc [ 684.896087] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 685.834103] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 685.935782] Lustre: lustre-OST0000: deleting orphan objects from 0x280000401:195 to 0x280000401:353 [ 686.050711] Lustre: Mounted lustre-client [ 687.223162] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 687.263362] Lustre: Failing over lustre-MDT0000 [ 687.385485] Lustre: server umount lustre-MDT0000 complete [ 687.958343] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 687.993987] Lustre: lustre-MDT0000-lwp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 687.994649] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 687.995523] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 687.996577] Lustre: Evicted from MGS (at 192.168.125.30@tcp) after server handle changed from 0x912ca74a0b57cd75 to 0x912ca74a0b57d188 [ 687.997866] Lustre: MGC192.168.125.30@tcp: Connection restored to (at 0@lo) [ 688.077908] Lustre: *** cfs_fail_loc=a05, val=0*** [ 688.153536] LustreError: 35683:0:(obd_config.c:776:class_setup()) setup lustre-MDT0000 failed (-9) [ 688.153761] LustreError: 35683:0:(obd_config.c:2004:class_config_llog_handler()) MGC192.168.125.30@tcp: cfg command failed: rc = -9 [ 688.153925] Lustre: cmd=cf003 0:lustre-MDT0000 1:lustre-MDT0000_UUID 2:0 3:lustre-MDT0000-mdtlov 4:f [ 688.153925] [ 688.156510] LustreError: 15c-8: MGC192.168.125.30@tcp: Confguration from log lustre-MDT0000 failed from MGS -9. Communication error between node & MGS, a bad configuration, or other errors. See syslog for more info [ 688.156764] LustreError: 35670:0:(tgt_mount.c:1444:server_start_targets()) failed to start server lustre-MDT0000: -9 [ 688.156978] LustreError: 35670:0:(tgt_mount.c:2081:server_fill_super()) Unable to start targets: -9 [ 688.157238] LustreError: 35670:0:(obd_config.c:829:class_cleanup()) Device 5 not setup [ 688.158231] LustreError: 34772:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679931016 with bad export cookie 10460919970934542728 [ 688.162589] Lustre: server umount lustre-MDT0000 complete [ 688.162707] LustreError: 35670:0:(super25.c:188:lustre_fill_super()) llite: Unable to mount : rc = -9 [ 688.721468] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 693.041401] Lustre: lustre-MDT0000-mdc-ffff8b7844c50000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 693.041736] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. 
[ 693.047622] Lustre: Skipped 2 previous similar messages [ 693.053055] LustreError: Skipped 5 previous similar messages [ 694.082690] Lustre: Evicted from MGS (at 192.168.125.30@tcp) after server handle changed from 0x912ca74a0b57d188 to 0x912ca74a0b57d21b [ 694.087837] Lustre: MGC192.168.125.30@tcp: Connection restored to (at 0@lo) [ 694.292231] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 694.310018] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 694.671743] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 2 clients reconnect [ 699.284750] Lustre: lustre-MDT0000-lwp-MDT0001: Connection restored to 192.168.125.30@tcp (at 0@lo) [ 699.291166] Lustre: lustre-MDT0000: Recovery over after 0:05, of 2 clients 2 recovered and 0 were evicted. [ 699.303971] Lustre: lustre-OST0000: deleting orphan objects from 0x280000401:195 to 0x280000401:385 [ 699.556345] systemd[1]: mnt-lustre.mount: Succeeded. [ 699.615970] Lustre: Unmounted lustre-client [ 699.747798] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 701.040422] LustreError: 11-0: lustre-OST0000-osc-MDT0001: operation ost_statfs to node 0@lo failed: rc = -107 [ 701.046501] Lustre: lustre-OST0000-osc-MDT0001: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 701.049478] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 704.322136] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 704.322614] Lustre: lustre-OST0000-osc-MDT0000: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 706.006021] Lustre: server umount lustre-OST0000 complete [ 706.240577] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 709.360382] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 709.361106] Lustre: lustre-MDT0000-lwp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 709.361447] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 709.361448] Lustre: Skipped 1 previous similar message [ 709.361748] Lustre: Skipped 1 previous similar message [ 712.527395] Lustre: server umount lustre-MDT0000 complete [ 712.843589] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 712.886310] LustreError: 34771:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679931041 with bad export cookie 10460919970934542875 [ 712.890747] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 712.891008] LustreError: Skipped 1 previous similar message [ 716.570404] LNet: 36313:0:(lib-ptl.c:956:lnet_clear_lazy_portal()) Active lazy portal 0 on exit [ 716.570761] LNet: Removed LNI 192.168.125.30@tcp [ 717.878030] systemd-udevd[1035]: Specified user 'tss' unknown [ 717.890086] systemd-udevd[1035]: Specified group 'tss' unknown [ 718.143593] systemd[1]: usr-sbin-mount.lustre.mount: Succeeded. [ 718.153800] systemd-udevd[36586]: Using default interface naming scheme 'rhel-8.0'. 
[ 719.209568] LNet: HW NUMA nodes: 1, HW CPU cores: 2, npartitions: 1 [ 720.017765] Lustre: DEBUG MARKER: == conf-sanity test 6: manual umount, then mount again === 15:30:47 (1679931047) [ 720.303274] systemd-udevd[1035]: Specified user 'tss' unknown [ 720.320561] systemd-udevd[1035]: Specified group 'tss' unknown [ 720.386339] systemd-udevd[37102]: Using default interface naming scheme 'rhel-8.0'. [ 720.999855] Lustre: Lustre: Build Version: 2.15.54 [ 721.094087] LNet: Added LNI 192.168.125.30@tcp [8/256/0/180] [ 721.094371] LNet: Accept secure, port 988 [ 721.733720] Lustre: Echo OBD driver; http://www.lustre.org/ [ 723.745273] Lustre: lustre-MDT0000: mounting server target with '-t lustre' deprecated, use '-t lustre_tgt' [ 723.746958] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 725.034553] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 725.055704] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 725.907162] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 726.103454] Lustre: lustre-MDT0001: Imperative Recovery not enabled, recovery window 60-180 [ 726.838529] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 727.417578] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 727.776765] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 727.963425] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 728.710638] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 729.934918] Lustre: lustre-OST0000: deleting orphan objects from 0x280000401:195 to 0x280000401:417 [ 730.975533] Lustre: Mounted lustre-client [ 732.064212] systemd[1]: mnt-lustre.mount: Succeeded. [ 732.123829] Lustre: Unmounted lustre-client [ 732.217597] Lustre: Mounted lustre-client [ 732.484048] systemd[1]: mnt-lustre.mount: Succeeded. [ 733.063916] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 734.960429] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107 [ 734.970151] Lustre: lustre-OST0000-osc-MDT0000: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 734.979240] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 736.000402] LustreError: 11-0: lustre-OST0000-osc-MDT0001: operation ost_statfs to node 0@lo failed: rc = -107 [ 736.007190] Lustre: lustre-OST0000-osc-MDT0001: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 736.015124] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 737.280444] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 737.287137] Lustre: Skipped 1 previous similar message [ 738.872940] Lustre: server umount lustre-OST0000 complete [ 739.110852] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. 
[ 741.200362] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 741.206144] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 741.209092] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 747.360779] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 747.361044] Lustre: Skipped 3 previous similar messages [ 753.760040] Lustre: lustre-MDT0000 is waiting for obd_unlinked_exports more than 8 seconds. The obd refcount = 2. Is it stuck? [ 753.805006] Lustre: server umount lustre-MDT0000 complete [ 754.027245] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 754.055186] LustreError: 37666:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679931082 with bad export cookie 6367396663802890675 [ 754.064626] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 757.410522] LNet: 38948:0:(lib-ptl.c:956:lnet_clear_lazy_portal()) Active lazy portal 0 on exit [ 757.420156] LNet: Removed LNI 192.168.125.30@tcp [ 758.590315] systemd-udevd[1035]: Specified user 'tss' unknown [ 758.595982] systemd-udevd[1035]: Specified group 'tss' unknown [ 758.666013] systemd-udevd[39128]: Using default interface naming scheme 'rhel-8.0'. [ 759.187148] systemd[1]: usr-sbin-mount.lustre.mount: Succeeded. [ 759.672670] LNet: HW NUMA nodes: 1, HW CPU cores: 2, npartitions: 1 [ 760.573894] Lustre: DEBUG MARKER: == conf-sanity test 7: manual umount, then cleanup ======= 15:31:27 (1679931087) [ 760.922718] systemd-udevd[1035]: Specified user 'tss' unknown [ 760.939357] systemd-udevd[1035]: Specified group 'tss' unknown [ 761.083221] systemd-udevd[39733]: Using default interface naming scheme 'rhel-8.0'. [ 761.852026] Lustre: Lustre: Build Version: 2.15.54 [ 761.999641] LNet: Added LNI 192.168.125.30@tcp [8/256/0/180] [ 761.999971] LNet: Accept secure, port 988 [ 762.861302] Lustre: Echo OBD driver; http://www.lustre.org/ [ 764.555059] Lustre: lustre-MDT0000: mounting server target with '-t lustre' deprecated, use '-t lustre_tgt' [ 764.556787] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 765.797661] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 765.807251] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 766.505428] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 766.663617] Lustre: lustre-MDT0001: Imperative Recovery not enabled, recovery window 60-180 [ 767.392030] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 767.944368] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 768.439581] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. 
Opts: errors=remount-ro,no_mbcache,nodelalloc [ 768.588389] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 769.272653] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 771.444344] Lustre: lustre-OST0000: deleting orphan objects from 0x280000401:419 to 0x280000401:449 [ 772.491996] Lustre: Mounted lustre-client [ 777.550569] systemd[1]: mnt-lustre.mount: Succeeded. [ 777.606946] Lustre: Unmounted lustre-client [ 777.721298] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 781.520352] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107 [ 781.526750] Lustre: lustre-OST0000-osc-MDT0000: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 781.531129] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 782.561125] Lustre: lustre-OST0000-osc-MDT0001: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 782.568630] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 783.929779] Lustre: server umount lustre-OST0000 complete [ 784.256174] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 786.800374] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 786.800519] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 786.812025] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 786.812187] Lustre: Skipped 1 previous similar message [ 790.510798] Lustre: server umount lustre-MDT0000 complete [ 790.760528] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 790.801063] LustreError: 40303:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679931119 with bad export cookie 12454977787784351808 [ 790.803439] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 790.807661] LustreError: 40303:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) Skipped 1 previous similar message [ 794.390308] LNet: 41511:0:(lib-ptl.c:956:lnet_clear_lazy_portal()) Active lazy portal 0 on exit [ 794.395552] LNet: Removed LNI 192.168.125.30@tcp [ 795.609816] systemd-udevd[1035]: Specified user 'tss' unknown [ 795.689595] systemd-udevd[1035]: Specified group 'tss' unknown [ 795.839344] systemd-udevd[41852]: Using default interface naming scheme 'rhel-8.0'. [ 796.222084] systemd[1]: usr-sbin-mount.lustre.mount: Succeeded. [ 797.168969] LNet: HW NUMA nodes: 1, HW CPU cores: 2, npartitions: 1 [ 798.053089] Lustre: DEBUG MARKER: == conf-sanity test 8: double mount setup ================ 15:32:05 (1679931125) [ 798.545386] systemd-udevd[1035]: Specified user 'tss' unknown [ 798.578486] systemd-udevd[1035]: Specified group 'tss' unknown [ 798.647124] systemd-udevd[42166]: Using default interface naming scheme 'rhel-8.0'. 
[ 799.785024] Lustre: Lustre: Build Version: 2.15.54 [ 799.942169] LNet: Added LNI 192.168.125.30@tcp [8/256/0/180] [ 799.942452] LNet: Accept secure, port 988 [ 801.048643] Lustre: Echo OBD driver; http://www.lustre.org/ [ 802.789829] Lustre: lustre-MDT0000: mounting server target with '-t lustre' deprecated, use '-t lustre_tgt' [ 802.794270] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 804.173590] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 804.187928] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 805.066427] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 805.210641] Lustre: lustre-MDT0001: Imperative Recovery not enabled, recovery window 60-180 [ 806.276048] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 806.919655] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 807.310241] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 807.473905] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 808.284305] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 809.524031] Lustre: lustre-OST0000: deleting orphan objects from 0x280000401:419 to 0x280000401:481 [ 810.575181] Lustre: Mounted lustre-client [ 811.918860] Lustre: Mounted lustre-client [ 812.639951] systemd[1]: mnt-lustre2.mount: Succeeded. [ 812.721157] Lustre: Unmounted lustre-client [ 813.019901] systemd[1]: mnt-lustre.mount: Succeeded. [ 813.223878] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 814.560409] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107 [ 814.565516] Lustre: lustre-OST0000-osc-MDT0000: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 814.567242] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 815.600413] LustreError: 11-0: lustre-OST0000-osc-MDT0001: operation ost_statfs to node 0@lo failed: rc = -107 [ 815.605514] Lustre: lustre-OST0000-osc-MDT0001: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 815.607248] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 816.960566] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 816.966395] Lustre: Skipped 1 previous similar message [ 819.434751] Lustre: server umount lustre-OST0000 complete [ 819.743851] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 820.320372] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 820.325464] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 820.329201] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 826.044387] Lustre: server umount lustre-MDT0000 complete [ 826.361086] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. 
[ 826.396065] LustreError: 42866:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679931154 with bad export cookie 1535129515385541041 [ 826.398265] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 826.398479] LustreError: 42866:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) Skipped 1 previous similar message [ 830.150574] LNet: 44172:0:(lib-ptl.c:956:lnet_clear_lazy_portal()) Active lazy portal 0 on exit [ 830.151031] LNet: Removed LNI 192.168.125.30@tcp [ 831.317443] systemd-udevd[1035]: Specified user 'tss' unknown [ 831.336663] systemd-udevd[1035]: Specified group 'tss' unknown [ 831.626666] systemd-udevd[44360]: Using default interface naming scheme 'rhel-8.0'. [ 831.685197] systemd[1]: usr-sbin-mount.lustre.mount: Succeeded. [ 832.391901] LNet: HW NUMA nodes: 1, HW CPU cores: 2, npartitions: 1 [ 833.189132] Lustre: DEBUG MARKER: == conf-sanity test 9: test ptldebug and subsystem for mkfs ========================================================== 15:32:40 (1679931160) [ 833.571366] systemd-udevd[1035]: Specified user 'tss' unknown [ 833.571607] systemd-udevd[1035]: Specified group 'tss' unknown [ 833.593119] systemd-udevd[44960]: Using default interface naming scheme 'rhel-8.0'. [ 834.014304] Lustre: Lustre: Build Version: 2.15.54 [ 834.093047] LNet: Added LNI 192.168.125.30@tcp [8/256/0/180] [ 834.093288] LNet: Accept secure, port 988 [ 834.623442] Lustre: Echo OBD driver; http://www.lustre.org/ [ 836.048647] Lustre: lustre-OST0000: mounting server target with '-t lustre' deprecated, use '-t lustre_tgt' [ 836.050233] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 853.840244] LustreError: 45518:0:(mgc_request.c:252:do_config_log_add()) MGC192.168.125.30@tcp: failed processing log, type 1: rc = -5 [ 884.000119] LustreError: 45518:0:(mgc_request.c:252:do_config_log_add()) MGC192.168.125.30@tcp: failed processing log, type 4: rc = -110 [ 914.160098] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 915.414225] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 915.623512] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 915.724099] Lustre: server umount lustre-OST0000 complete [ 916.418085] Lustre: DEBUG MARKER: == conf-sanity test 10a: find lctl param broken symlinks ========================================================== 15:34:04 (1679931244) [ 916.880793] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 917.145266] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 917.164492] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 918.076052] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 918.191530] Lustre: lustre-MDT0001: Imperative Recovery not enabled, recovery window 60-180 [ 919.254641] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 919.957413] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 920.298576] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 920.444015] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 921.396150] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 922.651067] Lustre: lustre-OST0000: deleting orphan objects from 0x280000401:484 to 0x280000401:513 [ 927.701550] Lustre: Mounted lustre-client [ 930.209000] systemd[1]: mnt-lustre.mount: Succeeded. [ 930.278992] Lustre: Unmounted lustre-client [ 930.374095] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 932.720545] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107 [ 932.726450] Lustre: lustre-OST0000-osc-MDT0000: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 932.727044] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 936.691365] Lustre: server umount lustre-OST0000 complete [ 937.043053] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 937.760713] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 937.765544] Lustre: Skipped 1 previous similar message [ 937.767121] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 937.771375] Lustre: Skipped 2 previous similar messages [ 942.800758] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 942.808350] Lustre: Skipped 1 previous similar message [ 943.309172] Lustre: server umount lustre-MDT0000 complete [ 943.628292] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 943.677642] LustreError: 45949:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679931272 with bad export cookie 9159860441840939195 [ 943.687992] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 947.600429] LNet: 47267:0:(lib-ptl.c:956:lnet_clear_lazy_portal()) Active lazy portal 0 on exit [ 947.600809] LNet: Removed LNI 192.168.125.30@tcp [ 949.044529] systemd-udevd[1035]: Specified user 'tss' unknown [ 949.094548] systemd-udevd[1035]: Specified group 'tss' unknown [ 949.262756] systemd-udevd[47603]: Using default interface naming scheme 'rhel-8.0'. [ 949.580308] systemd[1]: usr-sbin-mount.lustre.mount: Succeeded. [ 950.286844] LNet: HW NUMA nodes: 1, HW CPU cores: 2, npartitions: 1 [ 951.087015] Lustre: DEBUG MARKER: == conf-sanity test 17: Verify failed mds_postsetup won't fail assertion (2936) (should return errs) ========================================================== 15:34:38 (1679931278) [ 951.448415] systemd-udevd[1035]: Specified user 'tss' unknown [ 951.474789] systemd-udevd[1035]: Specified group 'tss' unknown [ 951.517151] systemd-udevd[47809]: Using default interface naming scheme 'rhel-8.0'. 
[ 952.266958] Lustre: Lustre: Build Version: 2.15.54 [ 952.472346] LNet: Added LNI 192.168.125.30@tcp [8/256/0/180] [ 952.473440] LNet: Accept secure, port 988 [ 953.578572] Lustre: Echo OBD driver; http://www.lustre.org/ [ 955.831289] Lustre: lustre-MDT0000: mounting server target with '-t lustre' deprecated, use '-t lustre_tgt' [ 955.849206] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 957.036610] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 957.051424] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 957.582465] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 957.746667] Lustre: lustre-MDT0001: Imperative Recovery not enabled, recovery window 60-180 [ 958.755829] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 959.692387] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 960.117496] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 960.258517] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 961.172012] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 962.404602] Lustre: lustre-OST0000: deleting orphan objects from 0x280000401:484 to 0x280000401:545 [ 963.450397] Lustre: Mounted lustre-client [ 968.865480] systemd[1]: mnt-lustre.mount: Succeeded. [ 968.912842] Lustre: Unmounted lustre-client [ 969.065578] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 972.480445] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107 [ 972.486241] Lustre: lustre-OST0000-osc-MDT0000: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 972.489174] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 973.521765] LustreError: 11-0: lustre-OST0000-osc-MDT0001: operation ost_statfs to node 0@lo failed: rc = -107 [ 973.521850] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 973.526901] Lustre: lustre-OST0000-osc-MDT0001: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 973.531412] Lustre: Skipped 1 previous similar message [ 978.560533] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 978.562041] Lustre: Skipped 1 previous similar message [ 983.520049] Lustre: lustre-OST0000 is waiting for obd_unlinked_exports more than 8 seconds. The obd refcount = 2. Is it stuck? [ 983.649718] Lustre: server umount lustre-OST0000 complete [ 984.012739] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. 
[ 987.920370] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 987.920510] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 987.920817] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 993.680523] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 993.681005] Lustre: Skipped 3 previous similar messages [ 998.880059] Lustre: lustre-MDT0000 is waiting for obd_unlinked_exports more than 8 seconds. The obd refcount = 2. Is it stuck? [ 998.910473] Lustre: server umount lustre-MDT0000 complete [ 999.206682] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 999.235623] LustreError: 48620:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679931327 with bad export cookie 13232975157771102688 [ 999.236285] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 1002.900550] LNet: 49858:0:(lib-ptl.c:956:lnet_clear_lazy_portal()) Active lazy portal 0 on exit [ 1002.901067] LNet: Removed LNI 192.168.125.30@tcp [ 1004.268881] systemd-udevd[1035]: Specified user 'tss' unknown [ 1004.447436] systemd-udevd[1035]: Specified group 'tss' unknown [ 1004.597686] systemd-udevd[50017]: Using default interface naming scheme 'rhel-8.0'. [ 1004.962413] systemd[1]: usr-sbin-mount.lustre.mount: Succeeded. [ 1005.668844] systemd-udevd[1035]: Specified user 'tss' unknown [ 1005.759111] systemd-udevd[1035]: Specified group 'tss' unknown [ 1005.967939] LNet: HW NUMA nodes: 1, HW CPU cores: 2, npartitions: 2 [ 1005.988989] systemd-udevd[50596]: Using default interface naming scheme 'rhel-8.0'. [ 1007.157635] Lustre: Lustre: Build Version: 2.15.54 [ 1007.333463] LNet: Added LNI 192.168.125.30@tcp [8/256/0/180] [ 1007.333730] LNet: Accept secure, port 988 [ 1007.959203] Lustre: Echo OBD driver; http://www.lustre.org/ [ 1010.530882] Lustre: lustre-OST0000: mounting server target with '-t lustre' deprecated, use '-t lustre_tgt' [ 1010.532515] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 1027.360224] LustreError: 51163:0:(mgc_request.c:252:do_config_log_add()) MGC192.168.125.30@tcp: failed processing log, type 1: rc = -5 [ 1058.560095] LustreError: 51163:0:(mgc_request.c:252:do_config_log_add()) MGC192.168.125.30@tcp: failed processing log, type 4: rc = -110 [ 1088.720099] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 1089.821772] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 1090.463364] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1110.797141] LustreError: 15c-8: MGC192.168.125.30@tcp: Confguration from log lustre-MDT0000 failed from MGS -2. Communication error between node & MGS, a bad configuration, or other errors. 
See syslog for more info [ 1110.797459] LustreError: 51449:0:(tgt_mount.c:1444:server_start_targets()) failed to start server lustre-MDT0000: -2 [ 1110.797674] LustreError: 51449:0:(tgt_mount.c:2081:server_fill_super()) Unable to start targets: -2 [ 1110.797974] LustreError: 51449:0:(tgt_mount.c:1669:server_put_super()) no obd lustre-MDT0000 [ 1110.927951] Lustre: server umount lustre-MDT0000 complete [ 1110.929071] LustreError: 51449:0:(super25.c:188:lustre_fill_super()) llite: Unable to mount : rc = -2 [ 1111.748025] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 1117.840057] Lustre: 51606:0:(client.c:2305:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1679931440/real 1679931440] req@00000000619cd54c x1761535679530944/t0(0) o251->MGC192.168.125.30@tcp@0@lo:26/25 lens 224/224 e 0 to 1 dl 1679931446 ref 2 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'' [ 1118.027806] Lustre: server umount lustre-OST0000 complete [ 1119.396751] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing set_hostid [ 1119.831019] systemd-udevd[1035]: Specified user 'tss' unknown [ 1119.851817] systemd-udevd[1035]: Specified group 'tss' unknown [ 1119.858486] systemd-udevd[52010]: Using default interface naming scheme 'rhel-8.0'. [ 1123.878448] blk_update_request: operation not supported error, dev loop0, sector 399872 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 1123.878863] blk_update_request: operation not supported error, dev loop0, sector 208 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 1123.879386] blk_update_request: operation not supported error, dev loop0, sector 160232 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 1123.879792] blk_update_request: operation not supported error, dev loop0, sector 160240 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 1123.881964] blk_update_request: operation not supported error, dev loop0, sector 160248 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 1123.882262] blk_update_request: operation not supported error, dev loop0, sector 160256 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 1123.882545] blk_update_request: operation not supported error, dev loop0, sector 160264 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 1123.882833] blk_update_request: operation not supported error, dev loop0, sector 160272 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 1123.883121] blk_update_request: operation not supported error, dev loop0, sector 160280 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 1123.883401] blk_update_request: operation not supported error, dev loop0, sector 160288 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 1124.997290] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 1127.834350] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 1127.882346] systemd[1]: tmp-mntmMKrca.mount: Succeeded. 
[ 1129.239693] print_req_error: 8188 callbacks suppressed [ 1129.239695] blk_update_request: operation not supported error, dev loop2, sector 399872 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 1129.240276] blk_update_request: operation not supported error, dev loop2, sector 8224 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 1129.240913] blk_update_request: operation not supported error, dev loop2, sector 58264 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 1129.250728] blk_update_request: operation not supported error, dev loop2, sector 58272 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 1129.311617] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 1129.342472] systemd[1]: tmp-mntSR57Os.mount: Succeeded. [ 1131.360914] blk_update_request: operation not supported error, dev loop3, sector 399872 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 1131.371860] blk_update_request: operation not supported error, dev loop3, sector 8224 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 1131.372283] blk_update_request: operation not supported error, dev loop3, sector 58264 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 1131.376816] blk_update_request: operation not supported error, dev loop3, sector 58272 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 1131.494751] LDISKFS-fs (loop3): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 1131.536765] systemd[1]: tmp-mnt8uE7QG.mount: Succeeded. [ 1132.281484] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 1132.322448] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1132.615025] Lustre: Setting parameter lustre-MDT0000.mdt.identity_upcall in log lustre-MDT0000 [ 1132.629602] Lustre: ctl-lustre-MDT0000: No data found on store. Initialize space: rc = -61 [ 1132.699767] Lustre: lustre-MDT0000: new disk, initializing [ 1132.739232] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 1132.743328] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000200000400-0x0000000240000400]:0:mdt [ 1134.746642] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 1134.792654] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1134.834553] Lustre: Setting parameter lustre-MDT0001.mdt.identity_upcall in log lustre-MDT0001 [ 1134.866954] Lustre: srv-lustre-MDT0001: No data found on store. Initialize space: rc = -61 [ 1134.870428] Lustre: Skipped 1 previous similar message [ 1134.923250] Lustre: lustre-MDT0001: new disk, initializing [ 1134.992799] Lustre: lustre-MDT0001: Imperative Recovery not enabled, recovery window 60-180 [ 1135.026828] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000240000400-0x0000000280000400]:1:mdt [ 1135.036034] Lustre: cli-ctl-lustre-MDT0001: Allocated super-sequence [0x0000000240000400-0x0000000280000400]:1:mdt] [ 1137.251358] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1138.013702] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 1138.680685] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. 
Opts: errors=remount-ro [ 1138.692939] systemd[1]: tmp-mnt7nvxk4.mount: Succeeded. [ 1138.732682] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 1138.981624] Lustre: lustre-OST0000: new disk, initializing [ 1138.982102] Lustre: srv-lustre-OST0000: No data found on store. Initialize space: rc = -61 [ 1139.032691] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 1139.617277] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000280000400-0x00000002c0000400]:0:ost [ 1139.617676] Lustre: cli-lustre-OST0000-super: Allocated super-sequence [0x0000000280000400-0x00000002c0000400]:0:ost] [ 1139.644294] Lustre: lustre-OST0000-osc-MDT0000: update sequence from 0x100000000 to 0x280000401 [ 1141.467388] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 1142.453398] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 40 [ 1142.693038] Lustre: DEBUG MARKER: os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 0 sec [ 1143.592798] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 40 [ 1143.739328] Lustre: DEBUG MARKER: os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec [ 1143.856929] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 1144.642170] Lustre: lustre-OST0000-osc-MDT0000: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 1144.648020] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 1149.681725] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 1149.686602] Lustre: Skipped 1 previous similar message [ 1150.134212] Lustre: server umount lustre-OST0000 complete [ 1150.482985] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 1154.721044] Lustre: lustre-MDT0000-lwp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 1154.726232] Lustre: Skipped 1 previous similar message [ 1154.728687] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 1154.734076] Lustre: Skipped 2 previous similar messages [ 1156.703690] Lustre: server umount lustre-MDT0000 complete [ 1157.029130] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 1157.071885] LustreError: 53466:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679931485 with bad export cookie 7131621133055116462 [ 1157.072531] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 1157.076635] LustreError: 53466:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) Skipped 1 previous similar message [ 1158.140151] Lustre: DEBUG MARKER: == conf-sanity test 18: check mkfs creates large journals ========================================================== 15:38:06 (1679931486) [ 1158.466000] Lustre: DEBUG MARKER: SKIP: conf-sanity test_18 /dev/mapper/mds1_flakey too small for 2000000kB MDS [ 1158.780242] Lustre: DEBUG MARKER: == conf-sanity test 19a: start/stop MDS without OSTs ===== 15:38:06 (1679931486) [ 1159.325220] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1159.721995] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 1159.768665] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 1160.625791] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1162.079444] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1163.050542] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 1163.224681] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 1163.291824] LustreError: 54974:0:(osp_object.c:637:osp_attr_get()) lustre-MDT0001-osp-MDT0000: osp_attr_get update error [0x200000009:0x1:0x0]: rc = -5 [ 1163.298550] LustreError: 54974:0:(lod_sub_object.c:932:lod_sub_prep_llog()) lustre-MDT0000-mdtlov: can't get id from catalogs: rc = -5 [ 1163.298712] LustreError: 54974:0:(lod_dev.c:525:lod_sub_recovery_thread()) lustre-MDT0001-osp-MDT0000: get update log duration 3, retries 0, failed: rc = -5 [ 1165.840359] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 1165.840760] Lustre: lustre-MDT0000-lwp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 1165.848867] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 1165.852876] Lustre: Skipped 2 previous similar messages [ 1169.494303] Lustre: server umount lustre-MDT0000 complete [ 1169.508566] Lustre: Skipped 1 previous similar message [ 1169.845877] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 1169.913865] LustreError: 54907:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679931498 with bad export cookie 7131621133055117232 [ 1169.920792] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 1171.169706] Lustre: DEBUG MARKER: == conf-sanity test 19b: start/stop OSTs without MDS ===== 15:38:19 (1679931499) [ 1171.558774] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 1188.400326] LustreError: 55645:0:(mgc_request.c:252:do_config_log_add()) MGC192.168.125.30@tcp: failed processing log, type 1: rc = -5 [ 1199.840216] LustreError: 55645:0:(mgc_request.c:252:do_config_log_add()) MGC192.168.125.30@tcp: failed processing log, type 4: rc = -110 [ 1228.960143] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 1228.966932] Lustre: Skipped 1 previous similar message [ 1230.016034] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 1230.230504] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 1230.423872] Lustre: server umount lustre-OST0000 complete [ 1230.427053] Lustre: Skipped 1 previous similar message [ 1231.205505] Lustre: DEBUG MARKER: == conf-sanity test 20: remount ro,rw mounts work and doesn't break /etc/mtab ========================================================== 15:39:19 (1679931559) [ 1231.688977] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1232.058871] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 1232.901417] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1234.112867] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1234.993799] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 1235.524541] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 1236.614055] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 1236.893915] Lustre: Mounted lustre-client [ 1242.087437] Lustre: Remounted lustre-client read-only [ 1242.655310] systemd[1]: mnt-lustre.mount: Succeeded. [ 1242.723277] Lustre: Unmounted lustre-client [ 1242.833665] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 1243.200393] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 1243.208027] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 1243.208465] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 1243.208588] Lustre: Skipped 3 previous similar messages [ 1249.084652] Lustre: server umount lustre-MDT0000 complete [ 1249.428086] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 1249.438192] LustreError: 56074:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679931577 with bad export cookie 7131621133055117862 [ 1249.438455] LustreError: 56074:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) Skipped 4 previous similar messages [ 1249.439038] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 1249.985153] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 1256.080085] Lustre: 57068:0:(client.c:2305:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1679931578/real 1679931578] req@00000000a880bed1 x1761535679562432/t0(0) o39->lustre-MDT0001-lwp-OST0000@0@lo:12/10 lens 224/224 e 0 to 1 dl 1679931584 ref 2 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'' [ 1257.192941] Lustre: DEBUG MARKER: == conf-sanity test 21a: start mds before ost, stop ost first ========================================================== 15:39:45 (1679931585) [ 1257.703342] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1257.908730] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 1257.925717] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 1257.925893] Lustre: Skipped 3 previous similar messages [ 1258.528341] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1259.941726] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1260.650260] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 1261.082306] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 1262.132045] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 1262.926330] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 40 [ 1263.364064] Lustre: lustre-OST0000: deleting orphan objects from 0x280000401:36 to 0x280000401:65 [ 1264.143731] Lustre: DEBUG MARKER: os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 1 sec [ 1264.863704] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 40 [ 1265.030553] Lustre: DEBUG MARKER: os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec [ 1265.128549] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 1268.400495] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107 [ 1268.406642] Lustre: lustre-OST0000-osc-MDT0000: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 1268.406848] Lustre: Skipped 2 previous similar messages [ 1268.413734] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 1268.413877] Lustre: Skipped 3 previous similar messages [ 1271.641011] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 1273.760359] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 1273.765632] LustreError: Skipped 1 previous similar message [ 1278.092281] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 1278.114583] LustreError: 57274:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679931606 with bad export cookie 7131621133055119031 [ 1278.115434] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 1278.115644] LustreError: 57274:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) Skipped 6 previous similar messages [ 1279.095769] Lustre: DEBUG MARKER: == conf-sanity test 21b: start ost before mds, stop mds first ========================================================== 15:40:07 (1679931607) [ 1279.512708] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 1296.320319] LustreError: 58580:0:(mgc_request.c:252:do_config_log_add()) MGC192.168.125.30@tcp: failed processing log, type 1: rc = -5 [ 1307.760204] LustreError: 58580:0:(mgc_request.c:252:do_config_log_add()) MGC192.168.125.30@tcp: failed processing log, type 4: rc = -110 [ 1337.920120] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 1337.927011] Lustre: Skipped 2 previous similar messages [ 1338.846315] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 1339.318380] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1354.859472] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 1354.931216] Lustre: lustre-OST0000: deleting orphan objects from 0x280000401:36 to 0x280000401:97 [ 1355.820425] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1356.937515] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1357.450817] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 1358.128781] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 40 [ 1358.286588] Lustre: DEBUG MARKER: os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 0 sec [ 1358.895073] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 40 [ 1359.009370] Lustre: DEBUG MARKER: os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec [ 1359.092873] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 1361.040763] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 1361.046931] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 1361.047229] Lustre: Skipped 3 previous similar messages [ 1361.047545] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 1361.047724] Lustre: Skipped 5 previous similar messages [ 1373.920031] Lustre: lustre-MDT0000 is waiting for obd_unlinked_exports more than 8 seconds. The obd refcount = 2. Is it stuck? [ 1373.983502] Lustre: server umount lustre-MDT0000 complete [ 1373.983693] Lustre: Skipped 5 previous similar messages [ 1374.170999] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 1374.209658] LustreError: 58590:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679931702 with bad export cookie 7131621133055120018 [ 1374.211369] LustreError: 58590:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) Skipped 1 previous similar message [ 1374.211749] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 1374.593409] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 1379.680060] Lustre: 59695:0:(client.c:2305:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1679931702/real 1679931702] req@00000000e937c446 x1761535679581888/t0(0) o39->lustre-MDT0001-lwp-OST0000@0@lo:12/10 lens 224/224 e 0 to 1 dl 1679931708 ref 2 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'' [ 1382.559886] Lustre: DEBUG MARKER: == conf-sanity test 21c: start mds between two osts, stop mds last ========================================================== 15:41:50 (1679931710) [ 1382.827684] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. 
Opts: errors=remount-ro,no_mbcache,nodelalloc [ 1399.600330] LustreError: 59890:0:(mgc_request.c:252:do_config_log_add()) MGC192.168.125.30@tcp: failed processing log, type 1: rc = -5 [ 1439.120137] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 1439.125148] Lustre: Skipped 2 previous similar messages [ 1440.051541] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 1440.421945] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1458.171980] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 1458.224167] Lustre: lustre-OST0000: deleting orphan objects from 0x280000401:36 to 0x280000401:129 [ 1459.122051] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1460.392026] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1461.345458] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 1462.557519] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 1462.567989] systemd[1]: tmp-mntqhyO8X.mount: Succeeded. [ 1462.621228] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 1462.686731] Lustre: lustre-OST0001: new disk, initializing [ 1462.687239] Lustre: srv-lustre-OST0001: No data found on store. Initialize space: rc = -61 [ 1464.938446] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0001-osc-[-0-9a-f]*.ost_server_uuid [ 1465.943486] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0001-osc-MDT0000.ost_server_uuid 40 [ 1466.339157] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x00000002c0000400-0x0000000300000400]:1:ost [ 1466.339557] Lustre: cli-lustre-OST0001-super: Allocated super-sequence [0x00000002c0000400-0x0000000300000400]:1:ost] [ 1466.360184] Lustre: lustre-OST0001-osc-MDT0000: update sequence from 0x100010000 to 0x2c0000401 [ 1467.225190] Lustre: DEBUG MARKER: os[cp].lustre-OST0001-osc-MDT0000.ost_server_uuid in FULL state after 1 sec [ 1468.286261] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0001-osc-MDT0001.ost_server_uuid 40 [ 1468.486971] Lustre: DEBUG MARKER: os[cp].lustre-OST0001-osc-MDT0001.ost_server_uuid in FULL state after 0 sec [ 1468.626651] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 1469.360368] LustreError: 11-0: lustre-OST0000-osc-MDT0001: operation ost_statfs to node 0@lo failed: rc = -107 [ 1469.365026] Lustre: lustre-OST0000-osc-MDT0001: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 1469.365330] Lustre: Skipped 2 previous similar messages [ 1469.366750] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 1469.370982] Lustre: Skipped 9 previous similar messages [ 1474.809576] Lustre: server umount lustre-OST0000 complete [ 1474.809757] Lustre: Skipped 2 previous similar messages [ 1475.214188] systemd[1]: mnt-lustre\x2dost2.mount: Succeeded. 
[ 1476.403107] LustreError: 137-5: lustre-OST0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 1489.761067] Lustre: lustre-OST0001 is waiting for obd_unlinked_exports more than 8 seconds. The obd refcount = 2. Is it stuck? [ 1490.248030] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 1496.566027] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 1496.566348] LustreError: Skipped 5 previous similar messages [ 1496.859624] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 1496.882547] LustreError: 59902:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679931825 with bad export cookie 7131621133055120935 [ 1496.883318] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 1496.888664] LustreError: 59902:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) Skipped 3 previous similar messages [ 1497.718958] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 1497.738911] systemd[1]: tmp-mnt9iOEgN.mount: Succeeded. [ 1498.068722] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 1498.085198] systemd[1]: tmp-mnt2PrU19.mount: Succeeded. [ 1498.410994] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 1498.760867] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 1499.308127] Lustre: DEBUG MARKER: == conf-sanity test 21d: start mgs then ost and then mds ========================================================== 15:43:47 (1679931827) [ 1499.417476] Lustre: DEBUG MARKER: SKIP: conf-sanity test_21d need separate mgs device [ 1499.561231] Lustre: DEBUG MARKER: == conf-sanity test 21e: separate MGS and MDS ============ 15:43:47 (1679931827) [ 1499.663444] Lustre: DEBUG MARKER: SKIP: conf-sanity test_21e mixed loopback and real device not working [ 1499.846041] Lustre: DEBUG MARKER: == conf-sanity test 22: start a client before osts (should return errs) ========================================================== 15:43:48 (1679931828) [ 1500.316185] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1500.545574] Lustre: MGS: Logs for fs lustre were removed by user request. All servers must be restarted in order to regenerate the logs: rc = 0 [ 1500.556375] Lustre: Setting parameter lustre-MDT0000.mdt.identity_upcall in log lustre-MDT0000 [ 1501.504401] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1501.559445] Lustre: MGS: Regenerating lustre-MDT0001 log by user request: rc = 0 [ 1502.717055] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1503.813894] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 1504.445465] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. 
Opts: errors=remount-ro,no_mbcache,nodelalloc [ 1504.677565] Lustre: MGS: Regenerating lustre-OST0000 log by user request: rc = 0 [ 1505.954358] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 1506.413204] Lustre: lustre-OST0000: deleting orphan objects from 0x280000401:36 to 0x280000401:161 [ 1506.570950] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 40 [ 1506.679383] Lustre: DEBUG MARKER: os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 0 sec [ 1507.414304] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 40 [ 1507.543740] Lustre: DEBUG MARKER: os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec [ 1507.680983] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 1511.360365] LustreError: 11-0: lustre-OST0000-osc-MDT0001: operation ost_statfs to node 0@lo failed: rc = -107 [ 1511.364630] LustreError: Skipped 1 previous similar message [ 1511.364740] Lustre: lustre-OST0000-osc-MDT0001: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 1511.365036] Lustre: Skipped 5 previous similar messages [ 1514.442248] Lustre: Mounted lustre-client [ 1514.928995] Lustre: setting import lustre-MDT0000_UUID INACTIVE by administrator request [ 1525.362384] systemd[1]: mnt-lustre.mount: Succeeded. [ 1525.414847] Lustre: Unmounted lustre-client [ 1526.002859] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 1527.540741] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 1527.809904] Lustre: Mounted lustre-client [ 1528.999011] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 40 [ 1541.283801] LustreError: 167-0: lustre-OST0000-osc-MDT0001: This client was evicted by lustre-OST0000; in progress operations using this service will fail. 
[ 1541.284491] Lustre: lustre-OST0000-osc-MDT0000: Connection restored to 192.168.125.30@tcp (at 0@lo) [ 1541.286890] Lustre: Skipped 1 previous similar message [ 1541.805856] Lustre: DEBUG MARKER: os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 12 sec [ 1542.770604] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 40 [ 1542.984491] Lustre: DEBUG MARKER: os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec [ 1543.976931] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state (FULL|IDLE) osc.lustre-OST0000-osc-ffff8b7863bc2000.ost_server_uuid 40 [ 1544.206400] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-ffff8b7863bc2000.ost_server_uuid in FULL state after 0 sec [ 1544.291107] LustreError: 61970:0:(ldlm_lockd.c:769:ldlm_handle_ast_error()) ### client (nid 0@lo) returned error from blocking AST (req@00000000479020b1 x1761535679619584 status -107 rc -107), evict it ns: mdt-lustre-MDT0000_UUID lock: 00000000bd3de64d/0x62f89ba2b46b5f11 lrc: 4/0,0 mode: PR/PR res: [0x200000007:0x1:0x0].0x0 bits 0x13/0x0 rrc: 4 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0x62f89ba2b46b5f03 expref: 5 pid: 61971 timeout: 1644 lvb_type: 0 [ 1544.291381] LustreError: 138-a: lustre-MDT0000: A client on nid 0@lo was evicted due to a lock blocking callback time out: rc -107 [ 1544.291483] LustreError: 61953:0:(ldlm_lockd.c:261:expired_lock_main()) ### lock callback timer expired after 0s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: 00000000bd3de64d/0x62f89ba2b46b5f11 lrc: 3/0,0 mode: PR/PR res: [0x200000007:0x1:0x0].0x0 bits 0x13/0x0 rrc: 4 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0x62f89ba2b46b5f03 expref: 6 pid: 61971 timeout: 0 lvb_type: 0 [ 1544.732247] systemd[1]: mnt-lustre.mount: Succeeded. [ 1544.779499] Lustre: lustre-MDT0001: haven't heard from client 4134d58d-d456-4068-a816-6f00c5c2ce37 (at 0@lo) in 31 seconds. I think it's dead, and I am evicting it. exp 000000002353e68a, cur 1679931873 expire 1679931843 last 1679931842 [ 1544.794672] Lustre: Unmounted lustre-client [ 1544.980107] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 1546.321450] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107 [ 1546.321916] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 1546.327502] LustreError: Skipped 2 previous similar messages [ 1546.332536] Lustre: Skipped 13 previous similar messages [ 1551.721132] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 1558.255323] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 1558.302840] LustreError: 61949:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679931886 with bad export cookie 7131621133055122013 [ 1558.303218] LustreError: 61949:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) Skipped 1 previous similar message [ 1558.303884] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 1562.230720] LNet: 64334:0:(lib-ptl.c:956:lnet_clear_lazy_portal()) Active lazy portal 0 on exit [ 1562.250065] LNet: Removed LNI 192.168.125.30@tcp [ 1563.687032] systemd-udevd[1035]: Specified user 'tss' unknown [ 1563.804672] systemd-udevd[1035]: Specified group 'tss' unknown [ 1563.941186] systemd-udevd[64627]: Using default interface naming scheme 'rhel-8.0'. [ 1564.111716] systemd[1]: usr-sbin-mount.lustre.mount: Succeeded. 
[ 1565.298752] LNet: HW NUMA nodes: 1, HW CPU cores: 2, npartitions: 1 [ 1566.159352] Lustre: DEBUG MARKER: == conf-sanity test 23a: interrupt client during recovery mount delay ========================================================== 15:44:53 (1679931893) [ 1566.502347] systemd-udevd[1035]: Specified user 'tss' unknown [ 1566.526140] systemd-udevd[1035]: Specified group 'tss' unknown [ 1566.618461] systemd-udevd[65133]: Using default interface naming scheme 'rhel-8.0'. [ 1567.265939] Lustre: Lustre: Build Version: 2.15.54 [ 1567.476998] LNet: Added LNI 192.168.125.30@tcp [8/256/0/180] [ 1567.477289] LNet: Accept secure, port 988 [ 1568.503669] Lustre: Echo OBD driver; http://www.lustre.org/ [ 1570.923987] Lustre: lustre-MDT0000: mounting server target with '-t lustre' deprecated, use '-t lustre_tgt' [ 1570.934207] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1572.179249] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 1572.189550] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 1573.101062] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1573.311719] Lustre: lustre-MDT0001: Imperative Recovery not enabled, recovery window 60-180 [ 1574.598227] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1575.482755] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 1576.111441] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 1576.239194] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 1577.365979] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 1577.366979] Lustre: lustre-OST0000: deleting orphan objects from 0x280000401:163 to 0x280000401:193 [ 1577.640366] Lustre: Mounted lustre-client [ 1578.942564] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 1578.986213] Lustre: Failing over lustre-MDT0000 [ 1579.134312] Lustre: server umount lustre-MDT0000 complete [ 1579.338494] Lustre: setting import lustre-MDT0000_UUID INACTIVE by administrator request [ 1583.360360] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 1583.365970] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 1583.368957] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. 
[ 1587.680104] Lustre: 65707:0:(client.c:2305:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1679931909/real 1679931909] req@00000000242ac5bd x1761536266740480/t0(0) o101->MGC192.168.125.30@tcp@0@lo:26/25 lens 328/344 e 0 to 1 dl 1679931916 ref 2 fl Rpc:XQr/0/ffffffff rc 0/-1 job:'' [ 1587.686814] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 1587.692986] Lustre: 65707:0:(mgc_request.c:1771:mgc_process_log()) MGC192.168.125.30@tcp: IR log lustre-mdtir failed, not fatal: rc = -5 [ 1587.697185] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 1587.697805] LustreError: Skipped 4 previous similar messages [ 1589.762363] systemd[1]: mnt-lustre.mount: Succeeded. [ 1589.822754] Lustre: Unmounted lustre-client [ 1590.301482] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1592.722167] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 1592.723604] LustreError: Skipped 2 previous similar messages [ 1593.762203] Lustre: Evicted from MGS (at 192.168.125.30@tcp) after server handle changed from 0x39e7f250bf81c365 to 0x39e7f250bf81c75c [ 1593.778710] Lustre: MGC192.168.125.30@tcp: Connection restored to 192.168.125.30@tcp (at 0@lo) [ 1593.909195] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 1593.930502] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 1594.246296] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 2 clients reconnect [ 1594.246505] Lustre: lustre-MDT0000: Denying connection for new client 03049e9f-6724-4b20-83ae-c7531387b7ca (at 0@lo), waiting for 2 known clients (0 recovered, 0 in progress, and 0 evicted) to recover in 0:59 [ 1594.248937] LustreError: 11-0: lustre-MDT0000-mdc-ffff8b7839151000: operation mds_connect to node 0@lo failed: rc = -16 [ 1599.286607] Lustre: lustre-MDT0000: Denying connection for new client 03049e9f-6724-4b20-83ae-c7531387b7ca (at 0@lo), waiting for 2 known clients (1 recovered, 0 in progress, and 0 evicted) to recover in 1:00 [ 1599.286909] Lustre: Skipped 1 previous similar message [ 1599.287084] LustreError: 11-0: lustre-MDT0000-mdc-ffff8b7839151000: operation mds_connect to node 0@lo failed: rc = -16 [ 1599.287233] LustreError: Skipped 1 previous similar message [ 1599.604903] LustreError: 66685:0:(lmv_obd.c:1311:lmv_statfs()) lustre-MDT0000-mdc-ffff8b7839151000: can't stat MDS #0: rc = -16 [ 1599.646999] Lustre: Unmounted lustre-client [ 1599.647254] LustreError: 66685:0:(super25.c:188:lustre_fill_super()) llite: Unable to mount : rc = -16 [ 1600.929536] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. 
[ 1603.920368] LustreError: 11-0: lustre-OST0000-osc-MDT0001: operation ost_statfs to node 0@lo failed: rc = -107 [ 1603.925789] Lustre: lustre-OST0000-osc-MDT0001: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 1603.926089] Lustre: Skipped 2 previous similar messages [ 1603.929778] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 1607.213147] Lustre: server umount lustre-OST0000 complete [ 1607.444605] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 1607.496430] LustreError: 66801:0:(ldlm_lib.c:2922:target_stop_recovery_thread()) lustre-MDT0000: Aborting recovery [ 1607.496580] Lustre: 66629:0:(ldlm_lib.c:2307:target_recovery_overseer()) recovery is aborted, evict exports in recovery [ 1607.500822] Lustre: lustre-MDT0000-osd: cancel update llog [0x200000400:0x1:0x0] [ 1607.505264] Lustre: lustre-MDT0001-osp-MDT0000: cancel update llog [0x240000401:0x1:0x0] [ 1607.509840] LustreError: 66629:0:(client.c:1255:ptlrpc_import_delay_req()) @@@ IMP_CLOSED req@0000000067292ce4 x1761536266750272/t0(0) o700->lustre-MDT0001-osp-MDT0000@0@lo:30/10 lens 264/248 e 0 to 0 dl 0 ref 2 fl Rpc:QU/0/ffffffff rc 0/-1 job:'' [ 1607.510210] LustreError: 66629:0:(fid_request.c:233:seq_client_alloc_seq()) cli-cli-lustre-MDT0001-osp-MDT0000: Cannot allocate new meta-sequence: rc = -5 [ 1607.510389] LustreError: 66629:0:(fid_request.c:335:seq_client_alloc_fid()) cli-cli-lustre-MDT0001-osp-MDT0000: Can't allocate new sequence: rc = -5 [ 1607.510841] Lustre: lustre-MDT0000: Recovery over after 0:13, of 2 clients 0 recovered and 2 were evicted. [ 1607.518301] Lustre: 66629:0:(mdt_handler.c:7664:mdt_postrecov()) lustre-MDT0000: auto trigger paused LFSCK failed: rc = -6 [ 1607.936864] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 1607.962518] LustreError: 11-0: lustre-MDT0000-lwp-MDT0001: operation mds_disconnect to node 0@lo failed: rc = -107 [ 1607.962684] LustreError: 65694:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679931936 with bad export cookie 4172570008406312796 [ 1607.964805] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 1607.965385] LustreError: 65238:0:(import.c:692:ptlrpc_connect_import_locked()) can't connect to a closed import [ 1607.966760] LustreError: 65694:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) Skipped 4 previous similar messages [ 1614.227963] Lustre: server umount lustre-MDT0001 complete [ 1614.228104] Lustre: Skipped 1 previous similar message [ 1617.590603] LNet: 67166:0:(lib-ptl.c:956:lnet_clear_lazy_portal()) Active lazy portal 0 on exit [ 1617.601302] LNet: Removed LNI 192.168.125.30@tcp [ 1618.907072] systemd-udevd[1035]: Specified user 'tss' unknown [ 1618.915006] systemd-udevd[1035]: Specified group 'tss' unknown [ 1619.016466] systemd-udevd[67476]: Using default interface naming scheme 'rhel-8.0'. [ 1619.345761] systemd[1]: usr-sbin-mount.lustre.mount: Succeeded. [ 1619.816883] LNet: HW NUMA nodes: 1, HW CPU cores: 2, npartitions: 1 [ 1620.644460] Lustre: DEBUG MARKER: == conf-sanity test 23b: Simulate -EINTR during mount ==== 15:45:48 (1679931948) [ 1620.941066] systemd-udevd[1035]: Specified user 'tss' unknown [ 1621.004547] systemd-udevd[1035]: Specified group 'tss' unknown [ 1621.070097] systemd-udevd[67952]: Using default interface naming scheme 'rhel-8.0'. 
[ 1621.881720] Lustre: Lustre: Build Version: 2.15.54 [ 1622.056935] LNet: Added LNI 192.168.125.30@tcp [8/256/0/180] [ 1622.060992] LNet: Accept secure, port 988 [ 1622.823720] Lustre: Echo OBD driver; http://www.lustre.org/ [ 1624.971198] Lustre: lustre-MDT0000: mounting server target with '-t lustre' deprecated, use '-t lustre_tgt' [ 1624.984464] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1626.285527] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 1626.296869] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 1627.116426] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1627.333584] Lustre: lustre-MDT0001: Imperative Recovery not enabled, recovery window 60-180 [ 1628.501635] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1629.107765] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 1629.452414] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 1629.560953] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 1630.530943] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 1631.765120] Lustre: lustre-OST0000: deleting orphan objects from 0x280000401:163 to 0x280000401:225 [ 1636.811820] Lustre: Mounted lustre-client [ 1637.268578] systemd[1]: mnt-lustre.mount: Succeeded. [ 1637.333544] Lustre: Unmounted lustre-client [ 1637.476664] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 1641.840546] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107 [ 1641.841999] Lustre: lustre-OST0000-osc-MDT0000: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 1641.842290] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 1641.842601] Lustre: Skipped 1 previous similar message [ 1643.713894] Lustre: server umount lustre-OST0000 complete [ 1644.111434] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 1646.880893] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 1646.889433] Lustre: Skipped 1 previous similar message [ 1646.895603] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 1651.920790] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 1651.921083] Lustre: Skipped 2 previous similar messages [ 1656.960759] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 1656.965220] Lustre: Skipped 1 previous similar message [ 1658.720047] Lustre: lustre-MDT0000 is waiting for obd_unlinked_exports more than 8 seconds. The obd refcount = 2. Is it stuck? [ 1658.788668] Lustre: server umount lustre-MDT0000 complete [ 1659.121408] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. 
[ 1659.138060] LustreError: 68524:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679931987 with bad export cookie 70881926064323131 [ 1659.140347] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 1662.980636] LNet: 69746:0:(lib-ptl.c:956:lnet_clear_lazy_portal()) Active lazy portal 0 on exit [ 1662.985702] LNet: Removed LNI 192.168.125.30@tcp [ 1664.153503] systemd-udevd[1035]: Specified user 'tss' unknown [ 1664.191173] systemd-udevd[1035]: Specified group 'tss' unknown [ 1664.277089] systemd-udevd[70058]: Using default interface naming scheme 'rhel-8.0'. [ 1664.498364] systemd[1]: usr-sbin-mount.lustre.mount: Succeeded. [ 1665.056105] LNet: HW NUMA nodes: 1, HW CPU cores: 2, npartitions: 1 [ 1665.921909] Lustre: DEBUG MARKER: == conf-sanity test 24a: Multiple MDTs on a single node == 15:46:33 (1679931993) [ 1666.055631] Lustre: DEBUG MARKER: SKIP: conf-sanity test_24a mixed loopback and real device not working [ 1666.310265] Lustre: DEBUG MARKER: == conf-sanity test 24b: Multiple MGSs on a single node (should return err) ========================================================== 15:46:34 (1679931994) [ 1666.426159] Lustre: DEBUG MARKER: SKIP: conf-sanity test_24b mixed loopback and real device not working [ 1666.638364] Lustre: DEBUG MARKER: == conf-sanity test 25: Verify modules are referenced ==== 15:46:34 (1679931994) [ 1667.023790] systemd-udevd[1035]: Specified user 'tss' unknown [ 1667.045984] systemd-udevd[1035]: Specified group 'tss' unknown [ 1667.142904] systemd-udevd[70580]: Using default interface naming scheme 'rhel-8.0'. [ 1668.059712] Lustre: Lustre: Build Version: 2.15.54 [ 1668.261151] LNet: Added LNI 192.168.125.30@tcp [8/256/0/180] [ 1668.261337] LNet: Accept secure, port 988 [ 1668.785253] Lustre: Echo OBD driver; http://www.lustre.org/ [ 1670.543901] Lustre: lustre-MDT0000: mounting server target with '-t lustre' deprecated, use '-t lustre_tgt' [ 1670.546631] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1671.857587] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 1671.882593] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 1672.811048] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1673.033022] Lustre: lustre-MDT0001: Imperative Recovery not enabled, recovery window 60-180 [ 1674.312603] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1675.083351] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 1675.550260] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 1675.708047] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 1676.441151] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 1677.604457] Lustre: lustre-OST0000: deleting orphan objects from 0x280000401:163 to 0x280000401:257 [ 1678.652007] Lustre: Mounted lustre-client [ 1683.969286] systemd[1]: mnt-lustre.mount: Succeeded. 
[ 1684.021285] Lustre: Unmounted lustre-client [ 1684.147502] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 1687.680367] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107 [ 1687.686022] Lustre: lustre-OST0000-osc-MDT0000: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 1687.692184] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 1688.720356] LustreError: 11-0: lustre-OST0000-osc-MDT0001: operation ost_statfs to node 0@lo failed: rc = -107 [ 1688.720672] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 1688.726123] Lustre: lustre-OST0000-osc-MDT0001: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 1688.731455] Lustre: Skipped 1 previous similar message [ 1690.398707] Lustre: server umount lustre-OST0000 complete [ 1690.707642] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 1693.120368] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 1693.126104] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 1693.128979] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 1696.909173] Lustre: server umount lustre-MDT0000 complete [ 1697.313055] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 1697.353992] LustreError: 71255:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679932025 with bad export cookie 14550446216958290812 [ 1697.354851] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 1697.359972] LustreError: 71255:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) Skipped 4 previous similar messages [ 1701.150789] LNet: 72503:0:(lib-ptl.c:956:lnet_clear_lazy_portal()) Active lazy portal 0 on exit [ 1701.162095] LNet: Removed LNI 192.168.125.30@tcp [ 1702.541458] systemd-udevd[1035]: Specified user 'tss' unknown [ 1702.572225] systemd-udevd[1035]: Specified group 'tss' unknown [ 1702.617843] systemd-udevd[72734]: Using default interface naming scheme 'rhel-8.0'. [ 1702.905253] systemd[1]: usr-sbin-mount.lustre.mount: Succeeded. [ 1703.739849] LNet: HW NUMA nodes: 1, HW CPU cores: 2, npartitions: 1 [ 1704.598181] Lustre: DEBUG MARKER: == conf-sanity test 26: MDT startup failure cleans LOV (should return errs) ========================================================== 15:47:12 (1679932032) [ 1704.892941] systemd-udevd[1035]: Specified user 'tss' unknown [ 1704.947126] systemd-udevd[1035]: Specified group 'tss' unknown [ 1705.066746] systemd-udevd[73262]: Using default interface naming scheme 'rhel-8.0'. [ 1705.947136] Lustre: Lustre: Build Version: 2.15.54 [ 1706.113671] LNet: Added LNI 192.168.125.30@tcp [8/256/0/180] [ 1706.113957] LNet: Accept secure, port 988 [ 1707.137924] Lustre: Echo OBD driver; http://www.lustre.org/ [ 1709.399384] Lustre: lustre-MDT0000: mounting server target with '-t lustre' deprecated, use '-t lustre_tgt' [ 1709.402884] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1710.566181] Lustre: *** cfs_fail_loc=135, val=0*** [ 1710.601163] LustreError: 73911:0:(obd_config.c:776:class_setup()) setup lustre-MDT0000 failed (-2) [ 1710.601385] LustreError: 73911:0:(obd_config.c:2004:class_config_llog_handler()) MGC192.168.125.30@tcp: cfg command failed: rc = -2 [ 1710.601579] Lustre: cmd=cf003 0:lustre-MDT0000 1:lustre-MDT0000_UUID 2:0 3:lustre-MDT0000-mdtlov 4:f [ 1710.601579] [ 1710.601918] LustreError: 15c-8: MGC192.168.125.30@tcp: Confguration from log lustre-MDT0000 failed from MGS -2. Communication error between node & MGS, a bad configuration, or other errors. See syslog for more info [ 1710.602213] LustreError: 73869:0:(tgt_mount.c:1444:server_start_targets()) failed to start server lustre-MDT0000: -2 [ 1710.602447] LustreError: 73869:0:(tgt_mount.c:2081:server_fill_super()) Unable to start targets: -2 [ 1710.602643] LustreError: 73869:0:(obd_config.c:829:class_cleanup()) Device 5 not setup [ 1710.602893] LustreError: 73869:0:(ldlm_resource.c:1124:ldlm_resource_complain()) MGC192.168.125.30@tcp: namespace resource [0x65727473756c:0x0:0x0].0x0 (00000000fc51f2df) refcount nonzero (1) after lock cleanup; forcing cleanup. [ 1710.655951] Lustre: server umount lustre-MDT0000 complete [ 1710.656202] LustreError: 73869:0:(super25.c:188:lustre_fill_super()) llite: Unable to mount : rc = -2 [ 1711.443309] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1711.674475] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 1711.701019] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 1712.687128] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1712.874300] Lustre: lustre-MDT0001: Imperative Recovery not enabled, recovery window 60-180 [ 1714.379217] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1715.667293] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 1715.897780] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 1715.926752] LustreError: 74060:0:(osp_object.c:637:osp_attr_get()) lustre-MDT0001-osp-MDT0000: osp_attr_get update error [0x200000009:0x1:0x0]: rc = -5 [ 1715.929520] LustreError: 74060:0:(lod_sub_object.c:932:lod_sub_prep_llog()) lustre-MDT0000-mdtlov: can't get id from catalogs: rc = -5 [ 1715.929695] LustreError: 74060:0:(lod_dev.c:525:lod_sub_recovery_thread()) lustre-MDT0001-osp-MDT0000: get update log duration 4, retries 0, failed: rc = -5 [ 1717.840422] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 1717.847753] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 1717.851881] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 1718.960895] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 1718.966710] Lustre: Skipped 2 previous similar messages [ 1722.108158] Lustre: server umount lustre-MDT0000 complete [ 1722.517166] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. 
[ 1722.557086] LustreError: 74006:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679932050 with bad export cookie 5602727541963831473 [ 1722.563090] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 1726.850354] LNet: 74850:0:(lib-ptl.c:956:lnet_clear_lazy_portal()) Active lazy portal 0 on exit [ 1726.850731] LNet: Removed LNI 192.168.125.30@tcp [ 1728.249774] systemd-udevd[1035]: Specified user 'tss' unknown [ 1728.347923] systemd-udevd[1035]: Specified group 'tss' unknown [ 1728.497034] systemd-udevd[75068]: Using default interface naming scheme 'rhel-8.0'. [ 1728.663355] systemd[1]: usr-sbin-mount.lustre.mount: Succeeded. [ 1729.527294] LNet: HW NUMA nodes: 1, HW CPU cores: 2, npartitions: 1 [ 1730.364927] Lustre: DEBUG MARKER: == conf-sanity test 27a: Reacquire MGS lock if OST started first ========================================================== 15:47:37 (1679932057) [ 1731.573215] systemd-udevd[1035]: Specified user 'tss' unknown [ 1731.582693] systemd-udevd[1035]: Specified group 'tss' unknown [ 1731.722868] systemd-udevd[75635]: Using default interface naming scheme 'rhel-8.0'. [ 1732.439644] systemd-udevd[1035]: Specified user 'tss' unknown [ 1732.493536] systemd-udevd[1035]: Specified group 'tss' unknown [ 1732.641057] LNet: HW NUMA nodes: 1, HW CPU cores: 2, npartitions: 2 [ 1732.653417] systemd-udevd[76122]: Using default interface naming scheme 'rhel-8.0'. [ 1733.857000] Lustre: Lustre: Build Version: 2.15.54 [ 1734.040808] LNet: Added LNI 192.168.125.30@tcp [8/256/0/180] [ 1734.041163] LNet: Accept secure, port 988 [ 1735.102558] Lustre: Echo OBD driver; http://www.lustre.org/ [ 1737.582403] Lustre: lustre-OST0000: mounting server target with '-t lustre' deprecated, use '-t lustre_tgt' [ 1737.592209] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 1754.480204] LustreError: 76692:0:(mgc_request.c:252:do_config_log_add()) MGC192.168.125.30@tcp: failed processing log, type 1: rc = -5 [ 1785.680080] LustreError: 76692:0:(mgc_request.c:252:do_config_log_add()) MGC192.168.125.30@tcp: failed processing log, type 4: rc = -110 [ 1815.840082] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 1817.092736] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 1817.810838] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1837.985399] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 1838.015709] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 1838.038474] Lustre: lustre-OST0000: deleting orphan objects from 0x280000401:259 to 0x280000401:289 [ 1838.902076] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1839.096351] Lustre: lustre-MDT0001: Imperative Recovery not enabled, recovery window 60-180 [ 1840.185359] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1841.114944] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 1841.448389] Lustre: Setting parameter lustre-OST0000.ost.client_cache_seconds in log lustre-OST0000 [ 1843.312376] systemd-udevd[77486]: Process '/mnt/build/lustre/tests/../utils/lctl set_param 'osd-ldiskfs.lustre-OST0000.client_cache_seconds=115'' failed with exit code 2. [ 1843.678275] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 1844.080765] LustreError: 11-0: lustre-OST0000-osc-MDT0001: operation ost_statfs to node 0@lo failed: rc = -107 [ 1844.086097] Lustre: lustre-OST0000-osc-MDT0001: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 1844.087216] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 1845.912648] Lustre: server umount lustre-OST0000 complete [ 1846.253399] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 1848.429561] Lustre: server umount lustre-MDT0000 complete [ 1848.796897] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 1848.828537] LustreError: 11-0: lustre-MDT0000-lwp-MDT0001: operation mds_disconnect to node 0@lo failed: rc = -107 [ 1848.830679] LustreError: 76702:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679932177 with bad export cookie 18045792477422932961 [ 1848.832036] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 1848.834080] LustreError: 76702:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) Skipped 2 previous similar messages [ 1851.174991] Lustre: server umount lustre-MDT0001 complete [ 1855.010611] LNet: 77923:0:(lib-ptl.c:956:lnet_clear_lazy_portal()) Active lazy portal 0 on exit [ 1855.010985] LNet: Removed LNI 192.168.125.30@tcp [ 1856.494228] systemd-udevd[1035]: Specified user 'tss' unknown [ 1856.589930] systemd-udevd[1035]: Specified group 'tss' unknown [ 1856.881695] systemd-udevd[78271]: Using default interface naming scheme 'rhel-8.0'. [ 1857.288251] systemd[1]: usr-sbin-mount.lustre.mount: Succeeded. [ 1858.186595] LNet: HW NUMA nodes: 1, HW CPU cores: 2, npartitions: 1 [ 1859.074265] Lustre: DEBUG MARKER: == conf-sanity test 27b: Reacquire MGS lock after failover ========================================================== 15:49:46 (1679932186) [ 1859.631151] systemd-udevd[1035]: Specified user 'tss' unknown [ 1859.673007] systemd-udevd[1035]: Specified group 'tss' unknown [ 1859.782620] systemd-udevd[78731]: Using default interface naming scheme 'rhel-8.0'. [ 1860.856574] Lustre: Lustre: Build Version: 2.15.54 [ 1861.103213] LNet: Added LNI 192.168.125.30@tcp [8/256/0/180] [ 1861.103804] LNet: Accept secure, port 988 [ 1862.496192] Lustre: Echo OBD driver; http://www.lustre.org/ [ 1864.854190] Lustre: lustre-MDT0000: mounting server target with '-t lustre' deprecated, use '-t lustre_tgt' [ 1864.855679] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1866.164653] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. 
[ 1866.194567] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 1867.430680] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1867.683782] Lustre: lustre-MDT0001: Imperative Recovery not enabled, recovery window 60-180 [ 1869.195717] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1870.614752] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 1871.325966] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 1871.517997] systemd-udevd[79864]: Process '/mnt/build/lustre/tests/../utils/lctl set_param 'osd-ldiskfs.lustre-OST0000.client_cache_seconds=115'' failed with exit code 2. [ 1871.570964] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 1872.816742] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 1881.284603] Lustre: lustre-OST0000: deleting orphan objects from 0x280000401:259 to 0x280000401:321 [ 1886.335939] Lustre: Mounted lustre-client [ 1886.876416] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 1886.916129] Lustre: Failing over lustre-MDT0000 [ 1887.074860] Lustre: server umount lustre-MDT0000 complete [ 1887.760457] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 1887.765739] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 1887.767387] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 1891.361291] Lustre: lustre-MDT0000-lwp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 1891.362255] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 1891.371564] LustreError: Skipped 3 previous similar messages [ 1896.401109] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 1896.404773] LustreError: Skipped 3 previous similar messages [ 1898.091700] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1898.166548] LustreError: 11-0: MGC192.168.125.30@tcp: operation mgs_target_reg to node 0@lo failed: rc = -107 [ 1898.169222] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 1898.178049] Lustre: Evicted from MGS (at 192.168.125.30@tcp) after server handle changed from 0x8ab3f1402d079c79 to 0x8ab3f1402d07a077 [ 1898.179029] Lustre: MGC192.168.125.30@tcp: Connection restored to 192.168.125.30@tcp (at 0@lo) [ 1898.338941] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 1898.354252] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 1898.791148] Lustre: Setting parameter lustre-MDT0000.mdt.identity_acquire_expire in log lustre-MDT0000 [ 1903.362573] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 2 clients reconnect [ 1903.364160] Lustre: lustre-MDT0000-lwp-MDT0001: Connection restored to (at 0@lo) [ 1903.372019] Lustre: lustre-MDT0000: Recovery over after 0:01, of 2 clients 2 recovered and 0 were evicted. [ 1903.386712] Lustre: lustre-OST0000: deleting orphan objects from 0x280000401:259 to 0x280000401:353 [ 1905.240903] Lustre: Setting parameter lustre-MDT0000-mdc.mdc.max_rpcs_in_flight in log lustre-client [ 1913.212633] systemd[1]: mnt-lustre.mount: Succeeded. [ 1913.268205] Lustre: Unmounted lustre-client [ 1913.414121] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 1913.451151] Lustre: lustre-OST0000-osc-MDT0001: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 1913.457189] Lustre: Skipped 2 previous similar messages [ 1913.457960] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 1913.458284] Lustre: Skipped 1 previous similar message [ 1918.482284] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 1918.486027] Lustre: Skipped 1 previous similar message [ 1919.701501] Lustre: server umount lustre-OST0000 complete [ 1920.090460] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 1923.521180] Lustre: lustre-MDT0000-lwp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 1923.525190] Lustre: Skipped 1 previous similar message [ 1923.526129] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 1923.526671] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 1923.530968] Lustre: Skipped 1 previous similar message [ 1926.350585] Lustre: server umount lustre-MDT0000 complete [ 1926.804123] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 1926.822358] LustreError: 79298:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679932255 with bad export cookie 9994597256000938103 [ 1926.829495] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 1930.900611] LNet: 80973:0:(lib-ptl.c:956:lnet_clear_lazy_portal()) Active lazy portal 0 on exit [ 1930.900979] LNet: Removed LNI 192.168.125.30@tcp [ 1932.376747] systemd-udevd[1035]: Specified user 'tss' unknown [ 1932.491127] systemd-udevd[1035]: Specified group 'tss' unknown [ 1932.617049] systemd-udevd[81311]: Using default interface naming scheme 'rhel-8.0'. [ 1933.108141] systemd[1]: usr-sbin-mount.lustre.mount: Succeeded. 
[ 1934.312045] LNet: HW NUMA nodes: 1, HW CPU cores: 2, npartitions: 1 [ 1935.201435] Lustre: DEBUG MARKER: == conf-sanity test 28A: permanent parameter setting ===== 15:51:02 (1679932262) [ 1935.962526] systemd-udevd[1035]: Specified user 'tss' unknown [ 1935.980132] systemd-udevd[1035]: Specified group 'tss' unknown [ 1936.355436] systemd-udevd[81785]: Using default interface naming scheme 'rhel-8.0'. [ 1937.624536] Lustre: Lustre: Build Version: 2.15.54 [ 1937.808990] LNet: Added LNI 192.168.125.30@tcp [8/256/0/180] [ 1937.809316] LNet: Accept secure, port 988 [ 1939.192935] Lustre: Echo OBD driver; http://www.lustre.org/ [ 1942.314434] Lustre: lustre-MDT0000: mounting server target with '-t lustre' deprecated, use '-t lustre_tgt' [ 1942.332539] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1943.691284] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 1943.731092] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 1944.880558] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1945.083730] Lustre: lustre-MDT0001: Imperative Recovery not enabled, recovery window 60-180 [ 1946.372350] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1947.208993] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 1947.832672] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 1947.975747] systemd-udevd[82912]: Process '/mnt/build/lustre/tests/../utils/lctl set_param 'osd-ldiskfs.lustre-OST0000.client_cache_seconds=115'' failed with exit code 2. [ 1948.033496] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 1949.069684] Lustre: lustre-OST0000: deleting orphan objects from 0x280000401:355 to 0x280000401:385 [ 1949.636525] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 1949.965790] Lustre: Mounted lustre-client [ 1950.372372] Lustre: Setting parameter lustre-client.llite.max_read_ahead_whole_mb in log lustre-client [ 1953.735635] Lustre: Modifying parameter lustre-client.llite.max_read_ahead_whole_mb in log lustre-client [ 1959.276564] systemd[1]: mnt-lustre.mount: Succeeded. [ 1959.358635] Lustre: Unmounted lustre-client [ 1959.514362] Lustre: Mounted lustre-client [ 1959.811833] Lustre: Modifying parameter lustre-client.llite.max_read_ahead_whole_mb in log lustre-client [ 1966.407843] systemd[1]: mnt-lustre.mount: Succeeded. [ 1966.480330] Lustre: Unmounted lustre-client [ 1966.745651] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 1969.200436] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107 [ 1969.204670] Lustre: lustre-OST0000-osc-MDT0000: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 1969.205387] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 1973.037024] Lustre: server umount lustre-OST0000 complete [ 1973.393447] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. 
[ 1974.640842] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 1974.648360] Lustre: Skipped 1 previous similar message [ 1974.653930] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 1974.654087] Lustre: Skipped 1 previous similar message [ 1979.664106] Lustre: server umount lustre-MDT0000 complete [ 1979.680626] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 1979.681200] LustreError: Skipped 1 previous similar message [ 1980.068315] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 1980.120527] LustreError: 82347:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679932308 with bad export cookie 13850891966911004952 [ 1980.126990] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 1984.120600] LNet: 83837:0:(lib-ptl.c:956:lnet_clear_lazy_portal()) Active lazy portal 0 on exit [ 1984.121017] LNet: Removed LNI 192.168.125.30@tcp [ 1985.496661] systemd-udevd[1035]: Specified user 'tss' unknown [ 1985.498654] systemd-udevd[1035]: Specified group 'tss' unknown [ 1985.723965] systemd-udevd[84183]: Using default interface naming scheme 'rhel-8.0'. [ 1986.174484] systemd[1]: usr-sbin-mount.lustre.mount: Succeeded. [ 1987.263814] LNet: HW NUMA nodes: 1, HW CPU cores: 2, npartitions: 1 [ 1988.078597] Lustre: DEBUG MARKER: == conf-sanity test 28a: set symlink parameters permanently with lctl ========================================================== 15:51:55 (1679932315) [ 1988.389899] systemd-udevd[1035]: Specified user 'tss' unknown [ 1988.402380] systemd-udevd[1035]: Specified group 'tss' unknown [ 1988.652221] systemd-udevd[84551]: Using default interface naming scheme 'rhel-8.0'. [ 1989.725920] Lustre: Lustre: Build Version: 2.15.54 [ 1989.943610] LNet: Added LNI 192.168.125.30@tcp [8/256/0/180] [ 1989.943901] LNet: Accept secure, port 988 [ 1991.057018] Lustre: Echo OBD driver; http://www.lustre.org/ [ 1993.684854] Lustre: lustre-MDT0000: mounting server target with '-t lustre' deprecated, use '-t lustre_tgt' [ 1993.686568] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1995.034415] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 1995.061557] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 1996.177133] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1996.360061] Lustre: lustre-MDT0001: Imperative Recovery not enabled, recovery window 60-180 [ 1997.648590] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1998.812692] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 1999.386227] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 1999.560456] systemd-udevd[85781]: Process '/mnt/build/lustre/tests/../utils/lctl set_param 'osd-ldiskfs.lustre-OST0000.client_cache_seconds=115'' failed with exit code 2. 
[ 1999.603269] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 2000.657977] Lustre: lustre-OST0000: deleting orphan objects from 0x280000401:355 to 0x280000401:417 [ 2000.856684] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 2001.150553] Lustre: Mounted lustre-client [ 2001.457533] Lustre: Modifying parameter lustre-OST0000.ost.client_cache_seconds in log lustre-OST0000 [ 2007.535282] systemd-udevd[86061]: Process '/mnt/build/lustre/tests/../utils/lctl set_param 'osd-ldiskfs.lustre-OST0000.client_cache_seconds=230'' failed with exit code 2. [ 2007.827963] Lustre: Modifying parameter lustre-OST0000.ost.client_cache_seconds in log lustre-OST0000 [ 2013.305968] systemd-udevd[86137]: Process '/mnt/build/lustre/tests/../utils/lctl set_param 'osd-ldiskfs.lustre-OST0000.client_cache_seconds=115'' failed with exit code 2. [ 2014.284016] Lustre: Setting parameter lustre-OST0000.osd.auto_scrub in log lustre-OST0000 [ 2020.770285] Lustre: Modifying parameter lustre-OST0000.osd.auto_scrub in log lustre-OST0000 [ 2026.348646] systemd[1]: mnt-lustre.mount: Succeeded. [ 2026.416739] Lustre: Unmounted lustre-client [ 2026.609977] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 2028.841021] Lustre: server umount lustre-OST0000 complete [ 2029.272846] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 2031.360763] Lustre: lustre-MDT0000-lwp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 2031.370172] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 2031.374047] Lustre: Skipped 1 previous similar message [ 2035.440059] Lustre: 86378:0:(client.c:2305:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1679932357/real 1679932357] req@000000002b031fe9 x1761536710304448/t0(0) o9->lustre-OST0000-osc-MDT0000@0@lo:28/4 lens 224/224 e 0 to 1 dl 1679932363 ref 2 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'' [ 2035.573710] Lustre: server umount lustre-MDT0000 complete [ 2035.914167] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 2035.945808] LustreError: 85215:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679932364 with bad export cookie 8924772512950615263 [ 2035.947126] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 2042.000066] Lustre: 86423:0:(client.c:2305:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1679932364/real 1679932364] req@00000000d50a9910 x1761536710305408/t0(0) o9->lustre-OST0000-osc-MDT0001@0@lo:28/4 lens 224/224 e 0 to 1 dl 1679932370 ref 2 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'' [ 2042.244991] Lustre: server umount lustre-MDT0001 complete [ 2045.640644] LNet: 86741:0:(lib-ptl.c:956:lnet_clear_lazy_portal()) Active lazy portal 0 on exit [ 2045.646768] LNet: Removed LNI 192.168.125.30@tcp [ 2046.866440] systemd-udevd[1035]: Specified user 'tss' unknown [ 2046.932952] systemd-udevd[1035]: Specified group 'tss' unknown [ 2047.052181] systemd-udevd[86894]: Using default interface naming scheme 'rhel-8.0'. [ 2047.657911] systemd[1]: usr-sbin-mount.lustre.mount: Succeeded. 
[ 2048.694049] LNet: HW NUMA nodes: 1, HW CPU cores: 2, npartitions: 1 [ 2049.588265] Lustre: DEBUG MARKER: == conf-sanity test 29: permanently remove an OST ======== 15:52:56 (1679932376) [ 2050.150966] systemd-udevd[1035]: Specified user 'tss' unknown [ 2050.194794] systemd-udevd[1035]: Specified group 'tss' unknown [ 2050.350945] systemd-udevd[87515]: Using default interface naming scheme 'rhel-8.0'. [ 2051.495601] Lustre: Lustre: Build Version: 2.15.54 [ 2051.686654] LNet: Added LNI 192.168.125.30@tcp [8/256/0/180] [ 2051.686959] LNet: Accept secure, port 988 [ 2052.785267] Lustre: Echo OBD driver; http://www.lustre.org/ [ 2055.494210] Lustre: lustre-MDT0000: mounting server target with '-t lustre' deprecated, use '-t lustre_tgt' [ 2055.495837] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 2056.847027] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 2056.873562] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 2057.989904] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 2058.233621] Lustre: lustre-MDT0001: Imperative Recovery not enabled, recovery window 60-180 [ 2059.686649] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 2060.690583] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 2061.326865] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 2061.472761] systemd-udevd[88681]: Process '/mnt/build/lustre/tests/../utils/lctl set_param 'osd-ldiskfs.lustre-OST0000.client_cache_seconds=115'' failed with exit code 2. [ 2061.526278] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 2062.593489] Lustre: lustre-OST0000: deleting orphan objects from 0x280000401:355 to 0x280000401:449 [ 2062.648856] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 2062.896711] Lustre: Mounted lustre-client [ 2063.759560] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 2063.779950] Lustre: MGS: Regenerating lustre-OST0001 log by user request: rc = 0 [ 2063.849648] Lustre: lustre-OST0001: Imperative Recovery not enabled, recovery window 60-180 [ 2065.067871] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0001-osc-[-0-9a-f]*.ost_server_uuid [ 2065.359352] Lustre: DEBUG MARKER: osc.lustre-OST0001-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 2075.521307] Lustre: Permanently deactivating lustre-OST0001 [ 2075.551116] Lustre: Setting parameter lustre-OST0001-osc.osc.active in log lustre-client [ 2085.607618] Lustre: setting import lustre-OST0001_UUID INACTIVE by administrator request [ 2096.541140] systemd[1]: mnt-lustre.mount: Succeeded. 
[ 2096.605929] Lustre: Unmounted lustre-client [ 2096.767710] Lustre: setting import lustre-OST0001_UUID INACTIVE by administrator request [ 2096.770759] Lustre: Skipped 2 previous similar messages [ 2096.792857] Lustre: Mounted lustre-client [ 2097.066887] Lustre: Permanently reactivating lustre-OST0001 [ 2097.069207] Lustre: Modifying parameter lustre-OST0001-osc.osc.active in log lustre-client [ 2097.069348] Lustre: Skipped 2 previous similar messages [ 2104.162615] LustreError: 89509:0:(obd_config.c:2004:class_config_llog_handler()) MGC192.168.125.30@tcp: cfg command failed: rc = -114 [ 2104.169525] Lustre: cmd=cf00f 0:lustre-OST0001-osc 1:osc.active=1 [ 2104.169525] [ 2104.169933] LustreError: 88129:0:(mgc_request.c:623:do_requeue()) failed processing log: -114 [ 2104.184437] Lustre: lustre-OST0001-osc-MDT0001: Connection to lustre-OST0001 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 2104.184905] Lustre: lustre-OST0001: Client lustre-MDT0001-mdtlov_UUID (at 0@lo) reconnecting [ 2104.185239] LustreError: 167-0: lustre-OST0001-osc-MDT0001: This client was evicted by lustre-OST0001; in progress operations using this service will fail. [ 2104.186142] Lustre: lustre-OST0001-osc-MDT0001: Connection restored to 192.168.125.30@tcp (at 0@lo) [ 2104.708883] systemd[1]: mnt-lustre.mount: Succeeded. [ 2104.777068] Lustre: Unmounted lustre-client [ 2104.975065] systemd[1]: mnt-lustre\x2dost2.mount: Succeeded. [ 2109.202671] Lustre: lustre-OST0001-osc-MDT0000: Connection to lustre-OST0001 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 2109.207568] Lustre: Skipped 1 previous similar message [ 2109.212558] LustreError: 11-0: lustre-OST0001-osc-MDT0001: operation ost_statfs to node 0@lo failed: rc = -107 [ 2109.212686] Lustre: lustre-OST0001: Not available for connect from 0@lo (stopping) [ 2109.213570] Lustre: Skipped 1 previous similar message [ 2111.129629] Lustre: server umount lustre-OST0001 complete [ 2111.500586] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 2112.960412] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107 [ 2112.965684] Lustre: lustre-OST0000-osc-MDT0000: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 2112.965907] Lustre: Skipped 1 previous similar message [ 2112.967994] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 2114.240667] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 2114.240765] LustreError: 137-5: lustre-OST0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 2114.254166] LustreError: Skipped 1 previous similar message [ 2117.773574] Lustre: server umount lustre-OST0000 complete [ 2118.088233] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. 
[ 2118.640373] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 2118.645346] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 2118.645554] Lustre: Skipped 1 previous similar message [ 2118.651509] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 2118.651671] Lustre: Skipped 1 previous similar message [ 2124.320831] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 2124.325275] Lustre: Skipped 3 previous similar messages [ 2132.320049] Lustre: lustre-MDT0000 is waiting for obd_unlinked_exports more than 8 seconds. The obd refcount = 2. Is it stuck? [ 2132.402192] Lustre: server umount lustre-MDT0000 complete [ 2132.753420] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 2132.793280] LustreError: 88117:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679932461 with bad export cookie 10138607886853250005 [ 2132.810040] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 2136.850566] LNet: 90017:0:(lib-ptl.c:956:lnet_clear_lazy_portal()) Active lazy portal 0 on exit [ 2136.851000] LNet: Removed LNI 192.168.125.30@tcp [ 2138.155782] systemd-udevd[1035]: Specified user 'tss' unknown [ 2138.180086] systemd-udevd[1035]: Specified group 'tss' unknown [ 2138.503321] systemd-udevd[90358]: Using default interface naming scheme 'rhel-8.0'. [ 2138.638620] systemd[1]: usr-sbin-mount.lustre.mount: Succeeded. [ 2143.154385] LNet: HW NUMA nodes: 1, HW CPU cores: 2, npartitions: 1 [ 2144.003375] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing set_hostid [ 2144.388347] systemd-udevd[1035]: Specified user 'tss' unknown [ 2144.393321] systemd-udevd[1035]: Specified group 'tss' unknown [ 2144.450444] systemd-udevd[91064]: Using default interface naming scheme 'rhel-8.0'. 
[ 2145.729858] Lustre: Lustre: Build Version: 2.15.54 [ 2145.938068] LNet: Added LNI 192.168.125.30@tcp [8/256/0/180] [ 2145.938380] LNet: Accept secure, port 988 [ 2147.084504] Lustre: Echo OBD driver; http://www.lustre.org/ [ 2151.233375] blk_update_request: operation not supported error, dev loop0, sector 399872 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 2151.246795] blk_update_request: operation not supported error, dev loop0, sector 208 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 2151.249992] blk_update_request: operation not supported error, dev loop0, sector 160232 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 2151.250723] blk_update_request: operation not supported error, dev loop0, sector 160240 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 2151.251076] blk_update_request: operation not supported error, dev loop0, sector 160248 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 2151.251323] blk_update_request: operation not supported error, dev loop0, sector 160256 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 2151.251580] blk_update_request: operation not supported error, dev loop0, sector 160264 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 2151.251804] blk_update_request: operation not supported error, dev loop0, sector 160272 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 2151.252129] blk_update_request: operation not supported error, dev loop0, sector 160280 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 2151.252423] blk_update_request: operation not supported error, dev loop0, sector 160288 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 2152.797986] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 2152.818105] systemd[1]: tmp-mntft4j4t.mount: Succeeded. [ 2155.413232] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 2155.436684] systemd[1]: tmp-mntcJQkDv.mount: Succeeded. [ 2156.947308] print_req_error: 8188 callbacks suppressed [ 2156.947311] blk_update_request: operation not supported error, dev loop2, sector 399872 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 2156.947789] blk_update_request: operation not supported error, dev loop2, sector 8224 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 2156.948287] blk_update_request: operation not supported error, dev loop2, sector 58264 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 2156.968705] blk_update_request: operation not supported error, dev loop2, sector 58272 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 2157.069340] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 2158.647077] blk_update_request: operation not supported error, dev loop3, sector 399872 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 2158.647522] blk_update_request: operation not supported error, dev loop3, sector 8224 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 2158.656281] blk_update_request: operation not supported error, dev loop3, sector 58264 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 2158.662209] blk_update_request: operation not supported error, dev loop3, sector 58272 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 2158.727362] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. 
Opts: errors=remount-ro [ 2159.293457] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 2159.312289] systemd[1]: tmp-mnt9Q6jQT.mount: Succeeded. [ 2159.361999] Lustre: lustre-MDT0000: mounting server target with '-t lustre' deprecated, use '-t lustre_tgt' [ 2159.375182] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 2160.591456] Lustre: Setting parameter lustre-MDT0000.mdt.identity_upcall in log lustre-MDT0000 [ 2160.606788] Lustre: ctl-lustre-MDT0000: No data found on store. Initialize space: rc = -61 [ 2160.681143] Lustre: lustre-MDT0000: new disk, initializing [ 2160.746036] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 2160.756471] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000200000400-0x0000000240000400]:0:mdt [ 2162.839715] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 2162.868607] systemd[1]: tmp-mntNH6wzQ.mount: Succeeded. [ 2162.892210] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 2162.934657] Lustre: Setting parameter lustre-MDT0001.mdt.identity_upcall in log lustre-MDT0001 [ 2162.962394] Lustre: srv-lustre-MDT0001: No data found on store. Initialize space: rc = -61 [ 2162.962578] Lustre: Skipped 1 previous similar message [ 2163.029033] Lustre: lustre-MDT0001: new disk, initializing [ 2163.091960] Lustre: lustre-MDT0001: Imperative Recovery not enabled, recovery window 60-180 [ 2163.105671] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000240000400-0x0000000280000400]:1:mdt [ 2163.106194] Lustre: cli-ctl-lustre-MDT0001: Allocated super-sequence [0x0000000240000400-0x0000000280000400]:1:mdt] [ 2165.466218] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 2166.549857] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 2167.262608] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 2167.287871] systemd[1]: tmp-mntOF5WMn.mount: Succeeded. [ 2167.323468] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 2167.454198] Lustre: lustre-OST0000: new disk, initializing [ 2167.454690] Lustre: srv-lustre-OST0000: No data found on store. 
Initialize space: rc = -61 [ 2167.480336] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 2169.853014] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 2171.227384] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 40 [ 2175.225547] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000280000400-0x00000002c0000400]:0:ost [ 2175.225918] Lustre: cli-lustre-OST0000-super: Allocated super-sequence [0x0000000280000400-0x00000002c0000400]:0:ost] [ 2175.760956] Lustre: DEBUG MARKER: os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 4 sec [ 2176.948179] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 40 [ 2177.168449] Lustre: DEBUG MARKER: os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec [ 2177.327397] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 2180.241834] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107 [ 2180.242452] Lustre: lustre-OST0000-osc-MDT0000: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 2180.247995] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 2183.672939] Lustre: server umount lustre-OST0000 complete [ 2183.975478] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 2184.013680] LustreError: 93738:0:(fid_request.c:233:seq_client_alloc_seq()) cli-cli-lustre-OST0000-osc-MDT0000: Cannot allocate new meta-sequence: rc = -5 [ 2184.016339] LustreError: 93738:0:(fid_request.c:275:seq_client_get_seq()) cli-cli-lustre-OST0000-osc-MDT0000: Can't allocate new sequence: rc = -5 [ 2184.016532] LustreError: 93738:0:(osp_precreate.c:521:osp_precreate_rollover_new_seq()) lustre-OST0000-osc-MDT0000: alloc fid error: rc = -5 [ 2185.280815] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 2185.285879] Lustre: Skipped 1 previous similar message [ 2185.288950] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 2185.292946] Lustre: Skipped 2 previous similar messages [ 2190.321803] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 2190.325715] Lustre: Skipped 1 previous similar message [ 2195.360539] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 2195.365676] Lustre: Skipped 1 previous similar message [ 2198.240049] Lustre: lustre-MDT0000 is waiting for obd_unlinked_exports more than 8 seconds. The obd refcount = 2. Is it stuck? [ 2198.288389] Lustre: server umount lustre-MDT0000 complete [ 2198.686806] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 2198.724612] LustreError: 92829:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679932527 with bad export cookie 3064075127360569068 [ 2198.725909] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 2200.099087] Lustre: DEBUG MARKER: == conf-sanity test 30a: Big config llog and permanent parameter deletion ========================================================== 15:55:28 (1679932528) [ 2200.785794] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 2201.081570] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 2201.117995] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 2202.336782] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 2204.326791] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 2205.516405] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 2206.072645] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 2206.234072] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 2206.234261] Lustre: Skipped 1 previous similar message [ 2207.690788] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 2207.916595] Lustre: Mounted lustre-client [ 2207.953242] Lustre: lustre-OST0000-osc-MDT0000: update sequence from 0x100000000 to 0x280000bd0 [ 2208.281561] Lustre: Setting parameter lustre-client.llite.max_read_ahead_whole_mb in log lustre-client [ 2211.627731] Lustre: Modifying parameter lustre-client.llite.max_read_ahead_whole_mb in log lustre-client [ 2216.996831] Lustre: Modifying parameter lustre-client.llite.max_read_ahead_whole_mb in log lustre-client [ 2226.451900] Lustre: Modifying parameter lustre-client.llite.max_read_ahead_whole_mb in log lustre-client [ 2248.642882] Lustre: Modifying parameter lustre-client.llite.max_read_ahead_whole_mb in log lustre-client [ 2248.652924] Lustre: Skipped 2 previous similar messages [ 2285.613270] Lustre: Modifying parameter lustre-client.llite.max_read_ahead_whole_mb in log lustre-client [ 2285.615926] Lustre: Skipped 4 previous similar messages [ 2352.079573] Lustre: Modifying parameter lustre-client.llite.max_read_ahead_whole_mb in log lustre-client [ 2352.079760] Lustre: Skipped 7 previous similar messages [ 2368.889195] systemd[1]: mnt-lustre.mount: Succeeded. [ 2368.957423] Lustre: Unmounted lustre-client [ 2369.043622] Lustre: Mounted lustre-client [ 2369.349190] systemd[1]: mnt-lustre.mount: Succeeded. [ 2369.869455] Lustre: Unmounted lustre-client [ 2369.869579] Lustre: Skipped 1 previous similar message [ 2369.993252] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. 
[ 2370.240404] LustreError: 11-0: lustre-OST0000-osc-MDT0001: operation ost_statfs to node 0@lo failed: rc = -107 [ 2370.242887] LustreError: Skipped 2 previous similar messages [ 2370.245116] Lustre: lustre-OST0000-osc-MDT0001: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 2370.245383] Lustre: Skipped 1 previous similar message [ 2370.246640] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 2370.246867] Lustre: Skipped 2 previous similar messages [ 2374.240958] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107 [ 2374.245138] Lustre: lustre-OST0000-osc-MDT0000: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 2376.198510] Lustre: server umount lustre-OST0000 complete [ 2376.199058] Lustre: Skipped 1 previous similar message [ 2376.542815] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 2379.040448] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 2379.045551] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 2379.048132] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 2379.048286] Lustre: Skipped 2 previous similar messages [ 2382.791936] Lustre: server umount lustre-MDT0000 complete [ 2383.091841] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 2383.134361] LustreError: 94218:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679932711 with bad export cookie 3064075127360569908 [ 2383.134894] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 2383.136066] LustreError: 94218:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) Skipped 8 previous similar messages [ 2386.950619] LNet: 97244:0:(lib-ptl.c:956:lnet_clear_lazy_portal()) Active lazy portal 0 on exit [ 2386.957883] LNet: Removed LNI 192.168.125.30@tcp [ 2388.309776] systemd-udevd[1035]: Specified user 'tss' unknown [ 2388.405877] systemd-udevd[1035]: Specified group 'tss' unknown [ 2388.573503] systemd-udevd[97447]: Using default interface naming scheme 'rhel-8.0'. [ 2389.030552] systemd[1]: usr-sbin-mount.lustre.mount: Succeeded. [ 2389.899548] LNet: HW NUMA nodes: 1, HW CPU cores: 2, npartitions: 1 [ 2390.737689] Lustre: DEBUG MARKER: == conf-sanity test 30b: Remove failover nids ============ 15:58:38 (1679932718) [ 2391.164612] systemd-udevd[1035]: Specified user 'tss' unknown [ 2391.278529] systemd-udevd[1035]: Specified group 'tss' unknown [ 2391.298587] systemd-udevd[97988]: Using default interface naming scheme 'rhel-8.0'. [ 2392.168295] Lustre: Lustre: Build Version: 2.15.54 [ 2392.313750] LNet: Added LNI 192.168.125.30@tcp [8/256/0/180] [ 2392.313974] LNet: Accept secure, port 988 [ 2393.181906] Lustre: Echo OBD driver; http://www.lustre.org/ [ 2395.078248] Lustre: lustre-MDT0000: mounting server target with '-t lustre' deprecated, use '-t lustre_tgt' [ 2395.079986] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 2396.415461] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. 
[ 2396.432776] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 2397.242693] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 2397.394938] Lustre: lustre-MDT0001: Imperative Recovery not enabled, recovery window 60-180 [ 2398.277581] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 2399.022425] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 2399.492289] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 2399.653997] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 2400.700500] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 2400.937910] Lustre: Mounted lustre-client [ 2402.635514] systemd[1]: mnt-lustre.mount: Succeeded. [ 2402.702961] Lustre: Unmounted lustre-client [ 2402.921637] Lustre: Mounted lustre-client [ 2403.380681] systemd[1]: mnt-lustre.mount: Succeeded. [ 2403.446287] Lustre: Unmounted lustre-client [ 2403.633243] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 2405.680469] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107 [ 2405.686889] Lustre: lustre-OST0000-osc-MDT0000: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 2405.692008] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 2409.726303] Lustre: server umount lustre-OST0000 complete [ 2410.030715] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 2410.066222] LustreError: 98663:0:(osp_precreate.c:704:osp_precreate_send()) lustre-OST0000-osc-MDT0000: can't precreate: rc = -5 [ 2410.083895] LustreError: 98663:0:(osp_precreate.c:1405:osp_precreate_thread()) lustre-OST0000-osc-MDT0000: cannot precreate objects: rc = -5 [ 2410.720632] Lustre: lustre-MDT0000-lwp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 2410.725113] Lustre: Skipped 1 previous similar message [ 2410.726298] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 2410.731033] Lustre: Skipped 2 previous similar messages [ 2415.760430] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 2415.765755] Lustre: Skipped 1 previous similar message [ 2420.800477] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 2420.808481] Lustre: Skipped 1 previous similar message [ 2424.800041] Lustre: lustre-MDT0000 is waiting for obd_unlinked_exports more than 8 seconds. The obd refcount = 2. Is it stuck? [ 2424.860134] Lustre: server umount lustre-MDT0000 complete [ 2424.971311] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 2425.112575] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. 
[ 2425.163109] LustreError: 98619:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679932753 with bad export cookie 5735995795801670881 [ 2425.173997] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 2428.541863] LNet: 99995:0:(lib-ptl.c:956:lnet_clear_lazy_portal()) Active lazy portal 0 on exit [ 2428.552060] LNet: Removed LNI 192.168.125.30@tcp [ 2429.755802] systemd-udevd[1035]: Specified user 'tss' unknown [ 2429.788300] systemd-udevd[1035]: Specified group 'tss' unknown [ 2429.881186] systemd-udevd[100331]: Using default interface naming scheme 'rhel-8.0'. [ 2430.162504] systemd[1]: usr-sbin-mount.lustre.mount: Succeeded. [ 2430.733847] LNet: HW NUMA nodes: 1, HW CPU cores: 2, npartitions: 1 [ 2431.593902] Lustre: DEBUG MARKER: == conf-sanity test 31: Connect to non-existent node (shouldn't crash) ========================================================== 15:59:19 (1679932759) [ 2432.800586] systemd-udevd[1035]: Specified user 'tss' unknown [ 2432.841708] systemd-udevd[1035]: Specified group 'tss' unknown [ 2432.927906] systemd-udevd[100924]: Using default interface naming scheme 'rhel-8.0'. [ 2434.023316] LNet: HW NUMA nodes: 1, HW CPU cores: 2, npartitions: 1 [ 2434.871328] Lustre: DEBUG MARKER: SKIP: conf-sanity test_32a skipping excluded test 32a (base 32) [ 2434.963113] Lustre: DEBUG MARKER: SKIP: conf-sanity test_32b skipping excluded test 32b (base 32) [ 2435.084539] Lustre: DEBUG MARKER: SKIP: conf-sanity test_32c skipping excluded test 32c (base 32) [ 2435.187704] Lustre: DEBUG MARKER: SKIP: conf-sanity test_32d skipping excluded test 32d (base 32) [ 2435.281968] Lustre: DEBUG MARKER: SKIP: conf-sanity test_32e skipping excluded test 32e (base 32) [ 2435.440877] Lustre: DEBUG MARKER: SKIP: conf-sanity test_32f skipping excluded test 32f (base 32) [ 2435.546030] Lustre: DEBUG MARKER: SKIP: conf-sanity test_32g skipping excluded test 32g (base 32) [ 2435.680640] Lustre: DEBUG MARKER: == conf-sanity test 33a: Mount ost with a large index number ========================================================== 15:59:23 (1679932763) [ 2435.806718] Lustre: DEBUG MARKER: SKIP: conf-sanity test_33a mixed loopback and real device not working [ 2435.986087] Lustre: DEBUG MARKER: == conf-sanity test 33b: Drop cancel during umount ======= 15:59:24 (1679932764) [ 2436.299270] systemd-udevd[1035]: Specified user 'tss' unknown [ 2436.309610] systemd-udevd[1035]: Specified group 'tss' unknown [ 2436.448877] systemd-udevd[101580]: Using default interface naming scheme 'rhel-8.0'. [ 2437.044560] Lustre: Lustre: Build Version: 2.15.54 [ 2437.163772] LNet: Added LNI 192.168.125.30@tcp [8/256/0/180] [ 2437.163979] LNet: Accept secure, port 988 [ 2437.910530] Lustre: Echo OBD driver; http://www.lustre.org/ [ 2440.035298] Lustre: lustre-MDT0000: mounting server target with '-t lustre' deprecated, use '-t lustre_tgt' [ 2440.037019] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 2441.307084] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 2441.340379] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 2442.054876] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 2442.235723] Lustre: lustre-MDT0001: Imperative Recovery not enabled, recovery window 60-180 [ 2443.355085] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 2444.225321] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 2444.861836] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 2445.045323] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 2445.924513] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 2446.252161] Lustre: Mounted lustre-client [ 2452.709467] systemd[1]: mnt-lustre.mount: Succeeded. [ 2452.768895] Lustre: *** cfs_fail_loc=304, val=0*** [ 2452.809920] Lustre: Unmounted lustre-client [ 2452.949330] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 2456.160558] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107 [ 2456.165505] Lustre: lustre-OST0000-osc-MDT0000: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 2456.167235] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 2457.361179] Lustre: lustre-OST0000-osc-MDT0001: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 2457.361694] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 2457.368125] LustreError: 11-0: lustre-OST0000-osc-MDT0001: operation ost_statfs to node 0@lo failed: rc = -107 [ 2457.372678] Lustre: Skipped 1 previous similar message [ 2459.220123] Lustre: server umount lustre-OST0000 complete [ 2459.438753] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 2460.480112] Lustre: 101691:0:(client.c:2305:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1679932781/real 1679932781] req@00000000c0dadcbb x1761537179003456/t0(0) o103->MGC192.168.125.30@tcp@0@lo:17/18 lens 328/224 e 0 to 1 dl 1679932788 ref 1 fl Rpc:XQr/0/ffffffff rc 0/-1 job:'' [ 2460.483682] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 2460.484684] Lustre: lustre-MDT0000-lwp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 2460.484868] Lustre: MGS: Client 3ee3d0e0-25be-49fd-8c9f-394f2a4aea4a (at 0@lo) reconnecting [ 2460.485388] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 2460.487421] Lustre: MGC192.168.125.30@tcp: Connection restored to 192.168.125.30@tcp (at 0@lo) [ 2465.520471] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 2465.528316] Lustre: Skipped 2 previous similar messages [ 2465.628787] Lustre: server umount lustre-MDT0000 complete [ 2465.972085] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. 
[ 2472.161459] Lustre: 103072:0:(client.c:2305:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1679932794/real 1679932794] req@000000006da0f458 x1761537179006144/t0(0) o251->MGC192.168.125.30@tcp@0@lo:26/25 lens 224/224 e 0 to 1 dl 1679932800 ref 2 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'' [ 2472.284994] Lustre: server umount lustre-MDT0001 complete [ 2475.810371] LNet: 103389:0:(lib-ptl.c:956:lnet_clear_lazy_portal()) Active lazy portal 0 on exit [ 2475.821501] LNet: Removed LNI 192.168.125.30@tcp [ 2476.985075] systemd-udevd[1035]: Specified user 'tss' unknown [ 2477.015874] systemd-udevd[1035]: Specified group 'tss' unknown [ 2477.199686] systemd-udevd[103687]: Using default interface naming scheme 'rhel-8.0'. [ 2477.572765] systemd[1]: usr-sbin-mount.lustre.mount: Succeeded. [ 2478.305888] LNet: HW NUMA nodes: 1, HW CPU cores: 2, npartitions: 1 [ 2479.179862] Lustre: DEBUG MARKER: == conf-sanity test 34a: umount with opened file should be fail ========================================================== 16:00:06 (1679932806) [ 2479.712743] systemd-udevd[1035]: Specified user 'tss' unknown [ 2479.767054] systemd-udevd[1035]: Specified group 'tss' unknown [ 2479.814796] systemd-udevd[104041]: Using default interface naming scheme 'rhel-8.0'. [ 2480.467912] Lustre: Lustre: Build Version: 2.15.54 [ 2480.608029] LNet: Added LNI 192.168.125.30@tcp [8/256/0/180] [ 2480.608352] LNet: Accept secure, port 988 [ 2481.341035] Lustre: Echo OBD driver; http://www.lustre.org/ [ 2483.718380] Lustre: lustre-MDT0000: mounting server target with '-t lustre' deprecated, use '-t lustre_tgt' [ 2483.734861] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 2484.969922] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 2484.986027] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 2485.472714] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 2485.632149] Lustre: lustre-MDT0001: Imperative Recovery not enabled, recovery window 60-180 [ 2486.545780] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 2487.398076] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 2487.948040] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 2488.064197] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 2488.846987] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 2490.087325] Lustre: lustre-OST0000: deleting orphan objects from 0x280000bd0:35 to 0x280000bd0:65 [ 2495.132314] Lustre: Mounted lustre-client [ 2498.087146] systemd[1]: mnt-lustre.mount: Succeeded. [ 2498.138152] Lustre: Unmounted lustre-client [ 2498.344972] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. 
[ 2500.161389] Lustre: lustre-OST0000-osc-MDT0001: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 2500.162172] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 2500.162801] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107 [ 2500.162913] Lustre: Skipped 1 previous similar message [ 2504.587467] Lustre: server umount lustre-OST0000 complete [ 2504.927812] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 2505.200769] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 2505.206074] Lustre: Skipped 1 previous similar message [ 2505.209247] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 2505.213252] Lustre: Skipped 1 previous similar message [ 2510.240520] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 2510.246000] Lustre: Skipped 1 previous similar message [ 2511.168469] Lustre: server umount lustre-MDT0000 complete [ 2511.472115] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 2511.494877] LustreError: 104748:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679932839 with bad export cookie 7588334246529506742 [ 2511.496979] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 2511.502024] LustreError: 104748:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) Skipped 1 previous similar message [ 2515.180609] LNet: 106002:0:(lib-ptl.c:956:lnet_clear_lazy_portal()) Active lazy portal 0 on exit [ 2515.181052] LNet: Removed LNI 192.168.125.30@tcp [ 2516.351595] systemd-udevd[1035]: Specified user 'tss' unknown [ 2516.380180] systemd-udevd[1035]: Specified group 'tss' unknown [ 2516.522188] systemd-udevd[106342]: Using default interface naming scheme 'rhel-8.0'. [ 2516.751147] systemd[1]: usr-sbin-mount.lustre.mount: Succeeded. [ 2517.865324] LNet: HW NUMA nodes: 1, HW CPU cores: 2, npartitions: 1 [ 2518.815571] Lustre: DEBUG MARKER: == conf-sanity test 34b: force umount with failed mds should be normal ========================================================== 16:00:46 (1679932846) [ 2519.293834] systemd-udevd[1035]: Specified user 'tss' unknown [ 2519.402476] systemd-udevd[1035]: Specified group 'tss' unknown [ 2519.436926] systemd-udevd[106665]: Using default interface naming scheme 'rhel-8.0'. [ 2520.085029] Lustre: Lustre: Build Version: 2.15.54 [ 2520.241057] LNet: Added LNI 192.168.125.30@tcp [8/256/0/180] [ 2520.244051] LNet: Accept secure, port 988 [ 2521.220594] Lustre: Echo OBD driver; http://www.lustre.org/ [ 2523.519859] Lustre: lustre-MDT0000: mounting server target with '-t lustre' deprecated, use '-t lustre_tgt' [ 2523.522627] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 2524.844745] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 2524.865287] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 2525.754585] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 2525.966090] Lustre: lustre-MDT0001: Imperative Recovery not enabled, recovery window 60-180 [ 2527.068978] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 2527.831311] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 2528.294303] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 2528.471802] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 2529.379221] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 2530.564224] Lustre: lustre-OST0000: deleting orphan objects from 0x280000bd0:67 to 0x280000bd0:97 [ 2531.614252] Lustre: Mounted lustre-client [ 2536.876732] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 2541.040427] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 2541.040670] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 2541.041121] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 2541.681919] Lustre: lustre-MDT0000-lwp-OST0000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 2541.683227] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 2543.156817] Lustre: server umount lustre-MDT0000 complete [ 2543.499713] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 2543.517720] LustreError: 107361:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679932871 with bad export cookie 9221030171953978205 [ 2543.520182] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 2543.524782] LustreError: 107361:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) Skipped 1 previous similar message [ 2546.720904] Lustre: lustre-MDT0001-mdc-ffff8b784642b000: Connection to lustre-MDT0001 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 2546.721195] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 2546.728934] Lustre: Skipped 2 previous similar messages [ 2546.729836] Lustre: lustre-MDT0001: Not available for connect from 0@lo (stopping) [ 2546.729837] Lustre: Skipped 2 previous similar messages [ 2546.730767] LustreError: Skipped 1 previous similar message [ 2548.802388] Lustre: lustre-MDT0001: Not available for connect from 0@lo (stopping) [ 2548.802582] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 2548.804672] Lustre: Skipped 1 previous similar message [ 2548.806016] LustreError: Skipped 1 previous similar message [ 2549.854995] Lustre: server umount lustre-MDT0001 complete [ 2550.103028] Lustre: setting import lustre-MDT0000_UUID INACTIVE by administrator request [ 2560.562286] systemd[1]: mnt-lustre.mount: Succeeded. [ 2560.632216] Lustre: Unmounted lustre-client [ 2560.843947] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. 
[ 2567.063473] Lustre: server umount lustre-OST0000 complete [ 2570.780313] LNet: 108661:0:(lib-ptl.c:956:lnet_clear_lazy_portal()) Active lazy portal 0 on exit [ 2570.789510] LNet: Removed LNI 192.168.125.30@tcp [ 2572.043199] systemd-udevd[1035]: Specified user 'tss' unknown [ 2572.169238] systemd-udevd[1035]: Specified group 'tss' unknown [ 2572.254521] systemd-udevd[108999]: Using default interface naming scheme 'rhel-8.0'. [ 2572.742358] systemd[1]: usr-sbin-mount.lustre.mount: Succeeded. [ 2573.468537] LNet: HW NUMA nodes: 1, HW CPU cores: 2, npartitions: 1 [ 2574.330503] Lustre: DEBUG MARKER: == conf-sanity test 34c: force umount with failed ost should be normal ========================================================== 16:01:41 (1679932901) [ 2574.721362] systemd-udevd[1035]: Specified user 'tss' unknown [ 2574.953416] systemd-udevd[1035]: Specified group 'tss' unknown [ 2575.038347] systemd-udevd[109399]: Using default interface naming scheme 'rhel-8.0'. [ 2575.738681] Lustre: Lustre: Build Version: 2.15.54 [ 2575.855716] LNet: Added LNI 192.168.125.30@tcp [8/256/0/180] [ 2575.855943] LNet: Accept secure, port 988 [ 2576.676946] Lustre: Echo OBD driver; http://www.lustre.org/ [ 2578.958381] Lustre: lustre-MDT0000: mounting server target with '-t lustre' deprecated, use '-t lustre_tgt' [ 2578.960793] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 2580.282179] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 2580.312385] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 2581.185262] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 2581.387068] Lustre: lustre-MDT0001: Imperative Recovery not enabled, recovery window 60-180 [ 2582.655760] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 2583.661826] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 2584.254769] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 2584.360620] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 2585.083867] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 2586.246497] Lustre: lustre-OST0000: deleting orphan objects from 0x280000bd0:99 to 0x280000bd0:129 [ 2591.299793] Lustre: Mounted lustre-client [ 2592.542560] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 2596.320702] LustreError: 11-0: lustre-OST0000-osc-MDT0001: operation ost_statfs to node 0@lo failed: rc = -107 [ 2596.325247] Lustre: lustre-OST0000-osc-MDT0001: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 2596.328370] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 2598.848353] Lustre: server umount lustre-OST0000 complete [ 2599.070342] Lustre: setting import lustre-MDT0000_UUID INACTIVE by administrator request [ 2609.522314] systemd[1]: mnt-lustre.mount: Succeeded. [ 2609.578779] Lustre: Unmounted lustre-client [ 2609.856179] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. 
[ 2611.440641] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 2611.446172] Lustre: Skipped 2 previous similar messages [ 2611.449349] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 2611.453290] Lustre: Skipped 3 previous similar messages [ 2612.480693] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 2612.485079] Lustre: Skipped 1 previous similar message [ 2616.113708] Lustre: server umount lustre-MDT0000 complete [ 2616.363630] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 2616.388366] LustreError: 110021:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679932944 with bad export cookie 8752510707880010219 [ 2616.396325] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 2622.708363] Lustre: server umount lustre-MDT0001 complete [ 2626.020252] LNet: 111264:0:(lib-ptl.c:956:lnet_clear_lazy_portal()) Active lazy portal 0 on exit [ 2626.029045] LNet: Removed LNI 192.168.125.30@tcp [ 2627.254173] systemd-udevd[1035]: Specified user 'tss' unknown [ 2627.286121] systemd-udevd[1035]: Specified group 'tss' unknown [ 2627.454707] systemd-udevd[111478]: Using default interface naming scheme 'rhel-8.0'. [ 2627.974324] systemd[1]: usr-sbin-mount.lustre.mount: Succeeded. [ 2628.608808] LNet: HW NUMA nodes: 1, HW CPU cores: 2, npartitions: 1 [ 2629.516102] Lustre: DEBUG MARKER: == conf-sanity test 35a: Reconnect to the last active server first ========================================================== 16:02:36 (1679932956) [ 2630.157513] systemd-udevd[1035]: Specified user 'tss' unknown [ 2630.256522] systemd-udevd[1035]: Specified group 'tss' unknown [ 2630.392519] systemd-udevd[111940]: Using default interface naming scheme 'rhel-8.0'. [ 2631.414054] Lustre: Lustre: Build Version: 2.15.54 [ 2631.582486] LNet: Added LNI 192.168.125.30@tcp [8/256/0/180] [ 2631.582769] LNet: Accept secure, port 988 [ 2632.452686] Lustre: Echo OBD driver; http://www.lustre.org/ [ 2634.658061] Lustre: lustre-MDT0000: mounting server target with '-t lustre' deprecated, use '-t lustre_tgt' [ 2634.659679] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 2635.913193] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 2635.966316] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 2636.697753] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 2636.840395] Lustre: lustre-MDT0001: Imperative Recovery not enabled, recovery window 60-180 [ 2637.861067] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 2638.894256] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 2639.517497] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. 
Opts: errors=remount-ro,no_mbcache,nodelalloc [ 2639.754071] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 2640.511728] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 2641.685436] Lustre: lustre-OST0000: deleting orphan objects from 0x280000bd0:131 to 0x280000bd0:161 [ 2646.730755] Lustre: Mounted lustre-client [ 2648.025549] Lustre: DEBUG MARKER: Set up a fake failnode for the MDS [ 2648.251729] Lustre: DEBUG MARKER: Wait for RECONNECT_INTERVAL seconds (10s) [ 2652.881232] LNetError: 120-3: Refusing connection from 192.168.125.30 for 127.0.0.2@tcp: No matching NI [ 2652.881577] LNetError: 112158:0:(socklnd_cb.c:1783:ksocknal_recv_hello()) Error -104 reading HELLO from 127.0.0.2 [ 2652.886047] LNetError: 11b-b: Connection to 127.0.0.2@tcp at host 127.0.0.2:988 was reset: is it running a compatible version of Lustre and is 127.0.0.2@tcp one of its NIDs? [ 2658.427748] Lustre: DEBUG MARKER: conf-sanity.sh test_35a 2023-03-2716h03m06s [ 2658.592473] Lustre: DEBUG MARKER: Stopping the MDT: lustre-MDT0000 [ 2658.748550] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 2661.840722] Lustre: lustre-MDT0000-lwp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 2661.841207] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 2661.841406] Lustre: Skipped 3 previous similar messages [ 2661.841802] LNetError: 120-3: Refusing connection from 192.168.125.30 for 127.0.0.2@tcp: No matching NI [ 2661.851580] LNetError: 112159:0:(socklnd_cb.c:1783:ksocknal_recv_hello()) Error -104 reading HELLO from 127.0.0.2 [ 2661.851758] LNetError: 11b-b: Connection to 127.0.0.2@tcp at host 127.0.0.2:988 was reset: is it running a compatible version of Lustre and is 127.0.0.2@tcp one of its NIDs? [ 2665.016286] Lustre: server umount lustre-MDT0000 complete [ 2665.221470] LustreError: 113547:0:(lmv_obd.c:1311:lmv_statfs()) lustre-MDT0000-mdc-ffff8b783ece8000: can't stat MDS #0: rc = -110 [ 2665.376478] Lustre: DEBUG MARKER: Restarting the MDT: lustre-MDT0000 [ 2665.825210] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 2665.836676] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 2665.836978] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 2665.850373] Lustre: Evicted from MGS (at 192.168.125.30@tcp) after server handle changed from 0x20b82bc4aa0ce3b3 to 0x20b82bc4aa0ce83d [ 2665.850530] Lustre: MGC192.168.125.30@tcp: Connection restored to 192.168.125.30@tcp (at 0@lo) [ 2665.863039] LustreError: Skipped 3 previous similar messages [ 2665.909819] LNetError: 120-3: Refusing connection from 192.168.125.30 for 127.0.0.2@tcp: No matching NI [ 2665.910885] LNetError: 112160:0:(socklnd_cb.c:1783:ksocknal_recv_hello()) Error -104 reading HELLO from 127.0.0.2 [ 2665.911045] LNetError: 11b-b: Connection to 127.0.0.2@tcp at host 127.0.0.2:988 was reset: is it running a compatible version of Lustre and is 127.0.0.2@tcp one of its NIDs? 
[ 2665.916289] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 2665.938543] Lustre: lustre-OST0000: deleting orphan objects from 0x280000bd0:131 to 0x280000bd0:193 [ 2666.205539] Lustre: DEBUG MARKER: Wait for df (113547) ... [ 2666.300627] Lustre: DEBUG MARKER: done [ 2666.595345] systemd[1]: mnt-lustre.mount: Succeeded. [ 2666.634939] Lustre: Unmounted lustre-client [ 2666.733349] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 2666.880456] LustreError: 11-0: lustre-OST0000-osc-MDT0001: operation ost_statfs to node 0@lo failed: rc = -107 [ 2666.886717] Lustre: lustre-OST0000-osc-MDT0001: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 2666.886926] Lustre: Skipped 3 previous similar messages [ 2666.889724] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 2670.960886] Lustre: lustre-OST0000-osc-MDT0000: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 2670.968812] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 2670.970044] LNetError: 120-3: Refusing connection from 192.168.125.30 for 127.0.0.2@tcp: No matching NI [ 2670.970326] LNetError: 112161:0:(socklnd_cb.c:1783:ksocknal_recv_hello()) Error -104 reading HELLO from 127.0.0.2 [ 2670.970506] LNetError: 11b-b: Connection to 127.0.0.2@tcp at host 127.0.0.2:988 was reset: is it running a compatible version of Lustre and is 127.0.0.2@tcp one of its NIDs? [ 2670.971752] Lustre: lustre-MDT0000-lwp-MDT0001: Connection restored to (at 0@lo) [ 2670.973237] LustreError: 167-0: lustre-MDT0000-osp-MDT0001: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. [ 2672.969490] Lustre: server umount lustre-OST0000 complete [ 2673.340157] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 2676.000838] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 2676.001646] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 2676.001866] Lustre: Skipped 2 previous similar messages [ 2679.570546] Lustre: server umount lustre-MDT0000 complete [ 2679.925010] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 2679.951674] LustreError: 112624:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679933008 with bad export cookie 2357682528595011645 [ 2679.960926] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 2683.470576] LNet: 114216:0:(lib-ptl.c:956:lnet_clear_lazy_portal()) Active lazy portal 0 on exit [ 2683.476160] LNet: Removed LNI 192.168.125.30@tcp [ 2684.755834] systemd-udevd[1035]: Specified user 'tss' unknown [ 2684.792801] systemd-udevd[1035]: Specified group 'tss' unknown [ 2684.937025] systemd-udevd[114563]: Using default interface naming scheme 'rhel-8.0'. [ 2685.328549] systemd[1]: usr-sbin-mount.lustre.mount: Succeeded. [ 2688.174701] LNet: HW NUMA nodes: 1, HW CPU cores: 2, npartitions: 1 [ 2689.062690] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing set_hostid [ 2689.274217] systemd-udevd[1035]: Specified user 'tss' unknown [ 2689.275256] systemd-udevd[1035]: Specified group 'tss' unknown [ 2689.345818] systemd-udevd[115305]: Using default interface naming scheme 'rhel-8.0'. 
[ 2689.873950] Lustre: Lustre: Build Version: 2.15.54 [ 2689.978948] LNet: Added LNI 192.168.125.30@tcp [8/256/0/180] [ 2689.979178] LNet: Accept secure, port 988 [ 2690.659648] Lustre: Echo OBD driver; http://www.lustre.org/ [ 2693.326719] blk_update_request: operation not supported error, dev loop0, sector 399872 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 2693.327095] blk_update_request: operation not supported error, dev loop0, sector 208 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 2693.327618] blk_update_request: operation not supported error, dev loop0, sector 160232 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 2693.328009] blk_update_request: operation not supported error, dev loop0, sector 160240 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 2693.328311] blk_update_request: operation not supported error, dev loop0, sector 160248 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 2693.328592] blk_update_request: operation not supported error, dev loop0, sector 160256 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 2693.328868] blk_update_request: operation not supported error, dev loop0, sector 160264 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 2693.329150] blk_update_request: operation not supported error, dev loop0, sector 160272 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 2693.329438] blk_update_request: operation not supported error, dev loop0, sector 160280 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 2693.329722] blk_update_request: operation not supported error, dev loop0, sector 160288 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 2694.442113] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 2694.478679] systemd[1]: tmp-mntBxBW03.mount: Succeeded. [ 2696.797107] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 2696.803982] systemd[1]: tmp-mntL9ow25.mount: Succeeded. [ 2698.293284] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 2698.315190] systemd[1]: tmp-mntGxpkUk.mount: Succeeded. [ 2700.320941] print_req_error: 8192 callbacks suppressed [ 2700.320944] blk_update_request: operation not supported error, dev loop4, sector 399872 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 2700.338456] blk_update_request: operation not supported error, dev loop4, sector 8224 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 2700.339080] blk_update_request: operation not supported error, dev loop4, sector 58264 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 2700.343732] blk_update_request: operation not supported error, dev loop4, sector 58272 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 2700.423807] LDISKFS-fs (loop4): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 2700.432487] systemd[1]: tmp-mntARWyfv.mount: Succeeded. [ 2701.112667] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 2701.124436] systemd[1]: tmp-mntu7WJKB.mount: Succeeded. [ 2701.162573] Lustre: lustre-MDT0000: mounting server target with '-t lustre' deprecated, use '-t lustre_tgt' [ 2701.170802] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 2702.287279] Lustre: Setting parameter lustre-MDT0000.mdt.identity_upcall in log lustre-MDT0000 [ 2702.295588] Lustre: ctl-lustre-MDT0000: No data found on store. Initialize space: rc = -61 [ 2702.325191] Lustre: lustre-MDT0000: new disk, initializing [ 2702.356804] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 2702.359818] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000200000400-0x0000000240000400]:0:mdt [ 2704.188089] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 2704.198094] systemd[1]: tmp-mnth9LuPv.mount: Succeeded. [ 2704.251291] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 2704.343974] Lustre: Setting parameter lustre-MDT0001.mdt.identity_upcall in log lustre-MDT0001 [ 2704.358424] Lustre: srv-lustre-MDT0001: No data found on store. Initialize space: rc = -61 [ 2704.358602] Lustre: Skipped 1 previous similar message [ 2704.395953] Lustre: lustre-MDT0001: new disk, initializing [ 2704.469134] Lustre: lustre-MDT0001: Imperative Recovery not enabled, recovery window 60-180 [ 2704.490156] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000240000400-0x0000000280000400]:1:mdt [ 2704.490567] Lustre: cli-ctl-lustre-MDT0001: Allocated super-sequence [0x0000000240000400-0x0000000280000400]:1:mdt] [ 2706.785982] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 2707.289992] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 2707.646259] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 2707.674333] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 2707.863160] Lustre: lustre-OST0000: new disk, initializing [ 2707.863880] Lustre: srv-lustre-OST0000: No data found on store. Initialize space: rc = -61 [ 2707.911934] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 2709.856505] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 2710.099030] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000280000400-0x00000002c0000400]:0:ost [ 2710.120393] Lustre: cli-lustre-OST0000-super: Allocated super-sequence [0x0000000280000400-0x00000002c0000400]:0:ost] [ 2710.175375] Lustre: lustre-OST0000-osc-MDT0000: update sequence from 0x100000000 to 0x280000401 [ 2710.870503] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 40 [ 2711.058046] Lustre: DEBUG MARKER: os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 0 sec [ 2711.994363] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 40 [ 2712.215524] Lustre: DEBUG MARKER: os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec [ 2712.338653] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. 
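The "blk_update_request: operation not supported ... (WRITE_ZEROES)" lines in the reformat sequence above are the block layer reporting that the loop devices backing the test targets do not implement the write-zeroes offload; the kernel's zeroout path typically falls back to writing zero-filled pages, so these messages are noise rather than failures. As a hedged illustration of one way such requests can originate from userspace (formatting tools zeroing ranges is a likely source here, though that is an assumption), a minimal sketch using the BLKZEROOUT ioctl against a hypothetical /dev/loop0 follows; the device path and range are assumptions:

/* Sketch only: issue a zero-out request on a block device from userspace.
 * Device path and range are hypothetical; requires root, and the range must
 * be aligned to the device's logical block size. */
#include <fcntl.h>
#include <linux/fs.h>     /* BLKZEROOUT */
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
        const char *dev = "/dev/loop0";           /* hypothetical test device */
        uint64_t range[2] = { 0, 1024 * 1024 };   /* offset, length in bytes */

        int fd = open(dev, O_WRONLY);
        if (fd < 0) {
                perror("open");
                return 1;
        }

        /* The block layer services this with a WRITE_ZEROES command when the
         * device supports it, otherwise by writing zero pages; either way the
         * range reads back as zeroes on success. */
        if (ioctl(fd, BLKZEROOUT, &range) < 0)
                perror("ioctl(BLKZEROOUT)");

        close(fd);
        return 0;
}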
[ 2715.200561] LustreError: 11-0: lustre-OST0000-osc-MDT0001: operation ost_create to node 0@lo failed: rc = -107 [ 2715.201515] Lustre: lustre-OST0000-osc-MDT0001: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 2715.208907] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 2718.621759] Lustre: server umount lustre-OST0000 complete [ 2718.935845] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 2719.600379] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 2719.604152] LustreError: Skipped 2 previous similar messages [ 2719.604233] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 2719.604320] Lustre: Skipped 1 previous similar message [ 2719.604581] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 2719.604640] Lustre: Skipped 1 previous similar message [ 2725.169150] Lustre: server umount lustre-MDT0000 complete [ 2725.295192] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 2725.301256] LustreError: Skipped 1 previous similar message [ 2725.541118] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 2725.549797] LustreError: 117055:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679933053 with bad export cookie 11659888059165466596 [ 2725.551504] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 2725.554405] LustreError: 117852:0:(osp_precreate.c:704:osp_precreate_send()) lustre-OST0000-osc-MDT0001: can't precreate: rc = -5 [ 2725.554795] LustreError: 117852:0:(osp_precreate.c:1405:osp_precreate_thread()) lustre-OST0000-osc-MDT0001: cannot precreate objects: rc = -5 [ 2726.655728] Lustre: DEBUG MARKER: == conf-sanity test 35b: Continue reconnection retries, if the active server is busy ========================================================== 16:04:14 (1679933054) [ 2726.988830] Lustre: DEBUG MARKER: SKIP: conf-sanity test_35b local MDS [ 2727.295892] Lustre: DEBUG MARKER: == conf-sanity test 36: df report consistency on OSTs with different block size ========================================================== 16:04:15 (1679933055) [ 2727.504347] Lustre: DEBUG MARKER: SKIP: conf-sanity test_36 mixed loopback and real device not working [ 2727.745362] Lustre: DEBUG MARKER: == conf-sanity test 37: verify set tunables works for symlink device ========================================================== 16:04:16 (1679933056) [ 2728.125207] systemd-udevd[1035]: Specified user 'tss' unknown [ 2728.197137] systemd-udevd[1035]: Specified group 'tss' unknown [ 2728.232453] systemd-udevd[118590]: Using default interface naming scheme 'rhel-8.0'. [ 2730.681895] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 2730.921278] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 2730.934815] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 2731.033075] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. 
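Test 36 above ("df report consistency on OSTs with different block size", skipped in this run) concerns keeping df totals consistent when the backing targets use different block sizes. A minimal sketch of the userspace side of that report, using POSIX statvfs(3) against a hypothetical mount point (the path is an assumption), shows where the block-size scaling happens:

/* Sketch only: df-style totals from statvfs(3); the path is hypothetical. */
#include <stdio.h>
#include <sys/statvfs.h>

int main(int argc, char **argv)
{
        const char *path = argc > 1 ? argv[1] : "/mnt/lustre"; /* assumed mount point */
        struct statvfs vfs;

        if (statvfs(path, &vfs) != 0) {
                perror("statvfs");
                return 1;
        }

        /* Counts are reported in units of f_frsize, so byte totals stay
         * comparable even when different targets use different block sizes. */
        unsigned long long total = (unsigned long long)vfs.f_blocks * vfs.f_frsize;
        unsigned long long avail = (unsigned long long)vfs.f_bavail * vfs.f_frsize;

        printf("%s: total %llu bytes, available %llu bytes\n", path, total, avail);
        return 0;
}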
[ 2731.054243] Lustre: Failing over lustre-MDT0000 [ 2731.055120] LustreError: 119052:0:(osp_object.c:637:osp_attr_get()) lustre-MDT0001-osp-MDT0000: osp_attr_get update error [0x200000009:0x1:0x0]: rc = -5 [ 2731.062287] LustreError: 119052:0:(lod_sub_object.c:932:lod_sub_prep_llog()) lustre-MDT0000-mdtlov: can't get id from catalogs: rc = -5 [ 2731.062470] LustreError: 119052:0:(lod_dev.c:525:lod_sub_recovery_thread()) lustre-MDT0001-osp-MDT0000: get update log duration 0, retries 0, failed: rc = -5 [ 2731.217271] Lustre: server umount lustre-MDT0000 complete [ 2731.224498] Lustre: Skipped 1 previous similar message [ 2731.841476] Lustre: DEBUG MARKER: == conf-sanity test 38: MDS recreates missing lov_objid file from OST data ========================================================== 16:04:20 (1679933060) [ 2732.450586] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 2732.658527] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 2733.385979] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 2734.811079] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 2735.496020] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 2735.954737] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 2736.119572] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 2736.119760] Lustre: Skipped 2 previous similar messages [ 2736.734957] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 2742.971281] Lustre: Mounted lustre-client [ 2744.316291] Lustre: DEBUG MARKER: copying 10 files to /mnt/lustre/d38.conf-sanity [ 2745.191715] systemd[1]: mnt-lustre.mount: Succeeded. [ 2745.315107] Lustre: Unmounted lustre-client [ 2745.592441] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 2748.001506] Lustre: lustre-MDT0000-lwp-OST0000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 2748.006062] Lustre: Skipped 1 previous similar message [ 2748.008450] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 2748.011923] Lustre: Skipped 3 previous similar messages [ 2750.661487] Lustre: server umount lustre-MDT0000 complete [ 2751.018147] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 2751.036909] LustreError: 119246:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679933079 with bad export cookie 11659888059165467639 [ 2751.037219] LustreError: 119246:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) Skipped 2 previous similar messages [ 2751.037821] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 2751.526027] Lustre: DEBUG MARKER: delete lov_objid file on MDS [ 2751.811474] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: (null) [ 2751.952174] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 2752.373164] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 2757.124168] Lustre: Evicted from MGS (at 192.168.125.30@tcp) after server handle changed from 0xa1d03e69bf6063f7 to 0xa1d03e69bf606e69 [ 2757.125406] Lustre: MGC192.168.125.30@tcp: Connection restored to 192.168.125.30@tcp (at 0@lo) [ 2757.358011] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 2757.415786] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 2757.446063] Lustre: lustre-OST0000-osc-MDT0000: update sequence from 0x100000000 to 0x280000bd0 [ 2758.111102] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 2758.180306] Lustre: lustre-MDT0001-lwp-OST0000: Connection to lustre-MDT0001 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 2758.180586] Lustre: Skipped 2 previous similar messages [ 2758.236257] Lustre: lustre-OST0000: deleting orphan objects from 0x280000400:3 to 0x280000400:33 [ 2759.026345] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 2759.942733] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 2760.110045] Lustre: lustre-MDT0001-lwp-OST0000: Connection restored to (at 0@lo) [ 2761.200116] Lustre: 115518:0:(client.c:2305:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1679933081/real 1679933081] req@0000000014221a68 x1761537444329664/t0(0) o400->lustre-MDT0001-lwp-OST0000@0@lo:12/10 lens 224/224 e 0 to 1 dl 1679933088 ref 1 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'' [ 2764.240178] Lustre: 115519:0:(client.c:2305:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1679933085/real 1679933085] req@000000002e765c60 x1761537444329856/t0(0) o400->lustre-MDT0001-lwp-OST0000@0@lo:12/10 lens 224/224 e 0 to 1 dl 1679933092 ref 1 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'' [ 2764.245150] Lustre: 115519:0:(client.c:2305:ptlrpc_expire_one_request()) Skipped 1 previous similar message [ 2764.324253] Lustre: lustre-MDT0000-lwp-OST0000: Connection restored to (at 0@lo) [ 2764.345173] Lustre: Mounted lustre-client [ 2765.301862] systemd[1]: mnt-lustre.mount: Succeeded. [ 2765.373698] Lustre: Unmounted lustre-client [ 2765.591744] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 2768.240603] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 2768.245863] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 2768.249946] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 2768.250181] Lustre: Skipped 1 previous similar message [ 2771.931293] Lustre: server umount lustre-MDT0000 complete [ 2771.931434] Lustre: Skipped 1 previous similar message [ 2772.347719] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. 
[ 2772.376346] LustreError: 119245:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679933100 with bad export cookie 11659888059165470313 [ 2772.386235] LustreError: 119245:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) Skipped 4 previous similar messages [ 2772.386546] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 2773.139137] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: (null) [ 2773.373355] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 2774.026147] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 2778.484041] Lustre: Evicted from MGS (at 192.168.125.30@tcp) after server handle changed from 0xa1d03e69bf606e69 to 0xa1d03e69bf6076ce [ 2778.488056] Lustre: MGC192.168.125.30@tcp: Connection restored to (at 0@lo) [ 2778.691431] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 2778.738951] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 2778.739129] Lustre: Skipped 1 previous similar message [ 2778.757192] Lustre: lustre-OST0000: deleting orphan objects from 0x280000bd0:3 to 0x280000bd0:33 [ 2779.697159] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 2779.760614] Lustre: lustre-MDT0001: Not available for connect from 0@lo (not set up) [ 2779.760731] Lustre: Skipped 3 previous similar messages [ 2779.761308] Lustre: lustre-MDT0001-lwp-OST0000: Connection to lustre-MDT0001 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 2779.761643] Lustre: Skipped 2 previous similar messages [ 2779.849252] Lustre: lustre-OST0000: deleting orphan objects from 0x280000400:3 to 0x280000400:65 [ 2780.851360] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 2780.887154] Lustre: 115519:0:(client.c:2305:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1679933102/real 1679933102] req@000000000fe38461 x1761537444351424/t0(0) o400->lustre-MDT0001-lwp-OST0000@0@lo:12/10 lens 224/224 e 0 to 1 dl 1679933109 ref 1 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'' [ 2781.472641] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 2781.682738] Lustre: Mounted lustre-client [ 2782.278381] systemd[1]: mnt-lustre.mount: Succeeded. [ 2782.323134] Lustre: Unmounted lustre-client [ 2782.434995] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 2784.000051] Lustre: 115519:0:(client.c:2305:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1679933105/real 1679933105] req@0000000013654b1d x1761537444351488/t0(0) o400->lustre-MDT0001-lwp-OST0000@0@lo:12/10 lens 224/224 e 0 to 1 dl 1679933112 ref 1 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'' [ 2788.673279] Lustre: server umount lustre-MDT0000 complete [ 2788.673454] Lustre: Skipped 1 previous similar message [ 2789.028805] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. 
[ 2789.047321] LustreError: 119247:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679933117 with bad export cookie 11659888059165472462 [ 2789.047508] LustreError: 119247:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) Skipped 4 previous similar messages [ 2789.048005] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 2789.921332] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: (null) [ 2790.052361] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 2790.223834] Lustre: DEBUG MARKER: files compared the same [ 2790.329390] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 2796.480051] Lustre: 121910:0:(client.c:2305:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1679933118/real 1679933118] req@00000000010e4e03 x1761537444370112/t0(0) o39->lustre-MDT0001-lwp-OST0000@0@lo:12/10 lens 224/224 e 0 to 1 dl 1679933124 ref 2 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'' [ 2796.484983] Lustre: 121910:0:(client.c:2305:ptlrpc_expire_one_request()) Skipped 1 previous similar message [ 2800.370640] LNet: 122304:0:(lib-ptl.c:956:lnet_clear_lazy_portal()) Active lazy portal 0 on exit [ 2800.371093] LNet: Removed LNI 192.168.125.30@tcp [ 2801.563790] systemd-udevd[1035]: Specified user 'tss' unknown [ 2801.598479] systemd-udevd[1035]: Specified group 'tss' unknown [ 2801.798781] systemd-udevd[122634]: Using default interface naming scheme 'rhel-8.0'. [ 2802.267461] systemd[1]: usr-sbin-mount.lustre.mount: Succeeded. [ 2802.956959] LNet: HW NUMA nodes: 1, HW CPU cores: 2, npartitions: 1 [ 2803.788029] Lustre: DEBUG MARKER: == conf-sanity test 39: leak_finder recognizes both LUSTRE and LNET malloc messages ========================================================== 16:05:31 (1679933131) [ 2804.161752] systemd-udevd[1035]: Specified user 'tss' unknown [ 2804.176674] systemd-udevd[1035]: Specified group 'tss' unknown [ 2804.269576] systemd-udevd[123098]: Using default interface naming scheme 'rhel-8.0'. [ 2804.854761] Lustre: Lustre: Build Version: 2.15.54 [ 2805.008692] LNet: Added LNI 192.168.125.30@tcp [8/256/0/180] [ 2805.009004] LNet: Accept secure, port 988 [ 2805.683155] Lustre: Echo OBD driver; http://www.lustre.org/ [ 2807.387807] Lustre: lustre-MDT0000: mounting server target with '-t lustre' deprecated, use '-t lustre_tgt' [ 2807.389457] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 2808.573888] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 2808.583692] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 2809.158892] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 2809.287691] Lustre: lustre-MDT0001: Imperative Recovery not enabled, recovery window 60-180 [ 2810.418326] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 2811.023628] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 2811.658457] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. 
Opts: errors=remount-ro,no_mbcache,nodelalloc [ 2811.861081] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 2812.900732] Lustre: lustre-OST0000: deleting orphan objects from 0x280000bd0:35 to 0x280000bd0:65 [ 2812.953554] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 2813.239508] Lustre: Mounted lustre-client [ 2814.322584] Lustre: lustre-OST0000: deleting orphan objects from 0x280000400:3 to 0x280000400:97 [ 2819.662159] systemd[1]: mnt-lustre.mount: Succeeded. [ 2819.736234] Lustre: Unmounted lustre-client [ 2819.896410] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 2822.960325] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107 [ 2822.965384] Lustre: lustre-OST0000-osc-MDT0000: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 2822.966758] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 2824.400317] LustreError: 11-0: lustre-OST0000-osc-MDT0001: operation ost_statfs to node 0@lo failed: rc = -107 [ 2824.400563] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 2824.408376] Lustre: lustre-OST0000-osc-MDT0001: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 2824.413743] Lustre: Skipped 1 previous similar message [ 2826.182725] Lustre: server umount lustre-OST0000 complete [ 2826.592555] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 2829.440546] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 2829.445253] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 2829.446763] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 2829.450203] Lustre: Skipped 1 previous similar message [ 2832.829848] Lustre: server umount lustre-MDT0000 complete [ 2833.245468] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 2833.291824] LustreError: 123663:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679933161 with bad export cookie 9579746092350797491 [ 2833.301563] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 2836.820380] LNet: 124886:0:(lib-ptl.c:956:lnet_clear_lazy_portal()) Active lazy portal 0 on exit [ 2836.820880] LNet: Removed LNI 192.168.125.30@tcp [ 2837.950300] systemd-udevd[1035]: Specified user 'tss' unknown [ 2837.951268] systemd-udevd[1035]: Specified group 'tss' unknown [ 2838.162747] systemd-udevd[125190]: Using default interface naming scheme 'rhel-8.0'. [ 2838.475327] systemd[1]: usr-sbin-mount.lustre.mount: Succeeded. [ 2839.273618] LNet: HW NUMA nodes: 1, HW CPU cores: 2, npartitions: 1 [ 2840.129086] Lustre: DEBUG MARKER: == conf-sanity test 40: race during service thread startup ========================================================== 16:06:07 (1679933167) [ 2840.513331] systemd-udevd[1035]: Specified user 'tss' unknown [ 2840.530482] systemd-udevd[1035]: Specified group 'tss' unknown [ 2840.632712] systemd-udevd[125501]: Using default interface naming scheme 'rhel-8.0'. 
[ 2841.468522] Lustre: Lustre: Build Version: 2.15.54 [ 2841.568512] LNet: Added LNI 192.168.125.30@tcp [8/256/0/180] [ 2841.568791] LNet: Accept secure, port 988 [ 2842.402187] Lustre: Echo OBD driver; http://www.lustre.org/ [ 2844.247035] Lustre: lustre-OST0000: mounting server target with '-t lustre' deprecated, use '-t lustre_tgt' [ 2844.248583] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 2862.000218] LustreError: 126240:0:(mgc_request.c:252:do_config_log_add()) MGC192.168.125.30@tcp: failed processing log, type 1: rc = -5 [ 2893.200088] LustreError: 126240:0:(mgc_request.c:252:do_config_log_add()) MGC192.168.125.30@tcp: failed processing log, type 4: rc = -110 [ 2923.360085] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 2924.703785] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 2925.414461] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 2925.451741] Lustre: *** cfs_fail_loc=706, val=0*** [ 2944.405042] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 2944.446441] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 2944.463924] Lustre: lustre-OST0000: deleting orphan objects from 0x280000bd0:35 to 0x280000bd0:97 [ 2945.439810] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 2945.623897] Lustre: lustre-MDT0001: Imperative Recovery not enabled, recovery window 60-180 [ 2945.665842] Lustre: lustre-OST0000: deleting orphan objects from 0x280000400:3 to 0x280000400:129 [ 2946.934179] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 2947.922992] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 2948.145118] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 2949.440389] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107 [ 2949.446491] Lustre: lustre-OST0000-osc-MDT0000: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 2949.450225] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 2950.644017] LustreError: 11-0: lustre-OST0000-osc-MDT0001: operation ost_statfs to node 0@lo failed: rc = -107 [ 2950.644470] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 2950.651023] Lustre: lustre-OST0000-osc-MDT0001: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 2950.657518] Lustre: Skipped 1 previous similar message [ 2954.401498] Lustre: server umount lustre-OST0000 complete [ 2954.752471] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. 
[ 2955.680982] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 2955.686192] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 2955.689434] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 2955.693138] Lustre: Skipped 1 previous similar message [ 2960.720777] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 2960.725346] Lustre: Skipped 1 previous similar message [ 2960.989038] Lustre: server umount lustre-MDT0000 complete [ 2961.334616] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 2961.402485] LustreError: 126247:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679933289 with bad export cookie 5878816370797088520 [ 2961.411515] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 2965.051324] LNet: 127400:0:(lib-ptl.c:956:lnet_clear_lazy_portal()) Active lazy portal 0 on exit [ 2965.062202] LNet: Removed LNI 192.168.125.30@tcp [ 2966.404717] systemd-udevd[1035]: Specified user 'tss' unknown [ 2966.469089] systemd-udevd[1035]: Specified group 'tss' unknown [ 2966.518355] systemd-udevd[127582]: Using default interface naming scheme 'rhel-8.0'. [ 2967.343095] systemd[1]: usr-sbin-mount.lustre.mount: Succeeded. [ 2967.969055] LNet: HW NUMA nodes: 1, HW CPU cores: 2, npartitions: 1 [ 2968.778827] Lustre: DEBUG MARKER: == conf-sanity test 41a: mount mds with --nosvc and --nomgs ========================================================== 16:08:16 (1679933296) [ 2969.090876] systemd-udevd[1035]: Specified user 'tss' unknown [ 2969.119621] systemd-udevd[1035]: Specified group 'tss' unknown [ 2969.210998] systemd-udevd[128156]: Using default interface naming scheme 'rhel-8.0'. [ 2969.959344] Lustre: Lustre: Build Version: 2.15.54 [ 2970.131361] LNet: Added LNI 192.168.125.30@tcp [8/256/0/180] [ 2970.131676] LNet: Accept secure, port 988 [ 2971.114032] Lustre: Echo OBD driver; http://www.lustre.org/ [ 2973.022473] Lustre: lustre-MDT0000: mounting server target with '-t lustre' deprecated, use '-t lustre_tgt' [ 2973.024089] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 2974.960700] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 2975.142956] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 2975.198435] Lustre: lustre-MDT0001: Imperative Recovery not enabled, recovery window 60-180 [ 2975.924328] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 2976.084838] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 2976.098990] LustreError: Skipped 1 previous similar message [ 2976.104470] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 2977.190214] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. 
[ 2977.191004] LustreError: Skipped 2 previous similar messages [ 2977.408091] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 2977.453237] Lustre: lustre-OST0000: deleting orphan objects from 0x280000bd0:35 to 0x280000bd0:129 [ 2977.876241] Lustre: Mounted lustre-client [ 2979.934754] Lustre: lustre-OST0000: deleting orphan objects from 0x280000400:3 to 0x280000400:161 [ 2985.267024] systemd[1]: mnt-lustre.mount: Succeeded. [ 2985.323667] Lustre: Unmounted lustre-client [ 2985.447802] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 2987.440823] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107 [ 2987.446033] Lustre: lustre-OST0000-osc-MDT0000: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 2987.447232] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 2990.001777] Lustre: lustre-OST0000-osc-MDT0001: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 2990.002089] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 2990.017963] Lustre: Skipped 1 previous similar message [ 2991.670652] Lustre: server umount lustre-OST0000 complete [ 2995.040377] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 2995.041195] Lustre: lustre-MDT0000-lwp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 2995.041942] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 2995.042634] Lustre: Skipped 1 previous similar message [ 2998.191576] Lustre: server umount lustre-MDT0000 complete [ 2998.483717] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 2998.970988] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 2999.989926] Lustre: DEBUG MARKER: == conf-sanity test 41b: mount mds with --nosvc and --nomgs on first mount ========================================================== 16:08:48 (1679933328) [ 3002.585316] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing set_hostid [ 3002.906007] systemd-udevd[1035]: Specified user 'tss' unknown [ 3003.006604] systemd-udevd[1035]: Specified group 'tss' unknown [ 3003.065658] systemd-udevd[130321]: Using default interface naming scheme 'rhel-8.0'. 
[ 3006.153898] blk_update_request: operation not supported error, dev loop0, sector 399872 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3006.154378] blk_update_request: operation not supported error, dev loop0, sector 208 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3006.154959] blk_update_request: operation not supported error, dev loop0, sector 160232 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3006.155419] blk_update_request: operation not supported error, dev loop0, sector 160240 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3006.155791] blk_update_request: operation not supported error, dev loop0, sector 160248 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3006.156144] blk_update_request: operation not supported error, dev loop0, sector 160256 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3006.156497] blk_update_request: operation not supported error, dev loop0, sector 160264 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3006.156858] blk_update_request: operation not supported error, dev loop0, sector 160272 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3006.157223] blk_update_request: operation not supported error, dev loop0, sector 160280 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3006.157553] blk_update_request: operation not supported error, dev loop0, sector 160288 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3007.281007] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 3007.296887] systemd[1]: tmp-mnt3fXS96.mount: Succeeded. [ 3009.438197] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 3010.419136] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 3012.130904] print_req_error: 8192 callbacks suppressed [ 3012.130906] blk_update_request: operation not supported error, dev loop4, sector 399872 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3012.139664] blk_update_request: operation not supported error, dev loop4, sector 8224 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3012.140311] blk_update_request: operation not supported error, dev loop4, sector 58264 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3012.144962] blk_update_request: operation not supported error, dev loop4, sector 58272 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3012.257291] LDISKFS-fs (loop4): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 3012.263564] systemd[1]: tmp-mnt0Qchaz.mount: Succeeded. [ 3012.937405] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 3012.943616] systemd[1]: tmp-mntfP0OOE.mount: Succeeded. [ 3012.962832] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 3014.727175] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 3014.769816] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 3014.927618] Lustre: Setting parameter lustre-MDT0001.mdt.identity_upcall in log lustre-MDT0001 [ 3014.937270] Lustre: srv-lustre-MDT0001: No data found on store. 
Initialize space: rc = -61 [ 3014.997445] Lustre: lustre-MDT0001: new disk, initializing [ 3015.061494] Lustre: lustre-MDT0001: Imperative Recovery not enabled, recovery window 60-180 [ 3015.080120] Lustre: srv-lustre-MDT0001: Waiting to contact MDT0000 to allocate super-sequence: rc = -115 [ 3015.080328] LustreError: 132057:0:(lod_dev.c:525:lod_sub_recovery_thread()) lustre-MDT0001-osd: get update log duration 0, retries 0, failed: rc = -115 [ 3016.949025] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 3016.955649] systemd[1]: tmp-mnt6XPJ0N.mount: Succeeded. [ 3016.990541] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 3017.124974] Lustre: lustre-OST0000: new disk, initializing [ 3017.125498] Lustre: srv-lustre-OST0000: No data found on store. Initialize space: rc = -61 [ 3019.598819] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 3020.124781] Lustre: Setting parameter lustre-MDT0000.mdt.identity_upcall in log lustre-MDT0000 [ 3020.137313] Lustre: ctl-lustre-MDT0000: No data found on store. Initialize space: rc = -61 [ 3020.157970] Lustre: lustre-MDT0000: new disk, initializing [ 3020.201167] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 3020.201954] Lustre: Skipped 1 previous similar message [ 3020.208070] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000200000400-0x0000000240000400]:0:mdt [ 3021.120447] Lustre: 132474:0:(client.c:1485:after_reply()) @@@ resending request on EINPROGRESS req@000000000ec431f8 x1761537737903808/t0(0) o700->lustre-OST0000-osc-MDT0001@0@lo:31/4 lens 264/248 e 0 to 0 dl 1679933361 ref 2 fl Rpc:RQU/2/0 rc 0/-115 job:'' [ 3023.360363] Lustre: srv-lustre-OST0000: Waiting to contact MDT0000 to allocate super-sequence: rc = -115 [ 3026.241852] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000240000400-0x0000000280000400]:0:ost [ 3026.255586] Lustre: cli-lustre-OST0000-super: Allocated super-sequence [0x0000000240000400-0x0000000280000400]:0:ost] [ 3026.402004] Lustre: lustre-OST0000-osc-MDT0000: update sequence from 0x100000000 to 0x240000401 [ 3030.816883] Lustre: Mounted lustre-client [ 3036.239562] Lustre: setting import lustre-MDT0000_UUID INACTIVE by administrator request [ 3036.241196] LustreError: 132651:0:(file.c:242:ll_close_inode_openhandle()) lustre-clilmv-ffff8b7836111000: inode [0x200000402:0x1:0x0] mdc close failed: rc = -108 [ 3036.260124] Lustre: 128328:0:(llite_lib.c:3707:ll_dirty_page_discard_warn()) lustre: dirty page discard: 192.168.125.30@tcp:/lustre/fid: [0x200000402:0x1:0x0]/ may get corrupted (rc -108) [ 3046.642835] systemd[1]: mnt-lustre.mount: Succeeded. [ 3046.716590] Lustre: Unmounted lustre-client [ 3046.833452] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. 
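The "dirty page discard ... may get corrupted (rc -108)" warning above is Lustre reporting that dirtied client cache pages had to be dropped when the import was deactivated, so unflushed writes to that file are lost. As a small, generic illustration of why applications flush and check for errors before such a teardown, here is a sketch using fsync(2) on a hypothetical path (this is ordinary POSIX I/O, not Lustre-specific code):

/* Sketch only: flush buffered writes and surface I/O errors before a mount
 * is torn down. The path is hypothetical. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
        const char *path = "/mnt/lustre/important.dat"; /* hypothetical file */
        const char buf[] = "payload\n";

        int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) {
                perror("open");
                return 1;
        }

        /* write() usually only dirties pages in the client cache; success here
         * does not mean the data has reached the servers. */
        if (write(fd, buf, strlen(buf)) != (ssize_t)strlen(buf))
                perror("write");

        /* fsync() forces writeback and is where cached-write errors surface;
         * skipping it is how "dirty page discard" losses go unnoticed. */
        if (fsync(fd) != 0)
                perror("fsync");

        if (close(fd) != 0)
                perror("close");

        return 0;
}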
[ 3050.961428] Lustre: lustre-OST0000-osc-MDT0000: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 3050.966337] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 3050.966502] Lustre: Skipped 1 previous similar message [ 3053.040387] Lustre: server umount lustre-OST0000 complete [ 3053.040550] Lustre: Skipped 2 previous similar messages [ 3056.001045] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 3056.006331] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 3056.006542] Lustre: Skipped 1 previous similar message [ 3056.011573] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 3056.011826] Lustre: Skipped 1 previous similar message [ 3059.543944] Lustre: server umount lustre-MDT0000 complete [ 3060.090618] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 3066.382858] Lustre: server umount lustre-MDT0001 complete [ 3066.685959] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 3067.713176] Lustre: DEBUG MARKER: == conf-sanity test 41c: concurrent mounts of MDT/OST should all fail but one ========================================================== 16:09:55 (1679933395) [ 3071.561319] LNet: 133412:0:(lib-ptl.c:956:lnet_clear_lazy_portal()) Active lazy portal 0 on exit [ 3071.561707] LNet: Removed LNI 192.168.125.30@tcp [ 3072.748780] systemd-udevd[1035]: Specified user 'tss' unknown [ 3072.819091] systemd-udevd[1035]: Specified group 'tss' unknown [ 3072.854308] systemd-udevd[133594]: Using default interface naming scheme 'rhel-8.0'. [ 3073.134284] systemd[1]: usr-sbin-mount.lustre.mount: Succeeded. [ 3073.985268] systemd-udevd[1035]: Specified user 'tss' unknown [ 3073.985515] systemd-udevd[1035]: Specified group 'tss' unknown [ 3074.135673] systemd-udevd[134098]: Using default interface naming scheme 'rhel-8.0'. [ 3074.386284] LNet: HW NUMA nodes: 1, HW CPU cores: 2, npartitions: 2 [ 3075.816157] Lustre: Lustre: Build Version: 2.15.54 [ 3075.987809] LNet: Added LNI 192.168.125.30@tcp [8/256/0/180] [ 3075.988085] LNet: Accept secure, port 988 [ 3076.985344] Lustre: Echo OBD driver; http://www.lustre.org/ [ 3079.315035] Lustre: lustre-MDT0000: mounting server target with '-t lustre' deprecated, use '-t lustre_tgt' [ 3079.320215] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 3080.681713] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 3080.706321] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 3081.214709] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 3081.331147] Lustre: lustre-MDT0001: Imperative Recovery not enabled, recovery window 60-180 [ 3082.075340] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 3082.789505] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 3083.143452] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. 
Opts: errors=remount-ro,no_mbcache,nodelalloc [ 3083.322813] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 3083.985534] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 3086.253830] Lustre: lustre-OST0000: deleting orphan objects from 0x240000401:3 to 0x240000401:33 [ 3091.292930] Lustre: Mounted lustre-client [ 3092.783400] systemd[1]: mnt-lustre.mount: Succeeded. [ 3092.834139] Lustre: Unmounted lustre-client [ 3092.993420] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 3096.320548] LustreError: 11-0: lustre-OST0000-osc-MDT0001: operation ost_statfs to node 0@lo failed: rc = -107 [ 3096.326419] Lustre: lustre-OST0000-osc-MDT0001: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 3096.327015] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 3099.289489] Lustre: server umount lustre-OST0000 complete [ 3099.548998] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 3101.360753] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 3101.361901] Lustre: Skipped 1 previous similar message [ 3101.362380] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 3101.362451] Lustre: Skipped 1 previous similar message [ 3105.788666] Lustre: server umount lustre-MDT0000 complete [ 3106.122324] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 3106.155846] LustreError: 134685:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679933434 with bad export cookie 14822233461007628548 [ 3106.156478] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 3106.160991] LustreError: 134685:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) Skipped 1 previous similar message [ 3109.440782] LNet: 135925:0:(lib-ptl.c:956:lnet_clear_lazy_portal()) Active lazy portal 0 on exit [ 3109.441250] LNet: Removed LNI 192.168.125.30@tcp [ 3110.689411] systemd-udevd[1035]: Specified user 'tss' unknown [ 3110.696051] systemd-udevd[1035]: Specified group 'tss' unknown [ 3110.805591] systemd-udevd[136157]: Using default interface naming scheme 'rhel-8.0'. [ 3111.338841] systemd[1]: usr-sbin-mount.lustre.mount: Succeeded. [ 3111.430476] systemd-udevd[1035]: Specified user 'tss' unknown [ 3111.443286] systemd-udevd[1035]: Specified group 'tss' unknown [ 3111.632587] systemd-udevd[136553]: Using default interface naming scheme 'rhel-8.0'. 
[ 3112.061585] LNet: HW NUMA nodes: 1, HW CPU cores: 2, npartitions: 2 [ 3113.220575] Lustre: Lustre: Build Version: 2.15.54 [ 3113.347109] LNet: Added LNI 192.168.125.30@tcp [8/256/0/180] [ 3113.352989] LNet: Accept secure, port 988 [ 3114.002186] Lustre: Echo OBD driver; http://www.lustre.org/ [ 3116.009739] Lustre: lustre-MDT0000: mounting server target with '-t lustre' deprecated, use '-t lustre_tgt' [ 3116.009957] LustreError: 137259:0:(libcfs_fail.h:169:cfs_race()) cfs_race id 716 sleeping [ 3116.011318] LustreError: 137262:0:(libcfs_fail.h:180:cfs_race()) cfs_fail_race id 716 waking [ 3116.014550] LustreError: 137259:0:(libcfs_fail.h:178:cfs_race()) cfs_fail_race id 716 awake: rc=499 [ 3116.014866] LustreError: 137259:0:(tgt_mount.c:2048:server_fill_super()) Unable to start osd on /dev/mapper/mds1_flakey: -114 [ 3116.015114] LustreError: 137259:0:(super25.c:188:lustre_fill_super()) llite: Unable to mount : rc = -114 [ 3116.017265] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 3117.262760] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 3117.273037] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 3117.667060] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 3117.807692] Lustre: lustre-MDT0001: Imperative Recovery not enabled, recovery window 60-180 [ 3118.820024] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing lsmod [ 3118.983660] LustreError: 137617:0:(libcfs_fail.h:169:cfs_race()) cfs_race id 716 sleeping [ 3118.999718] LustreError: 137621:0:(libcfs_fail.h:180:cfs_race()) cfs_fail_race id 716 waking [ 3119.001060] LustreError: 137617:0:(libcfs_fail.h:178:cfs_race()) cfs_fail_race id 716 awake: rc=498 [ 3119.001217] LustreError: 137617:0:(tgt_mount.c:2048:server_fill_super()) Unable to start osd on /dev/mapper/ost1_flakey: -114 [ 3119.001393] LustreError: 137617:0:(super25.c:188:lustre_fill_super()) llite: Unable to mount : rc = -114 [ 3119.005515] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 3119.151785] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 3119.308601] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 3122.800365] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 3122.806894] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 3122.813815] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 3123.841217] Lustre: lustre-MDT0000-lwp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 3123.841805] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 3123.849695] Lustre: Skipped 2 previous similar messages [ 3128.881137] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 3133.920057] Lustre: lustre-MDT0000 is waiting for obd_unlinked_exports more than 8 seconds. The obd refcount = 2. Is it stuck? 
[ 3133.921610] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 3133.936057] Lustre: Skipped 2 previous similar messages [ 3133.984468] Lustre: server umount lustre-MDT0000 complete [ 3134.281479] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 3134.310524] LustreError: 137272:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679933462 with bad export cookie 7849915764172077775 [ 3134.312331] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 3134.313642] LustreError: 137272:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) Skipped 4 previous similar messages [ 3134.527910] Lustre: server umount lustre-MDT0001 complete [ 3134.751551] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 3140.880057] Lustre: 137783:0:(client.c:2305:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1679933463/real 1679933463] req@000000002046385c x1761537887837952/t0(0) o39->lustre-MDT0001-lwp-OST0000@0@lo:12/10 lens 224/224 e 0 to 1 dl 1679933469 ref 2 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'' [ 3141.042271] Lustre: server umount lustre-OST0000 complete [ 3141.685378] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 3141.989784] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 3142.013705] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 3142.748932] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 3143.892066] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 3144.770716] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 3145.306389] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 3146.466716] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 3147.687714] Lustre: lustre-OST0000: deleting orphan objects from 0x240000401:3 to 0x240000401:65 [ 3152.733837] Lustre: Mounted lustre-client [ 3153.215232] systemd[1]: mnt-lustre.mount: Succeeded. [ 3153.280830] Lustre: Unmounted lustre-client [ 3153.356418] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 3157.761659] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107 [ 3157.762171] Lustre: lustre-OST0000-osc-MDT0001: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 3157.767373] LustreError: Skipped 1 previous similar message [ 3157.768053] Lustre: Skipped 2 previous similar messages [ 3157.768603] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 3157.772839] Lustre: Skipped 3 previous similar messages [ 3167.840048] Lustre: lustre-OST0000 is waiting for obd_unlinked_exports more than 8 seconds. The obd refcount = 2. Is it stuck? 
[ 3167.852626] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 3167.852777] Lustre: Skipped 2 previous similar messages [ 3168.013916] Lustre: server umount lustre-OST0000 complete [ 3168.289948] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 3172.880884] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 3174.550686] Lustre: server umount lustre-MDT0000 complete [ 3174.835077] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 3174.866081] LustreError: 137897:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679933503 with bad export cookie 7849915764172078433 [ 3174.866870] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 3174.867628] LustreError: 137897:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) Skipped 2 previous similar messages [ 3178.491468] LNet: 139163:0:(lib-ptl.c:956:lnet_clear_lazy_portal()) Active lazy portal 0 on exit [ 3178.497992] LNet: Removed LNI 192.168.125.30@tcp [ 3179.590516] systemd-udevd[1035]: Specified user 'tss' unknown [ 3179.697136] systemd-udevd[1035]: Specified group 'tss' unknown [ 3179.813550] systemd-udevd[139311]: Using default interface naming scheme 'rhel-8.0'. [ 3180.252578] systemd[1]: usr-sbin-mount.lustre.mount: Succeeded. [ 3180.948413] LNet: HW NUMA nodes: 1, HW CPU cores: 2, npartitions: 1 [ 3181.796368] Lustre: DEBUG MARKER: == conf-sanity test 42: allow client/server mount/unmount with invalid config param ========================================================== 16:11:49 (1679933509) [ 3182.135675] systemd-udevd[1035]: Specified user 'tss' unknown [ 3182.226881] systemd-udevd[1035]: Specified group 'tss' unknown [ 3182.227404] systemd-udevd[139945]: Using default interface naming scheme 'rhel-8.0'. [ 3182.707525] Lustre: Lustre: Build Version: 2.15.54 [ 3182.816313] LNet: Added LNI 192.168.125.30@tcp [8/256/0/180] [ 3182.816618] LNet: Accept secure, port 988 [ 3183.576109] Lustre: Echo OBD driver; http://www.lustre.org/ [ 3185.335719] Lustre: lustre-MDT0000: mounting server target with '-t lustre' deprecated, use '-t lustre_tgt' [ 3185.342684] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 3186.648281] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 3186.667194] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 3187.437350] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 3187.619850] Lustre: lustre-MDT0001: Imperative Recovery not enabled, recovery window 60-180 [ 3188.502566] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 3189.118890] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 3189.465639] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. 
Opts: errors=remount-ro,no_mbcache,nodelalloc [ 3189.573695] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 3190.263167] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 3192.405529] Lustre: lustre-OST0000: deleting orphan objects from 0x240000401:67 to 0x240000401:97 [ 3197.451895] Lustre: Mounted lustre-client [ 3198.843822] Lustre: Setting parameter lustre-client.llite.some_wrong_param in log lustre-client [ 3199.110480] systemd[1]: mnt-lustre.mount: Succeeded. [ 3199.171604] Lustre: Unmounted lustre-client [ 3199.306073] systemd-udevd[141372]: Process '/mnt/build/lustre/tests/../utils/lctl set_param 'llite.lustre-ffff8b7867274000.some_wrong_param=10'' failed with exit code 2. [ 3199.379116] Lustre: Mounted lustre-client [ 3199.504349] Lustre: Modifying parameter lustre-client.llite.some_wrong_param in log lustre-client [ 3199.733152] systemd[1]: mnt-lustre.mount: Succeeded. [ 3199.801569] Lustre: Unmounted lustre-client [ 3200.102422] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 3202.480558] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107 [ 3202.486080] Lustre: lustre-OST0000-osc-MDT0000: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 3202.489103] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 3204.401766] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 3204.406041] Lustre: Skipped 2 previous similar messages [ 3206.156381] Lustre: server umount lustre-OST0000 complete [ 3206.486346] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 3207.760462] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 3207.767135] LustreError: Skipped 1 previous similar message [ 3207.767247] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 3207.767544] Lustre: Skipped 1 previous similar message [ 3207.774193] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 3209.440587] Lustre: lustre-MDT0000-lwp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 3212.758725] Lustre: server umount lustre-MDT0000 complete [ 3213.049331] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 3213.072745] LustreError: 140524:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679933541 with bad export cookie 15089848320137823495 [ 3213.073441] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 3216.580412] LNet: 141864:0:(lib-ptl.c:956:lnet_clear_lazy_portal()) Active lazy portal 0 on exit [ 3216.580872] LNet: Removed LNI 192.168.125.30@tcp [ 3217.956578] systemd-udevd[1035]: Specified user 'tss' unknown [ 3217.970044] systemd-udevd[1035]: Specified group 'tss' unknown [ 3218.165690] systemd-udevd[142208]: Using default interface naming scheme 'rhel-8.0'. [ 3218.491100] systemd[1]: usr-sbin-mount.lustre.mount: Succeeded. [ 3219.071945] systemd-udevd[1035]: Specified user 'tss' unknown [ 3219.089940] systemd-udevd[1035]: Specified group 'tss' unknown [ 3219.214550] systemd-udevd[142557]: Using default interface naming scheme 'rhel-8.0'. 
[ 3219.396533] LNet: HW NUMA nodes: 1, HW CPU cores: 2, npartitions: 2 [ 3220.784263] Lustre: Lustre: Build Version: 2.15.54 [ 3220.922296] LNet: Added LNI 192.168.125.30@tcp [8/256/0/180] [ 3220.922654] LNet: Accept secure, port 988 [ 3221.774282] Lustre: Echo OBD driver; http://www.lustre.org/ [ 3223.649571] Lustre: lustre-MDT0000: mounting server target with '-t lustre' deprecated, use '-t lustre_tgt' [ 3223.651517] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 3224.948120] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 3224.960097] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 3225.491522] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 3225.589819] Lustre: lustre-MDT0001: Imperative Recovery not enabled, recovery window 60-180 [ 3226.237226] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 3226.782609] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 3227.303721] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 3227.638203] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 3228.855079] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 3229.017444] systemd-udevd[143893]: Process '/mnt/build/lustre/tests/../utils/lctl set_param 'llite.lustre-ffff8b783fc4a000.some_wrong_param=20'' failed with exit code 2. [ 3230.088736] Lustre: lustre-OST0000: deleting orphan objects from 0x240000401:99 to 0x240000401:129 [ 3235.129414] Lustre: Mounted lustre-client [ 3236.616589] systemd[1]: mnt-lustre.mount: Succeeded. [ 3236.680818] Lustre: Unmounted lustre-client [ 3236.821501] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 3240.160387] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107 [ 3240.163989] Lustre: lustre-OST0000-osc-MDT0001: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 3240.165249] LustreError: Skipped 1 previous similar message [ 3240.165947] Lustre: Skipped 1 previous similar message [ 3240.166284] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 3240.171730] Lustre: Skipped 1 previous similar message [ 3243.185510] Lustre: server umount lustre-OST0000 complete [ 3243.496353] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 3245.200621] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 3245.210131] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 3245.211295] Lustre: Skipped 1 previous similar message [ 3249.726757] Lustre: server umount lustre-MDT0000 complete [ 3250.124493] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. 
[ 3250.174468] LustreError: 143136:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679933578 with bad export cookie 3982724424026718663 [ 3250.174875] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 3253.900717] LNet: 144402:0:(lib-ptl.c:956:lnet_clear_lazy_portal()) Active lazy portal 0 on exit [ 3253.905706] LNet: Removed LNI 192.168.125.30@tcp [ 3255.054431] systemd-udevd[1035]: Specified user 'tss' unknown [ 3255.123432] systemd-udevd[1035]: Specified group 'tss' unknown [ 3255.243430] systemd-udevd[144573]: Using default interface naming scheme 'rhel-8.0'. [ 3255.577968] systemd[1]: usr-sbin-mount.lustre.mount: Succeeded. [ 3256.117183] LNet: HW NUMA nodes: 1, HW CPU cores: 2, npartitions: 1 [ 3256.992729] Lustre: DEBUG MARKER: == conf-sanity test 43a: check root_squash and nosquash_nids ========================================================== 16:13:04 (1679933584) [ 3257.119723] Lustre: DEBUG MARKER: SKIP: conf-sanity test_43a missing user with uid=501 gid=501 [ 3257.249142] Lustre: DEBUG MARKER: == conf-sanity test 43b: parse nosquash_nids with commas in expr_list ========================================================== 16:13:05 (1679933585) [ 3257.371389] Lustre: DEBUG MARKER: SKIP: conf-sanity test_43b mixed loopback and real device not working [ 3257.513912] Lustre: DEBUG MARKER: == conf-sanity test 44: mounted client proc entry exists ========================================================== 16:13:05 (1679933585) [ 3257.788978] systemd-udevd[1035]: Specified user 'tss' unknown [ 3257.789582] systemd-udevd[1035]: Specified group 'tss' unknown [ 3257.911079] systemd-udevd[145227]: Using default interface naming scheme 'rhel-8.0'. [ 3258.590062] Lustre: Lustre: Build Version: 2.15.54 [ 3258.742490] LNet: Added LNI 192.168.125.30@tcp [8/256/0/180] [ 3258.742691] LNet: Accept secure, port 988 [ 3259.565774] Lustre: Echo OBD driver; http://www.lustre.org/ [ 3261.678801] Lustre: lustre-MDT0000: mounting server target with '-t lustre' deprecated, use '-t lustre_tgt' [ 3261.694793] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 3262.947516] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 3262.970594] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 3263.924953] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 3264.107919] Lustre: lustre-MDT0001: Imperative Recovery not enabled, recovery window 60-180 [ 3265.349974] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 3266.313103] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 3266.936015] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. 
Opts: errors=remount-ro,no_mbcache,nodelalloc [ 3267.171686] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 3268.141824] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 3268.163622] Lustre: lustre-OST0000: deleting orphan objects from 0x240000401:131 to 0x240000401:161 [ 3268.325830] Lustre: Mounted lustre-client [ 3268.336637] systemd-udevd[146635]: Process '/mnt/build/lustre/tests/../utils/lctl set_param 'llite.lustre-ffff8b782297d000.some_wrong_param=20'' failed with exit code 2. [ 3275.034797] systemd[1]: mnt-lustre.mount: Succeeded. [ 3275.086765] Lustre: Unmounted lustre-client [ 3275.188818] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 3278.240782] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107 [ 3278.245473] Lustre: lustre-OST0000-osc-MDT0000: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 3278.246422] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 3279.441136] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 3279.441651] LustreError: 11-0: lustre-OST0000-osc-MDT0001: operation ost_statfs to node 0@lo failed: rc = -107 [ 3279.451593] Lustre: lustre-OST0000-osc-MDT0001: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 3281.461958] Lustre: server umount lustre-OST0000 complete [ 3281.774654] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 3284.240368] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 3284.248661] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 3284.250879] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 3284.251006] Lustre: Skipped 1 previous similar message [ 3288.061874] Lustre: server umount lustre-MDT0000 complete [ 3288.398282] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 3288.444818] LustreError: 145885:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679933616 with bad export cookie 13088481535416955344 [ 3288.445210] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 3291.760557] LNet: 147151:0:(lib-ptl.c:956:lnet_clear_lazy_portal()) Active lazy portal 0 on exit [ 3291.766194] LNet: Removed LNI 192.168.125.30@tcp [ 3292.942231] systemd-udevd[1035]: Specified user 'tss' unknown [ 3293.123568] systemd-udevd[1035]: Specified group 'tss' unknown [ 3293.178831] systemd-udevd[147495]: Using default interface naming scheme 'rhel-8.0'. [ 3293.559480] systemd[1]: usr-sbin-mount.lustre.mount: Succeeded. 
[ 3294.051505] LNet: HW NUMA nodes: 1, HW CPU cores: 2, npartitions: 1 [ 3294.919647] Lustre: DEBUG MARKER: SKIP: conf-sanity test_45 skipping SLOW test 45 [ 3295.177703] Lustre: DEBUG MARKER: == conf-sanity test 46a: handle ost additional - wide striped file ========================================================== 16:13:43 (1679933623) [ 3296.773623] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing set_hostid [ 3297.000200] systemd-udevd[1035]: Specified user 'tss' unknown [ 3297.085404] systemd-udevd[1035]: Specified group 'tss' unknown [ 3297.159198] systemd-udevd[148233]: Using default interface naming scheme 'rhel-8.0'. [ 3297.852480] Lustre: Lustre: Build Version: 2.15.54 [ 3297.969151] LNet: Added LNI 192.168.125.30@tcp [8/256/0/180] [ 3297.969401] LNet: Accept secure, port 988 [ 3298.517906] Lustre: Echo OBD driver; http://www.lustre.org/ [ 3301.849512] blk_update_request: operation not supported error, dev loop0, sector 399872 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3301.849908] blk_update_request: operation not supported error, dev loop0, sector 208 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3301.859204] blk_update_request: operation not supported error, dev loop0, sector 160232 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3301.859690] blk_update_request: operation not supported error, dev loop0, sector 160240 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3301.859983] blk_update_request: operation not supported error, dev loop0, sector 160248 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3301.863971] blk_update_request: operation not supported error, dev loop0, sector 160256 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3301.864270] blk_update_request: operation not supported error, dev loop0, sector 160264 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3301.864552] blk_update_request: operation not supported error, dev loop0, sector 160272 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3301.864832] blk_update_request: operation not supported error, dev loop0, sector 160280 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3301.865120] blk_update_request: operation not supported error, dev loop0, sector 160288 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3303.101092] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 3303.130459] systemd[1]: tmp-mntNHiKXL.mount: Succeeded. [ 3305.426857] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 3305.447655] systemd[1]: tmp-mntIDiCqP.mount: Succeeded. [ 3306.725418] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 3306.731613] systemd[1]: tmp-mntodonj5.mount: Succeeded. 
[ 3308.170399] print_req_error: 8192 callbacks suppressed [ 3308.170402] blk_update_request: operation not supported error, dev loop4, sector 399872 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3308.170823] blk_update_request: operation not supported error, dev loop4, sector 8224 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3308.171201] blk_update_request: operation not supported error, dev loop4, sector 58264 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3308.175512] blk_update_request: operation not supported error, dev loop4, sector 58272 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3308.283229] LDISKFS-fs (loop4): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 3308.289343] systemd[1]: tmp-mnt5L1kDd.mount: Succeeded. [ 3309.011463] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 3309.041515] Lustre: lustre-MDT0000: mounting server target with '-t lustre' deprecated, use '-t lustre_tgt' [ 3309.050778] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 3310.273527] Lustre: Setting parameter lustre-MDT0000.mdt.identity_upcall in log lustre-MDT0000 [ 3310.313151] Lustre: ctl-lustre-MDT0000: No data found on store. Initialize space: rc = -61 [ 3310.382398] Lustre: lustre-MDT0000: new disk, initializing [ 3310.447788] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 3310.451247] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000200000400-0x0000000240000400]:0:mdt [ 3312.243875] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 3312.267924] systemd[1]: tmp-mnt33Zk9k.mount: Succeeded. [ 3312.284236] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 3312.322382] Lustre: Setting parameter lustre-MDT0001.mdt.identity_upcall in log lustre-MDT0001 [ 3312.336815] Lustre: srv-lustre-MDT0001: No data found on store. Initialize space: rc = -61 [ 3312.336917] Lustre: Skipped 1 previous similar message [ 3312.425075] Lustre: lustre-MDT0001: new disk, initializing [ 3312.470716] Lustre: lustre-MDT0001: Imperative Recovery not enabled, recovery window 60-180 [ 3312.492030] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000240000400-0x0000000280000400]:1:mdt [ 3312.492440] Lustre: cli-ctl-lustre-MDT0001: Allocated super-sequence [0x0000000240000400-0x0000000280000400]:1:mdt] [ 3314.808404] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 3315.744738] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 3316.261217] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 3316.276295] systemd[1]: tmp-mntdCfUtw.mount: Succeeded. [ 3316.317532] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 3316.466736] Lustre: lustre-OST0000: new disk, initializing [ 3316.470460] Lustre: srv-lustre-OST0000: No data found on store. 
Initialize space: rc = -61 [ 3316.521019] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 3318.280208] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000280000400-0x00000002c0000400]:0:ost [ 3318.280560] Lustre: cli-lustre-OST0000-super: Allocated super-sequence [0x0000000280000400-0x00000002c0000400]:0:ost] [ 3318.332970] Lustre: lustre-OST0000-osc-MDT0000: update sequence from 0x100000000 to 0x280000401 [ 3318.691503] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 3319.619071] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 40 [ 3319.824171] Lustre: DEBUG MARKER: os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 0 sec [ 3320.800316] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 40 [ 3320.993843] Lustre: DEBUG MARKER: os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec [ 3321.147577] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 3323.360550] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107 [ 3323.367032] Lustre: lustre-OST0000-osc-MDT0000: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 3323.367739] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 3323.370975] Lustre: Skipped 1 previous similar message [ 3327.379395] Lustre: server umount lustre-OST0000 complete [ 3327.733276] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 3328.400929] Lustre: lustre-MDT0000-lwp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 3328.407050] Lustre: Skipped 1 previous similar message [ 3328.410193] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 3328.413945] Lustre: Skipped 1 previous similar message [ 3333.440570] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 3333.444907] Lustre: Skipped 1 previous similar message [ 3333.959019] Lustre: server umount lustre-MDT0000 complete [ 3334.323767] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 3334.342830] LustreError: 149888:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679933662 with bad export cookie 5092843095993521962 [ 3334.360552] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 3335.377362] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 3335.652163] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 3335.672061] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 3336.631229] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 3338.142209] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 3339.105332] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 3339.717535] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 3339.867648] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 3339.867847] Lustre: Skipped 1 previous similar message [ 3340.554387] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 3341.481100] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 40 [ 3341.724326] Lustre: DEBUG MARKER: os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 0 sec [ 3342.504268] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 40 [ 3342.631573] Lustre: DEBUG MARKER: os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec [ 3342.814898] Lustre: Mounted lustre-client [ 3343.465501] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 3343.486692] systemd[1]: tmp-mntVGF3B9.mount: Succeeded. [ 3343.503466] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 3343.570719] Lustre: lustre-OST0001: new disk, initializing [ 3343.571479] Lustre: srv-lustre-OST0001: No data found on store. Initialize space: rc = -61 [ 3344.026422] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x00000002c0000400-0x0000000300000400]:1:ost [ 3344.026813] Lustre: cli-lustre-OST0001-super: Allocated super-sequence [0x00000002c0000400-0x0000000300000400]:1:ost] [ 3344.168188] Lustre: lustre-OST0001-osc-MDT0000: update sequence from 0x100010000 to 0x2c0000401 [ 3346.139944] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0001-osc-MDT0000.ost_server_uuid 40 [ 3346.362306] Lustre: DEBUG MARKER: os[cp].lustre-OST0001-osc-MDT0000.ost_server_uuid in FULL state after 0 sec [ 3347.287334] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0001-osc-MDT0001.ost_server_uuid 40 [ 3347.448954] Lustre: DEBUG MARKER: os[cp].lustre-OST0001-osc-MDT0001.ost_server_uuid in FULL state after 0 sec [ 3348.174724] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state (FULL|IDLE) osc.lustre-OST0001-osc-ffff8b7859afd000.ost_server_uuid 40 [ 3348.330190] Lustre: DEBUG MARKER: osc.lustre-OST0001-osc-ffff8b7859afd000.ost_server_uuid in FULL state after 0 sec [ 3348.559852] Lustre: Mounted lustre-client [ 3349.351962] systemd[1]: mnt-lustre2.mount: Succeeded. [ 3349.431273] Lustre: Unmounted lustre-client [ 3349.721568] systemd[1]: mnt-lustre.mount: Succeeded. [ 3349.871337] systemd[1]: mnt-lustre\x2dost2.mount: Succeeded. 
[ 3353.602286] Lustre: lustre-OST0001-osc-MDT0000: Connection to lustre-OST0001 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 3353.602632] Lustre: Skipped 1 previous similar message [ 3353.603357] Lustre: lustre-OST0001: Not available for connect from 0@lo (stopping) [ 3356.011085] Lustre: server umount lustre-OST0001 complete [ 3356.011176] Lustre: Skipped 1 previous similar message [ 3356.382222] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 3357.040367] LustreError: 11-0: lustre-OST0000-osc-MDT0001: operation ost_statfs to node 0@lo failed: rc = -107 [ 3357.046792] LustreError: Skipped 1 previous similar message [ 3357.046904] Lustre: lustre-OST0000-osc-MDT0001: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 3357.047196] Lustre: Skipped 1 previous similar message [ 3358.640674] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 3358.642021] LustreError: 137-5: lustre-OST0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 3358.647772] Lustre: Skipped 2 previous similar messages [ 3358.652993] LustreError: Skipped 1 previous similar message [ 3362.583005] Lustre: server umount lustre-OST0000 complete [ 3362.834082] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 3363.680927] Lustre: lustre-MDT0000-lwp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 3363.681207] Lustre: Skipped 1 previous similar message [ 3368.720836] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 3368.721071] Lustre: Skipped 4 previous similar messages [ 3369.065906] Lustre: server umount lustre-MDT0000 complete [ 3369.344094] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 3369.382600] LustreError: 151142:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679933697 with bad export cookie 5092843095993522739 [ 3369.387307] LustreError: 151142:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) Skipped 4 previous similar messages [ 3369.387939] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 3373.440730] LNet: 153442:0:(lib-ptl.c:956:lnet_clear_lazy_portal()) Active lazy portal 0 on exit [ 3373.441141] LNet: Removed LNI 192.168.125.30@tcp [ 3374.629275] systemd-udevd[1035]: Specified user 'tss' unknown [ 3374.755415] systemd-udevd[1035]: Specified group 'tss' unknown [ 3374.794577] systemd-udevd[153669]: Using default interface naming scheme 'rhel-8.0'. [ 3375.462970] systemd[1]: usr-sbin-mount.lustre.mount: Succeeded. [ 3378.403312] LNet: HW NUMA nodes: 1, HW CPU cores: 2, npartitions: 1 [ 3379.247983] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing set_hostid [ 3379.577613] systemd-udevd[1035]: Specified user 'tss' unknown [ 3379.615821] systemd-udevd[1035]: Specified group 'tss' unknown [ 3379.859074] systemd-udevd[154510]: Using default interface naming scheme 'rhel-8.0'. 
[ 3380.332518] Lustre: Lustre: Build Version: 2.15.54 [ 3380.452630] LNet: Added LNI 192.168.125.30@tcp [8/256/0/180] [ 3380.452935] LNet: Accept secure, port 988 [ 3381.174723] Lustre: Echo OBD driver; http://www.lustre.org/ [ 3383.429623] blk_update_request: operation not supported error, dev loop0, sector 399872 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3383.431988] blk_update_request: operation not supported error, dev loop0, sector 208 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3383.432519] blk_update_request: operation not supported error, dev loop0, sector 160232 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3383.432903] blk_update_request: operation not supported error, dev loop0, sector 160240 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3383.433187] blk_update_request: operation not supported error, dev loop0, sector 160248 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3383.433474] blk_update_request: operation not supported error, dev loop0, sector 160256 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3383.433751] blk_update_request: operation not supported error, dev loop0, sector 160264 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3383.434026] blk_update_request: operation not supported error, dev loop0, sector 160272 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3383.434300] blk_update_request: operation not supported error, dev loop0, sector 160280 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3383.434585] blk_update_request: operation not supported error, dev loop0, sector 160288 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3384.411240] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 3384.431154] systemd[1]: tmp-mntoCIDDn.mount: Succeeded. [ 3386.790083] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 3386.800423] systemd[1]: tmp-mntSxvt0r.mount: Succeeded. [ 3387.780911] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 3387.789771] systemd[1]: tmp-mntYLZMLy.mount: Succeeded. [ 3388.848517] print_req_error: 8192 callbacks suppressed [ 3388.848520] blk_update_request: operation not supported error, dev loop3, sector 399872 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3388.848988] blk_update_request: operation not supported error, dev loop3, sector 8224 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3388.849489] blk_update_request: operation not supported error, dev loop3, sector 58264 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3388.856483] blk_update_request: operation not supported error, dev loop3, sector 58272 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3388.917795] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 3389.371808] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 3389.377953] systemd[1]: tmp-mntqsca5J.mount: Succeeded. [ 3389.416205] Lustre: lustre-MDT0000: mounting server target with '-t lustre' deprecated, use '-t lustre_tgt' [ 3389.417865] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 3390.542231] Lustre: Setting parameter lustre-MDT0000.mdt.identity_upcall in log lustre-MDT0000 [ 3390.549820] Lustre: ctl-lustre-MDT0000: No data found on store. Initialize space: rc = -61 [ 3390.584594] Lustre: lustre-MDT0000: new disk, initializing [ 3390.612344] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 3390.615489] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000200000400-0x0000000240000400]:0:mdt [ 3392.217494] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 3392.253480] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 3392.288470] Lustre: Setting parameter lustre-MDT0001.mdt.identity_upcall in log lustre-MDT0001 [ 3392.298544] Lustre: srv-lustre-MDT0001: No data found on store. Initialize space: rc = -61 [ 3392.298804] Lustre: Skipped 1 previous similar message [ 3392.334631] Lustre: lustre-MDT0001: new disk, initializing [ 3392.363639] Lustre: lustre-MDT0001: Imperative Recovery not enabled, recovery window 60-180 [ 3392.377842] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000240000400-0x0000000280000400]:1:mdt [ 3392.386986] Lustre: cli-ctl-lustre-MDT0001: Allocated super-sequence [0x0000000240000400-0x0000000280000400]:1:mdt] [ 3394.094119] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 3394.832576] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 3395.372767] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 3395.389694] systemd[1]: tmp-mnth4ce7N.mount: Succeeded. [ 3395.423775] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 3395.571010] Lustre: lustre-OST0000: new disk, initializing [ 3395.571516] Lustre: srv-lustre-OST0000: No data found on store. Initialize space: rc = -61 [ 3395.626412] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 3397.546262] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 3398.129497] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 40 [ 3399.860401] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000280000400-0x00000002c0000400]:0:ost [ 3399.860657] Lustre: cli-lustre-OST0000-super: Allocated super-sequence [0x0000000280000400-0x00000002c0000400]:0:ost] [ 3399.909227] Lustre: lustre-OST0000-osc-MDT0000: update sequence from 0x100000000 to 0x280000401 [ 3400.321948] Lustre: DEBUG MARKER: os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 2 sec [ 3401.243998] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 40 [ 3401.345180] Lustre: DEBUG MARKER: os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec [ 3401.412635] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. 
[ 3404.880381] LustreError: 11-0: lustre-OST0000-osc-MDT0001: operation ost_statfs to node 0@lo failed: rc = -107 [ 3404.882014] Lustre: lustre-OST0000-osc-MDT0000: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 3404.882407] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 3404.886348] Lustre: Skipped 1 previous similar message [ 3407.677820] Lustre: server umount lustre-OST0000 complete [ 3407.965927] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 3409.920721] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 3409.934533] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 3409.946363] Lustre: Skipped 2 previous similar messages [ 3414.188493] Lustre: server umount lustre-MDT0000 complete [ 3414.583436] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 3414.625539] LustreError: 156253:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679933743 with bad export cookie 12639724439030954233 [ 3414.628537] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 3415.831086] Lustre: DEBUG MARKER: == conf-sanity test 47: server restart does not make client loss lru_resize settings ========================================================== 16:15:44 (1679933744) [ 3417.431620] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing set_hostid [ 3417.607525] systemd-udevd[1035]: Specified user 'tss' unknown [ 3417.687129] systemd-udevd[1035]: Specified group 'tss' unknown [ 3417.738253] systemd-udevd[158086]: Using default interface naming scheme 'rhel-8.0'. [ 3420.894863] blk_update_request: operation not supported error, dev loop0, sector 399872 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3420.895267] blk_update_request: operation not supported error, dev loop0, sector 208 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3420.895769] blk_update_request: operation not supported error, dev loop0, sector 160232 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3420.896160] blk_update_request: operation not supported error, dev loop0, sector 160240 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3420.896443] blk_update_request: operation not supported error, dev loop0, sector 160248 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3420.896721] blk_update_request: operation not supported error, dev loop0, sector 160256 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3420.897000] blk_update_request: operation not supported error, dev loop0, sector 160264 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3420.897283] blk_update_request: operation not supported error, dev loop0, sector 160272 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3420.897558] blk_update_request: operation not supported error, dev loop0, sector 160280 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3420.897835] blk_update_request: operation not supported error, dev loop0, sector 160288 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3422.085647] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 3423.735511] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. 
Opts: errors=remount-ro [ 3424.641122] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 3424.647361] systemd[1]: tmp-mntTEFidq.mount: Succeeded. [ 3425.969995] LDISKFS-fs (loop4): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 3426.550558] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 3426.564911] systemd[1]: tmp-mnt7Pj8aM.mount: Succeeded. [ 3426.612547] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 3426.793274] Lustre: Setting parameter lustre-MDT0000.mdt.identity_upcall in log lustre-MDT0000 [ 3426.800584] Lustre: ctl-lustre-MDT0000: No data found on store. Initialize space: rc = -61 [ 3426.877384] Lustre: lustre-MDT0000: new disk, initializing [ 3426.912088] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 3426.915215] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000200000400-0x0000000240000400]:0:mdt [ 3428.822778] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 3428.840234] systemd[1]: tmp-mntrij6vK.mount: Succeeded. [ 3428.879992] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 3428.913508] Lustre: Setting parameter lustre-MDT0001.mdt.identity_upcall in log lustre-MDT0001 [ 3429.108672] Lustre: cli-ctl-lustre-MDT0001: Allocated super-sequence [0x0000000240000400-0x0000000280000400]:1:mdt] [ 3431.191774] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 3431.697007] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 3432.025731] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 3432.033539] systemd[1]: tmp-mntywKpSO.mount: Succeeded. [ 3432.102048] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 3432.245148] Lustre: lustre-OST0000: new disk, initializing [ 3432.245295] Lustre: Skipped 1 previous similar message [ 3432.245708] Lustre: srv-lustre-OST0000: No data found on store. Initialize space: rc = -61 [ 3432.245838] Lustre: Skipped 2 previous similar messages [ 3432.273932] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 3432.281242] Lustre: Skipped 1 previous similar message [ 3434.241827] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 3436.357695] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000280000400-0x00000002c0000400]:0:ost [ 3436.357907] Lustre: Skipped 1 previous similar message [ 3436.358128] Lustre: cli-lustre-OST0000-super: Allocated super-sequence [0x0000000280000400-0x00000002c0000400]:0:ost] [ 3436.380224] Lustre: lustre-OST0000-osc-MDT0000: update sequence from 0x100000000 to 0x280000401 [ 3441.453663] Lustre: Mounted lustre-client [ 3441.845611] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. 
[ 3441.884205] Lustre: Failing over lustre-OST0000 [ 3444.007453] Lustre: server umount lustre-OST0000 complete [ 3444.007559] Lustre: Skipped 1 previous similar message [ 3453.280353] Lustre: 154734:0:(client.c:2305:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1679933774/real 1679933774] req@00000000eebcc07e x1761538167824896/t0(0) o13->lustre-OST0000-osc-MDT0000@0@lo:7/4 lens 224/368 e 0 to 1 dl 1679933781 ref 1 fl Rpc:XQr/0/ffffffff rc 0/-1 job:'' [ 3453.280575] Lustre: lustre-OST0000-osc-MDT0000: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 3453.280666] Lustre: Skipped 1 previous similar message [ 3454.866021] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 3455.040687] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 3455.046104] Lustre: lustre-OST0000: in recovery but waiting for the first client to connect [ 3455.449562] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 3455.483710] Lustre: Failing over lustre-MDT0000 [ 3455.597372] Lustre: server umount lustre-MDT0000 complete [ 3458.081655] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 3458.088929] Lustre: Skipped 2 previous similar messages [ 3458.098699] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 3458.530067] Lustre: 154735:0:(client.c:2305:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1679933779/real 1679933779] req@0000000049d3b519 x1761538167826112/t0(0) o400->lustre-OST0000-osc-MDT0001@0@lo:28/4 lens 224/224 e 0 to 1 dl 1679933786 ref 1 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'' [ 3458.530285] Lustre: 154735:0:(client.c:2305:ptlrpc_expire_one_request()) Skipped 5 previous similar messages [ 3459.120846] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 3459.122014] Lustre: lustre-OST0000: Will be in recovery for at least 1:00, or until 3 clients reconnect [ 3459.127997] LustreError: Skipped 4 previous similar messages [ 3464.160704] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 3464.164744] LustreError: Skipped 5 previous similar messages [ 3465.200070] Lustre: 154734:0:(client.c:2305:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1679933786/real 1679933786] req@00000000bbdfd45e x1761538167829248/t0(0) o400->MGC192.168.125.30@tcp@0@lo:26/25 lens 224/224 e 0 to 1 dl 1679933793 ref 1 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'' [ 3465.205850] Lustre: 154734:0:(client.c:2305:ptlrpc_expire_one_request()) Skipped 1 previous similar message [ 3465.206069] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 3465.208914] Lustre: 159494:0:(mgc_request.c:1771:mgc_process_log()) MGC192.168.125.30@tcp: IR log lustre-cliir failed, not fatal: rc = -5 [ 3466.344255] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 3470.241198] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 3470.242322] LustreError: Skipped 7 previous similar messages [ 3471.251514] Lustre: Evicted from MGS (at 192.168.125.30@tcp) after server handle changed from 0xaf695268bf437bfb to 0xaf695268bf4380e7 [ 3471.257361] Lustre: MGC192.168.125.30@tcp: Connection restored to (at 0@lo) [ 3471.326547] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_connect to node 0@lo failed: rc = -114 [ 3471.341385] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 3471.352803] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 3476.402749] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 2 clients reconnect [ 3476.404006] Lustre: lustre-MDT0000-lwp-MDT0001: Connection restored to (at 0@lo) [ 3476.413317] Lustre: lustre-OST0000: Recovery over after 0:17, of 3 clients 3 recovered and 0 were evicted. [ 3476.417726] Lustre: Skipped 1 previous similar message [ 3476.426765] Lustre: lustre-OST0000: deleting orphan objects from 0x280000401:3 to 0x280000401:33 [ 3476.696696] systemd[1]: mnt-lustre.mount: Succeeded. [ 3476.769048] Lustre: Unmounted lustre-client [ 3476.834315] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 3481.440561] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107 [ 3481.440673] Lustre: lustre-OST0000-osc-MDT0000: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 3481.440757] Lustre: Skipped 3 previous similar messages [ 3481.441257] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 3483.012531] Lustre: server umount lustre-OST0000 complete [ 3483.251624] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 3486.481139] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 3486.481642] Lustre: Skipped 2 previous similar messages [ 3489.698109] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 3489.748711] LustreError: 159483:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679933818 with bad export cookie 12639724439030956263 [ 3489.749371] LustreError: 159483:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) Skipped 2 previous similar messages [ 3489.751832] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 3493.050615] LNet: 161344:0:(lib-ptl.c:956:lnet_clear_lazy_portal()) Active lazy portal 0 on exit [ 3493.060155] LNet: Removed LNI 192.168.125.30@tcp [ 3494.322748] systemd-udevd[1035]: Specified user 'tss' unknown [ 3494.375302] systemd-udevd[1035]: Specified group 'tss' unknown [ 3494.509895] systemd-udevd[161688]: Using default interface naming scheme 'rhel-8.0'. [ 3494.830701] systemd[1]: usr-sbin-mount.lustre.mount: Succeeded. [ 3495.325212] LNet: HW NUMA nodes: 1, HW CPU cores: 2, npartitions: 1 [ 3496.138339] Lustre: DEBUG MARKER: == conf-sanity test 48: too many acls on file ============ 16:17:03 (1679933823) [ 3496.406128] systemd-udevd[1035]: Specified user 'tss' unknown [ 3496.408061] systemd-udevd[1035]: Specified group 'tss' unknown [ 3496.455915] systemd-udevd[162151]: Using default interface naming scheme 'rhel-8.0'. 
[ 3496.768326] Lustre: Lustre: Build Version: 2.15.54 [ 3496.829432] LNet: Added LNI 192.168.125.30@tcp [8/256/0/180] [ 3496.829640] LNet: Accept secure, port 988 [ 3497.359952] Lustre: Echo OBD driver; http://www.lustre.org/ [ 3498.643586] Lustre: lustre-MDT0000: mounting server target with '-t lustre' deprecated, use '-t lustre_tgt' [ 3498.646225] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 3499.851134] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 3499.860315] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 3500.352208] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 3500.478055] Lustre: lustre-MDT0001: Imperative Recovery not enabled, recovery window 60-180 [ 3501.199078] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 3501.739348] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 3502.140453] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 3502.237862] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 3502.743746] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 3504.890985] Lustre: lustre-OST0000: deleting orphan objects from 0x280000401:3 to 0x280000401:65 [ 3504.925387] Lustre: Mounted lustre-client [ 3518.320055] Lustre: 163684:0:(client.c:2305:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1679933839/real 1679933839] req@000000008a1a20e7 x1761538290543872/t0(0) o101->lustre-MDT0000-mdc-ffff8b7822993000@0@lo:12/10 lens 584/2128 e 0 to 1 dl 1679933846 ref 2 fl Rpc:XQr/2/ffffffff rc 0/-1 job:'' [ 3518.326833] Lustre: lustre-MDT0000-mdc-ffff8b7822993000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 3518.335724] Lustre: lustre-MDT0000: Client 99231bf0-7df9-4f37-9db2-bfd0c34f5334 (at 0@lo) reconnecting [ 3518.339848] Lustre: lustre-MDT0000-mdc-ffff8b7822993000: Connection restored to 192.168.125.30@tcp (at 0@lo) [ 3530.480061] Lustre: 164108:0:(client.c:2305:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1679933851/real 1679933851] req@0000000028a828b3 x1761538290709632/t0(0) o101->lustre-MDT0000-mdc-ffff8b7822993000@0@lo:12/10 lens 584/5520 e 0 to 1 dl 1679933858 ref 2 fl Rpc:XQr/2/ffffffff rc 0/-1 job:'' [ 3530.480885] Lustre: lustre-MDT0000-mdc-ffff8b7822993000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 3530.483074] Lustre: lustre-MDT0000: Client 99231bf0-7df9-4f37-9db2-bfd0c34f5334 (at 0@lo) reconnecting [ 3530.491290] Lustre: lustre-MDT0000-mdc-ffff8b7822993000: Connection restored to 192.168.125.30@tcp (at 0@lo) [ 3551.600066] Lustre: 165299:0:(client.c:2305:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1679933872/real 1679933872] req@00000000566f3bf6 x1761538291170240/t0(0) o101->lustre-MDT0000-mdc-ffff8b7822993000@0@lo:12/10 lens 584/15040 e 0 to 1 dl 1679933879 ref 2 fl Rpc:XQr/2/ffffffff 
rc 0/-1 job:'' [ 3551.600305] Lustre: lustre-MDT0000-mdc-ffff8b7822993000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 3551.600893] Lustre: lustre-MDT0000: Client 99231bf0-7df9-4f37-9db2-bfd0c34f5334 (at 0@lo) reconnecting [ 3551.608432] Lustre: lustre-MDT0000-mdc-ffff8b7822993000: Connection restored to 192.168.125.30@tcp (at 0@lo) [ 3574.640109] Lustre: 166584:0:(client.c:2305:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1679933895/real 1679933895] req@0000000052003899 x1761538291668672/t0(0) o101->lustre-MDT0000-mdc-ffff8b7822993000@0@lo:12/10 lens 584/25312 e 0 to 1 dl 1679933902 ref 2 fl Rpc:XQr/2/ffffffff rc 0/-1 job:'' [ 3574.649371] Lustre: lustre-MDT0000-mdc-ffff8b7822993000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 3574.660803] Lustre: lustre-MDT0000: Client 99231bf0-7df9-4f37-9db2-bfd0c34f5334 (at 0@lo) reconnecting [ 3574.662318] Lustre: lustre-MDT0000-mdc-ffff8b7822993000: Connection restored to 192.168.125.30@tcp (at 0@lo) [ 3598.320074] Lustre: 167959:0:(client.c:2305:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1679933919/real 1679933919] req@000000004bc4ec8c x1761538292201536/t0(0) o101->lustre-MDT0000-mdc-ffff8b7822993000@0@lo:12/10 lens 584/36312 e 0 to 1 dl 1679933926 ref 2 fl Rpc:XQr/2/ffffffff rc 0/-1 job:'' [ 3598.331603] Lustre: lustre-MDT0000-mdc-ffff8b7822993000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 3598.336266] Lustre: lustre-MDT0000: Client 99231bf0-7df9-4f37-9db2-bfd0c34f5334 (at 0@lo) reconnecting [ 3598.358628] Lustre: lustre-MDT0000-mdc-ffff8b7822993000: Connection restored to 192.168.125.30@tcp (at 0@lo) [ 3606.905334] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 3606.931916] Lustre: Failing over lustre-MDT0000 [ 3607.047409] Lustre: server umount lustre-MDT0000 complete [ 3611.043556] Lustre: lustre-MDT0000-lwp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 3611.087409] Lustre: Skipped 1 previous similar message [ 3611.087832] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 3611.092607] LustreError: Skipped 3 previous similar messages [ 3616.081486] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 3616.087580] LustreError: Skipped 3 previous similar messages [ 3617.587892] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 3617.608706] LustreError: 11-0: MGC192.168.125.30@tcp: operation mgs_target_reg to node 0@lo failed: rc = -107 [ 3617.608931] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 3617.610397] Lustre: Evicted from MGS (at 192.168.125.30@tcp) after server handle changed from 0x95f98a53ac198ac6 to 0x95f98a53ac1c72a2 [ 3617.612593] Lustre: MGC192.168.125.30@tcp: Connection restored to 192.168.125.30@tcp (at 0@lo) [ 3617.612687] Lustre: Skipped 1 previous similar message [ 3617.665180] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 3617.676691] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 3622.722947] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 2 clients reconnect [ 3622.732647] Lustre: lustre-MDT0000: Recovery over after 0:01, of 2 clients 2 recovered and 0 were evicted. [ 3622.744836] Lustre: lustre-OST0000: deleting orphan objects from 0x280000401:68 to 0x280000401:97 [ 3623.227032] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 3623.393739] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 3624.610754] Lustre: setting import lustre-MDT0000_UUID INACTIVE by administrator request [ 3635.042571] systemd[1]: mnt-lustre.mount: Succeeded. [ 3635.095941] Lustre: Unmounted lustre-client [ 3635.190470] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 3637.840476] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 3637.840616] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 3637.840731] Lustre: Skipped 3 previous similar messages [ 3637.840983] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 3639.921021] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 3639.925389] Lustre: Skipped 2 previous similar messages [ 3644.960732] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 3644.961585] Lustre: Skipped 2 previous similar messages [ 3649.760043] Lustre: lustre-MDT0000 is waiting for obd_unlinked_exports more than 8 seconds. The obd refcount = 2. Is it stuck? [ 3649.818038] Lustre: server umount lustre-MDT0000 complete [ 3649.983546] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 3650.000986] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 3650.001137] LustreError: Skipped 4 previous similar messages [ 3650.013989] LustreError: 164402:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679933978 with bad export cookie 10806820872826679970 [ 3650.014136] LustreError: 164402:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) Skipped 1 previous similar message [ 3650.014372] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 3655.040390] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. 
[ 3655.041009] Lustre: lustre-MDT0001: Not available for connect from 0@lo (stopping) [ 3655.041275] LustreError: Skipped 2 previous similar messages [ 3656.278303] Lustre: server umount lustre-MDT0001 complete [ 3656.593982] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 3662.902201] Lustre: server umount lustre-OST0000 complete [ 3663.972386] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing set_hostid [ 3664.185165] systemd-udevd[1035]: Specified user 'tss' unknown [ 3664.185447] systemd-udevd[1035]: Specified group 'tss' unknown [ 3664.236548] systemd-udevd[168900]: Using default interface naming scheme 'rhel-8.0'. [ 3667.479302] print_req_error: 8196 callbacks suppressed [ 3667.479305] blk_update_request: operation not supported error, dev loop0, sector 399872 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3667.479772] blk_update_request: operation not supported error, dev loop0, sector 208 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3667.497689] blk_update_request: operation not supported error, dev loop0, sector 160232 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3667.498150] blk_update_request: operation not supported error, dev loop0, sector 160240 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3667.498449] blk_update_request: operation not supported error, dev loop0, sector 160248 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3667.498728] blk_update_request: operation not supported error, dev loop0, sector 160256 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3667.499013] blk_update_request: operation not supported error, dev loop0, sector 160264 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3667.499292] blk_update_request: operation not supported error, dev loop0, sector 160272 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3667.499566] blk_update_request: operation not supported error, dev loop0, sector 160280 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3667.499843] blk_update_request: operation not supported error, dev loop0, sector 160288 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3668.083112] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 3668.089641] systemd[1]: tmp-mntsmUsF3.mount: Succeeded. [ 3669.861068] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 3669.867596] systemd[1]: tmp-mntBV2si5.mount: Succeeded. [ 3670.992412] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 3671.008545] systemd[1]: tmp-mntgQTEfq.mount: Succeeded. [ 3672.532373] LDISKFS-fs (loop4): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 3672.538408] systemd[1]: tmp-mnt9YxCvy.mount: Succeeded. [ 3673.281961] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 3673.332774] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 3673.453302] Lustre: Setting parameter lustre-MDT0000.mdt.identity_upcall in log lustre-MDT0000 [ 3673.465037] Lustre: ctl-lustre-MDT0000: No data found on store. 
Initialize space: rc = -61 [ 3673.493370] Lustre: lustre-MDT0000: new disk, initializing [ 3673.532667] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 3673.535257] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000200000400-0x0000000240000400]:0:mdt [ 3675.285771] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 3675.296773] systemd[1]: tmp-mntMsyRSL.mount: Succeeded. [ 3675.327385] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 3675.349617] Lustre: Setting parameter lustre-MDT0001.mdt.identity_upcall in log lustre-MDT0001 [ 3675.377130] Lustre: srv-lustre-MDT0001: No data found on store. Initialize space: rc = -61 [ 3675.377315] Lustre: Skipped 1 previous similar message [ 3675.419977] Lustre: lustre-MDT0001: new disk, initializing [ 3675.471552] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000240000400-0x0000000280000400]:1:mdt [ 3675.471914] Lustre: cli-ctl-lustre-MDT0001: Allocated super-sequence [0x0000000240000400-0x0000000280000400]:1:mdt] [ 3677.473174] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 3678.136093] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 3678.637199] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 3678.652569] systemd[1]: tmp-mntKItKYS.mount: Succeeded. [ 3678.683741] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 3678.847515] Lustre: lustre-OST0000: new disk, initializing [ 3678.848035] Lustre: srv-lustre-OST0000: No data found on store. Initialize space: rc = -61 [ 3680.015384] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000280000400-0x00000002c0000400]:0:ost [ 3680.017018] Lustre: cli-lustre-OST0000-super: Allocated super-sequence [0x0000000280000400-0x00000002c0000400]:0:ost] [ 3680.046090] Lustre: lustre-OST0000-osc-MDT0000: update sequence from 0x100000000 to 0x280000401 [ 3680.667187] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 3681.308134] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 40 [ 3681.418591] Lustre: DEBUG MARKER: os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 0 sec [ 3681.989018] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 40 [ 3682.145149] Lustre: DEBUG MARKER: os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec [ 3682.227105] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. 
[ 3685.040454] LustreError: 11-0: lustre-OST0000-osc-MDT0001: operation ost_create to node 0@lo failed: rc = -107 [ 3685.041005] Lustre: lustre-OST0000-osc-MDT0001: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 3685.048420] LustreError: Skipped 1 previous similar message [ 3685.049034] Lustre: Skipped 3 previous similar messages [ 3685.049426] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 3685.049548] Lustre: Skipped 1 previous similar message [ 3688.473933] Lustre: server umount lustre-OST0000 complete [ 3688.713841] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 3695.165746] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 3695.200571] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 3695.208634] LustreError: Skipped 2 previous similar messages [ 3695.212583] LustreError: 170447:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679934023 with bad export cookie 10806820872826680698 [ 3695.212986] LustreError: 170447:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) Skipped 3 previous similar messages [ 3695.213751] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 3695.224202] LustreError: 171105:0:(osp_precreate.c:704:osp_precreate_send()) lustre-OST0000-osc-MDT0001: can't precreate: rc = -5 [ 3695.224461] LustreError: 171105:0:(osp_precreate.c:1405:osp_precreate_thread()) lustre-OST0000-osc-MDT0001: cannot precreate objects: rc = -5 [ 3696.194056] Lustre: DEBUG MARKER: == conf-sanity test 49a: check PARAM_SYS_LDLM_TIMEOUT option of mkfs.lustre ========================================================== 16:20:24 (1679934024) [ 3697.489715] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing set_hostid [ 3697.727790] systemd-udevd[1035]: Specified user 'tss' unknown [ 3697.728186] systemd-udevd[1035]: Specified group 'tss' unknown [ 3697.741323] systemd-udevd[172270]: Using default interface naming scheme 'rhel-8.0'. 
[ 3700.126140] print_req_error: 8196 callbacks suppressed [ 3700.126143] blk_update_request: operation not supported error, dev loop0, sector 399872 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3700.126740] blk_update_request: operation not supported error, dev loop0, sector 208 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3700.127308] blk_update_request: operation not supported error, dev loop0, sector 160232 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3700.127775] blk_update_request: operation not supported error, dev loop0, sector 160240 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3700.128137] blk_update_request: operation not supported error, dev loop0, sector 160248 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3700.128487] blk_update_request: operation not supported error, dev loop0, sector 160256 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3700.128850] blk_update_request: operation not supported error, dev loop0, sector 160264 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3700.129215] blk_update_request: operation not supported error, dev loop0, sector 160272 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3700.129565] blk_update_request: operation not supported error, dev loop0, sector 160280 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3700.129931] blk_update_request: operation not supported error, dev loop0, sector 160288 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3700.732879] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 3700.739258] systemd[1]: tmp-mnt9bxG8p.mount: Succeeded. [ 3702.398631] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 3703.399483] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 3704.919676] LDISKFS-fs (loop4): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 3704.926218] systemd[1]: tmp-mntiRauhO.mount: Succeeded. [ 3705.477704] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 3705.484276] systemd[1]: tmp-mntXIHHQO.mount: Succeeded. [ 3705.561669] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 3705.680498] Lustre: Setting parameter lustre-MDT0000.mdt.identity_upcall in log lustre-MDT0000 [ 3705.687854] Lustre: ctl-lustre-MDT0000: No data found on store. Initialize space: rc = -61 [ 3705.720503] Lustre: lustre-MDT0000: new disk, initializing [ 3705.749958] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 3705.750318] Lustre: Skipped 2 previous similar messages [ 3705.753203] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000200000400-0x0000000240000400]:0:mdt [ 3707.416234] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 3707.511763] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 3707.649338] Lustre: cli-ctl-lustre-MDT0001: Allocated super-sequence [0x0000000240000400-0x0000000280000400]:1:mdt] [ 3709.519412] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 3710.065411] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 3710.476584] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 3710.484918] systemd[1]: tmp-mntHfz2Ta.mount: Succeeded. [ 3710.533515] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 3710.636401] Lustre: lustre-OST0000: new disk, initializing [ 3710.636563] Lustre: Skipped 1 previous similar message [ 3710.636988] Lustre: srv-lustre-OST0000: No data found on store. Initialize space: rc = -61 [ 3710.637929] Lustre: Skipped 2 previous similar messages [ 3711.778312] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000280000400-0x00000002c0000400]:0:ost [ 3711.778885] Lustre: Skipped 1 previous similar message [ 3711.779254] Lustre: cli-lustre-OST0000-super: Allocated super-sequence [0x0000000280000400-0x00000002c0000400]:0:ost] [ 3711.817082] Lustre: lustre-OST0000-osc-MDT0000: update sequence from 0x100000000 to 0x280000401 [ 3712.506465] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 3712.669734] Lustre: Mounted lustre-client [ 3713.149455] systemd[1]: mnt-lustre.mount: Succeeded. [ 3713.200575] Lustre: Unmounted lustre-client [ 3713.352491] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 3716.800349] LustreError: 11-0: lustre-OST0000-osc-MDT0001: operation ost_statfs to node 0@lo failed: rc = -107 [ 3716.807653] LustreError: Skipped 1 previous similar message [ 3716.811843] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 3716.812009] Lustre: Skipped 3 previous similar messages [ 3719.620847] Lustre: server umount lustre-OST0000 complete [ 3719.621028] Lustre: Skipped 2 previous similar messages [ 3719.920702] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 3722.720375] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 3722.724885] LustreError: Skipped 1 previous similar message [ 3726.394811] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 3726.436469] LustreError: 173660:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679934054 with bad export cookie 10806820872826681475 [ 3726.436772] LustreError: 173660:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) Skipped 4 previous similar messages [ 3726.438289] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 3727.309477] Lustre: DEBUG MARKER: == conf-sanity test 49b: check PARAM_SYS_LDLM_TIMEOUT option of mkfs.lustre ========================================================== 16:20:55 (1679934055) [ 3728.742490] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing set_hostid [ 3728.847651] systemd-udevd[1035]: Specified user 'tss' unknown [ 3728.848440] systemd-udevd[1035]: Specified group 'tss' unknown [ 3728.885378] systemd-udevd[175349]: Using default interface naming scheme 'rhel-8.0'. 
[ 3730.764061] print_req_error: 8196 callbacks suppressed [ 3730.764064] blk_update_request: operation not supported error, dev loop0, sector 399872 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3730.764359] blk_update_request: operation not supported error, dev loop0, sector 208 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3730.764844] blk_update_request: operation not supported error, dev loop0, sector 160232 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3730.765186] blk_update_request: operation not supported error, dev loop0, sector 160240 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3730.765369] blk_update_request: operation not supported error, dev loop0, sector 160248 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3730.765567] blk_update_request: operation not supported error, dev loop0, sector 160256 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3730.765766] blk_update_request: operation not supported error, dev loop0, sector 160264 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3730.765951] blk_update_request: operation not supported error, dev loop0, sector 160272 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3730.766159] blk_update_request: operation not supported error, dev loop0, sector 160280 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3730.766360] blk_update_request: operation not supported error, dev loop0, sector 160288 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 3731.429994] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 3731.435984] systemd[1]: tmp-mnt592SLt.mount: Succeeded. [ 3732.984411] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 3732.990762] systemd[1]: tmp-mntlpGuSv.mount: Succeeded. [ 3733.655397] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 3733.664372] systemd[1]: tmp-mntWm5sxG.mount: Succeeded. [ 3734.801199] LDISKFS-fs (loop4): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 3735.442116] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 3735.464826] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 3735.606070] Lustre: Setting parameter lustre-MDT0000.mdt.identity_upcall in log lustre-MDT0000 [ 3735.606189] Lustre: Skipped 1 previous similar message [ 3735.635565] Lustre: ctl-lustre-MDT0000: No data found on store. Initialize space: rc = -61 [ 3735.693606] Lustre: lustre-MDT0000: new disk, initializing [ 3735.733705] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 3735.734175] Lustre: Skipped 2 previous similar messages [ 3735.736668] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000200000400-0x0000000240000400]:0:mdt [ 3737.355458] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 3737.392480] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 3737.623987] Lustre: cli-ctl-lustre-MDT0001: Allocated super-sequence [0x0000000240000400-0x0000000280000400]:1:mdt] [ 3739.435806] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 3739.970803] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 3740.281357] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 3740.333964] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 3742.228032] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 3743.185286] Lustre: lustre-OST0000-osc-MDT0000: update sequence from 0x100000000 to 0x280000401 [ 3748.169517] Lustre: Mounted lustre-client [ 3748.481527] systemd[1]: mnt-lustre.mount: Succeeded. [ 3748.554533] Lustre: Unmounted lustre-client [ 3748.694284] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 3753.201601] Lustre: lustre-OST0000-osc-MDT0001: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 3753.202037] Lustre: Skipped 7 previous similar messages [ 3753.202612] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 3753.202665] Lustre: Skipped 5 previous similar messages [ 3754.938888] Lustre: server umount lustre-OST0000 complete [ 3754.938992] Lustre: Skipped 2 previous similar messages [ 3755.184985] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 3757.760339] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 3761.711895] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 3761.734979] LustreError: 176961:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679934090 with bad export cookie 10806820872826682609 [ 3761.735299] LustreError: 176961:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) Skipped 2 previous similar messages [ 3761.735587] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 3764.900774] LNet: 178064:0:(lib-ptl.c:956:lnet_clear_lazy_portal()) Active lazy portal 0 on exit [ 3764.911469] LNet: Removed LNI 192.168.125.30@tcp [ 3765.967890] systemd-udevd[1035]: Specified user 'tss' unknown [ 3765.989614] systemd-udevd[1035]: Specified group 'tss' unknown [ 3766.081416] systemd-udevd[178407]: Using default interface naming scheme 'rhel-8.0'. [ 3766.423854] systemd[1]: usr-sbin-mount.lustre.mount: Succeeded. [ 3766.945735] LNet: HW NUMA nodes: 1, HW CPU cores: 2, npartitions: 1 [ 3767.784533] Lustre: DEBUG MARKER: == conf-sanity test 50a: lazystatfs all servers available ========================================================== 16:21:35 (1679934095) [ 3768.155549] systemd-udevd[1035]: Specified user 'tss' unknown [ 3768.181192] systemd-udevd[1035]: Specified group 'tss' unknown [ 3768.226265] systemd-udevd[178856]: Using default interface naming scheme 'rhel-8.0'. 
[ 3768.811006] Lustre: Lustre: Build Version: 2.15.54 [ 3768.928313] LNet: Added LNI 192.168.125.30@tcp [8/256/0/180] [ 3768.928820] LNet: Accept secure, port 988 [ 3769.562700] Lustre: Echo OBD driver; http://www.lustre.org/ [ 3771.269898] Lustre: lustre-MDT0000: mounting server target with '-t lustre' deprecated, use '-t lustre_tgt' [ 3771.278523] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 3772.450498] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 3772.460163] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 3772.966160] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 3773.077667] Lustre: lustre-MDT0001: Imperative Recovery not enabled, recovery window 60-180 [ 3773.751988] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 3774.246098] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 3774.577822] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 3774.717455] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 3775.188151] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 3777.371872] Lustre: Mounted lustre-client [ 3777.372696] Lustre: lustre-OST0000: deleting orphan objects from 0x280000401:3 to 0x280000401:33 [ 3792.787242] systemd[1]: mnt-lustre.mount: Succeeded. [ 3792.854060] Lustre: Unmounted lustre-client [ 3792.930912] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 3793.520982] LustreError: 11-0: lustre-OST0000-osc-MDT0001: operation ost_statfs to node 0@lo failed: rc = -107 [ 3793.521009] Lustre: lustre-OST0000-osc-MDT0000: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 3793.529331] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 3798.561539] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 3798.565948] Lustre: Skipped 2 previous similar messages [ 3799.192599] Lustre: server umount lustre-OST0000 complete [ 3799.399010] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 3803.280493] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 3803.288252] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 3803.288549] Lustre: Skipped 1 previous similar message [ 3803.292902] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 3805.637703] Lustre: server umount lustre-MDT0000 complete [ 3805.844596] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. 
[ 3805.891238] LustreError: 179423:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679934134 with bad export cookie 9811535707667293152 [ 3805.892683] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 3805.895322] LustreError: 179423:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) Skipped 1 previous similar message [ 3809.141013] LNet: 180652:0:(lib-ptl.c:956:lnet_clear_lazy_portal()) Active lazy portal 0 on exit [ 3809.141400] LNet: Removed LNI 192.168.125.30@tcp [ 3810.221750] systemd-udevd[1035]: Specified user 'tss' unknown [ 3810.247777] systemd-udevd[1035]: Specified group 'tss' unknown [ 3810.317499] systemd-udevd[180994]: Using default interface naming scheme 'rhel-8.0'. [ 3810.627627] systemd[1]: usr-sbin-mount.lustre.mount: Succeeded. [ 3811.142803] LNet: HW NUMA nodes: 1, HW CPU cores: 2, npartitions: 1 [ 3811.953630] Lustre: DEBUG MARKER: == conf-sanity test 50b: lazystatfs all servers down ===== 16:22:19 (1679934139) [ 3812.160568] systemd-udevd[1035]: Specified user 'tss' unknown [ 3812.161173] systemd-udevd[1035]: Specified group 'tss' unknown [ 3812.230124] systemd-udevd[181381]: Using default interface naming scheme 'rhel-8.0'. [ 3812.850065] Lustre: Lustre: Build Version: 2.15.54 [ 3812.949760] LNet: Added LNI 192.168.125.30@tcp [8/256/0/180] [ 3812.950403] LNet: Accept secure, port 988 [ 3813.555773] Lustre: Echo OBD driver; http://www.lustre.org/ [ 3815.164500] Lustre: lustre-MDT0000: mounting server target with '-t lustre' deprecated, use '-t lustre_tgt' [ 3815.168759] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 3816.333400] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 3816.343234] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 3817.039006] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 3817.155583] Lustre: lustre-MDT0001: Imperative Recovery not enabled, recovery window 60-180 [ 3817.961282] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 3818.461354] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 3818.807139] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 3818.918086] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 3819.634448] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 3820.884475] Lustre: lustre-OST0000: deleting orphan objects from 0x280000401:35 to 0x280000401:65 [ 3825.931254] Lustre: Mounted lustre-client [ 3827.112106] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. 
[ 3830.160512] LustreError: 11-0: lustre-OST0000-osc-MDT0001: operation ost_statfs to node 0@lo failed: rc = -107 [ 3830.168696] Lustre: lustre-OST0000-osc-MDT0001: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 3830.177934] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 3830.960369] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107 [ 3830.962308] Lustre: lustre-OST0000-osc-ffff8b7840e3f000: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 3830.967459] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 3830.976216] Lustre: Skipped 1 previous similar message [ 3836.000800] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 3836.010626] Lustre: Skipped 1 previous similar message [ 3841.042583] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 3841.048626] Lustre: Skipped 2 previous similar messages [ 3841.760041] Lustre: lustre-OST0000 is waiting for obd_unlinked_exports more than 8 seconds. The obd refcount = 2. Is it stuck? [ 3841.862872] Lustre: server umount lustre-OST0000 complete [ 3842.542660] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state DISCONN osc.lustre-OST0000-osc-ffff8b7840e3f000.ost_server_uuid 40 [ 3842.710616] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-ffff8b7840e3f000.ost_server_uuid in DISCONN state after 0 sec [ 3842.819099] Lustre: DEBUG MARKER: OSCs should all be DISCONN [ 3852.135199] systemd[1]: mnt-lustre.mount: Succeeded. [ 3852.180086] Lustre: Unmounted lustre-client [ 3852.281755] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 3852.400462] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 3852.405671] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 3852.407252] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 3852.407448] Lustre: Skipped 2 previous similar messages [ 3856.161010] Lustre: lustre-MDT0000-lwp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 3858.511366] Lustre: server umount lustre-MDT0000 complete [ 3858.772744] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 3858.822753] LustreError: 182010:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679934187 with bad export cookie 11740141019066496574 [ 3858.834824] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 3859.643640] Lustre: DEBUG MARKER: == conf-sanity test 50c: lazystatfs one server down ====== 16:23:07 (1679934187) [ 3860.053944] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 3860.256632] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 3860.280544] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 3860.962576] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 3861.825733] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 3862.636671] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 3863.172838] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 3864.148394] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 3864.548157] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 3864.556208] systemd[1]: tmp-mntyeMaLl.mount: Succeeded. [ 3864.582569] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 3864.620079] Lustre: lustre-OST0001: new disk, initializing [ 3864.621911] Lustre: srv-lustre-OST0001: No data found on store. Initialize space: rc = -61 [ 3864.644801] Lustre: lustre-OST0001: Imperative Recovery not enabled, recovery window 60-180 [ 3864.644921] Lustre: Skipped 2 previous similar messages [ 3865.685050] Lustre: lustre-OST0000: deleting orphan objects from 0x280000401:67 to 0x280000401:97 [ 3866.622547] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0001-osc-[-0-9a-f]*.ost_server_uuid [ 3866.785712] Lustre: Mounted lustre-client [ 3866.955615] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 3869.296934] Lustre: lustre-OST0000-osc-MDT0000: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 3869.299047] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 3869.300276] Lustre: Skipped 4 previous similar messages [ 3869.301510] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x00000002c0000400-0x0000000300000400]:1:ost [ 3869.301921] Lustre: cli-lustre-OST0001-super: Allocated super-sequence [0x00000002c0000400-0x0000000300000400]:1:ost] [ 3869.353300] Lustre: lustre-OST0001-osc-MDT0000: update sequence from 0x100010000 to 0x2c0000401 [ 3873.128629] Lustre: server umount lustre-OST0000 complete [ 3873.128799] Lustre: Skipped 1 previous similar message [ 3873.797516] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state DISCONN os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 40 [ 3873.944426] Lustre: DEBUG MARKER: os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in DISCONN state after 0 sec [ 3874.423983] LustreError: 137-5: lustre-OST0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 3874.562077] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state DISCONN os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 40 [ 3874.726657] Lustre: DEBUG MARKER: os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in DISCONN state after 0 sec [ 3879.441152] LustreError: 137-5: lustre-OST0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 3879.449464] LustreError: Skipped 4 previous similar messages [ 3884.101534] systemd[1]: mnt-lustre.mount: Succeeded. [ 3884.178416] Lustre: Unmounted lustre-client [ 3884.252652] systemd[1]: mnt-lustre\x2dost2.mount: Succeeded. 
[ 3884.400372] LustreError: 11-0: lustre-OST0001-osc-MDT0001: operation ost_statfs to node 0@lo failed: rc = -107
[ 3884.407282] Lustre: lustre-OST0001-osc-MDT0001: Connection to lustre-OST0001 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[ 3884.407589] Lustre: Skipped 2 previous similar messages
[ 3884.481948] LustreError: 137-5: lustre-OST0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
[ 3884.486195] LustreError: Skipped 1 previous similar message
[ 3889.521436] Lustre: lustre-OST0001: Not available for connect from 0@lo (stopping)
[ 3889.521576] Lustre: Skipped 3 previous similar messages
[ 3894.561465] LustreError: 137-5: lustre-OST0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
[ 3894.561704] LustreError: Skipped 3 previous similar messages
[ 3898.720058] Lustre: lustre-OST0001 is waiting for obd_unlinked_exports more than 8 seconds. The obd refcount = 2. Is it stuck?
[ 3898.763284] Lustre: server umount lustre-OST0001 complete
[ 3898.955306] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded.
[ 3905.149690] Lustre: server umount lustre-MDT0000 complete
[ 3905.348670] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded.
[ 3905.394946] LustreError: 183272:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679934233 with bad export cookie 11740141019066497687
[ 3905.395098] LustreError: 183272:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) Skipped 4 previous similar messages
[ 3905.395279] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail
[ 3905.717910] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 3905.725647] systemd[1]: tmp-mntkqW7dP.mount: Succeeded.
[ 3905.894627] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 3906.081177] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 3906.248256] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 3906.253706] systemd[1]: tmp-mntbQimNg.mount: Succeeded.
[ 3906.587289] Lustre: DEBUG MARKER: == conf-sanity test 50d: lazystatfs client/server conn race ========================================================== 16:23:54 (1679934234)
[ 3906.791838] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 3906.857997] Lustre: MGS: Logs for fs lustre were removed by user request. All servers must be restarted in order to regenerate the logs: rc = 0
[ 3906.867047] Lustre: Setting parameter lustre-MDT0000.mdt.identity_upcall in log lustre-MDT0000
[ 3906.906074] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180
[ 3907.212760] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 3907.223244] Lustre: MGS: Regenerating lustre-MDT0001 log by user request: rc = 0
[ 3907.747636] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 3908.043837] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
[ 3908.254351] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[ 3908.294503] Lustre: MGS: Regenerating lustre-OST0000 log by user request: rc = 0
[ 3908.716220] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
[ 3908.968119] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[ 3909.408634] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0001-osc-[-0-9a-f]*.ost_server_uuid
[ 3916.700970] Lustre: lustre-OST0000: deleting orphan objects from 0x280000401:99 to 0x280000401:129
[ 3921.771261] Lustre: Mounted lustre-client
[ 3921.882298] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded.
[ 3926.720944] LustreError: 11-0: lustre-OST0000-osc-MDT0001: operation ost_statfs to node 0@lo failed: rc = -107
[ 3926.721205] Lustre: lustre-OST0000-osc-MDT0001: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[ 3926.721365] Lustre: Skipped 3 previous similar messages
[ 3926.721844] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping)
[ 3926.721997] Lustre: Skipped 6 previous similar messages
[ 3928.011518] Lustre: server umount lustre-OST0000 complete
[ 3928.011638] Lustre: Skipped 1 previous similar message
[ 3931.841732] LustreError: 137-5: lustre-OST0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
[ 3931.841876] LustreError: Skipped 1 previous similar message
[ 3937.356914] systemd[1]: mnt-lustre.mount: Succeeded.
[ 3937.428830] Lustre: Unmounted lustre-client
[ 3937.504267] systemd[1]: mnt-lustre\x2dost2.mount: Succeeded.
[ 3941.840346] LustreError: 11-0: lustre-OST0001-osc-MDT0001: operation ost_statfs to node 0@lo failed: rc = -107
[ 3943.885348] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded.
[ 3950.121466] Lustre: server umount lustre-MDT0000 complete
[ 3950.121681] Lustre: Skipped 1 previous similar message
[ 3950.323413] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded.
[ 3950.371389] LustreError: 185204:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679934278 with bad export cookie 11740141019066499080
[ 3950.373013] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail
[ 3950.373231] LustreError: 185204:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) Skipped 1 previous similar message
[ 3950.818639] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 3950.825619] systemd[1]: tmp-mntAgy8Z0.mount: Succeeded.
[ 3951.118198] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 3951.296830] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 3951.303147] systemd[1]: tmp-mntc2IqO5.mount: Succeeded.
[ 3951.594977] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 3951.601418] systemd[1]: tmp-mnt51FYOo.mount: Succeeded.
[ 3952.051093] Lustre: DEBUG MARKER: == conf-sanity test 50e: normal statfs all servers down == 16:24:40 (1679934280)
[ 3953.506856] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing set_hostid
[ 3953.618583] systemd-udevd[1035]: Specified user 'tss' unknown
[ 3953.619542] systemd-udevd[1035]: Specified group 'tss' unknown
[ 3953.679483] systemd-udevd[187265]: Using default interface naming scheme 'rhel-8.0'.
[ 3956.374510] print_req_error: 8196 callbacks suppressed
[ 3956.374512] blk_update_request: operation not supported error, dev loop0, sector 399872 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 3956.374968] blk_update_request: operation not supported error, dev loop0, sector 208 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 3956.383373] blk_update_request: operation not supported error, dev loop0, sector 160232 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 3956.383751] blk_update_request: operation not supported error, dev loop0, sector 160240 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 3956.383991] blk_update_request: operation not supported error, dev loop0, sector 160248 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 3956.384227] blk_update_request: operation not supported error, dev loop0, sector 160256 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 3956.384473] blk_update_request: operation not supported error, dev loop0, sector 160264 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 3956.384709] blk_update_request: operation not supported error, dev loop0, sector 160272 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 3956.384943] blk_update_request: operation not supported error, dev loop0, sector 160280 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 3956.385176] blk_update_request: operation not supported error, dev loop0, sector 160288 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 3957.081645] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 3957.088112] systemd[1]: tmp-mntHWLlVc.mount: Succeeded.
[ 3958.478902] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 3958.485341] systemd[1]: tmp-mnt7JawEh.mount: Succeeded.
[ 3959.364899] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 3960.817349] LDISKFS-fs (loop4): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 3960.825785] systemd[1]: tmp-mntsm0loA.mount: Succeeded.
[ 3961.394262] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 3961.402967] systemd[1]: tmp-mntlNT4aN.mount: Succeeded.
[ 3961.471768] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 3961.561530] Lustre: Setting parameter lustre-MDT0000.mdt.identity_upcall in log lustre-MDT0000
[ 3961.561733] Lustre: Skipped 1 previous similar message
[ 3961.568711] Lustre: ctl-lustre-MDT0000: No data found on store. Initialize space: rc = -61
[ 3961.594418] Lustre: lustre-MDT0000: new disk, initializing
[ 3961.621953] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180
[ 3961.622501] Lustre: Skipped 3 previous similar messages
[ 3961.625035] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000200000400-0x0000000240000400]:0:mdt
[ 3963.203975] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 3963.212185] systemd[1]: tmp-mntgjB7DK.mount: Succeeded.
[ 3963.243818] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 3963.270669] Lustre: Setting parameter lustre-MDT0001.mdt.identity_upcall in log lustre-MDT0001
[ 3963.281539] Lustre: srv-lustre-MDT0001: No data found on store. Initialize space: rc = -61
[ 3963.281758] Lustre: Skipped 1 previous similar message
[ 3963.311282] Lustre: lustre-MDT0001: new disk, initializing
[ 3963.367383] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000240000400-0x0000000280000400]:1:mdt
[ 3963.374315] Lustre: cli-ctl-lustre-MDT0001: Allocated super-sequence [0x0000000240000400-0x0000000280000400]:1:mdt]
[ 3964.897715] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 3965.217182] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
[ 3965.448786] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 3965.455814] systemd[1]: tmp-mntonUJ1M.mount: Succeeded.
[ 3965.482693] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[ 3965.589766] Lustre: lustre-OST0000: new disk, initializing
[ 3965.590274] Lustre: srv-lustre-OST0000: No data found on store. Initialize space: rc = -61
[ 3967.185015] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
[ 3967.571974] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 40
[ 3969.702217] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000280000400-0x00000002c0000400]:0:ost
[ 3969.706392] Lustre: cli-lustre-OST0000-super: Allocated super-sequence [0x0000000280000400-0x00000002c0000400]:0:ost]
[ 3969.770182] Lustre: lustre-OST0000-osc-MDT0000: update sequence from 0x100000000 to 0x280000401
[ 3970.830795] Lustre: DEBUG MARKER: os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 3 sec
[ 3971.752134] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 40
[ 3971.937292] Lustre: DEBUG MARKER: os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec
[ 3972.046755] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded.
[ 3974.720379] LustreError: 11-0: lustre-OST0000-osc-MDT0001: operation ost_statfs to node 0@lo failed: rc = -107
[ 3974.724500] LustreError: Skipped 1 previous similar message
[ 3974.724582] Lustre: lustre-OST0000-osc-MDT0001: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[ 3974.728477] Lustre: Skipped 6 previous similar messages
[ 3978.592748] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded.
[ 3984.829507] Lustre: server umount lustre-MDT0000 complete
[ 3984.840358] Lustre: Skipped 2 previous similar messages
[ 3984.880408] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
[ 3984.885122] LustreError: Skipped 7 previous similar messages
[ 3985.179469] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded.
[ 3985.203789] LustreError: 188787:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679934313 with bad export cookie 11740141019066500529 [ 3985.205879] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 3985.210488] LustreError: 188787:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) Skipped 3 previous similar messages [ 3986.245499] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 3987.468861] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 3988.597202] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 3989.343184] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 3989.733880] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 3990.593370] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 3991.203221] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 40 [ 3993.463539] Lustre: DEBUG MARKER: os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 2 sec [ 3994.140920] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 40 [ 3994.281480] Lustre: DEBUG MARKER: os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec [ 3994.376400] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 3998.000559] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 3998.007323] Lustre: Skipped 11 previous similar messages [ 4001.245639] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state DISCONN os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 40 [ 4001.327484] Lustre: DEBUG MARKER: os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in DISCONN state after 0 sec [ 4001.809765] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state DISCONN os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 40 [ 4001.953479] Lustre: DEBUG MARKER: os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in DISCONN state after 0 sec [ 4002.082348] Lustre: Mounted lustre-client [ 4023.419357] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 4023.506013] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 4023.506135] Lustre: Skipped 5 previous similar messages [ 4023.896449] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 4028.721203] LustreError: 167-0: lustre-OST0000-osc-MDT0000: This client was evicted by lustre-OST0000; in progress operations using this service will fail. [ 4028.721698] Lustre: lustre-OST0000-osc-MDT0001: Connection restored to (at 0@lo) [ 4028.731204] Lustre: Skipped 1 previous similar message [ 4032.298601] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 8 sec [ 4032.580849] systemd[1]: mnt-lustre.mount: Succeeded. [ 4032.642979] Lustre: Unmounted lustre-client [ 4032.764809] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. 
[ 4033.760401] LustreError: 11-0: lustre-OST0000-osc-MDT0001: operation ost_statfs to node 0@lo failed: rc = -107 [ 4033.762886] LustreError: Skipped 3 previous similar messages [ 4035.176625] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 4041.665941] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 4041.699909] LustreError: 190059:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679934370 with bad export cookie 11740141019066501299 [ 4041.700549] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 4042.493546] Lustre: DEBUG MARKER: == conf-sanity test 50f: normal statfs one server in down ========================================================== 16:26:10 (1679934370) [ 4042.806759] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 4043.515846] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 4044.390428] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 4044.901177] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 4045.277834] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 4046.478854] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 4047.037347] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 40 [ 4049.313919] Lustre: DEBUG MARKER: os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 2 sec [ 4050.117225] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 40 [ 4050.257860] Lustre: DEBUG MARKER: os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec [ 4050.778866] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 4050.790797] systemd[1]: tmp-mntbeLlQ1.mount: Succeeded. [ 4050.832889] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 4050.894182] Lustre: lustre-OST0001: new disk, initializing [ 4050.897424] Lustre: srv-lustre-OST0001: No data found on store. 
Initialize space: rc = -61 [ 4053.027554] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0001-osc-[-0-9a-f]*.ost_server_uuid [ 4053.597983] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0001-osc-MDT0000.ost_server_uuid 40 [ 4058.102251] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x00000002c0000400-0x0000000300000400]:1:ost [ 4058.102646] Lustre: cli-lustre-OST0001-super: Allocated super-sequence [0x00000002c0000400-0x0000000300000400]:1:ost] [ 4058.123959] Lustre: lustre-OST0001-osc-MDT0000: update sequence from 0x100010000 to 0x2c0000401 [ 4058.969730] Lustre: DEBUG MARKER: os[cp].lustre-OST0001-osc-MDT0000.ost_server_uuid in FULL state after 5 sec [ 4059.670788] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0001-osc-MDT0001.ost_server_uuid 40 [ 4059.812142] Lustre: DEBUG MARKER: os[cp].lustre-OST0001-osc-MDT0001.ost_server_uuid in FULL state after 0 sec [ 4059.917752] systemd[1]: mnt-lustre\x2dost2.mount: Succeeded. [ 4063.122399] Lustre: lustre-OST0001-osc-MDT0001: Connection to lustre-OST0001 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 4063.130371] Lustre: Skipped 9 previous similar messages [ 4066.088302] Lustre: server umount lustre-OST0001 complete [ 4066.088447] Lustre: Skipped 5 previous similar messages [ 4066.655604] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state DISCONN os[cp].lustre-OST0001-osc-MDT0000.ost_server_uuid 40 [ 4066.731420] Lustre: DEBUG MARKER: os[cp].lustre-OST0001-osc-MDT0000.ost_server_uuid in DISCONN state after 0 sec [ 4067.078201] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state DISCONN os[cp].lustre-OST0001-osc-MDT0001.ost_server_uuid 40 [ 4067.153803] Lustre: DEBUG MARKER: os[cp].lustre-OST0001-osc-MDT0001.ost_server_uuid in DISCONN state after 0 sec [ 4067.222442] LustreError: 137-5: lustre-OST0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 4067.222602] LustreError: Skipped 2 previous similar messages [ 4067.253825] Lustre: Mounted lustre-client [ 4088.959716] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 4089.129080] Lustre: lustre-OST0001: Imperative Recovery not enabled, recovery window 60-180 [ 4089.136928] Lustre: Skipped 4 previous similar messages [ 4090.354327] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0001-osc-[-0-9a-f]*.ost_server_uuid [ 4094.164451] LustreError: 167-0: lustre-OST0001-osc-MDT0001: This client was evicted by lustre-OST0001; in progress operations using this service will fail. [ 4094.165348] LustreError: Skipped 1 previous similar message [ 4094.166001] Lustre: lustre-OST0001-osc-MDT0000: Connection restored to (at 0@lo) [ 4094.169427] Lustre: Skipped 1 previous similar message [ 4094.767780] Lustre: DEBUG MARKER: osc.lustre-OST0001-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 4 sec [ 4094.906946] systemd[1]: mnt-lustre\x2dost2.mount: Succeeded. [ 4099.204031] LustreError: 11-0: lustre-OST0001-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107 [ 4099.209608] LustreError: Skipped 2 previous similar messages [ 4101.514733] Lustre: setting import lustre-MDT0000_UUID INACTIVE by administrator request [ 4111.922620] systemd[1]: mnt-lustre.mount: Succeeded. 
[ 4111.985632] Lustre: Unmounted lustre-client [ 4112.141586] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 4118.708550] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 4125.227691] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 4125.275300] LustreError: 192081:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679934453 with bad export cookie 11740141019066502314 [ 4125.282283] LustreError: 192081:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) Skipped 2 previous similar messages [ 4125.300096] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 4140.000043] Lustre: lustre-MDT0001 is waiting for obd_unlinked_exports more than 8 seconds. The obd refcount = 2. Is it stuck? [ 4140.744762] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 4140.758410] systemd[1]: tmp-mntIwZ4Gx.mount: Succeeded. [ 4141.131316] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 4141.136669] systemd[1]: tmp-mntWC8YnR.mount: Succeeded. [ 4141.503048] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 4141.919476] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 4141.926838] systemd[1]: tmp-mntrBBgpT.mount: Succeeded. [ 4142.538471] Lustre: DEBUG MARKER: == conf-sanity test 50g: deactivated OST should not cause panic ========================================================== 16:27:50 (1679934470) [ 4143.001457] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 4143.109320] Lustre: MGS: Logs for fs lustre were removed by user request. All servers must be restarted in order to regenerate the logs: rc = 0 [ 4143.113736] Lustre: Setting parameter lustre-MDT0000.mdt.identity_upcall in log lustre-MDT0000 [ 4143.898789] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 4143.927288] Lustre: MGS: Regenerating lustre-MDT0001 log by user request: rc = 0 [ 4143.927458] Lustre: Skipped 1 previous similar message [ 4145.036183] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 4145.779604] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 4146.263596] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 4146.339465] Lustre: MGS: Regenerating lustre-OST0000 log by user request: rc = 0 [ 4147.035006] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 4153.770434] Lustre: Mounted lustre-client [ 4155.569292] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. 
Opts: errors=remount-ro,no_mbcache,nodelalloc [ 4155.597272] Lustre: MGS: Regenerating lustre-OST0001 log by user request: rc = 0 [ 4156.780821] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0001-osc-[-0-9a-f]*.ost_server_uuid [ 4157.004930] Lustre: DEBUG MARKER: osc.lustre-OST0001-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 4157.665496] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0001-osc-MDT0000.ost_server_uuid 40 [ 4157.783113] Lustre: DEBUG MARKER: os[cp].lustre-OST0001-osc-MDT0000.ost_server_uuid in FULL state after 0 sec [ 4158.329935] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0001-osc-MDT0001.ost_server_uuid 40 [ 4158.469382] Lustre: DEBUG MARKER: os[cp].lustre-OST0001-osc-MDT0001.ost_server_uuid in FULL state after 0 sec [ 4158.933886] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state (FULL|IDLE) osc.lustre-OST0001-osc-ffff8b7842437000.ost_server_uuid 40 [ 4159.051614] Lustre: DEBUG MARKER: osc.lustre-OST0001-osc-ffff8b7842437000.ost_server_uuid in FULL state after 0 sec [ 4159.094125] Lustre: Permanently deactivating lustre-OST0001 [ 4159.111061] Lustre: Setting parameter lustre-OST0001-osc.osc.active in log lustre-client [ 4159.111274] Lustre: Skipped 1 previous similar message [ 4159.373007] systemd[1]: mnt-lustre.mount: Succeeded. [ 4159.426925] Lustre: Unmounted lustre-client [ 4159.564116] Lustre: setting import lustre-OST0001_UUID INACTIVE by administrator request [ 4159.565111] Lustre: Skipped 3 previous similar messages [ 4159.812204] Lustre: Permanently reactivating lustre-OST0001 [ 4159.856234] LustreError: 196518:0:(sec.c:411:import_sec_validate_get()) import 000000004b48f08a (NEW) with no sec [ 4160.064058] systemd[1]: mnt-lustre.mount: Succeeded. [ 4160.372402] systemd[1]: mnt-lustre\x2dost2.mount: Succeeded. [ 4161.200771] Lustre: lustre-OST0001: Not available for connect from 0@lo (stopping) [ 4161.204806] Lustre: Skipped 16 previous similar messages [ 4166.601942] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 4173.176303] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 4188.000050] Lustre: lustre-MDT0000 is waiting for obd_unlinked_exports more than 8 seconds. The obd refcount = 2. Is it stuck? [ 4188.352942] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 4188.392710] LustreError: 194920:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679934516 with bad export cookie 11740141019066503644 [ 4188.401745] LustreError: 194920:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) Skipped 4 previous similar messages [ 4188.402121] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 4189.049040] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 4189.102620] systemd[1]: tmp-mntsG6xNS.mount: Succeeded. [ 4189.366830] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 4189.374309] systemd[1]: tmp-mntsyOkEb.mount: Succeeded. [ 4189.705719] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 4190.058052] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. 
Opts: errors=remount-ro [ 4190.669368] Lustre: DEBUG MARKER: == conf-sanity test 50h: LU-642: activate deactivated OST ========================================================== 16:28:38 (1679934518) [ 4190.799604] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 4190.816385] systemd[1]: tmp-mntGk07wi.mount: Succeeded. [ 4191.239715] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 4191.381848] Lustre: MGS: Logs for fs lustre were removed by user request. All servers must be restarted in order to regenerate the logs: rc = 0 [ 4191.386139] Lustre: Setting parameter lustre-MDT0000.mdt.identity_upcall in log lustre-MDT0000 [ 4191.386303] Lustre: Skipped 5 previous similar messages [ 4192.172188] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 4192.212524] Lustre: MGS: Regenerating lustre-MDT0001 log by user request: rc = 0 [ 4193.432469] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 4194.135742] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 4194.512647] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 4194.524835] systemd[1]: tmp-mntZlbBUz.mount: Succeeded. [ 4194.568034] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 4194.668466] Lustre: Permanently deactivating lustre-OST0000 [ 4195.388586] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 4195.791846] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 4196.616590] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0001-osc-[-0-9a-f]*.ost_server_uuid [ 4196.824409] Lustre: setting import lustre-OST0000_UUID INACTIVE by administrator request [ 4200.334824] Lustre: setting import lustre-OST0000_UUID INACTIVE by administrator request [ 4200.395843] Lustre: lustre-OST0001: deleting orphan objects from 0x2c0000401:35 to 0x2c0000401:65 [ 4205.456366] Lustre: Mounted lustre-client [ 4205.457755] Lustre: Skipped 1 previous similar message [ 4205.682770] Lustre: Permanently reactivating lustre-OST0000 [ 4207.764916] LustreError: 198268:0:(obd_config.c:2004:class_config_llog_handler()) MGC192.168.125.30@tcp: cfg command failed: rc = -114 [ 4207.772675] Lustre: cmd=cf00f 0:lustre-OST0000-osc 1:osc.active=1 [ 4207.772675] [ 4207.776458] LustreError: 197151:0:(mgc_request.c:623:do_requeue()) failed processing log: -114 [ 4207.800959] Lustre: lustre-OST0000: Received new MDS connection from 0@lo, remove former export from same NID [ 4207.851275] Lustre: lustre-OST0000: deleting orphan objects from 0x280000401:99 to 0x280000401:129 [ 4209.966259] systemd[1]: mnt-lustre.mount: Succeeded. [ 4210.045279] Lustre: Unmounted lustre-client [ 4210.045894] Lustre: Skipped 1 previous similar message [ 4210.197024] systemd[1]: mnt-lustre\x2dost2.mount: Succeeded. 
[ 4210.400418] Lustre: lustre-OST0001-osc-MDT0001: Connection to lustre-OST0001 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 4210.408130] Lustre: Skipped 14 previous similar messages [ 4216.412081] Lustre: server umount lustre-OST0001 complete [ 4216.412173] Lustre: Skipped 8 previous similar messages [ 4216.639563] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 4217.840616] LustreError: 137-5: lustre-OST0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 4217.843726] LustreError: Skipped 26 previous similar messages [ 4223.263039] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 4227.680351] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 4227.685264] LustreError: Skipped 11 previous similar messages [ 4229.768641] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 4229.807919] LustreError: 197137:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679934558 with bad export cookie 11740141019066505401 [ 4229.808201] LustreError: 197137:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) Skipped 4 previous similar messages [ 4229.808815] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 4233.100401] LNet: 198784:0:(lib-ptl.c:956:lnet_clear_lazy_portal()) Active lazy portal 0 on exit [ 4233.106883] LNet: Removed LNI 192.168.125.30@tcp [ 4234.030752] systemd-udevd[1035]: Specified user 'tss' unknown [ 4234.036449] systemd-udevd[1035]: Specified group 'tss' unknown [ 4234.081949] systemd-udevd[199021]: Using default interface naming scheme 'rhel-8.0'. [ 4234.291358] systemd[1]: usr-sbin-mount.lustre.mount: Succeeded. [ 4234.606091] LNet: HW NUMA nodes: 1, HW CPU cores: 2, npartitions: 1 [ 4235.411999] Lustre: DEBUG MARKER: == conf-sanity test 50i: activate deactivated MDT ======== 16:29:22 (1679934562) [ 4235.578374] systemd-udevd[1035]: Specified user 'tss' unknown [ 4235.592714] systemd-udevd[1035]: Specified group 'tss' unknown [ 4235.632728] systemd-udevd[199540]: Using default interface naming scheme 'rhel-8.0'. [ 4235.939967] Lustre: Lustre: Build Version: 2.15.54 [ 4236.016098] LNet: Added LNI 192.168.125.30@tcp [8/256/0/180] [ 4236.016345] LNet: Accept secure, port 988 [ 4236.618898] Lustre: Echo OBD driver; http://www.lustre.org/ [ 4238.008313] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 4238.585565] Lustre: lustre-MDT0000: mounting server target with '-t lustre' deprecated, use '-t lustre_tgt' [ 4238.587196] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 4239.813933] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 4239.825557] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 4240.451831] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 4240.458709] systemd[1]: tmp-mnt9iSsLh.mount: Succeeded. [ 4240.490411] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 4240.507286] Lustre: Found index 1 for lustre-MDT0001, updating log [ 4240.527693] Lustre: Modifying parameter lustre-MDT0001.mdt.identity_upcall in log lustre-MDT0001 [ 4240.528135] Lustre: Permanently deactivating lustre-MDT0001 [ 4240.635374] Lustre: lustre-MDT0001: Imperative Recovery not enabled, recovery window 60-180 [ 4241.358524] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 4242.064452] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 4242.543316] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 4242.705462] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 4243.478765] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 4243.932507] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 4244.638114] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0001-osc-[-0-9a-f]*.ost_server_uuid [ 4244.780532] Lustre: setting import lustre-MDT0001_UUID INACTIVE by administrator request [ 4245.846900] Lustre: lustre-OST0000: deleting orphan objects from 0x280000401:99 to 0x280000401:161 [ 4245.847770] Lustre: lustre-OST0001: deleting orphan objects from 0x2c0000401:35 to 0x2c0000401:97 [ 4245.857314] Lustre: lustre-OST0000: deleting orphan objects from 0x280000400:131 to 0x280000400:161 [ 4245.885712] Lustre: lustre-OST0001: deleting orphan objects from 0x2c0000400:67 to 0x2c0000400:97 [ 4248.176905] Lustre: setting import lustre-MDT0001_UUID INACTIVE by administrator request [ 4250.893325] Lustre: Mounted lustre-client [ 4251.012670] LustreError: 144-1: lustre-MDT0000: MDC0 can not be (de)activated. [ 4251.016404] LustreError: 201237:0:(mgs_llog.c:4396:mgs_write_log_param()) err -1 on param 'mdc.active=0' [ 4251.038496] LustreError: 201237:0:(mgs_handler.c:1024:mgs_iocontrol()) MGS: setparam err: rc = -1 [ 4251.179303] Lustre: Permanently reactivating lustre-MDT0001 [ 4251.196249] Lustre: Modifying parameter lustre-MDT0001-mdc.mdc.active in log lustre-client [ 4251.196410] Lustre: Skipped 3 previous similar messages [ 4254.341422] Lustre: lustre-MDT0001-osp-MDT0000: Connection to lustre-MDT0001 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 4254.359476] Lustre: lustre-MDT0001: Received new MDS connection from 0@lo, keep former export from same NID [ 4254.362363] LustreError: 167-0: lustre-MDT0001-osp-MDT0000: This client was evicted by lustre-MDT0001; in progress operations using this service will fail. 
[ 4254.362965] LustreError: 201297:0:(llog_osd.c:2133:llog_osd_get_cat_list()) lustre-MDT0001-osp-MDT0000: error reading CATALOGS: rc = -108 [ 4254.363143] LustreError: 201297:0:(lod_sub_object.c:932:lod_sub_prep_llog()) lustre-MDT0000-mdtlov: can't get id from catalogs: rc = -108 [ 4254.363374] LustreError: 201297:0:(obd_config.c:2004:class_config_llog_handler()) MGC192.168.125.30@tcp: cfg command failed: rc = -108 [ 4254.363540] Lustre: cmd=cf00f 0:lustre-MDT0000-mdtlov 1:lustre-MDT0001-osp-MDT0000.active=1 [ 4254.363540] [ 4254.363813] LustreError: 200177:0:(mgc_request.c:623:do_requeue()) failed processing log: -108 [ 4254.365446] Lustre: lustre-MDT0001-osp-MDT0000: Connection restored to 192.168.125.30@tcp (at 0@lo) [ 4255.113706] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 4255.429171] Lustre: DEBUG MARKER: mdc.lustre-MDT0001-mdc-*.mds_server_uuid in FULL state after 0 sec [ 4256.529937] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state (FULL|IDLE) osp.lustre-MDT0000-osp-MDT0001.mdt_server_uuid 40 [ 4256.729782] Lustre: DEBUG MARKER: osp.lustre-MDT0000-osp-MDT0001.mdt_server_uuid in FULL state after 0 sec [ 4257.692711] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state (FULL|IDLE) osp.lustre-MDT0001-osp-MDT0000.mdt_server_uuid 40 [ 4257.864150] Lustre: DEBUG MARKER: osp.lustre-MDT0001-osp-MDT0000.mdt_server_uuid in FULL state after 0 sec [ 4258.244365] Lustre: Permanently deactivating lustre-MDT0001 [ 4258.260183] Lustre: Modifying parameter lustre-MDT0001-mdc.mdc.active in log lustre-client [ 4258.260352] Lustre: Skipped 2 previous similar messages [ 4262.561795] Lustre: setting import lustre-MDT0001_UUID INACTIVE by administrator request [ 4268.913542] systemd[1]: mnt-lustre.mount: Succeeded. [ 4268.957416] Lustre: Unmounted lustre-client [ 4269.058852] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 4269.132491] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 4269.138290] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 4269.142903] Lustre: Skipped 3 previous similar messages [ 4271.243911] Lustre: server umount lustre-MDT0000 complete [ 4271.728359] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 4271.765856] LustreError: 200165:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679934600 with bad export cookie 15968255702199248654 [ 4271.766645] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 4271.767171] LustreError: 200165:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) Skipped 3 previous similar messages [ 4271.984839] Lustre: server umount lustre-MDT0001 complete [ 4272.250180] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 4278.407141] Lustre: server umount lustre-OST0000 complete [ 4279.629405] Lustre: DEBUG MARKER: == conf-sanity test 51: Verify that mdt_reint handles RMF_MDT_MD correctly when an OST is added ========================================================== 16:30:07 (1679934607) [ 4280.255470] systemd[1]: mnt-lustre\x2dost2.mount: Succeeded. 
[ 4281.206449] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing set_hostid [ 4281.610472] systemd-udevd[1035]: Specified user 'tss' unknown [ 4281.630358] systemd-udevd[1035]: Specified group 'tss' unknown [ 4281.730241] systemd-udevd[202666]: Using default interface naming scheme 'rhel-8.0'. [ 4284.806002] print_req_error: 8196 callbacks suppressed [ 4284.806005] blk_update_request: operation not supported error, dev loop0, sector 399872 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 4284.806477] blk_update_request: operation not supported error, dev loop0, sector 208 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 4284.806997] blk_update_request: operation not supported error, dev loop0, sector 160232 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 4284.807385] blk_update_request: operation not supported error, dev loop0, sector 160240 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 4284.807673] blk_update_request: operation not supported error, dev loop0, sector 160248 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 4284.807955] blk_update_request: operation not supported error, dev loop0, sector 160256 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 4284.808237] blk_update_request: operation not supported error, dev loop0, sector 160264 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 4284.808516] blk_update_request: operation not supported error, dev loop0, sector 160272 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 4284.808805] blk_update_request: operation not supported error, dev loop0, sector 160280 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 4284.809102] blk_update_request: operation not supported error, dev loop0, sector 160288 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 4285.880786] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 4287.740151] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 4287.765925] systemd[1]: tmp-mntQndc8Z.mount: Succeeded. [ 4289.004736] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 4289.018551] systemd[1]: tmp-mntItAJic.mount: Succeeded. [ 4290.760942] print_req_error: 8192 callbacks suppressed [ 4290.760945] blk_update_request: operation not supported error, dev loop4, sector 399872 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 4290.769712] blk_update_request: operation not supported error, dev loop4, sector 8224 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 4290.770410] blk_update_request: operation not supported error, dev loop4, sector 58264 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 4290.775129] blk_update_request: operation not supported error, dev loop4, sector 58272 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 4290.868276] LDISKFS-fs (loop4): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 4290.874491] systemd[1]: tmp-mntPjtYbp.mount: Succeeded. [ 4291.486178] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 4291.491263] systemd[1]: tmp-mntBjTVfC.mount: Succeeded. [ 4291.561867] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 4291.710392] Lustre: Setting parameter lustre-MDT0000.mdt.identity_upcall in log lustre-MDT0000 [ 4291.710606] Lustre: Skipped 2 previous similar messages [ 4291.723311] Lustre: ctl-lustre-MDT0000: No data found on store. Initialize space: rc = -61 [ 4291.768415] Lustre: lustre-MDT0000: new disk, initializing [ 4291.812040] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 4291.812331] Lustre: Skipped 1 previous similar message [ 4291.817233] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000200000400-0x0000000240000400]:0:mdt [ 4293.724809] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 4293.762391] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 4293.823188] Lustre: srv-lustre-MDT0001: No data found on store. Initialize space: rc = -61 [ 4293.823435] Lustre: Skipped 1 previous similar message [ 4293.856426] Lustre: lustre-MDT0001: new disk, initializing [ 4293.939818] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000240000400-0x0000000280000400]:1:mdt [ 4293.943365] Lustre: cli-ctl-lustre-MDT0001: Allocated super-sequence [0x0000000240000400-0x0000000280000400]:1:mdt] [ 4296.057089] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 4296.876632] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 4297.493645] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 4297.502201] systemd[1]: tmp-mntaWuvmy.mount: Succeeded. [ 4297.524549] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 4297.641647] Lustre: lustre-OST0000: new disk, initializing [ 4297.644722] Lustre: srv-lustre-OST0000: No data found on store. Initialize space: rc = -61 [ 4297.679795] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 4297.679976] Lustre: Skipped 1 previous similar message [ 4299.584866] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000280000400-0x00000002c0000400]:0:ost [ 4299.585775] Lustre: cli-lustre-OST0000-super: Allocated super-sequence [0x0000000280000400-0x00000002c0000400]:0:ost] [ 4299.788173] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 4299.936776] Lustre: Mounted lustre-client [ 4300.074804] Lustre: lustre-OST0000-osc-MDT0000: update sequence from 0x100000000 to 0x280000401 [ 4300.262132] LustreError: 204078:0:(fail.c:138:__cfs_fail_timeout_set()) cfs_fail_timeout id 142 sleeping for 10000ms [ 4302.845959] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 4302.892715] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 4302.955662] Lustre: lustre-OST0001: new disk, initializing [ 4302.963691] Lustre: srv-lustre-OST0001: No data found on store. 
Initialize space: rc = -61 [ 4305.028977] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0001-osc-[-0-9a-f]*.ost_server_uuid [ 4305.312919] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x00000002c0000400-0x0000000300000400]:1:ost [ 4305.320585] Lustre: cli-lustre-OST0001-super: Allocated super-sequence [0x00000002c0000400-0x0000000300000400]:1:ost] [ 4305.346123] Lustre: lustre-OST0001-osc-MDT0000: update sequence from 0x100010000 to 0x2c0000401 [ 4306.285079] Lustre: DEBUG MARKER: osc.lustre-OST0001-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 1 sec [ 4310.340052] LustreError: 204078:0:(fail.c:149:__cfs_fail_timeout_set()) cfs_fail_timeout id 142 awake [ 4310.352471] LustreError: 204078:0:(fail.c:138:__cfs_fail_timeout_set()) cfs_fail_timeout id 142 sleeping for 10000ms [ 4320.360055] LustreError: 204078:0:(fail.c:149:__cfs_fail_timeout_set()) cfs_fail_timeout id 142 awake [ 4320.470376] systemd[1]: mnt-lustre\x2dost2.mount: Succeeded. [ 4325.522552] LustreError: 11-0: lustre-OST0001-osc-MDT0001: operation ost_statfs to node 0@lo failed: rc = -107 [ 4325.523565] Lustre: lustre-OST0001-osc-MDT0000: Connection to lustre-OST0001 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 4325.527015] LustreError: Skipped 1 previous similar message [ 4325.527258] Lustre: lustre-OST0001: Not available for connect from 0@lo (stopping) [ 4325.529332] Lustre: Skipped 5 previous similar messages [ 4326.648670] Lustre: server umount lustre-OST0001 complete [ 4326.648859] Lustre: Skipped 1 previous similar message [ 4327.067066] Lustre: setting import lustre-MDT0000_UUID INACTIVE by administrator request [ 4327.088109] Lustre: Skipped 1 previous similar message [ 4330.561858] LustreError: 137-5: lustre-OST0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 4330.570481] LustreError: Skipped 1 previous similar message [ 4335.600686] LustreError: 137-5: lustre-OST0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 4335.608490] LustreError: Skipped 1 previous similar message [ 4337.522617] systemd[1]: mnt-lustre.mount: Succeeded. [ 4337.581550] Lustre: Unmounted lustre-client [ 4337.711860] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 4339.923699] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107 [ 4339.928295] Lustre: lustre-OST0000-osc-MDT0000: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 4339.928611] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 4339.928756] Lustre: Skipped 3 previous similar messages [ 4340.640997] LustreError: 137-5: lustre-OST0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 4340.646372] LustreError: Skipped 1 previous similar message [ 4343.920177] Lustre: server umount lustre-OST0000 complete [ 4344.317154] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. 
[ 4345.681008] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 4345.689425] Lustre: Skipped 1 previous similar message [ 4345.698621] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 4345.698773] Lustre: Skipped 2 previous similar messages [ 4350.720506] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 4350.728418] Lustre: Skipped 2 previous similar messages [ 4358.880053] Lustre: lustre-MDT0000 is waiting for obd_unlinked_exports more than 8 seconds. The obd refcount = 2. Is it stuck? [ 4358.920860] Lustre: server umount lustre-MDT0000 complete [ 4359.264860] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 4359.284243] LustreError: 204061:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679934687 with bad export cookie 15968255702199251629 [ 4359.284383] LustreError: 204061:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) Skipped 1 previous similar message [ 4359.284645] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 4369.250372] LNet: 205753:0:(lib-ptl.c:956:lnet_clear_lazy_portal()) Active lazy portal 0 on exit [ 4369.256724] LNet: Removed LNI 192.168.125.30@tcp [ 4370.399368] systemd-udevd[1035]: Specified user 'tss' unknown [ 4370.399601] systemd-udevd[1035]: Specified group 'tss' unknown [ 4370.553627] systemd-udevd[206096]: Using default interface naming scheme 'rhel-8.0'. [ 4370.947271] systemd[1]: usr-sbin-mount.lustre.mount: Succeeded. [ 4373.722644] LNet: HW NUMA nodes: 1, HW CPU cores: 2, npartitions: 1 [ 4374.564748] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing set_hostid [ 4374.836845] systemd-udevd[1035]: Specified user 'tss' unknown [ 4374.846981] systemd-udevd[1035]: Specified group 'tss' unknown [ 4374.901705] systemd-udevd[206930]: Using default interface naming scheme 'rhel-8.0'. 
[ 4375.623060] Lustre: Lustre: Build Version: 2.15.54 [ 4375.785305] LNet: Added LNI 192.168.125.30@tcp [8/256/0/180] [ 4375.785592] LNet: Accept secure, port 988 [ 4376.697042] Lustre: Echo OBD driver; http://www.lustre.org/ [ 4379.690926] blk_update_request: operation not supported error, dev loop0, sector 399872 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 4379.694422] blk_update_request: operation not supported error, dev loop0, sector 208 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 4379.695115] blk_update_request: operation not supported error, dev loop0, sector 160232 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 4379.697885] blk_update_request: operation not supported error, dev loop0, sector 160240 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 4379.698195] blk_update_request: operation not supported error, dev loop0, sector 160248 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 4379.698746] blk_update_request: operation not supported error, dev loop0, sector 160256 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 4379.699044] blk_update_request: operation not supported error, dev loop0, sector 160264 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 4379.703435] blk_update_request: operation not supported error, dev loop0, sector 160272 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 4379.703759] blk_update_request: operation not supported error, dev loop0, sector 160280 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 4379.704048] blk_update_request: operation not supported error, dev loop0, sector 160288 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 4380.626928] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 4380.644144] systemd[1]: tmp-mnthxk9UQ.mount: Succeeded. [ 4383.378230] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 4384.732077] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 4384.747256] systemd[1]: tmp-mnt73E6w1.mount: Succeeded. [ 4385.867691] print_req_error: 8192 callbacks suppressed [ 4385.867693] blk_update_request: operation not supported error, dev loop3, sector 399872 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 4385.868028] blk_update_request: operation not supported error, dev loop3, sector 8224 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 4385.868406] blk_update_request: operation not supported error, dev loop3, sector 58264 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 4385.872970] blk_update_request: operation not supported error, dev loop3, sector 58272 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 4385.980452] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 4386.494828] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 4386.504188] systemd[1]: tmp-mntS3IzPe.mount: Succeeded. [ 4386.521599] Lustre: lustre-MDT0000: mounting server target with '-t lustre' deprecated, use '-t lustre_tgt' [ 4386.535908] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 4387.716863] Lustre: Setting parameter lustre-MDT0000.mdt.identity_upcall in log lustre-MDT0000 [ 4387.726449] Lustre: ctl-lustre-MDT0000: No data found on store. 
Initialize space: rc = -61 [ 4387.758454] Lustre: lustre-MDT0000: new disk, initializing [ 4387.809805] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 4387.812591] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000200000400-0x0000000240000400]:0:mdt [ 4389.414078] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 4389.463207] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 4389.516397] Lustre: Setting parameter lustre-MDT0001.mdt.identity_upcall in log lustre-MDT0001 [ 4389.541627] Lustre: srv-lustre-MDT0001: No data found on store. Initialize space: rc = -61 [ 4389.541840] Lustre: Skipped 1 previous similar message [ 4389.579459] Lustre: lustre-MDT0001: new disk, initializing [ 4389.610392] Lustre: lustre-MDT0001: Imperative Recovery not enabled, recovery window 60-180 [ 4389.634431] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000240000400-0x0000000280000400]:1:mdt [ 4389.640339] Lustre: cli-ctl-lustre-MDT0001: Allocated super-sequence [0x0000000240000400-0x0000000280000400]:1:mdt] [ 4391.377465] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 4391.924201] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 4392.257248] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 4392.264898] systemd[1]: tmp-mntAKFpej.mount: Succeeded. [ 4392.282737] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 4392.404172] Lustre: lustre-OST0000: new disk, initializing [ 4392.408991] Lustre: srv-lustre-OST0000: No data found on store. Initialize space: rc = -61 [ 4392.439625] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 4392.816702] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000280000400-0x00000002c0000400]:0:ost [ 4392.821738] Lustre: cli-lustre-OST0000-super: Allocated super-sequence [0x0000000280000400-0x00000002c0000400]:0:ost] [ 4394.239918] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 4395.057149] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 40 [ 4395.166772] Lustre: DEBUG MARKER: os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 0 sec [ 4395.942257] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 40 [ 4396.057163] Lustre: DEBUG MARKER: os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec [ 4396.124559] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. 
[ 4397.840546] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation seq_query to node 0@lo failed: rc = -107
[ 4397.841777] Lustre: lustre-OST0000-osc-MDT0001: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[ 4397.847226] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping)
[ 4397.847531] Lustre: Skipped 1 previous similar message
[ 4402.880549] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping)
[ 4402.881936] Lustre: Skipped 1 previous similar message
[ 4407.921155] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping)
[ 4407.921912] Lustre: Skipped 1 previous similar message
[ 4410.720042] Lustre: lustre-OST0000 is waiting for obd_unlinked_exports more than 8 seconds. The obd refcount = 2. Is it stuck?
[ 4410.834628] Lustre: server umount lustre-OST0000 complete
[ 4411.111781] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded.
[ 4411.139171] LustreError: 209230:0:(fid_request.c:233:seq_client_alloc_seq()) cli-cli-lustre-OST0000-osc-MDT0000: Cannot allocate new meta-sequence: rc = -5
[ 4411.146827] LustreError: 209230:0:(fid_request.c:275:seq_client_get_seq()) cli-cli-lustre-OST0000-osc-MDT0000: Can't allocate new sequence: rc = -5
[ 4411.147039] LustreError: 209230:0:(osp_precreate.c:521:osp_precreate_rollover_new_seq()) lustre-OST0000-osc-MDT0000: alloc fid error: rc = -5
[ 4412.960949] Lustre: lustre-MDT0000-lwp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[ 4412.973911] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping)
[ 4412.974072] Lustre: Skipped 1 previous similar message
[ 4417.323034] Lustre: server umount lustre-MDT0000 complete
[ 4417.653783] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded.
[ 4417.697742] LustreError: 208566:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679934746 with bad export cookie 309763364771714214
[ 4417.698468] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail
[ 4418.697122] Lustre: DEBUG MARKER: == conf-sanity test 52: check recovering objects from lost+found ========================================================== 16:32:26 (1679934746)
[ 4419.141999] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 4419.350491] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
[ 4419.372076] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180
[ 4420.153296] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 4421.056784] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 4421.569483] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
[ 4421.974522] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[ 4423.047489] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
[ 4424.251765] Lustre: lustre-OST0000-osc-MDT0000: update sequence from 0x100000000 to 0x280000bd0
[ 4425.291597] Lustre: Mounted lustre-client
[ 4432.636881] systemd[1]: mnt-lustre.mount: Succeeded.
[ 4432.908811] Lustre: Unmounted lustre-client
[ 4433.021619] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded.
[ 4434.320418] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107
[ 4434.325551] LustreError: Skipped 1 previous similar message
[ 4434.325634] Lustre: lustre-OST0000-osc-MDT0000: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[ 4434.325843] Lustre: Skipped 1 previous similar message
[ 4434.328594] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping)
[ 4434.328754] Lustre: Skipped 1 previous similar message
[ 4435.361805] LustreError: 11-0: lustre-OST0000-osc-MDT0001: operation ost_statfs to node 0@lo failed: rc = -107
[ 4439.305962] Lustre: server umount lustre-OST0000 complete
[ 4439.308495] Lustre: Skipped 1 previous similar message
[ 4439.553494] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: (null)
[ 4439.992833] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded.
[ 4440.491003] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[ 4440.657577] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180
[ 4440.658596] Lustre: Skipped 2 previous similar messages
[ 4441.539063] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
[ 4441.683119] Lustre: Mounted lustre-client
[ 4442.879235] systemd[1]: mnt-lustre.mount: Succeeded.
[ 4443.165076] Lustre: Unmounted lustre-client
[ 4443.272933] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded.
[ 4449.518779] Lustre: server umount lustre-OST0000 complete
[ 4449.840321] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded.
[ 4449.896458] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[ 4449.897027] Lustre: Skipped 1 previous similar message
[ 4449.906725] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping)
[ 4449.906897] Lustre: Skipped 2 previous similar messages
[ 4456.109312] Lustre: server umount lustre-MDT0000 complete
[ 4456.352442] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded.
[ 4456.395798] LustreError: 209904:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679934784 with bad export cookie 309763364771714984
[ 4456.396203] LustreError: 209904:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) Skipped 4 previous similar messages
[ 4456.396520] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail
[ 4459.780747] LNet: 211622:0:(lib-ptl.c:956:lnet_clear_lazy_portal()) Active lazy portal 0 on exit
[ 4459.791565] LNet: Removed LNI 192.168.125.30@tcp
[ 4460.944048] systemd-udevd[1035]: Specified user 'tss' unknown
[ 4461.100869] systemd-udevd[1035]: Specified group 'tss' unknown
[ 4461.133143] systemd-udevd[211966]: Using default interface naming scheme 'rhel-8.0'.
[ 4461.481197] systemd[1]: usr-sbin-mount.lustre.mount: Succeeded.
[ 4461.997028] LNet: HW NUMA nodes: 1, HW CPU cores: 2, npartitions: 1
[ 4462.796626] Lustre: DEBUG MARKER: SKIP: conf-sanity test_53a skipping excluded test 53a (base 53)
[ 4462.907271] Lustre: DEBUG MARKER: SKIP: conf-sanity test_53b skipping excluded test 53b (base 53)
[ 4463.126993] Lustre: DEBUG MARKER: == conf-sanity test 54a: test llverdev and partial verify of device ========================================================== 16:33:11 (1679934791)
[ 4463.696226] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing run_llverdev /dev/mapper/ost1_flakey -p
[ 4465.515200] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing set_hostid
[ 4465.782320] systemd-udevd[1035]: Specified user 'tss' unknown
[ 4465.802487] systemd-udevd[1035]: Specified group 'tss' unknown
[ 4465.869255] systemd-udevd[212810]: Using default interface naming scheme 'rhel-8.0'.
[ 4466.796255] Lustre: Lustre: Build Version: 2.15.54
[ 4466.961190] LNet: Added LNI 192.168.125.30@tcp [8/256/0/180]
[ 4466.961396] LNet: Accept secure, port 988
[ 4467.754646] Lustre: Echo OBD driver; http://www.lustre.org/
[ 4470.686098] blk_update_request: operation not supported error, dev loop0, sector 399872 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4470.686561] blk_update_request: operation not supported error, dev loop0, sector 208 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4470.702970] blk_update_request: operation not supported error, dev loop0, sector 160232 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4470.703422] blk_update_request: operation not supported error, dev loop0, sector 160240 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4470.703702] blk_update_request: operation not supported error, dev loop0, sector 160248 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4470.705419] blk_update_request: operation not supported error, dev loop0, sector 160256 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4470.705705] blk_update_request: operation not supported error, dev loop0, sector 160264 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4470.705979] blk_update_request: operation not supported error, dev loop0, sector 160272 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4470.706261] blk_update_request: operation not supported error, dev loop0, sector 160280 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4470.706535] blk_update_request: operation not supported error, dev loop0, sector 160288 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4471.655437] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 4474.288149] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 4475.723694] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 4475.757785] systemd[1]: tmp-mnto4gKqf.mount: Succeeded.
[ 4477.380926] print_req_error: 8192 callbacks suppressed
[ 4477.380929] blk_update_request: operation not supported error, dev loop4, sector 399872 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4477.391237] blk_update_request: operation not supported error, dev loop4, sector 8224 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4477.391750] blk_update_request: operation not supported error, dev loop4, sector 58264 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4477.396353] blk_update_request: operation not supported error, dev loop4, sector 58272 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4477.502650] LDISKFS-fs (loop4): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 4477.519853] systemd[1]: tmp-mnt0Temym.mount: Succeeded.
[ 4478.203985] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 4478.241513] Lustre: lustre-MDT0000: mounting server target with '-t lustre' deprecated, use '-t lustre_tgt'
[ 4478.248515] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 4479.429699] Lustre: Setting parameter lustre-MDT0000.mdt.identity_upcall in log lustre-MDT0000
[ 4479.442454] Lustre: ctl-lustre-MDT0000: No data found on store. Initialize space: rc = -61
[ 4479.486557] Lustre: lustre-MDT0000: new disk, initializing
[ 4479.543647] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180
[ 4479.546619] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000200000400-0x0000000240000400]:0:mdt
[ 4481.437491] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 4481.453582] systemd[1]: tmp-mntwSJRUv.mount: Succeeded.
[ 4481.482436] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 4481.512944] Lustre: Setting parameter lustre-MDT0001.mdt.identity_upcall in log lustre-MDT0001
[ 4481.523618] Lustre: srv-lustre-MDT0001: No data found on store. Initialize space: rc = -61
[ 4481.523797] Lustre: Skipped 1 previous similar message
[ 4481.559179] Lustre: lustre-MDT0001: new disk, initializing
[ 4481.600468] Lustre: lustre-MDT0001: Imperative Recovery not enabled, recovery window 60-180
[ 4481.614699] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000240000400-0x0000000280000400]:1:mdt
[ 4481.615065] Lustre: cli-ctl-lustre-MDT0001: Allocated super-sequence [0x0000000240000400-0x0000000280000400]:1:mdt]
[ 4483.646457] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 4484.344187] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
[ 4484.886225] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 4484.932449] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[ 4485.069033] Lustre: lustre-OST0000: new disk, initializing
[ 4485.069530] Lustre: srv-lustre-OST0000: No data found on store. Initialize space: rc = -61
[ 4485.101463] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180
[ 4487.066076] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
[ 4487.705450] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 40
[ 4488.735990] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000280000400-0x00000002c0000400]:0:ost
[ 4488.740207] Lustre: cli-lustre-OST0000-super: Allocated super-sequence [0x0000000280000400-0x00000002c0000400]:0:ost]
[ 4488.814319] Lustre: lustre-OST0000-osc-MDT0000: update sequence from 0x100000000 to 0x280000401
[ 4489.027848] Lustre: DEBUG MARKER: os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 1 sec
[ 4489.806046] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 40
[ 4489.985533] Lustre: DEBUG MARKER: os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec
[ 4490.113071] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded.
[ 4493.762255] Lustre: lustre-OST0000-osc-MDT0001: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[ 4493.770229] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107
[ 4493.770354] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping)
[ 4493.774947] Lustre: Skipped 1 previous similar message
[ 4496.415603] Lustre: server umount lustre-OST0000 complete
[ 4496.810589] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded.
[ 4498.800889] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[ 4498.806069] Lustre: Skipped 1 previous similar message
[ 4498.809105] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping)
[ 4498.813080] Lustre: Skipped 1 previous similar message
[ 4503.093557] Lustre: server umount lustre-MDT0000 complete
[ 4503.470598] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded.
[ 4503.501306] LustreError: 214468:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679934831 with bad export cookie 13573878979477009973
[ 4503.519396] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail
[ 4504.498762] Lustre: DEBUG MARKER: == conf-sanity test 54b: test llverfs and partial verify of filesystem ========================================================== 16:33:52 (1679934832)
[ 4504.971229] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 4505.248279] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
[ 4505.261860] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180
[ 4506.147216] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 4507.150760] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 4507.757139] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
[ 4508.107708] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[ 4508.922719] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
[ 4511.215166] Lustre: Mounted lustre-client
[ 4512.938979] systemd[1]: mnt-lustre.mount: Succeeded.
[ 4513.115360] Lustre: Unmounted lustre-client
[ 4513.275757] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded.
[ 4515.200441] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107
[ 4515.206941] Lustre: lustre-OST0000-osc-MDT0000: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[ 4515.207248] Lustre: Skipped 1 previous similar message
[ 4515.211987] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping)
[ 4516.242377] LustreError: 11-0: lustre-OST0000-osc-MDT0001: operation ost_statfs to node 0@lo failed: rc = -107
[ 4519.517200] Lustre: server umount lustre-OST0000 complete
[ 4519.525749] Lustre: Skipped 1 previous similar message
[ 4519.764562] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded.
[ 4521.280968] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[ 4521.285704] Lustre: Skipped 1 previous similar message
[ 4521.286046] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping)
[ 4521.286266] Lustre: Skipped 3 previous similar messages
[ 4526.029135] Lustre: server umount lustre-MDT0000 complete
[ 4526.332096] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
[ 4526.347235] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded.
[ 4526.385579] LustreError: 215817:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679934854 with bad export cookie 13573878979477010743
[ 4526.386204] LustreError: 215817:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) Skipped 4 previous similar messages
[ 4526.390071] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail
[ 4530.140375] LNet: 217047:0:(lib-ptl.c:956:lnet_clear_lazy_portal()) Active lazy portal 0 on exit
[ 4530.140791] LNet: Removed LNI 192.168.125.30@tcp
[ 4531.424922] systemd-udevd[1035]: Specified user 'tss' unknown
[ 4531.430948] systemd-udevd[1035]: Specified group 'tss' unknown
[ 4531.596302] systemd-udevd[217323]: Using default interface naming scheme 'rhel-8.0'.
[ 4532.195827] systemd[1]: usr-sbin-mount.lustre.mount: Succeeded.
[ 4532.929847] LNet: HW NUMA nodes: 1, HW CPU cores: 2, npartitions: 1
[ 4533.795998] Lustre: DEBUG MARKER: == conf-sanity test 55: check lov_objid size ============= 16:34:21 (1679934861)
[ 4535.065569] blk_update_request: operation not supported error, dev loop0, sector 399872 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4535.066050] blk_update_request: operation not supported error, dev loop0, sector 208 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4535.066644] blk_update_request: operation not supported error, dev loop0, sector 160232 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4535.067125] blk_update_request: operation not supported error, dev loop0, sector 160240 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4535.067494] blk_update_request: operation not supported error, dev loop0, sector 160248 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4535.067852] blk_update_request: operation not supported error, dev loop0, sector 160256 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4535.068213] blk_update_request: operation not supported error, dev loop0, sector 160264 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4535.068577] blk_update_request: operation not supported error, dev loop0, sector 160272 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4535.068936] blk_update_request: operation not supported error, dev loop0, sector 160280 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4535.069297] blk_update_request: operation not supported error, dev loop0, sector 160288 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4535.902726] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 4536.691222] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 4537.089811] systemd-udevd[1035]: Specified user 'tss' unknown
[ 4537.104066] systemd-udevd[1035]: Specified group 'tss' unknown
[ 4537.135228] systemd-udevd[218280]: Using default interface naming scheme 'rhel-8.0'.
[ 4537.729660] Lustre: Lustre: Build Version: 2.15.54
[ 4537.839510] LNet: Added LNI 192.168.125.30@tcp [8/256/0/180]
[ 4537.839786] LNet: Accept secure, port 988
[ 4538.579638] Lustre: Echo OBD driver; http://www.lustre.org/
[ 4540.116370] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 4540.135261] systemd[1]: tmp-mntPMDP9o.mount: Succeeded.
[ 4540.161774] Lustre: lustre-MDT0000: mounting server target with '-t lustre' deprecated, use '-t lustre_tgt'
[ 4540.173264] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 4541.364861] Lustre: Setting parameter lustre-MDT0000.mdt.identity_upcall in log lustre-MDT0000
[ 4541.374443] Lustre: ctl-lustre-MDT0000: No data found on store. Initialize space: rc = -61
[ 4541.429677] Lustre: lustre-MDT0000: new disk, initializing
[ 4541.469743] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180
[ 4541.476500] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000200000400-0x0000000240000400]:0:mdt
[ 4543.315429] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 4543.332126] LustreError: 13b-9: lustre-MDT0001 claims to have registered, but this MGS does not know about it, preventing registration.
[ 4543.332742] LustreError: 160-7: lustre-MDT0001: the MGS refuses to allow this server to start: rc = -2. Please see messages on the MGS.
[ 4543.333088] LustreError: 219072:0:(tgt_mount.c:2081:server_fill_super()) Unable to start targets: -2
[ 4543.333275] LustreError: 219072:0:(tgt_mount.c:1669:server_put_super()) no obd lustre-MDT0001
[ 4543.333414] LustreError: 219072:0:(tgt_mount.c:132:server_deregister_mount()) lustre-MDT0001 not registered
[ 4543.334959] Lustre: server umount lustre-MDT0001 complete
[ 4543.335062] LustreError: 219072:0:(super25.c:188:lustre_fill_super()) llite: Unable to mount : rc = -2
[ 4543.738310] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 4543.783359] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[ 4543.893335] Lustre: lustre-OST03ff: new disk, initializing
[ 4543.893862] Lustre: srv-lustre-OST03ff: No data found on store. Initialize space: rc = -61
[ 4543.895747] Lustre: Skipped 1 previous similar message
[ 4543.924892] Lustre: lustre-OST03ff: Imperative Recovery not enabled, recovery window 60-180
[ 4544.849167] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST03ff-osc-[-0-9a-f]*.ost_server_uuid
[ 4551.543318] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000240000400-0x0000000280000400]:3ff:ost
[ 4551.543619] Lustre: cli-lustre-OST03ff-super: Allocated super-sequence [0x0000000240000400-0x0000000280000400]:3ff:ost]
[ 4551.550266] Lustre: lustre-OST03ff-osc-MDT0000: update sequence from 0x103ff0000 to 0x240000400
[ 4556.569769] Lustre: Mounted lustre-client
[ 4556.891616] systemd[1]: mnt-lustre.mount: Succeeded.
[ 4556.933695] Lustre: Unmounted lustre-client
[ 4557.062687] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded.
[ 4561.600956] Lustre: lustre-MDT0000-lwp-OST03ff: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[ 4561.601366] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping)
[ 4563.298975] Lustre: server umount lustre-MDT0000 complete
[ 4563.817667] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded.
[ 4563.856332] LustreError: 218871:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679934892 with bad export cookie 17605237257124737240
[ 4563.856832] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail
[ 4564.798938] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 4564.966516] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180
[ 4565.530678] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 4565.547344] LustreError: 13b-9: lustre-MDT0001 claims to have registered, but this MGS does not know about it, preventing registration.
[ 4565.547804] LustreError: 160-7: lustre-MDT0001: the MGS refuses to allow this server to start: rc = -2. Please see messages on the MGS.
[ 4565.548142] LustreError: 219888:0:(tgt_mount.c:2081:server_fill_super()) Unable to start targets: -2
[ 4565.548334] LustreError: 219888:0:(tgt_mount.c:1669:server_put_super()) no obd lustre-MDT0001
[ 4565.548465] LustreError: 219888:0:(tgt_mount.c:132:server_deregister_mount()) lustre-MDT0001 not registered
[ 4565.549728] Lustre: server umount lustre-MDT0001 complete
[ 4565.549827] Lustre: Skipped 1 previous similar message
[ 4565.549906] LustreError: 219888:0:(super25.c:188:lustre_fill_super()) llite: Unable to mount : rc = -2
[ 4565.975996] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[ 4566.948386] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST03ff-osc-[-0-9a-f]*.ost_server_uuid
[ 4575.208758] Lustre: Mounted lustre-client
[ 4575.767693] systemd[1]: mnt-lustre.mount: Succeeded.
[ 4575.845409] Lustre: Unmounted lustre-client
[ 4576.109782] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded.
[ 4580.240851] Lustre: lustre-MDT0000-lwp-OST03ff: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[ 4580.248280] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping)
[ 4582.427711] Lustre: server umount lustre-MDT0000 complete
[ 4583.136610] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded.
[ 4583.178511] LustreError: 219702:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679934911 with bad export cookie 17605237257124737842
[ 4583.178784] LustreError: 219702:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) Skipped 2 previous similar messages
[ 4583.179131] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail
[ 4585.352256] print_req_error: 4093 callbacks suppressed
[ 4585.352258] blk_update_request: operation not supported error, dev loop0, sector 399872 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4585.352858] blk_update_request: operation not supported error, dev loop0, sector 208 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4585.353451] blk_update_request: operation not supported error, dev loop0, sector 160232 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4585.353913] blk_update_request: operation not supported error, dev loop0, sector 160240 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4585.354276] blk_update_request: operation not supported error, dev loop0, sector 160248 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4585.354633] blk_update_request: operation not supported error, dev loop0, sector 160256 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4585.354991] blk_update_request: operation not supported error, dev loop0, sector 160264 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4585.355351] blk_update_request: operation not supported error, dev loop0, sector 160272 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4585.355704] blk_update_request: operation not supported error, dev loop0, sector 160280 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4585.356072] blk_update_request: operation not supported error, dev loop0, sector 160288 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4586.482685] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 4588.071833] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 4588.816544] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 4588.827264] systemd[1]: tmp-mntaDAzMJ.mount: Succeeded.
[ 4588.862629] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 4589.066234] Lustre: Setting parameter lustre-MDT0000.mdt.identity_upcall in log lustre-MDT0000
[ 4589.088548] Lustre: ctl-lustre-MDT0000: No data found on store. Initialize space: rc = -61
[ 4589.165436] Lustre: lustre-MDT0000: new disk, initializing
[ 4589.230807] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180
[ 4589.231191] Lustre: Skipped 1 previous similar message
[ 4589.235732] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000200000400-0x0000000240000400]:0:mdt
[ 4591.070745] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 4591.090334] LustreError: 13b-9: lustre-MDT0001 claims to have registered, but this MGS does not know about it, preventing registration.
[ 4591.126537] LustreError: 160-7: lustre-MDT0001: the MGS refuses to allow this server to start: rc = -2. Please see messages on the MGS.
[ 4591.127008] LustreError: 221165:0:(tgt_mount.c:2081:server_fill_super()) Unable to start targets: -2
[ 4591.127199] LustreError: 221165:0:(tgt_mount.c:1669:server_put_super()) no obd lustre-MDT0001
[ 4591.127327] LustreError: 221165:0:(tgt_mount.c:132:server_deregister_mount()) lustre-MDT0001 not registered
[ 4591.129887] Lustre: server umount lustre-MDT0001 complete
[ 4591.129998] Lustre: Skipped 1 previous similar message
[ 4591.130096] LustreError: 221165:0:(super25.c:188:lustre_fill_super()) llite: Unable to mount : rc = -2
[ 4591.488114] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 4591.496647] systemd[1]: tmp-mnt8OFKq9.mount: Succeeded.
[ 4591.542789] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[ 4591.738976] Lustre: lustre-OST0800: new disk, initializing
[ 4591.739523] Lustre: srv-lustre-OST0800: No data found on store. Initialize space: rc = -61
[ 4591.739649] Lustre: Skipped 1 previous similar message
[ 4594.202760] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0800-osc-[-0-9a-f]*.ost_server_uuid
[ 4598.260799] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000240000400-0x0000000280000400]:800:ost
[ 4598.261186] Lustre: cli-lustre-OST0800-super: Allocated super-sequence [0x0000000240000400-0x0000000280000400]:800:ost]
[ 4598.273856] Lustre: lustre-OST0800-osc-MDT0000: update sequence from 0x108000000 to 0x240000400
[ 4603.288825] Lustre: Mounted lustre-client
[ 4603.596521] systemd[1]: mnt-lustre.mount: Succeeded.
[ 4603.650904] Lustre: Unmounted lustre-client
[ 4603.765548] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded.
[ 4608.320622] Lustre: lustre-MDT0000-lwp-OST0800: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[ 4608.326591] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping)
[ 4610.070355] Lustre: server umount lustre-MDT0000 complete
[ 4610.806020] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded.
[ 4610.862600] LustreError: 220964:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679934939 with bad export cookie 17605237257124738381
[ 4610.868243] LustreError: 220964:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) Skipped 1 previous similar message
[ 4610.868685] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail
[ 4610.871689] LustreError: 218421:0:(client.c:1255:ptlrpc_import_delay_req()) @@@ IMP_CLOSED req@0000000092ecb884 x1761539382073024/t0(0) o250->MGC192.168.125.30@tcp@0@lo:26/25 lens 520/544 e 0 to 0 dl 0 ref 1 fl Rpc:NQU/0/ffffffff rc 0/-1 job:''
[ 4612.169715] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 4612.460349] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180
[ 4612.460515] Lustre: Skipped 1 previous similar message
[ 4613.207325] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 4613.229522] LustreError: 13b-9: lustre-MDT0001 claims to have registered, but this MGS does not know about it, preventing registration.
[ 4613.230014] LustreError: 160-7: lustre-MDT0001: the MGS refuses to allow this server to start: rc = -2. Please see messages on the MGS.
[ 4613.230452] LustreError: 221989:0:(tgt_mount.c:2081:server_fill_super()) Unable to start targets: -2
[ 4613.230641] LustreError: 221989:0:(tgt_mount.c:1669:server_put_super()) no obd lustre-MDT0001
[ 4613.230773] LustreError: 221989:0:(tgt_mount.c:132:server_deregister_mount()) lustre-MDT0001 not registered
[ 4613.240902] LustreError: 221989:0:(super25.c:188:lustre_fill_super()) llite: Unable to mount : rc = -2
[ 4613.740465] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[ 4614.754575] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0800-osc-[-0-9a-f]*.ost_server_uuid
[ 4620.013161] Lustre: Mounted lustre-client
[ 4620.556392] systemd[1]: mnt-lustre.mount: Succeeded.
[ 4620.592165] Lustre: Unmounted lustre-client
[ 4620.788424] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded.
[ 4625.040872] Lustre: lustre-MDT0000-lwp-OST0800: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[ 4625.041304] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping)
[ 4627.097872] Lustre: server umount lustre-MDT0000 complete
[ 4627.098033] Lustre: Skipped 2 previous similar messages
[ 4627.671815] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded.
[ 4627.690455] LustreError: 221803:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679934956 with bad export cookie 17605237257124738983
[ 4627.692924] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail
[ 4627.696913] LustreError: 221803:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) Skipped 3 previous similar messages
[ 4630.738947] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing set_hostid
[ 4630.962349] systemd-udevd[1035]: Specified user 'tss' unknown
[ 4630.986716] systemd-udevd[1035]: Specified group 'tss' unknown
[ 4631.105166] systemd-udevd[223104]: Using default interface naming scheme 'rhel-8.0'.
[ 4634.614689] print_req_error: 4093 callbacks suppressed
[ 4634.614692] blk_update_request: operation not supported error, dev loop0, sector 399872 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4634.615155] blk_update_request: operation not supported error, dev loop0, sector 208 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4634.631887] blk_update_request: operation not supported error, dev loop0, sector 160232 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4634.632308] blk_update_request: operation not supported error, dev loop0, sector 160240 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4634.632554] blk_update_request: operation not supported error, dev loop0, sector 160248 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4634.632800] blk_update_request: operation not supported error, dev loop0, sector 160256 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4634.633037] blk_update_request: operation not supported error, dev loop0, sector 160264 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4634.633274] blk_update_request: operation not supported error, dev loop0, sector 160272 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4634.633513] blk_update_request: operation not supported error, dev loop0, sector 160280 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4634.633757] blk_update_request: operation not supported error, dev loop0, sector 160288 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4635.484433] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 4638.378466] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 4638.390918] systemd[1]: tmp-mntVc7TT4.mount: Succeeded.
[ 4639.958495] print_req_error: 8188 callbacks suppressed
[ 4639.958498] blk_update_request: operation not supported error, dev loop2, sector 399872 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4639.958964] blk_update_request: operation not supported error, dev loop2, sector 8224 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4639.959461] blk_update_request: operation not supported error, dev loop2, sector 58264 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4639.966013] blk_update_request: operation not supported error, dev loop2, sector 58272 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4640.100140] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 4640.111189] systemd[1]: tmp-mntXEtmIe.mount: Succeeded.
[ 4642.057703] blk_update_request: operation not supported error, dev loop4, sector 399872 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4642.058076] blk_update_request: operation not supported error, dev loop4, sector 8224 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4642.061990] blk_update_request: operation not supported error, dev loop4, sector 58264 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4642.066567] blk_update_request: operation not supported error, dev loop4, sector 58272 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4642.135743] LDISKFS-fs (loop4): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 4642.916882] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 4642.924360] systemd[1]: tmp-mntY1fhTB.mount: Succeeded.
[ 4642.962260] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 4643.181768] Lustre: Setting parameter lustre-MDT0000.mdt.identity_upcall in log lustre-MDT0000
[ 4643.199942] Lustre: ctl-lustre-MDT0000: No data found on store. Initialize space: rc = -61
[ 4643.248439] Lustre: lustre-MDT0000: new disk, initializing
[ 4643.315660] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180
[ 4643.315820] Lustre: Skipped 1 previous similar message
[ 4643.318707] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000200000400-0x0000000240000400]:0:mdt
[ 4645.407453] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 4645.453137] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 4645.495328] Lustre: Setting parameter lustre-MDT0001.mdt.identity_upcall in log lustre-MDT0001
[ 4645.746559] Lustre: cli-ctl-lustre-MDT0001: Allocated super-sequence [0x0000000240000400-0x0000000280000400]:1:mdt]
[ 4648.339304] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 4649.007957] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
[ 4649.473171] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 4649.492147] systemd[1]: tmp-mntrO4qzF.mount: Succeeded.
[ 4649.523431] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[ 4651.735601] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
[ 4652.507547] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 40
[ 4652.765393] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000280000400-0x00000002c0000400]:0:ost
[ 4652.765621] Lustre: Skipped 1 previous similar message
[ 4652.766506] Lustre: cli-lustre-OST0000-super: Allocated super-sequence [0x0000000280000400-0x00000002c0000400]:0:ost]
[ 4652.820132] Lustre: lustre-OST0000-osc-MDT0000: update sequence from 0x100000000 to 0x280000401
[ 4653.770853] Lustre: DEBUG MARKER: os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 1 sec
[ 4654.680994] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 40
[ 4654.861101] Lustre: DEBUG MARKER: os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec
[ 4654.978769] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded.
[ 4657.840379] LustreError: 11-0: lustre-OST0000-osc-MDT0001: operation ost_statfs to node 0@lo failed: rc = -107
[ 4657.840765] Lustre: lustre-OST0000-osc-MDT0000: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[ 4657.840933] LustreError: Skipped 1 previous similar message
[ 4657.841728] Lustre: Skipped 1 previous similar message
[ 4657.842265] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping)
[ 4657.848629] Lustre: Skipped 1 previous similar message
[ 4661.230668] Lustre: server umount lustre-OST0000 complete
[ 4661.235332] Lustre: Skipped 1 previous similar message
[ 4661.644455] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded.
[ 4667.940474] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
[ 4668.309835] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded.
[ 4668.349905] LustreError: 224479:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679934996 with bad export cookie 17605237257124739536
[ 4668.353533] LustreError: 224479:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) Skipped 1 previous similar message
[ 4668.354157] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail
[ 4669.528244] Lustre: DEBUG MARKER: == conf-sanity test 56a: check big OST indexes and out-of-index-order start ========================================================== 16:36:37 (1679934997)
[ 4671.599939] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing set_hostid
[ 4671.980167] systemd-udevd[1035]: Specified user 'tss' unknown
[ 4672.000196] systemd-udevd[1035]: Specified group 'tss' unknown
[ 4672.062689] systemd-udevd[226310]: Using default interface naming scheme 'rhel-8.0'.
[ 4675.785080] blk_update_request: operation not supported error, dev loop0, sector 399872 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4675.785571] blk_update_request: operation not supported error, dev loop0, sector 208 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4675.786156] blk_update_request: operation not supported error, dev loop0, sector 160232 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4675.786630] blk_update_request: operation not supported error, dev loop0, sector 160240 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4675.786990] blk_update_request: operation not supported error, dev loop0, sector 160248 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4675.787306] blk_update_request: operation not supported error, dev loop0, sector 160256 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4675.787619] blk_update_request: operation not supported error, dev loop0, sector 160264 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4675.787929] blk_update_request: operation not supported error, dev loop0, sector 160272 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4675.788232] blk_update_request: operation not supported error, dev loop0, sector 160280 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4675.788549] blk_update_request: operation not supported error, dev loop0, sector 160288 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4676.528752] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 4676.546306] systemd[1]: tmp-mntQfJOiU.mount: Succeeded.
[ 4678.811921] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 4680.157698] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 4680.165247] systemd[1]: tmp-mnt2HEsV3.mount: Succeeded.
[ 4681.941644] print_req_error: 8192 callbacks suppressed
[ 4681.941646] blk_update_request: operation not supported error, dev loop4, sector 399872 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4681.955615] blk_update_request: operation not supported error, dev loop4, sector 8224 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4681.956253] blk_update_request: operation not supported error, dev loop4, sector 58264 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4681.961051] blk_update_request: operation not supported error, dev loop4, sector 58272 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4682.066443] LDISKFS-fs (loop4): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 4682.073831] systemd[1]: tmp-mntvzKGcf.mount: Succeeded.
[ 4683.470884] blk_update_request: operation not supported error, dev loop2, sector 399872 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4683.471569] blk_update_request: operation not supported error, dev loop2, sector 8224 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4683.472276] blk_update_request: operation not supported error, dev loop2, sector 58264 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4683.478494] blk_update_request: operation not supported error, dev loop2, sector 58272 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4683.569248] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 4685.560925] blk_update_request: operation not supported error, dev loop4, sector 399872 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4685.567859] blk_update_request: operation not supported error, dev loop4, sector 8224 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4685.726133] LDISKFS-fs (loop4): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 4686.639538] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 4686.652652] systemd[1]: tmp-mnthr6jjn.mount: Succeeded.
[ 4686.692727] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 4686.820590] Lustre: Setting parameter lustre-MDT0000.mdt.identity_upcall in log lustre-MDT0000
[ 4686.831108] Lustre: ctl-lustre-MDT0000: No data found on store. Initialize space: rc = -61
[ 4686.831301] Lustre: Skipped 3 previous similar messages
[ 4686.859834] Lustre: lustre-MDT0000: new disk, initializing
[ 4686.859986] Lustre: Skipped 2 previous similar messages
[ 4686.912174] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180
[ 4686.912470] Lustre: Skipped 2 previous similar messages
[ 4686.916353] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000200000400-0x0000000240000400]:0:mdt
[ 4688.843949] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 4688.902225] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 4689.143646] Lustre: cli-ctl-lustre-MDT0001: Allocated super-sequence [0x0000000240000400-0x0000000280000400]:1:mdt]
[ 4691.262786] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 4691.913033] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
[ 4692.335153] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 4692.343104] systemd[1]: tmp-mntaAciWw.mount: Succeeded.
[ 4692.383809] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[ 4694.912935] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST2710-osc-[-0-9a-f]*.ost_server_uuid
[ 4695.525867] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 4695.548748] systemd[1]: tmp-mntWjIvFN.mount: Succeeded.
[ 4695.578839] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[ 4696.745829] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST03e8-osc-[-0-9a-f]*.ost_server_uuid
[ 4696.986006] Lustre: Mounted lustre-client
[ 4697.166195] Lustre: lustre-OST2710-osc-MDT0000: update sequence from 0x127100000 to 0x280000401
[ 4697.761434] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST2710-osc-MDT0000.ost_server_uuid 40
[ 4697.888076] Lustre: DEBUG MARKER: os[cp].lustre-OST2710-osc-MDT0000.ost_server_uuid in FULL state after 0 sec
[ 4698.577387] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST2710-osc-MDT0001.ost_server_uuid 40
[ 4698.731872] Lustre: DEBUG MARKER: os[cp].lustre-OST2710-osc-MDT0001.ost_server_uuid in FULL state after 0 sec
[ 4699.406876] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST03e8-osc-MDT0000.ost_server_uuid 40
[ 4703.859030] Lustre: cli-lustre-OST03e8-super: Allocated super-sequence [0x00000002c0000400-0x0000000300000400]:3e8:ost]
[ 4703.859260] Lustre: Skipped 1 previous similar message
[ 4703.893398] Lustre: lustre-OST03e8-osc-MDT0000: update sequence from 0x103e80000 to 0x2c0000401
[ 4704.763838] Lustre: DEBUG MARKER: os[cp].lustre-OST03e8-osc-MDT0000.ost_server_uuid in FULL state after 5 sec
[ 4705.470753] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST03e8-osc-MDT0001.ost_server_uuid 40
[ 4705.633314] Lustre: DEBUG MARKER: os[cp].lustre-OST03e8-osc-MDT0001.ost_server_uuid in FULL state after 0 sec
[ 4705.957907] systemd[1]: mnt-lustre.mount: Succeeded.
[ 4706.038759] Lustre: Unmounted lustre-client
[ 4706.232364] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded.
[ 4708.882616] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[ 4708.884984] Lustre: Skipped 2 previous similar messages
[ 4708.886620] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping)
[ 4708.886795] Lustre: Skipped 2 previous similar messages
[ 4712.688806] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded.
[ 4712.722091] LustreError: 228154:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679935041 with bad export cookie 17605237257124740306
[ 4712.726360] LustreError: 228154:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) Skipped 4 previous similar messages
[ 4712.730262] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail
[ 4713.333840] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded.
[ 4719.440070] Lustre: 229961:0:(client.c:2305:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1679935041/real 1679935041] req@00000000d7341061 x1761539382116800/t0(0) o39->lustre-MDT0001-lwp-OST2710@0@lo:12/10 lens 224/224 e 0 to 1 dl 1679935047 ref 2 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:''
[ 4719.815337] systemd[1]: mnt-lustre\x2dost2.mount: Succeeded.
[ 4725.920098] Lustre: 229999:0:(client.c:2305:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1679935048/real 1679935048] req@00000000b40bbde5 x1761539382117248/t0(0) o39->lustre-MDT0001-lwp-OST03e8@0@lo:12/10 lens 224/224 e 0 to 1 dl 1679935054 ref 2 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:''
[ 4726.034704] Lustre: server umount lustre-OST03e8 complete
[ 4726.034874] Lustre: Skipped 5 previous similar messages
[ 4728.284517] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing set_hostid
[ 4728.710704] systemd-udevd[1035]: Specified user 'tss' unknown
[ 4728.810176] systemd-udevd[1035]: Specified group 'tss' unknown
[ 4728.914293] systemd-udevd[230629]: Using default interface naming scheme 'rhel-8.0'.
[ 4733.504959] print_req_error: 2 callbacks suppressed
[ 4733.504962] blk_update_request: operation not supported error, dev loop0, sector 399872 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4733.505435] blk_update_request: operation not supported error, dev loop0, sector 208 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4733.506006] blk_update_request: operation not supported error, dev loop0, sector 160232 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4733.506401] blk_update_request: operation not supported error, dev loop0, sector 160240 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4733.506682] blk_update_request: operation not supported error, dev loop0, sector 160248 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4733.506971] blk_update_request: operation not supported error, dev loop0, sector 160256 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4733.507729] blk_update_request: operation not supported error, dev loop0, sector 160264 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4733.508364] blk_update_request: operation not supported error, dev loop0, sector 160272 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4733.508735] blk_update_request: operation not supported error, dev loop0, sector 160280 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4733.509390] blk_update_request: operation not supported error, dev loop0, sector 160288 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4734.973273] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 4734.996694] systemd[1]: tmp-mntYHToci.mount: Succeeded.
[ 4738.304939] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 4738.326493] systemd[1]: tmp-mntgeTeog.mount: Succeeded.
[ 4739.936903] print_req_error: 8188 callbacks suppressed
[ 4739.936906] blk_update_request: operation not supported error, dev loop2, sector 399872 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4739.942439] blk_update_request: operation not supported error, dev loop2, sector 8224 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4739.945208] blk_update_request: operation not supported error, dev loop2, sector 58264 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4739.949818] blk_update_request: operation not supported error, dev loop2, sector 58272 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4740.006411] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 4740.023714] systemd[1]: tmp-mntzPr0cs.mount: Succeeded.
[ 4741.670769] blk_update_request: operation not supported error, dev loop3, sector 399872 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4741.671753] blk_update_request: operation not supported error, dev loop3, sector 8224 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4741.672523] blk_update_request: operation not supported error, dev loop3, sector 58264 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4741.677583] blk_update_request: operation not supported error, dev loop3, sector 58272 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 4741.804393] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 4742.499421] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 4742.507027] systemd[1]: tmp-mntxg6AEG.mount: Succeeded.
[ 4742.544605] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 4742.707475] Lustre: Setting parameter lustre-MDT0000.mdt.identity_upcall in log lustre-MDT0000
[ 4742.707672] Lustre: Skipped 1 previous similar message
[ 4742.726688] Lustre: ctl-lustre-MDT0000: No data found on store. Initialize space: rc = -61
[ 4742.726854] Lustre: Skipped 4 previous similar messages
[ 4742.777738] Lustre: lustre-MDT0000: new disk, initializing
[ 4742.792703] Lustre: Skipped 3 previous similar messages
[ 4742.845052] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180
[ 4742.845233] Lustre: Skipped 3 previous similar messages
[ 4742.849103] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000200000400-0x0000000240000400]:0:mdt
[ 4742.849293] Lustre: Skipped 3 previous similar messages
[ 4744.772985] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 4744.793610] systemd[1]: tmp-mntFBs9MC.mount: Succeeded.
[ 4744.825142] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 4745.025093] Lustre: cli-ctl-lustre-MDT0001: Allocated super-sequence [0x0000000240000400-0x0000000280000400]:1:mdt]
[ 4747.088592] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 4747.985773] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
[ 4748.441393] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 4748.452928] systemd[1]: tmp-mntWFkaAG.mount: Succeeded.
[ 4748.512449] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[ 4751.013098] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
[ 4751.863100] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 40
[ 4756.367253] Lustre: lustre-OST0000-osc-MDT0000: update sequence from 0x100000000 to 0x280000401
[ 4757.296006] Lustre: DEBUG MARKER: os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 5 sec
[ 4757.976304] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 40
[ 4758.108441] Lustre: DEBUG MARKER: os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec
[ 4758.199824] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded.
[ 4761.361865] Lustre: lustre-OST0000-osc-MDT0000: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[ 4761.367672] Lustre: Skipped 3 previous similar messages
[ 4761.373322] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping)
[ 4761.373490] Lustre: Skipped 3 previous similar messages
[ 4764.773604] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded.
[ 4765.120358] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107
[ 4771.377843] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded.
[ 4771.414613] LustreError: 231991:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679935099 with bad export cookie 17605237257124742056
[ 4771.416618] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail
[ 4771.416918] LustreError: 231991:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) Skipped 3 previous similar messages
[ 4772.473167] Lustre: DEBUG MARKER: == conf-sanity test 56b: test target_obd correctness with nonconsecutive MDTs ========================================================== 16:38:20 (1679935100)
[ 4772.607254] Lustre: DEBUG MARKER: SKIP: conf-sanity test_56b needs >= 3 MDTs
[ 4772.889439] Lustre: DEBUG MARKER: == conf-sanity test 57a: initial registration from failnode should fail (should return errs) ========================================================== 16:38:21 (1679935101)
[ 4773.639213] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing load_modules_local
[ 4773.836405] systemd-udevd[1035]: Specified user 'tss' unknown
[ 4773.871462] systemd-udevd[1035]: Specified group 'tss' unknown
[ 4773.961798] systemd-udevd[233560]: Using default interface naming scheme 'rhel-8.0'.
[ 4776.395559] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 4776.412928] systemd[1]: tmp-mntbbjFcT.mount: Succeeded.
[ 4776.785334] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 4776.801457] systemd[1]: tmp-mntXR2TNa.mount: Succeeded.
[ 4777.163504] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 4777.398633] systemd[1]: tmp-mntIJ8wOZ.mount: Succeeded.
[ 4777.615988] LDISKFS-fs (loop4): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 4777.986284] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 4778.018263] systemd[1]: tmp-mntB61wJt.mount: Succeeded.
[ 4778.576762] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 4778.775314] Lustre: MGS: Logs for fs lustre were removed by user request. All servers must be restarted in order to regenerate the logs: rc = 0
[ 4778.778448] Lustre: Setting parameter lustre-MDT0000.mdt.identity_upcall in log lustre-MDT0000
[ 4778.778663] Lustre: Skipped 1 previous similar message
[ 4779.700485] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 4779.721637] Lustre: MGS: Regenerating lustre-MDT0001 log by user request: rc = 0
[ 4780.766784] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 4781.744475] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
[ 4782.204247] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 4782.222746] systemd[1]: tmp-mntqkRzdE.mount: Succeeded.
[ 4782.270909] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[ 4782.382349] Lustre: Denying initial registration attempt from nid 192.168.125.30@tcp, specified as failover
[ 4782.382818] LustreError: 160-7: lustre-OST0000: the MGS refuses to allow this server to start: rc = -99. Please see messages on the MGS.
[ 4782.383151] LustreError: 234853:0:(tgt_mount.c:2081:server_fill_super()) Unable to start targets: -99
[ 4782.383334] LustreError: 234853:0:(tgt_mount.c:1669:server_put_super()) no obd lustre-OST0000
[ 4782.383469] LustreError: 234853:0:(tgt_mount.c:132:server_deregister_mount()) lustre-OST0000 not registered
[ 4782.425683] LustreError: 234853:0:(super25.c:188:lustre_fill_super()) llite: Unable to mount : rc = -99
[ 4782.882318] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded.
[ 4784.800411] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107
[ 4789.282951] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded.
[ 4792.720876] LNet: 235299:0:(lib-ptl.c:956:lnet_clear_lazy_portal()) Active lazy portal 0 on exit
[ 4792.732083] LNet: Removed LNI 192.168.125.30@tcp
[ 4793.948598] systemd-udevd[1035]: Specified user 'tss' unknown
[ 4793.955574] systemd-udevd[1035]: Specified group 'tss' unknown
[ 4794.075796] systemd-udevd[235461]: Using default interface naming scheme 'rhel-8.0'.
[ 4794.729934] systemd[1]: usr-sbin-mount.lustre.mount: Succeeded.
[ 4795.564953] LNet: HW NUMA nodes: 1, HW CPU cores: 2, npartitions: 1
[ 4796.409832] Lustre: DEBUG MARKER: == conf-sanity test 57b: initial registration from servicenode should not fail ========================================================== 16:38:43 (1679935123)
[ 4796.928811] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing load_modules_local
[ 4797.217362] systemd-udevd[1035]: Specified user 'tss' unknown
[ 4797.226059] systemd-udevd[1035]: Specified group 'tss' unknown
[ 4797.256206] systemd-udevd[236138]: Using default interface naming scheme 'rhel-8.0'.
[ 4797.714014] Lustre: Lustre: Build Version: 2.15.54
[ 4797.796067] LNet: Added LNI 192.168.125.30@tcp [8/256/0/180]
[ 4797.796380] LNet: Accept secure, port 988
[ 4798.368599] Lustre: Echo OBD driver; http://www.lustre.org/
[ 4800.228576] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 4800.583205] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 4800.967221] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 4801.432542] LDISKFS-fs (loop4): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 4801.462808] systemd[1]: tmp-mntux1rbT.mount: Succeeded.
[ 4801.760944] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 4802.224048] Lustre: lustre-MDT0000: mounting server target with '-t lustre' deprecated, use '-t lustre_tgt'
[ 4802.225625] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 4803.452800] Lustre: MGS: Logs for fs lustre were removed by user request. All servers must be restarted in order to regenerate the logs: rc = 0
[ 4803.458909] Lustre: Setting parameter lustre-MDT0000.mdt.identity_upcall in log lustre-MDT0000
[ 4803.546187] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180
[ 4804.206549] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 4804.243041] Lustre: MGS: Regenerating lustre-MDT0001 log by user request: rc = 0
[ 4804.248004] Lustre: Setting parameter lustre-MDT0001.mdt.identity_upcall in log lustre-MDT0001
[ 4804.413814] Lustre: lustre-MDT0001: Imperative Recovery not enabled, recovery window 60-180
[ 4805.730441] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 4806.770424] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
[ 4807.595272] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 4807.650964] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[ 4807.828464] Lustre: MGS: Regenerating lustre-OST0000 log by user request: rc = 0
[ 4807.948209] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180
[ 4809.447606] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
[ 4809.646944] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded.
[ 4809.796047] Lustre: server umount lustre-OST0000 complete
[ 4810.200192] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded.
[ 4814.480427] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107
[ 4814.485294] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[ 4814.486509] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping)
[ 4815.920992] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping)
[ 4815.925220] Lustre: Skipped 2 previous similar messages
[ 4816.447365] Lustre: server umount lustre-MDT0000 complete
[ 4816.723828] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded.
[ 4816.765342] LustreError: 237009:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679935145 with bad export cookie 10330547566250494224 [ 4816.774507] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 4820.540305] LNet: 238189:0:(lib-ptl.c:956:lnet_clear_lazy_portal()) Active lazy portal 0 on exit [ 4820.554398] LNet: Removed LNI 192.168.125.30@tcp [ 4821.865553] systemd-udevd[1035]: Specified user 'tss' unknown [ 4821.973381] systemd-udevd[1035]: Specified group 'tss' unknown [ 4822.020403] systemd-udevd[238536]: Using default interface naming scheme 'rhel-8.0'. [ 4822.622689] systemd[1]: usr-sbin-mount.lustre.mount: Succeeded. [ 4823.492779] LNet: HW NUMA nodes: 1, HW CPU cores: 2, npartitions: 1 [ 4824.308436] Lustre: DEBUG MARKER: == conf-sanity test 58: missing llog files must not prevent MDT from mounting ========================================================== 16:39:11 (1679935151) [ 4825.138736] systemd-udevd[1035]: Specified user 'tss' unknown [ 4825.173674] systemd-udevd[1035]: Specified group 'tss' unknown [ 4825.286087] systemd-udevd[239008]: Using default interface naming scheme 'rhel-8.0'. [ 4826.391001] Lustre: Lustre: Build Version: 2.15.54 [ 4826.574789] LNet: Added LNI 192.168.125.30@tcp [8/256/0/180] [ 4826.575116] LNet: Accept secure, port 988 [ 4827.378870] Lustre: Echo OBD driver; http://www.lustre.org/ [ 4830.074605] Lustre: lustre-MDT0000: mounting server target with '-t lustre' deprecated, use '-t lustre_tgt' [ 4830.085805] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 4831.367894] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 4831.399096] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 4832.400755] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 4832.611251] Lustre: lustre-MDT0001: Imperative Recovery not enabled, recovery window 60-180 [ 4834.071288] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 4835.023404] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 4835.760977] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 4836.013463] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 4837.330809] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 4837.542547] Lustre: Mounted lustre-client [ 4838.914317] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 4842.561610] Lustre: lustre-MDT0000-lwp-OST0000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 4842.562947] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 4842.563341] Lustre: Skipped 3 previous similar messages [ 4845.171754] Lustre: server umount lustre-MDT0000 complete [ 4845.580588] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. 
[ 4845.621158] LustreError: 239578:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679935174 with bad export cookie 8295490109041762745 [ 4845.623713] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 4845.627333] LustreError: 239578:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) Skipped 1 previous similar message [ 4847.601734] Lustre: lustre-MDT0001-lwp-OST0000: Connection to lustre-MDT0001 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 4847.602257] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 4847.609116] Lustre: Skipped 3 previous similar messages [ 4847.609746] Lustre: lustre-MDT0001: Not available for connect from 0@lo (stopping) [ 4847.610567] LustreError: Skipped 1 previous similar message [ 4850.640850] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 4850.641123] Lustre: lustre-MDT0001: Not available for connect from 0@lo (stopping) [ 4850.642686] Lustre: Skipped 2 previous similar messages [ 4851.875882] Lustre: server umount lustre-MDT0001 complete [ 4852.231756] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: (null) [ 4852.302163] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 4852.808991] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 4862.960683] Lustre: Evicted from MGS (at 192.168.125.30@tcp) after server handle changed from 0x731f804d833361b9 to 0x731f804d8333808a [ 4862.972982] Lustre: MGC192.168.125.30@tcp: Connection restored to 192.168.125.30@tcp (at 0@lo) [ 4863.174604] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 4863.176823] LustreError: Skipped 3 previous similar messages [ 4863.205467] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 4863.219398] Lustre: lustre-OST0000: deleting orphan objects from 0x280000401:134 to 0x280000401:161 [ 4864.189496] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 4865.560325] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 4877.763599] Lustre: lustre-MDT0001-lwp-OST0000: Connection restored to 192.168.125.30@tcp (at 0@lo) [ 4877.770376] LustreError: 167-0: lustre-MDT0001-mdc-ffff8b78218d9000: This client was evicted by lustre-MDT0001; in progress operations using this service will fail. [ 4881.928079] Lustre: lustre-MDT0000-lwp-OST0000: Connection restored to 192.168.125.30@tcp (at 0@lo) [ 4881.929517] Lustre: Skipped 1 previous similar message [ 4881.929696] LustreError: 167-0: lustre-MDT0000-mdc-ffff8b78218d9000: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. 
[ 4882.653740] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 16 sec [ 4883.777127] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 4884.076400] Lustre: DEBUG MARKER: mdc.lustre-MDT0001-mdc-*.mds_server_uuid in FULL state after 0 sec [ 4884.591073] systemd[1]: mnt-lustre.mount: Succeeded. [ 4884.641171] Lustre: Unmounted lustre-client [ 4884.802146] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 4886.961382] Lustre: lustre-OST0000-osc-MDT0001: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 4886.961835] Lustre: Skipped 1 previous similar message [ 4886.962496] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 4886.962736] Lustre: Skipped 3 previous similar messages [ 4891.049371] Lustre: server umount lustre-OST0000 complete [ 4891.446932] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 4892.000815] Lustre: lustre-MDT0000-lwp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 4892.005200] Lustre: Skipped 1 previous similar message [ 4892.006712] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 4892.012758] Lustre: Skipped 1 previous similar message [ 4897.653574] Lustre: server umount lustre-MDT0000 complete [ 4898.066756] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 4898.094752] LustreError: 239580:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679935226 with bad export cookie 8295490109041770634 [ 4898.095024] LustreError: 239580:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) Skipped 3 previous similar messages [ 4898.096179] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 4902.430554] LNet: 241702:0:(lib-ptl.c:956:lnet_clear_lazy_portal()) Active lazy portal 0 on exit [ 4902.437159] LNet: Removed LNI 192.168.125.30@tcp [ 4903.848227] systemd-udevd[1035]: Specified user 'tss' unknown [ 4903.849186] systemd-udevd[1035]: Specified group 'tss' unknown [ 4903.979585] systemd-udevd[242043]: Using default interface naming scheme 'rhel-8.0'. [ 4904.850637] systemd[1]: usr-sbin-mount.lustre.mount: Succeeded. [ 4905.401097] LNet: HW NUMA nodes: 1, HW CPU cores: 2, npartitions: 1 [ 4906.274224] Lustre: DEBUG MARKER: == conf-sanity test 59: writeconf mount option =========== 16:40:33 (1679935233) [ 4906.813580] systemd-udevd[1035]: Specified user 'tss' unknown [ 4906.823929] systemd-udevd[1035]: Specified group 'tss' unknown [ 4907.016760] systemd-udevd[242335]: Using default interface naming scheme 'rhel-8.0'. [ 4907.900957] Lustre: Lustre: Build Version: 2.15.54 [ 4908.062699] LNet: Added LNI 192.168.125.30@tcp [8/256/0/180] [ 4908.062898] LNet: Accept secure, port 988 [ 4909.000352] Lustre: Echo OBD driver; http://www.lustre.org/ [ 4911.868173] Lustre: lustre-MDT0000: mounting server target with '-t lustre' deprecated, use '-t lustre_tgt' [ 4911.869796] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 4913.164341] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. 
[ 4913.174396] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 4914.106779] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 4914.202283] Lustre: lustre-MDT0001: Imperative Recovery not enabled, recovery window 60-180 [ 4915.154750] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 4915.892098] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 4916.165464] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 4916.211841] LustreError: 243130:0:(osp_object.c:637:osp_attr_get()) lustre-MDT0001-osp-MDT0000: osp_attr_get update error [0x200000009:0x1:0x0]: rc = -5 [ 4916.216227] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 4916.216816] LustreError: 243130:0:(lod_sub_object.c:932:lod_sub_prep_llog()) lustre-MDT0000-mdtlov: can't get id from catalogs: rc = -5 [ 4916.216820] LustreError: 243130:0:(lod_dev.c:525:lod_sub_recovery_thread()) lustre-MDT0001-osp-MDT0000: get update log duration 3, retries 0, failed: rc = -5 [ 4916.217838] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 4916.218250] Lustre: Skipped 1 previous similar message [ 4919.280901] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 4919.287238] Lustre: Skipped 1 previous similar message [ 4922.429075] Lustre: server umount lustre-MDT0000 complete [ 4922.789579] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 4922.805552] LustreError: 243076:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679935251 with bad export cookie 11110665811632520496 [ 4922.806328] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 4923.029897] Lustre: server umount lustre-MDT0001 complete [ 4924.219438] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 4924.354571] Lustre: MGS: Logs for fs lustre were removed by user request. All servers must be restarted in order to regenerate the logs: rc = 0 [ 4924.387486] Lustre: Setting parameter lustre-MDT0000.mdt.identity_upcall in log lustre-MDT0000 [ 4924.485088] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 4925.299411] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 4925.322796] Lustre: MGS: Regenerating lustre-MDT0001 log by user request: rc = 0 [ 4925.329432] Lustre: Setting parameter lustre-MDT0001.mdt.identity_upcall in log lustre-MDT0001 [ 4926.455266] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 4927.477849] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 4928.261057] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 4928.375787] LustreError: 13b-9: lustre-OST0000 claims to have registered, but this MGS does not know about it, preventing registration. [ 4928.376299] LustreError: 160-7: lustre-OST0000: the MGS refuses to allow this server to start: rc = -2. Please see messages on the MGS. 
[ 4928.376633] LustreError: 244404:0:(tgt_mount.c:2081:server_fill_super()) Unable to start targets: -2 [ 4928.376817] LustreError: 244404:0:(tgt_mount.c:1669:server_put_super()) no obd lustre-OST0000 [ 4928.376946] LustreError: 244404:0:(tgt_mount.c:132:server_deregister_mount()) lustre-OST0000 not registered [ 4928.468601] Lustre: server umount lustre-OST0000 complete [ 4928.469320] LustreError: 244404:0:(super25.c:188:lustre_fill_super()) llite: Unable to mount : rc = -2 [ 4929.056824] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 4929.135021] Lustre: MGS: Regenerating lustre-OST0000 log by user request: rc = 0 [ 4929.183630] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 4929.183816] Lustre: Skipped 1 previous similar message [ 4929.767445] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 4930.601833] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 4930.619417] systemd[1]: tmp-mntCsNVhM.mount: Succeeded. [ 4930.654485] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 4930.686236] Lustre: MGS: Regenerating lustre-OST0001 log by user request: rc = 0 [ 4930.699454] Lustre: lustre-OST0001: new disk, initializing [ 4930.707694] Lustre: srv-lustre-OST0001: No data found on store. Initialize space: rc = -61 [ 4931.796069] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0001-osc-[-0-9a-f]*.ost_server_uuid [ 4931.962186] systemd[1]: mnt-lustre\x2dost2.mount: Succeeded. [ 4932.016786] Lustre: server umount lustre-OST0001 complete [ 4932.224700] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 4932.638291] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 4933.368280] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 4933.368908] Lustre: Skipped 1 previous similar message [ 4933.370220] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 4938.401138] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 4938.407134] Lustre: Skipped 2 previous similar messages [ 4938.840837] Lustre: server umount lustre-MDT0000 complete [ 4938.841005] Lustre: Skipped 1 previous similar message [ 4939.171997] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 4939.210525] LustreError: 243848:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679935267 with bad export cookie 11110665811632521014 [ 4939.210786] LustreError: 243848:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) Skipped 4 previous similar messages [ 4939.211346] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 4943.130327] LNet: 245459:0:(lib-ptl.c:956:lnet_clear_lazy_portal()) Active lazy portal 0 on exit [ 4943.136223] LNet: Removed LNI 192.168.125.30@tcp [ 4944.315580] systemd-udevd[1035]: Specified user 'tss' unknown [ 4944.379194] systemd-udevd[1035]: Specified group 'tss' unknown [ 4944.505763] systemd-udevd[245802]: Using default interface naming scheme 'rhel-8.0'. [ 4945.192668] systemd[1]: usr-sbin-mount.lustre.mount: Succeeded. 
[ 4948.803186] LNet: HW NUMA nodes: 1, HW CPU cores: 2, npartitions: 1 [ 4949.656647] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing set_hostid [ 4949.951865] systemd-udevd[1035]: Specified user 'tss' unknown [ 4950.014204] systemd-udevd[1035]: Specified group 'tss' unknown [ 4950.132253] systemd-udevd[246467]: Using default interface naming scheme 'rhel-8.0'. [ 4950.675757] Lustre: Lustre: Build Version: 2.15.54 [ 4950.784793] LNet: Added LNI 192.168.125.30@tcp [8/256/0/180] [ 4950.785056] LNet: Accept secure, port 988 [ 4951.530618] Lustre: Echo OBD driver; http://www.lustre.org/ [ 4954.013845] blk_update_request: operation not supported error, dev loop0, sector 399872 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 4954.014264] blk_update_request: operation not supported error, dev loop0, sector 208 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 4954.014777] blk_update_request: operation not supported error, dev loop0, sector 160232 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 4954.015345] blk_update_request: operation not supported error, dev loop0, sector 160240 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 4954.015640] blk_update_request: operation not supported error, dev loop0, sector 160248 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 4954.015929] blk_update_request: operation not supported error, dev loop0, sector 160256 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 4954.016214] blk_update_request: operation not supported error, dev loop0, sector 160264 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 4954.016497] blk_update_request: operation not supported error, dev loop0, sector 160272 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 4954.016790] blk_update_request: operation not supported error, dev loop0, sector 160280 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 4954.017079] blk_update_request: operation not supported error, dev loop0, sector 160288 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 4954.993142] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 4957.398309] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 4957.406263] systemd[1]: tmp-mntKoWm8j.mount: Succeeded. [ 4958.733091] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 4960.407478] print_req_error: 8192 callbacks suppressed [ 4960.407481] blk_update_request: operation not supported error, dev loop3, sector 399872 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 4960.408071] blk_update_request: operation not supported error, dev loop3, sector 8224 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 4960.408901] blk_update_request: operation not supported error, dev loop3, sector 58264 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 4960.415674] blk_update_request: operation not supported error, dev loop3, sector 58272 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 4960.531129] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 4961.333268] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 4961.352200] systemd[1]: tmp-mntsbKkRI.mount: Succeeded. 
[ 4961.371354] Lustre: lustre-MDT0000: mounting server target with '-t lustre' deprecated, use '-t lustre_tgt' [ 4961.376058] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 4962.597645] Lustre: Setting parameter lustre-MDT0000.mdt.identity_upcall in log lustre-MDT0000 [ 4962.611777] Lustre: ctl-lustre-MDT0000: No data found on store. Initialize space: rc = -61 [ 4962.680333] Lustre: lustre-MDT0000: new disk, initializing [ 4962.733861] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 4962.736611] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000200000400-0x0000000240000400]:0:mdt [ 4964.928301] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 4964.942534] systemd[1]: tmp-mntIcerkO.mount: Succeeded. [ 4964.993368] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 4965.026360] Lustre: Setting parameter lustre-MDT0001.mdt.identity_upcall in log lustre-MDT0001 [ 4965.067016] Lustre: srv-lustre-MDT0001: No data found on store. Initialize space: rc = -61 [ 4965.068495] Lustre: Skipped 1 previous similar message [ 4965.124872] Lustre: lustre-MDT0001: new disk, initializing [ 4965.179284] Lustre: lustre-MDT0001: Imperative Recovery not enabled, recovery window 60-180 [ 4965.192748] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000240000400-0x0000000280000400]:1:mdt [ 4965.197263] Lustre: cli-ctl-lustre-MDT0001: Allocated super-sequence [0x0000000240000400-0x0000000280000400]:1:mdt] [ 4967.586875] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 4968.517665] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 4969.111154] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 4969.129954] systemd[1]: tmp-mntSP7QKV.mount: Succeeded. [ 4969.173430] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 4969.324668] Lustre: lustre-OST0000: new disk, initializing [ 4969.325168] Lustre: srv-lustre-OST0000: No data found on store. Initialize space: rc = -61 [ 4969.399850] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 4971.938373] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 4972.230142] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000280000400-0x00000002c0000400]:0:ost [ 4972.230544] Lustre: cli-lustre-OST0000-super: Allocated super-sequence [0x0000000280000400-0x00000002c0000400]:0:ost] [ 4972.287596] Lustre: lustre-OST0000-osc-MDT0000: update sequence from 0x100000000 to 0x280000401 [ 4973.088415] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 40 [ 4973.348691] Lustre: DEBUG MARKER: os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 0 sec [ 4974.374628] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 40 [ 4974.628358] Lustre: DEBUG MARKER: os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec [ 4974.809485] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. 
[ 4977.281942] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107 [ 4977.282208] Lustre: lustre-OST0000-osc-MDT0000: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 4977.282970] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 4981.038231] Lustre: server umount lustre-OST0000 complete [ 4981.333538] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 4982.320937] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 4982.322808] Lustre: Skipped 1 previous similar message [ 4982.323289] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 4982.324520] Lustre: Skipped 1 previous similar message [ 4987.360990] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 4987.367699] Lustre: Skipped 2 previous similar messages [ 4987.579420] Lustre: server umount lustre-MDT0000 complete [ 4987.963339] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 4988.003896] LustreError: 248274:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679935316 with bad export cookie 4336725440546453898 [ 4988.015383] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 4989.311445] Lustre: DEBUG MARKER: == conf-sanity test 60a: check mkfs.lustre --mkfsoptions -E -O options setting ========================================================== 16:41:57 (1679935317) [ 4990.589933] blk_update_request: operation not supported error, dev loop0, sector 399872 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 4990.597473] blk_update_request: operation not supported error, dev loop0, sector 208 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 4990.632573] blk_update_request: operation not supported error, dev loop0, sector 53536 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 4990.678476] blk_update_request: operation not supported error, dev loop0, sector 106864 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 4990.719987] blk_update_request: operation not supported error, dev loop0, sector 160232 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 4990.720481] blk_update_request: operation not supported error, dev loop0, sector 160240 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 4990.720786] blk_update_request: operation not supported error, dev loop0, sector 160248 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 4990.721076] blk_update_request: operation not supported error, dev loop0, sector 160256 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 4990.721367] blk_update_request: operation not supported error, dev loop0, sector 160264 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 4990.721671] blk_update_request: operation not supported error, dev loop0, sector 160272 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 4992.186167] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 4992.213856] systemd[1]: tmp-mntMMYBjv.mount: Succeeded. [ 4995.368557] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 4995.386524] systemd[1]: tmp-mnt42ZiBh.mount: Succeeded. 
[ 4998.450533] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing set_hostid [ 4998.709771] systemd-udevd[1035]: Specified user 'tss' unknown [ 4998.753218] systemd-udevd[1035]: Specified group 'tss' unknown [ 4998.930357] systemd-udevd[250557]: Using default interface naming scheme 'rhel-8.0'. [ 5002.829731] print_req_error: 8192 callbacks suppressed [ 5002.829734] blk_update_request: operation not supported error, dev loop0, sector 399872 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5002.831782] blk_update_request: operation not supported error, dev loop0, sector 208 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5002.832331] blk_update_request: operation not supported error, dev loop0, sector 160232 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5002.832745] blk_update_request: operation not supported error, dev loop0, sector 160240 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5002.833031] blk_update_request: operation not supported error, dev loop0, sector 160248 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5002.833309] blk_update_request: operation not supported error, dev loop0, sector 160256 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5002.833596] blk_update_request: operation not supported error, dev loop0, sector 160264 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5002.833872] blk_update_request: operation not supported error, dev loop0, sector 160272 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5002.834146] blk_update_request: operation not supported error, dev loop0, sector 160280 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5002.834424] blk_update_request: operation not supported error, dev loop0, sector 160288 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5004.426476] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 5004.457184] systemd[1]: tmp-mnt92RTvt.mount: Succeeded. [ 5007.578210] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 5007.586845] systemd[1]: tmp-mntZ2Xi6u.mount: Succeeded. [ 5009.131006] print_req_error: 8188 callbacks suppressed [ 5009.131009] blk_update_request: operation not supported error, dev loop2, sector 399872 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5009.136518] blk_update_request: operation not supported error, dev loop2, sector 8224 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5009.136960] blk_update_request: operation not supported error, dev loop2, sector 58264 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5009.153966] blk_update_request: operation not supported error, dev loop2, sector 58272 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5009.271476] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 5009.292550] systemd[1]: tmp-mntCr0nTN.mount: Succeeded. 
[ 5011.520926] blk_update_request: operation not supported error, dev loop4, sector 399872 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5011.526324] blk_update_request: operation not supported error, dev loop4, sector 8224 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5011.526887] blk_update_request: operation not supported error, dev loop4, sector 58264 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5011.531683] blk_update_request: operation not supported error, dev loop4, sector 58272 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5011.645360] LDISKFS-fs (loop4): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 5011.665104] systemd[1]: tmp-mntFUJ8eV.mount: Succeeded. [ 5012.465300] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 5012.477733] systemd[1]: tmp-mnt3aU7cY.mount: Succeeded. [ 5012.514170] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 5012.690541] Lustre: Setting parameter lustre-MDT0000.mdt.identity_upcall in log lustre-MDT0000 [ 5012.707219] Lustre: ctl-lustre-MDT0000: No data found on store. Initialize space: rc = -61 [ 5012.762641] Lustre: lustre-MDT0000: new disk, initializing [ 5012.812718] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 5012.817695] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000200000400-0x0000000240000400]:0:mdt [ 5014.895684] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 5014.907432] systemd[1]: tmp-mntJzEtGU.mount: Succeeded. [ 5014.952358] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 5014.989893] Lustre: Setting parameter lustre-MDT0001.mdt.identity_upcall in log lustre-MDT0001 [ 5015.173561] Lustre: cli-ctl-lustre-MDT0001: Allocated super-sequence [0x0000000240000400-0x0000000280000400]:1:mdt] [ 5017.586461] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 5018.488144] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 5019.013142] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 5019.031949] systemd[1]: tmp-mntis2oM2.mount: Succeeded. [ 5019.063315] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 5019.206673] Lustre: lustre-OST0000: new disk, initializing [ 5019.206768] Lustre: Skipped 1 previous similar message [ 5019.207150] Lustre: srv-lustre-OST0000: No data found on store. 
Initialize space: rc = -61 [ 5019.217295] Lustre: Skipped 2 previous similar messages [ 5019.255683] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 5019.259481] Lustre: Skipped 1 previous similar message [ 5019.415186] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000280000400-0x00000002c0000400]:0:ost [ 5019.415594] Lustre: Skipped 1 previous similar message [ 5019.415811] Lustre: cli-lustre-OST0000-super: Allocated super-sequence [0x0000000280000400-0x00000002c0000400]:0:ost] [ 5019.519967] Lustre: lustre-OST0000-osc-MDT0000: update sequence from 0x100000000 to 0x280000401 [ 5021.856580] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 5022.879361] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 40 [ 5023.080584] Lustre: DEBUG MARKER: os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 0 sec [ 5024.251683] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 40 [ 5024.448236] Lustre: DEBUG MARKER: os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec [ 5024.596180] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 5029.521097] LustreError: 11-0: lustre-OST0000-osc-MDT0001: operation ost_statfs to node 0@lo failed: rc = -107 [ 5029.525364] LustreError: Skipped 1 previous similar message [ 5029.525449] Lustre: lustre-OST0000-osc-MDT0001: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 5029.525651] Lustre: Skipped 1 previous similar message [ 5029.527173] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 5030.921359] Lustre: server umount lustre-OST0000 complete [ 5030.921587] Lustre: Skipped 1 previous similar message [ 5031.319807] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 5034.640649] Lustre: lustre-MDT0000-lwp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 5034.646397] Lustre: Skipped 1 previous similar message [ 5034.649575] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 5034.653512] Lustre: Skipped 2 previous similar messages [ 5037.574513] Lustre: server umount lustre-MDT0000 complete [ 5037.909157] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 5037.951152] LustreError: 252059:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679935366 with bad export cookie 4336725440546454668 [ 5037.954522] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 5037.956487] LustreError: 252059:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) Skipped 5 previous similar messages [ 5038.952284] Lustre: DEBUG MARKER: == conf-sanity test 60b: check mkfs.lustre MDT default features ========================================================== 16:42:47 (1679935367) [ 5039.589603] Lustre: DEBUG MARKER: == conf-sanity test 61a: large xattr ===================== 16:42:47 (1679935367) [ 5040.182663] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 5040.386488] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). 
If you are running an HA pair check that the target is mounted on the other server. [ 5040.404090] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 5041.129460] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 5042.224022] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 5042.811693] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 5043.205897] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 5044.088414] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 5045.377459] Lustre: Mounted lustre-client [ 5052.621586] Lustre: DEBUG MARKER: save large xattr of 65536 bytes on trusted.big on /mnt/lustre/f61a.conf-sanity [ 5052.857752] Lustre: DEBUG MARKER: shrink value of trusted.big on /mnt/lustre/f61a.conf-sanity [ 5053.097688] Lustre: DEBUG MARKER: grow value of trusted.big on /mnt/lustre/f61a.conf-sanity [ 5053.310897] Lustre: DEBUG MARKER: check value of trusted.big on /mnt/lustre/f61a.conf-sanity after remounting MDS [ 5053.454135] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 5053.488969] Lustre: Failing over lustre-MDT0000 [ 5053.597265] Lustre: server umount lustre-MDT0000 complete [ 5053.602291] Lustre: Skipped 1 previous similar message [ 5056.320345] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 5056.327149] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 5056.327355] Lustre: Skipped 1 previous similar message [ 5056.331213] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 5061.521058] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 5061.527312] LustreError: Skipped 6 previous similar messages [ 5063.600052] Lustre: 246752:0:(client.c:2305:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1679935384/real 1679935384] req@000000008b56472d x1761539815150144/t0(0) o400->MGC192.168.125.30@tcp@0@lo:26/25 lens 224/224 e 0 to 1 dl 1679935391 ref 1 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'' [ 5063.605443] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 5063.608814] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 5063.614149] LustreError: Skipped 3 previous similar messages [ 5064.534481] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 5068.640995] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. 
[ 5068.642735] Lustre: Evicted from MGS (at 192.168.125.30@tcp) after server handle changed from 0x3c2f24cfac2bbf8e to 0x3c2f24cfac2bc3e7 [ 5068.649208] LustreError: Skipped 3 previous similar messages [ 5068.660710] Lustre: MGC192.168.125.30@tcp: Connection restored to 192.168.125.30@tcp (at 0@lo) [ 5068.806069] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 5068.806256] Lustre: Skipped 2 previous similar messages [ 5068.829169] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 5073.840943] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 2 clients reconnect [ 5073.844082] Lustre: lustre-MDT0000-lwp-MDT0001: Connection restored to (at 0@lo) [ 5073.854178] Lustre: lustre-MDT0000: Recovery over after 0:01, of 2 clients 2 recovered and 0 were evicted. [ 5073.879845] Lustre: lustre-OST0000: deleting orphan objects from 0x280000401:35 to 0x280000401:65 [ 5074.870884] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 5075.388488] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 5076.583986] Lustre: DEBUG MARKER: remove large xattr trusted.big from /mnt/lustre/f61a.conf-sanity [ 5076.714402] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 5076.763518] Lustre: Failing over lustre-MDT0000 [ 5076.881064] Lustre: server umount lustre-MDT0000 complete [ 5078.881525] Lustre: lustre-MDT0000-lwp-OST0000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 5078.881943] Lustre: Skipped 3 previous similar messages [ 5078.882181] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 5078.882317] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 5086.160293] Lustre: 246752:0:(client.c:2305:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1679935407/real 1679935407] req@00000000b4fab1f4 x1761539815159872/t0(0) o400->MGC192.168.125.30@tcp@0@lo:26/25 lens 224/224 e 0 to 1 dl 1679935414 ref 1 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'' [ 5086.160544] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 5088.004227] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 5092.242563] Lustre: Evicted from MGS (at 192.168.125.30@tcp) after server handle changed from 0x3c2f24cfac2bc3e7 to 0x3c2f24cfac2bc713 [ 5092.249412] Lustre: MGC192.168.125.30@tcp: Connection restored to (at 0@lo) [ 5092.251852] Lustre: Skipped 3 previous similar messages [ 5092.373164] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 5097.362451] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 2 clients reconnect [ 5097.364432] Lustre: lustre-MDT0000-lwp-MDT0001: Connection restored to (at 0@lo) [ 5097.374525] Lustre: lustre-MDT0000: Recovery over after 0:01, of 2 clients 2 recovered and 0 were evicted. 
[ 5097.403611] Lustre: lustre-OST0000: deleting orphan objects from 0x280000401:35 to 0x280000401:97 [ 5098.272437] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 5098.556760] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 5099.827833] systemd[1]: mnt-lustre.mount: Succeeded. [ 5099.924637] Lustre: Unmounted lustre-client [ 5100.231145] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 5102.401404] Lustre: lustre-MDT0000-lwp-OST0000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 5102.408264] Lustre: Skipped 3 previous similar messages [ 5102.413777] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 5102.414446] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 5106.491102] Lustre: server umount lustre-MDT0000 complete [ 5106.783589] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 5106.805150] LustreError: 253529:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679935435 with bad export cookie 4336725440546457363 [ 5106.805447] LustreError: 253529:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) Skipped 3 previous similar messages [ 5106.806107] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 5107.445822] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 5113.600065] Lustre: 255366:0:(client.c:2305:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1679935435/real 1679935435] req@000000000070ba78 x1761539815170432/t0(0) o39->lustre-MDT0001-lwp-OST0000@0@lo:12/10 lens 224/224 e 0 to 1 dl 1679935441 ref 2 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'' [ 5115.718600] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 5116.010084] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 5116.010499] LustreError: Skipped 19 previous similar messages [ 5116.044897] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 5116.045649] Lustre: Skipped 1 previous similar message [ 5116.992673] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 5118.428895] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 5119.377298] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 5120.041272] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 5121.294287] Lustre: lustre-OST0000: deleting orphan objects from 0x280000401:35 to 0x280000401:129 [ 5121.527164] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 5121.861024] Lustre: Mounted lustre-client [ 5123.227992] systemd[1]: mnt-lustre.mount: Succeeded. [ 5123.316658] Lustre: Unmounted lustre-client [ 5123.489518] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. 
[ 5126.322964] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107 [ 5126.338730] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 5126.338929] Lustre: Skipped 2 previous similar messages [ 5130.251243] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 5144.800043] Lustre: lustre-MDT0000 is waiting for obd_unlinked_exports more than 8 seconds. The obd refcount = 2. Is it stuck? [ 5144.867761] Lustre: server umount lustre-MDT0000 complete [ 5144.871112] Lustre: Skipped 3 previous similar messages [ 5145.065512] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 5145.105603] LustreError: 255601:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679935473 with bad export cookie 4336725440546458063 [ 5145.105880] LustreError: 255601:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) Skipped 4 previous similar messages [ 5145.106289] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 5149.160704] LNet: 256823:0:(lib-ptl.c:956:lnet_clear_lazy_portal()) Active lazy portal 0 on exit [ 5149.173641] LNet: Removed LNI 192.168.125.30@tcp [ 5150.656038] systemd-udevd[1035]: Specified user 'tss' unknown [ 5150.705686] systemd-udevd[1035]: Specified group 'tss' unknown [ 5150.939821] systemd-udevd[257172]: Using default interface naming scheme 'rhel-8.0'. [ 5151.822477] systemd[1]: usr-sbin-mount.lustre.mount: Succeeded. [ 5152.628015] LNet: HW NUMA nodes: 1, HW CPU cores: 2, npartitions: 1 [ 5153.527099] Lustre: DEBUG MARKER: == conf-sanity test 61b: large xattr ===================== 16:44:40 (1679935480) [ 5154.429180] systemd-udevd[1035]: Specified user 'tss' unknown [ 5154.551436] systemd-udevd[1035]: Specified group 'tss' unknown [ 5154.689731] systemd-udevd[257639]: Using default interface naming scheme 'rhel-8.0'. [ 5155.871045] Lustre: Lustre: Build Version: 2.15.54 [ 5155.992891] LNet: Added LNI 192.168.125.30@tcp [8/256/0/180] [ 5155.993181] LNet: Accept secure, port 988 [ 5157.268082] Lustre: Echo OBD driver; http://www.lustre.org/ [ 5160.537614] Lustre: lustre-MDT0000: mounting server target with '-t lustre' deprecated, use '-t lustre_tgt' [ 5160.543358] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 5161.927402] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 5161.956269] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 5163.174453] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 5163.413843] Lustre: lustre-MDT0001: Imperative Recovery not enabled, recovery window 60-180 [ 5164.975094] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 5166.206759] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 5167.083522] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. 
Opts: errors=remount-ro,no_mbcache,nodelalloc [ 5167.379294] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 5168.455970] Lustre: lustre-OST0000: deleting orphan objects from 0x280000401:35 to 0x280000401:161 [ 5168.807837] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 5169.106983] Lustre: Mounted lustre-client [ 5171.609084] systemd[1]: mnt-lustre.mount: Succeeded. [ 5171.680945] Lustre: Unmounted lustre-client [ 5171.927733] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 5173.440463] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 5173.445262] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 5173.446490] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 5174.161248] Lustre: lustre-MDT0000-lwp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 5174.161569] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 5174.172220] Lustre: Skipped 2 previous similar messages [ 5178.199253] Lustre: server umount lustre-MDT0000 complete [ 5178.532974] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 5178.566351] LustreError: 258211:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679935506 with bad export cookie 16751606603352148055 [ 5178.568549] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 5178.791219] Lustre: server umount lustre-MDT0001 complete [ 5179.254813] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 5185.360059] Lustre: 259140:0:(client.c:2305:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1679935507/real 1679935507] req@0000000058e12aed x1761540030082752/t0(0) o39->lustre-MDT0001-lwp-OST0000@0@lo:12/10 lens 224/224 e 0 to 1 dl 1679935513 ref 2 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'' [ 5185.544143] Lustre: server umount lustre-OST0000 complete [ 5187.263713] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 5187.472190] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 5187.511492] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 5188.242898] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 5189.930961] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 5190.944385] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 5191.369992] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 5192.414021] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 5192.531327] Lustre: lustre-OST0000: deleting orphan objects from 0x280000401:163 to 0x280000401:193 [ 5192.732435] Lustre: Mounted lustre-client [ 5198.320670] systemd[1]: mnt-lustre.mount: Succeeded. 
[ 5198.394517] Lustre: Unmounted lustre-client [ 5198.595890] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 5198.800861] Lustre: lustre-MDT0000-lwp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 5198.805046] Lustre: Skipped 1 previous similar message [ 5198.806676] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 5198.809903] Lustre: Skipped 1 previous similar message [ 5203.520365] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 5203.525861] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 5203.526070] Lustre: Skipped 1 previous similar message [ 5203.528900] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 5204.829230] Lustre: server umount lustre-MDT0000 complete [ 5205.112416] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 5205.165261] LustreError: 259341:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679935533 with bad export cookie 16751606603352149189 [ 5205.169992] LustreError: 259341:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) Skipped 4 previous similar messages [ 5205.180542] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 5205.714574] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 5211.840057] Lustre: 260272:0:(client.c:2305:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1679935534/real 1679935534] req@0000000060ea31c1 x1761540030094144/t0(0) o39->lustre-MDT0001-lwp-OST0000@0@lo:12/10 lens 224/224 e 0 to 1 dl 1679935540 ref 2 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'' [ 5212.063650] Lustre: server umount lustre-OST0000 complete [ 5212.063814] Lustre: Skipped 1 previous similar message [ 5214.025822] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 5214.344017] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 5214.376983] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 5214.377160] Lustre: Skipped 2 previous similar messages [ 5215.328770] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 5217.146814] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 5218.347178] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 5219.167208] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 5220.403767] Lustre: lustre-OST0000: deleting orphan objects from 0x280000401:163 to 0x280000401:225 [ 5221.005347] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 5221.348930] Lustre: Mounted lustre-client [ 5221.955961] systemd[1]: mnt-lustre.mount: Succeeded. [ 5222.034780] Lustre: Unmounted lustre-client [ 5222.307650] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. 
[ 5225.440364] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107 [ 5225.440662] Lustre: lustre-OST0000-osc-MDT0001: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 5225.448361] LustreError: Skipped 1 previous similar message [ 5225.448682] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 5225.448684] Lustre: Skipped 2 previous similar messages [ 5225.453284] Lustre: Skipped 1 previous similar message [ 5228.610723] Lustre: server umount lustre-OST0000 complete [ 5228.982760] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 5230.640914] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 5235.669396] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 5235.675490] LustreError: 260502:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679935564 with bad export cookie 16751606603352150253 [ 5235.679479] LustreError: 260502:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) Skipped 4 previous similar messages [ 5235.681696] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 5239.431613] LNet: 261720:0:(lib-ptl.c:956:lnet_clear_lazy_portal()) Active lazy portal 0 on exit [ 5239.440864] LNet: Removed LNI 192.168.125.30@tcp [ 5240.683721] systemd-udevd[1035]: Specified user 'tss' unknown [ 5240.705405] systemd-udevd[1035]: Specified group 'tss' unknown [ 5240.920945] systemd-udevd[261918]: Using default interface naming scheme 'rhel-8.0'. [ 5241.761891] systemd[1]: usr-sbin-mount.lustre.mount: Succeeded. [ 5242.572636] LNet: HW NUMA nodes: 1, HW CPU cores: 2, npartitions: 1 [ 5243.475821] Lustre: DEBUG MARKER: == conf-sanity test 62: start with disabled journal ====== 16:46:10 (1679935570) [ 5244.065386] systemd-udevd[1035]: Specified user 'tss' unknown [ 5244.080733] systemd-udevd[1035]: Specified group 'tss' unknown [ 5244.203865] systemd-udevd[262516]: Using default interface naming scheme 'rhel-8.0'. [ 5245.077790] Lustre: Lustre: Build Version: 2.15.54 [ 5245.302395] LNet: Added LNI 192.168.125.30@tcp [8/256/0/180] [ 5245.302704] LNet: Accept secure, port 988 [ 5246.578162] Lustre: Echo OBD driver; http://www.lustre.org/ [ 5249.232794] Lustre: lustre-MDT0000: mounting server target with '-t lustre' deprecated, use '-t lustre_tgt' [ 5249.240510] LDISKFS-fs (dm-0): mounted filesystem without journal. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 5249.240716] LustreError: 263089:0:(osd_handler.c:8221:osd_mount()) lustre-MDT0000-osd: device /dev/mapper/mds1_flakey is mounted w/o journal [ 5249.241056] LustreError: 263089:0:(obd_config.c:776:class_setup()) setup lustre-MDT0000-osd failed (-22) [ 5249.241184] LustreError: 263089:0:(obd_mount.c:200:lustre_start_simple()) lustre-MDT0000-osd setup error -22 [ 5249.241333] LustreError: 263089:0:(tgt_mount.c:2048:server_fill_super()) Unable to start osd on /dev/mapper/mds1_flakey: -22 [ 5249.241502] LustreError: 263089:0:(super25.c:188:lustre_fill_super()) llite: Unable to mount : rc = -22 [ 5250.112371] LDISKFS-fs (dm-2): mounted filesystem without journal. 
Opts: errors=remount-ro,no_mbcache,nodelalloc [ 5250.112605] LustreError: 263183:0:(osd_handler.c:8221:osd_mount()) lustre-OST0000-osd: device /dev/mapper/ost1_flakey is mounted w/o journal [ 5250.112955] LustreError: 263183:0:(obd_config.c:776:class_setup()) setup lustre-OST0000-osd failed (-22) [ 5250.113089] LustreError: 263183:0:(obd_mount.c:200:lustre_start_simple()) lustre-OST0000-osd setup error -22 [ 5250.113238] LustreError: 263183:0:(tgt_mount.c:2048:server_fill_super()) Unable to start osd on /dev/mapper/ost1_flakey: -22 [ 5250.113963] LustreError: 263183:0:(super25.c:188:lustre_fill_super()) llite: Unable to mount : rc = -22 [ 5254.790303] LNet: 263597:0:(lib-ptl.c:956:lnet_clear_lazy_portal()) Active lazy portal 0 on exit [ 5254.796990] LNet: Removed LNI 192.168.125.30@tcp [ 5256.198402] systemd-udevd[1035]: Specified user 'tss' unknown [ 5256.210054] systemd-udevd[1035]: Specified group 'tss' unknown [ 5256.362839] systemd-udevd[263935]: Using default interface naming scheme 'rhel-8.0'. [ 5257.109791] systemd[1]: usr-sbin-mount.lustre.mount: Succeeded. [ 5259.367916] LNet: HW NUMA nodes: 1, HW CPU cores: 2, npartitions: 1 [ 5260.277310] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing set_hostid [ 5260.589811] systemd-udevd[1035]: Specified user 'tss' unknown [ 5260.590886] systemd-udevd[1035]: Specified group 'tss' unknown [ 5260.701483] systemd-udevd[264455]: Using default interface naming scheme 'rhel-8.0'. [ 5261.559742] Lustre: Lustre: Build Version: 2.15.54 [ 5261.675586] LNet: Added LNI 192.168.125.30@tcp [8/256/0/180] [ 5261.675862] LNet: Accept secure, port 988 [ 5262.648765] Lustre: Echo OBD driver; http://www.lustre.org/ [ 5266.739284] blk_update_request: operation not supported error, dev loop0, sector 399872 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5266.739633] blk_update_request: operation not supported error, dev loop0, sector 208 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5266.748989] blk_update_request: operation not supported error, dev loop0, sector 160232 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5266.753277] blk_update_request: operation not supported error, dev loop0, sector 160240 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5266.755503] blk_update_request: operation not supported error, dev loop0, sector 160248 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5266.755786] blk_update_request: operation not supported error, dev loop0, sector 160256 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5266.756070] blk_update_request: operation not supported error, dev loop0, sector 160264 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5266.756368] blk_update_request: operation not supported error, dev loop0, sector 160272 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5266.756723] blk_update_request: operation not supported error, dev loop0, sector 160280 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5266.757461] blk_update_request: operation not supported error, dev loop0, sector 160288 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5267.814605] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 5267.852262] systemd[1]: tmp-mntOsZbdx.mount: Succeeded. [ 5270.360897] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 5270.367853] systemd[1]: tmp-mnti9BTHx.mount: Succeeded. 
[ 5271.741699] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 5273.850943] print_req_error: 8192 callbacks suppressed [ 5273.850946] blk_update_request: operation not supported error, dev loop4, sector 399872 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5273.862364] blk_update_request: operation not supported error, dev loop4, sector 8224 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5273.862895] blk_update_request: operation not supported error, dev loop4, sector 58264 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5273.868674] blk_update_request: operation not supported error, dev loop4, sector 58272 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5273.963335] LDISKFS-fs (loop4): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 5274.000700] systemd[1]: tmp-mnt0ijF5T.mount: Succeeded. [ 5274.945696] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 5274.981409] Lustre: lustre-MDT0000: mounting server target with '-t lustre' deprecated, use '-t lustre_tgt' [ 5274.982630] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 5276.151462] Lustre: Setting parameter lustre-MDT0000.mdt.identity_upcall in log lustre-MDT0000 [ 5276.161769] Lustre: ctl-lustre-MDT0000: No data found on store. Initialize space: rc = -61 [ 5276.215135] Lustre: lustre-MDT0000: new disk, initializing [ 5276.248692] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 5276.253328] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000200000400-0x0000000240000400]:0:mdt [ 5278.205307] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 5278.227804] systemd[1]: tmp-mntQPxNqY.mount: Succeeded. [ 5278.291729] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 5278.342240] Lustre: Setting parameter lustre-MDT0001.mdt.identity_upcall in log lustre-MDT0001 [ 5278.359113] Lustre: srv-lustre-MDT0001: No data found on store. Initialize space: rc = -61 [ 5278.359291] Lustre: Skipped 1 previous similar message [ 5278.421419] Lustre: lustre-MDT0001: new disk, initializing [ 5278.500520] Lustre: lustre-MDT0001: Imperative Recovery not enabled, recovery window 60-180 [ 5278.512814] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000240000400-0x0000000280000400]:1:mdt [ 5278.515429] Lustre: cli-ctl-lustre-MDT0001: Allocated super-sequence [0x0000000240000400-0x0000000280000400]:1:mdt] [ 5280.889093] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 5281.831210] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 5282.553123] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 5282.560621] systemd[1]: tmp-mntUNPKVt.mount: Succeeded. [ 5282.592707] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 5282.761806] Lustre: lustre-OST0000: new disk, initializing [ 5282.765211] Lustre: srv-lustre-OST0000: No data found on store. 
Initialize space: rc = -61 [ 5282.803179] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 5285.255976] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 5286.294009] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 40 [ 5289.457746] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000280000400-0x00000002c0000400]:0:ost [ 5289.458117] Lustre: cli-lustre-OST0000-super: Allocated super-sequence [0x0000000280000400-0x00000002c0000400]:0:ost] [ 5289.543857] Lustre: lustre-OST0000-osc-MDT0000: update sequence from 0x100000000 to 0x280000401 [ 5290.691765] Lustre: DEBUG MARKER: os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 4 sec [ 5291.662259] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 40 [ 5291.894479] Lustre: DEBUG MARKER: os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec [ 5292.015137] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 5294.480369] LustreError: 11-0: lustre-OST0000-osc-MDT0001: operation ost_create to node 0@lo failed: rc = -107 [ 5294.480634] Lustre: lustre-OST0000-osc-MDT0001: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 5294.488325] LustreError: Skipped 1 previous similar message [ 5294.489232] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 5298.233063] Lustre: server umount lustre-OST0000 complete [ 5298.629848] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 5299.600768] Lustre: lustre-MDT0000-lwp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 5299.606177] Lustre: Skipped 1 previous similar message [ 5299.609248] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 5299.613205] Lustre: Skipped 2 previous similar messages [ 5304.640773] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 5304.644949] Lustre: Skipped 1 previous similar message [ 5304.857924] Lustre: server umount lustre-MDT0000 complete [ 5305.308982] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 5305.346569] LustreError: 266223:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679935633 with bad export cookie 15274123494600984114 [ 5305.346893] LustreError: 267121:0:(osp_precreate.c:704:osp_precreate_send()) lustre-OST0000-osc-MDT0001: can't precreate: rc = -5 [ 5305.350523] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 5305.354199] LustreError: 267121:0:(osp_precreate.c:1405:osp_precreate_thread()) lustre-OST0000-osc-MDT0001: cannot precreate objects: rc = -5 [ 5306.572667] Lustre: DEBUG MARKER: SKIP: conf-sanity test_63 skipping excluded test 63 [ 5306.753418] Lustre: DEBUG MARKER: == conf-sanity test 64: check lfs df --lazy ============== 16:47:15 (1679935635) [ 5307.107449] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 5307.421932] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. 
[ 5307.449862] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 5308.439798] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 5309.835002] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 5310.587475] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 5311.080851] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 5312.471607] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 5313.156738] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 5313.192392] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 5313.242067] Lustre: lustre-OST0001: new disk, initializing [ 5313.243945] Lustre: srv-lustre-OST0001: No data found on store. Initialize space: rc = -61 [ 5313.307739] Lustre: lustre-OST0001: Imperative Recovery not enabled, recovery window 60-180 [ 5313.308081] Lustre: Skipped 2 previous similar messages [ 5315.504662] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0001-osc-[-0-9a-f]*.ost_server_uuid [ 5315.704437] Lustre: Mounted lustre-client [ 5315.851768] systemd[1]: mnt-lustre\x2dost2.mount: Succeeded. [ 5316.188024] Lustre: lustre-OST0001-osc-ffff8b78476d6000: Connection to lustre-OST0001 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 5316.191550] Lustre: lustre-OST0001: Not available for connect from 0@lo (stopping) [ 5316.193630] Lustre: Skipped 1 previous similar message [ 5321.283277] Lustre: lustre-OST0001: Not available for connect from 0@lo (stopping) [ 5321.287845] Lustre: Skipped 2 previous similar messages [ 5322.008782] Lustre: server umount lustre-OST0001 complete [ 5322.009528] Lustre: Skipped 1 previous similar message [ 5322.576419] Lustre: setting import lustre-MDT0000_UUID INACTIVE by administrator request [ 5326.322630] LustreError: 137-5: lustre-OST0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 5327.362639] LustreError: 137-5: lustre-OST0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 5327.365147] LustreError: Skipped 1 previous similar message [ 5332.400591] LustreError: 137-5: lustre-OST0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 5332.407897] LustreError: Skipped 2 previous similar messages [ 5333.042639] systemd[1]: mnt-lustre.mount: Succeeded. [ 5333.099121] Lustre: Unmounted lustre-client [ 5333.279422] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. 
[ 5334.480438] LustreError: 11-0: lustre-OST0000-osc-MDT0001: operation ost_statfs to node 0@lo failed: rc = -107 [ 5334.485876] LustreError: Skipped 1 previous similar message [ 5334.485989] Lustre: lustre-OST0000-osc-MDT0001: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 5334.488758] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 5334.488949] Lustre: Skipped 2 previous similar messages [ 5337.441833] LustreError: 137-5: lustre-OST0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 5337.447776] LustreError: Skipped 1 previous similar message [ 5339.541436] Lustre: server umount lustre-OST0000 complete [ 5339.839366] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 5342.481070] Lustre: lustre-MDT0000-lwp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 5342.485827] Lustre: Skipped 1 previous similar message [ 5346.110399] Lustre: server umount lustre-MDT0000 complete [ 5346.512738] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 5346.544163] LustreError: 267616:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679935674 with bad export cookie 15274123494600984954 [ 5346.566940] LustreError: 267616:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) Skipped 4 previous similar messages [ 5346.567206] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 5356.700629] LNet: 269186:0:(lib-ptl.c:956:lnet_clear_lazy_portal()) Active lazy portal 0 on exit [ 5356.707237] LNet: Removed LNI 192.168.125.30@tcp [ 5358.068786] systemd-udevd[1035]: Specified user 'tss' unknown [ 5358.149647] systemd-udevd[1035]: Specified group 'tss' unknown [ 5358.407053] systemd-udevd[269458]: Using default interface naming scheme 'rhel-8.0'. [ 5359.306359] systemd[1]: usr-sbin-mount.lustre.mount: Succeeded. [ 5363.124096] LNet: HW NUMA nodes: 1, HW CPU cores: 2, npartitions: 1 [ 5363.977881] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing set_hostid [ 5364.395226] systemd-udevd[1035]: Specified user 'tss' unknown [ 5364.483783] systemd-udevd[1035]: Specified group 'tss' unknown [ 5364.589306] systemd-udevd[270365]: Using default interface naming scheme 'rhel-8.0'. 
[ 5365.549284] Lustre: Lustre: Build Version: 2.15.54 [ 5365.706395] LNet: Added LNI 192.168.125.30@tcp [8/256/0/180] [ 5365.706706] LNet: Accept secure, port 988 [ 5366.587642] Lustre: Echo OBD driver; http://www.lustre.org/ [ 5370.685227] blk_update_request: operation not supported error, dev loop0, sector 399872 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5370.685638] blk_update_request: operation not supported error, dev loop0, sector 208 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5370.686131] blk_update_request: operation not supported error, dev loop0, sector 160232 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5370.686521] blk_update_request: operation not supported error, dev loop0, sector 160240 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5370.686804] blk_update_request: operation not supported error, dev loop0, sector 160248 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5370.687075] blk_update_request: operation not supported error, dev loop0, sector 160256 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5370.687352] blk_update_request: operation not supported error, dev loop0, sector 160264 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5370.687627] blk_update_request: operation not supported error, dev loop0, sector 160272 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5370.687901] blk_update_request: operation not supported error, dev loop0, sector 160280 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5370.688172] blk_update_request: operation not supported error, dev loop0, sector 160288 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5372.181482] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 5375.073198] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 5376.529834] print_req_error: 8188 callbacks suppressed [ 5376.529837] blk_update_request: operation not supported error, dev loop2, sector 399872 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5376.530989] blk_update_request: operation not supported error, dev loop2, sector 8224 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5376.531496] blk_update_request: operation not supported error, dev loop2, sector 58264 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5376.536112] blk_update_request: operation not supported error, dev loop2, sector 58272 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5376.607509] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 5378.283204] blk_update_request: operation not supported error, dev loop3, sector 399872 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5378.283652] blk_update_request: operation not supported error, dev loop3, sector 8224 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5378.284167] blk_update_request: operation not supported error, dev loop3, sector 58264 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5378.294069] blk_update_request: operation not supported error, dev loop3, sector 58272 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5378.359584] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 5379.091235] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: errors=remount-ro [ 5379.113742] systemd[1]: tmp-mntdyPc3U.mount: Succeeded. [ 5379.148339] Lustre: lustre-MDT0000: mounting server target with '-t lustre' deprecated, use '-t lustre_tgt' [ 5379.152178] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 5380.357230] Lustre: Setting parameter lustre-MDT0000.mdt.identity_upcall in log lustre-MDT0000 [ 5380.371734] Lustre: ctl-lustre-MDT0000: No data found on store. Initialize space: rc = -61 [ 5380.455791] Lustre: lustre-MDT0000: new disk, initializing [ 5380.490841] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 5380.519976] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000200000400-0x0000000240000400]:0:mdt [ 5382.453690] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 5382.467800] systemd[1]: tmp-mntD3qwiQ.mount: Succeeded. [ 5382.510626] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 5382.551736] Lustre: Setting parameter lustre-MDT0001.mdt.identity_upcall in log lustre-MDT0001 [ 5382.587874] Lustre: srv-lustre-MDT0001: No data found on store. Initialize space: rc = -61 [ 5382.588108] Lustre: Skipped 1 previous similar message [ 5382.646002] Lustre: lustre-MDT0001: new disk, initializing [ 5382.693154] Lustre: lustre-MDT0001: Imperative Recovery not enabled, recovery window 60-180 [ 5382.739311] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000240000400-0x0000000280000400]:1:mdt [ 5382.742998] Lustre: cli-ctl-lustre-MDT0001: Allocated super-sequence [0x0000000240000400-0x0000000280000400]:1:mdt] [ 5385.106465] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 5386.083088] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 5386.749849] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 5386.771173] systemd[1]: tmp-mnt1X7B4i.mount: Succeeded. [ 5386.792297] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 5386.937324] Lustre: lustre-OST0000: new disk, initializing [ 5386.937841] Lustre: srv-lustre-OST0000: No data found on store. 
Initialize space: rc = -61 [ 5386.967518] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 5389.622217] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 5390.810194] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 40 [ 5395.699073] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000280000400-0x00000002c0000400]:0:ost [ 5395.699938] Lustre: cli-lustre-OST0000-super: Allocated super-sequence [0x0000000280000400-0x00000002c0000400]:0:ost] [ 5395.722473] Lustre: lustre-OST0000-osc-MDT0000: update sequence from 0x100000000 to 0x280000401 [ 5396.301545] Lustre: DEBUG MARKER: os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 5 sec [ 5397.522950] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 40 [ 5397.798990] Lustre: DEBUG MARKER: os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec [ 5397.942414] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 5400.721633] Lustre: lustre-OST0000-osc-MDT0001: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 5400.729090] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107 [ 5400.729699] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 5404.188674] Lustre: server umount lustre-OST0000 complete [ 5404.497182] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 5405.761154] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 5405.766395] Lustre: Skipped 1 previous similar message [ 5405.767604] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 5405.768126] Lustre: Skipped 1 previous similar message [ 5410.759371] Lustre: server umount lustre-MDT0000 complete [ 5410.800948] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 5410.807145] LustreError: Skipped 1 previous similar message [ 5411.112805] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 5411.148059] LustreError: 272000:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679935739 with bad export cookie 3419387187032039425 [ 5411.148851] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 5412.359416] Lustre: DEBUG MARKER: == conf-sanity test 65: re-create the lost last_rcvd file when server mount ========================================================== 16:49:00 (1679935740) [ 5413.094227] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: (null) [ 5413.210316] systemd[1]: mnt-lustre\x2dbrpt.mount: Succeeded. [ 5413.794046] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 5414.069063] Lustre: lustre-MDT0000: new disk, initializing [ 5414.078540] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. 
[ 5414.101779] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 5415.015170] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 5416.440424] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 5417.417442] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 5417.639469] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 5417.661692] LustreError: 273554:0:(osp_object.c:637:osp_attr_get()) lustre-MDT0001-osp-MDT0000: osp_attr_get update error [0x200000009:0x1:0x0]: rc = -5 [ 5417.671208] LustreError: 273554:0:(lod_sub_object.c:932:lod_sub_prep_llog()) lustre-MDT0000-mdtlov: can't get id from catalogs: rc = -5 [ 5417.671489] LustreError: 273554:0:(lod_dev.c:525:lod_sub_recovery_thread()) lustre-MDT0001-osp-MDT0000: get update log duration 4, retries 0, failed: rc = -5 [ 5420.240435] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 5420.241150] Lustre: lustre-MDT0000-lwp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 5420.248551] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 5420.248553] Lustre: Skipped 1 previous similar message [ 5420.252473] Lustre: Skipped 2 previous similar messages [ 5423.886674] Lustre: server umount lustre-MDT0000 complete [ 5423.886842] Lustre: Skipped 1 previous similar message [ 5424.205515] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 5424.252464] LustreError: 273501:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679935752 with bad export cookie 3419387187032040258 [ 5424.262560] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 5424.269310] LustreError: 273501:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) Skipped 8 previous similar messages [ 5425.504306] Lustre: DEBUG MARKER: == conf-sanity test 66: replace nids ===================== 16:49:13 (1679935753) [ 5426.122338] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 5426.436253] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 5426.464815] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 5426.464976] Lustre: Skipped 1 previous similar message [ 5427.408180] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 5428.886294] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 5430.006508] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 5430.622496] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. 
Opts: errors=remount-ro,no_mbcache,nodelalloc [ 5432.173198] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 5432.484627] Lustre: Mounted lustre-client [ 5433.868684] Lustre: Permanently deactivating lustre-OST0000 [ 5433.874557] Lustre: Setting parameter lustre-OST0000-osc.osc.active in log lustre-client [ 5435.682101] Lustre: setting import lustre-OST0000_UUID INACTIVE by administrator request [ 5436.044526] LustreError: 275091:0:(mgs_llog.c:1605:mgs_replace_nids()) Only MGS is allowed to be started [ 5436.049063] LustreError: 275091:0:(mgs_handler.c:1071:mgs_iocontrol()) MGS: error replacing nids: rc = -115 [ 5436.341843] systemd[1]: mnt-lustre.mount: Succeeded. [ 5436.404038] Lustre: Unmounted lustre-client [ 5436.665293] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 5438.867068] Lustre: server umount lustre-OST0000 complete [ 5438.873165] Lustre: Skipped 1 previous similar message [ 5439.127544] LustreError: 275168:0:(mgs_llog.c:1605:mgs_replace_nids()) Only MGS is allowed to be started [ 5439.129193] LustreError: 275168:0:(mgs_llog.c:1605:mgs_replace_nids()) Skipped 1 previous similar message [ 5439.147815] LustreError: 275168:0:(mgs_handler.c:1071:mgs_iocontrol()) MGS: error replacing nids: rc = -115 [ 5439.150796] LustreError: 275168:0:(mgs_handler.c:1071:mgs_iocontrol()) Skipped 1 previous similar message [ 5439.310983] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 5442.560556] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 5442.565399] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 5442.569751] Lustre: Skipped 2 previous similar messages [ 5445.440064] Lustre: 275190:0:(client.c:2305:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1679935767/real 1679935767] req@00000000cd839eaa x1761540250305024/t0(0) o9->lustre-OST0000-osc-MDT0000@0@lo:28/4 lens 224/224 e 0 to 1 dl 1679935773 ref 2 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'' [ 5447.548420] Lustre: server umount lustre-MDT0000 complete [ 5447.610136] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 5447.611031] LustreError: Skipped 1 previous similar message [ 5447.980036] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 5447.985245] LustreError: 274245:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679935776 with bad export cookie 3419387187032040783 [ 5447.985611] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 5454.080210] Lustre: 275236:0:(client.c:2305:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1679935776/real 1679935776] req@00000000130d3368 x1761540250305856/t0(0) o9->lustre-OST0000-osc-MDT0001@0@lo:28/4 lens 224/224 e 0 to 1 dl 1679935782 ref 2 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'' [ 5455.259952] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 5455.569465] LustreError: 275428:0:(mgs_handler.c:1071:mgs_iocontrol()) MGS: error replacing nids: rc = -6 [ 5455.899129] Lustre: 275479:0:(mgs_llog.c:1311:mgs_replace_nids_handler()) Previous failover is deleted, but new one is not set. 
This means you configure system without failover or passed wrong replace_nids command parameters. Device lustre-MDT0000, passed nids 192.168.125.30@tcp [ 5456.127073] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 5456.175253] Lustre: server umount MGS complete [ 5456.175402] Lustre: Skipped 1 previous similar message [ 5456.977382] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 5457.200674] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 5457.201929] Lustre: setting import lustre-OST0000_UUID INACTIVE by administrator request [ 5457.202175] Lustre: Skipped 2 previous similar messages [ 5457.219480] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 5457.219670] Lustre: Skipped 2 previous similar messages [ 5457.790047] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 5459.006237] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 5459.828958] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 5459.940266] Lustre: Permanently reactivating lustre-OST0000 [ 5459.960275] Lustre: Modifying parameter lustre-OST0000-osc.osc.active in log lustre-client [ 5459.960443] Lustre: Skipped 2 previous similar messages [ 5484.640085] LustreError: 276200:0:(osp_dev.c:695:osp_process_config()) lustre-OST0000-osc-MDT0001: unknown param osc.active=1 [ 5485.423387] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 5485.648189] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 5485.648377] Lustre: Skipped 1 previous similar message [ 5486.537719] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 5494.813654] Lustre: Mounted lustre-client [ 5495.442637] systemd[1]: mnt-lustre.mount: Succeeded. [ 5495.517446] Lustre: Unmounted lustre-client [ 5495.647683] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 5498.800456] LustreError: 11-0: lustre-OST0000-osc-MDT0001: operation ost_statfs to node 0@lo failed: rc = -107 [ 5498.805469] Lustre: lustre-OST0000-osc-MDT0001: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 5498.805768] Lustre: Skipped 1 previous similar message [ 5498.807160] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 5501.855442] Lustre: server umount lustre-OST0000 complete [ 5502.203804] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 5503.280436] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 5508.898378] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. 
[ 5508.923880] LustreError: 275688:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679935837 with bad export cookie 3419387187032041784 [ 5508.925645] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 5513.121635] LNet: 277098:0:(lib-ptl.c:956:lnet_clear_lazy_portal()) Active lazy portal 0 on exit [ 5513.128251] LNet: Removed LNI 192.168.125.30@tcp [ 5514.329689] systemd-udevd[1035]: Specified user 'tss' unknown [ 5514.420185] systemd-udevd[1035]: Specified group 'tss' unknown [ 5514.468719] systemd-udevd[277227]: Using default interface naming scheme 'rhel-8.0'. [ 5515.364540] systemd[1]: usr-sbin-mount.lustre.mount: Succeeded. [ 5517.691884] LNet: HW NUMA nodes: 1, HW CPU cores: 2, npartitions: 1 [ 5518.592458] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing set_hostid [ 5518.980880] systemd-udevd[1035]: Specified user 'tss' unknown [ 5519.021668] systemd-udevd[1035]: Specified group 'tss' unknown [ 5519.214133] systemd-udevd[278083]: Using default interface naming scheme 'rhel-8.0'. [ 5520.208235] Lustre: Lustre: Build Version: 2.15.54 [ 5520.354148] LNet: Added LNI 192.168.125.30@tcp [8/256/0/180] [ 5520.354359] LNet: Accept secure, port 988 [ 5521.340120] Lustre: Echo OBD driver; http://www.lustre.org/ [ 5525.324782] blk_update_request: operation not supported error, dev loop0, sector 399872 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5525.338359] blk_update_request: operation not supported error, dev loop0, sector 208 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5525.339048] blk_update_request: operation not supported error, dev loop0, sector 160232 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5525.339573] blk_update_request: operation not supported error, dev loop0, sector 160240 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5525.339854] blk_update_request: operation not supported error, dev loop0, sector 160248 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5525.340256] blk_update_request: operation not supported error, dev loop0, sector 160256 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5525.340551] blk_update_request: operation not supported error, dev loop0, sector 160264 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5525.342086] blk_update_request: operation not supported error, dev loop0, sector 160272 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5525.342394] blk_update_request: operation not supported error, dev loop0, sector 160280 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5525.342681] blk_update_request: operation not supported error, dev loop0, sector 160288 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5526.251317] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 5529.455419] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. 
Opts: errors=remount-ro [ 5531.019733] print_req_error: 8188 callbacks suppressed [ 5531.019736] blk_update_request: operation not supported error, dev loop2, sector 399872 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5531.020331] blk_update_request: operation not supported error, dev loop2, sector 8224 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5531.020877] blk_update_request: operation not supported error, dev loop2, sector 58264 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5531.026087] blk_update_request: operation not supported error, dev loop2, sector 58272 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5531.127571] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 5533.140933] blk_update_request: operation not supported error, dev loop4, sector 399872 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5533.147513] blk_update_request: operation not supported error, dev loop4, sector 8224 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5533.148084] blk_update_request: operation not supported error, dev loop4, sector 58264 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5533.152829] blk_update_request: operation not supported error, dev loop4, sector 58272 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5533.276436] LDISKFS-fs (loop4): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 5534.001791] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 5534.008032] systemd[1]: tmp-mntxjpeSG.mount: Succeeded. [ 5534.051498] Lustre: lustre-MDT0000: mounting server target with '-t lustre' deprecated, use '-t lustre_tgt' [ 5534.066751] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 5535.311942] Lustre: Setting parameter lustre-MDT0000.mdt.identity_upcall in log lustre-MDT0000 [ 5535.328345] Lustre: ctl-lustre-MDT0000: No data found on store. Initialize space: rc = -61 [ 5535.403896] Lustre: lustre-MDT0000: new disk, initializing [ 5535.501073] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 5535.514326] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000200000400-0x0000000240000400]:0:mdt [ 5537.796814] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 5537.851386] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 5537.879068] Lustre: Setting parameter lustre-MDT0001.mdt.identity_upcall in log lustre-MDT0001 [ 5537.904243] Lustre: srv-lustre-MDT0001: No data found on store. 
Initialize space: rc = -61 [ 5537.904482] Lustre: Skipped 1 previous similar message [ 5537.980494] Lustre: lustre-MDT0001: new disk, initializing [ 5538.047621] Lustre: lustre-MDT0001: Imperative Recovery not enabled, recovery window 60-180 [ 5538.084900] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000240000400-0x0000000280000400]:1:mdt [ 5538.088061] Lustre: cli-ctl-lustre-MDT0001: Allocated super-sequence [0x0000000240000400-0x0000000280000400]:1:mdt] [ 5540.736090] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 5541.762674] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 5542.387811] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 5542.422512] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 5542.581157] Lustre: lustre-OST0000: new disk, initializing [ 5542.581702] Lustre: srv-lustre-OST0000: No data found on store. Initialize space: rc = -61 [ 5542.632412] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 5542.780745] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000280000400-0x00000002c0000400]:0:ost [ 5542.781368] Lustre: cli-lustre-OST0000-super: Allocated super-sequence [0x0000000280000400-0x00000002c0000400]:0:ost] [ 5542.874625] Lustre: lustre-OST0000-osc-MDT0000: update sequence from 0x100000000 to 0x280000401 [ 5544.897342] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 5545.984034] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 40 [ 5546.292427] Lustre: DEBUG MARKER: os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 0 sec [ 5547.334797] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 40 [ 5547.529362] Lustre: DEBUG MARKER: os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec [ 5547.697969] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 5547.841644] Lustre: lustre-OST0000-osc-MDT0001: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 5547.860203] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 5552.880573] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 5552.889837] Lustre: Skipped 1 previous similar message [ 5553.929726] Lustre: server umount lustre-OST0000 complete [ 5554.329213] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 5557.920528] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 5557.921251] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 5557.921513] Lustre: Skipped 2 previous similar messages [ 5557.921995] Lustre: Skipped 2 previous similar messages [ 5560.616753] Lustre: server umount lustre-MDT0000 complete [ 5560.974053] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. 
[ 5561.003204] LustreError: 279722:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679935889 with bad export cookie 11301221650930152471 [ 5561.005554] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 5561.005792] LustreError: 279722:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) Skipped 1 previous similar message [ 5562.322320] Lustre: DEBUG MARKER: == conf-sanity test 67: test routes conversion and configuration ========================================================== 16:51:30 (1679935890) [ 5563.101265] Lustre: DEBUG MARKER: == conf-sanity test 68: be able to reserve specific sequences in FLDB ========================================================== 16:51:31 (1679935891) [ 5563.709790] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 5563.937728] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 5563.957902] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 5564.687478] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 5566.059853] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 5567.025854] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 5567.622319] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 5569.139419] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 5569.350287] Lustre: ctl-lustre-MDT0000: [0x00000002c0000400-0x0000000300000400]:0:mdt sequences allocated: rc = 0 [ 5569.621419] Lustre: Mounted lustre-client [ 5569.771913] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000300000400-0x0000000340000400]:0:mdt [ 5576.124764] systemd[1]: mnt-lustre.mount: Succeeded. [ 5576.192809] Lustre: Unmounted lustre-client [ 5576.325645] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 5578.960354] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107 [ 5578.964604] Lustre: lustre-OST0000-osc-MDT0000: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 5578.966230] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 5582.563576] Lustre: server umount lustre-OST0000 complete [ 5582.569615] Lustre: Skipped 1 previous similar message [ 5582.939512] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 5584.960438] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 5584.962567] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 5584.962671] Lustre: Skipped 1 previous similar message [ 5584.962898] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 5584.962952] Lustre: Skipped 2 previous similar messages [ 5589.146961] Lustre: server umount lustre-MDT0000 complete [ 5589.531686] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. 
[ 5589.574805] LustreError: 281198:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679935917 with bad export cookie 11301221650930153241 [ 5589.577180] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 5589.581689] LustreError: 281198:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) Skipped 4 previous similar messages [ 5593.440867] LNet: 282448:0:(lib-ptl.c:956:lnet_clear_lazy_portal()) Active lazy portal 0 on exit [ 5593.451752] LNet: Removed LNI 192.168.125.30@tcp [ 5594.765470] systemd-udevd[1035]: Specified user 'tss' unknown [ 5594.777318] systemd-udevd[1035]: Specified group 'tss' unknown [ 5594.983128] systemd-udevd[282795]: Using default interface naming scheme 'rhel-8.0'. [ 5595.892937] systemd[1]: usr-sbin-mount.lustre.mount: Succeeded. [ 5596.693277] LNet: HW NUMA nodes: 1, HW CPU cores: 2, npartitions: 1 [ 5597.494717] Lustre: DEBUG MARKER: SKIP: conf-sanity test_69 skipping SLOW test 69 [ 5597.729054] Lustre: DEBUG MARKER: == conf-sanity test 70a: start MDT0, then OST, then MDT1 ========================================================== 16:52:05 (1679935925) [ 5599.256983] systemd-udevd[1035]: Specified user 'tss' unknown [ 5599.280080] systemd-udevd[1035]: Specified group 'tss' unknown [ 5599.375071] systemd-udevd[283398]: Using default interface naming scheme 'rhel-8.0'. [ 5600.632276] systemd-udevd[1035]: Specified user 'tss' unknown [ 5600.636639] systemd-udevd[1035]: Specified group 'tss' unknown [ 5600.748439] systemd-udevd[283677]: Using default interface naming scheme 'rhel-8.0'. [ 5601.080322] LNet: HW NUMA nodes: 1, HW CPU cores: 2, npartitions: 2 [ 5602.402896] Lustre: Lustre: Build Version: 2.15.54 [ 5602.627643] LNet: Added LNI 192.168.125.30@tcp [8/256/0/180] [ 5602.627870] LNet: Accept secure, port 988 [ 5603.549395] Lustre: Echo OBD driver; http://www.lustre.org/ [ 5605.663119] Lustre: lustre-MDT0000: mounting server target with '-t lustre' deprecated, use '-t lustre_tgt' [ 5605.665529] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 5606.894715] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 5606.908874] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 5607.762351] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 5607.913537] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 5607.919866] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 5608.625933] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 5609.194115] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 5609.387867] Lustre: lustre-MDT0001: Imperative Recovery not enabled, recovery window 60-180 [ 5611.933993] Lustre: lustre-OST0000: deleting orphan objects from 0x280000401:35 to 0x280000401:65 [ 5611.941635] Lustre: Mounted lustre-client [ 5612.622957] systemd[1]: mnt-lustre.mount: Succeeded. [ 5612.673392] Lustre: Unmounted lustre-client [ 5612.845289] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. 
[ 5614.400690] LustreError: 11-0: lustre-OST0000-osc-MDT0001: operation ost_statfs to node 0@lo failed: rc = -107 [ 5614.405814] Lustre: lustre-OST0000-osc-MDT0001: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 5614.408436] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 5616.960634] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107 [ 5616.962002] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 5616.966277] Lustre: lustre-OST0000-osc-MDT0000: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 5616.966613] Lustre: Skipped 1 previous similar message [ 5619.109394] Lustre: server umount lustre-OST0000 complete [ 5619.505977] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 5622.000916] Lustre: lustre-MDT0000-lwp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 5622.006988] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 5625.712535] Lustre: server umount lustre-MDT0000 complete [ 5626.122623] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 5626.142513] LustreError: 284319:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679935954 with bad export cookie 15658833652669319055 [ 5626.143098] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 5630.070696] LNet: 285378:0:(lib-ptl.c:956:lnet_clear_lazy_portal()) Active lazy portal 0 on exit [ 5630.071103] LNet: Removed LNI 192.168.125.30@tcp [ 5631.383167] systemd-udevd[1035]: Specified user 'tss' unknown [ 5631.418535] systemd-udevd[1035]: Specified group 'tss' unknown [ 5631.599263] systemd-udevd[285722]: Using default interface naming scheme 'rhel-8.0'. [ 5632.397602] systemd[1]: usr-sbin-mount.lustre.mount: Succeeded. [ 5633.065808] LNet: HW NUMA nodes: 1, HW CPU cores: 2, npartitions: 1 [ 5633.898778] Lustre: DEBUG MARKER: == conf-sanity test 70b: start OST, MDT1, MDT0 =========== 16:52:41 (1679935961) [ 5634.199429] systemd-udevd[1035]: Specified user 'tss' unknown [ 5634.206639] systemd-udevd[1035]: Specified group 'tss' unknown [ 5634.349599] systemd-udevd[285974]: Using default interface naming scheme 'rhel-8.0'. [ 5635.130730] Lustre: Lustre: Build Version: 2.15.54 [ 5635.274877] LNet: Added LNI 192.168.125.30@tcp [8/256/0/180] [ 5635.275157] LNet: Accept secure, port 988 [ 5636.310238] Lustre: Echo OBD driver; http://www.lustre.org/ [ 5638.962625] Lustre: lustre-OST0000: mounting server target with '-t lustre' deprecated, use '-t lustre_tgt' [ 5638.964154] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 5656.800222] LustreError: 286729:0:(mgc_request.c:252:do_config_log_add()) MGC192.168.125.30@tcp: failed processing log, type 1: rc = -5 [ 5688.000090] LustreError: 286729:0:(mgc_request.c:252:do_config_log_add()) MGC192.168.125.30@tcp: failed processing log, type 4: rc = -110 [ 5718.160092] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 5719.299703] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 5719.809907] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 5739.084384] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 5739.107262] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 5739.141003] Lustre: lustre-OST0000: deleting orphan objects from 0x280000401:35 to 0x280000401:97 [ 5740.071117] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 5740.262888] Lustre: lustre-MDT0001: Imperative Recovery not enabled, recovery window 60-180 [ 5741.596158] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 5742.599212] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 5747.929732] Lustre: Mounted lustre-client [ 5748.557400] systemd[1]: mnt-lustre.mount: Succeeded. [ 5748.625500] Lustre: Unmounted lustre-client [ 5748.759145] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 5748.794783] Lustre: lustre-OST0000-osc-MDT0000: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 5748.801424] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 5753.841563] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 5753.845983] Lustre: Skipped 1 previous similar message [ 5754.980198] Lustre: server umount lustre-OST0000 complete [ 5755.263640] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 5755.360434] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 5755.360570] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 5755.360654] Lustre: Skipped 1 previous similar message [ 5755.360901] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 5755.360972] Lustre: Skipped 1 previous similar message [ 5758.880558] Lustre: lustre-MDT0000-lwp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 5758.880925] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 5758.883262] Lustre: Skipped 1 previous similar message [ 5761.468736] Lustre: server umount lustre-MDT0000 complete [ 5761.783437] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 5761.816519] LustreError: 286737:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679936090 with bad export cookie 15454088629833376605 [ 5761.823671] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 5765.730420] LNet: 287955:0:(lib-ptl.c:956:lnet_clear_lazy_portal()) Active lazy portal 0 on exit [ 5765.738108] LNet: Removed LNI 192.168.125.30@tcp [ 5767.094163] systemd-udevd[1035]: Specified user 'tss' unknown [ 5767.112075] systemd-udevd[1035]: Specified group 'tss' unknown [ 5767.316159] systemd-udevd[288299]: Using default interface naming scheme 'rhel-8.0'. [ 5768.265954] systemd[1]: usr-sbin-mount.lustre.mount: Succeeded. 
[ 5769.167754] LNet: HW NUMA nodes: 1, HW CPU cores: 2, npartitions: 1 [ 5770.055519] Lustre: DEBUG MARKER: == conf-sanity test 70c: stop MDT0, mkdir fail, create remote dir fail ========================================================== 16:54:57 (1679936097) [ 5770.504546] systemd-udevd[1035]: Specified user 'tss' unknown [ 5770.552600] systemd-udevd[1035]: Specified group 'tss' unknown [ 5770.700380] systemd-udevd[288741]: Using default interface naming scheme 'rhel-8.0'. [ 5771.698488] Lustre: Lustre: Build Version: 2.15.54 [ 5771.839791] LNet: Added LNI 192.168.125.30@tcp [8/256/0/180] [ 5771.840291] LNet: Accept secure, port 988 [ 5772.689418] Lustre: Echo OBD driver; http://www.lustre.org/ [ 5775.323191] Lustre: lustre-MDT0000: mounting server target with '-t lustre' deprecated, use '-t lustre_tgt' [ 5775.331489] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 5776.644372] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 5776.671355] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 5777.680953] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 5777.833733] Lustre: lustre-MDT0001: Imperative Recovery not enabled, recovery window 60-180 [ 5779.254651] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 5780.254269] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 5780.928344] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 5781.108836] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 5782.174204] Lustre: lustre-OST0000: deleting orphan objects from 0x280000401:35 to 0x280000401:129 [ 5782.432738] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 5782.688509] Lustre: Mounted lustre-client [ 5782.861886] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 5783.681681] Lustre: lustre-MDT0000-lwp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 5783.691676] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 5783.695905] Lustre: Skipped 1 previous similar message [ 5788.721501] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 5788.727789] Lustre: Skipped 5 previous similar messages [ 5789.070533] Lustre: server umount lustre-MDT0000 complete [ 5789.299453] Lustre: setting import lustre-MDT0000_UUID INACTIVE by administrator request [ 5789.324584] LustreError: 290130:0:(file.c:5388:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000007:0x1:0x0] error: rc = -108 [ 5789.494869] systemd[1]: mnt-lustre.mount: Succeeded. [ 5789.541736] LustreError: 289314:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679936117 with bad export cookie 2331949119259452855 [ 5789.548997] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 5789.568198] Lustre: Unmounted lustre-client [ 5789.690686] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. 
[ 5793.760377] LustreError: 11-0: lustre-OST0000-osc-MDT0001: operation ost_statfs to node 0@lo failed: rc = -107 [ 5793.760809] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 5793.770962] Lustre: lustre-OST0000-osc-MDT0001: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 5793.770963] Lustre: Skipped 3 previous similar messages [ 5793.771465] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 5793.773462] LustreError: Skipped 1 previous similar message [ 5794.800937] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 5794.805913] LustreError: Skipped 1 previous similar message [ 5795.934277] Lustre: server umount lustre-OST0000 complete [ 5796.579682] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 5800.450321] LNet: 290574:0:(lib-ptl.c:956:lnet_clear_lazy_portal()) Active lazy portal 0 on exit [ 5800.458798] LNet: Removed LNI 192.168.125.30@tcp [ 5801.758649] systemd-udevd[1035]: Specified user 'tss' unknown [ 5801.857788] systemd-udevd[1035]: Specified group 'tss' unknown [ 5802.038687] systemd-udevd[290817]: Using default interface naming scheme 'rhel-8.0'. [ 5802.770243] systemd[1]: usr-sbin-mount.lustre.mount: Succeeded. [ 5803.708989] LNet: HW NUMA nodes: 1, HW CPU cores: 2, npartitions: 1 [ 5804.590641] Lustre: DEBUG MARKER: == conf-sanity test 70d: stop MDT1, mkdir succeed, create remote dir fail ========================================================== 16:55:31 (1679936131) [ 5805.073768] systemd-udevd[1035]: Specified user 'tss' unknown [ 5805.100220] systemd-udevd[1035]: Specified group 'tss' unknown [ 5805.332258] systemd-udevd[291235]: Using default interface naming scheme 'rhel-8.0'. [ 5806.147620] Lustre: Lustre: Build Version: 2.15.54 [ 5806.289975] LNet: Added LNI 192.168.125.30@tcp [8/256/0/180] [ 5806.293625] LNet: Accept secure, port 988 [ 5807.292535] Lustre: Echo OBD driver; http://www.lustre.org/ [ 5809.746909] Lustre: lustre-MDT0000: mounting server target with '-t lustre' deprecated, use '-t lustre_tgt' [ 5809.762397] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 5811.085636] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 5811.097636] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 5811.834285] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 5811.999431] Lustre: lustre-MDT0001: Imperative Recovery not enabled, recovery window 60-180 [ 5813.434344] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 5814.422302] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 5815.059307] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. 
Opts: errors=remount-ro,no_mbcache,nodelalloc [ 5815.273491] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 5816.256892] Lustre: lustre-OST0000: deleting orphan objects from 0x280000401:35 to 0x280000401:161 [ 5816.501430] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 5816.794392] Lustre: Mounted lustre-client [ 5817.084854] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 5820.240461] LustreError: 11-0: lustre-MDT0001-osp-MDT0000: operation mds_statfs to node 0@lo failed: rc = -107 [ 5820.246616] Lustre: lustre-MDT0001-osp-MDT0000: Connection to lustre-MDT0001 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 5820.249575] Lustre: lustre-MDT0001: Not available for connect from 0@lo (stopping) [ 5821.760478] Lustre: lustre-MDT0001: Not available for connect from 0@lo (stopping) [ 5821.761430] Lustre: lustre-MDT0001-lwp-OST0000: Connection to lustre-MDT0001 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 5823.309270] Lustre: server umount lustre-MDT0001 complete [ 5823.609936] Lustre: setting import lustre-MDT0001_UUID INACTIVE by administrator request [ 5823.952353] systemd[1]: mnt-lustre.mount: Succeeded. [ 5824.025362] Lustre: Unmounted lustre-client [ 5824.213596] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 5826.320416] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107 [ 5826.320588] Lustre: lustre-OST0000-osc-MDT0000: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 5826.320925] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 5826.329522] Lustre: Skipped 2 previous similar messages [ 5826.800554] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 5830.534799] Lustre: server umount lustre-OST0000 complete [ 5830.795801] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 5834.500347] LNet: 293201:0:(lib-ptl.c:956:lnet_clear_lazy_portal()) Active lazy portal 0 on exit [ 5834.502637] LNet: Removed LNI 192.168.125.30@tcp [ 5835.760451] systemd-udevd[1035]: Specified user 'tss' unknown [ 5835.883484] systemd-udevd[1035]: Specified group 'tss' unknown [ 5835.897479] systemd-udevd[293449]: Using default interface naming scheme 'rhel-8.0'. [ 5836.743382] systemd[1]: usr-sbin-mount.lustre.mount: Succeeded. [ 5837.605349] LNet: HW NUMA nodes: 1, HW CPU cores: 2, npartitions: 1 [ 5838.496838] Lustre: DEBUG MARKER: == conf-sanity test 70e: Sync-on-Cancel will be enabled by default on DNE ========================================================== 16:56:05 (1679936165) [ 5841.020741] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing set_hostid [ 5841.288440] systemd-udevd[1035]: Specified user 'tss' unknown [ 5841.337125] systemd-udevd[1035]: Specified group 'tss' unknown [ 5841.412813] systemd-udevd[294081]: Using default interface naming scheme 'rhel-8.0'. 
[ 5842.699239] Lustre: Lustre: Build Version: 2.15.54 [ 5842.888578] LNet: Added LNI 192.168.125.30@tcp [8/256/0/180] [ 5842.888808] LNet: Accept secure, port 988 [ 5843.994017] Lustre: Echo OBD driver; http://www.lustre.org/ [ 5848.193192] blk_update_request: operation not supported error, dev loop0, sector 399872 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5848.194238] blk_update_request: operation not supported error, dev loop0, sector 208 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5848.194820] blk_update_request: operation not supported error, dev loop0, sector 160232 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5848.195320] blk_update_request: operation not supported error, dev loop0, sector 160240 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5848.195599] blk_update_request: operation not supported error, dev loop0, sector 160248 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5848.195875] blk_update_request: operation not supported error, dev loop0, sector 160256 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5848.196152] blk_update_request: operation not supported error, dev loop0, sector 160264 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5848.196432] blk_update_request: operation not supported error, dev loop0, sector 160272 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5848.196705] blk_update_request: operation not supported error, dev loop0, sector 160280 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5848.196976] blk_update_request: operation not supported error, dev loop0, sector 160288 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5849.582922] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 5852.930694] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 5854.635264] print_req_error: 8188 callbacks suppressed [ 5854.635266] blk_update_request: operation not supported error, dev loop2, sector 399872 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5854.635969] blk_update_request: operation not supported error, dev loop2, sector 8224 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5854.636348] blk_update_request: operation not supported error, dev loop2, sector 58264 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5854.651991] blk_update_request: operation not supported error, dev loop2, sector 58272 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5854.723331] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 5856.543922] blk_update_request: operation not supported error, dev loop4, sector 399872 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5856.565682] blk_update_request: operation not supported error, dev loop4, sector 8224 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5856.566271] blk_update_request: operation not supported error, dev loop4, sector 58264 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5856.571061] blk_update_request: operation not supported error, dev loop4, sector 58272 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5856.698803] LDISKFS-fs (loop4): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 5856.705038] systemd[1]: tmp-mntgor9Ht.mount: Succeeded. 
[ 5857.203147] systemd-udevd[1035]: Specified user 'tss' unknown [ 5857.468982] systemd-udevd[1035]: Specified group 'tss' unknown [ 5857.600415] systemd-udevd[296075]: Using default interface naming scheme 'rhel-8.0'. [ 5861.858368] blk_update_request: operation not supported error, dev loop0, sector 399872 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5861.859145] blk_update_request: operation not supported error, dev loop0, sector 208 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5861.860914] blk_update_request: operation not supported error, dev loop0, sector 160232 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5861.861388] blk_update_request: operation not supported error, dev loop0, sector 160240 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5861.861746] blk_update_request: operation not supported error, dev loop0, sector 160248 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5861.862107] blk_update_request: operation not supported error, dev loop0, sector 160256 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5861.864138] blk_update_request: operation not supported error, dev loop0, sector 160264 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5861.864495] blk_update_request: operation not supported error, dev loop0, sector 160272 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5861.864843] blk_update_request: operation not supported error, dev loop0, sector 160280 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5861.865205] blk_update_request: operation not supported error, dev loop0, sector 160288 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5862.991473] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 5863.706546] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 5863.733792] systemd[1]: tmp-mntQmLPkI.mount: Succeeded. [ 5863.761870] Lustre: lustre-MDT0000: mounting server target with '-t lustre' deprecated, use '-t lustre_tgt' [ 5863.771810] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 5864.986738] Lustre: Setting parameter lustre-MDT0000.mdt.identity_upcall in log lustre-MDT0000 [ 5865.004739] Lustre: ctl-lustre-MDT0000: No data found on store. Initialize space: rc = -61 [ 5865.044957] Lustre: lustre-MDT0000: new disk, initializing [ 5865.101709] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 5865.112062] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000200000400-0x0000000240000400]:0:mdt [ 5868.035800] print_req_error: 4089 callbacks suppressed [ 5868.035802] blk_update_request: operation not supported error, dev loop2, sector 399872 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5868.036120] blk_update_request: operation not supported error, dev loop2, sector 8224 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5868.038515] blk_update_request: operation not supported error, dev loop2, sector 58264 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5868.043205] blk_update_request: operation not supported error, dev loop2, sector 58272 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5868.161881] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 5868.180204] systemd[1]: tmp-mntNkmgLy.mount: Succeeded. 
[ 5868.849339] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 5868.882410] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 5869.026523] Lustre: lustre-OST0000: new disk, initializing [ 5869.027017] Lustre: srv-lustre-OST0000: No data found on store. Initialize space: rc = -61 [ 5869.027146] Lustre: Skipped 1 previous similar message [ 5869.071865] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 5871.321094] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 5873.299039] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000240000400-0x0000000280000400]:0:ost [ 5873.299416] Lustre: cli-lustre-OST0000-super: Allocated super-sequence [0x0000000240000400-0x0000000280000400]:0:ost] [ 5873.306485] Lustre: lustre-OST0000-osc-MDT0000: update sequence from 0x100000000 to 0x240000400 [ 5878.328002] Lustre: Mounted lustre-client [ 5879.897260] blk_update_request: operation not supported error, dev loop1, sector 399872 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5879.913145] blk_update_request: operation not supported error, dev loop1, sector 208 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5879.913677] blk_update_request: operation not supported error, dev loop1, sector 160232 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5879.914077] blk_update_request: operation not supported error, dev loop1, sector 160240 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5879.914357] blk_update_request: operation not supported error, dev loop1, sector 160248 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5879.914634] blk_update_request: operation not supported error, dev loop1, sector 160256 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5879.914930] blk_update_request: operation not supported error, dev loop1, sector 160264 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5879.915212] blk_update_request: operation not supported error, dev loop1, sector 160272 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5879.915489] blk_update_request: operation not supported error, dev loop1, sector 160280 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5879.915764] blk_update_request: operation not supported error, dev loop1, sector 160288 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5881.325003] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 5881.363779] systemd[1]: tmp-mntb57EKJ.mount: Succeeded. [ 5882.054096] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 5882.092571] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 5882.127921] Lustre: Setting parameter lustre-MDT0001.mdt.identity_upcall in log lustre-MDT0001 [ 5882.167168] Lustre: srv-lustre-MDT0001: No data found on store. 
Initialize space: rc = -61 [ 5882.225682] Lustre: lustre-MDT0001: new disk, initializing [ 5882.360330] Lustre: lustre-MDT0001: Imperative Recovery not enabled, recovery window 60-180 [ 5882.393070] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000280000400-0x00000002c0000400]:1:mdt [ 5882.398427] Lustre: cli-ctl-lustre-MDT0001: Allocated super-sequence [0x0000000280000400-0x00000002c0000400]:1:mdt] [ 5884.795074] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state (FULL|IDLE) osp.lustre-MDT0000-osp-MDT0001.mdt_server_uuid 40 [ 5885.009696] Lustre: DEBUG MARKER: osp.lustre-MDT0000-osp-MDT0001.mdt_server_uuid in FULL state after 0 sec [ 5886.045490] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state (FULL|IDLE) osp.lustre-MDT0001-osp-MDT0000.mdt_server_uuid 40 [ 5886.266597] Lustre: DEBUG MARKER: osp.lustre-MDT0001-osp-MDT0000.mdt_server_uuid in FULL state after 0 sec [ 5886.665043] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 5888.242096] Lustre: lustre-MDT0001-lwp-OST0000: Connection to lustre-MDT0001 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 5888.242424] LustreError: 11-0: lustre-MDT0001-osp-MDT0000: operation mds_statfs to node 0@lo failed: rc = -107 [ 5888.247546] Lustre: lustre-MDT0001: Not available for connect from 0@lo (stopping) [ 5892.922455] Lustre: server umount lustre-MDT0001 complete [ 5893.281070] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 5893.284537] LustreError: Skipped 2 previous similar messages [ 5893.434203] Lustre: setting import lustre-MDT0000_UUID INACTIVE by administrator request [ 5898.321539] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 5898.321966] LustreError: Skipped 1 previous similar message [ 5903.361415] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 5903.362037] LustreError: Skipped 1 previous similar message [ 5903.842355] systemd[1]: mnt-lustre.mount: Succeeded. [ 5903.896608] Lustre: Unmounted lustre-client [ 5904.096746] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 5908.400719] Lustre: lustre-OST0000-osc-MDT0000: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 5908.401984] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 5908.407923] Lustre: Skipped 2 previous similar messages [ 5908.409113] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 5908.409115] Lustre: Skipped 2 previous similar messages [ 5913.440867] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 5913.441082] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 5918.481207] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 5918.560049] Lustre: lustre-OST0000 is waiting for obd_unlinked_exports more than 8 seconds. The obd refcount = 2. Is it stuck? 
[ 5918.658238] Lustre: server umount lustre-OST0000 complete [ 5918.871822] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 5925.285047] Lustre: server umount lustre-MDT0000 complete [ 5929.130590] LNet: 298552:0:(lib-ptl.c:956:lnet_clear_lazy_portal()) Active lazy portal 0 on exit [ 5929.139819] LNet: Removed LNI 192.168.125.30@tcp [ 5930.394177] systemd-udevd[1035]: Specified user 'tss' unknown [ 5930.468479] systemd-udevd[1035]: Specified group 'tss' unknown [ 5930.516965] systemd-udevd[298714]: Using default interface naming scheme 'rhel-8.0'. [ 5931.228881] systemd[1]: usr-sbin-mount.lustre.mount: Succeeded. [ 5932.076463] LNet: HW NUMA nodes: 1, HW CPU cores: 2, npartitions: 1 [ 5932.974932] Lustre: DEBUG MARKER: == conf-sanity test 71a: start MDT0 OST0, MDT1, OST1 ===== 16:57:40 (1679936260) [ 5933.176710] Lustre: DEBUG MARKER: SKIP: conf-sanity test_71a needs separate MGS/MDT [ 5933.473468] Lustre: DEBUG MARKER: == conf-sanity test 71b: start MDT1, OST0, MDT0, OST1 ==== 16:57:41 (1679936261) [ 5933.682304] Lustre: DEBUG MARKER: SKIP: conf-sanity test_71b needs separate MGS/MDT [ 5933.949954] Lustre: DEBUG MARKER: == conf-sanity test 71c: start OST0, OST1, MDT1, MDT0 ==== 16:57:42 (1679936262) [ 5934.188718] Lustre: DEBUG MARKER: SKIP: conf-sanity test_71c needs separate MGS/MDT [ 5934.445099] Lustre: DEBUG MARKER: == conf-sanity test 71d: start OST0, MDT1, MDT0, OST1 ==== 16:57:42 (1679936262) [ 5934.669224] Lustre: DEBUG MARKER: SKIP: conf-sanity test_71d needs separate MGS/MDT [ 5934.998727] Lustre: DEBUG MARKER: == conf-sanity test 71e: start OST0, MDT1, OST1, MDT0 ==== 16:57:43 (1679936263) [ 5935.231276] Lustre: DEBUG MARKER: SKIP: conf-sanity test_71e needs separate MGS/MDT [ 5935.502832] Lustre: DEBUG MARKER: == conf-sanity test 72: test fast symlink with extents flag enabled ========================================================== 16:57:43 (1679936263) [ 5937.021295] systemd-udevd[1035]: Specified user 'tss' unknown [ 5937.029796] systemd-udevd[1035]: Specified group 'tss' unknown [ 5937.223223] systemd-udevd[299820]: Using default interface naming scheme 'rhel-8.0'. [ 5938.425801] systemd-udevd[1035]: Specified user 'tss' unknown [ 5938.433649] systemd-udevd[1035]: Specified group 'tss' unknown [ 5938.721843] systemd-udevd[300017]: Using default interface naming scheme 'rhel-8.0'. 
[ 5939.115940] LNet: HW NUMA nodes: 1, HW CPU cores: 2, npartitions: 2 [ 5940.360752] Lustre: Lustre: Build Version: 2.15.54 [ 5940.579234] LNet: Added LNI 192.168.125.30@tcp [8/256/0/180] [ 5940.579561] LNet: Accept secure, port 988 [ 5941.753510] Lustre: Echo OBD driver; http://www.lustre.org/ [ 5945.315479] print_req_error: 4089 callbacks suppressed [ 5945.315482] blk_update_request: operation not supported error, dev loop0, sector 399872 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5945.318101] blk_update_request: operation not supported error, dev loop0, sector 208 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5945.318660] blk_update_request: operation not supported error, dev loop0, sector 160232 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5945.319271] blk_update_request: operation not supported error, dev loop0, sector 160240 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5945.319834] blk_update_request: operation not supported error, dev loop0, sector 160248 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5945.328964] blk_update_request: operation not supported error, dev loop0, sector 160256 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5945.329330] blk_update_request: operation not supported error, dev loop0, sector 160264 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5945.329621] blk_update_request: operation not supported error, dev loop0, sector 160272 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5945.329908] blk_update_request: operation not supported error, dev loop0, sector 160280 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5945.330300] blk_update_request: operation not supported error, dev loop0, sector 160288 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5946.675846] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 5946.698503] systemd[1]: tmp-mntr2R5Qv.mount: Succeeded. [ 5949.627453] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 5951.222882] print_req_error: 8188 callbacks suppressed [ 5951.222885] blk_update_request: operation not supported error, dev loop2, sector 399872 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5951.229789] blk_update_request: operation not supported error, dev loop2, sector 8224 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5951.244836] blk_update_request: operation not supported error, dev loop2, sector 58264 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5951.254766] blk_update_request: operation not supported error, dev loop2, sector 58272 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 5951.323668] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 5951.976750] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 5951.993824] systemd[1]: tmp-mnt0Bw2Dd.mount: Succeeded. [ 5952.032405] Lustre: lustre-MDT0000: mounting server target with '-t lustre' deprecated, use '-t lustre_tgt' [ 5952.043083] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 5953.456813] Lustre: Setting parameter lustre-MDT0000.mdt.identity_upcall in log lustre-MDT0000 [ 5953.468519] Lustre: ctl-lustre-MDT0000: No data found on store. 
Initialize space: rc = -61 [ 5953.542065] Lustre: lustre-MDT0000: new disk, initializing [ 5953.589461] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 5953.592338] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000200000400-0x0000000240000400]:0:mdt [ 5956.003311] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 5956.043427] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 5956.106647] Lustre: Setting parameter lustre-MDT0001.mdt.identity_upcall in log lustre-MDT0001 [ 5956.127560] Lustre: srv-lustre-MDT0001: No data found on store. Initialize space: rc = -61 [ 5956.127747] Lustre: Skipped 1 previous similar message [ 5956.186347] Lustre: lustre-MDT0001: new disk, initializing [ 5956.296016] Lustre: lustre-MDT0001: Imperative Recovery not enabled, recovery window 60-180 [ 5956.343148] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000240000400-0x0000000280000400]:1:mdt [ 5956.343548] Lustre: cli-ctl-lustre-MDT0001: Allocated super-sequence [0x0000000240000400-0x0000000280000400]:1:mdt] [ 5958.769896] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 5959.777618] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 5960.444422] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 5960.468530] systemd[1]: tmp-mntRtuBoS.mount: Succeeded. [ 5960.493444] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 5960.774985] Lustre: lustre-OST0000: new disk, initializing [ 5960.775513] Lustre: srv-lustre-OST0000: No data found on store. Initialize space: rc = -61 [ 5960.820489] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 5963.114762] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 5964.984093] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000280000400-0x00000002c0000400]:0:ost [ 5964.984475] Lustre: cli-lustre-OST0000-super: Allocated super-sequence [0x0000000280000400-0x00000002c0000400]:0:ost] [ 5965.003140] Lustre: lustre-OST0000-osc-MDT0000: update sequence from 0x100000000 to 0x280000401 [ 5970.009869] Lustre: Mounted lustre-client [ 5970.811722] systemd[1]: mnt-lustre.mount: Succeeded. [ 5970.883430] Lustre: Unmounted lustre-client [ 5971.141211] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 5971.360379] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 5971.362252] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 5971.362575] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 5975.040616] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 5975.041416] Lustre: lustre-MDT0000-lwp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 5977.436169] Lustre: server umount lustre-MDT0000 complete [ 5977.784248] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. 
[ 5977.841172] LustreError: 301460:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679936306 with bad export cookie 428275515291143042 [ 5977.842309] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 5977.848505] LustreError: 301460:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) Skipped 1 previous similar message [ 5978.138340] Lustre: server umount lustre-MDT0001 complete [ 5978.519186] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 5984.640057] Lustre: 302455:0:(client.c:2305:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1679936306/real 1679936306] req@000000001bf7683d x1761540852173568/t0(0) o39->lustre-MDT0001-lwp-OST0000@0@lo:12/10 lens 224/224 e 0 to 1 dl 1679936312 ref 2 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'' [ 5984.932187] Lustre: server umount lustre-OST0000 complete [ 5985.938128] Lustre: DEBUG MARKER: == conf-sanity test 73: failnode to update from mountdata properly ========================================================== 16:58:34 (1679936314) [ 5986.075446] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 5986.082866] systemd[1]: tmp-mntSwE1aG.mount: Succeeded. [ 5986.608872] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 5986.977212] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 5987.006062] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 5988.019046] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 5989.618578] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 5990.415132] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 5990.921178] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 5990.971839] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 5991.143805] Lustre: Found index 0 for lustre-OST0000, updating log [ 5991.231814] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 5991.232006] Lustre: Skipped 1 previous similar message [ 5992.106737] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 5992.244915] Lustre: lustre-OST0000: deleting orphan objects from 0x280000401:5 to 0x280000401:33 [ 5992.411306] Lustre: Mounted lustre-client [ 5992.886366] systemd[1]: mnt-lustre.mount: Succeeded. [ 5992.942486] Lustre: Unmounted lustre-client [ 5993.075880] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. 
[ 5993.441506] Lustre: lustre-OST0000-osc-MDT0000: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 5993.441673] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 5993.448108] Lustre: Skipped 1 previous similar message [ 5993.452320] Lustre: Skipped 3 previous similar messages [ 5996.652113] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 5999.342935] Lustre: server umount lustre-OST0000 complete [ 5999.704977] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 6001.680605] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 6001.689328] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 6001.693478] Lustre: Skipped 1 previous similar message [ 6003.840046] LustreError: 303608:0:(import.c:355:ptlrpc_invalidate_import()) lustre-OST0000_UUID: timeout waiting for callback (1 != 0) [ 6003.846584] LustreError: 303608:0:(import.c:378:ptlrpc_invalidate_import()) @@@ still on sending list req@00000000f96cb30a x1761540852184704/t0(0) o8->lustre-OST0000-osc-MDT0000@1.2.3.4@tcp:28/4 lens 520/544 e 0 to 0 dl 1679936331 ref 2 fl UnregRPC:ENU/0/ffffffff rc -5/-1 job:'' [ 6003.847038] LustreError: 303608:0:(import.c:389:ptlrpc_invalidate_import()) lustre-OST0000_UUID: Unregistering RPCs found (1). Network is sluggish? Waiting for them to error out. [ 6011.760810] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 6011.765946] Lustre: Skipped 2 previous similar messages [ 6024.160123] LustreError: 303608:0:(import.c:355:ptlrpc_invalidate_import()) lustre-OST0000_UUID: timeout waiting for callback (1 != 0) [ 6024.164980] LustreError: 303608:0:(import.c:378:ptlrpc_invalidate_import()) @@@ still on sending list req@00000000f96cb30a x1761540852184704/t0(0) o8->lustre-OST0000-osc-MDT0000@1.2.3.4@tcp:28/4 lens 520/544 e 0 to 0 dl 1679936331 ref 2 fl UnregRPC:ENU/0/ffffffff rc -5/-1 job:'' [ 6024.165432] LustreError: 303608:0:(import.c:389:ptlrpc_invalidate_import()) lustre-OST0000_UUID: Unregistering RPCs found (1). Network is sluggish? Waiting for them to error out. [ 6031.920548] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 6031.924736] Lustre: Skipped 7 previous similar messages [ 6044.640058] LustreError: 303608:0:(import.c:355:ptlrpc_invalidate_import()) lustre-OST0000_UUID: timeout waiting for callback (1 != 0) [ 6044.646691] LustreError: 303608:0:(import.c:378:ptlrpc_invalidate_import()) @@@ still on sending list req@00000000f96cb30a x1761540852184704/t0(0) o8->lustre-OST0000-osc-MDT0000@1.2.3.4@tcp:28/4 lens 520/544 e 0 to 0 dl 1679936331 ref 2 fl UnregRPC:ENU/0/ffffffff rc -5/-1 job:'' [ 6044.647143] LustreError: 303608:0:(import.c:389:ptlrpc_invalidate_import()) lustre-OST0000_UUID: Unregistering RPCs found (1). Network is sluggish? Waiting for them to error out. [ 6049.881232] Lustre: server umount lustre-MDT0000 complete [ 6050.253787] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. 
[ 6050.292708] LustreError: 302713:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679936378 with bad export cookie 428275515291144596 [ 6050.293488] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 6050.293718] LustreError: 302713:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) Skipped 3 previous similar messages [ 6051.453594] Lustre: DEBUG MARKER: == conf-sanity test 75: The order of --index should be irrelevant ========================================================== 16:59:39 (1679936379) [ 6052.653284] blk_update_request: operation not supported error, dev loop0, sector 399872 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6052.653778] blk_update_request: operation not supported error, dev loop0, sector 208 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6052.654368] blk_update_request: operation not supported error, dev loop0, sector 160232 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6052.654837] blk_update_request: operation not supported error, dev loop0, sector 160240 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6052.655192] blk_update_request: operation not supported error, dev loop0, sector 160248 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6052.655673] blk_update_request: operation not supported error, dev loop0, sector 160256 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6052.656035] blk_update_request: operation not supported error, dev loop0, sector 160264 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6052.656390] blk_update_request: operation not supported error, dev loop0, sector 160272 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6052.656750] blk_update_request: operation not supported error, dev loop0, sector 160280 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6052.657102] blk_update_request: operation not supported error, dev loop0, sector 160288 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6053.639760] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 6053.651831] systemd[1]: tmp-mnthMJ4bW.mount: Succeeded. [ 6054.643858] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 6054.664760] systemd[1]: tmp-mntwrSpzS.mount: Succeeded. [ 6056.978825] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 6056.993108] systemd[1]: tmp-mnt8xk9WN.mount: Succeeded. [ 6057.845408] print_req_error: 8192 callbacks suppressed [ 6057.845411] blk_update_request: operation not supported error, dev loop2, sector 399872 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6057.845972] blk_update_request: operation not supported error, dev loop2, sector 8224 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6057.846551] blk_update_request: operation not supported error, dev loop2, sector 58264 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6057.864688] blk_update_request: operation not supported error, dev loop2, sector 58272 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6057.949763] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 6057.981454] systemd[1]: tmp-mntRruztE.mount: Succeeded. 
[ 6059.934951] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing set_hostid [ 6060.280500] systemd-udevd[1035]: Specified user 'tss' unknown [ 6060.345843] systemd-udevd[1035]: Specified group 'tss' unknown [ 6060.360240] systemd-udevd[305058]: Using default interface naming scheme 'rhel-8.0'. [ 6064.599970] blk_update_request: operation not supported error, dev loop0, sector 399872 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6064.611228] blk_update_request: operation not supported error, dev loop0, sector 208 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6064.611676] blk_update_request: operation not supported error, dev loop0, sector 160232 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6064.612899] blk_update_request: operation not supported error, dev loop0, sector 160240 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6064.613391] blk_update_request: operation not supported error, dev loop0, sector 160248 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6064.613960] blk_update_request: operation not supported error, dev loop0, sector 160256 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6064.614962] blk_update_request: operation not supported error, dev loop0, sector 160264 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6064.616344] blk_update_request: operation not supported error, dev loop0, sector 160272 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6064.616628] blk_update_request: operation not supported error, dev loop0, sector 160280 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6064.616910] blk_update_request: operation not supported error, dev loop0, sector 160288 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6065.960253] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 6069.206901] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 6069.212772] systemd[1]: tmp-mnt4nxkor.mount: Succeeded. [ 6070.867128] print_req_error: 8188 callbacks suppressed [ 6070.867131] blk_update_request: operation not supported error, dev loop2, sector 399872 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6070.888053] blk_update_request: operation not supported error, dev loop2, sector 8224 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6070.888593] blk_update_request: operation not supported error, dev loop2, sector 58264 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6070.897170] blk_update_request: operation not supported error, dev loop2, sector 58272 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6070.996347] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 6073.310911] blk_update_request: operation not supported error, dev loop4, sector 399872 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6073.314521] blk_update_request: operation not supported error, dev loop4, sector 8224 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6073.314907] blk_update_request: operation not supported error, dev loop4, sector 58264 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6073.319472] blk_update_request: operation not supported error, dev loop4, sector 58272 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6073.452998] LDISKFS-fs (loop4): mounted filesystem with ordered data mode. 
Opts: errors=remount-ro [ 6073.458525] systemd[1]: tmp-mntgdeLYL.mount: Succeeded. [ 6074.465838] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 6074.475198] systemd[1]: tmp-mnt3zadDN.mount: Succeeded. [ 6074.503922] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 6074.859726] Lustre: Setting parameter lustre-MDT0000.mdt.identity_upcall in log lustre-MDT0000 [ 6074.883486] Lustre: ctl-lustre-MDT0000: No data found on store. Initialize space: rc = -61 [ 6074.969933] Lustre: lustre-MDT0000: new disk, initializing [ 6075.009247] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 6075.014753] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000200000400-0x0000000240000400]:0:mdt [ 6077.137507] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 6077.144457] systemd[1]: tmp-mntA9MfyW.mount: Succeeded. [ 6077.186766] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 6077.206966] Lustre: Setting parameter lustre-MDT0001.mdt.identity_upcall in log lustre-MDT0001 [ 6077.391227] Lustre: cli-ctl-lustre-MDT0001: Allocated super-sequence [0x0000000240000400-0x0000000280000400]:1:mdt] [ 6079.810585] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 6080.794214] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 6081.479284] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 6081.522608] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 6081.780730] Lustre: lustre-OST0000: new disk, initializing [ 6081.780929] Lustre: Skipped 1 previous similar message [ 6081.781349] Lustre: srv-lustre-OST0000: No data found on store. Initialize space: rc = -61 [ 6081.798370] Lustre: Skipped 2 previous similar messages [ 6084.212184] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 6084.334392] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000280000400-0x00000002c0000400]:0:ost [ 6084.334616] Lustre: Skipped 1 previous similar message [ 6084.334857] Lustre: cli-lustre-OST0000-super: Allocated super-sequence [0x0000000280000400-0x00000002c0000400]:0:ost] [ 6084.480226] Lustre: lustre-OST0000-osc-MDT0000: update sequence from 0x100000000 to 0x280000401 [ 6085.342337] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 40 [ 6085.508039] Lustre: DEBUG MARKER: os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 0 sec [ 6086.443620] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 40 [ 6086.648851] Lustre: DEBUG MARKER: os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec [ 6086.779657] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. 
[ 6089.360383] LustreError: 11-0: lustre-OST0000-osc-MDT0001: operation ost_statfs to node 0@lo failed: rc = -107 [ 6089.362494] Lustre: lustre-OST0000-osc-MDT0001: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 6089.362782] Lustre: Skipped 1 previous similar message [ 6089.363233] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 6089.363610] Lustre: Skipped 9 previous similar messages [ 6093.096777] Lustre: server umount lustre-OST0000 complete [ 6093.106037] Lustre: Skipped 1 previous similar message [ 6093.427836] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 6100.087371] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 6100.117873] LustreError: 306546:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679936428 with bad export cookie 428275515291145695 [ 6100.118237] LustreError: 306546:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) Skipped 2 previous similar messages [ 6100.118926] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 6101.506488] Lustre: DEBUG MARKER: == conf-sanity test 76a: set permanent params with lctl across mounts ========================================================== 17:00:29 (1679936429) [ 6102.093730] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 6102.374806] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 6102.401734] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 6102.401965] Lustre: Skipped 2 previous similar messages [ 6103.250396] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 6104.688086] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 6105.687686] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 6106.367178] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 6107.844368] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 6108.204428] Lustre: Mounted lustre-client [ 6109.390711] Lustre: Modifying parameter general.osc.*.max_dirty_mb in log params [ 6118.182623] systemd[1]: mnt-lustre.mount: Succeeded. [ 6118.264328] Lustre: Unmounted lustre-client [ 6118.476185] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 6123.281259] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 6123.281597] Lustre: Skipped 3 previous similar messages [ 6124.777896] Lustre: server umount lustre-MDT0000 complete [ 6124.780945] Lustre: Skipped 2 previous similar messages [ 6125.041581] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. 
[ 6125.084735] LustreError: 307917:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679936453 with bad export cookie 428275515291146458 [ 6125.084894] LustreError: 307917:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) Skipped 4 previous similar messages [ 6125.085073] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 6125.846186] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 6131.920102] Lustre: 308950:0:(client.c:2305:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1679936454/real 1679936454] req@00000000aa6e1800 x1761540852214272/t0(0) o39->lustre-MDT0001-lwp-OST0000@0@lo:12/10 lens 224/224 e 0 to 1 dl 1679936460 ref 2 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'' [ 6133.592110] systemd-udevd[1035]: Specified user 'tss' unknown [ 6133.612903] systemd-udevd[1035]: Specified group 'tss' unknown [ 6133.715720] systemd-udevd[309273]: Using default interface naming scheme 'rhel-8.0'. [ 6136.257135] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 6136.437520] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 6136.446929] systemd-udevd[309829]: Process '/mnt/build/lustre/tests/../utils/lctl set_param 'osc.*.max_dirty_mb=652'' failed with exit code 2. [ 6136.457206] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 6136.457389] Lustre: Skipped 2 previous similar messages [ 6137.127559] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 6137.358340] systemd-udevd[309989]: Process '/mnt/build/lustre/tests/../utils/lctl set_param 'osc.*.max_dirty_mb=652'' failed with exit code 2. [ 6138.383007] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 6138.577542] systemd-udevd[310169]: Process '/mnt/build/lustre/tests/../utils/lctl set_param 'osc.*.max_dirty_mb=652'' failed with exit code 2. [ 6139.248062] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 6139.262305] systemd[1]: tmp-mnt1V8AjM.mount: Succeeded. [ 6139.272430] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 6139.339303] Lustre: lustre-OST0001: new disk, initializing [ 6139.339816] Lustre: srv-lustre-OST0001: No data found on store. Initialize space: rc = -61 [ 6139.394485] systemd-udevd[310347]: Process '/mnt/build/lustre/tests/../utils/lctl set_param 'osc.*.max_dirty_mb=652'' failed with exit code 2. [ 6141.496187] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x00000002c0000400-0x0000000300000400]:1:ost [ 6141.496730] Lustre: cli-lustre-OST0001-super: Allocated super-sequence [0x00000002c0000400-0x0000000300000400]:1:ost] [ 6141.537156] Lustre: lustre-OST0001-osc-MDT0000: update sequence from 0x100010000 to 0x2c0000401 [ 6142.588434] Lustre: Mounted lustre-client [ 6144.426107] Lustre: DEBUG MARKER: Using TIMEOUT=20 [ 6151.084014] Lustre: Setting parameter general.lod.*.mdt_hash in log params [ 6158.758304] systemd[1]: mnt-lustre.mount: Succeeded. [ 6158.851101] Lustre: Unmounted lustre-client [ 6159.012297] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. 
[ 6162.480376] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 6162.486759] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 6162.486967] Lustre: Skipped 2 previous similar messages [ 6162.487281] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 6162.487408] Lustre: Skipped 8 previous similar messages [ 6165.310280] Lustre: server umount lustre-MDT0000 complete [ 6165.313079] Lustre: Skipped 2 previous similar messages [ 6165.630217] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 6165.652436] LustreError: 309770:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679936494 with bad export cookie 428275515291147361 [ 6165.653194] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 6166.488835] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 6172.560070] Lustre: 310938:0:(client.c:2305:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1679936494/real 1679936494] req@00000000a4636c34 x1761540852245568/t0(0) o39->lustre-MDT0001-lwp-OST0000@0@lo:12/10 lens 224/224 e 0 to 1 dl 1679936500 ref 2 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'umount.0' [ 6172.986286] systemd[1]: mnt-lustre\x2dost2.mount: Succeeded. [ 6179.120075] Lustre: 310977:0:(client.c:2305:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1679936501/real 1679936501] req@00000000594c42fa x1761540852246144/t0(0) o39->lustre-MDT0001-lwp-OST0001@0@lo:12/10 lens 224/224 e 0 to 1 dl 1679936507 ref 2 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'umount.0' [ 6180.569189] systemd-udevd[1035]: Specified user 'tss' unknown [ 6180.569506] systemd-udevd[1035]: Specified group 'tss' unknown [ 6180.648737] systemd-udevd[311216]: Using default interface naming scheme 'rhel-8.0'. [ 6183.612688] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 6183.972020] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 6184.006616] systemd-udevd[311827]: Process '/mnt/build/lustre/tests/../utils/lctl set_param 'obdfilter.*.client_cache_count=256'' failed with exit code 2. [ 6184.029011] systemd-udevd[311825]: Process '/mnt/build/lustre/tests/../utils/lctl set_param 'osc.*.max_dirty_mb=652'' failed with exit code 2. [ 6184.904891] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 6185.121922] systemd-udevd[311997]: Process '/mnt/build/lustre/tests/../utils/lctl set_param 'osc.*.max_dirty_mb=652'' failed with exit code 2. [ 6185.139555] systemd-udevd[311999]: Process '/mnt/build/lustre/tests/../utils/lctl set_param 'obdfilter.*.client_cache_count=256'' failed with exit code 2. [ 6186.034470] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 6186.319337] systemd-udevd[312181]: Process '/mnt/build/lustre/tests/../utils/lctl set_param 'osc.*.max_dirty_mb=652'' failed with exit code 2. [ 6187.511832] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. 
Opts: errors=remount-ro,no_mbcache,nodelalloc [ 6187.622414] systemd-udevd[312346]: Process '/mnt/build/lustre/tests/../utils/lctl set_param 'osc.*.max_dirty_mb=652'' failed with exit code 2. [ 6190.252535] Lustre: Mounted lustre-client [ 6195.776461] Lustre: DEBUG MARKER: Using TIMEOUT=20 [ 6196.151210] Lustre: Modifying parameter general.lod.*.mdt_hash in log params [ 6196.151404] Lustre: Skipped 1 previous similar message [ 6201.570742] systemd[1]: mnt-lustre.mount: Succeeded. [ 6201.637362] Lustre: Unmounted lustre-client [ 6201.867980] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 6205.200375] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 6205.204229] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 6205.204318] Lustre: Skipped 3 previous similar messages [ 6208.082361] LustreError: 312690:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items [ 6208.601483] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 6208.643037] LustreError: 311765:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679936537 with bad export cookie 428275515291148796 [ 6208.643343] LustreError: 311765:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) Skipped 4 previous similar messages [ 6208.645139] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 6208.735599] LustreError: 312729:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items [ 6208.735735] LustreError: 312729:0:(lprocfs_jobstats.c:137:job_stat_exit()) Skipped 1 previous similar message [ 6209.530950] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 6215.600058] Lustre: 312789:0:(client.c:2305:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1679936537/real 1679936537] req@000000008e0461c4 x1761540852267968/t0(0) o39->lustre-MDT0001-lwp-OST0000@0@lo:12/10 lens 224/224 e 0 to 1 dl 1679936543 ref 2 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'umount.0' [ 6215.610910] LustreError: 312789:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items [ 6215.611112] LustreError: 312789:0:(lprocfs_jobstats.c:137:job_stat_exit()) Skipped 1 previous similar message [ 6215.988245] systemd[1]: mnt-lustre\x2dost2.mount: Succeeded. [ 6222.161960] LustreError: 312828:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items [ 6223.713633] Lustre: DEBUG MARKER: == conf-sanity test 76b: verify params log setup correctly ========================================================== 17:02:31 (1679936551) [ 6225.072281] systemd-udevd[1035]: Specified user 'tss' unknown [ 6225.091141] systemd-udevd[1035]: Specified group 'tss' unknown [ 6225.201362] systemd-udevd[313545]: Using default interface naming scheme 'rhel-8.0'. [ 6227.506988] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 6227.817478] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 6227.817872] LustreError: Skipped 1 previous similar message [ 6227.837220] systemd-udevd[313956]: Process '/mnt/build/lustre/tests/../utils/lctl set_param 'osc.*.max_dirty_mb=652'' failed with exit code 2. 
[ 6227.856689] systemd-udevd[313957]: Process '/mnt/build/lustre/tests/../utils/lctl set_param 'obdfilter.*.client_cache_count=256'' failed with exit code 2. [ 6227.875598] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 6227.875775] Lustre: Skipped 7 previous similar messages [ 6228.533453] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 6228.713189] systemd-udevd[314128]: Process '/mnt/build/lustre/tests/../utils/lctl set_param 'osc.*.max_dirty_mb=652'' failed with exit code 2. [ 6228.730865] systemd-udevd[314129]: Process '/mnt/build/lustre/tests/../utils/lctl set_param 'obdfilter.*.client_cache_count=256'' failed with exit code 2. [ 6229.512291] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 6229.758750] systemd-udevd[314312]: Process '/mnt/build/lustre/tests/../utils/lctl set_param 'osc.*.max_dirty_mb=652'' failed with exit code 2. [ 6230.568663] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 6230.622972] systemd-udevd[314487]: Process '/mnt/build/lustre/tests/../utils/lctl set_param 'osc.*.max_dirty_mb=652'' failed with exit code 2. [ 6234.251232] Lustre: Mounted lustre-client [ 6235.910113] Lustre: DEBUG MARKER: Using TIMEOUT=20 [ 6236.094799] Lustre: Modifying parameter general.lod.*.mdt_hash in log params [ 6241.467266] systemd[1]: mnt-lustre.mount: Succeeded. [ 6241.538212] Lustre: Unmounted lustre-client [ 6241.631229] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 6243.840356] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 6247.840850] LustreError: 314824:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items [ 6247.880549] Lustre: server umount lustre-MDT0000 complete [ 6247.895884] Lustre: Skipped 7 previous similar messages [ 6248.241081] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 6248.265564] LustreError: 313895:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679936576 with bad export cookie 428275515291149958 [ 6248.266333] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 6248.266633] LustreError: 313895:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) Skipped 6 previous similar messages [ 6249.155566] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 6255.280061] Lustre: 314923:0:(client.c:2305:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1679936577/real 1679936577] req@000000004d15bea7 x1761540852288640/t0(0) o39->lustre-MDT0001-lwp-OST0000@0@lo:12/10 lens 224/224 e 0 to 1 dl 1679936583 ref 2 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'umount.0' [ 6255.286180] Lustre: 314923:0:(client.c:2305:ptlrpc_expire_one_request()) Skipped 1 previous similar message [ 6255.642113] systemd[1]: mnt-lustre\x2dost2.mount: Succeeded. 
[ 6261.762082] LustreError: 314962:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items [ 6261.766740] LustreError: 314962:0:(lprocfs_jobstats.c:137:job_stat_exit()) Skipped 4 previous similar messages [ 6263.226523] Lustre: DEBUG MARKER: == conf-sanity test 76c: verify changelog_mask is applied with lctl set_param -P ========================================================== 17:03:11 (1679936591) [ 6264.196335] systemd-udevd[1035]: Specified user 'tss' unknown [ 6264.209742] systemd-udevd[1035]: Specified group 'tss' unknown [ 6264.448174] systemd-udevd[315343]: Using default interface naming scheme 'rhel-8.0'. [ 6266.828246] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 6267.249076] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 6267.292277] systemd-udevd[315907]: Process '/mnt/build/lustre/tests/../utils/lctl set_param 'obdfilter.*.client_cache_count=256'' failed with exit code 2. [ 6267.302652] systemd-udevd[315906]: Process '/mnt/build/lustre/tests/../utils/lctl set_param 'osc.*.max_dirty_mb=652'' failed with exit code 2. [ 6268.382403] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 6268.649044] systemd-udevd[316074]: Process '/mnt/build/lustre/tests/../utils/lctl set_param 'osc.*.max_dirty_mb=652'' failed with exit code 2. [ 6268.663954] systemd-udevd[316075]: Process '/mnt/build/lustre/tests/../utils/lctl set_param 'obdfilter.*.client_cache_count=256'' failed with exit code 2. [ 6269.548564] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 6269.820379] systemd-udevd[316259]: Process '/mnt/build/lustre/tests/../utils/lctl set_param 'osc.*.max_dirty_mb=652'' failed with exit code 2. [ 6270.816973] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 6270.874795] systemd-udevd[316431]: Process '/mnt/build/lustre/tests/../utils/lctl set_param 'osc.*.max_dirty_mb=652'' failed with exit code 2. [ 6273.566349] Lustre: Mounted lustre-client [ 6278.995216] Lustre: DEBUG MARKER: Using TIMEOUT=20 [ 6284.590981] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 6288.642824] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 6288.646583] Lustre: Skipped 7 previous similar messages [ 6290.800857] LustreError: 316758:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items [ 6291.248746] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 6291.281145] LustreError: 315842:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679936619 with bad export cookie 428275515291151092 [ 6291.282249] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 6291.282707] LustreError: 315842:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) Skipped 4 previous similar messages [ 6293.682359] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. 
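Test 76c above checks that changelog_mask set with 'lctl set_param -P' is still applied after the MDTs are remounted; a minimal sketch, where the mask value is illustrative and not taken from this log:

    # add a record type to the changelog mask persistently; "+hsm" is only an example value
    lctl set_param -P mdd.lustre-MDT0000.changelog_mask=+hsm
    # re-read after the MDT is stopped and started again
    lctl get_param mdd.lustre-MDT0000.changelog_mask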
[ 6293.688094] Lustre: lustre-MDT0001: Not available for connect from 0@lo (stopping) [ 6293.688437] Lustre: Skipped 17 previous similar messages [ 6297.760567] Lustre: lustre-OST0000-osc-ffff8b782b66f000: disconnect after 20s idle [ 6305.760046] Lustre: lustre-MDT0001 is waiting for obd_unlinked_exports more than 8 seconds. The obd refcount = 2. Is it stuck? [ 6306.577441] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 6308.963599] Lustre: Evicted from MGS (at 192.168.125.30@tcp) after server handle changed from 0x5f18a4fe2ed2af4 to 0x5f18a4fe2ed2fa1 [ 6308.972926] Lustre: MGC192.168.125.30@tcp: Connection restored to 192.168.125.30@tcp (at 0@lo) [ 6310.426199] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 6311.921549] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 6333.525662] Lustre: lustre-MDT0000-lwp-OST0000: Connection restored to 192.168.125.30@tcp (at 0@lo) [ 6333.529768] LustreError: 167-0: lustre-MDT0000-mdc-ffff8b782b66f000: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. [ 6334.110436] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 21 sec [ 6335.053178] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 6335.348172] Lustre: DEBUG MARKER: mdc.lustre-MDT0001-mdc-*.mds_server_uuid in FULL state after 0 sec [ 6335.767791] systemd[1]: mnt-lustre.mount: Succeeded. [ 6335.849408] Lustre: Unmounted lustre-client [ 6336.073354] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 6342.242347] LustreError: 317690:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items [ 6342.246094] LustreError: 317690:0:(lprocfs_jobstats.c:137:job_stat_exit()) Skipped 3 previous similar messages [ 6342.729707] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 6343.687490] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 6349.760086] Lustre: 300251:0:(client.c:2305:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1679936671/real 1679936671] req@0000000018a100f4 x1761540852331200/t0(0) o400->lustre-MDT0001-lwp-OST0001@0@lo:12/10 lens 224/224 e 0 to 1 dl 1679936678 ref 1 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'kworker/u4:3.0' [ 6349.767479] Lustre: 300251:0:(client.c:2305:ptlrpc_expire_one_request()) Skipped 1 previous similar message [ 6350.114696] systemd[1]: mnt-lustre\x2dost2.mount: Succeeded. [ 6351.523902] Lustre: DEBUG MARKER: == conf-sanity test 76d: verify llite.*.xattr_cache can be set by 'lctl set_param -P' correctly ========================================================== 17:04:39 (1679936679) [ 6352.667587] systemd-udevd[1035]: Specified user 'tss' unknown [ 6352.668553] systemd-udevd[1035]: Specified group 'tss' unknown [ 6352.772175] systemd-udevd[318338]: Using default interface naming scheme 'rhel-8.0'. [ 6355.141773] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 6355.442065] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. 
[ 6355.442357] LustreError: Skipped 15 previous similar messages [ 6355.514938] systemd-udevd[318767]: Process '/mnt/build/lustre/tests/../utils/lctl set_param 'obdfilter.*.client_cache_count=256'' failed with exit code 2. [ 6355.519816] systemd-udevd[318766]: Process '/mnt/build/lustre/tests/../utils/lctl set_param 'osc.*.max_dirty_mb=652'' failed with exit code 2. [ 6356.344671] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 6356.581627] systemd-udevd[318941]: Process '/mnt/build/lustre/tests/../utils/lctl set_param 'obdfilter.*.client_cache_count=256'' failed with exit code 2. [ 6356.613064] Lustre: lustre-MDT0001: Imperative Recovery not enabled, recovery window 60-180 [ 6356.614189] Lustre: Skipped 10 previous similar messages [ 6356.614000] systemd-udevd[318940]: Process '/mnt/build/lustre/tests/../utils/lctl set_param 'osc.*.max_dirty_mb=652'' failed with exit code 2. [ 6357.618376] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 6357.882592] systemd-udevd[319127]: Process '/mnt/build/lustre/tests/../utils/lctl set_param 'osc.*.max_dirty_mb=652'' failed with exit code 2. [ 6358.682117] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 6358.737917] systemd-udevd[319307]: Process '/mnt/build/lustre/tests/../utils/lctl set_param 'osc.*.max_dirty_mb=652'' failed with exit code 2. [ 6361.303856] Lustre: Mounted lustre-client [ 6366.667924] Lustre: DEBUG MARKER: Using TIMEOUT=20 [ 6367.033982] Lustre: Modifying parameter general.lod.*.mdt_hash in log params [ 6367.034154] Lustre: Skipped 2 previous similar messages [ 6374.537715] systemd[1]: mnt-lustre.mount: Succeeded. [ 6374.569522] Lustre: Unmounted lustre-client [ 6375.376656] systemd[1]: mnt-lustre2.mount: Succeeded. [ 6375.696008] systemd[1]: mnt-lustre.mount: Succeeded. [ 6375.915401] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 6376.640363] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 6382.109364] Lustre: server umount lustre-MDT0000 complete [ 6382.109528] Lustre: Skipped 9 previous similar messages [ 6382.310654] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 6382.345347] LustreError: 318710:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679936710 with bad export cookie 428275515291153115 [ 6382.346812] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 6382.347024] LustreError: 318710:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) Skipped 3 previous similar messages [ 6382.347721] LustreError: Skipped 1 previous similar message [ 6382.805287] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 6388.880055] Lustre: 319963:0:(client.c:2305:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1679936711/real 1679936711] req@0000000026ae476a x1761540852360832/t0(0) o39->lustre-MDT0001-lwp-OST0000@0@lo:12/10 lens 224/224 e 0 to 1 dl 1679936717 ref 2 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'umount.0' [ 6388.880729] Lustre: 319963:0:(client.c:2305:ptlrpc_expire_one_request()) Skipped 1 previous similar message [ 6389.112895] systemd[1]: mnt-lustre\x2dost2.mount: Succeeded. 
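Test 76d above makes the same check for a client-side parameter; a minimal sketch, with the value chosen only for illustration:

    # persistently disable the client xattr cache; distributed to clients through the params llog
    lctl set_param -P llite.*.xattr_cache=0
    # confirmed on the mounted client after remount
    lctl get_param llite.*.xattr_cache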
[ 6396.538853] Lustre: DEBUG MARKER: == conf-sanity test 77: comma-separated MGS NIDs and failover node NIDs ========================================================== 17:05:24 (1679936724) [ 6396.716488] Lustre: DEBUG MARKER: SKIP: conf-sanity test_77 mixed loopback and real device not working [ 6397.008563] Lustre: DEBUG MARKER: == conf-sanity test 78: run resize2fs on MDT and OST filesystems ========================================================== 17:05:25 (1679936725) [ 6398.346293] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing set_hostid [ 6398.487758] systemd-udevd[1035]: Specified user 'tss' unknown [ 6398.530157] systemd-udevd[1035]: Specified group 'tss' unknown [ 6398.657040] systemd-udevd[320693]: Using default interface naming scheme 'rhel-8.0'. [ 6401.011177] blk_update_request: operation not supported error, dev loop0, sector 359808 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6401.011681] blk_update_request: operation not supported error, dev loop0, sector 208 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6401.012272] blk_update_request: operation not supported error, dev loop0, sector 144248 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6401.012748] blk_update_request: operation not supported error, dev loop0, sector 144256 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6401.013116] blk_update_request: operation not supported error, dev loop0, sector 144264 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6401.013481] blk_update_request: operation not supported error, dev loop0, sector 144272 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6401.013837] blk_update_request: operation not supported error, dev loop0, sector 144280 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6401.014193] blk_update_request: operation not supported error, dev loop0, sector 144288 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6401.014555] blk_update_request: operation not supported error, dev loop0, sector 144296 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6401.014917] blk_update_request: operation not supported error, dev loop0, sector 144304 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6401.688708] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 6401.695464] systemd[1]: tmp-mntTIAT9K.mount: Succeeded. [ 6402.589720] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 6402.595885] systemd[1]: tmp-mntrifA3t.mount: Succeeded. [ 6403.063087] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 6403.073917] systemd[1]: tmp-mntswoqce.mount: Succeeded. [ 6403.117634] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 6403.346537] Lustre: ctl-lustre-MDT0000: No data found on store. Initialize space: rc = -61 [ 6403.392867] Lustre: lustre-MDT0000: new disk, initializing [ 6403.423948] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000200000400-0x0000000240000400]:0:mdt [ 6405.329666] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 6405.657479] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 6405.666097] systemd[1]: tmp-mnt15Wk9b.mount: Succeeded. [ 6405.703165] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. 
Opts: errors=remount-ro,no_mbcache,nodelalloc [ 6406.069769] Lustre: cli-lustre-OST0000-super: Allocated super-sequence [0x0000000240000400-0x0000000280000400]:0:ost] [ 6406.074495] Lustre: lustre-OST0000-osc-MDT0000: update sequence from 0x100000000 to 0x240000400 [ 6407.520815] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 6407.900055] Lustre: DEBUG MARKER: create test files [ 6416.625482] systemd[1]: mnt-lustre.mount: Succeeded. [ 6417.342713] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 6417.841112] Lustre: lustre-OST0000-osc-MDT0000: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 6417.846573] Lustre: Skipped 16 previous similar messages [ 6423.520262] LustreError: 322218:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items [ 6423.527376] LustreError: 322218:0:(lprocfs_jobstats.c:137:job_stat_exit()) Skipped 9 previous similar messages [ 6423.954023] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 6427.470928] LNet: 322574:0:(lib-ptl.c:956:lnet_clear_lazy_portal()) Active lazy portal 0 on exit [ 6427.488675] LNet: Removed LNI 192.168.125.30@tcp [ 6428.686404] systemd-udevd[1035]: Specified user 'tss' unknown [ 6428.710175] systemd-udevd[1035]: Specified group 'tss' unknown [ 6428.809047] systemd-udevd[322920]: Using default interface naming scheme 'rhel-8.0'. [ 6429.391462] systemd[1]: usr-sbin-mount.lustre.mount: Succeeded. [ 6429.903715] print_req_error: 4093 callbacks suppressed [ 6429.903717] blk_update_request: operation not supported error, dev loop0, sector 144248 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6429.938534] blk_update_request: operation not supported error, dev loop2, sector 53272 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6430.498948] systemd-udevd[1035]: Specified user 'tss' unknown [ 6430.559786] systemd-udevd[1035]: Specified group 'tss' unknown [ 6430.644413] systemd-udevd[323380]: Using default interface naming scheme 'rhel-8.0'. [ 6430.955849] LNet: HW NUMA nodes: 1, HW CPU cores: 2, npartitions: 2 [ 6431.996322] Lustre: Lustre: Build Version: 2.15.54 [ 6432.086696] LNet: Added LNI 192.168.125.30@tcp [8/256/0/180] [ 6432.086961] LNet: Accept secure, port 988 [ 6432.780810] Lustre: Echo OBD driver; http://www.lustre.org/ [ 6434.317468] Lustre: lustre-MDT0000: mounting server target with '-t lustre' deprecated, use '-t lustre_tgt' [ 6434.319086] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 6435.734269] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 6436.570718] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 6436.969290] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. 
Opts: errors=remount-ro,no_mbcache,nodelalloc [ 6437.256810] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 6438.022549] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 6441.203272] Lustre: lustre-OST0000: deleting orphan objects from 0x240000400:102 to 0x240000400:129 [ 6446.252044] Lustre: Mounted lustre-client [ 6447.573893] Lustre: DEBUG MARKER: check files after expanding the MDT and OST filesystems [ 6448.523878] Lustre: DEBUG MARKER: create more files after expanding the MDT and OST filesystems [ 6449.320348] systemd[1]: mnt-lustre.mount: Succeeded. [ 6449.652980] Lustre: Unmounted lustre-client [ 6449.752438] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 6451.280913] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107 [ 6451.285183] Lustre: lustre-OST0000-osc-MDT0000: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 6451.285617] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 6456.085809] Lustre: server umount lustre-OST0000 complete [ 6456.445602] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 6456.681514] Lustre: server umount lustre-MDT0000 complete [ 6459.930668] LNet: 325013:0:(lib-ptl.c:956:lnet_clear_lazy_portal()) Active lazy portal 0 on exit [ 6459.937789] LNet: Removed LNI 192.168.125.30@tcp [ 6461.190862] systemd-udevd[1035]: Specified user 'tss' unknown [ 6461.191903] systemd-udevd[1035]: Specified group 'tss' unknown [ 6461.310860] systemd-udevd[325200]: Using default interface naming scheme 'rhel-8.0'. [ 6461.925943] systemd[1]: usr-sbin-mount.lustre.mount: Succeeded. [ 6462.466291] blk_update_request: operation not supported error, dev loop0, sector 144248 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6462.551338] blk_update_request: operation not supported error, dev loop2, sector 53272 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6463.521344] systemd-udevd[1035]: Specified user 'tss' unknown [ 6463.526892] systemd-udevd[1035]: Specified group 'tss' unknown [ 6463.678930] systemd-udevd[325617]: Using default interface naming scheme 'rhel-8.0'. [ 6464.046414] LNet: HW NUMA nodes: 1, HW CPU cores: 2, npartitions: 2 [ 6465.167587] Lustre: Lustre: Build Version: 2.15.54 [ 6465.304687] LNet: Added LNI 192.168.125.30@tcp [8/256/0/180] [ 6465.304909] LNet: Accept secure, port 988 [ 6466.133485] Lustre: Echo OBD driver; http://www.lustre.org/ [ 6468.026137] Lustre: lustre-MDT0000: mounting server target with '-t lustre' deprecated, use '-t lustre_tgt' [ 6468.027751] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 6469.274838] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 6470.061731] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 6470.539713] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. 
Opts: errors=remount-ro,no_mbcache,nodelalloc [ 6470.838534] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 6471.542313] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 6473.763453] Lustre: lustre-OST0000: deleting orphan objects from 0x240000400:140 to 0x240000400:161 [ 6478.809161] Lustre: Mounted lustre-client [ 6480.111311] Lustre: DEBUG MARKER: check files after shrinking the MDT and OST filesystems [ 6481.812023] systemd[1]: mnt-lustre.mount: Succeeded. [ 6482.046749] Lustre: Unmounted lustre-client [ 6482.164947] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 6483.840362] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107 [ 6483.856910] Lustre: lustre-OST0000-osc-MDT0000: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 6483.857371] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 6488.533947] Lustre: server umount lustre-OST0000 complete [ 6488.871859] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 6489.234557] Lustre: server umount lustre-MDT0000 complete [ 6492.590656] LNet: 327451:0:(lib-ptl.c:956:lnet_clear_lazy_portal()) Active lazy portal 0 on exit [ 6492.591107] LNet: Removed LNI 192.168.125.30@tcp [ 6493.833484] systemd-udevd[1035]: Specified user 'tss' unknown [ 6494.000578] systemd-udevd[1035]: Specified group 'tss' unknown [ 6494.100173] systemd-udevd[327640]: Using default interface naming scheme 'rhel-8.0'. [ 6494.631074] systemd[1]: usr-sbin-mount.lustre.mount: Succeeded. [ 6497.003604] LNet: HW NUMA nodes: 1, HW CPU cores: 2, npartitions: 1 [ 6497.846753] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing set_hostid [ 6498.063889] systemd-udevd[1035]: Specified user 'tss' unknown [ 6498.070165] systemd-udevd[1035]: Specified group 'tss' unknown [ 6498.165297] systemd-udevd[328436]: Using default interface naming scheme 'rhel-8.0'. 
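Test 78 above expands and then shrinks the MDT and OST ldiskfs filesystems with resize2fs, re-checking the test files after each step; a minimal sketch of one resize, where the device name and block count are placeholders rather than values from this log:

    umount /mnt/lustre-mds1                      # the target must be stopped first
    e2fsck -fy /dev/mapper/mds1_flakey           # resize2fs expects a freshly checked filesystem; device name assumed
    resize2fs /dev/mapper/mds1_flakey 200000     # new size in filesystem blocks (placeholder)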
[ 6498.817909] Lustre: Lustre: Build Version: 2.15.54 [ 6498.918615] LNet: Added LNI 192.168.125.30@tcp [8/256/0/180] [ 6498.918924] LNet: Accept secure, port 988 [ 6499.529817] Lustre: Echo OBD driver; http://www.lustre.org/ [ 6502.983187] blk_update_request: operation not supported error, dev loop0, sector 399872 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6502.985297] blk_update_request: operation not supported error, dev loop0, sector 208 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6502.988970] blk_update_request: operation not supported error, dev loop0, sector 160232 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6502.989364] blk_update_request: operation not supported error, dev loop0, sector 160240 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6502.989658] blk_update_request: operation not supported error, dev loop0, sector 160248 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6502.989941] blk_update_request: operation not supported error, dev loop0, sector 160256 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6502.991730] blk_update_request: operation not supported error, dev loop0, sector 160264 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6502.992021] blk_update_request: operation not supported error, dev loop0, sector 160272 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6502.992299] blk_update_request: operation not supported error, dev loop0, sector 160280 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6502.992584] blk_update_request: operation not supported error, dev loop0, sector 160288 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6503.747928] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 6506.067430] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 6507.499561] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 6507.510950] systemd[1]: tmp-mntFuJMAb.mount: Succeeded. [ 6509.410940] print_req_error: 8192 callbacks suppressed [ 6509.410942] blk_update_request: operation not supported error, dev loop4, sector 399872 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6509.424621] blk_update_request: operation not supported error, dev loop4, sector 8224 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6509.425333] blk_update_request: operation not supported error, dev loop4, sector 58264 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6509.441622] blk_update_request: operation not supported error, dev loop4, sector 58272 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6509.558168] LDISKFS-fs (loop4): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 6510.163290] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 6510.211538] Lustre: lustre-MDT0000: mounting server target with '-t lustre' deprecated, use '-t lustre_tgt' [ 6510.213551] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 6511.318349] Lustre: Setting parameter lustre-MDT0000.mdt.identity_upcall in log lustre-MDT0000 [ 6511.325654] Lustre: ctl-lustre-MDT0000: No data found on store. 
Initialize space: rc = -61 [ 6511.358548] Lustre: lustre-MDT0000: new disk, initializing [ 6511.384340] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 6511.387628] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000200000400-0x0000000240000400]:0:mdt [ 6513.156600] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 6513.163598] systemd[1]: tmp-mntbRC2Gl.mount: Succeeded. [ 6513.213888] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 6513.249049] Lustre: Setting parameter lustre-MDT0001.mdt.identity_upcall in log lustre-MDT0001 [ 6513.274967] Lustre: srv-lustre-MDT0001: No data found on store. Initialize space: rc = -61 [ 6513.275071] Lustre: Skipped 1 previous similar message [ 6513.298920] Lustre: lustre-MDT0001: new disk, initializing [ 6513.351924] Lustre: lustre-MDT0001: Imperative Recovery not enabled, recovery window 60-180 [ 6513.364957] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000240000400-0x0000000280000400]:1:mdt [ 6513.381182] Lustre: cli-ctl-lustre-MDT0001: Allocated super-sequence [0x0000000240000400-0x0000000280000400]:1:mdt] [ 6515.336796] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 6515.840232] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 6516.204456] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 6516.213150] systemd[1]: tmp-mnt5sVk0x.mount: Succeeded. [ 6516.260910] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 6516.355396] Lustre: lustre-OST0000: new disk, initializing [ 6516.355938] Lustre: srv-lustre-OST0000: No data found on store. Initialize space: rc = -61 [ 6516.392442] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 6518.165053] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 6518.542639] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000280000400-0x00000002c0000400]:0:ost [ 6518.543724] Lustre: cli-lustre-OST0000-super: Allocated super-sequence [0x0000000280000400-0x00000002c0000400]:0:ost] [ 6518.564290] Lustre: lustre-OST0000-osc-MDT0000: update sequence from 0x100000000 to 0x280000401 [ 6518.741236] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 40 [ 6518.851803] Lustre: DEBUG MARKER: os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 0 sec [ 6519.431432] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 40 [ 6519.585384] Lustre: DEBUG MARKER: os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec [ 6519.655791] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. 
[ 6523.600564] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107 [ 6523.608601] Lustre: lustre-OST0000-osc-MDT0000: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 6523.614375] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 6525.903455] Lustre: server umount lustre-OST0000 complete [ 6526.092869] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. [ 6528.480439] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 6528.484854] LustreError: Skipped 1 previous similar message [ 6528.484937] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 6528.485150] Lustre: Skipped 1 previous similar message [ 6528.485992] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 6528.486134] Lustre: Skipped 1 previous similar message [ 6532.351503] Lustre: server umount lustre-MDT0000 complete [ 6532.661300] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded. [ 6532.691875] LustreError: 330090:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679936861 with bad export cookie 11747857301999617070 [ 6532.694405] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 6532.698018] LustreError: 330090:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) Skipped 4 previous similar messages [ 6533.472730] Lustre: DEBUG MARKER: == conf-sanity test 79: format MDT/OST without mgs option (should return errors) ========================================================== 17:07:41 (1679936861) [ 6534.063579] systemd-udevd[1035]: Specified user 'tss' unknown [ 6534.067541] systemd-udevd[1035]: Specified group 'tss' unknown [ 6534.232380] systemd-udevd[331756]: Using default interface naming scheme 'rhel-8.0'. 
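Test 79 above formats MDT and OST targets without any MGS option and expects mkfs.lustre to fail; a minimal sketch of the failing and corrected invocations, with the device path as a placeholder:

    # neither --mgs nor --mgsnode given: mkfs.lustre should refuse the target
    mkfs.lustre --reformat --fsname=lustre --mdt --index=0 /dev/mapper/mds1_flakey
    # corrected form, pointing the target at the MGS NID seen in this log
    mkfs.lustre --reformat --fsname=lustre --mdt --index=0 --mgsnode=192.168.125.30@tcp /dev/mapper/mds1_flakey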
[ 6537.241364] blk_update_request: operation not supported error, dev loop0, sector 399872 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6537.241769] blk_update_request: operation not supported error, dev loop0, sector 208 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6537.242275] blk_update_request: operation not supported error, dev loop0, sector 160232 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6537.242660] blk_update_request: operation not supported error, dev loop0, sector 160240 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6537.242944] blk_update_request: operation not supported error, dev loop0, sector 160248 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6537.243236] blk_update_request: operation not supported error, dev loop0, sector 160256 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6537.243520] blk_update_request: operation not supported error, dev loop0, sector 160264 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6537.243798] blk_update_request: operation not supported error, dev loop0, sector 160272 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6537.244072] blk_update_request: operation not supported error, dev loop0, sector 160280 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6537.244358] blk_update_request: operation not supported error, dev loop0, sector 160288 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6537.881406] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 6540.420063] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing set_hostid [ 6540.584416] systemd-udevd[1035]: Specified user 'tss' unknown [ 6540.614013] systemd-udevd[1035]: Specified group 'tss' unknown [ 6540.660977] systemd-udevd[333297]: Using default interface naming scheme 'rhel-8.0'. 
[ 6543.150407] print_req_error: 4089 callbacks suppressed [ 6543.150410] blk_update_request: operation not supported error, dev loop0, sector 399872 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6543.150982] blk_update_request: operation not supported error, dev loop0, sector 208 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6543.151565] blk_update_request: operation not supported error, dev loop0, sector 160232 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6543.152032] blk_update_request: operation not supported error, dev loop0, sector 160240 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6543.152404] blk_update_request: operation not supported error, dev loop0, sector 160248 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6543.152762] blk_update_request: operation not supported error, dev loop0, sector 160256 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6543.153138] blk_update_request: operation not supported error, dev loop0, sector 160264 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6543.153504] blk_update_request: operation not supported error, dev loop0, sector 160272 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6543.153860] blk_update_request: operation not supported error, dev loop0, sector 160280 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6543.154227] blk_update_request: operation not supported error, dev loop0, sector 160288 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0 [ 6543.819896] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 6543.832121] systemd[1]: tmp-mntYPDeM2.mount: Succeeded. [ 6545.587841] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 6545.594201] systemd[1]: tmp-mntlStpT4.mount: Succeeded. [ 6546.489585] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 6546.497035] systemd[1]: tmp-mntEvGskm.mount: Succeeded. [ 6548.184799] LDISKFS-fs (loop4): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 6548.225843] systemd[1]: tmp-mntJlkJkv.mount: Succeeded. [ 6548.990601] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 6549.032166] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 6549.165824] Lustre: Setting parameter lustre-MDT0000.mdt.identity_upcall in log lustre-MDT0000 [ 6549.181164] Lustre: ctl-lustre-MDT0000: No data found on store. Initialize space: rc = -61 [ 6549.228732] Lustre: lustre-MDT0000: new disk, initializing [ 6549.272141] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 6549.280467] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000200000400-0x0000000240000400]:0:mdt [ 6551.069718] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 6551.076960] systemd[1]: tmp-mntkXQwkv.mount: Succeeded. [ 6551.113732] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 6551.323318] Lustre: cli-ctl-lustre-MDT0001: Allocated super-sequence [0x0000000240000400-0x0000000280000400]:1:mdt] [ 6553.449587] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 6554.035596] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 6554.433725] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 6554.472617] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 6554.595043] Lustre: lustre-OST0000: new disk, initializing [ 6554.597218] Lustre: Skipped 1 previous similar message [ 6554.597666] Lustre: srv-lustre-OST0000: No data found on store. Initialize space: rc = -61 [ 6554.606313] Lustre: Skipped 2 previous similar messages [ 6554.631765] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 6554.631995] Lustre: Skipped 1 previous similar message [ 6556.527193] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 6556.835299] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000280000400-0x00000002c0000400]:0:ost [ 6556.836137] Lustre: Skipped 1 previous similar message [ 6556.836943] Lustre: cli-lustre-OST0000-super: Allocated super-sequence [0x0000000280000400-0x00000002c0000400]:0:ost] [ 6556.891621] Lustre: lustre-OST0000-osc-MDT0000: update sequence from 0x100000000 to 0x280000401 [ 6557.170542] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 40 [ 6557.266407] Lustre: DEBUG MARKER: os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 0 sec [ 6557.791703] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 40 [ 6557.912180] Lustre: DEBUG MARKER: os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec [ 6557.995666] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded. [ 6561.921355] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107 [ 6561.925514] Lustre: lustre-OST0000-osc-MDT0000: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 6561.925815] Lustre: Skipped 1 previous similar message [ 6561.927122] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 6561.927548] Lustre: Skipped 1 previous similar message [ 6564.258833] Lustre: server umount lustre-OST0000 complete [ 6564.259178] Lustre: Skipped 1 previous similar message [ 6564.584734] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded. 
[ 6566.400355] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107
[ 6566.400527] LustreError: Skipped 1 previous similar message
[ 6566.400560] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[ 6566.400643] Lustre: Skipped 1 previous similar message
[ 6566.400909] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping)
[ 6566.406108] Lustre: Skipped 1 previous similar message
[ 6570.829730] Lustre: server umount lustre-MDT0000 complete
[ 6571.036613] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded.
[ 6571.078902] LustreError: 334681:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679936899 with bad export cookie 11747857301999617833
[ 6571.079377] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail
[ 6571.843823] Lustre: DEBUG MARKER: == conf-sanity test 80: mgc import reconnect race ======== 17:08:20 (1679936900)
[ 6572.107862] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 6572.270251] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
[ 6572.283329] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180
[ 6572.791643] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 6573.616682] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 6574.137301] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
[ 6574.445125] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[ 6575.204908] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
[ 6575.377608] Lustre: 336761:0:(genops.c:1720:obd_export_evict_by_uuid()) MGS: evicting 4ffc87d8-1880-4638-aa51-359e8d76dd7b at adminstrative request
[ 6577.602640] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail
[ 6577.610361] Lustre: *** cfs_fail_loc=906, val=2147483648***
[ 6582.722019] Lustre: MGS: Client 4ffc87d8-1880-4638-aa51-359e8d76dd7b (at 0@lo) reconnecting
[ 6582.730015] Lustre: Evicted from MGS (at 192.168.125.30@tcp) after server handle changed from 0xa308c5f92daa8a32 to 0xa308c5f92daa8c85
[ 6582.736775] Lustre: MGC192.168.125.30@tcp: Connection restored to (at 0@lo)
[ 6605.729273] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 6605.737910] systemd[1]: tmp-mnteanO2N.mount: Succeeded.
[ 6605.791488] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[ 6605.836535] Lustre: lustre-OST0001: new disk, initializing
[ 6605.837075] Lustre: srv-lustre-OST0001: No data found on store. Initialize space: rc = -61
[ 6605.878684] Lustre: lustre-OST0001: Imperative Recovery not enabled, recovery window 60-180
[ 6605.879262] Lustre: Skipped 2 previous similar messages
[ 6607.669167] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0001-osc-[-0-9a-f]*.ost_server_uuid
[ 6607.839161] systemd[1]: mnt-lustre\x2dost2.mount: Succeeded.
[ 6607.907002] Lustre: server umount lustre-OST0001 complete
[ 6607.907169] Lustre: Skipped 1 previous similar message
[ 6608.126710] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded.
[ 6608.880433] LustreError: 11-0: lustre-OST0000-osc-MDT0001: operation ost_statfs to node 0@lo failed: rc = -107
[ 6608.880568] Lustre: lustre-OST0000-osc-MDT0001: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[ 6608.880658] Lustre: Skipped 1 previous similar message
[ 6608.880904] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping)
[ 6608.880997] Lustre: Skipped 1 previous similar message
[ 6612.971427] LustreError: 137-5: lustre-OST0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
[ 6618.082743] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping)
[ 6618.086675] Lustre: Skipped 6 previous similar messages
[ 6618.089709] LustreError: 137-5: lustre-OST0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
[ 6618.090180] LustreError: Skipped 2 previous similar messages
[ 6622.560048] Lustre: lustre-OST0000 is waiting for obd_unlinked_exports more than 8 seconds. The obd refcount = 2. Is it stuck?
[ 6622.627009] Lustre: server umount lustre-OST0000 complete
[ 6622.986661] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded.
[ 6623.120574] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[ 6623.121223] Lustre: Skipped 2 previous similar messages
[ 6629.478643] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded.
[ 6629.528890] LustreError: 336023:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679936957 with bad export cookie 11747857301999619205
[ 6629.529198] LustreError: 336023:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) Skipped 1 previous similar message
[ 6629.529857] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail
[ 6630.708791] Lustre: DEBUG MARKER: == conf-sanity test 81: sparse OST indexing ============== 17:09:18 (1679936958)
[ 6630.885470] Lustre: DEBUG MARKER: SKIP: conf-sanity test_81 needs >= 3 OSTs
[ 6631.228912] Lustre: DEBUG MARKER: == conf-sanity test 82a: specify OSTs for file (succeed) or directory (succeed) ========================================================== 17:09:19 (1679936959)
[ 6631.415212] Lustre: DEBUG MARKER: SKIP: conf-sanity test_82a needs >= 3 OSTs
[ 6631.670926] Lustre: DEBUG MARKER: == conf-sanity test 82b: specify OSTs for file with --pool and --ost-list options ========================================================== 17:09:19 (1679936959)
[ 6631.763615] Lustre: DEBUG MARKER: SKIP: conf-sanity test_82b needs >= 4 OSTs
[ 6631.916636] Lustre: DEBUG MARKER: == conf-sanity test 83: ENOSPACE on OST doesn't cause message VFS: Busy inodes after unmount ... ========================================================== 17:09:20 (1679936960)
[ 6632.015615] Lustre: DEBUG MARKER: mount the OST /dev/mapper/ost1_flakey as a ldiskfs filesystem
[ 6632.837703] print_req_error: 8196 callbacks suppressed
[ 6632.837705] blk_update_request: operation not supported error, dev loop2, sector 399872 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 6632.838623] blk_update_request: operation not supported error, dev loop2, sector 8224 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 6632.839207] blk_update_request: operation not supported error, dev loop2, sector 58264 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 6632.846534] blk_update_request: operation not supported error, dev loop2, sector 58272 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 6632.904083] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 6632.912858] systemd[1]: tmp-mntWQwKYX.mount: Succeeded.
[ 6633.042991] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: (null)
[ 6633.251275] Lustre: DEBUG MARKER: run llverfs in partial mode on the OST ldiskfs /mnt/lustre-ost1
[ 6634.085296] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing run_llverfs /mnt/lustre-ost1 -vpl no
[ 6635.038778] Lustre: DEBUG MARKER: unmount the OST /dev/mapper/ost1_flakey
[ 6635.147437] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded.
[ 6635.904970] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 6635.911107] systemd[1]: tmp-mnt1F1JS5.mount: Succeeded.
[ 6636.473691] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 6636.494990] systemd[1]: tmp-mntD4yzXn.mount: Succeeded.
[ 6636.838537] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 6636.856737] systemd[1]: tmp-mntcslWGb.mount: Succeeded.
[ 6637.238932] LDISKFS-fs (loop4): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 6637.985694] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 6637.996900] systemd[1]: tmp-mntoCHZUj.mount: Succeeded.
[ 6638.053021] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[ 6638.053991] LustreError: 338214:0:(obd_config.c:776:class_setup()) setup lustre-OST0000-osd failed (-28)
[ 6638.054067] LustreError: 338214:0:(obd_mount.c:200:lustre_start_simple()) lustre-OST0000-osd setup error -28
[ 6638.059597] LustreError: 338214:0:(tgt_mount.c:2048:server_fill_super()) Unable to start osd on /dev/mapper/ost1_flakey: -28
[ 6638.059849] LustreError: 338214:0:(super25.c:188:lustre_fill_super()) llite: Unable to mount : rc = -28
[ 6640.454370] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing set_hostid
[ 6640.709483] systemd-udevd[1035]: Specified user 'tss' unknown
[ 6640.748904] systemd-udevd[1035]: Specified group 'tss' unknown
[ 6640.892663] systemd-udevd[338634]: Using default interface naming scheme 'rhel-8.0'.
[ 6644.292734] blk_update_request: operation not supported error, dev loop0, sector 399872 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 6644.294149] blk_update_request: operation not supported error, dev loop0, sector 208 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 6644.294879] blk_update_request: operation not supported error, dev loop0, sector 160232 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 6644.302439] blk_update_request: operation not supported error, dev loop0, sector 160240 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 6644.312684] blk_update_request: operation not supported error, dev loop0, sector 160248 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 6644.314594] blk_update_request: operation not supported error, dev loop0, sector 160256 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 6644.314789] blk_update_request: operation not supported error, dev loop0, sector 160264 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 6644.315132] blk_update_request: operation not supported error, dev loop0, sector 160272 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 6644.315319] blk_update_request: operation not supported error, dev loop0, sector 160280 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 6644.315508] blk_update_request: operation not supported error, dev loop0, sector 160288 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 6645.565061] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 6647.982666] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 6647.989328] systemd[1]: tmp-mntipMKZT.mount: Succeeded.
[ 6649.591526] print_req_error: 8188 callbacks suppressed
[ 6649.591529] blk_update_request: operation not supported error, dev loop2, sector 399872 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 6649.591772] blk_update_request: operation not supported error, dev loop2, sector 8224 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 6649.592041] blk_update_request: operation not supported error, dev loop2, sector 58264 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 6649.596437] blk_update_request: operation not supported error, dev loop2, sector 58272 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 6649.634586] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 6649.650172] systemd[1]: tmp-mntHidfQ3.mount: Succeeded.
[ 6650.724702] blk_update_request: operation not supported error, dev loop4, sector 399872 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 6650.725033] blk_update_request: operation not supported error, dev loop4, sector 8224 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 6650.725406] blk_update_request: operation not supported error, dev loop4, sector 58264 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 6650.729820] blk_update_request: operation not supported error, dev loop4, sector 58272 op 0x9:(WRITE_ZEROES) flags 0x400800 phys_seg 0 prio class 0
[ 6650.827910] LDISKFS-fs (loop4): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 6650.834174] systemd[1]: tmp-mntUWSVCf.mount: Succeeded.
[ 6651.522461] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 6651.532858] systemd[1]: tmp-mntbzOoMm.mount: Succeeded.
[ 6651.577523] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 6651.759168] Lustre: Setting parameter lustre-MDT0000.mdt.identity_upcall in log lustre-MDT0000
[ 6651.759367] Lustre: Skipped 1 previous similar message
[ 6651.771140] Lustre: ctl-lustre-MDT0000: No data found on store. Initialize space: rc = -61
[ 6651.826185] Lustre: lustre-MDT0000: new disk, initializing
[ 6651.876615] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180
[ 6651.879301] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000200000400-0x0000000240000400]:0:mdt
[ 6653.736357] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 6653.749454] systemd[1]: tmp-mntMk450o.mount: Succeeded.
[ 6653.784419] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 6653.970466] Lustre: cli-ctl-lustre-MDT0001: Allocated super-sequence [0x0000000240000400-0x0000000280000400]:1:mdt]
[ 6656.088995] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 6656.712197] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
[ 6657.108807] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 6657.116763] systemd[1]: tmp-mntrVaBRv.mount: Succeeded.
[ 6657.153644] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[ 6659.491922] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
[ 6660.576090] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 40
[ 6661.570428] Lustre: lustre-OST0000-osc-MDT0000: update sequence from 0x100000000 to 0x280000401
[ 6661.775766] Lustre: DEBUG MARKER: os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 1 sec
[ 6662.469001] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 40
[ 6662.611041] Lustre: DEBUG MARKER: os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec
[ 6662.730162] systemd[1]: mnt-lustre\x2dost1.mount: Succeeded.
[ 6666.561795] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107
[ 6666.567260] Lustre: lustre-OST0000-osc-MDT0000: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[ 6666.572211] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping)
[ 6666.572586] Lustre: Skipped 6 previous similar messages
[ 6668.948270] Lustre: server umount lustre-OST0000 complete
[ 6668.962410] Lustre: Skipped 2 previous similar messages
[ 6669.361506] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded.
[ 6675.876620] systemd[1]: mnt-lustre\x2dmds2.mount: Succeeded.
[ 6675.932174] LustreError: 340196:0:(ldlm_lockd.c:2569:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1679937004 with bad export cookie 11747857301999619758
[ 6675.932903] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail
[ 6677.048416] Lustre: DEBUG MARKER: == conf-sanity test 84: check recovery_hard_time ========= 17:10:05 (1679937005)
[ 6677.512964] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 6677.658032] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
[ 6677.658315] LustreError: Skipped 1 previous similar message
[ 6678.478726] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 6679.438262] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 6680.131389] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
[ 6680.721279] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[ 6681.846181] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
[ 6682.414090] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 6682.454371] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[ 6684.527387] Lustre: DEBUG MARKER: tmp.20a7BdrJQP: executing wait_import_state_mount FULL osc.lustre-OST0001-osc-[-0-9a-f]*.ost_server_uuid
[ 6684.760425] Lustre: Mounted lustre-client
[ 6686.111594] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x00000002c0000400-0x0000000300000400]:1:ost
[ 6686.127442] Lustre: Skipped 2 previous similar messages
[ 6686.127744] Lustre: cli-lustre-OST0001-super: Allocated super-sequence [0x00000002c0000400-0x0000000300000400]:1:ost]
[ 6686.127895] Lustre: Skipped 1 previous similar message
[ 6691.011345] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000
[ 6691.028634] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000
[ 6691.211977] Lustre: lustre-OST0001-osc-MDT0000: update sequence from 0x100010000 to 0x2c0000401
[ 6692.967973] systemd[1]: mnt-lustre\x2dmds1.mount: Succeeded.
[ 6693.003490] Lustre: Failing over lustre-MDT0000
[ 6693.704934] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107
[ 6693.705420] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
[ 6702.880058] Lustre: 328556:0:(client.c:2305:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1679937024/real 1679937024] req@00000000d0dc2a00 x1761541438707136/t0(0) o400->MGC192.168.125.30@tcp@0@lo:26/25 lens 224/224 e 0 to 1 dl 1679937031 ref 1 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:''
[ 6702.887242] LustreError: 166-1: MGC192.168.125.30@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail
[ 6702.892913] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
[ 6702.893148] LustreError: Skipped 12 previous similar messages
[ 6704.548797] LDISKFS-fs (dm-0): recovery complete
[ 6704.548999] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 6708.964134] Lustre: Evicted from MGS (at 192.168.125.30@tcp) after server handle changed from 0xa308c5f92daa91b0 to 0xa308c5f92dab8d94
[ 6708.976990] Lustre: MGC192.168.125.30@tcp: Connection restored to (at 0@lo)
[ 6709.146156] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect
[ 6712.163523] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 3 clients reconnect
[ 6712.165424] Lustre: 341568:0:(ldlm_lib.c:1989:extend_recovery_timer()) lustre-MDT0000: extended recovery timer reached hard limit: 60, extend: 0
[ 6712.169420] Lustre: lustre-MDT0000-lwp-MDT0001: Connection restored to (at 0@lo)
[ 6712.207320] LustreError: 343158:0:(fail.c:138:__cfs_fail_timeout_set()) cfs_fail_timeout id 709 sleeping for 300ms
[ 6712.207494] LustreError: 328554:0:(client.c:3253:ptlrpc_replay_interpret()) @@@ status 301, old was 0 req@000000009cfc109f x1761541438383552/t8589934597(8589934597) o101->lustre-MDT0000-mdc-ffff8b7824a07000@0@lo:12/10 lens 592/608 e 0 to 0 dl 1679937047 ref 2 fl Interpret:RQU/4/0 rc 301/301 job:''
[ 6712.530052] LustreError: 343158:0:(fail.c:149:__cfs_fail_timeout_set()) cfs_fail_timeout id 709 awake
[ 6712.861038] Lustre: 343158:0:(ldlm_lib.c:1989:extend_recovery_timer()) lustre-MDT0000: extended recovery timer reached hard limit: 60, extend: 1
[ 6712.861129] LustreError: 328554:0:(client.c:3253:ptlrpc_replay_interpret()) @@@ status 301, old was 0 req@000000008c982720 x1761541438383872/t8589934599(8589934599) o101->lustre-MDT0000-mdc-ffff8b7824a07000@0@lo:12/10 lens 592/608 e 0 to 0 dl 1679937047 ref 2 fl Interpret:RQU/4/0 rc 301/301 job:''
[ 6712.867558] Lustre: 343158:0:(ldlm_lib.c:1989:extend_recovery_timer()) Skipped 8 previous similar messages
[ 6712.867614] LustreError: 343158:0:(fail.c:138:__cfs_fail_timeout_set()) cfs_fail_timeout id 709 sleeping for 300ms
[ 6712.867615] LustreError: 343158:0:(fail.c:138:__cfs_fail_timeout_set()) Skipped 1 previous similar message
[ 6713.190114] LustreError: 343158:0:(fail.c:149:__cfs_fail_timeout_set()) cfs_fail_timeout id 709 awake
[ 6713.197847] LustreError: 343158:0:(fail.c:149:__cfs_fail_timeout_set()) Skipped 1 previous similar message
[ 6714.190050] LustreError: 343158:0:(fail.c:149:__cfs_fail_timeout_set()) cfs_fail_timeout id 709 awake
[ 6714.195251] LustreError: 343158:0:(fail.c:149:__cfs_fail_timeout_set()) Skipped 2 previous similar messages
[ 6714.198940] Lustre: 343158:0:(ldlm_lib.c:1989:extend_recovery_timer()) lustre-MDT0000: extended recovery timer reached hard limit: 60, extend: 1
[ 6714.199143] Lustre: 343158:0:(ldlm_lib.c:1989:extend_recovery_timer()) Skipped 3 previous similar messages
[ 6714.199347] LustreError: 328554:0:(client.c:3253:ptlrpc_replay_interpret()) @@@ status 301, old was 0 req@0000000065f7411a x1761541438384512/t8589934603(8589934603) o101->lustre-MDT0000-mdc-ffff8b7824a07000@0@lo:12/10 lens 592/608 e 0 to 0 dl 1679937049 ref 2 fl Interpret:RQU/4/0 rc 301/301 job:''
[ 6714.199670] LustreError: 328554:0:(client.c:3253:ptlrpc_replay_interpret()) Skipped 1 previous similar message
[ 6714.199947] LustreError: 343158:0:(fail.c:138:__cfs_fail_timeout_set()) cfs_fail_timeout id 709 sleeping for 300ms
[ 6714.200112] LustreError: 343158:0:(fail.c:138:__cfs_fail_timeout_set()) Skipped 3 previous similar messages
[ 6716.510052] LustreError: 343158:0:(fail.c:149:__cfs_fail_timeout_set()) cfs_fail_timeout id 709 awake
[ 6716.518941] LustreError: 343158:0:(fail.c:149:__cfs_fail_timeout_set()) Skipped 6 previous similar messages
[ 6716.531745] Lustre: 343158:0:(ldlm_lib.c:1989:extend_recovery_timer()) lustre-MDT0000: extended recovery timer reached hard limit: 60, extend: 1
[ 6716.531965] Lustre: 343158:0:(ldlm_lib.c:1989:extend_recovery_timer()) Skipped 6 previous similar messages
[ 6716.532173] LustreError: 343158:0:(fail.c:138:__cfs_fail_timeout_set()) cfs_fail_timeout id 709 sleeping for 300ms
[ 6716.532319] LustreError: 343158:0:(fail.c:138:__cfs_fail_timeout_set()) Skipped 6 previous similar messages
[ 6716.861172] LustreError: 328554:0:(client.c:3253:ptlrpc_replay_interpret()) @@@ status 301, old was 0 req@00000000786a54b5 x1761541438385792/t8589934611(8589934611) o101->lustre-MDT0000-mdc-ffff8b7824a07000@0@lo:12/10 lens 592/608 e 0 to 0 dl 1679937051 ref 2 fl Interpret:RQU/4/0 rc 301/301 job:''
[ 6716.868070] LustreError: 328554:0:(client.c:3253:ptlrpc_replay_interpret()) Skipped 3 previous similar messages
[ 6720.830050] LustreError: 343158:0:(fail.c:149:__cfs_fail_timeout_set()) cfs_fail_timeout id 709 awake
[ 6720.835287] LustreError: 343158:0:(fail.c:149:__cfs_fail_timeout_set()) Skipped 12 previous similar messages
[ 6720.838648] Lustre: 343158:0:(ldlm_lib.c:1989:extend_recovery_timer()) lustre-MDT0000: extended recovery timer reached hard limit: 60, extend: 1
[ 6720.838844] Lustre: 343158:0:(ldlm_lib.c:1989:extend_recovery_timer()) Skipped 12 previous similar messages
[ 6720.839012] LustreError: 343158:0:(fail.c:138:__cfs_fail_timeout_set()) cfs_fail_timeout id 709 sleeping for 300ms
[ 6720.839158] LustreError: 343158:0:(fail.c:138:__cfs_fail_timeout_set()) Skipped 12 previous similar messages
[ 6721.491189] LustreError: 328554:0:(client.c:3253:ptlrpc_replay_interpret()) @@@ status 301, old was 0 req@0000000013e4d622 x1761541438388032/t8589934625(8589934625) o101->lustre-MDT0000-mdc-ffff8b7824a07000@0@lo:12/10 lens 592/608 e 0 to 0 dl 1679937056 ref 2 fl Interpret:RQU/4/0 rc 301/301 job:''
[ 6721.497021] LustreError: 328554:0:(client.c:3253:ptlrpc_replay_interpret()) Skipped 6 previous similar messages
[ 6727.432199] LustreError: 343158:0:(osp_internal.h:530:osp_fid_diff()) ASSERTION( fid_seq(fid1) == fid_seq(fid2) ) failed: fid1:[0x2c0000401:0x2:0x0], fid2:[0x100010000:0x1:0x0]
[ 6727.436720] LustreError: 343158:0:(osp_internal.h:530:osp_fid_diff()) LBUG
[ 6727.436766] Pid: 343158, comm: tgt_recover_0 4.18.0 #2 SMP Sun Oct 23 17:58:04 UTC 2022
[ 6727.436813] Call Trace TBD:
[ 6727.436859] [<0>] libcfs_call_trace+0x67/0x90 [libcfs]
[ 6727.436896] [<0>] lbug_with_loc+0x3e/0x80 [libcfs]
[ 6727.436941] [<0>] osp_create+0x871/0xa70 [osp]
[ 6727.436992] [<0>] lod_sub_create+0x28e/0x480 [lod]
[ 6727.437040] [<0>] lod_striped_create+0x1ab/0x5b0 [lod]
[ 6727.437085] [<0>] lod_xattr_set+0xee0/0x1a20 [lod]
[ 6727.437130] [<0>] mdd_create_object+0xab3/0x19f0 [mdd]
[ 6727.437172] [<0>] mdd_create+0xfa8/0x2540 [mdd]
[ 6727.437224] [<0>] mdt_reint_open+0x2558/0x33f0 [mdt]
[ 6727.437270] [<0>] mdt_reint_rec+0x10f/0x260 [mdt]
[ 6727.437312] [<0>] mdt_reint_internal+0x586/0xb30 [mdt]
[ 6727.437361] [<0>] mdt_intent_open+0x132/0x420 [mdt]
[ 6727.437407] [<0>] mdt_intent_policy+0x419/0x1030 [mdt]
[ 6727.437502] [<0>] ldlm_lock_enqueue+0x43f/0xae0 [ptlrpc]
[ 6727.437572] [<0>] ldlm_handle_enqueue0+0x5e6/0x1730 [ptlrpc]
[ 6727.437653] [<0>] tgt_enqueue+0x9f/0x210 [ptlrpc]
[ 6727.437726] [<0>] tgt_request_handle+0x977/0x1a40 [ptlrpc]
[ 6727.437796] [<0>] handle_recovery_req+0x13c/0x260 [ptlrpc]
[ 6727.437866] [<0>] target_recovery_thread+0xdf0/0x1c00 [ptlrpc]
[ 6727.437924] [<0>] kthread+0x129/0x140
[ 6727.437954] [<0>] ret_from_fork+0x1f/0x30
[ 6727.437978] Kernel panic - not syncing: LBUG
[ 6727.438006] CPU: 1 PID: 343158 Comm: tgt_recover_0 Tainted: G W O --------- - - 4.18.0 #2
[ 6727.438058] Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011
[ 6727.438089] Call Trace:
[ 6727.438104] dump_stack+0x5c/0x80
[ 6727.438134] panic+0xd2/0x27d
[ 6727.438161] ? ret_from_fork+0x1f/0x30
[ 6727.438184] ? lbug_with_loc+0x3e/0x80 [libcfs]
[ 6727.438221] lbug_with_loc.cold.6+0x18/0x18 [libcfs]
[ 6727.438261] ? osp_create+0x865/0xa70 [osp]
[ 6727.438293] osp_create+0x871/0xa70 [osp]
[ 6727.438330] lod_sub_create+0x28e/0x480 [lod]
[ 6727.438373] lod_striped_create+0x1ab/0x5b0 [lod]
[ 6727.438415] lod_xattr_set+0xee0/0x1a20 [lod]
[ 6727.438456] mdd_create_object+0xab3/0x19f0 [mdd]
[ 6727.438531] ? top_trans_start+0x445/0x9a0 [ptlrpc]
[ 6727.438608] ? top_trans_start+0x445/0x9a0 [ptlrpc]
[ 6727.438655] mdd_create+0xfa8/0x2540 [mdd]
[ 6727.438712] ? lustre_msg_get_flags+0x21/0x90 [ptlrpc]
[ 6727.438762] mdt_reint_open+0x2558/0x33f0 [mdt]
[ 6727.438808] mdt_reint_rec+0x10f/0x260 [mdt]
[ 6727.438854] mdt_reint_internal+0x586/0xb30 [mdt]
[ 6727.438898] mdt_intent_open+0x132/0x420 [mdt]
[ 6727.438945] mdt_intent_policy+0x419/0x1030 [mdt]
[ 6727.438989] ? mdt_intent_fixup_resent+0x1f0/0x1f0 [mdt]
[ 6727.439057] ldlm_lock_enqueue+0x43f/0xae0 [ptlrpc]
[ 6727.439097] ? cfs_hash_bd_add_locked+0x17/0xa0 [libcfs]
[ 6727.439128] ? _raw_read_unlock+0x1a/0x30
[ 6727.439190] ldlm_handle_enqueue0+0x5e6/0x1730 [ptlrpc]
[ 6727.439266] tgt_enqueue+0x9f/0x210 [ptlrpc]
[ 6727.439338] tgt_request_handle+0x977/0x1a40 [ptlrpc]
[ 6727.439414] ? tgt_checksum_niobuf_rw+0x15c0/0x15c0 [ptlrpc]
[ 6727.439495] ? tgt_checksum_niobuf_rw+0x15c0/0x15c0 [ptlrpc]
[ 6727.439574] ? tgt_checksum_niobuf_rw+0x15c0/0x15c0 [ptlrpc]
[ 6727.439657] handle_recovery_req+0x13c/0x260 [ptlrpc]
[ 6727.439728] target_recovery_thread+0xdf0/0x1c00 [ptlrpc]
[ 6727.439801] ? target_send_reply+0x770/0x770 [ptlrpc]
[ 6727.439840] kthread+0x129/0x140
[ 6727.439869] ? kthread_flush_work_fn+0x10/0x10
[ 6727.439895] ret_from_fork+0x1f/0x30
[ 6727.440012] Kernel Offset: 0xe000000 from 0xffffffff81000000 (relocation range: 0xffffffff80000000-0xffffffffbfffffff)
[ 6727.440072] Rebooting in 60 seconds..