Details
- Type: Bug
- Resolution: Fixed
- Priority: Blocker
- Fix Version: Lustre 2.4.0
- A patch pushed via git.
- 3
- 6849
Description
From this test run:
https://maloo.whamcloud.com/test_sessions/f62dc660-7943-11e2-9cb9-52540035b04c
The patch being tested is not involved in this area of the code.
conf-sanity test_64
Error: 'test failed to respond and timed out'
Failure Rate: 4.00% of last 100 executions [all branches]
On the MDS, the following is seen:
09:37:51:Lustre: DEBUG MARKER: == conf-sanity test 64: check lfs df --lazy == 09:37:45 (1361122665)
09:37:51:Lustre: DEBUG MARKER: mkdir -p /mnt/mds1
09:37:51:Lustre: DEBUG MARKER: test -b /dev/lvm-MDS/P1
09:37:51:Lustre: DEBUG MARKER: mkdir -p /mnt/mds1; mount -t lustre -o user_xattr,acl /dev/lvm-MDS/P1 /mnt/mds1
09:37:51:LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. quota=on. Opts:
09:37:51:Lustre: lustre-MDT0000: used disk, loading
09:37:51:Lustre: DEBUG MARKER: PATH=/usr/lib64/lustre/tests:/usr/lib/lustre/tests:/usr/lib64/lustre/tests:/opt/iozone/bin:/usr/lib64/lustre/tests//usr/lib64/lustre/tests:/usr/lib64/lustre/tests:/usr/lib64/lustre/tests/../utils:/opt/iozone/bin:/usr/lib64/lustre/tests/mpi:/usr/lib64/lust
09:37:51:Lustre: DEBUG MARKER: e2label /dev/lvm-MDS/P1 2>/dev/null
09:38:02:Lustre: lustre-OST0000-osc-MDT0000: Connection to lustre-OST0000 (at 10.10.17.34@tcp) was lost; in progress operations using this service will wait for recovery to complete
09:38:14:Lustre: DEBUG MARKER: grep -c /mnt/mds1' ' /proc/mounts
09:38:14:Lustre: DEBUG MARKER: umount -d -f /mnt/mds1
09:38:14:LustreError: 7883:0:(client.c:1048:ptlrpc_import_delay_req()) @@@ IMP_CLOSED req@ffff88006c4a2c00 x1427239946161272/t0(0) o13->lustre-OST0000-osc-MDT0000@10.10.17.34@tcp:7/4 lens 224/368 e 0 to 0 dl 0 ref 1 fl Rpc:/0/ffffffff rc 0/-1
09:38:14:LustreError: 24649:0:(dt_object.h:979:dt_declare_record_write()) ASSERTION( dt != NULL ) failed: dt is NULL when we want to write record
09:38:14:LustreError: 24649:0:(dt_object.h:979:dt_declare_record_write()) LBUG
09:38:14:Pid: 24649, comm: osp-pre-1
09:38:14:
09:38:14:Call Trace:
09:38:14: [<ffffffffa0ee7895>] libcfs_debug_dumpstack+0x55/0x80 [libcfs]
09:38:14: [<ffffffffa0ee7e97>] lbug_with_loc+0x47/0xb0 [libcfs]
09:38:14: [<ffffffffa0704ca5>] osp_write_last_oid_seq_files+0x595/0x6a0 [osp]
09:38:14: [<ffffffffa070918d>] osp_precreate_thread+0x80d/0x1460 [osp]
09:38:14: [<ffffffffa0708980>] ? osp_precreate_thread+0x0/0x1460 [osp]
09:38:14: [<ffffffff8100c0ca>] child_rip+0xa/0x20
09:38:14: [<ffffffffa0708980>] ? osp_precreate_thread+0x0/0x1460 [osp]
09:38:14: [<ffffffffa0708980>] ? osp_precreate_thread+0x0/0x1460 [osp]
09:38:14: [<ffffffff8100c0c0>] ? child_rip+0x0/0x20
It looks like the MDS panicked on unmount: the osp-pre-1 precreate thread hit the ASSERTION( dt != NULL ) in dt_declare_record_write() while the umount was tearing the target down.