Details
Type: Bug
Resolution: Duplicate
Priority: Major
Fix Version/s: None
Component/s: None
Labels: None
Severity: 3
Rank (Obsolete): 15371
Description
We have upgraded from 1.8.7 to 2.5.2.
After mounting the FS at approximately 17:29, at 18:49 we see the following:
Aug 19 18:49:09 cs04r-sc-mds03-01 kernel: LNet: Service thread pid 15310 was inactive for 200.00s. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes:
Aug 19 18:49:09 cs04r-sc-mds03-01 kernel: Pid: 15310, comm: mdt01_000
Aug 19 18:49:09 cs04r-sc-mds03-01 kernel:
Aug 19 18:49:09 cs04r-sc-mds03-01 kernel: Call Trace:
Aug 19 18:49:09 cs04r-sc-mds03-01 kernel: [<ffffffffa057ffd5>] ? cfs_hash_bd_lookup_intent+0x65/0x130 [libcfs]
Aug 19 18:49:09 cs04r-sc-mds03-01 kernel: [<ffffffffa06bdeab>] lu_object_find_at+0xab/0x360 [obdclass]
Aug 19 18:49:09 cs04r-sc-mds03-01 kernel: [<ffffffff81061d00>] ? default_wake_function+0x0/0x20
Aug 19 18:49:09 cs04r-sc-mds03-01 kernel: [<ffffffff81282965>] ? _atomic_dec_and_lock+0x55/0x80
Aug 19 18:49:09 cs04r-sc-mds03-01 kernel: [<ffffffffa06be19f>] lu_object_find_slice+0x1f/0x80 [obdclass]
Aug 19 18:49:09 cs04r-sc-mds03-01 kernel: [<ffffffffa0f99370>] mdd_object_find+0x10/0x70 [mdd]
Aug 19 18:49:09 cs04r-sc-mds03-01 kernel: [<ffffffffa0fa97ba>] mdd_is_parent+0xaa/0x3a0 [mdd]
Aug 19 18:49:09 cs04r-sc-mds03-01 kernel: [<ffffffffa0fa9c0c>] mdd_is_subdir+0x15c/0x200 [mdd]
Aug 19 18:49:09 cs04r-sc-mds03-01 kernel: [<ffffffffa0e6d509>] mdt_reint_rename+0x1049/0x1c20 [mdt]
Aug 19 18:49:09 cs04r-sc-mds03-01 kernel: [<ffffffffa0e48940>] ? mdt_blocking_ast+0x0/0x2a0 [mdt]
Aug 19 18:49:09 cs04r-sc-mds03-01 kernel: [<ffffffffa0819f60>] ? ldlm_completion_ast+0x0/0x920 [ptlrpc]
Aug 19 18:49:09 cs04r-sc-mds03-01 kernel: [<ffffffffa0586f76>] ? upcall_cache_get_entry+0x296/0x880 [libcfs]
Aug 19 18:49:09 cs04r-sc-mds03-01 kernel: [<ffffffffa06ddc00>] ? lu_ucred+0x20/0x30 [obdclass]
Aug 19 18:49:09 cs04r-sc-mds03-01 kernel: [<ffffffffa0e684a1>] mdt_reint_rec+0x41/0xe0 [mdt]
Aug 19 18:49:09 cs04r-sc-mds03-01 kernel: [<ffffffffa0e4dcb3>] mdt_reint_internal+0x4c3/0x780 [mdt]
Aug 19 18:49:09 cs04r-sc-mds03-01 kernel: [<ffffffffa0e4dfb4>] mdt_reint+0x44/0xe0 [mdt]
Aug 19 18:49:09 cs04r-sc-mds03-01 kernel: [<ffffffffa0e5158a>] mdt_handle_common+0x52a/0x1470 [mdt]
Aug 19 18:49:09 cs04r-sc-mds03-01 kernel: [<ffffffffa0e8d755>] mds_regular_handle+0x15/0x20 [mdt]
Aug 19 18:49:09 cs04r-sc-mds03-01 kernel: [<ffffffffa0853bc5>] ptlrpc_server_handle_request+0x385/0xc00 [ptlrpc]
Aug 19 18:49:09 cs04r-sc-mds03-01 kernel: [<ffffffffa056b4ce>] ? cfs_timer_arm+0xe/0x10 [libcfs]
Aug 19 18:49:09 cs04r-sc-mds03-01 kernel: [<ffffffffa057c3cf>] ? lc_watchdog_touch+0x6f/0x170 [libcfs]
Aug 19 18:49:09 cs04r-sc-mds03-01 kernel: [<ffffffffa084b2a9>] ? ptlrpc_wait_event+0xa9/0x2d0 [ptlrpc]
Aug 19 18:49:09 cs04r-sc-mds03-01 kernel: [<ffffffff810546b9>] ? __wake_up_common+0x59/0x90
Aug 19 18:49:09 cs04r-sc-mds03-01 kernel: [<ffffffffa0854f2d>] ptlrpc_main+0xaed/0x1740 [ptlrpc]
Aug 19 18:49:09 cs04r-sc-mds03-01 kernel: [<ffffffffa0854440>] ? ptlrpc_main+0x0/0x1740 [ptlrpc]
Aug 19 18:49:09 cs04r-sc-mds03-01 kernel: [<ffffffff8109ab56>] kthread+0x96/0xa0
Aug 19 18:49:09 cs04r-sc-mds03-01 kernel: [<ffffffff8100c20a>] child_rip+0xa/0x20
Aug 19 18:49:09 cs04r-sc-mds03-01 kernel: [<ffffffff8109aac0>] ? kthread+0x0/0xa0
Aug 19 18:49:09 cs04r-sc-mds03-01 kernel: [<ffffffff8100c200>] ? child_rip+0x0/0x20
Aug 19 18:49:09 cs04r-sc-mds03-01 kernel:
Aug 19 18:49:09 cs04r-sc-mds03-01 kernel: LustreError: dumping log to /tmp/lustre-log.1408470549.15310
Aug 19 18:51:19 cs04r-sc-mds03-01 kernel: LNet: Service thread pid 15470 was inactive for 200.00s. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes:
Aug 19 18:51:19 cs04r-sc-mds03-01 kernel: Pid: 15470, comm: mdt03_004
Attachments
Issue Links
- is related to: LU-4725 "wrong lock ordering in rename leads to deadlocks" (Resolved)
There are a number of processes/applications that tend to create files with a temporary extension and then rename them to the proper name, to avoid any race condition where processing starts before the data is there, so this is not that uncommon. And considering that we first saw it with basically only ior test runs, I'd say something is doing it quite frequently...
Cheers,
Frederik
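
For reference, a minimal sketch of the write-to-a-temporary-name-then-rename idiom Frederik describes above (the filenames and payload here are hypothetical, not taken from the report). Each such rename(2) call is served by the MDS rename path visible in the stack trace (mdt_reint_rename), which is why a workload built on this pattern exercises that code so heavily.

/* Illustrative sketch only: write data under a temporary extension, then
 * rename it into place so consumers never see a partially written file.   */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>

int main(void)
{
    const char *final_name = "result.dat";   /* hypothetical target name   */
    char tmp_name[256];

    /* 1. Write the data under a temporary extension first.                */
    snprintf(tmp_name, sizeof(tmp_name), "%s.tmp", final_name);
    int fd = open(tmp_name, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) {
        perror("open");
        return EXIT_FAILURE;
    }
    const char *payload = "example payload\n";
    if (write(fd, payload, strlen(payload)) < 0)
        perror("write");
    close(fd);

    /* 2. Atomically move it to its proper name; readers either see the
     *    old state or the complete new file, never a partial one.         */
    if (rename(tmp_name, final_name) != 0) {
        perror("rename");
        return EXIT_FAILURE;
    }
    return EXIT_SUCCESS;
}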