<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 02:23:38 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary, append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92">
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
<language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-9146] Backport patches from upstream to resolve deadlock in xattr</title>
                <link>https://jira.whamcloud.com/browse/LU-9146</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;We need to backport the patches below from upstream to resolve a deadlock on i_data_sem.&lt;/p&gt;

&lt;p&gt;From a521100231f816f8cdd9c8e77da14ff1e42c2b17 Mon Sep 17 00:00:00 2001&lt;br/&gt;
From: Theodore Ts&apos;o &amp;lt;tytso@mit.edu&amp;gt;&lt;br/&gt;
Date: Thu, 4 Sep 2014 18:06:25 -0400&lt;br/&gt;
Subject: &lt;span class=&quot;error&quot;&gt;&amp;#91;PATCH&amp;#93;&lt;/span&gt; ext4: pass allocation_request struct to&lt;br/&gt;
 ext4_(alloc,splice)_branch&lt;/p&gt;

&lt;p&gt;Instead of initializing the allocation_request structure in&lt;br/&gt;
ext4_alloc_branch(), set it up in ext4_ind_map_blocks(), and then pass&lt;br/&gt;
it to ext4_alloc_branch() and ext4_splice_branch().&lt;/p&gt;

&lt;p&gt;This allows ext4_ind_map_blocks to pass flags in the allocation&lt;br/&gt;
request structure without having to add Yet Another argument to&lt;br/&gt;
ext4_alloc_branch().&lt;/p&gt;

&lt;p&gt;Signed-off-by: Theodore Ts&apos;o &amp;lt;tytso@mit.edu&amp;gt;&lt;br/&gt;
Reviewed-by: Jan Kara &amp;lt;jack@suse.cz&amp;gt;&lt;/p&gt;

&lt;p&gt;From e3cf5d5d9a86df1c5e413bdd3725c25a16ff854c Mon Sep 17 00:00:00 2001&lt;br/&gt;
From: Theodore Ts&apos;o &amp;lt;tytso@mit.edu&amp;gt;&lt;br/&gt;
Date: Thu, 4 Sep 2014 18:07:25 -0400&lt;br/&gt;
Subject: &lt;span class=&quot;error&quot;&gt;&amp;#91;PATCH&amp;#93;&lt;/span&gt; ext4: prepare to drop EXT4_STATE_DELALLOC_RESERVED&lt;/p&gt;

&lt;p&gt;The EXT4_STATE_DELALLOC_RESERVED flag was originally implemented&lt;br/&gt;
because it was too hard to make sure the mballoc and get_block flags&lt;br/&gt;
could be reliably passed down through all of the codepaths that end up&lt;br/&gt;
calling ext4_mb_new_blocks().&lt;/p&gt;

&lt;p&gt;Since then, we have mb_flags passed down through most of the code&lt;br/&gt;
paths, so getting rid of EXT4_STATE_DELALLOC_RESERVED isn&apos;t as tricky&lt;br/&gt;
as it used to be.&lt;/p&gt;

&lt;p&gt;This commit plumbs in the last of what is required, and then adds a&lt;br/&gt;
WARN_ON check to make sure we haven&apos;t missed anything.  If this passes&lt;br/&gt;
a full regression test run, we can then drop&lt;br/&gt;
EXT4_STATE_DELALLOC_RESERVED.&lt;/p&gt;

&lt;p&gt;Signed-off-by: Theodore Ts&apos;o &amp;lt;tytso@mit.edu&amp;gt;&lt;br/&gt;
Reviewed-by: Jan Kara &amp;lt;jack@suse.cz&amp;gt;&lt;/p&gt;

&lt;p&gt;From 2e81a4eeedcaa66e35f58b81e0755b87057ce392 Mon Sep 17 00:00:00 2001&lt;br/&gt;
From: Jan Kara &amp;lt;jack@suse.cz&amp;gt;&lt;br/&gt;
Date: Thu, 11 Aug 2016 12:38:55 -0400&lt;br/&gt;
Subject: &lt;span class=&quot;error&quot;&gt;&amp;#91;PATCH&amp;#93;&lt;/span&gt; ext4: avoid deadlock when expanding inode size&lt;/p&gt;

&lt;p&gt;When we need to move xattrs into external xattr block, we call&lt;br/&gt;
ext4_xattr_block_set() from ext4_expand_extra_isize_ea(). That may end&lt;br/&gt;
up calling ext4_mark_inode_dirty() again which will recurse back into&lt;br/&gt;
the inode expansion code leading to deadlocks.&lt;/p&gt;

&lt;p&gt;Protect from recursion using EXT4_STATE_NO_EXPAND inode flag and move&lt;br/&gt;
its management into ext4_expand_extra_isize_ea() since its manipulation&lt;br/&gt;
is safe there (due to xattr_sem) from possible races with&lt;br/&gt;
ext4_xattr_set_handle() which plays with it as well.&lt;/p&gt;

&lt;p&gt;CC: stable@vger.kernel.org   # 4.4.x&lt;br/&gt;
Signed-off-by: Jan Kara &amp;lt;jack@suse.cz&amp;gt;&lt;br/&gt;
Signed-off-by: Theodore Ts&apos;o &amp;lt;tytso@mit.edu&amp;gt;&lt;/p&gt;</description>
                <environment></environment>
        <key id="44046">LU-9146</key>
            <summary>Backport patches from upstream to resolve deadlock in xattr</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="4" iconUrl="https://jira.whamcloud.com/images/icons/priorities/minor.svg">Minor</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="ys">Yang Sheng</assignee>
                                    <reporter username="ys">Yang Sheng</reporter>
                        <labels>
                    </labels>
                <created>Thu, 23 Feb 2017 07:15:27 +0000</created>
                <updated>Wed, 16 Jan 2019 20:40:50 +0000</updated>
                            <resolved>Thu, 9 Mar 2017 07:33:48 +0000</resolved>
                                                    <fixVersion>Lustre 2.10.0</fixVersion>
                                        <due></due>
                            <votes>0</votes>
                                    <watches>6</watches>
                                                                            <comments>
                            <comment id="185963" author="gerrit" created="Thu, 23 Feb 2017 13:03:04 +0000"  >&lt;p&gt;Yang Sheng (yang.sheng@intel.com) uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/25595&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/25595&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-9146&quot; title=&quot;Backport patches from upstream to resolve deadlock in xattr&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-9146&quot;&gt;&lt;del&gt;LU-9146&lt;/del&gt;&lt;/a&gt; ldiskfs: backport a few patches to resolve deadlock&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 8b514e5fcd9ff38868af33623c80e86072187397&lt;/p&gt;</comment>
                            <comment id="187606" author="gerrit" created="Thu, 9 Mar 2017 06:13:31 +0000"  >&lt;p&gt;Oleg Drokin (oleg.drokin@intel.com) merged in patch &lt;a href=&quot;https://review.whamcloud.com/25595/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/25595/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-9146&quot; title=&quot;Backport patches from upstream to resolve deadlock in xattr&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-9146&quot;&gt;&lt;del&gt;LU-9146&lt;/del&gt;&lt;/a&gt; ldiskfs: backport a few patches to resolve deadlock&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: 18120272b73a018a2590f1e5a895331b35df75e9&lt;/p&gt;</comment>
                            <comment id="187622" author="pjones" created="Thu, 9 Mar 2017 07:33:48 +0000"  >&lt;p&gt;Landed for 2.10&lt;/p&gt;</comment>
                            <comment id="187635" author="green" created="Thu, 9 Mar 2017 13:22:51 +0000"  >&lt;p&gt;We can do it, but since we always patch ldiskfs ourselves anyway, it has no impact on our patchlessness.&lt;/p&gt;</comment>
                            <comment id="195630" author="ys" created="Fri, 12 May 2017 05:52:25 +0000"  >&lt;p&gt;Just for the record.&lt;br/&gt;
OSS stack trace from host gio12&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;Feb 22 12:41:41 gio12 kernel: Lustre: Skipped 686202 previous similar messages
Feb 22 12:41:50 gio12 kernel: Lustre: dtemp-OST001b: recovery is timed out, evict stale exports
Feb 22 12:41:50 gio12 kernel: Lustre: dtemp-OST001b: disconnecting 1 stale clients
Feb 22 12:41:50 gio12 kernel: Lustre: dtemp-OST001b: Client a83807d9-ca3b-9fd3-3cbc-1d2b648b12d1 (at 172.22.160.62@o2ib6) reconnecting
Feb 22 12:41:50 gio12 kernel: Lustre: Skipped 894 previous similar messages
Feb 22 12:41:52 gio12 kernel: Lustre: dtemp-OST001b: Recovery over after 14:56, of 1435 clients 1434 recovered and 1 was evicted.
Feb 22 12:41:52 gio12 kernel: Lustre: Skipped 1 previous similar message
Feb 22 12:41:52 gio12 kernel: Lustre: dtemp-OST001b: deleting orphan objects from 0x0:7139820 to 0x0:7139873
Feb 22 12:44:16 gio12 kernel: INFO: task ll_ost_io01_002:14056 blocked for more than 120 seconds.
Feb 22 12:44:16 gio12 kernel: &quot;echo 0 &amp;gt; /proc/sys/kernel/hung_task_timeout_secs&quot; disables this message.
Feb 22 12:44:16 gio12 kernel: ll_ost_io01_002 D 0000000000000000     0 14056      2 0x00000080
Feb 22 12:44:16 gio12 kernel:  ffff881020b9b898 0000000000000046 ffff88102151b980 ffff881020b9bfd8
Feb 22 12:44:16 gio12 kernel:  ffff881020b9bfd8 ffff881020b9bfd8 ffff88102151b980 ffff88102151b980
Feb 22 12:44:16 gio12 kernel:  ffff8807a92bda90 fffffffeffffffff ffff8807a92bda98 0000000000000000
Feb 22 12:44:16 gio12 kernel: Call Trace:
Feb 22 12:44:16 gio12 kernel:  [&amp;lt;ffffffff8163bf19&amp;gt;] schedule+0x29/0x70
Feb 22 12:44:16 gio12 kernel:  [&amp;lt;ffffffff8163d8d5&amp;gt;] rwsem_down_read_failed+0xf5/0x170
Feb 22 12:44:16 gio12 kernel:  [&amp;lt;ffffffff81301e54&amp;gt;] call_rwsem_down_read_failed+0x14/0x30
Feb 22 12:44:16 gio12 kernel:  [&amp;lt;ffffffff8163b130&amp;gt;] ? down_read+0x20/0x30
Feb 22 12:44:16 gio12 kernel:  [&amp;lt;ffffffffa0def84b&amp;gt;] ldiskfs_xattr_block_set+0x62b/0xa80 [ldiskfs]
Feb 22 12:44:16 gio12 kernel:  [&amp;lt;ffffffffa0df09d4&amp;gt;] ldiskfs_expand_extra_isize_ea+0x404/0x810 [ldiskfs]
Feb 22 12:44:16 gio12 kernel:  [&amp;lt;ffffffffa0df6d9f&amp;gt;] ldiskfs_mark_inode_dirty+0x1af/0x210 [ldiskfs]
Feb 22 12:44:16 gio12 kernel:  [&amp;lt;ffffffffa0de0884&amp;gt;] ldiskfs_ext_truncate+0x24/0xe0 [ldiskfs]
Feb 22 12:44:16 gio12 kernel:  [&amp;lt;ffffffffa0df83b7&amp;gt;] ldiskfs_truncate+0x3b7/0x3f0 [ldiskfs]
Feb 22 12:44:16 gio12 kernel:  [&amp;lt;ffffffffa0e92e08&amp;gt;] osd_punch+0x138/0x5e0 [osd_ldiskfs]
Feb 22 12:44:16 gio12 kernel:  [&amp;lt;ffffffffa0c84346&amp;gt;] ofd_object_punch+0x6e6/0xc30 [ofd]
Feb 22 12:44:16 gio12 kernel:  [&amp;lt;ffffffffa0c715e6&amp;gt;] ofd_punch_hdl+0x466/0x720 [ofd]
Feb 22 12:44:16 gio12 kernel:  [&amp;lt;ffffffffa109bc9b&amp;gt;] tgt_request_handle+0x8fb/0x11f0 [ptlrpc]
Feb 22 12:44:16 gio12 kernel:  [&amp;lt;ffffffffa103ea3b&amp;gt;] ptlrpc_server_handle_request+0x21b/0xa90 [ptlrpc]
Feb 22 12:44:16 gio12 kernel:  [&amp;lt;ffffffffa0815cf8&amp;gt;] ? lc_watchdog_touch+0x68/0x180 [libcfs]
Feb 22 12:44:16 gio12 kernel:  [&amp;lt;ffffffffa103bb08&amp;gt;] ? ptlrpc_wait_event+0x98/0x330 [ptlrpc]
Feb 22 12:44:16 gio12 kernel:  [&amp;lt;ffffffff8163e05b&amp;gt;] ? _raw_spin_unlock_irqrestore+0x1b/0x40
Feb 22 12:44:16 gio12 kernel:  [&amp;lt;ffffffffa1042360&amp;gt;] ptlrpc_main+0xc00/0x1f60 [ptlrpc]
Feb 22 12:44:16 gio12 kernel:  [&amp;lt;ffffffffa1041760&amp;gt;] ? ptlrpc_register_service+0x1070/0x1070 [ptlrpc]
Feb 22 12:44:16 gio12 kernel:  [&amp;lt;ffffffff810a5baf&amp;gt;] kthread+0xcf/0xe0
Feb 22 12:44:16 gio12 kernel:  [&amp;lt;ffffffff810a5ae0&amp;gt;] ? kthread_create_on_node+0x140/0x140
Feb 22 12:44:16 gio12 kernel:  [&amp;lt;ffffffff81646e58&amp;gt;] ret_from_fork+0x58/0x90
Feb 22 12:44:16 gio12 kernel:  [&amp;lt;ffffffff810a5ae0&amp;gt;] ? kthread_create_on_node+0x140/0x140
Feb 22 12:44:16 gio12 kernel: INFO: task jbd2/dm-11-8:15759 blocked for more than 120 seconds.
Feb 22 12:44:16 gio12 kernel: &quot;echo 0 &amp;gt; /proc/sys/kernel/hung_task_timeout_secs&quot; disables this message.
Feb 22 12:44:16 gio12 kernel: jbd2/dm-11-8    D ffff880036446800     0 15759      2 0x00000080
Feb 22 12:44:16 gio12 kernel:  ffff880fdd21bc88 0000000000000046 ffff88104f747300 ffff880fdd21bfd8
Feb 22 12:44:16 gio12 kernel:  ffff880fdd21bfd8 ffff880fdd21bfd8 ffff88104f747300 ffff880fdd21bda0
Feb 22 12:44:16 gio12 kernel:  ffff881016e128c0 ffff88104f747300 ffff880fdd21bd88 ffff880036446800
Feb 22 12:44:16 gio12 kernel: Call Trace:
Feb 22 12:44:16 gio12 kernel:  [&amp;lt;ffffffff8163bf19&amp;gt;] schedule+0x29/0x70
Feb 22 12:44:16 gio12 kernel:  [&amp;lt;ffffffffa018d138&amp;gt;] jbd2_journal_commit_transaction+0x248/0x19e0 [jbd2]
Feb 22 12:44:16 gio12 kernel:  [&amp;lt;ffffffff810c15fc&amp;gt;] ? update_curr+0xcc/0x150
Feb 22 12:44:16 gio12 kernel:  [&amp;lt;ffffffff810c1ac6&amp;gt;] ? dequeue_entity+0x106/0x520
Feb 22 12:44:16 gio12 kernel:  [&amp;lt;ffffffff81013588&amp;gt;] ? __switch_to+0xf8/0x4b0
Feb 22 12:44:16 gio12 kernel:  [&amp;lt;ffffffff810a6ba0&amp;gt;] ? wake_up_atomic_t+0x30/0x30
Feb 22 12:44:16 gio12 kernel:  [&amp;lt;ffffffff8108d7be&amp;gt;] ? try_to_del_timer_sync+0x5e/0x90
Feb 22 12:44:16 gio12 kernel:  [&amp;lt;ffffffffa0192e99&amp;gt;] kjournald2+0xc9/0x260 [jbd2]
Feb 22 12:44:16 gio12 kernel:  [&amp;lt;ffffffff810a6ba0&amp;gt;] ? wake_up_atomic_t+0x30/0x30
Feb 22 12:44:16 gio12 kernel:  [&amp;lt;ffffffffa0192dd0&amp;gt;] ? commit_timeout+0x10/0x10 [jbd2]
Feb 22 12:44:16 gio12 kernel:  [&amp;lt;ffffffff810a5baf&amp;gt;] kthread+0xcf/0xe0
Feb 22 12:44:16 gio12 kernel:  [&amp;lt;ffffffff810a5ae0&amp;gt;] ? kthread_create_on_node+0x140/0x140
Feb 22 12:44:16 gio12 kernel:  [&amp;lt;ffffffff81646e58&amp;gt;] ret_from_fork+0x58/0x90
Feb 22 12:44:16 gio12 kernel:  [&amp;lt;ffffffff810a5ae0&amp;gt;] ? kthread_create_on_node+0x140/0x140
Feb 22 12:44:16 gio12 kernel: INFO: task kworker/u33:2:28976 blocked for more than 120 seconds.
Feb 22 12:44:16 gio12 kernel: &quot;echo 0 &amp;gt; /proc/sys/kernel/hung_task_timeout_secs&quot; disables this message.
Feb 22 12:44:16 gio12 kernel: kworker/u33:2   D ffff880850ea8030     0 28976      2 0x00000080
Feb 22 12:44:16 gio12 kernel: Workqueue: writeback bdi_writeback_workfn (flush-253:11)
Feb 22 12:44:16 gio12 kernel:  ffff880462d2f8e8 0000000000000046 ffff88084f0a8b80 ffff880462d2ffd8
Feb 22 12:44:16 gio12 kernel:  ffff880462d2ffd8 ffff880462d2ffd8 ffff88084f0a8b80 ffff881016e12800
Feb 22 12:44:16 gio12 kernel:  ffff881016e12878 000000000d83b523 ffff880036446800 ffff880850ea8030
Feb 22 12:44:16 gio12 kernel: Call Trace:
Feb 22 12:44:16 gio12 kernel:  [&amp;lt;ffffffff8163bf19&amp;gt;] schedule+0x29/0x70
Feb 22 12:44:16 gio12 kernel:  [&amp;lt;ffffffffa018a085&amp;gt;] wait_transaction_locked+0x85/0xd0 [jbd2]
Feb 22 12:44:16 gio12 kernel:  [&amp;lt;ffffffff810a6ba0&amp;gt;] ? wake_up_atomic_t+0x30/0x30
Feb 22 12:44:16 gio12 kernel:  [&amp;lt;ffffffffa018a400&amp;gt;] start_this_handle+0x2b0/0x5d0 [jbd2]
Feb 22 12:44:16 gio12 kernel:  [&amp;lt;ffffffff811c176a&amp;gt;] ? kmem_cache_alloc+0x1ba/0x1d0
Feb 22 12:44:16 gio12 kernel:  [&amp;lt;ffffffffa018a933&amp;gt;] jbd2__journal_start+0xf3/0x1e0 [jbd2]
Feb 22 12:44:16 gio12 kernel:  [&amp;lt;ffffffffa0df7254&amp;gt;] ? ldiskfs_writepages+0x454/0xd80 [ldiskfs]
Feb 22 12:44:16 gio12 kernel:  [&amp;lt;ffffffffa0dd6829&amp;gt;] __ldiskfs_journal_start_sb+0x69/0xe0 [ldiskfs]
Feb 22 12:44:16 gio12 kernel:  [&amp;lt;ffffffffa0df7254&amp;gt;] ldiskfs_writepages+0x454/0xd80 [ldiskfs]
Feb 22 12:44:16 gio12 kernel:  [&amp;lt;ffffffff81174d08&amp;gt;] ? generic_writepages+0x58/0x80
Feb 22 12:44:17 gio12 kernel:  [&amp;lt;ffffffff81175dae&amp;gt;] do_writepages+0x1e/0x40
Feb 22 12:44:17 gio12 kernel:  [&amp;lt;ffffffff81208c90&amp;gt;] __writeback_single_inode+0x40/0x220
Feb 22 12:44:17 gio12 kernel:  [&amp;lt;ffffffff812096fe&amp;gt;] writeback_sb_inodes+0x25e/0x420
Feb 22 12:44:17 gio12 kernel:  [&amp;lt;ffffffff8120995f&amp;gt;] __writeback_inodes_wb+0x9f/0xd0
Feb 22 12:44:17 gio12 kernel:  [&amp;lt;ffffffff8120a1a3&amp;gt;] wb_writeback+0x263/0x2f0
Feb 22 12:44:17 gio12 kernel:  [&amp;lt;ffffffff811f8fac&amp;gt;] ? get_nr_inodes+0x4c/0x70
Feb 22 12:44:17 gio12 kernel:  [&amp;lt;ffffffff8120c42b&amp;gt;] bdi_writeback_workfn+0x2cb/0x460
Feb 22 12:44:17 gio12 kernel:  [&amp;lt;ffffffff8109d6bb&amp;gt;] process_one_work+0x17b/0x470
Feb 22 12:44:17 gio12 kernel:  [&amp;lt;ffffffff8109e48b&amp;gt;] worker_thread+0x11b/0x400
Feb 22 12:44:17 gio12 kernel:  [&amp;lt;ffffffff8109e370&amp;gt;] ? rescuer_thread+0x400/0x400
Feb 22 12:44:17 gio12 kernel:  [&amp;lt;ffffffff810a5baf&amp;gt;] kthread+0xcf/0xe0
Feb 22 12:44:17 gio12 kernel:  [&amp;lt;ffffffff810a5ae0&amp;gt;] ? kthread_create_on_node+0x140/0x140
Feb 22 12:44:17 gio12 kernel:  [&amp;lt;ffffffff81646e58&amp;gt;] ret_from_fork+0x58/0x90
Feb 22 12:44:17 gio12 kernel:  [&amp;lt;ffffffff810a5ae0&amp;gt;] ? kthread_create_on_node+0x140/0x140
Feb 22 12:44:17 gio12 kernel: INFO: task ll_ost_io03_004:32100 blocked for more than 120 seconds.
Feb 22 12:44:17 gio12 kernel: &quot;echo 0 &amp;gt; /proc/sys/kernel/hung_task_timeout_secs&quot; disables this message.
Feb 22 12:44:17 gio12 kernel: ll_ost_io03_004 D ffff881053a24060     0 32100      2 0x00000080
Feb 22 12:44:17 gio12 kernel:  ffff88031eb6f9f0 0000000000000046 ffff8808a62c2280 ffff88031eb6ffd8
Feb 22 12:44:17 gio12 kernel:  ffff88031eb6ffd8 ffff88031eb6ffd8 ffff8808a62c2280 ffff881016e12800
Feb 22 12:44:17 gio12 kernel:  ffff881016e12878 000000000d83b523 ffff880036446800 ffff881053a24060
Feb 22 12:44:17 gio12 kernel: Call Trace:
Feb 22 12:44:17 gio12 kernel:  [&amp;lt;ffffffff8163bf19&amp;gt;] schedule+0x29/0x70
Feb 22 12:44:17 gio12 kernel:  [&amp;lt;ffffffffa018a085&amp;gt;] wait_transaction_locked+0x85/0xd0 [jbd2]
Feb 22 12:44:17 gio12 kernel:  [&amp;lt;ffffffff810a6ba0&amp;gt;] ? wake_up_atomic_t+0x30/0x30
Feb 22 12:44:17 gio12 kernel:  [&amp;lt;ffffffffa018a400&amp;gt;] start_this_handle+0x2b0/0x5d0 [jbd2]
Feb 22 12:44:17 gio12 kernel:  [&amp;lt;ffffffffa0e713d4&amp;gt;] ? osd_declare_xattr_set+0xe4/0x2e0 [osd_ldiskfs]
Feb 22 12:44:17 gio12 kernel:  [&amp;lt;ffffffff811c176a&amp;gt;] ? kmem_cache_alloc+0x1ba/0x1d0
Feb 22 12:44:17 gio12 kernel:  [&amp;lt;ffffffffa018a933&amp;gt;] jbd2__journal_start+0xf3/0x1e0 [jbd2]
Feb 22 12:44:17 gio12 kernel:  [&amp;lt;ffffffffa0e78534&amp;gt;] ? osd_trans_start+0x174/0x410 [osd_ldiskfs]
Feb 22 12:44:17 gio12 kernel:  [&amp;lt;ffffffffa0dd6829&amp;gt;] __ldiskfs_journal_start_sb+0x69/0xe0 [ldiskfs]
Feb 22 12:44:17 gio12 kernel:  [&amp;lt;ffffffffa0e78534&amp;gt;] osd_trans_start+0x174/0x410 [osd_ldiskfs]
Feb 22 12:44:17 gio12 kernel:  [&amp;lt;ffffffffa0c80d7b&amp;gt;] ofd_trans_start+0x6b/0xe0 [ofd]
Feb 22 12:44:17 gio12 kernel:  [&amp;lt;ffffffffa0c8428a&amp;gt;] ofd_object_punch+0x62a/0xc30 [ofd]
Feb 22 12:44:17 gio12 kernel:  [&amp;lt;ffffffffa0c715e6&amp;gt;] ofd_punch_hdl+0x466/0x720 [ofd]
Feb 22 12:44:17 gio12 kernel:  [&amp;lt;ffffffffa109bc9b&amp;gt;] tgt_request_handle+0x8fb/0x11f0 [ptlrpc]
Feb 22 12:44:17 gio12 kernel:  [&amp;lt;ffffffffa103ea3b&amp;gt;] ptlrpc_server_handle_request+0x21b/0xa90 [ptlrpc]
Feb 22 12:44:17 gio12 kernel:  [&amp;lt;ffffffffa0815cf8&amp;gt;] ? lc_watchdog_touch+0x68/0x180 [libcfs]
Feb 22 12:44:17 gio12 kernel:  [&amp;lt;ffffffffa103bb08&amp;gt;] ? ptlrpc_wait_event+0x98/0x330 [ptlrpc]
Feb 22 12:44:17 gio12 kernel:  [&amp;lt;ffffffff810af0e8&amp;gt;] ? __wake_up_common+0x58/0x90
Feb 22 12:44:17 gio12 kernel:  [&amp;lt;ffffffffa1042360&amp;gt;] ptlrpc_main+0xc00/0x1f60 [ptlrpc]
Feb 22 12:44:17 gio12 kernel:  [&amp;lt;ffffffffa1041760&amp;gt;] ? ptlrpc_register_service+0x1070/0x1070 [ptlrpc]
Feb 22 12:44:17 gio12 kernel:  [&amp;lt;ffffffff810a5baf&amp;gt;] kthread+0xcf/0xe0
Feb 22 12:44:17 gio12 kernel:  [&amp;lt;ffffffff810a5ae0&amp;gt;] ? kthread_create_on_node+0x140/0x140
Feb 22 12:44:17 gio12 kernel:  [&amp;lt;ffffffff81646e58&amp;gt;] ret_from_fork+0x58/0x90
Feb 22 12:44:17 gio12 kernel:  [&amp;lt;ffffffff810a5ae0&amp;gt;] ? kthread_create_on_node+0x140/0x140
Feb 22 12:44:39 gio12 kernel: LustreError: 137-5: dtemp-OST001c_UUID: not available for connect from 172.22.166.12@o2ib6 (no target). If you are running an HA pair check that the target is mounted on the other server.
Feb 22 12:44:39 gio12 kernel: LustreError: Skipped 1679 previous similar messages
...
Feb 22 12:45:42 gio12 kernel: LustreError: 137-5: dtemp-OST001c_UUID: not available for connect from 172.22.166.11@o2ib6 (no target). If you are running an HA pair check that the target is mounted on the other server.
Feb 22 12:46:17 gio12 kernel: INFO: task ll_ost_io01_002:14056 blocked for more than 120 seconds.
Feb 22 12:46:17 gio12 kernel: &quot;echo 0 &amp;gt; /proc/sys/kernel/hung_task_timeout_secs&quot; disables this message.
Feb 22 12:46:17 gio12 kernel: ll_ost_io01_002 D 0000000000000000     0 14056      2 0x00000080
Feb 22 12:46:17 gio12 kernel:  ffff881020b9b898 0000000000000046 ffff88102151b980 ffff881020b9bfd8
Feb 22 12:46:17 gio12 kernel:  ffff881020b9bfd8 ffff881020b9bfd8 ffff88102151b980 ffff88102151b980
Feb 22 12:46:17 gio12 kernel:  ffff8807a92bda90 fffffffeffffffff ffff8807a92bda98 0000000000000000
Feb 22 12:46:17 gio12 kernel: Call Trace:
Feb 22 12:46:17 gio12 kernel:  [&amp;lt;ffffffff8163bf19&amp;gt;] schedule+0x29/0x70
Feb 22 12:46:17 gio12 kernel:  [&amp;lt;ffffffff8163d8d5&amp;gt;] rwsem_down_read_failed+0xf5/0x170
Feb 22 12:46:17 gio12 kernel:  [&amp;lt;ffffffff81301e54&amp;gt;] call_rwsem_down_read_failed+0x14/0x30
Feb 22 12:46:17 gio12 kernel:  [&amp;lt;ffffffff8163b130&amp;gt;] ? down_read+0x20/0x30
Feb 22 12:46:17 gio12 kernel:  [&amp;lt;ffffffffa0def84b&amp;gt;] ldiskfs_xattr_block_set+0x62b/0xa80 [ldiskfs]
Feb 22 12:46:17 gio12 kernel:  [&amp;lt;ffffffffa0df09d4&amp;gt;] ldiskfs_expand_extra_isize_ea+0x404/0x810 [ldiskfs]
Feb 22 12:46:17 gio12 kernel:  [&amp;lt;ffffffffa0df6d9f&amp;gt;] ldiskfs_mark_inode_dirty+0x1af/0x210 [ldiskfs]
Feb 22 12:46:17 gio12 kernel:  [&amp;lt;ffffffffa0de0884&amp;gt;] ldiskfs_ext_truncate+0x24/0xe0 [ldiskfs]
Feb 22 12:46:17 gio12 kernel:  [&amp;lt;ffffffffa0df83b7&amp;gt;] ldiskfs_truncate+0x3b7/0x3f0 [ldiskfs]
Feb 22 12:46:17 gio12 kernel:  [&amp;lt;ffffffffa0e92e08&amp;gt;] osd_punch+0x138/0x5e0 [osd_ldiskfs]
Feb 22 12:46:17 gio12 kernel:  [&amp;lt;ffffffffa0c84346&amp;gt;] ofd_object_punch+0x6e6/0xc30 [ofd]
Feb 22 12:46:17 gio12 kernel:  [&amp;lt;ffffffffa0c715e6&amp;gt;] ofd_punch_hdl+0x466/0x720 [ofd]
Feb 22 12:46:17 gio12 kernel:  [&amp;lt;ffffffffa109bc9b&amp;gt;] tgt_request_handle+0x8fb/0x11f0 [ptlrpc]
Feb 22 12:46:17 gio12 kernel:  [&amp;lt;ffffffffa103ea3b&amp;gt;] ptlrpc_server_handle_request+0x21b/0xa90 [ptlrpc]
Feb 22 12:46:17 gio12 kernel:  [&amp;lt;ffffffffa0815cf8&amp;gt;] ? lc_watchdog_touch+0x68/0x180 [libcfs]
Feb 22 12:46:17 gio12 kernel:  [&amp;lt;ffffffffa103bb08&amp;gt;] ? ptlrpc_wait_event+0x98/0x330 [ptlrpc]
Feb 22 12:46:17 gio12 kernel:  [&amp;lt;ffffffff8163e05b&amp;gt;] ? _raw_spin_unlock_irqrestore+0x1b/0x40
Feb 22 12:46:17 gio12 kernel:  [&amp;lt;ffffffffa1042360&amp;gt;] ptlrpc_main+0xc00/0x1f60 [ptlrpc]
Feb 22 12:46:17 gio12 kernel:  [&amp;lt;ffffffffa1041760&amp;gt;] ? ptlrpc_register_service+0x1070/0x1070 [ptlrpc]
Feb 22 12:46:17 gio12 kernel:  [&amp;lt;ffffffff810a5baf&amp;gt;] kthread+0xcf/0xe0
Feb 22 12:46:17 gio12 kernel:  [&amp;lt;ffffffff810a5ae0&amp;gt;] ? kthread_create_on_node+0x140/0x140
Feb 22 12:46:17 gio12 kernel:  [&amp;lt;ffffffff81646e58&amp;gt;] ret_from_fork+0x58/0x90
Feb 22 12:46:17 gio12 kernel:  [&amp;lt;ffffffff810a5ae0&amp;gt;] ? kthread_create_on_node+0x140/0x140
Feb 22 12:46:17 gio12 kernel: INFO: task jbd2/dm-11-8:15759 blocked for more than 120 seconds.
Feb 22 12:46:17 gio12 kernel: &quot;echo 0 &amp;gt; /proc/sys/kernel/hung_task_timeout_secs&quot; disables this message.
Feb 22 12:46:17 gio12 kernel: jbd2/dm-11-8    D ffff880036446800     0 15759      2 0x00000080
Feb 22 12:46:17 gio12 kernel:  ffff880fdd21bc88 0000000000000046 ffff88104f747300 ffff880fdd21bfd8
Feb 22 12:46:17 gio12 kernel:  ffff880fdd21bfd8 ffff880fdd21bfd8 ffff88104f747300 ffff880fdd21bda0
Feb 22 12:46:17 gio12 kernel:  ffff881016e128c0 ffff88104f747300 ffff880fdd21bd88 ffff880036446800
Feb 22 12:46:17 gio12 kernel: Call Trace:
Feb 22 12:46:17 gio12 kernel:  [&amp;lt;ffffffff8163bf19&amp;gt;] schedule+0x29/0x70
Feb 22 12:46:17 gio12 kernel:  [&amp;lt;ffffffffa018d138&amp;gt;] jbd2_journal_commit_transaction+0x248/0x19e0 [jbd2]
Feb 22 12:46:17 gio12 kernel:  [&amp;lt;ffffffff810c15fc&amp;gt;] ? update_curr+0xcc/0x150
Feb 22 12:46:17 gio12 kernel:  [&amp;lt;ffffffff810c1ac6&amp;gt;] ? dequeue_entity+0x106/0x520
Feb 22 12:46:17 gio12 kernel:  [&amp;lt;ffffffff81013588&amp;gt;] ? __switch_to+0xf8/0x4b0
Feb 22 12:46:17 gio12 kernel:  [&amp;lt;ffffffff810a6ba0&amp;gt;] ? wake_up_atomic_t+0x30/0x30
Feb 22 12:46:17 gio12 kernel:  [&amp;lt;ffffffff8108d7be&amp;gt;] ? try_to_del_timer_sync+0x5e/0x90
Feb 22 12:46:17 gio12 kernel:  [&amp;lt;ffffffffa0192e99&amp;gt;] kjournald2+0xc9/0x260 [jbd2]
Feb 22 12:46:17 gio12 kernel:  [&amp;lt;ffffffff810a6ba0&amp;gt;] ? wake_up_atomic_t+0x30/0x30
Feb 22 12:46:17 gio12 kernel:  [&amp;lt;ffffffffa0192dd0&amp;gt;] ? commit_timeout+0x10/0x10 [jbd2]
Feb 22 12:46:17 gio12 kernel:  [&amp;lt;ffffffff810a5baf&amp;gt;] kthread+0xcf/0xe0
Feb 22 12:46:17 gio12 kernel:  [&amp;lt;ffffffff810a5ae0&amp;gt;] ? kthread_create_on_node+0x140/0x140
Feb 22 12:46:17 gio12 kernel:  [&amp;lt;ffffffff81646e58&amp;gt;] ret_from_fork+0x58/0x90
Feb 22 12:46:17 gio12 kernel:  [&amp;lt;ffffffff810a5ae0&amp;gt;] ? kthread_create_on_node+0x140/0x140
Feb 22 12:46:17 gio12 kernel: INFO: task kworker/u33:2:28976 blocked for more than 120 seconds.
Feb 22 12:46:17 gio12 kernel: &quot;echo 0 &amp;gt; /proc/sys/kernel/hung_task_timeout_secs&quot; disables this message.
Feb 22 12:46:17 gio12 kernel: kworker/u33:2   D ffff880850ea8030     0 28976      2 0x00000080
Feb 22 12:46:17 gio12 kernel: Workqueue: writeback bdi_writeback_workfn (flush-253:11)
Feb 22 12:46:17 gio12 kernel:  ffff880462d2f8e8 0000000000000046 ffff88084f0a8b80 ffff880462d2ffd8
Feb 22 12:46:17 gio12 kernel:  ffff880462d2ffd8 ffff880462d2ffd8 ffff88084f0a8b80 ffff881016e12800
Feb 22 12:46:17 gio12 kernel:  ffff881016e12878 000000000d83b523 ffff880036446800 ffff880850ea8030
Feb 22 12:46:17 gio12 kernel: Call Trace:
Feb 22 12:46:17 gio12 kernel:  [&amp;lt;ffffffff8163bf19&amp;gt;] schedule+0x29/0x70
Feb 22 12:46:17 gio12 kernel:  [&amp;lt;ffffffffa018a085&amp;gt;] wait_transaction_locked+0x85/0xd0 [jbd2]
Feb 22 12:46:17 gio12 kernel:  [&amp;lt;ffffffff810a6ba0&amp;gt;] ? wake_up_atomic_t+0x30/0x30
Feb 22 12:46:17 gio12 kernel:  [&amp;lt;ffffffffa018a400&amp;gt;] start_this_handle+0x2b0/0x5d0 [jbd2]
Feb 22 12:46:17 gio12 kernel:  [&amp;lt;ffffffff811c176a&amp;gt;] ? kmem_cache_alloc+0x1ba/0x1d0
Feb 22 12:46:17 gio12 kernel:  [&amp;lt;ffffffffa018a933&amp;gt;] jbd2__journal_start+0xf3/0x1e0 [jbd2]
Feb 22 12:46:17 gio12 kernel:  [&amp;lt;ffffffffa0df7254&amp;gt;] ? ldiskfs_writepages+0x454/0xd80 [ldiskfs]
Feb 22 12:46:17 gio12 kernel:  [&amp;lt;ffffffffa0dd6829&amp;gt;] __ldiskfs_journal_start_sb+0x69/0xe0 [ldiskfs]
Feb 22 12:46:17 gio12 kernel:  [&amp;lt;ffffffffa0df7254&amp;gt;] ldiskfs_writepages+0x454/0xd80 [ldiskfs]
Feb 22 12:46:17 gio12 kernel:  [&amp;lt;ffffffff81174d08&amp;gt;] ? generic_writepages+0x58/0x80
Feb 22 12:46:17 gio12 kernel:  [&amp;lt;ffffffff81175dae&amp;gt;] do_writepages+0x1e/0x40
Feb 22 12:46:17 gio12 kernel:  [&amp;lt;ffffffff81208c90&amp;gt;] __writeback_single_inode+0x40/0x220
Feb 22 12:46:17 gio12 kernel:  [&amp;lt;ffffffff812096fe&amp;gt;] writeback_sb_inodes+0x25e/0x420
Feb 22 12:46:17 gio12 kernel:  [&amp;lt;ffffffff8120995f&amp;gt;] __writeback_inodes_wb+0x9f/0xd0
Feb 22 12:46:17 gio12 kernel:  [&amp;lt;ffffffff8120a1a3&amp;gt;] wb_writeback+0x263/0x2f0
Feb 22 12:46:17 gio12 kernel:  [&amp;lt;ffffffff811f8fac&amp;gt;] ? get_nr_inodes+0x4c/0x70
Feb 22 12:46:17 gio12 kernel:  [&amp;lt;ffffffff8120c42b&amp;gt;] bdi_writeback_workfn+0x2cb/0x460
Feb 22 12:46:17 gio12 kernel:  [&amp;lt;ffffffff8109d6bb&amp;gt;] process_one_work+0x17b/0x470
Feb 22 12:46:17 gio12 kernel:  [&amp;lt;ffffffff8109e48b&amp;gt;] worker_thread+0x11b/0x400
Feb 22 12:46:17 gio12 kernel:  [&amp;lt;ffffffff8109e370&amp;gt;] ? rescuer_thread+0x400/0x400
Feb 22 12:46:17 gio12 kernel:  [&amp;lt;ffffffff810a5baf&amp;gt;] kthread+0xcf/0xe0
Feb 22 12:46:17 gio12 kernel:  [&amp;lt;ffffffff810a5ae0&amp;gt;] ? kthread_create_on_node+0x140/0x140
Feb 22 12:46:17 gio12 kernel:  [&amp;lt;ffffffff81646e58&amp;gt;] ret_from_fork+0x58/0x90
Feb 22 12:46:17 gio12 kernel:  [&amp;lt;ffffffff810a5ae0&amp;gt;] ? kthread_create_on_node+0x140/0x140
Feb 22 12:46:17 gio12 kernel: INFO: task ll_ost_io03_004:32100 blocked for more than 120 seconds.
Feb 22 12:46:17 gio12 kernel: &quot;echo 0 &amp;gt; /proc/sys/kernel/hung_task_timeout_secs&quot; disables this message.
Feb 22 12:46:17 gio12 kernel: ll_ost_io03_004 D ffff881053a24060     0 32100      2 0x00000080
Feb 22 12:46:17 gio12 kernel:  ffff88031eb6f9f0 0000000000000046 ffff8808a62c2280 ffff88031eb6ffd8
Feb 22 12:46:17 gio12 kernel:  ffff88031eb6ffd8 ffff88031eb6ffd8 ffff8808a62c2280 ffff881016e12800
Feb 22 12:46:17 gio12 kernel:  ffff881016e12878 000000000d83b523 ffff880036446800 ffff881053a24060
Feb 22 12:46:17 gio12 kernel: Call Trace:
Feb 22 12:46:17 gio12 kernel:  [&amp;lt;ffffffff8163bf19&amp;gt;] schedule+0x29/0x70
Feb 22 12:46:17 gio12 kernel:  [&amp;lt;ffffffffa018a085&amp;gt;] wait_transaction_locked+0x85/0xd0 [jbd2]
Feb 22 12:46:18 gio12 kernel:  [&amp;lt;ffffffff810a6ba0&amp;gt;] ? wake_up_atomic_t+0x30/0x30
Feb 22 12:46:18 gio12 kernel:  [&amp;lt;ffffffffa018a400&amp;gt;] start_this_handle+0x2b0/0x5d0 [jbd2]
Feb 22 12:46:18 gio12 kernel:  [&amp;lt;ffffffffa0e713d4&amp;gt;] ? osd_declare_xattr_set+0xe4/0x2e0 [osd_ldiskfs]
Feb 22 12:46:18 gio12 kernel:  [&amp;lt;ffffffff811c176a&amp;gt;] ? kmem_cache_alloc+0x1ba/0x1d0
Feb 22 12:46:18 gio12 kernel:  [&amp;lt;ffffffffa018a933&amp;gt;] jbd2__journal_start+0xf3/0x1e0 [jbd2]
Feb 22 12:46:18 gio12 kernel:  [&amp;lt;ffffffffa0e78534&amp;gt;] ? osd_trans_start+0x174/0x410 [osd_ldiskfs]
Feb 22 12:46:18 gio12 kernel:  [&amp;lt;ffffffffa0dd6829&amp;gt;] __ldiskfs_journal_start_sb+0x69/0xe0 [ldiskfs]
Feb 22 12:46:18 gio12 kernel:  [&amp;lt;ffffffffa0e78534&amp;gt;] osd_trans_start+0x174/0x410 [osd_ldiskfs]
Feb 22 12:46:18 gio12 kernel:  [&amp;lt;ffffffffa0c80d7b&amp;gt;] ofd_trans_start+0x6b/0xe0 [ofd]
Feb 22 12:46:18 gio12 kernel:  [&amp;lt;ffffffffa0c8428a&amp;gt;] ofd_object_punch+0x62a/0xc30 [ofd]
Feb 22 12:46:18 gio12 kernel:  [&amp;lt;ffffffffa0c715e6&amp;gt;] ofd_punch_hdl+0x466/0x720 [ofd]
Feb 22 12:46:18 gio12 kernel:  [&amp;lt;ffffffffa109bc9b&amp;gt;] tgt_request_handle+0x8fb/0x11f0 [ptlrpc]
Feb 22 12:46:18 gio12 kernel:  [&amp;lt;ffffffffa103ea3b&amp;gt;] ptlrpc_server_handle_request+0x21b/0xa90 [ptlrpc]
Feb 22 12:46:18 gio12 kernel:  [&amp;lt;ffffffffa0815cf8&amp;gt;] ? lc_watchdog_touch+0x68/0x180 [libcfs]
Feb 22 12:46:18 gio12 kernel:  [&amp;lt;ffffffffa103bb08&amp;gt;] ? ptlrpc_wait_event+0x98/0x330 [ptlrpc]
Feb 22 12:46:18 gio12 kernel:  [&amp;lt;ffffffff810af0e8&amp;gt;] ? __wake_up_common+0x58/0x90
Feb 22 12:46:18 gio12 kernel:  [&amp;lt;ffffffffa1042360&amp;gt;] ptlrpc_main+0xc00/0x1f60 [ptlrpc]
Feb 22 12:46:18 gio12 kernel:  [&amp;lt;ffffffffa1041760&amp;gt;] ? ptlrpc_register_service+0x1070/0x1070 [ptlrpc]
Feb 22 12:46:18 gio12 kernel:  [&amp;lt;ffffffff810a5baf&amp;gt;] kthread+0xcf/0xe0
Feb 22 12:46:18 gio12 kernel:  [&amp;lt;ffffffff810a5ae0&amp;gt;] ? kthread_create_on_node+0x140/0x140
Feb 22 12:46:18 gio12 kernel:  [&amp;lt;ffffffff81646e58&amp;gt;] ret_from_fork+0x58/0x90
Feb 22 12:46:18 gio12 kernel:  [&amp;lt;ffffffff810a5ae0&amp;gt;] ? kthread_create_on_node+0x140/0x140
Feb 22 12:47:09 gio12 kernel: LustreError: 137-5: dtemp-OST001c_UUID: not available for connect from 172.22.166.12@o2ib6 (no target). If you are running an HA pair check that the target is mounted on the other server.
Feb 22 12:48:18 gio12 kernel: INFO: task ll_ost_io01_002:14056 blocked for more than 120 seconds.
Feb 22 12:48:18 gio12 kernel: &quot;echo 0 &amp;gt; /proc/sys/kernel/hung_task_timeout_secs&quot; disables this message.
Feb 22 12:48:18 gio12 kernel: ll_ost_io01_002 D 0000000000000000     0 14056      2 0x00000080
Feb 22 12:48:18 gio12 kernel:  ffff881020b9b898 0000000000000046 ffff88102151b980 ffff881020b9bfd8
Feb 22 12:48:18 gio12 kernel:  ffff881020b9bfd8 ffff881020b9bfd8 ffff88102151b980 ffff88102151b980
Feb 22 12:48:18 gio12 kernel:  ffff8807a92bda90 fffffffeffffffff ffff8807a92bda98 0000000000000000
Feb 22 12:48:18 gio12 kernel: Call Trace:
Feb 22 12:48:18 gio12 kernel:  [&amp;lt;ffffffff8163bf19&amp;gt;] schedule+0x29/0x70
Feb 22 12:48:18 gio12 kernel:  [&amp;lt;ffffffff8163d8d5&amp;gt;] rwsem_down_read_failed+0xf5/0x170
Feb 22 12:48:18 gio12 kernel:  [&amp;lt;ffffffff81301e54&amp;gt;] call_rwsem_down_read_failed+0x14/0x30
Feb 22 12:48:18 gio12 kernel:  [&amp;lt;ffffffff8163b130&amp;gt;] ? down_read+0x20/0x30
Feb 22 12:48:18 gio12 kernel:  [&amp;lt;ffffffffa0def84b&amp;gt;] ldiskfs_xattr_block_set+0x62b/0xa80 [ldiskfs]
Feb 22 12:48:18 gio12 kernel:  [&amp;lt;ffffffffa0df09d4&amp;gt;] ldiskfs_expand_extra_isize_ea+0x404/0x810 [ldiskfs]
Feb 22 12:48:18 gio12 kernel:  [&amp;lt;ffffffffa0df6d9f&amp;gt;] ldiskfs_mark_inode_dirty+0x1af/0x210 [ldiskfs]
Feb 22 12:48:18 gio12 kernel:  [&amp;lt;ffffffffa0de0884&amp;gt;] ldiskfs_ext_truncate+0x24/0xe0 [ldiskfs]
Feb 22 12:48:18 gio12 kernel:  [&amp;lt;ffffffffa0df83b7&amp;gt;] ldiskfs_truncate+0x3b7/0x3f0 [ldiskfs]
Feb 22 12:48:18 gio12 kernel:  [&amp;lt;ffffffffa0e92e08&amp;gt;] osd_punch+0x138/0x5e0 [osd_ldiskfs]
Feb 22 12:48:18 gio12 kernel:  [&amp;lt;ffffffffa0c84346&amp;gt;] ofd_object_punch+0x6e6/0xc30 [ofd]
Feb 22 12:48:18 gio12 kernel:  [&amp;lt;ffffffffa0c715e6&amp;gt;] ofd_punch_hdl+0x466/0x720 [ofd]
Feb 22 12:48:18 gio12 kernel:  [&amp;lt;ffffffffa109bc9b&amp;gt;] tgt_request_handle+0x8fb/0x11f0 [ptlrpc]
Feb 22 12:48:18 gio12 kernel:  [&amp;lt;ffffffffa103ea3b&amp;gt;] ptlrpc_server_handle_request+0x21b/0xa90 [ptlrpc]
Feb 22 12:48:18 gio12 kernel:  [&amp;lt;ffffffffa0815cf8&amp;gt;] ? lc_watchdog_touch+0x68/0x180 [libcfs]
Feb 22 12:48:18 gio12 kernel:  [&amp;lt;ffffffffa103bb08&amp;gt;] ? ptlrpc_wait_event+0x98/0x330 [ptlrpc]
Feb 22 12:48:18 gio12 kernel:  [&amp;lt;ffffffff8163e05b&amp;gt;] ? _raw_spin_unlock_irqrestore+0x1b/0x40
Feb 22 12:48:18 gio12 kernel:  [&amp;lt;ffffffffa1042360&amp;gt;] ptlrpc_main+0xc00/0x1f60 [ptlrpc]
Feb 22 12:48:18 gio12 kernel:  [&amp;lt;ffffffffa1041760&amp;gt;] ? ptlrpc_register_service+0x1070/0x1070 [ptlrpc]
Feb 22 12:48:18 gio12 kernel:  [&amp;lt;ffffffff810a5baf&amp;gt;] kthread+0xcf/0xe0
Feb 22 12:48:18 gio12 kernel:  [&amp;lt;ffffffff810a5ae0&amp;gt;] ? kthread_create_on_node+0x140/0x140
Feb 22 12:48:18 gio12 kernel:  [&amp;lt;ffffffff81646e58&amp;gt;] ret_from_fork+0x58/0x90
Feb 22 12:48:18 gio12 kernel:  [&amp;lt;ffffffff810a5ae0&amp;gt;] ? kthread_create_on_node+0x140/0x140
Feb 22 12:48:18 gio12 kernel: INFO: task jbd2/dm-11-8:15759 blocked for more than 120 seconds.
Feb 22 12:48:18 gio12 kernel: &quot;echo 0 &amp;gt; /proc/sys/kernel/hung_task_timeout_secs&quot; disables this message.
Feb 22 12:48:18 gio12 kernel: jbd2/dm-11-8    D ffff880036446800     0 15759      2 0x00000080
Feb 22 12:48:18 gio12 kernel:  ffff880fdd21bc88 0000000000000046 ffff88104f747300 ffff880fdd21bfd8
Feb 22 12:48:18 gio12 kernel:  ffff880fdd21bfd8 ffff880fdd21bfd8 ffff88104f747300 ffff880fdd21bda0
Feb 22 12:48:18 gio12 kernel:  ffff881016e128c0 ffff88104f747300 ffff880fdd21bd88 ffff880036446800
Feb 22 12:48:18 gio12 kernel: Call Trace:
Feb 22 12:48:18 gio12 kernel:  [&amp;lt;ffffffff8163bf19&amp;gt;] schedule+0x29/0x70
Feb 22 12:48:18 gio12 kernel:  [&amp;lt;ffffffffa018d138&amp;gt;] jbd2_journal_commit_transaction+0x248/0x19e0 [jbd2]
Feb 22 12:48:18 gio12 kernel:  [&amp;lt;ffffffff810c15fc&amp;gt;] ? update_curr+0xcc/0x150
Feb 22 12:48:18 gio12 kernel:  [&amp;lt;ffffffff810c1ac6&amp;gt;] ? dequeue_entity+0x106/0x520
Feb 22 12:48:18 gio12 kernel:  [&amp;lt;ffffffff81013588&amp;gt;] ? __switch_to+0xf8/0x4b0
Feb 22 12:48:18 gio12 kernel:  [&amp;lt;ffffffff810a6ba0&amp;gt;] ? wake_up_atomic_t+0x30/0x30
Feb 22 12:48:18 gio12 kernel:  [&amp;lt;ffffffff8108d7be&amp;gt;] ? try_to_del_timer_sync+0x5e/0x90
Feb 22 12:48:18 gio12 kernel:  [&amp;lt;ffffffffa0192e99&amp;gt;] kjournald2+0xc9/0x260 [jbd2]
Feb 22 12:48:18 gio12 kernel:  [&amp;lt;ffffffff810a6ba0&amp;gt;] ? wake_up_atomic_t+0x30/0x30
Feb 22 12:48:18 gio12 kernel:  [&amp;lt;ffffffffa0192dd0&amp;gt;] ? commit_timeout+0x10/0x10 [jbd2]
Feb 22 12:48:18 gio12 kernel:  [&amp;lt;ffffffff810a5baf&amp;gt;] kthread+0xcf/0xe0
Feb 22 12:48:18 gio12 kernel:  [&amp;lt;ffffffff810a5ae0&amp;gt;] ? kthread_create_on_node+0x140/0x140
Feb 22 12:48:18 gio12 kernel:  [&amp;lt;ffffffff81646e58&amp;gt;] ret_from_fork+0x58/0x90
Feb 22 12:48:18 gio12 kernel:  [&amp;lt;ffffffff810a5ae0&amp;gt;] ? kthread_create_on_node+0x140/0x140
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The MDS also logged watchdog stack traces for several MDT tasks. The MDT watchdog traces were triggered only once, whereas the OSS produced them repeatedly. An example MDT stack trace follows:&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;Feb 22 03:52:03 gio0 kernel: INFO: task mdt01_003:9154 blocked for more than 120 seconds.
Feb 22 03:52:03 gio0 kernel: &quot;echo 0 &amp;gt; /proc/sys/kernel/hung_task_timeout_secs&quot; disables this message.
Feb 22 03:52:03 gio0 kernel: mdt01_003       D ffff880838cab1d8     0  9154      2 0x00000080
Feb 22 03:52:03 gio0 kernel:  ffff8810371ab4c8 0000000000000046 ffff8810507eb980 ffff8810371abfd8
Feb 22 03:52:03 gio0 kernel:  ffff8810371abfd8 ffff8810371abfd8 ffff8810507eb980 ffff8810507eb980
Feb 22 03:52:03 gio0 kernel:  ffff880838cab1c8 ffff880838cab1d0 ffffffff00000000 ffff880838cab1d8
Feb 22 03:52:03 gio0 kernel: Call Trace:
Feb 22 03:52:03 gio0 kernel:  [&amp;lt;ffffffff8163bf19&amp;gt;] schedule+0x29/0x70
Feb 22 03:52:03 gio0 kernel:  [&amp;lt;ffffffff8163d6d5&amp;gt;] rwsem_down_write_failed+0x115/0x220
Feb 22 03:52:03 gio0 kernel:  [&amp;lt;ffffffff812134dc&amp;gt;] ? __find_get_block+0xbc/0x120
Feb 22 03:52:03 gio0 kernel:  [&amp;lt;ffffffff81301e83&amp;gt;] call_rwsem_down_write_failed+0x13/0x20
Feb 22 03:52:03 gio0 kernel:  [&amp;lt;ffffffff8163b16d&amp;gt;] ? down_write+0x2d/0x30
Feb 22 03:52:03 gio0 kernel:  [&amp;lt;ffffffffa142bba7&amp;gt;] lod_alloc_qos.constprop.15+0x187/0x1400 [lod]
Feb 22 03:52:03 gio0 kernel:  [&amp;lt;ffffffff8121292d&amp;gt;] ? __brelse+0x3d/0x50
Feb 22 03:52:03 gio0 kernel:  [&amp;lt;ffffffffa10ed65f&amp;gt;] ? ldiskfs_xattr_ibody_get+0xef/0x1a0 [ldiskfs]
Feb 22 03:52:03 gio0 kernel:  [&amp;lt;ffffffffa10ec6af&amp;gt;] ? ldiskfs_xattr_find_entry+0x9f/0x130 [ldiskfs]
Feb 22 03:52:03 gio0 kernel:  [&amp;lt;ffffffffa142e7fd&amp;gt;] lod_qos_prep_create+0x10cd/0x1fbc [lod]
Feb 22 03:52:03 gio0 kernel:  [&amp;lt;ffffffffa11aaac9&amp;gt;] ? osd_declare_qid+0x279/0x4b0 [osd_ldiskfs]
Feb 22 03:52:03 gio0 kernel:  [&amp;lt;ffffffffa11aaeae&amp;gt;] ? osd_declare_inode_qid+0x1ae/0x290 [osd_ldiskfs]
Feb 22 03:52:03 gio0 kernel:  [&amp;lt;ffffffffa1427fdd&amp;gt;] lod_declare_striped_object+0x1fd/0x810 [lod]
Feb 22 03:52:03 gio0 kernel:  [&amp;lt;ffffffffa1171e23&amp;gt;] ? osd_declare_object_create+0x113/0x2b0 [osd_ldiskfs]
Feb 22 03:52:03 gio0 kernel:  [&amp;lt;ffffffffa1429661&amp;gt;] lod_declare_object_create+0x231/0x4b0 [lod]
Feb 22 03:52:03 gio0 kernel:  [&amp;lt;ffffffffa14836af&amp;gt;] mdd_declare_object_create_internal+0xdf/0x2f0 [mdd]
Feb 22 03:52:03 gio0 kernel:  [&amp;lt;ffffffffa1478038&amp;gt;] mdd_declare_create+0x48/0xef0 [mdd]
Feb 22 03:52:03 gio0 kernel:  [&amp;lt;ffffffffa1479669&amp;gt;] mdd_create+0x789/0x12a0 [mdd]
Feb 22 03:52:03 gio0 kernel:  [&amp;lt;ffffffffa134ed52&amp;gt;] mdt_reint_open+0x1f92/0x2e00 [mdt]
Feb 22 03:52:03 gio0 kernel:  [&amp;lt;ffffffffa08aa1a9&amp;gt;] ? upcall_cache_get_entry+0x3e9/0x8e0 [libcfs]
Feb 22 03:52:03 gio0 kernel:  [&amp;lt;ffffffff812fc212&amp;gt;] ? strlcpy+0x42/0x60
Feb 22 03:52:03 gio0 kernel:  [&amp;lt;ffffffffa1341e30&amp;gt;] mdt_reint_rec+0x80/0x210 [mdt]
Feb 22 03:52:03 gio0 kernel:  [&amp;lt;ffffffffa1322921&amp;gt;] mdt_reint_internal+0x5e1/0xb30 [mdt]
Feb 22 03:52:03 gio0 kernel:  [&amp;lt;ffffffffa1322fd2&amp;gt;] mdt_intent_reint+0x162/0x420 [mdt]
Feb 22 03:52:03 gio0 kernel:  [&amp;lt;ffffffffa0c95797&amp;gt;] ? lustre_msg_buf+0x17/0x60 [ptlrpc]
Feb 22 03:52:03 gio0 kernel:  [&amp;lt;ffffffffa13268b5&amp;gt;] mdt_intent_opc+0x215/0xa30 [mdt]
Feb 22 03:52:03 gio0 kernel:  [&amp;lt;ffffffffa0c99e30&amp;gt;] ? lustre_swab_ldlm_policy_data+0x30/0x30 [ptlrpc]
Feb 22 03:52:03 gio0 kernel:  [&amp;lt;ffffffffa132e478&amp;gt;] mdt_intent_policy+0x138/0x320 [mdt]
Feb 22 03:52:03 gio0 kernel:  [&amp;lt;ffffffffa0c491d7&amp;gt;] ldlm_lock_enqueue+0x357/0x9c0 [ptlrpc]
Feb 22 03:52:03 gio0 kernel:  [&amp;lt;ffffffffa0c6f7b2&amp;gt;] ldlm_handle_enqueue0+0x4f2/0x16f0 [ptlrpc]
Feb 22 03:52:03 gio0 kernel:  [&amp;lt;ffffffffa0c99eb0&amp;gt;] ? lustre_swab_ldlm_lock_desc+0x30/0x30 [ptlrpc]
Feb 22 03:52:03 gio0 kernel:  [&amp;lt;ffffffffa0cfcc32&amp;gt;] tgt_enqueue+0x62/0x210 [ptlrpc]
Feb 22 03:52:03 gio0 kernel:  [&amp;lt;ffffffffa0d01c9b&amp;gt;] tgt_request_handle+0x8fb/0x11f0 [ptlrpc]
Feb 22 03:52:03 gio0 kernel:  [&amp;lt;ffffffffa0ca4a3b&amp;gt;] ptlrpc_server_handle_request+0x21b/0xa90 [ptlrpc]
Feb 22 03:52:03 gio0 kernel:  [&amp;lt;ffffffffa089ecf8&amp;gt;] ? lc_watchdog_touch+0x68/0x180 [libcfs]
Feb 22 03:52:03 gio0 kernel:  [&amp;lt;ffffffffa0ca1b08&amp;gt;] ? ptlrpc_wait_event+0x98/0x330 [ptlrpc]
Feb 22 03:52:03 gio0 kernel:  [&amp;lt;ffffffff810af0e8&amp;gt;] ? __wake_up_common+0x58/0x90
Feb 22 03:52:03 gio0 kernel:  [&amp;lt;ffffffffa0ca8360&amp;gt;] ptlrpc_main+0xc00/0x1f60 [ptlrpc]
Feb 22 03:52:03 gio0 kernel:  [&amp;lt;ffffffffa0ca7760&amp;gt;] ? ptlrpc_register_service+0x1070/0x1070 [ptlrpc]
Feb 22 03:52:03 gio0 kernel:  [&amp;lt;ffffffff810a5baf&amp;gt;] kthread+0xcf/0xe0
Feb 22 03:52:03 gio0 kernel:  [&amp;lt;ffffffff810a5ae0&amp;gt;] ? kthread_create_on_node+0x140/0x140
Feb 22 03:52:03 gio0 kernel:  [&amp;lt;ffffffff81646e58&amp;gt;] ret_from_fork+0x58/0x90
Feb 22 03:52:03 gio0 kernel:  [&amp;lt;ffffffff810a5ae0&amp;gt;] ? kthread_create_on_node+0x140/0x140
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10010">
                    <name>Duplicate</name>
                                                                <inwardlinks description="is duplicated by">
                                                        </inwardlinks>
                                    </issuelinktype>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                            <outwardlinks description="is related to ">
                                                        </outwardlinks>
                                                                <inwardlinks description="is related to">
                                        <issuelink>
            <issuekey id="45935">LU-9469</issuekey>
        </issuelink>
                            </inwardlinks>
                                    </issuelinktype>
                    </issuelinks>
                <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzz4pr:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>