<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 02:50:48 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary, append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
<language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-12234] sanity-benchmark test iozone hangs in txg_sync</title>
                <link>https://jira.whamcloud.com/browse/LU-12234</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;sanity-benchmark test_iozone hangs in txg_sync for ZFS/DNE testing.&lt;/p&gt;

&lt;p&gt;Looking at a recent failure, logs at &lt;a href=&quot;https://testing.whamcloud.com/test_sets/04682612-66aa-11e9-a6f2-52540065bddc&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.whamcloud.com/test_sets/04682612-66aa-11e9-a6f2-52540065bddc&lt;/a&gt; , the last thing seen in the suite_log is&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;running as uid/gid/euid/egid 500/500/500/500, groups:
 [iozone] [-i] [0] [-i] [1] [-i] [2] [-e] [-+d] [-r] [512] [-s] [1719368] [-t] [2] [-F] [/mnt/lustre/d0.iozone/iozone.1] [/mnt/lustre/d0.iozone/iozone.2]
	Iozone: Performance Test of File I/O
	        Version $Revision: 3.373 $
		Compiled for 64 bit mode.
		Build: linux-AMD64 

	Contributors:William Norcott, Don Capps, Isom Crawford, Kirby Collins
	             Al Slater, Scott Rhine, Mike Wisner, Ken Goss
	             Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
	             Randy Dunlap, Mark Montague, Dan Million, Gavin Brebner,
	             Jean-Marc Zucconi, Jeff Blomberg, Benny Halevy, Dave Boone,
	             Erik Habbinga, Kris Strecker, Walter Wong, Joshua Root,
	             Fabrice Bacchella, Zhenghua Xue, Qin Li, Darren Sawyer.

	Run began: Tue Apr 23 10:44:20 2019

	Include fsync in write timing
	&amp;gt;&amp;gt;&amp;gt; I/O Diagnostic mode enabled. &amp;lt;&amp;lt;&amp;lt;
	Performance measurements are invalid in this mode.
	Record Size 512 KB
	File size set to 1719368 KB
	Command line used: iozone -i 0 -i 1 -i 2 -e -+d -r 512 -s 1719368 -t 2 -F /mnt/lustre/d0.iozone/iozone.1 /mnt/lustre/d0.iozone/iozone.2
	Output is in Kbytes/sec
	Time Resolution = 0.000001 seconds.
	Processor cache size set to 1024 Kbytes.
	Processor cache line size set to 32 bytes.
	File stride size set to 17 * record size.
	Throughput test with 2 processes
	Each process writes a 1719368 Kbyte file in 512 Kbyte records

	Children see throughput for  2 initial writers 	=    5465.87 KB/sec
	Parent sees throughput for  2 initial writers 	=    5007.29 KB/sec
	Min throughput per process 			=    2523.98 KB/sec 
	Max throughput per process 			=    2941.89 KB/sec
	Avg throughput per process 			=    2732.94 KB/sec
	Min xfer 					= 1475072.00 KB

	Children see throughput for  2 rewriters 	=    5791.13 KB/sec
	Parent sees throughput for  2 rewriters 	=    5787.53 KB/sec
	Min throughput per process 			=    2895.57 KB/sec 
	Max throughput per process 			=    2895.57 KB/sec
	Avg throughput per process 			=    2895.57 KB/sec
	Min xfer 					= 1719296.00 KB
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt; The OSS console has the following call trace&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[38923.077282] Lustre: lustre-OST0001: Connection restored to 2011b398-4625-e110-f053-c52f3747dc69 (at 10.9.5.215@tcp)
[38923.077284] Lustre: lustre-OST0000: Connection restored to 2011b398-4625-e110-f053-c52f3747dc69 (at 10.9.5.215@tcp)
[38931.596843] Lustre: lustre-OST0002: Connection restored to 2011b398-4625-e110-f053-c52f3747dc69 (at 10.9.5.215@tcp)
[39020.749857] LNet: Service thread pid 7985 was inactive for 64.24s. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes:
[39020.752951] Pid: 7985, comm: ll_ost_io00_048 3.10.0-957.10.1.el7_lustre.x86_64 #1 SMP Mon Apr 22 22:25:47 UTC 2019
[39020.754687] Call Trace:
[39020.755302]  [&amp;lt;ffffffffc03c72d5&amp;gt;] cv_wait_common+0x125/0x150 [spl]
[39020.756438]  [&amp;lt;ffffffffc03c7315&amp;gt;] __cv_wait+0x15/0x20 [spl]
[39020.757446]  [&amp;lt;ffffffffc053e4d3&amp;gt;] txg_wait_open+0xc3/0x110 [zfs]
[39020.758847]  [&amp;lt;ffffffffc04f3dca&amp;gt;] dmu_tx_wait+0x3aa/0x3c0 [zfs]
[39020.759930]  [&amp;lt;ffffffffc04f3e72&amp;gt;] dmu_tx_assign+0x92/0x490 [zfs]
[39020.761024]  [&amp;lt;ffffffffc1184009&amp;gt;] osd_trans_start+0x199/0x440 [osd_zfs]
[39020.762239]  [&amp;lt;ffffffffc12c1c85&amp;gt;] ofd_trans_start+0x75/0xf0 [ofd]
[39020.763368]  [&amp;lt;ffffffffc12c8881&amp;gt;] ofd_commitrw_write+0xa31/0x1d40 [ofd]
[39020.764542]  [&amp;lt;ffffffffc12ccc6c&amp;gt;] ofd_commitrw+0x48c/0x9e0 [ofd]
[39020.765635]  [&amp;lt;ffffffffc0fb747c&amp;gt;] tgt_brw_write+0x10cc/0x1cf0 [ptlrpc]
[39020.767164]  [&amp;lt;ffffffffc0fb31da&amp;gt;] tgt_request_handle+0xaea/0x1580 [ptlrpc]
[39020.768414]  [&amp;lt;ffffffffc0f5880b&amp;gt;] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc]
[39020.769802]  [&amp;lt;ffffffffc0f5c13c&amp;gt;] ptlrpc_main+0xafc/0x1fc0 [ptlrpc]
[39020.771023]  [&amp;lt;ffffffff97cc1c71&amp;gt;] kthread+0xd1/0xe0
[39020.772060]  [&amp;lt;ffffffff98375c37&amp;gt;] ret_from_fork_nospec_end+0x0/0x39
[39020.773194]  [&amp;lt;ffffffffffffffff&amp;gt;] 0xffffffffffffffff
[39020.774176] LustreError: dumping log to /tmp/lustre-log.1556016359.7985
[39025.011155] LNet: Service thread pid 7985 completed after 68.51s. This indicates the system was overloaded (too many service threads, or there were not enough hardware resources).
[39501.389765] LNet: Service thread pid 16546 was inactive for 40.06s. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes:
[39501.393077] Pid: 16546, comm: ll_ost00_009 3.10.0-957.10.1.el7_lustre.x86_64 #1 SMP Mon Apr 22 22:25:47 UTC 2019
[39501.394929] Call Trace:
[39501.395582]  [&amp;lt;ffffffffc03c72d5&amp;gt;] cv_wait_common+0x125/0x150 [spl]
[39501.396880]  [&amp;lt;ffffffffc03c7315&amp;gt;] __cv_wait+0x15/0x20 [spl]
[39501.397884]  [&amp;lt;ffffffffc053e2bf&amp;gt;] txg_wait_synced+0xef/0x140 [zfs]
[39501.399302]  [&amp;lt;ffffffffc118b69e&amp;gt;] osd_object_sync+0x16e/0x180 [osd_zfs]
[39501.400686]  [&amp;lt;ffffffffc0fad8a7&amp;gt;] tgt_sync+0xb7/0x270 [ptlrpc]
[39501.402075]  [&amp;lt;ffffffffc12af731&amp;gt;] ofd_sync_hdl+0x111/0x530 [ofd]
[39501.403303]  [&amp;lt;ffffffffc0fb31da&amp;gt;] tgt_request_handle+0xaea/0x1580 [ptlrpc]
[39501.404733]  [&amp;lt;ffffffffc0f5880b&amp;gt;] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc]
[39501.406112]  [&amp;lt;ffffffffc0f5c13c&amp;gt;] ptlrpc_main+0xafc/0x1fc0 [ptlrpc]
[39501.407350]  [&amp;lt;ffffffff97cc1c71&amp;gt;] kthread+0xd1/0xe0
[39501.408407]  [&amp;lt;ffffffff98375c37&amp;gt;] ret_from_fork_nospec_end+0x0/0x39
[39501.409609]  [&amp;lt;ffffffffffffffff&amp;gt;] 0xffffffffffffffff
[39501.410651] LustreError: dumping log to /tmp/lustre-log.1556016839.16546
&#8230;
[40140.279045] in:imjournal    D ffff911dbbed5140     0  3670      1 0x00000080
[40140.280809] Call Trace:
[40140.281623]  [&amp;lt;ffffffff98366d60&amp;gt;] ? bit_wait+0x50/0x50
[40140.282901]  [&amp;lt;ffffffff98368c49&amp;gt;] schedule+0x29/0x70
[40140.283998]  [&amp;lt;ffffffff98366721&amp;gt;] schedule_timeout+0x221/0x2d0
[40140.285254]  [&amp;lt;ffffffff97c6a0fe&amp;gt;] ? kvm_clock_get_cycles+0x1e/0x20
[40140.286833]  [&amp;lt;ffffffff97d01092&amp;gt;] ? ktime_get_ts64+0x52/0xf0
[40140.288092]  [&amp;lt;ffffffff98366d60&amp;gt;] ? bit_wait+0x50/0x50
[40140.289171]  [&amp;lt;ffffffff983682ed&amp;gt;] io_schedule_timeout+0xad/0x130
[40140.290445]  [&amp;lt;ffffffff98368388&amp;gt;] io_schedule+0x18/0x20
[40140.291769]  [&amp;lt;ffffffff98366d71&amp;gt;] bit_wait_io+0x11/0x50
[40140.293061]  [&amp;lt;ffffffff98366897&amp;gt;] __wait_on_bit+0x67/0x90
[40140.294200]  [&amp;lt;ffffffff97db93de&amp;gt;] ? __find_get_pages+0x11e/0x1c0
[40140.295549]  [&amp;lt;ffffffff97db5881&amp;gt;] wait_on_page_bit+0x81/0xa0
[40140.296959]  [&amp;lt;ffffffff97cc2e00&amp;gt;] ? wake_bit_function+0x40/0x40
[40140.298307]  [&amp;lt;ffffffff97dc706b&amp;gt;] truncate_inode_pages_range+0x42b/0x700
[40140.342038]  [&amp;lt;ffffffffc02a5dbc&amp;gt;] ? __ext4_journal_stop+0x3c/0xb0 [ext4]
[40140.343859]  [&amp;lt;ffffffffc0281d58&amp;gt;] ? ext4_rename+0x168/0x890 [ext4]
[40140.345266]  [&amp;lt;ffffffff97e51151&amp;gt;] ? link_path_walk+0x81/0x8b0
[40140.346503]  [&amp;lt;ffffffff97eaf38a&amp;gt;] ? __dquot_initialize+0x3a/0x240
[40140.347746]  [&amp;lt;ffffffff97e6fe5a&amp;gt;] ? __inode_wait_for_writeback+0x7a/0xf0
[40140.349584]  [&amp;lt;ffffffff97dc73af&amp;gt;] truncate_inode_pages_final+0x4f/0x60
[40140.350965]  [&amp;lt;ffffffffc027841f&amp;gt;] ext4_evict_inode+0x10f/0x480 [ext4]
[40140.352310]  [&amp;lt;ffffffff97e5eeb4&amp;gt;] evict+0xb4/0x180
[40140.353476]  [&amp;lt;ffffffff97e5f7bc&amp;gt;] iput+0xfc/0x190
[40140.354709]  [&amp;lt;ffffffff97e5a020&amp;gt;] __dentry_kill+0x120/0x180
[40140.356039]  [&amp;lt;ffffffff97e5a130&amp;gt;] dput+0xb0/0x160
[40140.357170]  [&amp;lt;ffffffff97e53f58&amp;gt;] SYSC_renameat2+0x518/0x5a0
[40140.358364]  [&amp;lt;ffffffff97defa61&amp;gt;] ? __vma_rb_erase+0x121/0x220
[40140.359563]  [&amp;lt;ffffffff98375d21&amp;gt;] ? system_call_after_swapgs+0xae/0x146
[40140.361085]  [&amp;lt;ffffffff98375d15&amp;gt;] ? system_call_after_swapgs+0xa2/0x146
[40140.362608]  [&amp;lt;ffffffff98375d21&amp;gt;] ? system_call_after_swapgs+0xae/0x146
[40140.364012]  [&amp;lt;ffffffff98375d15&amp;gt;] ? system_call_after_swapgs+0xa2/0x146
[40140.365472]  [&amp;lt;ffffffff97e54e5e&amp;gt;] SyS_renameat2+0xe/0x10
[40140.366695]  [&amp;lt;ffffffff97e54e9e&amp;gt;] SyS_rename+0x1e/0x20
[40140.367909]  [&amp;lt;ffffffff98375ddb&amp;gt;] system_call_fastpath+0x22/0x27
[40140.369258]  [&amp;lt;ffffffff98375d21&amp;gt;] ? system_call_after_swapgs+0xae/0x146
&#8230;
[40141.471161] txg_sync        D ffff911dbc145140     0 27944      2 0x00000080
[40141.472432] Call Trace:
[40141.472873]  [&amp;lt;ffffffff97cceca4&amp;gt;] ? __wake_up+0x44/0x50
[40141.473770]  [&amp;lt;ffffffff98368c49&amp;gt;] schedule+0x29/0x70
[40141.474622]  [&amp;lt;ffffffff98366721&amp;gt;] schedule_timeout+0x221/0x2d0
[40141.475635]  [&amp;lt;ffffffffc052c57e&amp;gt;] ? spa_taskq_dispatch_ent+0x8e/0xc0 [zfs]
[40141.476797]  [&amp;lt;ffffffff97c6a0fe&amp;gt;] ? kvm_clock_get_cycles+0x1e/0x20
[40141.477853]  [&amp;lt;ffffffff983682ed&amp;gt;] io_schedule_timeout+0xad/0x130
[40141.478872]  [&amp;lt;ffffffff97cc28c6&amp;gt;] ? prepare_to_wait_exclusive+0x56/0x90
[40141.479986]  [&amp;lt;ffffffff98368388&amp;gt;] io_schedule+0x18/0x20
[40141.480884]  [&amp;lt;ffffffffc03c7262&amp;gt;] cv_wait_common+0xb2/0x150 [spl]
[40141.481929]  [&amp;lt;ffffffff97cc2d40&amp;gt;] ? wake_up_atomic_t+0x30/0x30
[40141.482928]  [&amp;lt;ffffffffc03c7338&amp;gt;] __cv_wait_io+0x18/0x20 [spl]
[40141.483963]  [&amp;lt;ffffffffc0596a3b&amp;gt;] zio_wait+0x11b/0x1c0 [zfs]
[40141.484952]  [&amp;lt;ffffffffc050cb50&amp;gt;] dsl_pool_sync+0x3e0/0x440 [zfs]
[40141.486020]  [&amp;lt;ffffffffc052a907&amp;gt;] spa_sync+0x437/0xd90 [zfs]
[40141.487015]  [&amp;lt;ffffffff97cd6802&amp;gt;] ? default_wake_function+0x12/0x20
[40141.488130]  [&amp;lt;ffffffff97cceca4&amp;gt;] ? __wake_up+0x44/0x50
[40141.489066]  [&amp;lt;ffffffffc053f321&amp;gt;] txg_sync_thread+0x301/0x510 [zfs]
[40141.490151]  [&amp;lt;ffffffffc053f020&amp;gt;] ? txg_fini+0x2a0/0x2a0 [zfs]
[40141.491152]  [&amp;lt;ffffffffc03c2063&amp;gt;] thread_generic_wrapper+0x73/0x80 [spl]
[40141.492288]  [&amp;lt;ffffffffc03c1ff0&amp;gt;] ? __thread_exit+0x20/0x20 [spl]
[40141.493328]  [&amp;lt;ffffffff97cc1c71&amp;gt;] kthread+0xd1/0xe0
[40141.494186]  [&amp;lt;ffffffff97cc1ba0&amp;gt;] ? insert_kthread_work+0x40/0x40
[40141.495223]  [&amp;lt;ffffffff98375c37&amp;gt;] ret_from_fork_nospec_begin+0x21/0x21
[40141.496325]  [&amp;lt;ffffffff97cc1ba0&amp;gt;] ? insert_kthread_work+0x40/0x40
&#8230;

&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;This hang looks similar to &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-5575&quot; title=&quot;Failure on test suite replay-ost-single test_5&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-5575&quot;&gt;&lt;del&gt;LU-5575&lt;/del&gt;&lt;/a&gt;, but &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-5575&quot; title=&quot;Failure on test suite replay-ost-single test_5&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-5575&quot;&gt;&lt;del&gt;LU-5575&lt;/del&gt;&lt;/a&gt; is closed as a duplicate of &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-4950&quot; title=&quot;sanity-benchmark test fsx hung: txg_sync was stuck on OSS&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-4950&quot;&gt;&lt;del&gt;LU-4950&lt;/del&gt;&lt;/a&gt;. In his comment of 13/Mar/17, Alex essentially says that if you don&#8217;t see ofd_destroy() in the traces, then the hang is probably not &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-4950&quot; title=&quot;sanity-benchmark test fsx hung: txg_sync was stuck on OSS&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-4950&quot;&gt;&lt;del&gt;LU-4950&lt;/del&gt;&lt;/a&gt;. Thus, I&#8217;ve opened this ticket to capture a current failure.&lt;/p&gt;</description>
                <environment></environment>
        <key id="55515">LU-12234</key>
            <summary>sanity-benchmark test iozone hangs in txg_sync</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="4" iconUrl="https://jira.whamcloud.com/images/icons/priorities/minor.svg">Minor</priority>
                        <status id="1" iconUrl="https://jira.whamcloud.com/images/icons/statuses/open.png" description="The issue is open and ready for the assignee to start work on it.">Open</status>
                    <statusCategory id="2" key="new" colorName="default"/>
                                    <resolution id="-1">Unresolved</resolution>
                                        <assignee username="wc-triage">WC Triage</assignee>
                                    <reporter username="jamesanunez">James Nunez</reporter>
                        <labels>
                            <label>dne</label>
                            <label>zfs</label>
                    </labels>
                <created>Fri, 26 Apr 2019 22:34:08 +0000</created>
                <updated>Fri, 3 Dec 2021 19:11:56 +0000</updated>
                                            <version>Lustre 2.13.0</version>
                    <version>Lustre 2.12.1</version>
                    <version>Lustre 2.10.8</version>
                    <version>Lustre 2.12.4</version>
                    <version>Lustre 2.12.6</version>
                    <version>Lustre 2.12.8</version>
                                                        <due></due>
                            <votes>0</votes>
                                    <watches>4</watches>
                                                                            <comments>
                            <comment id="247403" author="sarah" created="Mon, 20 May 2019 19:26:53 +0000"  >&lt;p&gt;Hit a similar issue in 2.10.8 ZFS testing, non-DNE&lt;br/&gt;
&lt;a href=&quot;https://testing.whamcloud.com/test_sets/206ba5a2-7602-11e9-aeec-52540065bddc&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.whamcloud.com/test_sets/206ba5a2-7602-11e9-aeec-52540065bddc&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="253148" author="yujian" created="Thu, 15 Aug 2019 17:40:04 +0000"  >&lt;p&gt;obdfilter-survey test 1c hung in ZFS testing on master branch:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[47508.054761] txg_sync        D ffff8f857a62c1c0     0 18730      2 0x00000080
[47508.055558] Call Trace:
[47508.055822]  [&amp;lt;ffffffffc04e7017&amp;gt;] ? taskq_dispatch_ent+0x57/0x170 [spl]
[47508.056531]  [&amp;lt;ffffffffc06a7900&amp;gt;] ? zio_taskq_member.isra.7.constprop.10+0x80/0x80 [zfs]
[47508.057368]  [&amp;lt;ffffffffaeb7f1c9&amp;gt;] schedule+0x29/0x70
[47508.057871]  [&amp;lt;ffffffffaeb7cb51&amp;gt;] schedule_timeout+0x221/0x2d0
[47508.058507]  [&amp;lt;ffffffffc06a691f&amp;gt;] ? zio_taskq_dispatch+0x8f/0xa0 [zfs]
[47508.059211]  [&amp;lt;ffffffffae46c27e&amp;gt;] ? kvm_clock_get_cycles+0x1e/0x20
[47508.059826]  [&amp;lt;ffffffffaeb7e73d&amp;gt;] io_schedule_timeout+0xad/0x130
[47508.060455]  [&amp;lt;ffffffffae4c5d46&amp;gt;] ? prepare_to_wait_exclusive+0x56/0x90
[47508.061121]  [&amp;lt;ffffffffaeb7e7d8&amp;gt;] io_schedule+0x18/0x20
[47508.061667]  [&amp;lt;ffffffffc04eb262&amp;gt;] cv_wait_common+0xb2/0x150 [spl]
[47508.062306]  [&amp;lt;ffffffffae4c61c0&amp;gt;] ? wake_up_atomic_t+0x30/0x30
[47508.062892]  [&amp;lt;ffffffffc04eb338&amp;gt;] __cv_wait_io+0x18/0x20 [spl]
[47508.063546]  [&amp;lt;ffffffffc06aaa6b&amp;gt;] zio_wait+0x11b/0x1c0 [zfs]
[47508.064163]  [&amp;lt;ffffffffc062085f&amp;gt;] dsl_pool_sync+0xbf/0x440 [zfs]
[47508.064801]  [&amp;lt;ffffffffc063e937&amp;gt;] spa_sync+0x437/0xd90 [zfs]
[47508.065428]  [&amp;lt;ffffffffc0653351&amp;gt;] txg_sync_thread+0x301/0x510 [zfs]
[47508.066090]  [&amp;lt;ffffffffc0653050&amp;gt;] ? txg_fini+0x2a0/0x2a0 [zfs]
[47508.066700]  [&amp;lt;ffffffffc04e6063&amp;gt;] thread_generic_wrapper+0x73/0x80 [spl]
[47508.067394]  [&amp;lt;ffffffffc04e5ff0&amp;gt;] ? __thread_exit+0x20/0x20 [spl]
[47508.068021]  [&amp;lt;ffffffffae4c50d1&amp;gt;] kthread+0xd1/0xe0
[47508.068528]  [&amp;lt;ffffffffae4c5000&amp;gt;] ? insert_kthread_work+0x40/0x40
[47508.069141]  [&amp;lt;ffffffffaeb8bd37&amp;gt;] ret_from_fork_nospec_begin+0x21/0x21
[47508.069805]  [&amp;lt;ffffffffae4c5000&amp;gt;] ? insert_kthread_work+0x40/0x40
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[47511.400931] lctl            D ffff8f8576ad0000     0 13993  13986 0x00000080
[47511.401695] Call Trace:
[47511.401957]  [&amp;lt;ffffffffaeb7f1c9&amp;gt;] schedule+0x29/0x70
[47511.402484]  [&amp;lt;ffffffffc04eb2d5&amp;gt;] cv_wait_common+0x125/0x150 [spl]
[47511.403128]  [&amp;lt;ffffffffae4c61c0&amp;gt;] ? wake_up_atomic_t+0x30/0x30
[47511.403724]  [&amp;lt;ffffffffc04eb315&amp;gt;] __cv_wait+0x15/0x20 [spl]
[47511.404343]  [&amp;lt;ffffffffc0652503&amp;gt;] txg_wait_open+0xc3/0x110 [zfs]
[47511.404975]  [&amp;lt;ffffffffc0607dfa&amp;gt;] dmu_tx_wait+0x3aa/0x3c0 [zfs]
[47511.405612]  [&amp;lt;ffffffffc0607ea2&amp;gt;] dmu_tx_assign+0x92/0x490 [zfs]
[47511.406272]  [&amp;lt;ffffffffc1373fd9&amp;gt;] osd_trans_start+0x199/0x440 [osd_zfs]
[47511.406956]  [&amp;lt;ffffffffc1498c35&amp;gt;] ofd_trans_start+0x75/0xf0 [ofd]
[47511.407589]  [&amp;lt;ffffffffc149f821&amp;gt;] ofd_commitrw_write+0xa31/0x1d40 [ofd]
[47511.408283]  [&amp;lt;ffffffffc14a3c2c&amp;gt;] ofd_commitrw+0x48c/0x9e0 [ofd]
[47511.408942]  [&amp;lt;ffffffffc153dfb0&amp;gt;] echo_client_prep_commit.isra.50+0x5b0/0xea0 [obdecho]
[47511.409768]  [&amp;lt;ffffffffc1540994&amp;gt;] echo_client_iocontrol+0x914/0x1c50 [obdecho]
[47511.410546]  [&amp;lt;ffffffffc0e2e4aa&amp;gt;] class_handle_ioctl+0x192a/0x1e30 [obdclass]
[47511.411302]  [&amp;lt;ffffffffae701cbe&amp;gt;] ? security_capable+0x1e/0x20
[47511.411907]  [&amp;lt;ffffffffc0e2ea25&amp;gt;] obd_class_ioctl+0x75/0x170 [obdclass]
[47511.412585]  [&amp;lt;ffffffffae65d9e0&amp;gt;] do_vfs_ioctl+0x3a0/0x5a0
[47511.413167]  [&amp;lt;ffffffffae65dc81&amp;gt;] SyS_ioctl+0xa1/0xc0
[47511.413676]  [&amp;lt;ffffffffaeb8be15&amp;gt;] ? system_call_after_swapgs+0xa2/0x146
[47511.414356]  [&amp;lt;ffffffffaeb8bede&amp;gt;] system_call_fastpath+0x25/0x2a
[47511.414966]  [&amp;lt;ffffffffaeb8be21&amp;gt;] ? system_call_after_swapgs+0xae/0x146
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;&lt;a href=&quot;https://testing.whamcloud.com/test_sets/a8573658-bf1c-11e9-98c8-52540065bddc&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.whamcloud.com/test_sets/a8573658-bf1c-11e9-98c8-52540065bddc&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="319563" author="JIRAUSER17102" created="Tue, 30 Nov 2021 14:24:38 +0000"  >&lt;p&gt;Hit something similar on 2.12.8: &lt;a href=&quot;https://testing.whamcloud.com/test_sets/0531f6af-85a8-4730-b9e3-d80acc3b6639&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.whamcloud.com/test_sets/0531f6af-85a8-4730-b9e3-d80acc3b6639&lt;/a&gt;&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10010">
                    <name>Duplicate</name>
                                                                <inwardlinks description="is duplicated by">
                                        <issuelink>
            <issuekey id="58044">LU-13230</issuekey>
        </issuelink>
                            </inwardlinks>
                                    </issuelinktype>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                            <outwardlinks description="is related to ">
                                        <issuelink>
            <issuekey id="24369">LU-4950</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="26283">LU-5575</issuekey>
        </issuelink>
                            </outwardlinks>
                                                                <inwardlinks description="is related to">
                                        <issuelink>
            <issuekey id="48383">LU-10009</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="55547">LU-12258</issuekey>
        </issuelink>
                            </inwardlinks>
                                    </issuelinktype>
                    </issuelinks>
                <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                    <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|i00fin:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                </customfields>
    </item>
</channel>
</rss>