<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 02:24:43 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary, append 'field=key&field=summary' to the URL of your request.
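
As a hypothetical illustration (the '/si/jira.issueviews:issue-xml/' path follows classic
JIRA's XML issue-view URL convention; verify against your own instance), a
field-restricted request URL for this issue could be assembled like so:

```python
# Build a field-restricted URL for JIRA's XML issue view.
# Assumption: classic JIRA Server URL layout (/si/jira.issueviews:issue-xml/).
base = "https://jira.whamcloud.com/si/jira.issueviews:issue-xml"
issue = "LU-9273"
fields = ["key", "summary"]  # only these fields will be returned
query = "&".join(f"field={f}" for f in fields)
url = f"{base}/{issue}/{issue}.xml?{query}"
print(url)
# -> https://jira.whamcloud.com/si/jira.issueviews:issue-xml/LU-9273/LU-9273.xml?field=key&field=summary
```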
-->
<rss version="0.92">
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
<language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-9273] replay-ost-single test_5: timeout after ost failover</title>
                <link>https://jira.whamcloud.com/browse/LU-9273</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;This issue was created by maloo for sarah_lw &amp;lt;wei3.liu@intel.com&amp;gt;&lt;/p&gt;

&lt;p&gt;This issue relates to the following test suite run: &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/d94fa898-0a02-11e7-9053-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/d94fa898-0a02-11e7-9053-5254006e85c2&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The sub-test test_5 failed with the following error:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;test failed to respond and timed out
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Env:&lt;br/&gt;
server: tag-2.9.54 el7&lt;br/&gt;
client: tag-2.9.54 SLES12SP2&lt;/p&gt;

&lt;p&gt;test log&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;== replay-ost-single test 5: Fail OST during iozone ================================================== 04:17:10 (1489576630)
iozone bg pid=7403
+ iozone -i 0 -i 1 -i 2 -+d -r 4 -s 1048576 -f /mnt/lustre/d0.replay-ost-single/f5.replay-ost-single
tmppipe=/tmp/replay-ost-single.test_5.pipe
iozone pid=7406
Iozone: Performance Test of File I/O
Version $Revision: 3.373 $
Compiled for 64 bit mode.
Build: linux-AMD64

Contributors:William Norcott, Don Capps, Isom Crawford, Kirby Collins
Al Slater, Scott Rhine, Mike Wisner, Ken Goss
Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
Randy Dunlap, Mark Montague, Dan Million, Gavin Brebner,
Jean-Marc Zucconi, Jeff Blomberg, Benny Halevy, Dave Boone,
Erik Habbinga, Kris Strecker, Walter Wong, Joshua Root,
Fabrice Bacchella, Zhenghua Xue, Qin Li, Darren Sawyer.

Run began: Wed Mar 15 04:17:10 2017

&amp;gt;&amp;gt;&amp;gt; I/O Diagnostic mode enabled. &amp;lt;&amp;lt;&amp;lt;
Performance measurements are invalid in this mode.
Record Size 4 KB
File size set to 1048576 KB
Command line used: iozone -i 0 -i 1 -i 2 -+d -r 4 -s 1048576 -f /mnt/lustre/d0.replay-ost-single/f5.replay-ost-single
Output is in Kbytes/sec
Time Resolution = 0.000001 seconds.
Processor cache size set to 1024 Kbytes.
Processor cache line size set to 32 bytes.
File stride size set to 17 * record size.
random  random    bkwd   record   stride
KB  reclen   write rewrite    read    reread    read   write    read  rewrite     read   fwrite frewrite   fread  freread
Failing ost1 on onyx-32vm4
CMD: onyx-32vm4 grep -c /mnt/lustre-ost1&apos; &apos; /proc/mounts
Stopping /mnt/lustre-ost1 (opts:) on onyx-32vm4
CMD: onyx-32vm4 umount /mnt/lustre-ost1
CMD: onyx-32vm4 lsmod | grep lnet &amp;gt; /dev/null &amp;amp;&amp;amp; lctl dl | grep &apos; ST &apos;
reboot facets: ost1
Failover ost1 to onyx-32vm4
04:17:30 (1489576650) waiting for onyx-32vm4 network 900 secs ...
04:17:30 (1489576650) network interface is UP
CMD: onyx-32vm4 hostname
mount facets: ost1
CMD: onyx-32vm4 test -b /dev/lvm-Role_OSS/P1
CMD: onyx-32vm4 e2label /dev/lvm-Role_OSS/P1
Starting ost1:   /dev/lvm-Role_OSS/P1 /mnt/lustre-ost1
CMD: onyx-32vm4 mkdir -p /mnt/lustre-ost1; mount -t lustre   		                   /dev/lvm-Role_OSS/P1 /mnt/lustre-ost1
CMD: onyx-32vm4 /usr/sbin/lctl get_param -n health_check
CMD: onyx-32vm4 PATH=/usr/lib64/lustre/tests:/usr/lib/lustre/tests:/usr/lib64/lustre/tests:/opt/iozone/bin:/opt/iozone/bin:/usr/lib64/lustre/tests/mpi:/usr/lib64/lustre/tests/racer:/usr/lib64/lustre/../lustre-iokit/sgpdd-survey:/usr/lib64/lustre/tests:/usr/lib64/lustre/utils/gss:/usr/lib64/lustre/utils:/usr/lib64/mpi/gcc/openmpi/bin:/sbin:/usr/sbin:/usr/local/sbin:/root/bin:/usr/local/bin:/usr/bin:/bin:/usr/games:/usr/sbin:/sbin::/sbin:/bin:/usr/sbin: NAME=autotest_config sh rpc.sh set_default_debug \&quot;vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck\&quot; \&quot;all\&quot; 4 
CMD: onyx-32vm4 e2label /dev/lvm-Role_OSS/P1 				2&amp;gt;/dev/null | grep -E &apos;:[a-zA-Z]{3}[0-9]{4}&apos;
CMD: onyx-32vm4 e2label /dev/lvm-Role_OSS/P1 				2&amp;gt;/dev/null | grep -E &apos;:[a-zA-Z]{3}[0-9]{4}&apos;
CMD: onyx-32vm4 e2label /dev/lvm-Role_OSS/P1 2&amp;gt;/dev/null
Started lustre-OST0000
CMD: onyx-32vm5,onyx-32vm6 PATH=/usr/lib64/lustre/tests:/usr/lib/lustre/tests:/usr/lib64/lustre/tests:/opt/iozone/bin:/opt/iozone/bin:/usr/lib64/lustre/tests/mpi:/usr/lib64/lustre/tests/racer:/usr/lib64/lustre/../lustre-iokit/sgpdd-survey:/usr/lib64/lustre/tests:/usr/lib64/lustre/utils/gss:/usr/lib64/lustre/utils:/usr/lib64/mpi/gcc/openmpi/bin:/sbin:/usr/sbin:/usr/local/sbin:/root/bin:/usr/local/bin:/usr/bin:/bin:/usr/games:/usr/sbin:/sbin::/sbin:/bin:/usr/sbin: NAME=autotest_config sh rpc.sh wait_import_state_mount FULL osc.lustre-OST0000-osc-*.ost_server_uuid 
onyx-32vm5: CMD: onyx-32vm5 lctl get_param -n at_max
onyx-32vm6: CMD: onyx-32vm6 lctl get_param -n at_max
onyx-32vm5: osc.lustre-OST0000-osc-*.ost_server_uuid in FULL state after 2 sec
onyx-32vm6: osc.lustre-OST0000-osc-*.ost_server_uuid in FULL state after 2 sec

&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Info required for matching: replay-ost-single 5&lt;/p&gt;</description>
                <environment></environment>
        <key id="45109">LU-9273</key>
            <summary>replay-ost-single test_5: timeout after ost failover</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="3" iconUrl="https://jira.whamcloud.com/images/icons/priorities/major.svg">Major</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="bzzz">Alex Zhuravlev</assignee>
                                    <reporter username="maloo">Maloo</reporter>
                        <labels>
                    </labels>
                <created>Wed, 29 Mar 2017 21:32:29 +0000</created>
                <updated>Tue, 11 Sep 2018 20:47:36 +0000</updated>
                            <resolved>Tue, 11 Sep 2018 20:47:36 +0000</resolved>
                                    <version>Lustre 2.10.0</version>
                    <version>Lustre 2.11.0</version>
                    <version>Lustre 2.10.3</version>
                    <version>Lustre 2.10.4</version>
                                    <fixVersion>Lustre 2.12.0</fixVersion>
                    <fixVersion>Lustre 2.10.6</fixVersion>
                                        <due></due>
                            <votes>0</votes>
                                    <watches>9</watches>
                                                                            <comments>
                            <comment id="196951" author="pjones" created="Wed, 24 May 2017 18:34:00 +0000"  >&lt;p&gt;Hongchao&lt;/p&gt;

&lt;p&gt;Can you please advise on this one?&lt;/p&gt;

&lt;p&gt;Thanks&lt;/p&gt;

&lt;p&gt;Peter&lt;/p&gt;</comment>
                            <comment id="197000" author="casperjx" created="Wed, 24 May 2017 23:24:52 +0000"  >&lt;p&gt;2.9.57, b3575:&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sessions/b0586f6c-86da-440c-bce4-5b37b5c9d9e8&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sessions/b0586f6c-86da-440c-bce4-5b37b5c9d9e8&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="197226" author="hongchao.zhang" created="Sat, 27 May 2017 09:15:00 +0000"  >&lt;p&gt;this test failure showed up a long time ago,&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://testing.hpdd.intel.com/sub_tests/query?builds=&amp;amp;commit=Update+results&amp;amp;gerrit=&amp;amp;hosts=&amp;amp;page=6&amp;amp;query_bugs=&amp;amp;status%5B%5D=TIMEOUT&amp;amp;sub_test%5Bsub_test_script_id%5D=56dfc79e-4a46-11e0-a7f6-52540025f9af&amp;amp;test_node%5Barchitecture_type_id%5D=&amp;amp;test_node%5Bdistribution_type_id%5D=&amp;amp;test_node%5Bfile_system_type_id%5D=9cc52180-2da5-11e1-819b-5254004bbbd3&amp;amp;test_node%5Blustre_branch_id%5D=&amp;amp;test_node%5Bos_type_id%5D=&amp;amp;test_node_network%5Bnetwork_type_id%5D=&amp;amp;test_session%5Bend_date%5D=&amp;amp;test_session%5Bquery_recent_period%5D=&amp;amp;test_session%5Bstart_date%5D=&amp;amp;test_set%5Btest_set_script_id%5D=79cbec9c-3db2-11e0-80c0-52540025f9af&amp;amp;utf8=&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/sub_tests/query?builds=&amp;amp;commit=Update+results&amp;amp;gerrit=&amp;amp;hosts=&amp;amp;page=6&amp;amp;query_bugs=&amp;amp;status%5B%5D=TIMEOUT&amp;amp;sub_test%5Bsub_test_script_id%5D=56dfc79e-4a46-11e0-a7f6-52540025f9af&amp;amp;test_node%5Barchitecture_type_id%5D=&amp;amp;test_node%5Bdistribution_type_id%5D=&amp;amp;test_node%5Bfile_system_type_id%5D=9cc52180-2da5-11e1-819b-5254004bbbd3&amp;amp;test_node%5Blustre_branch_id%5D=&amp;amp;test_node%5Bos_type_id%5D=&amp;amp;test_node_network%5Bnetwork_type_id%5D=&amp;amp;test_session%5Bend_date%5D=&amp;amp;test_session%5Bquery_recent_period%5D=&amp;amp;test_session%5Bstart_date%5D=&amp;amp;test_set%5Btest_set_script_id%5D=79cbec9c-3db2-11e0-80c0-52540025f9af&amp;amp;utf8=&lt;/a&gt;&#10003;&amp;amp;warn%5Bnotice%5D=true&lt;/p&gt;

&lt;p&gt;the iozone process has been running on the client, and there is one kernel thread in &quot;D&quot; state on the OST&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;01:13:27:ll_ost_io00_0 D 0000000000000000     0  7956      2 0x00000080
01:13:27: ffff88007cfeb950 0000000000000046 ffff880079a8d450 ffff88007b8963d8
01:13:27: ffff88005807e380 ffff880057e11800 0000000000000000 ffff88005807e3f0
01:13:27: ffff88007cfeb920 ffffffffa040d9da ffff880079249ad8 ffff88007cfebfd8
01:13:27:Call Trace:
01:13:27: [&amp;lt;ffffffffa040d9da&amp;gt;] ? jbd2_journal_stop+0x17a/0x2c0 [jbd2]
01:13:27: [&amp;lt;ffffffff810a6b6e&amp;gt;] ? prepare_to_wait+0x4e/0x80
01:13:27: [&amp;lt;ffffffffa0cd7005&amp;gt;] osd_trans_stop+0x265/0x780 [osd_ldiskfs]
01:13:27: [&amp;lt;ffffffff810a6840&amp;gt;] ? autoremove_wake_function+0x0/0x40
01:13:27: [&amp;lt;ffffffffa0e64a0f&amp;gt;] ofd_trans_stop+0x1f/0x60 [ofd]
01:13:27: [&amp;lt;ffffffffa0e6c100&amp;gt;] ofd_commitrw_write+0x4d0/0xfa0 [ofd]
01:13:27: [&amp;lt;ffffffffa0e6d18f&amp;gt;] ofd_commitrw+0x5bf/0xb10 [ofd]
01:13:27: [&amp;lt;ffffffff81150311&amp;gt;] ? kzfree+0x31/0x40
01:13:27: [&amp;lt;ffffffffa05e8121&amp;gt;] ? lprocfs_counter_add+0x151/0x1c0 [obdclass]
01:13:27: [&amp;lt;ffffffffa08613f4&amp;gt;] obd_commitrw+0x114/0x380 [ptlrpc]
01:13:27: [&amp;lt;ffffffffa086a190&amp;gt;] tgt_brw_write+0xc70/0x1530 [ptlrpc]
01:13:27: [&amp;lt;ffffffffa07bee20&amp;gt;] ? target_bulk_timeout+0x0/0xc0 [ptlrpc]
01:13:27: [&amp;lt;ffffffffa08689cc&amp;gt;] tgt_request_handle+0x8ec/0x1440 [ptlrpc]
01:13:27: [&amp;lt;ffffffffa08154d1&amp;gt;] ptlrpc_main+0xd31/0x1800 [ptlrpc]
01:13:27: [&amp;lt;ffffffffa08147a0&amp;gt;] ? ptlrpc_main+0x0/0x1800 [ptlrpc]
01:13:27: [&amp;lt;ffffffff810a63ae&amp;gt;] kthread+0x9e/0xc0
01:13:27: [&amp;lt;ffffffff8100c28a&amp;gt;] child_rip+0xa/0x20
01:13:27: [&amp;lt;ffffffff810a6310&amp;gt;] ? kthread+0x0/0xc0
01:13:27: [&amp;lt;ffffffff8100c280&amp;gt;] ? child_rip+0x0/0x20
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
                            <comment id="197704" author="sarah" created="Wed, 31 May 2017 16:21:07 +0000"  >&lt;p&gt;Is this one related to &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-9247&quot; title=&quot;replay-ost-single test_5: test failed to respond and timed out&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-9247&quot;&gt;&lt;del&gt;LU-9247&lt;/del&gt;&lt;/a&gt;?&lt;/p&gt;</comment>
                            <comment id="197781" author="hongchao.zhang" created="Thu, 1 Jun 2017 08:34:05 +0000"  >&lt;p&gt;This one is not the same as &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-9247&quot; title=&quot;replay-ost-single test_5: test failed to respond and timed out&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-9247&quot;&gt;&lt;del&gt;LU-9247&lt;/del&gt;&lt;/a&gt;; this issue was found when the backend filesystem is ldiskfs.&lt;/p&gt;</comment>
                            <comment id="202881" author="bzzz" created="Thu, 20 Jul 2017 11:56:49 +0000"  >&lt;p&gt;a few testing sessions have been initiated with a debugging patch.&lt;/p&gt;</comment>
                            <comment id="203190" author="hongchao.zhang" created="Sat, 22 Jul 2017 11:09:29 +0000"  >&lt;p&gt;the jbd2 thread is stuck (device is dm-0, the journal inode is 8)&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;jbd2/dm-0-8     D ffffffff8168a8a0     0 32087      2 0x00000080
Jul 18 16:46:02 trevis-3vm3 kernel: ffff88005dc0fac0 0000000000000046 ffff88005099bec0 ffff88005dc0ffd8
Jul 18 16:46:02 trevis-3vm3 kernel: ffff88005dc0ffd8 ffff88005dc0ffd8 ffff88005099bec0 ffff88007fd16c40
Jul 18 16:46:02 trevis-3vm3 kernel: 0000000000000000 7fffffffffffffff ffff88007ff57200 ffffffff8168a8a0
Jul 18 16:46:02 trevis-3vm3 kernel: Call Trace:
Jul 18 16:46:02 trevis-3vm3 kernel: [&amp;lt;ffffffff8168a8a0&amp;gt;] ? bit_wait+0x50/0x50
Jul 18 16:46:02 trevis-3vm3 kernel: [&amp;lt;ffffffff8168c849&amp;gt;] schedule+0x29/0x70
Jul 18 16:46:02 trevis-3vm3 kernel: [&amp;lt;ffffffff8168a289&amp;gt;] schedule_timeout+0x239/0x2c0
Jul 18 16:46:02 trevis-3vm3 kernel: [&amp;lt;ffffffff81060c1f&amp;gt;] ? kvm_clock_get_cycles+0x1f/0x30
Jul 18 16:46:02 trevis-3vm3 kernel: [&amp;lt;ffffffff8168a8a0&amp;gt;] ? bit_wait+0x50/0x50
Jul 18 16:46:02 trevis-3vm3 kernel: [&amp;lt;ffffffff8168bdee&amp;gt;] io_schedule_timeout+0xae/0x130
Jul 18 16:46:02 trevis-3vm3 kernel: [&amp;lt;ffffffff8168be88&amp;gt;] io_schedule+0x18/0x20
Jul 18 16:46:02 trevis-3vm3 kernel: [&amp;lt;ffffffff8168a8b1&amp;gt;] bit_wait_io+0x11/0x50
Jul 18 16:46:02 trevis-3vm3 kernel: [&amp;lt;ffffffff8168a3d5&amp;gt;] __wait_on_bit+0x65/0x90
Jul 18 16:46:02 trevis-3vm3 kernel: [&amp;lt;ffffffff8168a8a0&amp;gt;] ? bit_wait+0x50/0x50
Jul 18 16:46:02 trevis-3vm3 kernel: [&amp;lt;ffffffff8168a481&amp;gt;] out_of_line_wait_on_bit+0x81/0xb0
Jul 18 16:46:02 trevis-3vm3 kernel: [&amp;lt;ffffffff810b1be0&amp;gt;] ? wake_bit_function+0x40/0x40
Jul 18 16:46:02 trevis-3vm3 kernel: [&amp;lt;ffffffff8123341a&amp;gt;] __wait_on_buffer+0x2a/0x30
Jul 18 16:46:02 trevis-3vm3 kernel: [&amp;lt;ffffffffa0198742&amp;gt;] jbd2_journal_commit_transaction+0x1752/0x19a0 [jbd2]
Jul 18 16:46:02 trevis-3vm3 kernel: [&amp;lt;ffffffff81029569&amp;gt;] ? __switch_to+0xd9/0x4c0
Jul 18 16:46:02 trevis-3vm3 kernel: [&amp;lt;ffffffffa019ce99&amp;gt;] kjournald2+0xc9/0x260 [jbd2]
Jul 18 16:46:02 trevis-3vm3 kernel: [&amp;lt;ffffffff810b1b20&amp;gt;] ? wake_up_atomic_t+0x30/0x30
Jul 18 16:46:02 trevis-3vm3 kernel: [&amp;lt;ffffffffa019cdd0&amp;gt;] ? commit_timeout+0x10/0x10 [jbd2]
Jul 18 16:46:02 trevis-3vm3 kernel: [&amp;lt;ffffffff810b0a4f&amp;gt;] kthread+0xcf/0xe0
Jul 18 16:46:02 trevis-3vm3 kernel: [&amp;lt;ffffffff810b0980&amp;gt;] ? kthread_create_on_node+0x140/0x140
Jul 18 16:46:02 trevis-3vm3 kernel: [&amp;lt;ffffffff81697798&amp;gt;] ret_from_fork+0x58/0x90
Jul 18 16:46:02 trevis-3vm3 kernel: [&amp;lt;ffffffff810b0980&amp;gt;] ? kthread_create_on_node+0x140/0x140
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
                            <comment id="203479" author="bzzz" created="Tue, 25 Jul 2017 11:05:38 +0000"  >&lt;p&gt;I&apos;ve spent quite an amount of time trying to reproduce this locally and with autotest.. not a single hit.&lt;br/&gt;
working on a simple debugging patch to land on master and get better coverage.&lt;/p&gt;</comment>
                            <comment id="220337" author="pjones" created="Wed, 7 Feb 2018 18:22:04 +0000"  >&lt;p&gt;&lt;a href=&quot;https://jira.whamcloud.com/secure/ViewProfile.jspa?name=bzzz&quot; class=&quot;user-hover&quot; rel=&quot;bzzz&quot;&gt;bzzz&lt;/a&gt; have you made any progress towards a debugging patch for this one?&lt;/p&gt;</comment>
                            <comment id="220339" author="bzzz" created="Wed, 7 Feb 2018 18:25:02 +0000"  >&lt;p&gt;&lt;a href=&quot;https://jira.whamcloud.com/secure/ViewProfile.jspa?name=pjones&quot; class=&quot;user-hover&quot; rel=&quot;pjones&quot;&gt;pjones&lt;/a&gt; yes, I ran it many times, no success though..&lt;/p&gt;</comment>
                            <comment id="220343" author="pjones" created="Wed, 7 Feb 2018 18:45:13 +0000"  >&lt;p&gt;&lt;a href=&quot;https://jira.whamcloud.com/secure/ViewProfile.jspa?name=bzzz&quot; class=&quot;user-hover&quot; rel=&quot;bzzz&quot;&gt;bzzz&lt;/a&gt; how about landing the debug patch to master then?&lt;/p&gt;</comment>
                            <comment id="220345" author="bzzz" created="Wed, 7 Feb 2018 18:51:02 +0000"  >&lt;p&gt;&lt;a href=&quot;https://jira.whamcloud.com/secure/ViewProfile.jspa?name=pjones&quot; class=&quot;user-hover&quot; rel=&quot;pjones&quot;&gt;pjones&lt;/a&gt; oh, I&apos;ll try to find it as it was abandoned long ago.. also, I&apos;ll check for new instances.&lt;/p&gt;
</comment>
                            <comment id="220984" author="mdiep" created="Wed, 14 Feb 2018 16:03:41 +0000"  >&lt;p&gt;+1 on b2_10&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/1ef84160-111a-11e8-bd00-52540065bddc&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/1ef84160-111a-11e8-bd00-52540065bddc&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="220985" author="bzzz" created="Wed, 14 Feb 2018 16:11:16 +0000"  >&lt;p&gt;all the recent reports miss backtraces so it&apos;s barely possible to understand what&apos;s going on.&lt;br/&gt;
I was told the backtraces will be back with the next Maloo update..&lt;/p&gt;</comment>
                            <comment id="221841" author="jamesanunez" created="Tue, 27 Feb 2018 18:43:17 +0000"  >&lt;p&gt;Alex - Does this test session have the detail that you need: &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/de612b6c-18d0-11e8-a10a-52540065bddc&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/de612b6c-18d0-11e8-a10a-52540065bddc&lt;/a&gt; ?&lt;/p&gt;

&lt;p&gt;An update to autotest was installed on Thursday last week and &lt;b&gt;should&lt;/b&gt; improve the collection of logs.&lt;/p&gt;

&lt;p&gt;Thanks for looking into this failure.&lt;/p&gt;</comment>
                            <comment id="223037" author="bzzz" created="Sun, 11 Mar 2018 09:54:31 +0000"  >&lt;p&gt;first of all, every report I&apos;ve examined shows no sign of a deadlock or anything similar. they were all making progress.&lt;br/&gt;
but according to the log a lot of requests were sync due to lack of grants. also, in all the cases the I/Os weren&apos;t linear.&lt;br/&gt;
given that CLIO estimates ~1.8MB per extent and the I/Os are 4KB, the client was hitting out-of-grants for ~40% of write syscalls.&lt;br/&gt;
I&apos;ve been running the test with random I/O disabled:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;-       local iozone_opts=&quot;-i 0 -i 1 -i 2 -+d -r 4 -s $size -f $TDIR/$tfile&quot;
+       local iozone_opts=&quot;-i 0 -i 1 -+d -r 4 -s $size -f $TDIR/$tfile&quot;
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;and can&apos;t reproduce the issue since then.&lt;/p&gt;
</comment>
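
The grant-exhaustion effect described in the comment above can be sketched with a toy
model. This is my own illustration, not Lustre code: the ~1.8MB per-extent estimate
comes from the comment, while the 32MB grant pool and the "one reservation per dirty
extent" rule are assumed simplifications.

```python
# Toy model: why 4KB random writes run out of grant long before
# sequential writes do. All constants are assumptions for illustration.

GRANT_POOL = 32 * 1024 * 1024            # assumed per-OST grant available to the client
EXTENT_RESERVE = 18 * 1024 * 1024 // 10  # ~1.8MB reserved per new dirty extent
RECORD = 4 * 1024                        # iozone -r 4 record size

def writes_before_sync(random_io, total_records=10_000):
    """Count 4KB writes that fit in the grant pool before writes must go sync."""
    grant_left = GRANT_POOL
    done = 0
    for i in range(total_records):
        # Sequential I/O keeps extending one extent, so only the first write
        # pays the extent reservation; random I/O lands in a new extent
        # (and pays the reservation) on nearly every record.
        cost = EXTENT_RESERVE if (random_io or i == 0) else RECORD
        if grant_left < cost:
            break
        grant_left -= cost
        done += 1
    return done

seq_writes = writes_before_sync(random_io=False)
rnd_writes = writes_before_sync(random_io=True)
```

Under these assumptions, sequential writes pay the extent reservation once and then
stream 4KB records, while random writes exhaust the grant pool after a few dozen
records, after which every write turns synchronous — matching the ~40% sync-write
observation above in spirit.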
                            <comment id="223606" author="bzzz" created="Wed, 14 Mar 2018 14:26:19 +0000"  >&lt;p&gt;File size set to 209715 KB&lt;/p&gt;

&lt;p&gt;ldiskfs:&#160;86% of 303 OST_WRITE RPCs were 512 and 1024 pages&lt;/p&gt;

&lt;p&gt;ZFS: 95% of&#160;26783 OST_WRITE RPCs were 2 pages&lt;/p&gt;

&lt;p&gt;going to check what&apos;s wrong with ZFS..&lt;/p&gt;

</comment>
                            <comment id="223671" author="jamesanunez" created="Thu, 15 Mar 2018 03:15:47 +0000"  >&lt;p&gt;It looks like we are seeing this issue again. Please see &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/5ba4a4fe-2746-11e8-9e0e-52540065bddc&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/5ba4a4fe-2746-11e8-9e0e-52540065bddc&lt;/a&gt; for more logs. In the MDS console, we see&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;
[56215.091992] jbd2/vda1-8&#160;&#160;&#160;&#160; D ffff880036d2bf40&#160;&#160;&#160;&#160; 0&#160;&#160; 265&#160;&#160;&#160;&#160;&#160; 2 0x00000000

[56215.092773] Call Trace:

[56215.093088]&#160; [&amp;lt;ffffffff816b2060&amp;gt;] ? bit_wait+0x50/0x50

[56215.093589]&#160; [&amp;lt;ffffffff816b40e9&amp;gt;] schedule+0x29/0x70

[56215.094079]&#160; [&amp;lt;ffffffff816b1a49&amp;gt;] schedule_timeout+0x239/0x2c0

[56215.094745]&#160; [&amp;lt;ffffffff812fd2d0&amp;gt;] ? generic_make_request_checks+0x1a0/0x3a0

[56215.095421]&#160; [&amp;lt;ffffffff81063f5e&amp;gt;] ? kvm_clock_get_cycles+0x1e/0x20

[56215.096029]&#160; [&amp;lt;ffffffff816b2060&amp;gt;] ? bit_wait+0x50/0x50

[56215.096605]&#160; [&amp;lt;ffffffff816b35ed&amp;gt;] io_schedule_timeout+0xad/0x130

[56215.097194]&#160; [&amp;lt;ffffffff816b3688&amp;gt;] io_schedule+0x18/0x20

[56215.097791]&#160; [&amp;lt;ffffffff816b2071&amp;gt;] bit_wait_io+0x11/0x50

[56215.098300]&#160; [&amp;lt;ffffffff816b1b97&amp;gt;] __wait_on_bit+0x67/0x90

[56215.098849]&#160; [&amp;lt;ffffffff816b2060&amp;gt;] ? bit_wait+0x50/0x50

[56215.099422]&#160; [&amp;lt;ffffffff816b1c41&amp;gt;] out_of_line_wait_on_bit+0x81/0xb0

[56215.100036]&#160; [&amp;lt;ffffffff810b5080&amp;gt;] ? wake_bit_function+0x40/0x40

[56215.100689]&#160; [&amp;lt;ffffffff8123b3fa&amp;gt;] __wait_on_buffer+0x2a/0x30

[56215.101386]&#160; [&amp;lt;ffffffffc00d4891&amp;gt;] jbd2_journal_commit_transaction+0x1781/0x19b0 [jbd2]

[56215.102164]&#160; [&amp;lt;ffffffff810c28a0&amp;gt;] ? finish_task_switch+0x50/0x170

[56215.102869]&#160; [&amp;lt;ffffffffc00d9b69&amp;gt;] kjournald2+0xc9/0x260 [jbd2]

[56215.103444]&#160; [&amp;lt;ffffffff810b4fc0&amp;gt;] ? wake_up_atomic_t+0x30/0x30

[56215.104030]&#160; [&amp;lt;ffffffffc00d9aa0&amp;gt;] ? commit_timeout+0x10/0x10 [jbd2]

[56215.104734]&#160; [&amp;lt;ffffffff810b4031&amp;gt;] kthread+0xd1/0xe0

[56215.105204]&#160; [&amp;lt;ffffffff810b3f60&amp;gt;] ? insert_kthread_work+0x40/0x40

[56215.105807]&#160; [&amp;lt;ffffffff816c0577&amp;gt;] ret_from_fork+0x77/0xb0

[56215.106405]&#160; [&amp;lt;ffffffff810b3f60&amp;gt;] ? insert_kthread_work+0x40/0x40

&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;


&lt;p&gt;Yet, on the OST console, I see&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;
[56212.701386] txg_sync&#160;&#160;&#160;&#160;&#160;&#160;&#160; D ffff88003fa35ee0&#160;&#160;&#160;&#160; 0 16826&#160;&#160;&#160;&#160;&#160; 2 0x00000080

[56212.702137] Call Trace:

[56212.702393]&#160; [&amp;lt;ffffffff81240605&amp;gt;] ? bio_alloc_bioset+0x115/0x310

[56212.703000]&#160; [&amp;lt;ffffffff816b40e9&amp;gt;] schedule+0x29/0x70

[56212.703610]&#160; [&amp;lt;ffffffff816b1a49&amp;gt;] schedule_timeout+0x239/0x2c0

[56212.704224]&#160; [&amp;lt;ffffffff81063f5e&amp;gt;] ? kvm_clock_get_cycles+0x1e/0x20

[56212.704842]&#160; [&amp;lt;ffffffff810ecec2&amp;gt;] ? ktime_get_ts64+0x52/0xf0

[56212.705458]&#160; [&amp;lt;ffffffff816b35ed&amp;gt;] io_schedule_timeout+0xad/0x130

[56212.706073]&#160; [&amp;lt;ffffffff810b4cb6&amp;gt;] ? prepare_to_wait_exclusive+0x56/0x90

[56212.706806]&#160; [&amp;lt;ffffffff816b3688&amp;gt;] io_schedule+0x18/0x20

[56212.707364]&#160; [&amp;lt;ffffffffc065e502&amp;gt;] cv_wait_common+0xb2/0x150 [spl]

[56212.707983]&#160; [&amp;lt;ffffffff810b4fc0&amp;gt;] ? wake_up_atomic_t+0x30/0x30

[56212.708637]&#160; [&amp;lt;ffffffffc065e5f8&amp;gt;] __cv_wait_io+0x18/0x20 [spl]

[56212.709275]&#160; [&amp;lt;ffffffffc0806833&amp;gt;] zio_wait+0x113/0x1c0 [zfs]

[56212.709879]&#160; [&amp;lt;ffffffffc07bafd1&amp;gt;] vdev_config_sync+0xf1/0x180 [zfs]

[56212.710612]&#160; [&amp;lt;ffffffffc079b2b4&amp;gt;] spa_sync+0xa24/0xdf0 [zfs]

[56212.711225]&#160; [&amp;lt;ffffffff810c7c82&amp;gt;] ? default_wake_function+0x12/0x20

[56212.711883]&#160; [&amp;lt;ffffffffc07aef91&amp;gt;] txg_sync_thread+0x301/0x510 [zfs]

[56212.712604]&#160; [&amp;lt;ffffffffc07aec90&amp;gt;] ? txg_fini+0x2a0/0x2a0 [zfs]

[56212.713231]&#160; [&amp;lt;ffffffffc0658fc3&amp;gt;] thread_generic_wrapper+0x73/0x80 [spl]

[56212.713926]&#160; [&amp;lt;ffffffffc0658f50&amp;gt;] ? __thread_exit+0x20/0x20 [spl]

[56212.714604]&#160; [&amp;lt;ffffffff810b4031&amp;gt;] kthread+0xd1/0xe0

[56212.715128]&#160; [&amp;lt;ffffffff810b3f60&amp;gt;] ? insert_kthread_work+0x40/0x40

[56212.715746]&#160; [&amp;lt;ffffffff816c0577&amp;gt;] ret_from_fork+0x77/0xb0

[56212.716360]&#160; [&amp;lt;ffffffff810b3f60&amp;gt;] ? insert_kthread_work+0x40/0x40

&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;


&lt;p&gt;Which looks like &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-9247&quot; title=&quot;replay-ost-single test_5: test failed to respond and timed out&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-9247&quot;&gt;&lt;del&gt;LU-9247&lt;/del&gt;&lt;/a&gt;.&lt;/p&gt;


&lt;p&gt;Here are other examples:&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/03b2ab72-2761-11e8-9e0e-52540065bddc&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/03b2ab72-2761-11e8-9e0e-52540065bddc&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/73816850-2773-11e8-b3c6-52540065bddc&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/73816850-2773-11e8-b3c6-52540065bddc&lt;/a&gt;&lt;/p&gt;

</comment>
                            <comment id="223674" author="bzzz" created="Thu, 15 Mar 2018 04:39:21 +0000"  >&lt;p&gt;well, it&apos;s ZFS in the reported cases and I think I roughly understand the root cause. it probably makes sense to disable this subtest with ZFS for a while.&lt;/p&gt;

</comment>
                            <comment id="223675" author="bzzz" created="Thu, 15 Mar 2018 04:43:01 +0000"  >&lt;p&gt;these data collected with autotest confirm my local findings:&lt;/p&gt;


&lt;p&gt;ldiskfs:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;                       read        |       write
pages per rpc    rpcs  %  cum %    |  rpcs  %  cum %
1:                  0  0      0    |    19  3      3
2:                  0  0      0    |     1  0      3
4:                  0  0      0    |     3  0      4
8:                  0  0      0    |     0  0      4
16:                 0  0      0    |    18  3      8
32:                 0  0      0    |    18  3     11
64:                 0  0      0    |    59 11     23
128:                0  0      0    |    31  6     29
256:                0  0      0    |    29  5     35
512:                0  0      0    |   202 39     74
1024:               0  0      0    |   128 25    100
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;ZFS:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;                       read        |       write
pages per rpc    rpcs  %  cum %    |  rpcs  %  cum %
1:                  0  0      0    |     2  0      0
2:                  0  0      0    | 32534 98     98
4:                  0  0      0    |   144  0     98
8:                  0  0      0    |     0  0     98
16:                 0  0      0    |     1  0     98
32:                 0  0      0    |     1  0     98
64:                 0  0      0    |     0  0     98
128:                0  0      0    |     0  0     98
256:                0  0      0    |   512  1    100
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;random writes consume granted space too quickly, causing early writes to recycle grants.&lt;/p&gt;

</comment>
                            <comment id="223828" author="gerrit" created="Fri, 16 Mar 2018 11:31:44 +0000"  >&lt;p&gt;Alex Zhuravlev (alexey.zhuravlev@intel.com) uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/31671&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/31671&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-9273&quot; title=&quot;replay-ost-single test_5: timeout after ost failover&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-9273&quot;&gt;&lt;del&gt;LU-9273&lt;/del&gt;&lt;/a&gt; tests: disable random I/O in replay-ost-single/5&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 2a63c9ad83eb910128fe476250e9bb0b799459b8&lt;/p&gt;</comment>
                            <comment id="225472" author="gerrit" created="Mon, 9 Apr 2018 19:46:02 +0000"  >&lt;p&gt;Oleg Drokin (oleg.drokin@intel.com) merged in patch &lt;a href=&quot;https://review.whamcloud.com/31671/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/31671/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-9273&quot; title=&quot;replay-ost-single test_5: timeout after ost failover&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-9273&quot;&gt;&lt;del&gt;LU-9273&lt;/del&gt;&lt;/a&gt; tests: disable random I/O in replay-ost-single/5&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: e3bc6e681666aa2c60ada5f997966efa31fae68c&lt;/p&gt;</comment>
                            <comment id="228045" author="sarah" created="Wed, 16 May 2018 23:42:33 +0000"  >&lt;p&gt;+1 on b2_10 &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/3fba602a-5910-11e8-93e6-52540065bddc&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/3fba602a-5910-11e8-93e6-52540065bddc&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="230732" author="bzzz" created="Mon, 23 Jul 2018 09:06:14 +0000"  >&lt;p&gt;I think this isn&apos;t an issue any more?&lt;/p&gt;</comment>
                            <comment id="231671" author="jamesanunez" created="Wed, 8 Aug 2018 20:49:32 +0000"  >&lt;p&gt;Alex - Is this another instance of this hang with ZFS &lt;a href=&quot;https://testing.whamcloud.com/test_sets/420c0390-9ac1-11e8-b0aa-52540065bddc&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.whamcloud.com/test_sets/420c0390-9ac1-11e8-b0aa-52540065bddc&lt;/a&gt; ?&lt;/p&gt;</comment>
                            <comment id="232434" author="gerrit" created="Wed, 22 Aug 2018 15:28:22 +0000"  >&lt;p&gt;James Nunez (jnunez@whamcloud.com) uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/33053&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/33053&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-9273&quot; title=&quot;replay-ost-single test_5: timeout after ost failover&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-9273&quot;&gt;&lt;del&gt;LU-9273&lt;/del&gt;&lt;/a&gt; tests: disable random I/O in replay-ost-single/5&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: b2_10&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 3362371fa079a532b82bb8922781a1dc6ad54572&lt;/p&gt;</comment>
                            <comment id="233352" author="gerrit" created="Tue, 11 Sep 2018 20:17:37 +0000"  >&lt;p&gt;John L. Hammond (jhammond@whamcloud.com) merged in patch &lt;a href=&quot;https://review.whamcloud.com/33053/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/33053/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-9273&quot; title=&quot;replay-ost-single test_5: timeout after ost failover&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-9273&quot;&gt;&lt;del&gt;LU-9273&lt;/del&gt;&lt;/a&gt; tests: disable random I/O in replay-ost-single/5&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: b2_10&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: 52809289d5e81557784346bc53a436541214690f&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                    <issuelinktype id="10010">
                        <name>Duplicate</name>
                        <outwardlinks description="duplicates">
                        </outwardlinks>
                    </issuelinktype>
                    <issuelinktype id="10011">
                        <name>Related</name>
                        <outwardlinks description="is related to ">
                            <issuelink>
                                <issuekey id="25190">LU-5214</issuekey>
                            </issuelink>
                        </outwardlinks>
                    </issuelinktype>
                </issuelinks>
                <attachments>
                </attachments>
                <subtasks>
                </subtasks>
                <customfields>
                    <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzz8ov:</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                </customfields>
    </item>
</channel>
</rss>