<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 03:12:27 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
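For instance, the XML view URL for this issue with only those fields would typically look like (illustrative only):
https://jira.whamcloud.com/si/jira.issueviews:issue-xml/LU-14749/LU-14749.xml?field=key&field=summary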
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-14749] runtests test 1 hangs on MDS unmount</title>
                <link>https://jira.whamcloud.com/browse/LU-14749</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;runtests test_1 is hanging on MDS unmount, but only on b2_12 with DNE. We&#8217;ve seen this hang six times starting on 4 June 2021.&lt;/p&gt;

&lt;p&gt;Looking at the hang at &lt;a href=&quot;https://testing.whamcloud.com/test_sets/e9a05afa-2c52-4e7a-9528-e85f89d9d571&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.whamcloud.com/test_sets/e9a05afa-2c52-4e7a-9528-e85f89d9d571&lt;/a&gt;, the last thing we see in the suite_log is&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;CMD: trevis-45vm4 grep -c /mnt/lustre-mds3&apos; &apos; /proc/mounts || true
Stopping /mnt/lustre-mds3 (opts:-f) on trevis-45vm4
CMD: trevis-45vm4 umount -d -f /mnt/lustre-mds3
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Looking at the console log for MDS1/3 (vm4), we see the MDS hanging on unmount (many umount processes hung with the following trace):&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[ 4271.014683] Lustre: DEBUG MARKER: ! zpool list -H lustre-mdt1 &amp;gt;/dev/null 2&amp;gt;&amp;amp;1 ||
[ 4271.014683] 			grep -q ^lustre-mdt1/ /proc/mounts ||
[ 4271.014683] 			zpool export  lustre-mdt1
[ 4271.481809] LustreError: 11-0: lustre-MDT0000-osp-MDT0002: operation mds_statfs to node 0@lo failed: rc = -107
[ 4271.483819] LustreError: Skipped 2 previous similar messages
[ 4271.484875] Lustre: lustre-MDT0000-osp-MDT0002: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[ 4271.487858] Lustre: Skipped 23 previous similar messages
[ 4271.489169] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
[ 4271.492149] LustreError: Skipped 1108 previous similar messages
[ 4278.514239] Lustre: 14222:0:(client.c:2169:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1623099860/real 1623099860]  req@ffff9e71cdd2fa80 x1701939306318336/t0(0) o400-&amp;gt;MGC10.9.6.22@tcp@0@lo:26/25 lens 224/224 e 0 to 1 dl 1623099867 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1
[ 4278.518985] Lustre: 14222:0:(client.c:2169:ptlrpc_expire_one_request()) Skipped 12 previous similar messages
[ 4279.969273] Lustre: DEBUG MARKER: grep -c /mnt/lustre-mds3&apos; &apos; /proc/mounts || true
[ 4280.342472] Lustre: DEBUG MARKER: umount -d -f /mnt/lustre-mds3
[ 4440.193242] INFO: task umount:25803 blocked for more than 120 seconds.
[ 4440.194850] &quot;echo 0 &amp;gt; /proc/sys/kernel/hung_task_timeout_secs&quot; disables this message.
[ 4440.196316] umount          D ffff9e71ce9fe300     0 25803  25802 0x00000080
[ 4440.197835] Call Trace:
[ 4440.198401]  [&amp;lt;ffffffffb6b89e69&amp;gt;] schedule_preempt_disabled+0x29/0x70
[ 4440.199694]  [&amp;lt;ffffffffb6b87dc7&amp;gt;] __mutex_lock_slowpath+0xc7/0x1d0
[ 4440.200883]  [&amp;lt;ffffffffb6b8719f&amp;gt;] mutex_lock+0x1f/0x2f
[ 4440.201932]  [&amp;lt;ffffffffc0f904d7&amp;gt;] mgc_process_config+0x207/0x13f0 [mgc]
[ 4440.203416]  [&amp;lt;ffffffffc0d186d5&amp;gt;] obd_process_config.constprop.14+0x75/0x210 [obdclass]
[ 4440.205020]  [&amp;lt;ffffffffc0baa177&amp;gt;] ? libcfs_debug_msg+0x57/0x80 [libcfs]
[ 4440.206335]  [&amp;lt;ffffffffc0d04cd9&amp;gt;] ? lprocfs_counter_add+0xf9/0x160 [obdclass]
[ 4440.207675]  [&amp;lt;ffffffffc0d1994f&amp;gt;] lustre_end_log+0x1ff/0x550 [obdclass]
[ 4440.209047]  [&amp;lt;ffffffffc0d46bbe&amp;gt;] server_put_super+0x82e/0xd00 [obdclass]
[ 4440.210386]  [&amp;lt;ffffffffb6697359&amp;gt;] ? fsnotify_unmount_inodes+0x119/0x1d0
[ 4440.211643]  [&amp;lt;ffffffffb66507cd&amp;gt;] generic_shutdown_super+0x6d/0x100
[ 4440.212885]  [&amp;lt;ffffffffb6650bd2&amp;gt;] kill_anon_super+0x12/0x20
[ 4440.213994]  [&amp;lt;ffffffffc0d17d72&amp;gt;] lustre_kill_super+0x32/0x50 [obdclass]
[ 4440.215280]  [&amp;lt;ffffffffb6650fae&amp;gt;] deactivate_locked_super+0x4e/0x70
[ 4440.216493]  [&amp;lt;ffffffffb6651736&amp;gt;] deactivate_super+0x46/0x60
[ 4440.217607]  [&amp;lt;ffffffffb6670dcf&amp;gt;] cleanup_mnt+0x3f/0x80
[ 4440.218648]  [&amp;lt;ffffffffb6670e62&amp;gt;] __cleanup_mnt+0x12/0x20
[ 4440.219731]  [&amp;lt;ffffffffb64c28db&amp;gt;] task_work_run+0xbb/0xe0
[ 4440.220786]  [&amp;lt;ffffffffb642cc65&amp;gt;] do_notify_resume+0xa5/0xc0
[ 4440.221907]  [&amp;lt;ffffffffb6b962ef&amp;gt;] int_signal+0x12/0x17
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;On the client (vm1), we see&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[ 4178.949972] echo Stopping client $(hostname) /mnt/lustre opts:;
[ 4178.949972] lsof /mnt/lustre || need_kill=no;
[ 4178.949972] if [ x != x -a x$need_kill != xno ]; then
[ 4178.949972]     pids=$(lsof -t /mnt/lustre | sort -u);
[ 4178.949972]     if 
[ 4320.175524] INFO: task umount:18671 blocked for more than 120 seconds.
[ 4320.176904] &quot;echo 0 &amp;gt; /proc/sys/kernel/hung_task_timeout_secs&quot; disables this message.
[ 4320.178305] umount          D ffff92277fc1acc0     0 18671  18662 0x00000080
[ 4320.179747] Call Trace:
[ 4320.180288]  [&amp;lt;ffffffff91589e69&amp;gt;] schedule_preempt_disabled+0x29/0x70
[ 4320.181455]  [&amp;lt;ffffffff91587dc7&amp;gt;] __mutex_lock_slowpath+0xc7/0x1d0
[ 4320.182589]  [&amp;lt;ffffffff9158719f&amp;gt;] mutex_lock+0x1f/0x2f
[ 4320.183545]  [&amp;lt;ffffffffc0a73297&amp;gt;] mgc_process_config+0x207/0x13f0 [mgc]
[ 4320.185118]  [&amp;lt;ffffffffc0773315&amp;gt;] obd_process_config.constprop.14+0x75/0x210 [obdclass]
[ 4320.186590]  [&amp;lt;ffffffffc075fb99&amp;gt;] ? lprocfs_counter_add+0xf9/0x160 [obdclass]
[ 4320.187966]  [&amp;lt;ffffffffc077458f&amp;gt;] lustre_end_log+0x1ff/0x550 [obdclass]
[ 4320.189271]  [&amp;lt;ffffffffc0bf49ee&amp;gt;] ll_put_super+0x8e/0x9b0 [lustre]
[ 4320.190454]  [&amp;lt;ffffffff90f598ad&amp;gt;] ? call_rcu_sched+0x1d/0x20
[ 4320.191554]  [&amp;lt;ffffffffc0c1c7cc&amp;gt;] ? ll_destroy_inode+0x1c/0x20 [lustre]
[ 4320.192801]  [&amp;lt;ffffffff9106c31b&amp;gt;] ? destroy_inode+0x3b/0x60
[ 4320.193851]  [&amp;lt;ffffffff9106c455&amp;gt;] ? evict+0x115/0x180
[ 4320.194822]  [&amp;lt;ffffffff9106c503&amp;gt;] ? dispose_list+0x43/0x60
[ 4320.195865]  [&amp;lt;ffffffff91097279&amp;gt;] ? fsnotify_unmount_inodes+0x119/0x1d0
[ 4320.197108]  [&amp;lt;ffffffff910507cd&amp;gt;] generic_shutdown_super+0x6d/0x100
[ 4320.198280]  [&amp;lt;ffffffff91050bd2&amp;gt;] kill_anon_super+0x12/0x20
[ 4320.199352]  [&amp;lt;ffffffffc07729c5&amp;gt;] lustre_kill_super+0x45/0x50 [obdclass]
[ 4320.200597]  [&amp;lt;ffffffff91050fae&amp;gt;] deactivate_locked_super+0x4e/0x70
[ 4320.201784]  [&amp;lt;ffffffff91051736&amp;gt;] deactivate_super+0x46/0x60
[ 4320.202858]  [&amp;lt;ffffffff91070dbf&amp;gt;] cleanup_mnt+0x3f/0x80
[ 4320.203851]  [&amp;lt;ffffffff91070e52&amp;gt;] __cleanup_mnt+0x12/0x20
[ 4320.204889]  [&amp;lt;ffffffff90ec28db&amp;gt;] task_work_run+0xbb/0xe0
[ 4320.205935]  [&amp;lt;ffffffff90e2cc65&amp;gt;] do_notify_resume+0xa5/0xc0
[ 4320.207018]  [&amp;lt;ffffffff915962ef&amp;gt;] int_signal+0x12/0x17
[ 4367.764390] LustreError: 166-1: MGC10.9.6.22@tcp: Connection to MGS (at 10.9.6.22@tcp) was lost; in progress operations using this service will fail
[ 4367.766831] LustreError: Skipped 8 previous similar messages
[ 4367.767935] LustreError: 375:0:(ldlm_request.c:148:ldlm_expired_completion_wait()) ### lock timed out (enqueued at 1623099556, 300s ago), entering recovery for MGS@10.9.6.22@tcp ns: MGC10.9.6.22@tcp lock: ffff9227613cf200/0x605efa0a5bb89566 lrc: 4/1,0 mode: --/CR res: [0x65727473756c:0x2:0x0].0x0 rrc: 2 type: PLN flags: 0x1000000000000 nid: local remote: 0x8475cec67ff59212 expref: -99 pid: 375 timeout: 0 lvb_type: 0
[ 4367.776090] LustreError: 18684:0:(ldlm_resource.c:1137:ldlm_resource_complain()) MGC10.9.6.22@tcp: namespace resource [0x65727473756c:0x2:0x0].0x0 (ffff9227638ad600) refcount nonzero (1) after lock cleanup; forcing cleanup.
[ 4367.961924] Lustre: Unmounted lustre-client
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Logs for other test sessions with this hang are at&lt;br/&gt;
&lt;a href=&quot;https://testing.whamcloud.com/test_sets/6cbb5be8-e342-41f1-99ac-0681b5db24a9&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.whamcloud.com/test_sets/6cbb5be8-e342-41f1-99ac-0681b5db24a9&lt;/a&gt;&lt;br/&gt;
&lt;a href=&quot;https://testing.whamcloud.com/test_sets/1649ef8e-387d-47f5-aff8-d1449a7c9d7c&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.whamcloud.com/test_sets/1649ef8e-387d-47f5-aff8-d1449a7c9d7c&lt;/a&gt;&lt;br/&gt;
&lt;a href=&quot;https://testing.whamcloud.com/test_sets/6440455d-7572-44d3-ad4a-9a55e6bcaa4c&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.whamcloud.com/test_sets/6440455d-7572-44d3-ad4a-9a55e6bcaa4c&lt;/a&gt;&lt;/p&gt;</description>
                <environment></environment>
        <key id="64608">LU-14749</key>
            <summary>runtests test 1 hangs on MDS unmount</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="4" iconUrl="https://jira.whamcloud.com/images/icons/priorities/minor.svg">Minor</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="5">Cannot Reproduce</resolution>
                                        <assignee username="wc-triage">WC Triage</assignee>
                                    <reporter username="jamesanunez">James Nunez</reporter>
                        <labels>
                    </labels>
                <created>Wed, 9 Jun 2021 17:42:41 +0000</created>
                <updated>Sun, 26 Mar 2023 23:24:22 +0000</updated>
                            <resolved>Sun, 26 Mar 2023 23:24:22 +0000</resolved>
                                    <version>Lustre 2.12.6</version>
                                                        <due></due>
                            <votes>0</votes>
                                    <watches>2</watches>
                                                                            <comments>
                            <comment id="304200" author="adilger" created="Fri, 11 Jun 2021 00:17:03 +0000"  >&lt;p&gt;The test has failed 12 times since 2021-06-04, but only for patch review test sessions (where it runs after replay-dual), not for full test sessions.  in the console logs I see an MGS lock timeout which predates the start of &lt;tt&gt;runtests&lt;/tt&gt; in the logs:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[ 4191.078567] LustreError: 14293:0:(ldlm_request.c:148:ldlm_expired_completion_wait()) ### lock timed out (enqueued at 1623319496, 300s ago), entering recovery for MGS@10.9.4.167@tcp ns: MGC10.9.4.167@tcp lock: ffff9920c80ca900/0x3c8702c41b0f74bc lrc: 4/1,0 mode: --/CR res: [0x65727473756c:0x2:0x0].0x0 rrc: 2 type: PLN flags: 0x1000000000000 nid: local remote: 0x3c8702c41b0f74c3 expref: -99 pid: 14293 timeout: 0 lvb_type: 0
[ 4223.498558] Lustre: DEBUG MARKER: == runtests test 1: All Runtests =================================== 10:10:28 (1623319828)
[ 4491.085906] LustreError: 14293:0:(ldlm_request.c:148:ldlm_expired_completion_wait()) ### lock timed out (enqueued at 1623319796, 300s ago), entering recovery for MGS@10.9.4.167@tcp ns: MGC10.9.4.167@tcp lock: ffff9920c1c8d440/0x3c8702c41b0f77a9 lrc: 4/1,0 mode: --/CR res: [0x65727473756c:0x2:0x0].0x0 rrc: 2 type: PLN flags: 0x1000000000000 nid: local remote: 0x3c8702c41b0f77b0 expref: -99 pid: 14293 timeout: 0 lvb_type: 0
[ 4491.092849] LustreError: 5423:0:(ldlm_resource.c:1137:ldlm_resource_complain()) MGC10.9.4.167@tcp: namespace resource [0x65727473756c:0x2:0x0].0x0 (ffff9920c6afce40) refcount nonzero (1) after lock cleanup; forcing cleanup.
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;and on the other MDS:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt; 4181.557888] LustreError: 24494:0:(ldlm_request.c:130:ldlm_expired_completion_wait()) ### lock timed out (enqueued at 1623319487, 300s ago); not entering recovery in server code, just going back to sleep ns: MGS lock: ffff9920dcdbb440/0x3c8702c41b0f7461 lrc: 3/0,1 mode: --/EX res: [0x65727473756c:0x2:0x0].0x0 rrc: 16 type: PLN flags: 0x40210400000020 nid: local remote: 0x0 expref: -99 pid: 24494 timeout: 0 lvb_type: 0
[ 4181.564239] LustreError: dumping log to /tmp/lustre-log.1623319787.24494
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
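&lt;p&gt;(For reference, converting those epoch timestamps with GNU &lt;tt&gt;date&lt;/tt&gt; is a quick, Lustre-agnostic way to see that the lock enqueue predates the &lt;tt&gt;runtests&lt;/tt&gt; start marker by about five and a half minutes:)&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;$ date -u -d @1623319496    # MGS lock enqueue time from ldlm_expired_completion_wait()
Thu Jun 10 10:04:56 UTC 2021
$ date -u -d @1623319828    # runtests test_1 start marker
Thu Jun 10 10:10:28 UTC 2021
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;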

&lt;p&gt;The &apos;&lt;tt&gt;&lt;span class=&quot;error&quot;&gt;&amp;#91;0x65727473756c:0x2:0x0&amp;#93;&lt;/span&gt;&lt;/tt&gt;&apos; resource is &quot;&lt;tt&gt;lustre&lt;/tt&gt;&quot; in ASCII, which I guess is the filesystem name enqueued for the configuration llog on the MGS?  &lt;/p&gt;
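
&lt;p&gt;(A minimal way to check that decode, assuming the resource name is just the fsname bytes packed into a 64-bit field and printed as a little-endian hex value, so the bytes come out reversed:)&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;$ echo 65727473756c | xxd -r -p | rev
lustre
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;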

&lt;p&gt;One possibility is that &lt;tt&gt;replay-dual&lt;/tt&gt; is exiting with the MGC unmounted, and this somehow causes problems when &lt;tt&gt;runtests&lt;/tt&gt; tries to unmount or remount the filesystem. The passing &lt;tt&gt;full-part-1&lt;/tt&gt; session has &lt;tt&gt;runtests&lt;/tt&gt; immediately &lt;b&gt;before&lt;/b&gt; &lt;tt&gt;replay-dual&lt;/tt&gt; and runs with a single MDT. I suspect an &quot;easy out&quot; would be to reverse the order of these tests, but the timeout may simply move to the next test down the line...&lt;/p&gt;</comment>
                            <comment id="304202" author="adilger" created="Fri, 11 Jun 2021 00:35:42 +0000"  >&lt;p&gt;This looks like a duplicate of &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-7372&quot; title=&quot;replay-dual test_26: test failed to respond and timed out&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-7372&quot;&gt;&lt;del&gt;LU-7372&lt;/del&gt;&lt;/a&gt;, which is listed as the reason why &lt;tt&gt;replay-dual test_26&lt;/tt&gt; is in the &lt;tt&gt;ALWAYS_EXCEPT&lt;/tt&gt; list due to constant test failures.&lt;/p&gt;

&lt;p&gt;If you follow the breadcrumbs of the MGS timeout messages from &lt;tt&gt;runtests&lt;/tt&gt;, &quot;&lt;tt&gt;enqueued at nnn, 300s ago&lt;/tt&gt;&quot;, it lands in &lt;tt&gt;replay-dual test_25&lt;/tt&gt; at:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[ 3574.966184] LustreError: 6741:0:(ldlm_resource.c:1137:ldlm_resource_complain()) MGC10.9.7.66@tcp: namespace resource [0x65727473756c:0x2:0x0].0x0 (ffff97001a4b5300) refcount nonzero (1) after lock cleanup; forcing cleanup.
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;which implies that it or &lt;tt&gt;test_24&lt;/tt&gt; is the cause of &lt;tt&gt;test_26&lt;/tt&gt; timing out, since &lt;tt&gt;test_23d&lt;/tt&gt; is itself unmounting &lt;tt&gt;mds1&lt;/tt&gt;. It now seems that any test doing an MDS unmount afterwards will also time out; since &lt;tt&gt;test_25&lt;/tt&gt; is the second-to-last subtest in &lt;tt&gt;replay-dual&lt;/tt&gt; and &lt;tt&gt;test_28&lt;/tt&gt; doesn&apos;t do any unmounting of its own, the hang then surfaces in &lt;tt&gt;runtests&lt;/tt&gt;.&lt;/p&gt;</comment>
                            <comment id="367340" author="adilger" created="Sun, 26 Mar 2023 23:24:22 +0000"  >&lt;p&gt;Haven&apos;t seen this recently.&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10010">
                    <name>Duplicate</name>
                                            <outwardlinks description="duplicates">
                                        <issuelink>
            <issuekey id="32965">LU-7372</issuekey>
        </issuelink>
                            </outwardlinks>
                                                        </issuelinktype>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                                                <inwardlinks description="is related to">
                                        <issuelink>
            <issuekey id="62787">LU-14406</issuekey>
        </issuelink>
                            </inwardlinks>
                                    </issuelinktype>
                    </issuelinks>
                <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|i01wnb:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>