<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 03:22:33 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary, append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
<language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-15934] client refused mount with -EAGAIN because of missing MDT-MDT connection</title>
                <link>https://jira.whamcloud.com/browse/LU-15934</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;New clients were unable to establish a connection to the MDT, even after recovery had been aborted, because an llog context had not been set up properly.  The clients were permanently getting &lt;tt&gt;-11 = -EAGAIN&lt;/tt&gt; errors from the server:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;(service.c:2298:ptlrpc_server_handle_request()) Handling RPC req@ffff8cdd37ad0d80 pname:cluuid+ref:pid:xid:nid:opc:job mdt09_0
01:0+-99:4093:x1719493089340032:12345-10.16.172.159@tcp:38:
(service.c:2303:ptlrpc_server_handle_request()) got req 1719493089340032
(tgt_handler.c:736:tgt_request_handle()) Process entered
(ldlm_lib.c:1100:target_handle_connect()) Process entered
(ldlm_lib.c:1360:target_handle_connect()) lfs02-MDT0003: connection from 16778a5c-5128-4231-8b45-426adc7e94b6@10.16.172.159@tcp t55835524055 exp           (null) cur 51537 last 0
(obd_class.h:831:obd_connect()) Process entered
(mdt_handler.c:6671:mdt_obd_connect()) Process entered
(lod_dev.c:2136:lod_obd_get_info()) lfs02-MDT0003-mdtlov: lfs02-MDT0001-osp-MDT0003 is not ready.
(lod_dev.c:2145:lod_obd_get_info()) Process leaving (rc=18446744073709551605 : -11 : fffffffffffffff5)
(ldlm_lib.c:1446:target_handle_connect()) Process leaving via out (rc=18446744073709551605 : -11 : 0xfffffffffffffff5)
(service.c:2347:ptlrpc_server_handle_request()) Handled RPC req@ffff8cdd37ad0d80 pname:cluuid+ref:pid:xid:nid:opc:job mdt09_001:0+-99:4093:x1719493089340032:12345-10.16.172.159@tcp:38: Request processed in 86us (124us total) trans 0 rc -11/-11
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;This corresponds to the following block of code in &lt;tt&gt;lod_obd_get_info()&lt;/tt&gt;, where the second &quot;&lt;tt&gt;is not ready&lt;/tt&gt;&quot; message is the one being printed, triggered by the missing &lt;tt&gt;ctxt-&amp;gt;loc_handle&lt;/tt&gt;:&lt;/p&gt;
&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;
                lod_foreach_mdt(d, tgt) {
                        struct llog_ctxt *ctxt;
        
                        &lt;span class=&quot;code-keyword&quot;&gt;if&lt;/span&gt; (!tgt-&amp;gt;ltd_active)
                                &lt;span class=&quot;code-keyword&quot;&gt;continue&lt;/span&gt;;
               
                        ctxt = llog_get_context(tgt-&amp;gt;ltd_tgt-&amp;gt;dd_lu_dev.ld_obd,
                                                LLOG_UPDATELOG_ORIG_CTXT);
                        &lt;span class=&quot;code-keyword&quot;&gt;if&lt;/span&gt; (!ctxt) {
                                CDEBUG(D_INFO, &lt;span class=&quot;code-quote&quot;&gt;&quot;%s: %s is not ready.\n&quot;&lt;/span&gt;,
                                       obd-&amp;gt;obd_name,
                                      tgt-&amp;gt;ltd_tgt-&amp;gt;dd_lu_dev.ld_obd-&amp;gt;obd_name);
                                rc = -EAGAIN;
                                &lt;span class=&quot;code-keyword&quot;&gt;break&lt;/span&gt;;
                        }
                        &lt;span class=&quot;code-keyword&quot;&gt;if&lt;/span&gt; (!ctxt-&amp;gt;loc_handle) {
                                CDEBUG(D_INFO, &lt;span class=&quot;code-quote&quot;&gt;&quot;%s: %s is not ready.\n&quot;&lt;/span&gt;,
                                       obd-&amp;gt;obd_name,
                                      tgt-&amp;gt;ltd_tgt-&amp;gt;dd_lu_dev.ld_obd-&amp;gt;obd_name);
                                rc = -EAGAIN;
                                llog_ctxt_put(ctxt);
                                &lt;span class=&quot;code-keyword&quot;&gt;break&lt;/span&gt;;
                        }
                        llog_ctxt_put(ctxt);
                }
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;It would be useful to distinguish those two messages more clearly, e.g. &quot;&lt;tt&gt;ctxt is not ready&lt;/tt&gt;&quot; and &quot;&lt;tt&gt;handle is not ready&lt;/tt&gt;&quot;, since the messages differ only by line number, and line numbers shift between versions, making the two cases difficult to tell apart in the logs.&lt;/p&gt;


&lt;p&gt;The root problem is that the MDT0003-MDT0001 connection wasn&apos;t completely set up due to &lt;tt&gt;abort_recovery_mdt&lt;/tt&gt; (caused by a different recovery error, &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-15761&quot; title=&quot;cannot finish MDS recovery&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-15761&quot;&gt;&lt;del&gt;LU-15761&lt;/del&gt;&lt;/a&gt;), and the MDS never retries establishing this connection, leaving the filesystem permanently unusable.  Running &quot;&lt;tt&gt;lctl --device NN recover&lt;/tt&gt;&quot; reconnected the import, but did not actually re-establish the llog context. Mounting with &quot;&lt;tt&gt;-o abort_recov_mdt&lt;/tt&gt;&quot; resulted in the problem moving to MDT0000 (only the first bad llog context is printed before breaking out of the loop).&lt;/p&gt;

&lt;p&gt;I think there are two issues to be addressed here:&lt;br/&gt;
1) The MDS should try to reconnect and rebuild the llog connection in this case, at least on &lt;tt&gt;recover&lt;/tt&gt; if not automatically.  There didn&apos;t appear to be any permanent reason why these llog connections were not working, just fallout from &lt;tt&gt;abort_recovery_mdt&lt;/tt&gt;.  &lt;br/&gt;
2) Is it strictly necessary to block client mounting if not all MDT-MDT connections are established?  Or is it no different from any other case where the MDT loses a connection after it is mounted?  The MDT recovery had already been aborted, so allowing new clients to connect shouldn&apos;t cause any issues.  Maybe this issue would be moot if (1) were fixed, but it seems otherwise counterproductive.  The filesystem was apparently fully functional for clients that had mounted before the MDT recovery (both MDT0003/MDT0001 and MDT0001/MDT0003 remote directory creation worked fine).&lt;/p&gt;</description>
                <environment></environment>
        <key id="70712">LU-15934</key>
            <summary>client refused mount with -EAGAIN because of missing MDT-MDT connection</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="3" iconUrl="https://jira.whamcloud.com/images/icons/priorities/major.svg">Major</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="ys">Yang Sheng</assignee>
                                    <reporter username="adilger">Andreas Dilger</reporter>
                        <labels>
                    </labels>
                <created>Sun, 12 Jun 2022 01:12:44 +0000</created>
                <updated>Wed, 20 Dec 2023 02:12:25 +0000</updated>
                            <resolved>Wed, 28 Jun 2023 22:40:00 +0000</resolved>
                                                    <fixVersion>Lustre 2.16.0</fixVersion>
                                        <due></due>
                            <votes>0</votes>
                                    <watches>10</watches>
                                                                            <comments>
                            <comment id="356344" author="adilger" created="Wed, 14 Dec 2022 01:24:10 +0000"  >&lt;p&gt;Hit the same issue on another system.&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[OI Scrub running in a loop because FID is missing]
:
1670963811.447855:0:28018:0:(client.c:1498:after_reply()) @@@ resending request on EINPROGRESS  req@ffff8ec49336cc80 x1752076841808640/t0(0) o1000-&amp;gt;fs01-MDT0001-osp-MDT0000@172.16.1.10@o2ib:24/4 lens 304/4320 e 0 to 0 dl 1670963849 ref 2 fl Rpc:RQU/2/0 rc 0/-115 job:&apos;&apos;
:
[OI Scrub is killed]
:
1670963903.447724:0:28018:0:(osp_object.c:596:osp_attr_get()) fs01-MDT0001-osp-MDT0000:osp_attr_get update error [0x900000404:0x1:0x0]: rc = -78
1670963903.447734:0:28018:0:(lod_dev.c:425:lod_sub_recovery_thread()) fs01-MDT0001-osp-MDT0000 get update log failed: rc = -78
1670966282.324977:0:26880:0:(lod_dev.c:2136:lod_obd_get_info()) fs01-MDT0000-mdtlov: fs01-MDT0001-osp-MDT0000 is not ready.
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Later when a client tries to mount the filesystem it fails due to the bad llog state causing the MDT to refuse all new connections:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;1670966308.517999:0:29630:0:(service.c:2298:ptlrpc_server_handle_request()) Handling RPC req@ffff8ec6734bf500 pname:cluuid+ref:pid:xid:nid:opc:job mdt02_003:0+-99:13925:x1752076850908352:12345-0@lo:38:
1670966308.518019:0:29630:0:(ldlm_lib.c:1360:target_handle_connect()) fs01-MDT0000: connection from 5c5c267c-0fa0-4acb-b884-d5ce8cae08c2@0@lo t0 exp           (null) cur 55901 last 0
1670966308.518036:0:29630:0:(lod_dev.c:2136:lod_obd_get_info()) fs01-MDT0000-mdtlov: fs01-MDT0001-osp-MDT0000 is not ready.
1670966308.518038:0:29630:0:(lod_dev.c:2145:lod_obd_get_info()) Process leaving (rc=18446744073709551605 : -11 : fffffffffffffff5)
1670966308.518040:0:29630:0:(mdd_device.c:1615:mdd_obd_get_info()) Process leaving (rc=18446744073709551605 : -11 : fffffffffffffff5)
1670966308.518042:0:29630:0:(mdt_handler.c:6693:mdt_obd_connect()) Process leaving (rc=18446744073709551605 : -11 : fffffffffffffff5)
1670966308.518044:0:29630:0:(ldlm_lib.c:1446:target_handle_connect()) Process leaving via out (rc=18446744073709551605 : -11 : 0xfffffffffffffff5)
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
                            <comment id="357635" author="gerrit" created="Thu, 29 Dec 2022 17:52:10 +0000"  >&lt;p&gt;&quot;Yang Sheng &amp;lt;ys@whamcloud.com&amp;gt;&quot; uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/c/fs/lustre-release/+/49528&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/c/fs/lustre-release/+/49528&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-15934&quot; title=&quot;client refused mount with -EAGAIN because of missing MDT-MDT connection&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-15934&quot;&gt;&lt;del&gt;LU-15934&lt;/del&gt;&lt;/a&gt; lod: print more detail info in fail path&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 46e9c8f1095c5c352829908453a7bd5fc223dd7a&lt;/p&gt;</comment>
                            <comment id="358152" author="adilger" created="Fri, 6 Jan 2023 17:14:07 +0000"  >&lt;p&gt;&quot;Yang Sheng &amp;lt;ys@whamcloud.com&amp;gt;&quot; uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/49569&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/49569&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-15934&quot; title=&quot;client refused mount with -EAGAIN because of missing MDT-MDT connection&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-15934&quot;&gt;&lt;del&gt;LU-15934&lt;/del&gt;&lt;/a&gt; lod: renew the update llog&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 480a3babda9ef1eba31097031ae7c429cbc54bdc&lt;/p&gt;</comment>
                            <comment id="358228" author="JIRAUSER17900" created="Sat, 7 Jan 2023 13:08:33 +0000"  >&lt;p&gt;2023-01-07: The fix patch(#49569) is being worked on.&lt;/p&gt;</comment>
                            <comment id="368645" author="JIRAUSER17900" created="Thu, 6 Apr 2023 11:51:50 +0000"  >&lt;p&gt;2023-04-06: The fix patch(#49569) is being reviewed, may needs to be updated.&lt;/p&gt;</comment>
                            <comment id="370922" author="JIRAUSER17900" created="Fri, 28 Apr 2023 12:07:26 +0000"  >&lt;p&gt;2023-04-28: The fix patch(#49569) is being improved per review feedback.&lt;/p&gt;</comment>
                            <comment id="371462" author="JIRAUSER17900" created="Sat, 6 May 2023 14:02:13 +0000"  >&lt;p&gt;2023-05-13: The improving patch(#49569) is ready to land(on master-next branch).&lt;/p&gt;</comment>
                            <comment id="372807" author="gerrit" created="Fri, 19 May 2023 07:00:30 +0000"  >&lt;p&gt;&quot;Oleg Drokin &amp;lt;green@whamcloud.com&amp;gt;&quot; merged in patch &lt;a href=&quot;https://review.whamcloud.com/c/fs/lustre-release/+/49569/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/c/fs/lustre-release/+/49569/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-15934&quot; title=&quot;client refused mount with -EAGAIN because of missing MDT-MDT connection&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-15934&quot;&gt;&lt;del&gt;LU-15934&lt;/del&gt;&lt;/a&gt; lod: renew the update llog&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: 814691bcffab0a19240121740fb85a1912886a3c&lt;/p&gt;</comment>
                            <comment id="373021" author="JIRAUSER17900" created="Sat, 20 May 2023 02:26:42 +0000"  >&lt;p&gt;2023-05-20: The improving patch(#49569) landed to master, another patch(#49528) is being discussed.&lt;/p&gt;</comment>
                            <comment id="374447" author="gerrit" created="Sat, 3 Jun 2023 18:50:35 +0000"  >&lt;p&gt;&quot;Yang Sheng &amp;lt;ys@whamcloud.com&amp;gt;&quot; uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/c/fs/lustre-release/+/51208&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/c/fs/lustre-release/+/51208&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-15934&quot; title=&quot;client refused mount with -EAGAIN because of missing MDT-MDT connection&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-15934&quot;&gt;&lt;del&gt;LU-15934&lt;/del&gt;&lt;/a&gt; tests: add a test case for update llog&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 48ec3ead32ff44f33997051752601009903684e9&lt;/p&gt;</comment>
                            <comment id="375073" author="JIRAUSER17900" created="Sun, 11 Jun 2023 11:27:17 +0000"  >&lt;p&gt;2023-06-17: The second patch (#49528) is ready to land(on master-next branch), the third patch adding test case is being reviewed&lt;/p&gt;</comment>
                            <comment id="375896" author="gerrit" created="Tue, 20 Jun 2023 03:36:04 +0000"  >&lt;p&gt;&quot;Oleg Drokin &amp;lt;green@whamcloud.com&amp;gt;&quot; merged in patch &lt;a href=&quot;https://review.whamcloud.com/c/fs/lustre-release/+/49528/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/c/fs/lustre-release/+/49528/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-15934&quot; title=&quot;client refused mount with -EAGAIN because of missing MDT-MDT connection&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-15934&quot;&gt;&lt;del&gt;LU-15934&lt;/del&gt;&lt;/a&gt; lod: clear up the message&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: 9882d4e933fd8cdbc4a9bc8bf6b29655009f7e03&lt;/p&gt;</comment>
                            <comment id="376436" author="JIRAUSER17900" created="Sun, 25 Jun 2023 02:12:30 +0000"  >&lt;p&gt;2023-06-25: The second patch (#49528) landed to master, the third patch(#51208) adding test case is ready to land(on master-next branch)&lt;/p&gt;</comment>
                            <comment id="376801" author="gerrit" created="Wed, 28 Jun 2023 21:46:46 +0000"  >&lt;p&gt;&quot;Oleg Drokin &amp;lt;green@whamcloud.com&amp;gt;&quot; merged in patch &lt;a href=&quot;https://review.whamcloud.com/c/fs/lustre-release/+/51208/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/c/fs/lustre-release/+/51208/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-15934&quot; title=&quot;client refused mount with -EAGAIN because of missing MDT-MDT connection&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-15934&quot;&gt;&lt;del&gt;LU-15934&lt;/del&gt;&lt;/a&gt; tests: add a test case for update llog&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: 54301fe4f598eef5aebdbdb0c7f3dddea9541c4e&lt;/p&gt;</comment>
                            <comment id="376821" author="pjones" created="Wed, 28 Jun 2023 22:40:00 +0000"  >&lt;p&gt;Landed for 2.16&lt;/p&gt;</comment>
                            <comment id="389501" author="ys" created="Mon, 16 Oct 2023 18:49:10 +0000"  >&lt;p&gt;Hi, Andreas,&lt;/p&gt;

&lt;p&gt;Looks like mds0 was still waiting for recovery, but mds1 was blocked on the lod part rather than on communication. Do we need to prolong the waiting time?&lt;/p&gt;

&lt;p&gt;Thanks,&lt;br/&gt;
YangSheng&lt;/p&gt;</comment>
                            <comment id="389849" author="adilger" created="Thu, 19 Oct 2023 00:52:47 +0000"  >&lt;p&gt;YS, can you see why mds0 was not finished recovery?  If it was making progress, then waiting longer would be OK (VM testing can be very unpredictable).  However, if it is stuck for some other reason then waiting will not help and the blocker to finish recovery needs to be fixed. &lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                            <outwardlinks description="is related to ">
                                                        </outwardlinks>
                                                                <inwardlinks description="is related to">
                                        <issuelink>
            <issuekey id="72350">LU-16159</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="70717">LU-15938</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="79583">LU-17365</issuekey>
        </issuelink>
                            </inwardlinks>
                                    </issuelinktype>
                    </issuelinks>
                <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|i02rxb:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>