<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 02:29:27 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-9806] tgt_client_free()) ASSERTION( lut &amp;&amp; lut-&gt;lut_client_bitmap ) failed</title>
                <link>https://jira.whamcloud.com/browse/LU-9806</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;This seems to be a recurrence of &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-7430&quot; title=&quot;General protection fault: 0000 upon mounting MDT&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-7430&quot;&gt;&lt;del&gt;LU-7430&lt;/del&gt;&lt;/a&gt; and a few other similar bugs, but happening on current master.&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[291606.098200] Lustre: DEBUG MARKER: == replay-ost-single test 7: Fail OST before obd_destroy ============================================= 23:53:41 (1501300421)
[291616.783248] Lustre: DEBUG MARKER: before: 623720 after_dd: 618600 took 1 seconds
[291617.134646] LustreError: 28072:0:(osd_handler.c:2184:osd_ro()) *** setting lustre-OST0000 read-only ***
[291617.152901] Turning device loop1 (0x700001) read-only
[291617.224927] Lustre: DEBUG MARKER: ost1 REPLAY BARRIER on lustre-OST0000
[291617.277436] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-OST0000
[291617.590847] Lustre: Failing over lustre-OST0000
[291617.601802] LustreError: 22375:0:(tgt_lastrcvd.c:440:tgt_client_free()) ASSERTION( lut &amp;amp;&amp;amp; lut-&amp;gt;lut_client_bitmap ) failed: 
[291617.602975] LustreError: 22375:0:(tgt_lastrcvd.c:440:tgt_client_free()) LBUG
[291617.603578] Pid: 22375, comm: obd_zombid
[291617.604096] 
Call Trace:
[291617.606669]  [&amp;lt;ffffffffa02857ce&amp;gt;] libcfs_call_trace+0x4e/0x60 [libcfs]
[291617.607349]  [&amp;lt;ffffffffa028585c&amp;gt;] lbug_with_loc+0x4c/0xb0 [libcfs]
[291617.608122]  [&amp;lt;ffffffffa05ddde2&amp;gt;] tgt_client_free+0x2a2/0x360 [ptlrpc]
[291617.608814]  [&amp;lt;ffffffffa0db5b12&amp;gt;] ofd_destroy_export+0x62/0x180 [ofd]
[291617.609551]  [&amp;lt;ffffffffa0389239&amp;gt;] obd_zombie_impexp_cull+0x549/0x920 [obdclass]
[291617.622563]  [&amp;lt;ffffffffa038967d&amp;gt;] obd_zombie_impexp_thread+0x6d/0x1c0 [obdclass]
[291617.628967]  [&amp;lt;ffffffff810b7cc0&amp;gt;] ? default_wake_function+0x0/0x20
[291617.629676]  [&amp;lt;ffffffffa0389610&amp;gt;] ? obd_zombie_impexp_thread+0x0/0x1c0 [obdclass]
[291617.631230]  [&amp;lt;ffffffff810a2eba&amp;gt;] kthread+0xea/0xf0
[291617.631906]  [&amp;lt;ffffffff810a2dd0&amp;gt;] ? kthread+0x0/0xf0
[291617.632572]  [&amp;lt;ffffffff8170fb98&amp;gt;] ret_from_fork+0x58/0x90
[291617.633236]  [&amp;lt;ffffffff810a2dd0&amp;gt;] ? kthread+0x0/0xf0
[291617.639601] 
[291617.640036] Kernel panic - not syncing: LBUG
[291617.640462] CPU: 4 PID: 22375 Comm: obd_zombid Tainted: P           OE  ------------   3.10.0-debug #2
[291617.641354] Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011
[291617.641830]  ffffffffa02a4ed2 0000000025d32961 ffff8800a16b3cc0 ffffffff816fd3e4
[291617.642712]  ffff8800a16b3d40 ffffffff816f8c34 ffffffff00000008 ffff8800a16b3d50
[291617.644582]  ffff8800a16b3cf0 0000000025d32961 0000000025d32961 ffff88033e48d948
[291617.645811] Call Trace:
[291617.646408]  [&amp;lt;ffffffff816fd3e4&amp;gt;] dump_stack+0x19/0x1b
[291617.647142]  [&amp;lt;ffffffff816f8c34&amp;gt;] panic+0xd8/0x1e7
[291617.647765]  [&amp;lt;ffffffffa0285874&amp;gt;] lbug_with_loc+0x64/0xb0 [libcfs]
[291617.648540]  [&amp;lt;ffffffffa05ddde2&amp;gt;] tgt_client_free+0x2a2/0x360 [ptlrpc]
[291617.649224]  [&amp;lt;ffffffffa0db5b12&amp;gt;] ofd_destroy_export+0x62/0x180 [ofd]
[291617.649911]  [&amp;lt;ffffffffa0389239&amp;gt;] obd_zombie_impexp_cull+0x549/0x920 [obdclass]
[291617.651165]  [&amp;lt;ffffffffa038967d&amp;gt;] obd_zombie_impexp_thread+0x6d/0x1c0 [obdclass]
[291617.652377]  [&amp;lt;ffffffff810b7cc0&amp;gt;] ? wake_up_state+0x20/0x20
[291617.653065]  [&amp;lt;ffffffffa0389610&amp;gt;] ? obd_zombie_impexp_cull+0x920/0x920 [obdclass]
[291617.654285]  [&amp;lt;ffffffff810a2eba&amp;gt;] kthread+0xea/0xf0
[291617.654920]  [&amp;lt;ffffffff810a2dd0&amp;gt;] ? kthread_create_on_node+0x140/0x140
[291617.655610]  [&amp;lt;ffffffff8170fb98&amp;gt;] ret_from_fork+0x58/0x90
[291617.656262]  [&amp;lt;ffffffff810a2dd0&amp;gt;] ? kthread_create_on_node+0x140/0x140
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Crashdump on onyx-68 in /exports/crashdumps/192.168.123.181-2017-07-28-23:53:59&lt;br/&gt;
Modules are also there.&lt;/p&gt;</description>
                <environment></environment>
        <key id="47568">LU-9806</key>
            <summary>tgt_client_free()) ASSERTION( lut &amp;&amp; lut-&gt;lut_client_bitmap ) failed</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="3" iconUrl="https://jira.whamcloud.com/images/icons/priorities/major.svg">Major</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="bzzz">Alex Zhuravlev</assignee>
                                    <reporter username="green">Oleg Drokin</reporter>
                        <labels>
                    </labels>
                <created>Sat, 29 Jul 2017 18:19:41 +0000</created>
                <updated>Wed, 19 Jul 2023 17:20:40 +0000</updated>
                            <resolved>Wed, 19 Jul 2023 17:20:40 +0000</resolved>
                                    <version>Lustre 2.12.0</version>
                    <version>Lustre 2.13.0</version>
                                    <fixVersion>Lustre 2.16.0</fixVersion>
                                        <due></due>
                            <votes>0</votes>
                                    <watches>5</watches>
                                                                            <comments>
                            <comment id="203895" author="green" created="Sun, 30 Jul 2017 04:14:34 +0000"  >&lt;p&gt;Just had another one&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[11716.272157] Lustre: DEBUG MARKER: == recovery-small test 29b: error adding new clients doesn&apos;t cause LBUG (bug 22273) ================== 23:21:29 (1501384889)
[11716.438161] Lustre: Failing over lustre-OST0000
[11716.527043] LustreError: 9005:0:(tgt_lastrcvd.c:440:tgt_client_free()) ASSERTION( lut &amp;amp;&amp;amp; lut-&amp;gt;lut_client_bitmap ) failed: 
[11716.528524] LustreError: 9005:0:(tgt_lastrcvd.c:440:tgt_client_free()) LBUG
[11716.529497] Pid: 9005, comm: obd_zombid
[11716.530209] 
Call Trace:
[11716.532127]  [&amp;lt;ffffffffa02c57ce&amp;gt;] libcfs_call_trace+0x4e/0x60 [libcfs]
[11716.534315]  [&amp;lt;ffffffffa02c585c&amp;gt;] lbug_with_loc+0x4c/0xb0 [libcfs]
[11716.535401]  [&amp;lt;ffffffffa061dde2&amp;gt;] tgt_client_free+0x2a2/0x360 [ptlrpc]
[11716.536214]  [&amp;lt;ffffffffa1412b12&amp;gt;] ofd_destroy_export+0x62/0x180 [ofd]
[11716.537110]  [&amp;lt;ffffffffa03c9239&amp;gt;] obd_zombie_impexp_cull+0x549/0x920 [obdclass]
[11716.551808]  [&amp;lt;ffffffffa03c967d&amp;gt;] obd_zombie_impexp_thread+0x6d/0x1c0 [obdclass]
[11716.553655]  [&amp;lt;ffffffff810b7cc0&amp;gt;] ? default_wake_function+0x0/0x20
[11716.554770]  [&amp;lt;ffffffffa03c9610&amp;gt;] ? obd_zombie_impexp_thread+0x0/0x1c0 [obdclass]
[11716.556332]  [&amp;lt;ffffffff810a2eba&amp;gt;] kthread+0xea/0xf0
[11716.557232]  [&amp;lt;ffffffff810a2dd0&amp;gt;] ? kthread+0x0/0xf0
[11716.558361]  [&amp;lt;ffffffff8170fb98&amp;gt;] ret_from_fork+0x58/0x90
[11716.564715]  [&amp;lt;ffffffff810a2dd0&amp;gt;] ? kthread+0x0/0xf0
[11716.567093] 
[11716.568045] Kernel panic - not syncing: LBUG
[11716.568703] CPU: 4 PID: 9005 Comm: obd_zombid Tainted: P           OE  ------------   3.10.0-debug #2
[11716.570244] Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011
[11716.570937]  ffffffffa02e4ed2 00000000809dbba9 ffff8800b7697cc0 ffffffff816fd3e4
[11716.572370]  ffff8800b7697d40 ffffffff816f8c34 ffffffff00000008 ffff8800b7697d50
[11716.573539]  ffff8800b7697cf0 00000000809dbba9 00000000809dbba9 ffff88033e48d948
[11716.574459] Call Trace:
[11716.574892]  [&amp;lt;ffffffff816fd3e4&amp;gt;] dump_stack+0x19/0x1b
[11716.575383]  [&amp;lt;ffffffff816f8c34&amp;gt;] panic+0xd8/0x1e7
[11716.575862]  [&amp;lt;ffffffffa02c5874&amp;gt;] lbug_with_loc+0x64/0xb0 [libcfs]
[11716.576514]  [&amp;lt;ffffffffa061dde2&amp;gt;] tgt_client_free+0x2a2/0x360 [ptlrpc]
[11716.577065]  [&amp;lt;ffffffffa1412b12&amp;gt;] ofd_destroy_export+0x62/0x180 [ofd]
[11716.577577]  [&amp;lt;ffffffffa03c9239&amp;gt;] obd_zombie_impexp_cull+0x549/0x920 [obdclass]
[11716.578500]  [&amp;lt;ffffffffa03c967d&amp;gt;] obd_zombie_impexp_thread+0x6d/0x1c0 [obdclass]
[11716.579429]  [&amp;lt;ffffffff810b7cc0&amp;gt;] ? wake_up_state+0x20/0x20
[11716.579917]  [&amp;lt;ffffffffa03c9610&amp;gt;] ? obd_zombie_impexp_cull+0x920/0x920 [obdclass]
[11716.580827]  [&amp;lt;ffffffff810a2eba&amp;gt;] kthread+0xea/0xf0
[11716.581306]  [&amp;lt;ffffffff810a2dd0&amp;gt;] ? kthread_create_on_node+0x140/0x140
[11716.581801]  [&amp;lt;ffffffff8170fb98&amp;gt;] ret_from_fork+0x58/0x90
[11716.582316]  [&amp;lt;ffffffff810a2dd0&amp;gt;] ? kthread_create_on_node+0x140/0x140
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;crashdump is in 192.168.123.146-2017-07-29-23:21:* on onyx-68&lt;/p&gt;</comment>
                            <comment id="242646" author="green" created="Mon, 25 Feb 2019 01:56:33 +0000"  >&lt;p&gt;this still seems to be regularly triggering in my testing&lt;/p&gt;</comment>
                            <comment id="258435" author="bzzz" created="Sun, 17 Nov 2019 07:38:05 +0000"  >&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;
Lustre: DEBUG MARKER: == recovery-small test 60: Add Changelog entries during MDS failover ================================= 04:12:39 (1573945959)
Lustre: lustre-MDD0000: changelog on
Lustre: lustre-MDT0001: haven&lt;span class=&quot;code-quote&quot;&gt;&apos;t heard from client 128ea591-f299-4 (at 192.168.122.22@tcp) in 48 seconds. I think it&apos;&lt;/span&gt;s dead, and I am evicting it. exp 000000007725ad20, cur 1573945996 expire 1573945966 last 1573945948
Lustre: lustre-OST0000: haven&lt;span class=&quot;code-quote&quot;&gt;&apos;t heard from client 128ea591-f299-4 (at 192.168.122.22@tcp) in 48 seconds. I think it&apos;&lt;/span&gt;s dead, and I am evicting it. exp 00000000a202a5e3, cur 1573945996 expire 1573945966 last 1573945948
LustreError: 19:0:(tgt_lastrcvd.c:451:tgt_client_free()) ASSERTION( lut &amp;amp;&amp;amp; lut-&amp;gt;lut_client_bitmap ) failed: 
LustreError: 19:0:(tgt_lastrcvd.c:451:tgt_client_free()) LBUG
...
Call Trace:
 ? __schedule+0x2ad/0xb00
 schedule+0x34/0x80
 lbug_with_loc+0x79/0x80 [libcfs]
 ? tgt_client_free+0x2b0/0x330 [ptlrpc]
 ? mdt_destroy_export+0x87/0x2a0 [mdt]
 ? class_export_destroy+0xe9/0x460 [obdclass]
 ? process_one_work+0x249/0x5d0
 ? worker_thread+0x48/0x3d0
 ? kthread+0x100/0x140

umount          D    0 24858  24857 0x00000000
Call Trace:
 ? __schedule+0x2ad/0xb00
 schedule+0x34/0x80
 schedule_timeout+0x323/0x500
 ? wait_for_common+0x3b/0x160
 wait_for_common+0xc9/0x160
 ? wake_up_q+0x60/0x60
 flush_workqueue+0x143/0x4a0
 ? obd_exports_barrier+0x43/0x1a0 [obdclass]
 ? obd_exports_barrier+0x76/0x1a0 [obdclass]
 mgs_device_fini+0xdb/0x5c0 [mgs]
 class_cleanup+0x689/0xb50 [obdclass]
 class_process_config+0x153e/0x30f0 [obdclass]
 ? cache_alloc_debugcheck_after+0x138/0x150
 class_manual_cleanup+0x197/0x670 [obdclass]
 server_put_super+0x1525/0x1d50 [obdclass]
 ? evict_inodes+0x138/0x180
 generic_shutdown_super+0x5f/0xf0
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Looks like the MDT umount didn&apos;t wait for all exports to be gone?&lt;/p&gt;</comment>
                            <comment id="286856" author="bzzz" created="Mon, 7 Dec 2020 08:08:30 +0000"  >&lt;p&gt;there is no serialization between export destroy and obd destroy:&lt;/p&gt;
&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;
00000020:00000080:0.0:1607303759.080403:0:10539:0:(genops.c:984:class_export_put()) &lt;span class=&quot;code-keyword&quot;&gt;final&lt;/span&gt; put 0000000048c8f7e8/7bdf7e52-e46c-4201-82b5-5380be291135
00000020:00000001:1.0:1607303759.082137:0:11815:0:(tgt_main.c:570:tgt_fini()) &lt;span class=&quot;code-object&quot;&gt;Process&lt;/span&gt; entered
00000020:00000001:1.0:1607303759.082148:0:11815:0:(tgt_main.c:610:tgt_fini()) &lt;span class=&quot;code-object&quot;&gt;Process&lt;/span&gt; leaving
00000020:00000080:1.0:1607303759.082811:0:8175:0:(genops.c:943:class_export_destroy()) destroying export 0000000048c8f7e8/7bdf7e52-e46c-4201-82b5-5380be291135 &lt;span class=&quot;code-keyword&quot;&gt;for&lt;/span&gt; lustre-OST0000
00000001:00040000:1.0:1607303759.082843:0:8175:0:(tgt_lastrcvd.c:451:tgt_client_free()) ASSERTION( lut &amp;amp;&amp;amp; lut-&amp;gt;lut_client_bitmap ) failed: 
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;IMHO, the check for freed OBD is very naive:&lt;/p&gt;
&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;
	/* Target may have been freed (see LU-7430)
	 * Slot may be not yet assigned */
	&lt;span class=&quot;code-keyword&quot;&gt;if&lt;/span&gt; (exp-&amp;gt;exp_obd-&amp;gt;u.obt.obt_magic != OBT_MAGIC ||
	    ted-&amp;gt;ted_lr_idx &amp;lt; 0)
		&lt;span class=&quot;code-keyword&quot;&gt;return&lt;/span&gt;;
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
                            <comment id="364261" author="gerrit" created="Mon, 27 Feb 2023 18:41:09 +0000"  >&lt;p&gt;&quot;Alex Zhuravlev &amp;lt;bzzz@whamcloud.com&amp;gt;&quot; uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/c/fs/lustre-release/+/50147&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/c/fs/lustre-release/+/50147&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-9806&quot; title=&quot;tgt_client_free()) ASSERTION( lut &amp;amp;&amp;amp; lut-&amp;gt;lut_client_bitmap ) failed&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-9806&quot;&gt;&lt;del&gt;LU-9806&lt;/del&gt;&lt;/a&gt; obdclass: wait for all exports to go&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 8895829088251d37576a01d959689d4d9e9204a7&lt;/p&gt;</comment>
                            <comment id="379339" author="gerrit" created="Wed, 19 Jul 2023 16:41:44 +0000"  >&lt;p&gt;&quot;Oleg Drokin &amp;lt;green@whamcloud.com&amp;gt;&quot; merged in patch &lt;a href=&quot;https://review.whamcloud.com/c/fs/lustre-release/+/50147/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/c/fs/lustre-release/+/50147/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-9806&quot; title=&quot;tgt_client_free()) ASSERTION( lut &amp;amp;&amp;amp; lut-&amp;gt;lut_client_bitmap ) failed&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-9806&quot;&gt;&lt;del&gt;LU-9806&lt;/del&gt;&lt;/a&gt; obdclass: wait for all exports to go&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: 08f9ebe93b300c39d2af1fb8e82a22e9c84f401b&lt;/p&gt;</comment>
                            <comment id="379374" author="pjones" created="Wed, 19 Jul 2023 17:20:40 +0000"  >&lt;p&gt;Landed for 2.16&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10010">
                    <name>Duplicate</name>
                                                                <inwardlinks description="is duplicated by">
                                        <issuelink>
            <issuekey id="52939">LU-11232</issuekey>
        </issuelink>
                            </inwardlinks>
                                    </issuelinktype>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                                                <inwardlinks description="is related to">
                                                        </inwardlinks>
                                    </issuelinktype>
                    </issuelinks>
                <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzzhfr:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>