<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 03:12:35 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92">
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
<language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-14764] Single OST quotas out of sync (with hung I/O) and acquire quota failed:-115</title>
                <link>https://jira.whamcloud.com/browse/LU-14764</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;We&apos;re enforcing group quota limits on Oak, and they usually work great. After an MDS crash with 2.12.6 (we hit the unrelated bug &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-14705&quot; title=&quot;ASSERTION( llog_osd_exist(loghandle) ) failed: with concurent &amp;quot;lfs changelog_clear&amp;quot;&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-14705&quot;&gt;LU-14705&lt;/a&gt;) and a restart of the MDTs, one OST (oak-OST0135) did not come back in sync with the quota master, and it&apos;s still out of sync. A few groups are out of quota on this OST, and this is generating hung I/O, at least for new files that are striped on this OST. As a temporary workaround, I have set max_create_count=0 on all MDTs for this OST (and this worked).&lt;/p&gt;

&lt;p&gt;Example of a group out of quotas on OST0135 (index 309):&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;# lfs quota -v -g oak_cblish /oak
[...]
oak-OST0134_UUID
                243010468       - 246011728       -       -       -       -       -
oak-OST0135_UUID
                256548944*      - 256542812       -       -       -       -       -
oak-OST0136_UUID
                85301696       - 86780916       -       -       -       -       -
oak-OST0137_UUID
                156684012       - 160709804       -       -       -       -       -
oak-OST0138_UUID
                35652032       - 36721800       -       -       -       -       -
oak-OST0139_UUID
                8830024       - 12589328       -       -       -       -       -
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
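(Editor's note: a minimal sketch of how the over-quota OST can be picked out of this output mechanically. It assumes the two-line per-OST layout of `lfs quota -v` shown above, where an asterisk on the kbytes column marks usage at or over the slave's granted limit; the sample data is copied from the excerpt.)

```shell
# Flag per-OST entries whose kbytes field carries the '*' over-quota marker.
# Illustrative only: parses the two-line layout printed by `lfs quota -v`.
sample='oak-OST0134_UUID
                243010468       - 246011728       -       -       -       -       -
oak-OST0135_UUID
                256548944*      - 256542812       -       -       -       -       -'

over_quota=$(printf '%s\n' "$sample" | awk '
  /_UUID$/ { ost = $1; next }                        # remember the OST UUID header line
  ost != "" { if ($1 ~ /\*$/) print ost; ost = "" }  # next line: starred kbytes column
')
echo "$over_quota"
```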
&lt;p&gt;Example of hung I/O on clients:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[2021-06-14T15:31:23-07:00] [3776399.960372] tar             S ffff8b55ed7cb180     0 13047   4645 0x00000000^M
[2021-06-14T15:31:23-07:00] [3776399.968457] Call Trace:^M                      
[2021-06-14T15:31:23-07:00] [3776399.971387]  [&amp;lt;ffffffffb90b042b&amp;gt;] ? recalc_sigpending+0x1b/0x70^M
[2021-06-14T15:31:23-07:00] [3776399.978194]  [&amp;lt;ffffffffb9788eb9&amp;gt;] schedule+0x29/0x70^M
[2021-06-14T15:31:23-07:00] [3776399.983972]  [&amp;lt;ffffffffc0e935b5&amp;gt;] cl_sync_io_wait+0x2b5/0x3d0 [obdclass]^M
[2021-06-14T15:31:23-07:00] [3776399.991654]  [&amp;lt;ffffffffb90dadf0&amp;gt;] ? wake_up_state+0x20/0x20^M
[2021-06-14T15:31:23-07:00] [3776399.998377]  [&amp;lt;ffffffffc0e93d78&amp;gt;] cl_io_submit_sync+0x178/0x270 [obdclass]^M
[2021-06-14T15:31:23-07:00] [3776400.006298]  [&amp;lt;ffffffffc125d1d6&amp;gt;] vvp_io_commit_sync+0x106/0x340 [lustre]^M
[2021-06-14T15:31:23-07:00] [3776400.014084]  [&amp;lt;ffffffffc125e5b6&amp;gt;] vvp_io_write_commit+0x4c6/0x600 [lustre]^M
[2021-06-14T15:31:23-07:00] [3776400.021971]  [&amp;lt;ffffffffc125ed25&amp;gt;] vvp_io_write_start+0x635/0xa70 [lustre]^M
[2021-06-14T15:31:23-07:00] [3776400.029760]  [&amp;lt;ffffffffc0e90225&amp;gt;] ? cl_lock_enqueue+0x65/0x120 [obdclass]^M
[2021-06-14T15:31:23-07:00] [3776400.037550]  [&amp;lt;ffffffffc0e92788&amp;gt;] cl_io_start+0x68/0x130 [obdclass]^M
[2021-06-14T15:31:23-07:00] [3776400.044750]  [&amp;lt;ffffffffc0e949fc&amp;gt;] cl_io_loop+0xcc/0x1c0 [obdclass]^M
[2021-06-14T15:31:23-07:00] [3776400.051848]  [&amp;lt;ffffffffc121407b&amp;gt;] ll_file_io_generic+0x63b/0xc90 [lustre]^M
[2021-06-14T15:31:23-07:00] [3776400.059631]  [&amp;lt;ffffffffc1214b69&amp;gt;] ll_file_aio_write+0x289/0x660 [lustre]^M
[2021-06-14T15:31:23-07:00] [3776400.067317]  [&amp;lt;ffffffffc1215040&amp;gt;] ll_file_write+0x100/0x1c0 [lustre]^M
[2021-06-14T15:31:23-07:00] [3776400.074609]  [&amp;lt;ffffffffb924de00&amp;gt;] vfs_write+0xc0/0x1f0^M
[2021-06-14T15:31:23-07:00] [3776400.080541]  [&amp;lt;ffffffffb924ebdf&amp;gt;] SyS_write+0x7f/0xf0^M
[2021-06-14T15:31:23-07:00] [3776400.086380]  [&amp;lt;ffffffffb9795f92&amp;gt;] system_call_fastpath+0x25/0x2a^M
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;I tried to trigger a force_reint for this OST by doing:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;echo 1 &amp;gt; /proc/fs/lustre/osd-ldiskfs/oak-OST0135/quota_slave/force_reint 
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;but that didn&apos;t seem to help. The status of the quota slave on this OST is now:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[root@oak-io6-s2 ~]# cat /proc/fs/lustre/osd-ldiskfs/oak-OST0135/quota_slave/info
target name:    oak-OST0135
pool ID:        0
type:           dt
quota enabled:  g
conn to master: setup
space acct:     ugp
user uptodate:  glb[0],slv[0],reint[0]
group uptodate: glb[0],slv[0],reint[1]
project uptodate: glb[0],slv[0],reint[0]
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;but should likely be:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;group uptodate: glb[1],slv[1],reint[0]
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;like other healthy OSTs.&lt;/p&gt;
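(Editor's note: a hedged sketch of extracting the uptodate flags from a quota_slave info dump, so the stuck reint flag stands out. On a live OSS this would read /proc/fs/lustre/osd-ldiskfs/*/quota_slave/info; here a captured sample from the excerpt above stands in for the proc file.)

```shell
# Pull the glb/slv/reint flags out of the "group uptodate" line of a
# quota_slave info dump; reint=1 with glb=0,slv=0 means reintegration
# is pending and has not completed.
info='target name:    oak-OST0135
quota enabled:  g
group uptodate: glb[0],slv[0],reint[1]'

reint=$(printf '%s\n' "$info" |
        sed -n 's/.*group uptodate: glb\[\([01]\)\],slv\[\([01]\)\],reint\[\([01]\)\].*/glb=\1 slv=\2 reint=\3/p')
echo "$reint"
```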

&lt;p&gt;I&apos;ve taken some debug logs with +trace and +quota from the OSS in question that I&apos;m attaching as &lt;span class=&quot;nobr&quot;&gt;&lt;a href=&quot;https://jira.whamcloud.com/secure/attachment/39077/39077_oak-OST0135_quota_issue.log.gz&quot; title=&quot;oak-OST0135_quota_issue.log.gz attached to LU-14764&quot;&gt;oak-OST0135_quota_issue.log.gz&lt;sup&gt;&lt;img class=&quot;rendericon&quot; src=&quot;https://jira.whamcloud.com/images/icons/link_attachment_7.gif&quot; height=&quot;7&quot; width=&quot;7&quot; align=&quot;absmiddle&quot; alt=&quot;&quot; border=&quot;0&quot;/&gt;&lt;/sup&gt;&lt;/a&gt;&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;We can see some -115 EINPROGRESS errors in the logs:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[root@oak-io6-s2 ~]# grep qsd:oak-OST0135 oak-OST0135_quota_issue.log  | head
00040000:04000000:46.0:1623797919.334289:0:107324:0:(qsd_handler.c:732:qsd_op_begin0()) $$$ op_begin space:12 qsd:oak-OST0135 qtype:grp id:8663 enforced:1 granted: 119761748 pending:0 waiting:0 req:0 usage: 119769696 qunit:0 qtune:0 edquot:0 default:no
00040000:04000000:46.0:1623797919.334291:0:107324:0:(qsd_handler.c:636:qsd_acquire()) $$$ acquiring:12 count=0 qsd:oak-OST0135 qtype:grp id:8663 enforced:1 granted: 119761748 pending:0 waiting:12 req:0 usage: 119769696 qunit:0 qtune:0 edquot:0 default:no
00040000:04000000:46.0:1623797919.334312:0:107324:0:(qsd_entry.c:247:qsd_refresh_usage()) $$$ disk usage: 119769696 qsd:oak-OST0135 qtype:grp id:8663 enforced:1 granted: 119761748 pending:0 waiting:12 req:0 usage: 119769696 qunit:0 qtune:0 edquot:0 default:no
00040000:04000000:46.0:1623797919.334316:0:107324:0:(qsd_handler.c:121:qsd_ready()) $$$ not up-to-date, dropping request and kicking off reintegration qsd:oak-OST0135 qtype:grp id:8663 enforced:1 granted: 119761748 pending:0 waiting:12 req:0 usage: 119769696 qunit:0 qtune:0 edquot:0 default:no
00040000:04000000:46.0:1623797919.334320:0:107324:0:(qsd_handler.c:751:qsd_op_begin0()) $$$ acquire quota failed:-115 qsd:oak-OST0135 qtype:grp id:8663 enforced:1 granted: 119761748 pending:0 waiting:12 req:0 usage: 119769696 qunit:0 qtune:0 edquot:0 default:no
00040000:04000000:46.0:1623797919.334330:0:107324:0:(qsd_entry.c:247:qsd_refresh_usage()) $$$ disk usage: 119769696 qsd:oak-OST0135 qtype:grp id:8663 enforced:1 granted: 119761748 pending:0 waiting:0 req:0 usage: 119769696 qunit:0 qtune:0 edquot:0 default:no
00040000:04000000:46.0:1623797919.334332:0:107324:0:(qsd_handler.c:222:qsd_calc_adjust()) $$$ overrun, reporting usage qsd:oak-OST0135 qtype:grp id:8663 enforced:1 granted: 119761748 pending:0 waiting:0 req:0 usage: 119769696 qunit:0 qtune:0 edquot:0 default:no
00040000:04000000:46.0:1623797919.334334:0:107324:0:(qsd_handler.c:121:qsd_ready()) $$$ not up-to-date, dropping request and kicking off reintegration qsd:oak-OST0135 qtype:grp id:8663 enforced:1 granted: 119761748 pending:0 waiting:0 req:0 usage: 119769696 qunit:0 qtune:0 edquot:0 default:no
00040000:04000000:46.0:1623797919.334336:0:107324:0:(qsd_handler.c:929:qsd_adjust()) $$$ delaying adjustment since qsd isn&apos;t ready qsd:oak-OST0135 qtype:grp id:8663 enforced:1 granted: 119761748 pending:0 waiting:0 req:0 usage: 119769696 qunit:0 qtune:0 edquot:0 default:no
00040000:04000000:18.0:1623797930.024599:0:271450:0:(qsd_entry.c:247:qsd_refresh_usage()) $$$ disk usage: 232050092 qsd:oak-OST0135 qtype:grp id:4722 enforced:1 granted: 268435456 pending:0 waiting:0 req:0 usage: 232050092 qunit:0 qtune:0 edquot:0 default:no
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
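(Editor's note: a small sketch of tallying the -115 failures per quota id from such a debug log, useful for seeing which groups are affected. The log variable is a two-line sample in the format of the excerpt above, not a real capture.)

```shell
# Count "acquire quota failed:-115" events per quota id in a Lustre debug log.
log='(qsd_handler.c:751:qsd_op_begin0()) $$$ acquire quota failed:-115 qsd:oak-OST0135 qtype:grp id:8663 enforced:1
(qsd_handler.c:751:qsd_op_begin0()) $$$ acquire quota failed:-115 qsd:oak-OST0135 qtype:grp id:8663 enforced:1'

counts=$(printf '%s\n' "$log" | grep 'acquire quota failed:-115' |
         grep -o 'id:[0-9]*' | sort | uniq -c | awk '{print $2, $1}')
echo "$counts"
```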
&lt;p&gt;Please let me know if you have any idea of what could be wrong, if you can help troubleshoot, and/or if there is a different way to force a resync. Otherwise, I&apos;m going to restart this OST at a scheduled maintenance. Thanks!&lt;/p&gt;</description>
                <environment>CentOS 7.9, ldiskfs</environment>
        <key id="64668">LU-14764</key>
            <summary>Single OST quotas out of sync (with hung I/O) and acquire quota failed:-115</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="3" iconUrl="https://jira.whamcloud.com/images/icons/priorities/major.svg">Major</priority>
                        <status id="1" iconUrl="https://jira.whamcloud.com/images/icons/statuses/open.png" description="The issue is open and ready for the assignee to start work on it.">Open</status>
                    <statusCategory id="2" key="new" colorName="default"/>
                                    <resolution id="-1">Unresolved</resolution>
                                        <assignee username="hongchao.zhang">Hongchao Zhang</assignee>
                                    <reporter username="sthiell">Stephane Thiell</reporter>
                        <labels>
                    </labels>
                <created>Tue, 15 Jun 2021 23:52:51 +0000</created>
                <updated>Wed, 23 Nov 2022 15:50:15 +0000</updated>
                                            <version>Lustre 2.12.6</version>
                                                        <due></due>
                            <votes>0</votes>
                                    <watches>5</watches>
                                                                            <comments>
                            <comment id="304702" author="pjones" created="Wed, 16 Jun 2021 17:33:54 +0000"  >&lt;p&gt;Hongchao&lt;/p&gt;

&lt;p&gt;Could you please advise?&lt;/p&gt;

&lt;p&gt;Thanks&lt;/p&gt;

&lt;p&gt;Peter&lt;/p&gt;</comment>
                            <comment id="304748" author="hongchao.zhang" created="Thu, 17 Jun 2021 04:33:57 +0000"  >&lt;p&gt;Hi,&lt;br/&gt;
Could you please collect the debug log at MDT0000? The QMT device resides at MDT0000; it handles the quota requests&lt;br/&gt;
from MDTs &amp;amp; OSTs, and the error -115 was returned by the QMT.&lt;br/&gt;
Thanks!&lt;/p&gt;</comment>
                            <comment id="304853" author="sthiell" created="Fri, 18 Jun 2021 01:31:27 +0000"  >&lt;p&gt;Hi Hongchao,&lt;/p&gt;

&lt;p&gt;Thanks! OK. I&apos;ve tried, but I don&apos;t think I was able to capture the error. I&apos;m not sure why; I don&apos;t see any error -115 on the QMT, nor even a mention of OST0135 itself. I used the +quota and +trace debug options on MDT0, created a file on OST index 309 (OST0135), and started a dd to write data to it. I also used lfs quota -v during that time. I can see many other OSTs being queried, like:&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;lookup slave index file for oak-MDT0000-lwp-OST006b_UUID
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;but I can&apos;t find OST0135...&lt;/p&gt;

&lt;p&gt;Just in case, I&apos;m attaching the logs as&#160;&lt;span class=&quot;nobr&quot;&gt;&lt;a href=&quot;https://jira.whamcloud.com/secure/attachment/39117/39117_oak-md1-s2-dk%2Bquota%2Btrace.log.gz&quot; title=&quot;oak-md1-s2-dk+quota+trace.log.gz attached to LU-14764&quot;&gt;oak-md1-s2-dk+quota+trace.log.gz&lt;sup&gt;&lt;img class=&quot;rendericon&quot; src=&quot;https://jira.whamcloud.com/images/icons/link_attachment_7.gif&quot; height=&quot;7&quot; width=&quot;7&quot; align=&quot;absmiddle&quot; alt=&quot;&quot; border=&quot;0&quot;/&gt;&lt;/sup&gt;&lt;/a&gt;&lt;/span&gt;.&lt;/p&gt;</comment>
                            <comment id="304855" author="sthiell" created="Fri, 18 Jun 2021 04:17:59 +0000"  >&lt;p&gt;Hongchao,&lt;br/&gt;
Are you sure the -115 comes from the QMT? For example, in quota/qsd_handler.c, the function qsd_ready() returns -115 when reintegration is still in progress:&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt; 111         /* In most case, reintegration must have been triggered (when enable
 112          * quota or on OST start), however, in rare race condition (enabling
 113          * quota when starting OSTs), we might miss triggering reintegration
 114          * for some qqi.
 115          *
 116          * If the previous reintegration failed for some reason, we&apos;ll
 117          * re-trigger it here as well. */
 118         if (!qqi-&amp;gt;qqi_glb_uptodate || !qqi-&amp;gt;qqi_slv_uptodate) {
 119                 read_unlock(&amp;amp;qsd-&amp;gt;qsd_lock);
 120                 LQUOTA_DEBUG(lqe, &quot;not up-to-date, dropping request and &quot;
 121                              &quot;kicking off reintegration&quot;);
 122                 qsd_start_reint_thread(qqi);
 123                 RETURN(-EINPROGRESS);
 124         }
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;which generated:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;00040000:04000000:46.0:1623797919.334316:0:107324:0:(qsd_handler.c:121:qsd_ready()) $$$ not up-to-date, dropping request and kicking off reintegration qsd:oak-OST0135 qtype:grp id:8663 enforced:1 granted: 119761748 pending:0 waiting:12 req:0 usage: 119769696 qunit:0 qtune:0 edquot:0 default:no
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
                            <comment id="305193" author="hongchao.zhang" created="Tue, 22 Jun 2021 15:14:46 +0000"  >&lt;p&gt;Hi Stephane,&lt;/p&gt;

&lt;p&gt;Sorry!&lt;br/&gt;
Yes, the error -EINPROGRESS (-115) can be raised in the QSD because the QSD is not in sync with the QMT.&lt;/p&gt;

&lt;p&gt;On the OSS hosting OST0135, is there a process named &quot;qsd_reint_2.oak-OST0135&quot;? This should be the kernel thread&lt;br/&gt;
that re-integrates with the QMT; &quot;cat /proc/&amp;lt;PID&amp;gt;/stack&quot; could dump its stack to show where it is stuck.&lt;br/&gt;
Thanks!&lt;/p&gt;</comment>
                            <comment id="305201" author="sthiell" created="Tue, 22 Jun 2021 16:33:18 +0000"  >&lt;p&gt;Thanks Hongchao! I&apos;ve attached a dump of all kernel tasks of the OSS as&#160;&lt;span class=&quot;nobr&quot;&gt;&lt;a href=&quot;https://jira.whamcloud.com/secure/attachment/39228/39228_oak-io6-s2-LU-14764-sysrq-t.log&quot; title=&quot;oak-io6-s2-LU-14764-sysrq-t.log attached to LU-14764&quot;&gt;oak-io6-s2-LU-14764-sysrq-t.log&lt;sup&gt;&lt;img class=&quot;rendericon&quot; src=&quot;https://jira.whamcloud.com/images/icons/link_attachment_7.gif&quot; height=&quot;7&quot; width=&quot;7&quot; align=&quot;absmiddle&quot; alt=&quot;&quot; border=&quot;0&quot;/&gt;&lt;/sup&gt;&lt;/a&gt;&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;I can see this task:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[2021-06-22T08:56:09-07:00] [1031942.051787] qsd_reint_1.oak S ffff8b6f1a37e300     0 88577      2 0x00000080
[2021-06-22T08:56:09-07:00] [1031942.059042] Call Trace:
[2021-06-22T08:56:09-07:00] [1031942.061663]  [&amp;lt;ffffffffab187229&amp;gt;] schedule+0x29/0x70
[2021-06-22T08:56:09-07:00] [1031942.066788]  [&amp;lt;ffffffffab184c58&amp;gt;] schedule_timeout+0x168/0x2d0
[2021-06-22T08:56:09-07:00] [1031942.072782]  [&amp;lt;ffffffffaaaad6e0&amp;gt;] ? __internal_add_timer+0x130/0x130
[2021-06-22T08:56:09-07:00] [1031942.079314]  [&amp;lt;ffffffffc10f9600&amp;gt;] ? ptlrpc_init_rq_pool+0x110/0x110 [ptlrpc]
[2021-06-22T08:56:09-07:00] [1031942.086542]  [&amp;lt;ffffffffc1103350&amp;gt;] ptlrpc_set_wait+0x480/0x790 [ptlrpc]
[2021-06-22T08:56:09-07:00] [1031942.093228]  [&amp;lt;ffffffffaaadaf40&amp;gt;] ? wake_up_state+0x20/0x20
[2021-06-22T08:56:09-07:00] [1031942.098983]  [&amp;lt;ffffffffc11036e3&amp;gt;] ptlrpc_queue_wait+0x83/0x230 [ptlrpc]
[2021-06-22T08:56:09-07:00] [1031942.105764]  [&amp;lt;ffffffffc154d005&amp;gt;] qsd_fetch_index+0x175/0x490 [lquota]
[2021-06-22T08:56:09-07:00] [1031942.112451]  [&amp;lt;ffffffffc15517b8&amp;gt;] qsd_reint_index+0x5c8/0x15d0 [lquota]
[2021-06-22T08:56:09-07:00] [1031942.119228]  [&amp;lt;ffffffffc1553bb2&amp;gt;] qsd_reint_main+0x902/0xe90 [lquota]
[2021-06-22T08:56:09-07:00] [1031942.125834]  [&amp;lt;ffffffffc15532b0&amp;gt;] ? qsd_reconciliation+0xaf0/0xaf0 [lquota]
[2021-06-22T08:56:09-07:00] [1031942.132955]  [&amp;lt;ffffffffaaac5c21&amp;gt;] kthread+0xd1/0xe0
[2021-06-22T08:56:09-07:00] [1031942.137996]  [&amp;lt;ffffffffaaac5b50&amp;gt;] ? insert_kthread_work+0x40/0x40
[2021-06-22T08:56:09-07:00] [1031942.144245]  [&amp;lt;ffffffffab194ddd&amp;gt;] ret_from_fork_nospec_begin+0x7/0x21
[2021-06-22T08:56:09-07:00] [1031942.150845]  [&amp;lt;ffffffffaaac5b50&amp;gt;] ? insert_kthread_work+0x40/0x40
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
</comment>
                            <comment id="305388" author="hongchao.zhang" created="Thu, 24 Jun 2021 12:17:26 +0000"  >&lt;p&gt;It seems the reintegration thread was stuck:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;00000100:00000001:8.0:1623797928.588752:0:88577:0:(client.c:2206:ptlrpc_expired_set()) Process entered
00000100:00000001:8.0:1623797928.588753:0:88577:0:(client.c:2241:ptlrpc_expired_set()) Process leaving (rc=1 : 1 : 1)
00000100:00000001:8.0:1623797928.588755:0:88577:0:(client.c:2294:ptlrpc_set_next_timeout()) Process entered
00000100:00000001:8.0:1623797928.588755:0:88577:0:(client.c:2330:ptlrpc_set_next_timeout()) Process leaving (rc=0 : 0 : 0)
00000100:00000001:8.0:1623797928.588756:0:88577:0:(client.c:1703:ptlrpc_check_set()) Process entered
00000100:00000001:8.0:1623797928.588757:0:88577:0:(client.c:2624:ptlrpc_unregister_reply()) Process leaving (rc=1 : 1 : 1)
00000100:00000001:8.0:1623797928.588758:0:88577:0:(client.c:1172:ptlrpc_import_delay_req()) Process entered
00000100:00000001:8.0:1623797928.588759:0:88577:0:(client.c:1227:ptlrpc_import_delay_req()) Process leaving (rc=1 : 1 : 1)
00000100:00000001:8.0:1623797928.588759:0:88577:0:(client.c:2117:ptlrpc_check_set()) Process leaving (rc=0 : 0 : 0)
00000100:00000001:8.0:1623797928.588760:0:88577:0:(client.c:1703:ptlrpc_check_set()) Process entered
00000100:00000001:8.0:1623797928.588760:0:88577:0:(client.c:2624:ptlrpc_unregister_reply()) Process leaving (rc=1 : 1 : 1)
00000100:00000001:8.0:1623797928.588760:0:88577:0:(client.c:1172:ptlrpc_import_delay_req()) Process entered
00000100:00000001:8.0:1623797928.588761:0:88577:0:(client.c:1227:ptlrpc_import_delay_req()) Process leaving (rc=1 : 1 : 1)
00000100:00000001:8.0:1623797928.588761:0:88577:0:(client.c:2117:ptlrpc_check_set()) Process leaving (rc=0 : 0 : 0)

00000100:00000001:8.0:1623797929.588669:0:88577:0:(client.c:2206:ptlrpc_expired_set()) Process entered
00000100:00000001:8.0:1623797929.588670:0:88577:0:(client.c:2241:ptlrpc_expired_set()) Process leaving (rc=1 : 1 : 1)
00000100:00000001:8.0:1623797929.588671:0:88577:0:(client.c:2294:ptlrpc_set_next_timeout()) Process entered
00000100:00000001:8.0:1623797929.588671:0:88577:0:(client.c:2330:ptlrpc_set_next_timeout()) Process leaving (rc=0 : 0 : 0)
00000100:00000001:8.0:1623797929.588672:0:88577:0:(client.c:1703:ptlrpc_check_set()) Process entered
00000100:00000001:8.0:1623797929.588673:0:88577:0:(client.c:2624:ptlrpc_unregister_reply()) Process leaving (rc=1 : 1 : 1)
00000100:00000001:8.0:1623797929.588673:0:88577:0:(client.c:1172:ptlrpc_import_delay_req()) Process entered
00000100:00000001:8.0:1623797929.588674:0:88577:0:(client.c:1227:ptlrpc_import_delay_req()) Process leaving (rc=1 : 1 : 1)
00000100:00000001:8.0:1623797929.588675:0:88577:0:(client.c:2117:ptlrpc_check_set()) Process leaving (rc=0 : 0 : 0)
00000100:00000001:8.0:1623797929.588675:0:88577:0:(client.c:1703:ptlrpc_check_set()) Process entered
00000100:00000001:8.0:1623797929.588676:0:88577:0:(client.c:2624:ptlrpc_unregister_reply()) Process leaving (rc=1 : 1 : 1)
00000100:00000001:8.0:1623797929.588676:0:88577:0:(client.c:1172:ptlrpc_import_delay_req()) Process entered
00000100:00000001:8.0:1623797929.588676:0:88577:0:(client.c:1227:ptlrpc_import_delay_req()) Process leaving (rc=1 : 1 : 1)
00000100:00000001:8.0:1623797929.588676:0:88577:0:(client.c:2117:ptlrpc_check_set()) Process leaving (rc=0 : 0 : 0)

00000100:00000001:8.0:1623797930.588710:0:88577:0:(client.c:2206:ptlrpc_expired_set()) Process entered
00000100:00000001:8.0:1623797930.588721:0:88577:0:(client.c:2241:ptlrpc_expired_set()) Process leaving (rc=1 : 1 : 1)
00000100:00000001:8.0:1623797930.588723:0:88577:0:(client.c:2294:ptlrpc_set_next_timeout()) Process entered
00000100:00000001:8.0:1623797930.588723:0:88577:0:(client.c:2330:ptlrpc_set_next_timeout()) Process leaving (rc=0 : 0 : 0)
00000100:00000001:8.0:1623797930.588724:0:88577:0:(client.c:1703:ptlrpc_check_set()) Process entered
00000100:00000001:8.0:1623797930.588725:0:88577:0:(client.c:2624:ptlrpc_unregister_reply()) Process leaving (rc=1 : 1 : 1)
00000100:00000001:8.0:1623797930.588725:0:88577:0:(client.c:1172:ptlrpc_import_delay_req()) Process entered
00000100:00000001:8.0:1623797930.588726:0:88577:0:(client.c:1227:ptlrpc_import_delay_req()) Process leaving (rc=1 : 1 : 1)
00000100:00000001:8.0:1623797930.588726:0:88577:0:(client.c:2117:ptlrpc_check_set()) Process leaving (rc=0 : 0 : 0)
00000100:00000001:8.0:1623797930.588727:0:88577:0:(client.c:1703:ptlrpc_check_set()) Process entered
00000100:00000001:8.0:1623797930.588727:0:88577:0:(client.c:2624:ptlrpc_unregister_reply()) Process leaving (rc=1 : 1 : 1)
00000100:00000001:8.0:1623797930.588727:0:88577:0:(client.c:1172:ptlrpc_import_delay_req()) Process entered
00000100:00000001:8.0:1623797930.588727:0:88577:0:(client.c:1227:ptlrpc_import_delay_req()) Process leaving (rc=1 : 1 : 1)
00000100:00000001:8.0:1623797930.588728:0:88577:0:(client.c:2117:ptlrpc_check_set()) Process leaving (rc=0 : 0 : 0)

00000100:00000001:8.0:1623797931.588670:0:88577:0:(client.c:2206:ptlrpc_expired_set()) Process entered
00000100:00000001:8.0:1623797931.588671:0:88577:0:(client.c:2241:ptlrpc_expired_set()) Process leaving (rc=1 : 1 : 1)
00000100:00000001:8.0:1623797931.588673:0:88577:0:(client.c:2294:ptlrpc_set_next_timeout()) Process entered
00000100:00000001:8.0:1623797931.588673:0:88577:0:(client.c:2330:ptlrpc_set_next_timeout()) Process leaving (rc=0 : 0 : 0)
00000100:00000001:8.0:1623797931.588674:0:88577:0:(client.c:1703:ptlrpc_check_set()) Process entered
00000100:00000001:8.0:1623797931.588675:0:88577:0:(client.c:2624:ptlrpc_unregister_reply()) Process leaving (rc=1 : 1 : 1)
00000100:00000001:8.0:1623797931.588676:0:88577:0:(client.c:1172:ptlrpc_import_delay_req()) Process entered
00000100:00000001:8.0:1623797931.588676:0:88577:0:(client.c:1227:ptlrpc_import_delay_req()) Process leaving (rc=1 : 1 : 1)
00000100:00000001:8.0:1623797931.588677:0:88577:0:(client.c:2117:ptlrpc_check_set()) Process leaving (rc=0 : 0 : 0)
00000100:00000001:8.0:1623797931.588677:0:88577:0:(client.c:1703:ptlrpc_check_set()) Process entered
00000100:00000001:8.0:1623797931.588677:0:88577:0:(client.c:2624:ptlrpc_unregister_reply()) Process leaving (rc=1 : 1 : 1)
00000100:00000001:8.0:1623797931.588678:0:88577:0:(client.c:1172:ptlrpc_import_delay_req()) Process entered
00000100:00000001:8.0:1623797931.588678:0:88577:0:(client.c:1227:ptlrpc_import_delay_req()) Process leaving (rc=1 : 1 : 1)
00000100:00000001:8.0:1623797931.588678:0:88577:0:(client.c:2117:ptlrpc_check_set()) Process leaving (rc=0 : 0 : 0)

00000100:00000001:8.0:1623797932.588711:0:88577:0:(client.c:2206:ptlrpc_expired_set()) Process entered
00000100:00000001:8.0:1623797932.588712:0:88577:0:(client.c:2241:ptlrpc_expired_set()) Process leaving (rc=1 : 1 : 1)
00000100:00000001:8.0:1623797932.588714:0:88577:0:(client.c:2294:ptlrpc_set_next_timeout()) Process entered
00000100:00000001:8.0:1623797932.588715:0:88577:0:(client.c:2330:ptlrpc_set_next_timeout()) Process leaving (rc=0 : 0 : 0)
00000100:00000001:8.0:1623797932.588715:0:88577:0:(client.c:1703:ptlrpc_check_set()) Process entered
00000100:00000001:8.0:1623797932.588717:0:88577:0:(client.c:2624:ptlrpc_unregister_reply()) Process leaving (rc=1 : 1 : 1)
00000100:00000001:8.0:1623797932.588717:0:88577:0:(client.c:1172:ptlrpc_import_delay_req()) Process entered
00000100:00000001:8.0:1623797932.588719:0:88577:0:(client.c:1227:ptlrpc_import_delay_req()) Process leaving (rc=1 : 1 : 1)
00000100:00000001:8.0:1623797932.588720:0:88577:0:(client.c:2117:ptlrpc_check_set()) Process leaving (rc=0 : 0 : 0)
00000100:00000001:8.0:1623797932.588720:0:88577:0:(client.c:1703:ptlrpc_check_set()) Process entered
00000100:00000001:8.0:1623797932.588721:0:88577:0:(client.c:2624:ptlrpc_unregister_reply()) Process leaving (rc=1 : 1 : 1)
00000100:00000001:8.0:1623797932.588721:0:88577:0:(client.c:1172:ptlrpc_import_delay_req()) Process entered
00000100:00000001:8.0:1623797932.588722:0:88577:0:(client.c:1227:ptlrpc_import_delay_req()) Process leaving (rc=1 : 1 : 1)
00000100:00000001:8.0:1623797932.588722:0:88577:0:(client.c:2117:ptlrpc_check_set()) Process leaving (rc=0 : 0 : 0)

00000100:00000001:10.0:1623797933.588682:0:88577:0:(client.c:2206:ptlrpc_expired_set()) Process entered
00000100:00000001:10.0:1623797933.588682:0:88577:0:(client.c:2241:ptlrpc_expired_set()) Process leaving (rc=1 : 1 : 1)
00000100:00000001:10.0:1623797933.588683:0:88577:0:(client.c:2294:ptlrpc_set_next_timeout()) Process entered
00000100:00000001:10.0:1623797933.588684:0:88577:0:(client.c:2330:ptlrpc_set_next_timeout()) Process leaving (rc=0 : 0 : 0)
00000100:00000001:10.0:1623797933.588684:0:88577:0:(client.c:1703:ptlrpc_check_set()) Process entered
00000100:00000001:10.0:1623797933.588685:0:88577:0:(client.c:2624:ptlrpc_unregister_reply()) Process leaving (rc=1 : 1 : 1)
00000100:00000001:10.0:1623797933.588686:0:88577:0:(client.c:1172:ptlrpc_import_delay_req()) Process entered
00000100:00000001:10.0:1623797933.588687:0:88577:0:(client.c:1227:ptlrpc_import_delay_req()) Process leaving (rc=1 : 1 : 1)
00000100:00000001:10.0:1623797933.588688:0:88577:0:(client.c:2117:ptlrpc_check_set()) Process leaving (rc=0 : 0 : 0)
00000100:00000001:10.0:1623797933.588688:0:88577:0:(client.c:1703:ptlrpc_check_set()) Process entered
00000100:00000001:10.0:1623797933.588688:0:88577:0:(client.c:2624:ptlrpc_unregister_reply()) Process leaving (rc=1 : 1 : 1)
00000100:00000001:10.0:1623797933.588689:0:88577:0:(client.c:1172:ptlrpc_import_delay_req()) Process entered
00000100:00000001:10.0:1623797933.588689:0:88577:0:(client.c:1227:ptlrpc_import_delay_req()) Process leaving (rc=1 : 1 : 1)
00000100:00000001:10.0:1623797933.588689:0:88577:0:(client.c:2117:ptlrpc_check_set()) Process leaving (rc=0 : 0 : 0)

00000100:00000001:10.0:1623797934.588671:0:88577:0:(client.c:2206:ptlrpc_expired_set()) Process entered
00000100:00000001:10.0:1623797934.588672:0:88577:0:(client.c:2241:ptlrpc_expired_set()) Process leaving (rc=1 : 1 : 1)
00000100:00000001:10.0:1623797934.588674:0:88577:0:(client.c:2294:ptlrpc_set_next_timeout()) Process entered
00000100:00000001:10.0:1623797934.588674:0:88577:0:(client.c:2330:ptlrpc_set_next_timeout()) Process leaving (rc=0 : 0 : 0)
00000100:00000001:10.0:1623797934.588675:0:88577:0:(client.c:1703:ptlrpc_check_set()) Process entered
00000100:00000001:10.0:1623797934.588675:0:88577:0:(client.c:2624:ptlrpc_unregister_reply()) Process leaving (rc=1 : 1 : 1)
00000100:00000001:10.0:1623797934.588676:0:88577:0:(client.c:1172:ptlrpc_import_delay_req()) Process entered
00000100:00000001:10.0:1623797934.588676:0:88577:0:(client.c:1227:ptlrpc_import_delay_req()) Process leaving (rc=1 : 1 : 1)
00000100:00000001:10.0:1623797934.588677:0:88577:0:(client.c:2117:ptlrpc_check_set()) Process leaving (rc=0 : 0 : 0)
00000100:00000001:10.0:1623797934.588677:0:88577:0:(client.c:1703:ptlrpc_check_set()) Process entered
00000100:00000001:10.0:1623797934.588677:0:88577:0:(client.c:2624:ptlrpc_unregister_reply()) Process leaving (rc=1 : 1 : 1)
00000100:00000001:10.0:1623797934.588678:0:88577:0:(client.c:1172:ptlrpc_import_delay_req()) Process entered
00000100:00000001:10.0:1623797934.588678:0:88577:0:(client.c:1227:ptlrpc_import_delay_req()) Process leaving (rc=1 : 1 : 1)
00000100:00000001:10.0:1623797934.588679:0:88577:0:(client.c:2117:ptlrpc_check_set()) Process leaving (rc=0 : 0 : 0)
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;It should be caused by a failed reconnection between OST0135 and MDT0000 (using LWP):&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;int ptlrpc_check_set(const struct lu_env *env, struct ptlrpc_request_set *set)
{
        ...
        if (req-&amp;gt;rq_phase == RQ_PHASE_RPC) {
                ...
                spin_lock(&amp;amp;imp-&amp;gt;imp_lock);
                if (ptlrpc_import_delay_req(imp, req, &amp;amp;status)) {   &amp;lt;--- here, ptlrpc_import_delay_req returns 1
                        /* put on delay list - only if we wait
                         * recovery finished - before send */
                        list_del_init(&amp;amp;req-&amp;gt;rq_list);
                        list_add_tail(&amp;amp;req-&amp;gt;rq_list,
                                      &amp;amp;imp-&amp;gt;imp_delayed_list);
                        spin_unlock(&amp;amp;imp-&amp;gt;imp_lock);
                        continue;
                }
                ...
        }
        ...
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
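A minimal sketch of the delay decision above (not Lustre code; the state name and list variables are simplified stand-ins): while the import has not reached its fully connected state, ptlrpc_import_delay_req() returns 1 and the request is parked on imp_delayed_list instead of being sent, which matches the repeating rc=1 lines in the debug log.

```python
# Simplified model of the RQ_PHASE_RPC branch of ptlrpc_check_set() and
# ptlrpc_import_delay_req(). "FULL" stands in for the import's fully
# connected state; delayed and sent model imp_delayed_list and the
# outgoing queue. Illustration only, not the real Lustre logic.

def import_delay_req(imp_state):
    # rc=1 in the debug log: the request must wait for recovery to finish.
    return imp_state != "FULL"

def check_set(imp_state, requests):
    delayed, sent = [], []
    for req in requests:
        if import_delay_req(imp_state):
            delayed.append(req)   # models list_add_tail(..., imp_delayed_list)
            continue              # loop moves on; request is never sent
        sent.append(req)
    return delayed, sent
```

Because the LWP import between OST0135 and MDT0000 never reaches full state here, every pass through the set takes the delay branch, so the same ptlrpc_check_set/ptlrpc_import_delay_req lines repeat indefinitely.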

&lt;p&gt;Could you please collect the output of &quot;cat /proc/fs/lustre/mdt/oak-MDT0000/exports/xxx/export&quot; on MDT0000?&lt;br/&gt;
Thanks!&lt;/p&gt;</comment>
                            <comment id="305423" author="sthiell" created="Thu, 24 Jun 2021 16:02:21 +0000"  >&lt;p&gt;Hi Hongchao,&lt;br/&gt;
Ah, that&apos;s interesting indeed! I can&apos;t find OST0135 in /proc/fs/lustre/mdt/oak-MDT0000/exports/10.0.2.104@o2ib5/export (NID of OSS for this OST is 10.0.2.104@o2ib5). Attaching the output as  &lt;span class=&quot;nobr&quot;&gt;&lt;a href=&quot;https://jira.whamcloud.com/secure/attachment/39277/39277_oak-MDT0000-oak-io6-s2-export.txt&quot; title=&quot;oak-MDT0000-oak-io6-s2-export.txt attached to LU-14764&quot;&gt;oak-MDT0000-oak-io6-s2-export.txt&lt;sup&gt;&lt;img class=&quot;rendericon&quot; src=&quot;https://jira.whamcloud.com/images/icons/link_attachment_7.gif&quot; height=&quot;7&quot; width=&quot;7&quot; align=&quot;absmiddle&quot; alt=&quot;&quot; border=&quot;0&quot;/&gt;&lt;/sup&gt;&lt;/a&gt;&lt;/span&gt; &lt;/p&gt;

&lt;p&gt;This filesystem has 6 MDTs, index 0 to 5. We can see that for the other MDTs, the OST is connected:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;# clush -w @mds &apos;grep -H OST0135 /proc/fs/lustre/mdt/oak-MDT*/exports/10.0.2.104@o2ib5/export&apos;
oak-md1-s2: /proc/fs/lustre/mdt/oak-MDT0003/exports/10.0.2.104@o2ib5/export:oak-MDT0003-lwp-OST0135_UUID:
oak-md1-s1: /proc/fs/lustre/mdt/oak-MDT0001/exports/10.0.2.104@o2ib5/export:oak-MDT0001-lwp-OST0135_UUID:
oak-md1-s1: /proc/fs/lustre/mdt/oak-MDT0002/exports/10.0.2.104@o2ib5/export:oak-MDT0002-lwp-OST0135_UUID:
oak-md2-s2: /proc/fs/lustre/mdt/oak-MDT0005/exports/10.0.2.104@o2ib5/export:oak-MDT0005-lwp-OST0135_UUID:
oak-md2-s1: /proc/fs/lustre/mdt/oak-MDT0004/exports/10.0.2.104@o2ib5/export:oak-MDT0004-lwp-OST0135_UUID:
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;For example, this is the export for MDT0001:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;oak-MDT0001-lwp-OST0135_UUID:
    name: oak-MDT0001
    client: 10.0.2.104@o2ib5
    connect_flags: [ version, adaptive_timeouts, lru_resize, fid_is_enabled, full20, lvb_type, lightweight_conn, lfsck, bulk_mbits ]
    connect_data:
       flags: 0x2041401043000020
       instance: 47
       target_version: 2.12.6.0
    export_flags: [  ]
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Is there something more we could do to help troubleshoot this without having to restart the OST? For example, is there a way to reset the LWP connection between MDT0000 and OST0135? Thanks again for taking the time to look at this!&lt;/p&gt;</comment>
                            <comment id="305424" author="sthiell" created="Thu, 24 Jun 2021 16:23:05 +0000"  >&lt;p&gt;Also, the OSP device between MDT0000 and OST0135 seems to be initialized, unlike the LWP:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[root@oak-md1-s2 ~]# cat /sys/fs/lustre/osp/oak-OST0135-osc-MDT0000/active 
1
[root@oak-md1-s2 ~]# cat /sys/fs/lustre/osp/oak-OST0135-osc-MDT0000/uuid 
oak-MDT0000-mdtlov_UUID
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;It seems like this could be a consequence of the config llog problem we have been having, which I reported in &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-14695&quot; title=&quot;New OST not visible by MDTs. MGS problem or corrupt catalog llog?&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-14695&quot;&gt;LU-14695&lt;/a&gt;.&lt;/p&gt;

</comment>
                            <comment id="305549" author="hongchao.zhang" created="Fri, 25 Jun 2021 15:17:25 +0000"  >&lt;p&gt;I have looked at the code related to LWP, but I can&apos;t find a way to force a reconnection yet, sorry!&lt;br/&gt;
Does the dmesg log on OST0135 contain any messages related to the reconnection? Normally, there will be messages when the connection is lost.&lt;br/&gt;
Thanks!&lt;/p&gt;</comment>
                            <comment id="353985" author="sergey" created="Wed, 23 Nov 2022 15:50:15 +0000"  >&lt;p&gt;If you had pool quotas here, this looks similar to &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-16339&quot; title=&quot;bug in qmt may cause clients to hang in cl_sync_io_wait &quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-16339&quot;&gt;&lt;del&gt;LU-16339&lt;/del&gt;&lt;/a&gt;.&lt;/p&gt;</comment>
                    </comments>
                    <attachments>
                            <attachment id="39277" name="oak-MDT0000-oak-io6-s2-export.txt" size="6800" author="sthiell" created="Thu, 24 Jun 2021 15:58:29 +0000"/>
                            <attachment id="39077" name="oak-OST0135_quota_issue.log.gz" size="15875213" author="sthiell" created="Tue, 15 Jun 2021 23:44:01 +0000"/>
                            <attachment id="39228" name="oak-io6-s2-LU-14764-sysrq-t.log" size="7137002" author="sthiell" created="Tue, 22 Jun 2021 16:30:48 +0000"/>
                            <attachment id="39117" name="oak-md1-s2-dk+quota+trace.log.gz" size="83820552" author="sthiell" created="Fri, 18 Jun 2021 01:27:04 +0000"/>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                    <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|i01x0n:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                </customfields>
    </item>
</channel>
</rss>