<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 03:09:15 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92">
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
<language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-14381] Client stuck using single NID when multiple are available</title>
                <link>https://jira.whamcloud.com/browse/LU-14381</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;Filing this as a bug, but it might be user error. The problem I&apos;m seeing is that clients are only attempting to connect to one of two available NIDs on the servers. Both clients and servers are configured with two NIDs, one on @tcp and one on @tcp1. Discovery/multi-rail is disabled. If I simulate failure of the @tcp NID on the OSS then I would expect the client to eventually try the @tcp1 NID, but that never happens. I tried modifying test-framework.sh so that it formats the OST with &lt;tt&gt;--servicenode=192.168.2.34@tcp,192.168.2.35@tcp1&lt;/tt&gt;, but that didn&apos;t make any difference. Is this working as expected? Is this a bug? Am I missing some necessary config to allow the client to use either NID?&lt;/p&gt;

&lt;p&gt;Edit 1: There was a suggestion to put the interfaces on separate subnets. I tried that and it did not resolve the issue. See &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-14381?focusedCommentId=290650&amp;amp;page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-290650&quot; class=&quot;external-link&quot; rel=&quot;nofollow&quot;&gt;this comment&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Edit 2: Attached -1 debug log from the client  &lt;span class=&quot;nobr&quot;&gt;&lt;a href=&quot;https://jira.whamcloud.com/secure/attachment/37312/37312_client.dklog&quot; title=&quot;client.dklog attached to LU-14381&quot;&gt;client.dklog&lt;sup&gt;&lt;img class=&quot;rendericon&quot; src=&quot;https://jira.whamcloud.com/images/icons/link_attachment_7.gif&quot; height=&quot;7&quot; width=&quot;7&quot; align=&quot;absmiddle&quot; alt=&quot;&quot; border=&quot;0&quot;/&gt;&lt;/sup&gt;&lt;/a&gt;&lt;/span&gt; . Note this debug log was captured after I changed the configuration to put the interfaces on separate networks as noted in &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-14381?focusedCommentId=290650&amp;amp;page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-290650&quot; class=&quot;external-link&quot; rel=&quot;nofollow&quot;&gt;this comment&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Details on how to reproduce follow.&lt;/p&gt;

&lt;p&gt;Version under test:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;sles15s01:/home/hornc/fs2 # ./LUSTRE-VERSION-GEN
2.13.57_71_gb538826
sles15s01:/home/hornc/fs2 #
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;LNet configuration is tcp(eth0) and tcp1(eth1) with LNet peer discovery disabled:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;sles15s01:/home/hornc/fs2/lustre/tests # cat /etc/modprobe.d/lustre.conf
options libcfs libcfs_debug=320735104
options libcfs libcfs_subsystem_debug=-2049
options lnet lnet_peer_discovery_disabled=1
options lnet ip2nets=&quot;tcp(eth0) 192.168.2.[30,32,34,36,38,39,40,41]; tcp1(eth1) 192.168.2.[31,33,35,37,42,43,44,45]&quot;
sles15s01:/home/hornc/fs2/lustre/tests #
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
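&lt;p&gt;As a rough illustration of how an ip2nets rule like this assigns NIDs (a hypothetical sketch, not the actual LNet parser; interface names are ignored):&lt;/p&gt;

```python
# Hypothetical sketch, not the actual LNet ip2nets parser: expand rules like
# "tcp(eth0) 192.168.2.[30,32]" and report the NIDs a host would configure.
import re

def expand_rule(rule):
    # Returns (network, set of IPs). Interface names like (eth0) are ignored.
    net = rule.split("(")[0].strip()
    body = rule.split(")", 1)[1]
    ips = set()
    m = re.search(r"([\d.]+)\.\[([\d,]+)\]", body)
    if m:
        prefix, last_octets = m.group(1), m.group(2)
        for octet in last_octets.split(","):
            ips.add(prefix + "." + octet)
    return net, ips

def nids_for(ip2nets, host_ips):
    # One NID per network whose rule lists one of the host IPs.
    nids = []
    for rule in ip2nets.split(";"):
        net, ips = expand_rule(rule)
        for ip in host_ips:
            if ip in ips:
                nids.append(ip + "@" + net)
    return nids
```

&lt;p&gt;For example, a host with 192.168.2.38 on eth0 and 192.168.2.42 on eth1 would configure 192.168.2.38@tcp and 192.168.2.42@tcp1, matching the list_nids output shown below.&lt;/p&gt;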

&lt;p&gt;Test-framework config:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;sles15s01:/home/hornc/fs2/lustre/tests # cat cfg/hornc.sh
# facet hosts
MDSCOUNT=1

mds_HOST=sles15s01
MDSDEV1=/dev/sdc

OSTCOUNT=1

ost_HOST=sles15s03
OSTDEV1=/dev/sde

CLIENTCOUNT=1
RCLIENTS=&quot;sles15c01&quot;
PDSH=&quot;pdsh -S -Rssh -w&quot;

SHARED_DIRECTORY=&quot;/shared/testing&quot;
MGSNID=&quot;192.168.2.30@tcp,192.168.2.31@tcp1&quot;

. /home/hornc/fs2/lustre/tests/cfg/ncli.sh
sles15s01:/home/hornc/fs2/lustre/tests #
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Use llmount.sh to stand up filesystem:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;sles15s01:/home/hornc/fs2/lustre/tests # NAME=hornc LOAD_MODULES_REMOTE=true VERBOSE=true /home/hornc/fs2/lustre/tests/llmount.sh
...
sles15s01:/home/hornc/fs2/lustre/tests # pdsh -w sles15s0[1,3],sles15c01 lctl list_nids | dshbak -c
----------------
sles15s01
----------------
192.168.2.30@tcp
192.168.2.31@tcp1
----------------
sles15c01
----------------
192.168.2.38@tcp
192.168.2.42@tcp1
----------------
sles15s03
----------------
192.168.2.34@tcp
192.168.2.35@tcp1
sles15s01:/home/hornc/fs2 # pdsh -w sles15s0[1,3],sles15c01 &apos;lnetctl global show&apos; | dshbak -c
----------------
sles15c01,sles15s[01,03]
----------------
global:
    numa_range: 0
    max_intf: 200
    discovery: 0
    drop_asym_route: 0
    retry_count: 2
    transaction_timeout: 50
    health_sensitivity: 100
    recovery_interval: 1
    router_sensitivity: 100
    lnd_timeout: 16
    response_tracking: 3
sles15s01:/home/hornc/fs2 #
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Unmount Lustre from the client, then re-mount with a drop rule in place, so that any traffic the client sends to eth0 on the OSS (a.k.a. 192.168.2.34@tcp) is dropped:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;sles15c01:~ # umount /mnt/lustre
sles15c01:~ # lctl net down; lustre_rmmod
LNET busy
sles15c01:~ # tf_start.sh
Loading modules from /home/hornc/fs2/lustre
detected 1 online CPUs by sysfs
libcfs will create CPU partition based on online CPUs
../libcfs/libcfs/libcfs options: &apos;libcfs_debug=320735104 libcfs_subsystem_debug=-2049&apos;
../lnet/lnet/lnet options: &apos;lnet_peer_discovery_disabled=1 ip2nets=&quot;tcp(eth0) 192.168.2.[30,32,34,36,38,39,40,41]; tcp1(eth1) 192.168.2.[31,33,35,37,42,43,44,45]&quot; accept=all&apos;
quota/lquota options: &apos;hash_lqs_cur_bits=3&apos;
sles15c01:~ # lctl set_param debug=+&apos;net rpctrace&apos;
debug=+net rpctrace
sles15c01:~ # lctl get_param debug
debug=
super ioctl neterror net warning dlmtrace error emerg ha rpctrace vfstrace config console lfsck
sles15c01:~ # lctl net_drop_add -s *@tcp -d 192.168.2.34@tcp -r 1 -e remote_timeout
Added drop rule 255.255.255.255@tcp-&amp;gt;192.168.2.34@tcp (1/1)
sles15c01:~ # mount -t lustre -o user_xattr,flock 192.168.2.30@tcp,192.168.2.31@tcp1:/lustre /mnt/lustre
sles15c01:~ # lfs check servers
lfs check: error: check &apos;lustre-OST0000-osc-ffff9c00a8b70000&apos;: Resource temporarily unavailable (11)
lustre-MDT0000-mdc-ffff9c00a8b70000 active.
sles15c01:~ #
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
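&lt;p&gt;As a rough model of the drop rule semantics (an illustrative sketch, not the lnet_fault implementation): a message is dropped only when both its source and destination NIDs match the rule, with *@tcp matching any NID on the tcp network:&lt;/p&gt;

```python
# Illustrative sketch only, not the lnet_fault implementation.
def nid_matches(pattern, nid):
    # "*@tcp" matches any NID on the tcp network; otherwise exact match.
    if pattern.startswith("*@"):
        return nid.endswith(pattern[1:])
    return pattern == nid

def rule_drops(rule_src, rule_dst, msg_src, msg_dst):
    # A rule like "-s *@tcp -d 192.168.2.34@tcp" drops a message only if
    # both the source and destination NIDs match.
    return nid_matches(rule_src, msg_src) and nid_matches(rule_dst, msg_dst)
```

&lt;p&gt;Under this rule the client connect RPCs to 192.168.2.34@tcp are dropped before hitting the wire, while nothing stops traffic to 192.168.2.35@tcp1.&lt;/p&gt;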

&lt;p&gt;Wait a couple of minutes and dump the log:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;sles15c01:~ # sleep 120; lfs check servers; lctl dk &amp;gt; /tmp/dk.log
lfs check: error: check &apos;lustre-OST0000-osc-ffff9c00a8b70000&apos;: Resource temporarily unavailable (11)
lustre-MDT0000-mdc-ffff9c00a8b70000 active.
sles15c01:~ #
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;With net trace enabled, LNet logs every send that it performs. Those entries look like this:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;00000400:00000200:0.0:1611862288.864372:0:11737:0:(lib-move.c:1833:lnet_handle_send()) TRACE: 192.168.2.38@tcp(192.168.2.38@tcp:&amp;lt;?&amp;gt;) -&amp;gt; 192.168.2.34@tcp(192.168.2.34@tcp:192.168.2.34@tcp) &amp;lt;?&amp;gt; : GET try# 0
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;So if LNet sent any message to an @tcp1 NID then we should have a record of it in the log.&lt;/p&gt;
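&lt;p&gt;A minimal sketch (hypothetical helper, not part of Lustre) for tallying the destination NIDs from those TRACE lines in an lctl dk dump:&lt;/p&gt;

```python
# Hypothetical helper, not part of Lustre: count destination NIDs in
# lnet_handle_send() TRACE lines from an "lctl dk" dump.
import re
from collections import Counter

# Destination NID is the token after the arrow, up to its opening paren.
SEND_RE = re.compile(r"TRACE: \S+ -> (\S+?)\(")

def count_send_dsts(lines):
    counts = Counter()
    for line in lines:
        m = SEND_RE.search(line)
        if m:
            counts[m.group(1)] += 1
    return counts
```

&lt;p&gt;A zero count for every @tcp1 NID would confirm that no send was ever attempted on that network.&lt;/p&gt;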

&lt;p&gt;We can see in the debug log that we haven&apos;t sent any messages to any tcp1 NIDs:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;sles15c01:~ # grep tcp1 /tmp/dk.log
00000400:02000000:0.0:1611862138.860810:0:11727:0:(api-ni.c:2336:lnet_startup_lndni()) Added LNI 192.168.2.42@tcp1 [8/256/0/180]
00000020:01000004:0.0:1611862159.890388:0:12205:0:(obd_mount.c:968:lmd_print()) device:  192.168.2.30@tcp,192.168.2.31@tcp1:/lustre
00000020:00000080:0.0:1611862159.890446:0:12205:0:(obd_config.c:1383:class_process_config()) adding mapping from uuid MGC192.168.2.30@tcp_0 to nid 0x20001c0a8021f (192.168.2.31@tcp1)
00000020:00000080:0.0:1611862159.900401:0:12218:0:(obd_config.c:1383:class_process_config()) adding mapping from uuid 192.168.2.30@tcp to nid 0x20001c0a8021f (192.168.2.31@tcp1)
00000020:01000004:0.0:1611862159.903289:0:12218:0:(obd_mount.c:1004:lustre_check_exclusion()) Check exclusion lustre-OST0000 (0) in 0 of 192.168.2.30@tcp,192.168.2.31@tcp1:/lustre
00000020:00000080:0.0:1611862159.903298:0:12218:0:(obd_config.c:1383:class_process_config()) adding mapping from uuid 192.168.2.34@tcp to nid 0x20001c0a80223 (192.168.2.35@tcp1)
00000020:00000004:0.0:1611862160.910881:0:12205:0:(obd_mount.c:1683:lustre_fill_super()) Mount 192.168.2.30@tcp,192.168.2.31@tcp1:/lustre complete
sles15c01:~ # grep 192.168.2.35 /tmp/dk.log
00000020:00000080:0.0:1611862159.903298:0:12218:0:(obd_config.c:1383:class_process_config()) adding mapping from uuid 192.168.2.34@tcp to nid 0x20001c0a80223 (192.168.2.35@tcp1)
sles15c01:~ #
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Statistics on the OSS confirm no traffic on tcp1:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;sles15s03:~ # lnetctl net show --net tcp1 -v 2 | egrep -e send_count -e recv_count
              send_count: 0
              recv_count: 0
sles15s03:~ #
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</description>
                <environment></environment>
        <key id="62568">LU-14381</key>
            <summary>Client stuck using single NID when multiple are available</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="3" iconUrl="https://jira.whamcloud.com/images/icons/priorities/major.svg">Major</priority>
                        <status id="1" iconUrl="https://jira.whamcloud.com/images/icons/statuses/open.png" description="The issue is open and ready for the assignee to start work on it.">Open</status>
                    <statusCategory id="2" key="new" colorName="default"/>
                                    <resolution id="-1">Unresolved</resolution>
                                        <assignee username="wc-triage">WC Triage</assignee>
                                    <reporter username="hornc">Chris Horn</reporter>
                        <labels>
                    </labels>
                <created>Thu, 28 Jan 2021 19:49:03 +0000</created>
                <updated>Fri, 12 Nov 2021 13:24:51 +0000</updated>
                                            <version>Lustre 2.14.0</version>
                                                        <due></due>
                            <votes>0</votes>
                                    <watches>6</watches>
                                                                            <comments>
                            <comment id="290646" author="adilger" created="Thu, 28 Jan 2021 20:13:26 +0000"  >&lt;p&gt;Have you tried this with eth0 and eth1 on different subnets?  There are strange things in the kernel TCP code that make it difficult to guarantee sending from one interface or the other when both are on the same subnet.&lt;/p&gt;</comment>
                            <comment id="290647" author="hornc" created="Thu, 28 Jan 2021 20:16:33 +0000"  >&lt;p&gt;I can try that, though I&apos;m pretty certain it won&apos;t make any difference. The tracing on the client is clear. The presence of the net drop rule on the client ensures that any traffic being sent to the @tcp NID will be dropped &lt;em&gt;before it even hits the wire&lt;/em&gt; (nay, before it even hits the kernel networking code). Furthermore, the tracing makes it clear that no attempt is made to send to any @tcp1 NID. So whether the OSS is inappropriately using eth0 when it should use eth1, or vice versa, doesn&apos;t make any difference. It never receives a single connect request from the client, because those connect RPCs are being sent to the @tcp NID and are thus dropped before hitting the wire.&lt;/p&gt;</comment>
                            <comment id="290650" author="hornc" created="Thu, 28 Jan 2021 20:43:41 +0000"  >&lt;p&gt;Confirmed that putting the interfaces on different networks doesn&apos;t resolve the issue:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;sles15s01:~ # pdsh -w sles15s0[1,3],sles15c01 lctl list_nids | dshbak -c
----------------
sles15s01
----------------
192.168.2.30@tcp
10.10.0.50@tcp1
----------------
sles15c01
----------------
192.168.2.38@tcp
10.10.0.54@tcp1
----------------
sles15s03
----------------
192.168.2.34@tcp
10.10.0.52@tcp1
sles15s01:~ #

sles15c01:~ # umount /mnt/lustre
sles15c01:~ # lctl net down; lustre_rmmod
LNET busy
sles15c01:~ # tf_start.sh
Loading modules from /home/hornc/fs2/lustre
detected 1 online CPUs by sysfs
libcfs will create CPU partition based on online CPUs
../libcfs/libcfs/libcfs options: &apos;libcfs_debug=320735104 libcfs_subsystem_debug=-2049&apos;
../lnet/lnet/lnet options: &apos;lnet_peer_discovery_disabled=1 ip2nets=&quot;tcp(eth0) 192.168.2.[30,32,34,36,38,39,40,41]; tcp1(eth1) 10.10.0.[50,52,54]&quot; accept=all&apos;
quota/lquota options: &apos;hash_lqs_cur_bits=3&apos;
sles15c01:~ # lctl set_param debug=+&quot;net rpctrace&quot;
debug=+net rpctrace
sles15c01:~ # lctl get_param debug
debug=
super ioctl neterror net warning dlmtrace error emerg ha rpctrace vfstrace config console lfsck
sles15c01:~ # lctl net_drop_add -s *@tcp -d 192.168.2.34@tcp -r 1 -e remote_timeout
Added drop rule 255.255.255.255@tcp-&amp;gt;192.168.2.34@tcp (1/1)
sles15c01:~ # mount -t lustre -o user_xattr,flock 192.168.2.30@tcp,10.10.0.50@tcp1:/lustre /mnt/lustre
sles15c01:~ # lfs check servers
lfs check: error: check &apos;lustre-OST0000-osc-ffff9ab8f6caf000&apos;: Resource temporarily unavailable (11)
lustre-MDT0000-mdc-ffff9ab8f6caf000 active.
sles15c01:~ # sleep 120; lfs check servers; lctl dk &amp;gt; /tmp/dk.log
lfs check: error: check &apos;lustre-OST0000-osc-ffff9ab8f6caf000&apos;: Resource temporarily unavailable (11)
lustre-MDT0000-mdc-ffff9ab8f6caf000 active.
sles15c01:~ # grep tcp1 /tmp/dk.log
00000400:02000000:0.0:1611866400.918012:0:7175:0:(api-ni.c:2336:lnet_startup_lndni()) Added LNI 10.10.0.54@tcp1 [8/256/0/180]
00000020:01000004:0.0:1611866458.583906:0:7653:0:(obd_mount.c:968:lmd_print()) device:  192.168.2.30@tcp,10.10.0.50@tcp1:/lustre
00000020:00000080:0.0:1611866458.583963:0:7653:0:(obd_config.c:1383:class_process_config()) adding mapping from uuid MGC192.168.2.30@tcp_0 to nid 0x200010a0a0032 (10.10.0.50@tcp1)
00000020:00000080:0.0:1611866458.594953:0:7666:0:(obd_config.c:1383:class_process_config()) adding mapping from uuid 192.168.2.30@tcp to nid 0x200010a0a0032 (10.10.0.50@tcp1)
00000020:01000004:0.0:1611866458.598044:0:7666:0:(obd_mount.c:1004:lustre_check_exclusion()) Check exclusion lustre-OST0000 (0) in 0 of 192.168.2.30@tcp,10.10.0.50@tcp1:/lustre
00000020:00000080:0.0:1611866458.598052:0:7666:0:(obd_config.c:1383:class_process_config()) adding mapping from uuid 192.168.2.34@tcp to nid 0x200010a0a0034 (10.10.0.52@tcp1)
00000020:00000004:0.0:1611866459.606590:0:7653:0:(obd_mount.c:1683:lustre_fill_super()) Mount 192.168.2.30@tcp,10.10.0.50@tcp1:/lustre complete
sles15c01:~ # grep 10.10.0.52 /tmp/dk.log
00000020:00000080:0.0:1611866458.598052:0:7666:0:(obd_config.c:1383:class_process_config()) adding mapping from uuid 192.168.2.34@tcp to nid 0x200010a0a0034 (10.10.0.52@tcp1)
sles15c01:~ #
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
                            <comment id="290738" author="ashehata" created="Fri, 29 Jan 2021 21:37:28 +0000"  >&lt;p&gt;Below is the call flow for setting up a connection:&lt;/p&gt;
&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;
  +-&amp;lt; import_set_conn
    +-&amp;lt; import_set_conn_priority
    | +-&amp;lt; ptlrpc_recover_import
    | | +-&amp;lt; mdc_iocontrol
    | | +-&amp;lt; osc_iocontrol
    | | +-&amp;lt; osp_iocontrol
    | | +-&amp;lt; ptlrpc_reconnect_import
    | | +-&amp;lt; ldebugfs_import_seq_write
    | | +-&amp;lt; ptlrpc_set_import_active
    +-&amp;lt; client_import_add_conn
    | +-&amp;lt; client_obd_setup
    | | +-&amp;lt; mgc_setup
    | | +-&amp;lt; osc_setup_common
    | | +-&amp;lt; lwp_setup
    | | +-&amp;lt; osp_init0
    +-&amp;lt; client_import_dyn_add_conn
    | +-&amp;lt; mgc_apply_recover_logs
    | | +-&amp;lt; mgc_process_recover_nodemap_log
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;The first path is from osc_setup_common(), i.e. on initial connect;&lt;br/&gt;
 the second one is from ptlrpc_recover_import(), i.e. on failover.&lt;br/&gt;
 Both of them eventually funnel into ptlrpc_uuid_to_peer(),&lt;br/&gt;
 which calls LNetDist() (as far as I understand).&lt;br/&gt;
 So if LNetDist() took into consideration the status of the NID, reachable or not (it doesn&apos;t right now),&lt;br/&gt;
 then it seems like we should be able to switch the NID that&apos;s currently in use if it goes down.&lt;br/&gt;
Because LNetDist() only uses the LNet configuration to determine the distance, the potentially down NID will always be selected.&lt;/p&gt;</comment>
                            <comment id="291163" author="hornc" created="Wed, 3 Feb 2021 20:44:53 +0000"  >&lt;p&gt;I ran an experiment to test the above theory. I modified LNetDist to discover the supplied dstnid, and if that fails then we return &amp;lt; 0. I confirmed that this change is sufficient on the initial mount to get a working connection, i.e. the client uses the working @tcp1 NID rather than the broken @tcp NID. However, if I modify the drop rule to drop @tcp1 traffic and remove the drop rule for @tcp traffic, then we hit the same issue. The client continually tries to connect using @tcp1 and never tries the @tcp NID.&lt;/p&gt;

&lt;p&gt;I tried another experiment where I modified test-framework.sh so that it formatted the OST using &lt;tt&gt;--servicenode=192.168.2.34@tcp:192.168.2.35@tcp1&lt;/tt&gt; (i.e. specifying the NIDs as failover partners), and that does resolve the issue (perhaps as expected).&lt;/p&gt;</comment>
                            <comment id="291177" author="ashehata" created="Thu, 4 Feb 2021 02:06:51 +0000"  >&lt;p&gt;The second test, where you do &quot;&lt;tt&gt;--servicenode=192.168.2.34@tcp:192.168.2.35@tcp1&lt;/tt&gt;&quot;: does that work without the LNetDist change, or does it need the LNetDist change to work?&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10010">
                    <name>Duplicate</name>
                                                                <inwardlinks description="is duplicated by">
                                                        </inwardlinks>
                                    </issuelinktype>
                    </issuelinks>
                <attachments>
                            <attachment id="37312" name="client.dklog" size="3273505" author="hornc" created="Thu, 28 Jan 2021 20:57:14 +0000"/>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|i01kpj:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>