<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 01:15:39 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary, append 'field=key&field=summary' to the URL of your request.
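
For example, fetching this issue with only those fields could look like this (the issue-XML URL pattern below is the standard JIRA issue view, assumed here rather than stated in this file):

```shell
# Hypothetical example: build the filtered request URL for this issue.
# The "/si/jira.issueviews:issue-xml/KEY/KEY.xml" path is JIRA's standard
# issue-XML view; it is an assumption, not stated in this file.
key="LU-1326"
url="https://jira.whamcloud.com/si/jira.issueviews:issue-xml/${key}/${key}.xml?field=key&field=summary"
echo "$url"
```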
-->
<rss version="0.92">
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-1326] Multihomed configuration with lustre 2.2.0</title>
                <link>https://jira.whamcloud.com/browse/LU-1326</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;Hello,&lt;/p&gt;

&lt;p&gt;I&apos;m trying to build a new multihomed (InfiniBand and Ethernet) Lustre 2.2.0 system.&lt;/p&gt;

&lt;p&gt;Everything works fine as long as I&apos;m only using one of the two networks, but I&apos;m unable to use both at the same time&lt;br/&gt;
(e.g. with Ethernet-only and InfiniBand-only clients).&lt;/p&gt;

&lt;p&gt;My test setup looks like this:&lt;/p&gt;

&lt;p&gt;n-mds1:  Combined MGS/MDT with Infiniband and Ethernet&lt;br/&gt;
n-oss01: OSS - also with Infiniband and Ethernet&lt;br/&gt;
a9115:   Infiniband-only client&lt;br/&gt;
a9116:   Ethernet-only client&lt;/p&gt;

&lt;p&gt;The MDS has two NIDs (one IB, one Ethernet):&lt;/p&gt;

&lt;p&gt; &amp;#91;root@n-mds1 ~&amp;#93;# cat /etc/modprobe.d/lustre.conf&lt;br/&gt;
 options lnet networks=o2ib(ib0),tcp(eth0)&lt;/p&gt;

&lt;p&gt; &amp;#91;root@n-mds1 ~&amp;#93;# modprobe lustre&lt;br/&gt;
 &amp;#91;root@n-mds1 ~&amp;#93;# lctl list_nids&lt;br/&gt;
 10.201.62.13@o2ib&lt;br/&gt;
 10.201.30.13@tcp&lt;/p&gt;

&lt;p&gt;The MDS was set up like this:&lt;/p&gt;

&lt;p&gt; &amp;#91;root@n-mds1 ~&amp;#93;# mkfs.lustre --fsname=foobar --reformat --mdt --mgs --mgsnode=10.201.62.13@o2ib,10.201.30.13@tcp /dev/mapper/vd01&lt;/p&gt;

&lt;p&gt;(I also tried without --mgsnode and with --servicenode=10.201....)&lt;/p&gt;


&lt;p&gt; &amp;#91;root@n-mds1 ~&amp;#93;# mount -t lustre /dev/mapper/vd01 /lustre/mds&lt;/p&gt;

&lt;p&gt; &amp;#91;root@n-mds1 ~&amp;#93;# lctl dl&lt;br/&gt;
  0 UP mgs MGS MGS 5&lt;br/&gt;
  1 UP mgc MGC10.201.62.13@o2ib 3aecba06-ec8b-aeab-2151-47d5a1c1bc47 5&lt;br/&gt;
  2 UP lov foobar-MDT0000-mdtlov foobar-MDT0000-mdtlov_UUID 4&lt;br/&gt;
  3 UP mdt foobar-MDT0000 foobar-MDT0000_UUID 3&lt;br/&gt;
  4 UP mds mdd_obd-foobar-MDT0000 mdd_obd_uuid-foobar-MDT0000 3&lt;/p&gt;

&lt;p&gt;The OSS also has two NIDs and is able to ping the MDS:&lt;/p&gt;

&lt;p&gt; &amp;#91;root@n-oss01 ~&amp;#93;# cat /etc/modprobe.d/lustre.conf&lt;br/&gt;
 options lnet networks=o2ib(ib0),tcp(eth0)&lt;br/&gt;
 &amp;#91;root@n-oss01 ~&amp;#93;# modprobe lustre&lt;br/&gt;
 &amp;#91;root@n-oss01 ~&amp;#93;# lctl list_nids&lt;br/&gt;
 10.201.62.31@o2ib&lt;br/&gt;
 10.201.30.31@tcp&lt;/p&gt;

&lt;p&gt; &amp;#91;root@n-oss01 ~&amp;#93;# lctl ping 10.201.62.13@o2ib # mds-ib&lt;br/&gt;
 12345-0@lo&lt;br/&gt;
 12345-10.201.62.13@o2ib&lt;br/&gt;
 12345-10.201.30.13@tcp&lt;br/&gt;
 &amp;#91;root@n-oss01 ~&amp;#93;# lctl ping 10.201.30.13@tcp # mds-eth&lt;br/&gt;
 12345-0@lo&lt;br/&gt;
 12345-10.201.62.13@o2ib&lt;br/&gt;
 12345-10.201.30.13@tcp&lt;/p&gt;

&lt;p&gt;The filesystem on the OSS was created via:&lt;/p&gt;

&lt;p&gt; &amp;#91;root@n-oss01 ~&amp;#93;# mkfs.lustre --reformat --fsname=foobar --ost --mgsnode=10.201.62.13@o2ib,10.201.30.13@tcp --index=0 /dev/mapper/vd01&lt;br/&gt;
 &amp;#91;root@n-oss01 ~&amp;#93;# mount -t lustre /dev/mapper/vd01 /lustre/vd01 &amp;amp;&amp;amp; sleep 2 &amp;amp;&amp;amp; lctl dl&lt;br/&gt;
 &amp;#91;root@n-oss01 ~&amp;#93;# lctl dl&lt;br/&gt;
  0 UP mgc MGC10.201.62.13@o2ib 7408e9c5-b92e-5423-fa52-497d0c540a43 5&lt;br/&gt;
  1 UP ost OSS OSS_uuid 3&lt;br/&gt;
  2 UP obdfilter foobar-OST0000 foobar-OST0000_UUID 5&lt;/p&gt;

&lt;p&gt;So the OSS seems to be happy, and the MDS also looks fine:&lt;/p&gt;

&lt;p&gt; &amp;#91;root@n-mds1 ~&amp;#93;# lctl dl&lt;br/&gt;
  0 UP mgs MGS MGS 7&lt;br/&gt;
  1 UP mgc MGC10.201.62.13@o2ib 3aecba06-ec8b-aeab-2151-47d5a1c1bc47 5&lt;br/&gt;
  2 UP lov foobar-MDT0000-mdtlov foobar-MDT0000-mdtlov_UUID 4&lt;br/&gt;
  3 UP mdt foobar-MDT0000 foobar-MDT0000_UUID 3&lt;br/&gt;
  4 UP mds mdd_obd-foobar-MDT0000 mdd_obd_uuid-foobar-MDT0000 3&lt;br/&gt;
  5 UP osc foobar-OST0000-osc-MDT0000 foobar-MDT0000-mdtlov_UUID 5&lt;/p&gt;

&lt;p&gt;Mounting the filesystem on the IB-only client now works just fine:&lt;/p&gt;

&lt;p&gt; &amp;#91;root@a9115 ~&amp;#93;# cat /etc/modprobe.d/lustre.conf&lt;br/&gt;
 options lnet networks=&quot;o2ib(ib0)&quot;&lt;br/&gt;
 &amp;#91;root@a9115 ~&amp;#93;# modprobe lustre&lt;br/&gt;
 &amp;#91;root@a9115 ~&amp;#93;# lctl list_nids&lt;br/&gt;
 10.201.36.34@o2ib&lt;br/&gt;
 &amp;#91;root@a9115 ~&amp;#93;# lctl ping 10.201.62.13@o2ib&lt;br/&gt;
 12345-0@lo&lt;br/&gt;
 12345-10.201.62.13@o2ib&lt;br/&gt;
 12345-10.201.30.13@tcp&lt;br/&gt;
 &amp;#91;root@a9115 ~&amp;#93;# mount -t lustre 10.201.62.13@o2ib:/foobar /cluster/scratch&lt;/p&gt;

&lt;p&gt;..but the Ethernet-only client fails:&lt;/p&gt;

&lt;p&gt; &amp;#91;root@a9116 ~&amp;#93;# cat /etc/modprobe.d/lustre.conf&lt;br/&gt;
 options lnet networks=tcp(eth0)&lt;br/&gt;
 &amp;#91;root@a9116 ~&amp;#93;# modprobe lustre&lt;br/&gt;
 &amp;#91;root@a9116 ~&amp;#93;# lctl list_nids&lt;br/&gt;
 10.201.4.35@tcp&lt;/p&gt;

&lt;p&gt; lctl ping seems to work:&lt;br/&gt;
 &amp;#91;root@a9116 ~&amp;#93;# lctl ping 10.201.30.13@tcp  # mds&lt;br/&gt;
 12345-0@lo&lt;br/&gt;
 12345-10.201.62.13@o2ib&lt;br/&gt;
 12345-10.201.30.13@tcp&lt;br/&gt;
 &amp;#91;root@a9116 ~&amp;#93;# lctl ping 10.201.30.31@tcp  # oss&lt;br/&gt;
 12345-0@lo&lt;br/&gt;
 12345-10.201.62.31@o2ib&lt;br/&gt;
 12345-10.201.30.31@tcp&lt;/p&gt;

&lt;p&gt;..the mount operation fails with:&lt;/p&gt;

&lt;p&gt; &amp;#91;root@a9116 ~&amp;#93;# lctl clear&lt;br/&gt;
 &amp;#91;root@a9116 ~&amp;#93;# mount -t lustre 10.201.30.13@tcp:/foobar /cluster/scratch/&lt;br/&gt;
 mount.lustre: mount 10.201.30.13@tcp:/foobar at /cluster/scratch failed: No such file or directory&lt;/p&gt;

&lt;p&gt; Apr 12 14:10:21 a9116 kernel: Lustre: MGC10.201.30.13@tcp: Reactivating import&lt;br/&gt;
 Apr 12 14:10:21 a9116 kernel: LustreError: 9130:0:(ldlm_lib.c:381:client_obd_setup()) can&apos;t add initial connection&lt;br/&gt;
 Apr 12 14:10:21 a9116 kernel: LustreError: 9130:0:(obd_config.c:521:class_setup()) setup foobar-MDT0000-mdc-ffff880e3a01f400 failed (-2)&lt;br/&gt;
 Apr 12 14:10:21 a9116 kernel: LustreError: 9130:0:(obd_config.c:1362:class_config_llog_handler()) Err -2 on cfg command:&lt;br/&gt;
 Apr 12 14:10:21 a9116 kernel: Lustre:    cmd=cf003 0:foobar-MDT0000-mdc  1:foobar-MDT0000_UUID  2:10.201.62.13@o2ib  &lt;br/&gt;
 Apr 12 14:10:21 a9116 kernel: LustreError: 15c-8: MGC10.201.30.13@tcp: The configuration from log &apos;foobar-client&apos; failed (-2). This may be the result of communication errors between this node and the MGS,&lt;br/&gt;
 a bad configuration, or other errors. See the syslog for more information.&lt;br/&gt;
 Apr 12 14:10:21 a9116 kernel: LustreError: 9116:0:(llite_lib.c:978:ll_fill_super()) Unable to process log: -2&lt;br/&gt;
 Apr 12 14:10:21 a9116 kernel: LustreError: 9116:0:(obd_config.c:566:class_cleanup()) Device 3 not setup&lt;br/&gt;
 Apr 12 14:10:21 a9116 kernel: LustreError: 9116:0:(ldlm_request.c:1170:ldlm_cli_cancel_req()) Got rc -108 from cancel RPC: canceling anyway&lt;br/&gt;
 Apr 12 14:10:22 a9116 kernel: LustreError: 9116:0:(ldlm_request.c:1796:ldlm_cli_cancel_list()) ldlm_cli_cancel_list: -108&lt;br/&gt;
 Is the MGS specification correct?&lt;br/&gt;
 Is the filesystem name correct?&lt;br/&gt;
 If upgrading, is the copied client log valid? (see upgrade docs)&lt;br/&gt;
 Apr 12 14:10:22 a9116 kernel: Lustre: client ffff880e3a01f400 umount complete&lt;br/&gt;
 Apr 12 14:10:22 a9116 kernel: LustreError: 9116:0:(obd_mount.c:2349:lustre_fill_super()) Unable to mount  (-2)&lt;/p&gt;

&lt;p&gt;Why does the Ethernet client receive (or pick?) the InfiniBand NID of the MDS (10.201.62.13@o2ib)?&lt;/p&gt;

&lt;p&gt;&apos;lctl dk&apos; reports the same:&lt;/p&gt;

&lt;p&gt; 00000020:01000000:7.0:1334232621.765857:0:9130:0:(obd_config.c:1217:class_config_llog_handler()) Marker, inst_flg=0x0 mark_flg=0x1&lt;br/&gt;
 00000020:00000080:7.0:1334232621.765859:0:9130:0:(obd_config.c:915:class_process_config()) processing cmd: cf010&lt;br/&gt;
 00000020:00000080:7.0:1334232621.765860:0:9130:0:(obd_config.c:984:class_process_config()) marker 5 (0x1) foobar-MDT0000 add mdc&lt;br/&gt;
 00000020:00000080:7.0:1334232621.765861:0:9130:0:(obd_config.c:915:class_process_config()) processing cmd: cf005&lt;br/&gt;
 00000020:00000080:7.0:1334232621.765870:0:9130:0:(obd_config.c:926:class_process_config()) adding mapping from uuid 10.201.62.13@o2ib to nid 0x500000ac93e0d (10.201.62.13@o2ib)&lt;br/&gt;
 00000020:00000080:7.0:1334232621.765873:0:9130:0:(obd_config.c:915:class_process_config()) processing cmd: cf005&lt;br/&gt;
 00000020:00000080:7.0:1334232621.765874:0:9130:0:(obd_config.c:926:class_process_config()) adding mapping from uuid 10.201.62.13@o2ib to nid 0x200000ac91e0d (10.201.30.13@tcp)&lt;br/&gt;
 00000020:01000000:7.0:1334232621.765877:0:9130:0:(obd_config.c:1299:class_config_llog_handler()) cmd cf001, instance name: foobar-MDT0000-mdc-ffff880e3a01f400&lt;br/&gt;
 00000020:00000080:7.0:1334232621.765878:0:9130:0:(obd_config.c:915:class_process_config()) processing cmd: cf001&lt;br/&gt;
 00000020:00000080:7.0:1334232621.765879:0:9130:0:(obd_config.c:318:class_attach()) attach type mdc name: foobar-MDT0000-mdc-ffff880e3a01f400 uuid: 6a7aaf3a-bbcb-9abf-3516-4320e2718614&lt;br/&gt;
 00000020:00000080:7.0:1334232621.765936:0:9130:0:(genops.c:348:class_newdev()) Adding new device foobar-MDT0000-mdc-ffff880e3a01f400 (ffff8810358320b8)&lt;br/&gt;
 00000020:00000080:7.0:1334232621.765938:0:9130:0:(obd_config.c:392:class_attach()) OBD: dev 3 attached type mdc with refcount 1&lt;br/&gt;
 00000020:01000000:7.0:1334232621.765940:0:9130:0:(obd_config.c:1299:class_config_llog_handler()) cmd cf003, instance name: foobar-MDT0000-mdc-ffff880e3a01f400&lt;br/&gt;
 00000020:00000080:7.0:1334232621.765941:0:9130:0:(obd_config.c:915:class_process_config()) processing cmd: cf003&lt;br/&gt;
 00000100:00000100:7.0:1334232621.765957:0:9130:0:(client.c:80:ptlrpc_uuid_to_connection()) cannot find peer 10.201.62.13@o2ib!&lt;br/&gt;
 00010000:00080000:7.0:1334232621.765959:0:9130:0:(ldlm_lib.c:74:import_set_conn()) can&apos;t find connection 10.201.62.13@o2ib&lt;br/&gt;
 00010000:00020000:7.0:1334232621.765960:0:9130:0:(ldlm_lib.c:381:client_obd_setup()) can&apos;t add initial connection&lt;br/&gt;
 00000020:00000080:7.0:1334232621.793034:0:9130:0:(genops.c:786:class_export_put()) final put ffff88103af10400/6a7aaf3a-bbcb-9abf-3516-4320e2718614&lt;br/&gt;
 00000020:00020000:7.0:1334232621.793043:0:9130:0:(obd_config.c:521:class_setup()) setup foobar-MDT0000-mdc-ffff880e3a01f400 failed (-2)&lt;br/&gt;
 00000020:00000080:4.0:1334232621.793043:0:8930:0:(genops.c:915:class_import_destroy()) destroying import ffff881039698800 for foobar-MDT0000-mdc-ffff880e3a01f400&lt;br/&gt;
 00000020:00000080:4.0:1334232621.793049:0:8930:0:(genops.c:743:class_export_destroy()) destroying export ffff88103af10400/6a7aaf3a-bbcb-9abf-3516-4320e2718614 for foobar-MDT0000-mdc-ffff880e3a01f400&lt;br/&gt;
 00000020:00020000:7.0:1334232621.823020:0:9130:0:(obd_config.c:1362:class_config_llog_handler()) Err -2 on cfg command:&lt;br/&gt;
 00000020:02000400:7.0:1334232621.850778:0:9130:0:(obd_config.c:1456:class_config_dump_handler())    cmd=cf003 0:foobar-MDT0000-mdc  1:foobar-MDT0000_UUID  2:10.201.62.13@o2ib&lt;br/&gt;
 00000020:01000000:31.0:1334232621.850818:0:9116:0:(obd_config.c:1393:class_config_parse_llog()) Processed log foobar-client gen 1-13 (rc=-2)&lt;/p&gt;

&lt;p&gt;I&apos;m also puzzled about the NIDs in CONFIGS/foobar-&amp;#91;client,MDT0000&amp;#93;:&lt;/p&gt;

&lt;p&gt; &amp;#91;root@n-mds1 ~&amp;#93;# llog_reader  /lustre/mds/CONFIGS/foobar-client |grep uuid&lt;br/&gt;
 Target uuid : config_uuid&lt;br/&gt;
                uuid=foobar-clilov_UUID  stripe:cnt=1 size=1048576 offset=18446744073709551615 pattern=0x1&lt;br/&gt;
                uuid=foobar-clilmv_UUID  stripe:cnt=0 size=0 offset=0 pattern=0&lt;br/&gt;
 #10 (088)add_uuid  nid=10.201.62.13@o2ib(0x500000ac93e0d)  0:  1:10.201.62.13@o2ib  &lt;br/&gt;
 #11 (088)add_uuid  nid=10.201.30.13@tcp(0x200000ac91e0d)  0:  1:10.201.62.13@o2ib  &lt;br/&gt;
 #20 (088)add_uuid  nid=10.201.62.31@o2ib(0x500000ac93e1f)  0:  1:10.201.62.31@o2ib  &lt;br/&gt;
 #21 (088)add_uuid  nid=10.201.30.31@tcp(0x200000ac91e1f)  0:  1:10.201.62.31@o2ib  &lt;br/&gt;
 &amp;#91;root@n-mds1 ~&amp;#93;# llog_reader  /lustre/mds/CONFIGS/foobar-MDT0000 |grep uuid&lt;br/&gt;
 Target uuid : config_uuid&lt;br/&gt;
                uuid=foobar-MDT0000-mdtlov_UUID  stripe:cnt=1 size=1048576 offset=18446744073709551615 pattern=0x1&lt;br/&gt;
 #11 (088)add_uuid  nid=10.201.62.31@o2ib(0x500000ac93e1f)  0:  1:10.201.62.31@o2ib  &lt;br/&gt;
 #12 (088)add_uuid  nid=10.201.30.31@tcp(0x200000ac91e1f)  0:  1:10.201.62.31@o2ib  &lt;/p&gt;

&lt;p&gt;Is it normal that there are no tcp NIDs on the right-hand side, and what stupid mistake did I make while setting up the system?&lt;/p&gt;</description>
                <environment>Lustre 2.2.0 system</environment>
        <key id="14026">LU-1326</key>
            <summary>Multihomed configuration with lustre 2.2.0</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="4" iconUrl="https://jira.whamcloud.com/images/icons/priorities/minor.svg">Minor</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="3">Duplicate</resolution>
                                        <assignee username="wc-triage">WC Triage</assignee>
                                    <reporter username="ethz.support">ETHz Support</reporter>
                        <labels>
                    </labels>
                <created>Mon, 16 Apr 2012 09:32:29 +0000</created>
                <updated>Tue, 17 Apr 2012 02:35:33 +0000</updated>
                            <resolved>Mon, 16 Apr 2012 12:46:01 +0000</resolved>
                                    <version>Lustre 2.2.0</version>
                                                        <due></due>
                            <votes>0</votes>
                                    <watches>1</watches>
                                                                            <comments>
                            <comment id="34786" author="ethz.support" created="Mon, 16 Apr 2012 11:48:19 +0000"  >&lt;p&gt;I think that this is the same problem/bug as in&lt;br/&gt;
&lt;a href=&quot;http://jira.whamcloud.com/browse/LU-1308&quot; class=&quot;external-link&quot; rel=&quot;nofollow&quot;&gt;http://jira.whamcloud.com/browse/LU-1308&lt;/a&gt; :&lt;/p&gt;

&lt;p&gt;Apr 11 16:55:53 n1-4-1 kernel: Lustre: MGC172.16.126.1@tcp:&lt;br/&gt;
Reactivating import &amp;lt;-- mount starts via TCP&lt;br/&gt;
....&lt;br/&gt;
Apr 11 16:55:53 n1-4-1 kernel: Lustre: cmd=cf003 0:scratch-MDT0000-mdc&lt;br/&gt;
1:scratch-MDT0000_UUID 2:172.16.193.1@o2ib  &amp;lt;-- what is o2ib doing&lt;br/&gt;
here?!&lt;/p&gt;

&lt;p&gt;That&apos;s exactly the same message that we are getting on our installation.&lt;/p&gt;</comment>
                            <comment id="34813" author="pjones" created="Mon, 16 Apr 2012 12:34:29 +0000"  >&lt;p&gt;Yes, I think that you are correct about LU-1308 being a duplicate.&lt;/p&gt;</comment>
                            <comment id="34820" author="pjones" created="Mon, 16 Apr 2012 12:46:01 +0000"  >&lt;p&gt;OK, let&apos;s track this issue under &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-1308&quot; title=&quot;2.2 clients unable to mount upgraded MDT&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-1308&quot;&gt;&lt;del&gt;LU-1308&lt;/del&gt;&lt;/a&gt;, as that was opened first. I will be assigning that ticket shortly.&lt;/p&gt;</comment>
                            <comment id="34822" author="ethz.support" created="Mon, 16 Apr 2012 12:46:58 +0000"  >&lt;p&gt;Could you give me a workaround? Or are you working on a patch?&lt;/p&gt;</comment>
                            <comment id="34823" author="ethz.support" created="Mon, 16 Apr 2012 12:47:23 +0000"  >&lt;p&gt;ok&lt;/p&gt;</comment>
                            <comment id="34903" author="green" created="Tue, 17 Apr 2012 02:35:19 +0000"  >&lt;p&gt;Please try this patch &lt;a href=&quot;http://review.whamcloud.com/2561&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/2561&lt;/a&gt;&lt;/p&gt;</comment>
                    </comments>
                    <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                    <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzvh27:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>6413</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                </customfields>
    </item>
</channel>
</rss>