[LU-10707] TCP eth routed LNet traffic broken Created: 23/Feb/18  Updated: 08/Nov/19  Resolved: 03/May/18

Status: Resolved
Project: Lustre
Component/s: None
Affects Version/s: Lustre 2.10.1, Lustre 2.10.2, Lustre 2.10.3
Fix Version/s: Lustre 2.10.4

Type: Bug Priority: Major
Reporter: SC Admin (Inactive) Assignee: James A Simmons
Resolution: Fixed Votes: 0
Labels: lnet, patch
Environment:

CentOS 7.4, OPA, QDR, kernel OFED, lustre-client 2.10.3.


Issue Links:
Related
is related to LU-9397 Inconsistence use of cfs_time_current... Resolved
is related to LU-6245 Untangle userland and kernel space su... Resolved
is related to LU-9019 Migrate lustre to standard 64 bit tim... Resolved
is related to LU-10807 ksocknal_reaper() jitter on b2_10 Resolved
Severity: 2
Rank (Obsolete): 9223372036854775807

 Description   

Hi Folks,

We've been experiencing a problem with our LNet routers in lustre 2.10.x and hoping we could get some guidance on a resolution.

In short: Connections from clients which reside in a TCP ethernet environment are timing out and expiring after the (default) "peer_timeout 180" limit is up. The same client/router configuration with lustre-client 2.9.0 on our routers does not have the same behaviour. As far as can be determined, the issue is only present on the ethernet side and only when the router uses lustre version 2.10.x (tried 2.10.1 / 2.10.2 / 2.10.3).
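For context, the 180 seconds here is the LND peer_timeout. Assuming the stock module parameter names, it can be checked (and overridden) on a node with something like:

cat /sys/module/ksocklnd/parameters/peer_timeout
# or persistently via a modprobe.d file (hypothetical file name):
# options ksocklnd peer_timeout=180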

Our routers have a single port OPA, dual port ConnectX-3, dual port ConnectX-4 100GbE, and dual port 10GbE. I tested with various combinations of those cards installed, the most basic failing configuration being a 10GigE and CX-3 to our QLogic fabric.

On the ethernet side, we've tried multiple ethernet fabrics (Cisco Nexus, Mellanox w/Cumulus) and multiple adapter configurations - native VLAN vs tagged VLAN, bonded vs non-bonded. Issues with all of them.

Multiple router/client lustre.conf configs were tried, including various (and empty) ko2iblnd.conf settings on the router too.

What's observed from the eth client:
If I only ping the @tcp router address, it will respond up until the 180-second timeout. Routes are marked as up during this period until the peer_timeout is reached, at which point the routes will be marked down.

However, if I ping a machine on the IB network, I'll receive an "Input/output error", eg:

 
"failed to ping 192.168.55.143@o2ib10: Input/output error"


Routes will then be marked down 50 seconds after the first "Input/output error" to an IB network.
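To watch the route state flip on the client during this window, a simple watch is enough, eg (a sketch; any interval works):

watch -n 5 'lctl show_route'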

On the LNet router, I'm not seeing any errors logged when pinging an IB network from the client and I've received an error. I do see a ping error in the logs when pinging an @tcp address, but only after the routes are marked down, eg:

[VM root@data-mover-dev ~]# lctl ping 10.8.49.16@tcp101
12345-0@lo
12345-192.168.44.16@o2ib44
12345-192.168.55.232@o2ib10
12345-192.168.55.232@o2ib
12345-10.8.49.16@tcp101
[VM root@data-mover-dev ~]#


wait the 180 secs..

[VM root@data-mover-dev ~]# lctl ping 10.8.49.16@tcp101
failed to ping 10.8.49.16@tcp101: Input/output error
[VM root@data-mover-dev ~]#


Feb 23 23:14:05 lnet02 kernel: LNetError: 33850:0:(lib-move.c:2120:lnet_parse_get()) 10.8.49.16@tcp101: Unable to send REPLY for GET from 12345-10.8.49.155@tcp101: -113


I found it a little tricky to debug the LNet traffic flow and welcome recommendations. At the TCP level I've captured the flow and can show the differences between a non-working 2.9.0 client / 2.10 router and a working 2.9.0 client/router. Would that be of any use? It only really shows the non-working lctl ping reply.
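The capture itself was nothing special, something along these lines on the client's LNet interface (assuming the default LNet acceptor port 988):

tcpdump -i eth3.3015 -w lnet-capture.pcap 'tcp port 988'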

Ethernet client's lustre.conf:

options lnet networks=tcp101(eth3.3015) routes="o2ib0 1 10.8.44.16@tcp101;o2ib10 1 10.8.44.16@tcp101;o2ib44 1 10.8.44.16@tcp101"


Lnet router's lustre.conf:

options lnet networks="o2ib44(ib0), o2ib10(ib1), o2ib0(ib1), tcp101(bond0.3015)" forwarding=enabled
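For reference, after loading the modules on the router, the configured NIs can be confirmed with something like:

lctl list_nids
lnetctl net show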

After searching around there's this thread which is pretty similar:
https://www.mail-archive.com/lustre-discuss@lists.lustre.org/msg14168.html
AFAIK we need 2.10.x for EL7.4. I'm not sure lustre-client 2.9.0 will build on EL7.4? (Can't build it via DKMS, and building from the source RPM fails - it looked like OFED changes in 7.4.)

Glad to provide more information on request.

Regards,
Simon



 Comments   
Comment by SC Admin (Inactive) [ 23/Feb/18 ]

Realised I pasted the older config line from the ethernet client's lustre.conf. It's actually:

options lnet networks=tcp101(eth3.3015) routes="o2ib0 1 10.8.49.16@tcp101;o2ib10 1 10.8.49.16@tcp101;o2ib44 1 10.8.49.16@tcp101"
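The routes the client ends up with can then be double-checked with, eg:

lnetctl route show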

Comment by Peter Jones [ 23/Feb/18 ]

Amir

Can you please advise?

Thanks

Peter

Comment by Amir Shehata (Inactive) [ 23/Feb/18 ]

Let's start with the most basic configuration:

OPA NODE <----> router <-----> TCP NODE

Can you provide the following information on both the OPA and TCP nodes:

lnetctl export > config.yaml

Also, can you enable net logging on the OPA, router and TCP nodes using:

lctl set_param debug=+"net neterror"

And then run the failed ping test. Afterwards collect the dump from all 3 nodes:

lctl dk > <node>.log

Comment by Amir Shehata (Inactive) [ 23/Feb/18 ]

I'm able to reproduce the problem. I'll update the ticket once I have a resolution.

Comment by Amir Shehata (Inactive) [ 24/Feb/18 ]

The problem was introduced by the following two patches:
LU-6245 libcfs: add ktime_get_real_seconds support
LU-9397 ksocklnd: move remaining time handling to 64 bits

LU-9397 needs to be reverted and the socklnd changes that were made as part of LU-6245 need to be reverted.

Comment by James A Simmons [ 25/Feb/18 ]

Instead of reverting, let's figure out what the problem is. Note that if you revert we end up with the problem of jiffies being used for node-to-node communication. If one node uses a different value of HZ then we can also run into problems. This would be trading one corner case for another. I will discuss with you a clear way to duplicate it so we can properly fix it.
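For example, if raw jiffies-derived values are ever exchanged between nodes, the kernels' HZ settings have to agree; a quick way to compare them (the config file path depends on the distro) is:

grep 'CONFIG_HZ=' /boot/config-$(uname -r)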

Comment by Peter Jones [ 24/Mar/18 ]

simmonsja it seems like this might take a while to work through. How about we revert to a consistent state for 2.10.4 while the longer term work is ongoing?

Comment by James A Simmons [ 24/Mar/18 ]

It's just a matter of me getting a test setup. I will talk to Amir. I think I know what fix is needed.

Comment by Gerrit Updater [ 28/Mar/18 ]

James Simmons (uja.ornl@yahoo.com) uploaded a new patch: https://review.whamcloud.com/31810
Subject: LU-10707 socklnd: replace cfs_duration_sec with cfs_time_seconds
Project: fs/lustre-release
Branch: master
Current Patch Set: 1
Commit: eb040d9f64db9bae0029b6a8481c5efc24c0462d

Comment by James A Simmons [ 29/Mar/18 ]

Hmm. The direction of the patch will be determined by porting back newer MOFED stack support since these patches have many dependencies on each other.

Comment by SC Admin (Inactive) [ 05/Apr/18 ]

Hi,

Had a test of 2.10.3 + patches #1, #2 but realised I'd missed out on a minor network config change due to some physical node changes in the last 3 weeks. I'd moved on to testing the new lustre-client 2.11.0 on the LNet router by the time I realised that. I can report that with 2.11.0 as the LNet router's lustre-client version (with the cfs_time_seconds change) it remains up and pingable for the ethernet client.

Routes currently look like this:

[VM root@data-mover-dev ~]# lctl show_route
net               o2ib hops 4294967295 gw                10.8.49.16@tcp101 up pri 0
net             o2ib10 hops 4294967295 gw                10.8.49.16@tcp101 up pri 0
net             o2ib44 hops 4294967295 gw                10.8.49.16@tcp101 down pri 0
[VM root@data-mover-dev ~]#

From the LNET router itself I'm able to mount and use lustre filesystems on

  • o2ib0
  • o2ib10
  • o2ib44

From the Ethernet client I can ping only o2ib0 & o2ib10. I can ping MDTs but not mount filesystems. Obviously I can't ping nor mount on o2ib44 either. Possibly a misconfiguration with the lustre module for OPA? This is our first go at routing between eth/OPA/IB with multiple filesystems on each.

[VM root@data-mover-dev ~]# mount -t lustre 192.168.55.143@o2ib10:/beer /beer
mount.lustre: mount 192.168.55.143@o2ib10:/beer at /beer failed: Input/output error
Is the MGS running?
[VM root@data-mover-dev ~]# mount -t lustre 192.168.55.129@o2ib:192.168.55.130@o2ib:/lustre /lustre
mount.lustre: mount 192.168.55.129@o2ib:192.168.55.130@o2ib:/lustre at /lustre failed: Input/output error
Is the MGS running?
[VM root@data-mover-dev ~]# lctl ping 192.168.55.143@o2ib10
failed to ping 192.168.55.143@o2ib10: Input/output error
[VM root@data-mover-dev ~]# lctl ping 192.168.55.143@o2ib10
12345-0@lo
12345-192.168.55.143@o2ib10
[VM root@data-mover-dev ~]#
[VM root@data-mover-dev ~]#
[VM root@data-mover-dev ~]#
[VM root@data-mover-dev ~]# lctl ping 192.168.55.129@o2ib; lctl ping 192.168.55.130@o2ib
failed to ping 192.168.55.129@o2ib: Input/output error
12345-0@lo
12345-192.168.55.130@o2ib
[VM root@data-mover-dev ~]# lctl ping 192.168.55.129@o2ib; lctl ping 192.168.55.130@o2ib
12345-0@lo
12345-192.168.55.129@o2ib
failed to ping 192.168.55.130@o2ib: Input/output error
[VM root@data-mover-dev ~]#
[VM root@data-mover-dev ~]#
[VM root@data-mover-dev ~]#

Getting late.. I'll re-do with 2.10.3 + patch tomorrow. It's also a chance to review and make sure I didn't miss anything else. It's been over a month since I looked at this.

Cheers,

Simon

 

 

Comment by Gerrit Updater [ 16/Apr/18 ]

James Simmons (uja.ornl@yahoo.com) uploaded a new patch: https://review.whamcloud.com/32015
Subject: LU-10707 ksocklnd: revert back to jiffies
Project: fs/lustre-release
Branch: b2_10
Current Patch Set: 1
Commit: 70f5192518961cfb056bd4fa1960c6520a030289

Comment by SC Admin (Inactive) [ 17/Apr/18 ]

Hi,

Thanks for putting out another update on this. I had a look at the latest 70f5192.diff this evening. I tested a patched LNet router running 2.10.3 with both an unpatched and a patched 2.10.3 lustre-client. No go unfortunately. Similar scenario as with the previous patches. Pings will remain working past the 180-second mark - though intermittently, eg:

[VM root@data-mover-dev ~]# lctl ping 10.8.49.16@tcp101
12345-0@lo
12345-192.168.44.16@o2ib44
12345-192.168.55.232@o2ib10
12345-192.168.55.232@o2ib
12345-10.8.49.16@tcp101
[VM root@data-mover-dev ~]# lctl ping 10.8.49.16@tcp101
12345-0@lo
12345-192.168.44.16@o2ib44
12345-192.168.55.232@o2ib10
12345-192.168.55.232@o2ib
12345-10.8.49.16@tcp101
[VM root@data-mover-dev ~]# lctl ping 10.8.49.16@tcp101
12345-0@lo
12345-192.168.44.16@o2ib44
12345-192.168.55.232@o2ib10
12345-192.168.55.232@o2ib
12345-10.8.49.16@tcp101
[VM root@data-mover-dev ~]# lctl ping 10.8.49.16@tcp101
failed to ping 10.8.49.16@tcp101: Input/output error
[VM root@data-mover-dev ~]# lctl ping 10.8.49.16@tcp101
failed to ping 10.8.49.16@tcp101: Input/output error
[VM root@data-mover-dev ~]# lctl ping 10.8.49.16@tcp101
12345-0@lo
12345-192.168.44.16@o2ib44
12345-192.168.55.232@o2ib10
12345-192.168.55.232@o2ib
12345-10.8.49.16@tcp101
[VM root@data-mover-dev ~]#

Routes seem to all be 'up' for 120 seconds, but I'm not able to actually route any traffic, eg:

LNET configured
Wed Apr 18 00:09:17 AEST 2018
net               o2ib hops 4294967295 gw                10.8.49.16@tcp101 up pri 0
net             o2ib10 hops 4294967295 gw                10.8.49.16@tcp101 up pri 0
net             o2ib44 hops 4294967295 gw                10.8.49.16@tcp101 up pri 0

Wed Apr 18 00:11:19 AEST 2018
net               o2ib hops 4294967295 gw                10.8.49.16@tcp101 up pri 0
net             o2ib10 hops 4294967295 gw                10.8.49.16@tcp101 up pri 0
net             o2ib44 hops 4294967295 gw                10.8.49.16@tcp101 up pri 0

Wed Apr 18 00:11:20 AEST 2018
net               o2ib hops 4294967295 gw                10.8.49.16@tcp101 down pri 0
net             o2ib10 hops 4294967295 gw                10.8.49.16@tcp101 down pri 0
net             o2ib44 hops 4294967295 gw                10.8.49.16@tcp101 down pri 0

I can still ping the lnet router from the client after the routes are marked down. eg:

[VM root@data-mover-dev ~]# lctl ping 10.8.49.16@tcp101
12345-0@lo
12345-192.168.44.16@o2ib44
12345-192.168.55.232@o2ib10
12345-192.168.55.232@o2ib
12345-10.8.49.16@tcp101
[VM root@data-mover-dev ~]#

But from the client, whilst the routes are marked 'up' I'm not able to ping a routed network. eg:

[VM root@data-mover-dev ~]# lctl ping 192.168.55.143@o2ib10
failed to ping 192.168.55.143@o2ib10: Input/output error
[VM root@data-mover-dev ~]# lctl ping 192.168.55.143@o2ib10
failed to ping 192.168.55.143@o2ib10: Input/output error
[VM root@data-mover-dev ~]# lctl ping 192.168.55.143@o2ib10
failed to ping 192.168.55.143@o2ib10: Input/output error

This works on another client which is routed via a 2.9.x lnet router. eg:

[VM root@data-mover01 ~]# lctl ping 192.168.55.143@o2ib10
12345-0@lo
12345-192.168.55.143@o2ib10
[VM root@data-mover01 ~]#

Cheers,
Simon

Comment by SC Admin (Inactive) [ 18/Apr/18 ]

An update: looked more into it this morning. Routes can stay up now.

 On some destination hosts we saw:

Apr 18 08:39:54 metadata01 kernel: LNetError: 1719:0:(o2iblnd_cb.c:2643:kiblnd_rejected()) 192.168.55.232@o2ib rejected: incompatible # of RDMA fragments 32, 256

On the LNET router I changed "map_on_demand=32" to "0", reloaded and got:

Apr 18 08:46:50 metadata01 kernel: LNetError: 1719:0:(o2iblnd_cb.c:2311:kiblnd_passive_connect()) Can't accept 192.168.55.232@o2ib: incompatible queue depth 128 (8 wanted)
Apr 18 08:46:50 metadata01 kernel: LNetError: 1719:0:(o2iblnd_cb.c:2311:kiblnd_passive_connect()) Skipped 3 previous similar messages

Again on the LNET router, I changed "peer_credits=128" to "8" and haven't seen further LNetErrors on the test hosts, nor routes marked down again.
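For reference, with those two changes the router's ko2iblnd.conf now amounts to something like (other tunables left at their defaults):

options ko2iblnd map_on_demand=0 peer_credits=8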

Pings are still erratic - repeating lctl pings to a host will result in success, then input/output errors, then success again, etc. Is this to be expected?

Still not able to mount our old filesystems (the ones on the TrueScale QLogic gear), but now the challenge seems to be having both OPA and TrueScale in the LNet router and finding the right ko2iblnd.conf settings. Will aim to get the OPA lustre storage servers configured to test the LNet router soon.

cheers
simon

Comment by SC Admin (Inactive) [ 18/Apr/18 ]

Good news. Got our QDR and OPA lustre filesystems up and going via the patched LNet router earlier tonight and they've remained that way since!

Cheers,

simon

 

Comment by Gerrit Updater [ 19/Apr/18 ]

Amir Shehata (amir.shehata@intel.com) uploaded a new patch: https://review.whamcloud.com/32082
Subject: LU-10707 lnet: revert to cfs_time functions
Project: fs/lustre-release
Branch: b2_10
Current Patch Set: 1
Commit: 44cdaa38e5ed2e53572b91ba08ba91680a616532

Comment by Sebastien Buisson (Inactive) [ 20/Apr/18 ]

With patch https://review.whamcloud.com/32082, I am not able to reproduce the ping timeout issue anymore.

Comment by SC Admin (Inactive) [ 21/Apr/18 ]

Added in the updated patch https://review.whamcloud.com/32082 and it's resolved the reconnects on lustre nodes, plus lnet_selftest passes now.
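For the record, an lnet_selftest run between the ethernet client and a routed o2ib node goes roughly like this (a sketch using the standard lst workflow; the session name is arbitrary, the NIDs are just the ones from above, and the lnet_selftest module must be loaded on every node involved):

modprobe lnet_selftest
export LST_SESSION=$$
lst new_session rtr_check
lst add_group clients 10.8.49.155@tcp101
lst add_group servers 192.168.55.143@o2ib10
lst add_batch bulk
lst add_test --batch bulk --from clients --to servers brw write size=1M
lst run bulk
lst stat clients servers & sleep 30; kill $!
lst stop bulk
lst end_session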

cheers,
Simon

Comment by Peter Jones [ 21/Apr/18 ]

That's good news, Simon. We'll look to queue up this fix for the upcoming 2.10.4 release.

Comment by James A Simmons [ 21/Apr/18 ]

Sadly some of the reverted work was a back-port from the Linux lustre client. This means that the upstream client is broken with routers.

Comment by James A Simmons [ 22/Apr/18 ]

I see two patches are needed: one patch from me, https://review.whamcloud.com/#/c/32015, and another, https://review.whamcloud.com/#/c/32082, from Amir. Sebastien, can you change your review on my patch so both can land?

Comment by Gerrit Updater [ 03/May/18 ]

John L. Hammond (john.hammond@intel.com) merged in patch https://review.whamcloud.com/32015/
Subject: LU-10707 ksocklnd: revert back to jiffies
Project: fs/lustre-release
Branch: b2_10
Current Patch Set:
Commit: 62947eaec70d74d753faadee3f22f928b59fec52

Comment by Gerrit Updater [ 03/May/18 ]

John L. Hammond (john.hammond@intel.com) merged in patch https://review.whamcloud.com/32082/
Subject: LU-10707 lnet: revert to cfs_time functions
Project: fs/lustre-release
Branch: b2_10
Current Patch Set:
Commit: 0049c057d0ad5e1c56dc972004ca414dbfe6a6b8

Comment by James A Simmons [ 03/May/18 ]

Should be fixed now. If you still see the problem feel free to reopen.
