[LU-11840] Multi-Rail dynamic discovery prevents mounting a filesystem when some NICs are unreachable Created: 08/Jan/19  Updated: 25/Nov/20

Status: Open
Project: Lustre
Component/s: None
Affects Version/s: Lustre 2.11.0, Lustre 2.12.0
Fix Version/s: None

Type: Bug Priority: Minor
Reporter: Aurelien Degremont (Inactive) Assignee: Amir Shehata (Inactive)
Resolution: Unresolved Votes: 1
Labels: None

Issue Links:
Related
is related to LU-13548 LNet: b2_12 discovery of non-MR peers... Open
Severity: 3
Rank (Obsolete): 9223372036854775807

 Description   

In recent Lustre releases, certain filesystems cannot be mounted due to a communication error between clients and servers, depending on the LNet configuration.

Consider a filesystem running on a host with two interfaces, say tcp0 and tcp1, with the targets set up to reply on both interfaces (formatted with --servicenode IP1@tcp0,IP2@tcp1).

If a client is connected only to tcp0 and tries to mount this filesystem, the mount fails with an I/O error because the client tries to connect using the tcp1 interface.

Mount failed:

 

# mount -t lustre x.y.z.a@tcp:/lustre /mnt/lustre
mount.lustre: mount x.y.z.a@tcp:/lustre at /mnt/client failed: Input/output error
Is the MGS running?

dmesg shows that communication fails using the wrong NID:

[422880.743179] LNetError: 19787:0:(lib-move.c:1714:lnet_select_pathway()) no route to a.b.c.d@tcp1
# lnetctl peer show
peer:
    - primary nid: a.b.c.d@tcp1
      Multi-Rail: False
      peer ni:
        - nid: x.y.z.a@tcp
          state: NA
        - nid: 0@<0:0>
          state:

Ping is OK though:

# lctl ping x.y.z.a@tcp
12345-0@lo
12345-a.b.c.d@tcp1
12345-x.y.z.a@tcp

 

This was tested with 2.10.5 and 2.12 as server versions and 2.10, 2.11 and 2.12 as client versions.

Only the 2.10 client is able to mount the filesystem properly with this configuration.

 

I git-bisected the regression down to commit 0f1aaad "LU-9480 lnet: implement Peer Discovery".

Looking at the debug log, the client:

  • sets up the peer with the proper NI
  • then pings the peer
  • updates the local peer info with the wrong NI coming from the ping reply

The data in the reply appears to announce the tcp1 NID as the primary NID.

The client will then use this NID to contact the server even though it has no direct connection to it (tcp1) and has a working one for the same peer (tcp0).



 Comments   
Comment by Aurelien Degremont (Inactive) [ 08/Jan/19 ]

I made more tests.

2.12 client and 2.12 server: OK

2.10 client and 2.10 server: OK

 

2.12 client with 2.10.5 server: broken

 

Looks like it is related to the MR-capable/discovery-capable feature.

 

Comment by Amir Shehata (Inactive) [ 09/Jan/19 ]

Aurelien, do you see this with a 2.11 client -> 2.10.5 server?

I'm suspecting a timeout length issue. But if you could verify the above, it'll prove it to me.

Comment by Aurelien Degremont (Inactive) [ 10/Jan/19 ]

2.11 client and 2.10.5 server: broken

 

Moreover, I can work around the problem if I add the peer first:

lnetctl peer add --prim_nid x.y.z.a@tcp
mount -t lustre x.y.z.a@tcp:/fsx /mnt/client/

Timeout is a good lead as it seems to be what I see in the logs.

 

Looks like if the server is 2.12, all clients (2.10, 2.11 and 2.12) successfully mount the FS.

Comment by Aurelien Degremont (Inactive) [ 11/Jan/19 ]

If you have a potential understanding of the problem and have leads/ideas I can follow or dig into, let me know.

Comment by Amir Shehata (Inactive) [ 11/Jan/19 ]

It seems like there is a misunderstanding around the primary_nid. When you add the peer explicitly it works, but if you don't, discovery depends on the list of NIDs that comes back in the ping response. The first NID in that list is considered the primary_nid. Based on this:

# lctl ping x.y.z.a@tcp
12345-0@lo
12345-a.b.c.d@tcp1
12345-x.y.z.a@tcp

The tcp1 interface is first. So LNet thinks it's the primary NID of the node and tries to send messages to it, but there are no routes.

If you set up the server in such a way that the tcp interface is listed first, does that resolve the problem?

# lctl ping x.y.z.a@tcp
12345-0@lo
12345-x.y.z.a@tcp 
12345-a.b.c.d@tcp1
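
One way to get that ordering, when the server's NIDs come from module parameters, is to list the tcp network first in the networks string, since the ping reply generally advertises the local NIs in configuration order. A sketch only (the file path and the interface names eth0/eth1 are placeholders, not from this ticket):

```shell
# /etc/modprobe.d/lustre.conf on the server (hypothetical interface names).
# Listing tcp before tcp1 makes the tcp NID the first non-loopback NID
# advertised in ping replies, and hence the primary NID seen by peers.
options lnet networks="tcp(eth1),tcp1(eth0)"
```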
Comment by Aurelien Degremont (Inactive) [ 11/Jan/19 ]

Wooh! It works! Thanks a lot! I will work on a workaround based on that.

 

But I thought that, especially since MR appeared, it would select the best available route; in this case, even if the primary NID is not reachable, another one is. Did I misunderstand the MR features?

Comment by Amir Shehata (Inactive) [ 11/Jan/19 ]

LNet Health, which is in 2.12, should be able to do that. You'll need to enable it.

https://build.whamcloud.com/job/lustre-manual/lastSuccessfulBuild/artifact/lustre_manual.xhtml#dbdoclet.mrhealth

Although, I'm a bit confused by the test matrix you outlined. When you tested 2.12->2.12 it should still show the same problem, but you're saying it works. When you had 2.12 on the servers, was the order of the NIDs still the same? Or in these test runs was the order:

 # lctl ping x.y.z.a@tcp
12345-0@lo
12345-x.y.z.a@tcp 
12345-a.b.c.d@tcp1
Comment by Aurelien Degremont (Inactive) [ 14/Jan/19 ]

I confirm that with a 2.12 server, where tcp1 is declared first, 2.12 client mount is still ok.

 

That's also why I'm thinking there is some MR magic in action here. Looks like the peer state on a 2.12 client is not the same depending on whether the server supports 'push' or not. It seems you're right that in the simple case it will just use the first NID returned by the ping.

 

The problem is that there is still a regression between 2.10 and 2.12, in my opinion. A setup that did not look unsupported was working with a 2.10 client and no longer is with 2.12.

I think the 2.12 client relies on a server-side feature, introduced after 2.10, to properly set up its peer state and use the correct NID when mounting. If the server is a 2.10 one, the push/discovery feature did not exist, so the client does not try to do anything smart with all the available NIDs. It will just try the first one and fail.

 

Comment by Aurelien Degremont (Inactive) [ 14/Jan/19 ]

More test results, with a 2.12 client and a 2.10 server, and NIDs set in non-optimal order (ping returns tcp1 as the first NID):

  • Add peer before mount: lnetctl peer add --prim_nid x.y.z.a@tcp: mount is OK
  • Add peer, declared as non-MR, before mount: lnetctl peer add --prim_nid x.y.z.a@tcp --non_mr: error
  • Add peer, bad NID as prim_nid, 2 NIDs declared: lnetctl peer add --prim_nid a.b.c.d@tcp1 --nid x.y.z.a@tcp,a.b.c.d@tcp1: OK
  • Add peer, bad NID as prim_nid, 2 NIDs declared as non-MR: lnetctl peer add --prim_nid a.b.c.d@tcp1 --nid x.y.z.a@tcp,a.b.c.d@tcp1 --non_mr: error

 

So that means that to have it working, we need either:

  • tcp0 as the first NID returned by the server, or
  • the peer explicitly declared as Multi-Rail capable, whatever the prim_nid is.

Hope that helps you understand the problem.
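
If reordering the server's NIDs is not possible, the second option can be scripted on the client before mounting; a sketch reusing the placeholder NIDs from this ticket:

```shell
# Pre-declare the server peer as Multi-Rail capable with all of its NIDs,
# so LNet can choose the reachable tcp NID instead of the unreachable tcp1 one.
lnetctl peer add --prim_nid x.y.z.a@tcp --nid x.y.z.a@tcp,a.b.c.d@tcp1
mount -t lustre x.y.z.a@tcp:/lustre /mnt/lustre
```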

Comment by Amir Shehata (Inactive) [ 16/Jan/19 ]

As discussed today, the workaround where you configure the tcp NID to be primary on the server will work in your case.

In the meantime I've been looking at a way to resolve the incompatibility between a discovery-enabled node and a non-discovery-capable node (i.e. 2.10.x), and I have hit a snag.

I'm testing two different scenarios

  1. OST(2.12) MDT(2.10.x) Client(2.12)
  2. OST(2.10.x) MDT(2.10.x) Client (2.12)

Unfortunately, in both scenarios Lustre does its own NID lookup without using LNet to pull the NID information, particularly here:

/**
 * Retrieve MDT nids from the client log, then start the lwp device.
 * there are only two scenarios which would include mdt nid.
 * 1.
 * marker   5 (flags=0x01, v2.1.54.0) lustre-MDTyyyy  'add mdc' xxx-
 * add_uuid  nid=192.168.122.162@tcp(0x20000c0a87aa2)  0:  1:192.168.122.162@tcp
 * attach    0:lustre-MDTyyyy-mdc  1:mdc  2:lustre-clilmv_UUID
 * setup     0:lustre-MDTyyyy-mdc  1:lustre-MDTyyyy_UUID  2:192.168.122.162@tcp
 * add_uuid  nid=192.168.172.1@tcp(0x20000c0a8ac01)  0:  1:192.168.172.1@tcp
 * add_conn  0:lustre-MDTyyyy-mdc  1:192.168.172.1@tcp
 * modify_mdc_tgts add 0:lustre-clilmv  1:lustre-MDTyyyy_UUID xxxx
 * marker   5 (flags=0x02, v2.1.54.0) lustre-MDTyyyy  'add mdc' xxxx-
 * 2.
 * marker   7 (flags=0x01, v2.1.54.0) lustre-MDTyyyy  'add failnid' xxxx-
 * add_uuid  nid=192.168.122.2@tcp(0x20000c0a87a02)  0:  1:192.168.122.2@tcp
 * add_conn  0:lustre-MDTyyyy-mdc  1:192.168.122.2@tcp
 * marker   7 (flags=0x02, v2.1.54.0) lustre-MDTyyyy  'add failnid' xxxx-
 **/
static int client_lwp_config_process(const struct lu_env *env,
				     struct llog_handle *handle,
				     struct llog_rec_hdr *rec, void *data)

Lustre tries to retrieve the MDT NIDs from the client log and looks at the first NID in the list. In both cases the OST is unable to connect to the MGS, because it uses the tcp1 NID to look up the peer and ends up with this error:

(events.c:543:ptlrpc_uuid_to_peer()) 192.168.122.117@tcp1->12345-<?>
(client.c:97:ptlrpc_uuid_to_connection()) cannot find peer 192.168.122.117@tcp1!

This error is independent of the backwards-compatibility issue. My config looks like:

OST:
----
net:
    - net type: lo
      local NI(s):
        - nid: 0@lo
          status: up
    - net type: tcp
      local NI(s):
        - nid: 192.168.122.114@tcp
          status: up
          interfaces:
              0: eth0
        - nid: 192.168.122.115@tcp
          status: up
          interfaces:
              0: eth1

MDT:
----
net:
    - net type: lo
      local NI(s):
        - nid: 0@lo
          status: up
    - net type: tcp1
      local NI(s):
        - nid: 192.168.122.117@tcp1
          status: up
          interfaces:
              0: eth0
    - net type: tcp
      local NI(s):
        - nid: 192.168.122.118@tcp
          status: up
          interfaces:
              0: eth1

I'm curious how you set up your OSTs so you don't run into the problem above?

Comment by Aurelien Degremont (Inactive) [ 16/Jan/19 ]

My LNet setup looks like the MDT one. There are two networks, tcp0 and tcp1, with only one interface on each of them.

We did the test together on a simple system where both the MDT and OST were on the same server, but I do not think this makes a difference here.

Looking at my MGS client llog, it looks rather like case #1.

Devices were formatted specifying a simple service node option (see ticket description).

Comment by Aurelien Degremont (Inactive) [ 12/Feb/19 ]

@ashehata Did you make any progress on this topic?

I'm facing a similar issue with a pure 2.10.5 configuration.

Lustre servers have both tcp0 and tcp1 NIDs. MDTs/OSTs are set up to use both of them. But the Lustre servers will try to communicate only using the first configured interface. If that fails (timeout), they will never try the second one.

Do you have any clue?

Comment by Amir Shehata (Inactive) [ 12/Feb/19 ]

Yes. I believe that's the issue I pointed to here: https://jira.whamcloud.com/browse/LU-11840?focusedCommentId=240077&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-240077

Lustre (not LNet) does its own NID lookup based on the logs. The assumption inherent in the code is that there is only one NID per node, which is not right.

Comment by Amir Shehata (Inactive) [ 12/Feb/19 ]

I'm working on a solution. Will update the ticket when I have a patch to test.

Comment by Aurelien Degremont (Inactive) [ 13/Feb/19 ]

Thanks a lot! Do you have a rough idea if this is days or weeks of work?

Comment by Amir Shehata (Inactive) [ 14/Feb/19 ]

I don't think that it's a huge amount of work, but I am focused on 2.13 feature work at the moment, so I have not looked at it in much detail yet.

Comment by Aurelien Degremont (Inactive) [ 15/Feb/19 ]

OK, understood.

A simple question based on your config output: should we declare 0@lo in an lnet.conf file used with lnetctl import?

I could not find a clear statement on that when looking in different places.

Comment by Amir Shehata (Inactive) [ 15/Feb/19 ]

No. 0@lo will always get ignored because it's created implicitly. So you don't have to have it in the lnet.conf file.

There is actually a patch, LU-10452 lnet: cleanup YAML output, which adds a "--backup" option to print a YAML block with only the elements needed to reconfigure a system.

lnetctl net show --backup 

# when you export, the backup behavior is applied automatically

lnetctl export > lnet.conf
Comment by Aurelien Degremont (Inactive) [ 15/Feb/19 ]

Really helpful. Thank you!

Comment by Aurelien Degremont (Inactive) [ 23/May/19 ]

For the record, disabling LNet discovery seems to work around the issue:

lnetctl set discovery 0

before mounting the Lustre client.
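
For a persistent setup, the same setting can also be expressed in the YAML file loaded with lnetctl import (a sketch, assuming the global discovery flag present in 2.12; check the output of lnetctl export on your system to confirm):

```
global:
    discovery: 0
```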

Comment by Aurelien Degremont (Inactive) [ 16/Jul/19 ]

I realized I could not reproduce this bug with the latest master branch code.

I tracked this behavior change down to between 2.12.54 and 2.12.55, so it is very likely related to the landing of the Multi-Rail Routing feature. I did not track down precisely which patch.

 

Lustre 2.12 is still impacted though.

Comment by sebg-crd-pm (Inactive) [ 09/Sep/19 ]

Hi, I have a similar problem when the client, server, and router are all Lustre 2.12.

The Lustre client tries to communicate using only the first configured interface of the server; it will never try the second one.

clientA:
options lnet networks="tcp(eno1)" routes="o2ib 172.26.1.222@tcp"

router:
options lnet networks="tcp(eno1),o2ib(ib0)" "forwarding=enabled"

server:
options lnet networks="tcp2(eno1),o2ib0(ib0)" routes="tcp 172.20.0.222@o2ib"
=> clientA mount server o2ib fail
options lnet networks="o2ib0(ib0),tcp2(eno1)" routes="tcp 172.20.0.222@o2ib"
=> clientA mount server o2ib ok

Comment by Karsten Weiss [ 05/Jun/20 ]

AFAICS I also have this issue:

  • server (lustre-2.10.4-1.el7.x86_64):
    options lnet networks=tcp0(en0),o2ib0(in0)
  • client (lustre-2.12.5-RC1-0.el7.x86_64):
    options lnet networks="o2ib(ib0)"

These two workarounds seem to work (only very limited testing so far):

  1. Configuring LNet tcp on the client (although I actually only want to use IB):
    options lnet networks="o2ib(ib0),tcp(enp3s0f0)"
  2. Executing this before the actual Lustre mount:
    lnetctl set discovery 0
Comment by Amir Shehata (Inactive) [ 05/Jun/20 ]

Karsten,

There is a known incompatibility between discovery-enabled Lustre and older Lustre versions, i.e. 2.10.

We have a ticket open for it and we're looking at how we can resolve this:

https://jira.whamcloud.com/browse/LU-13548

Currently disabling discovery on 2.12 is the best workaround.

Generated at Sat Feb 10 02:47:23 UTC 2024 using Jira 9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c.