[LU-12197] incorrect peer nids with discovery enabled Created: 18/Apr/19 Updated: 12/Oct/19 Resolved: 12/Oct/19 |
|
| Status: | Resolved |
| Project: | Lustre |
| Component/s: | None |
| Affects Version/s: | Lustre 2.12.0 |
| Fix Version/s: | None |
| Type: | Bug | Priority: | Major |
| Reporter: | Ruth Klundt (Inactive) | Assignee: | Amir Shehata (Inactive) |
| Resolution: | Duplicate | Votes: | 0 |
| Labels: | arm | ||
| Environment: | ARM clients: kernel 4.14.0-115.2.2.el7a.aarch64 MLNX_OFED_LINUX-4.5-1.0.1.0 (OFED-4.5-1.0.1) |
| Issue Links: |
| Severity: | 3 |
| Rank (Obsolete): | 9223372036854775807 |
| Description |
|
The client-side symptom is alternating success/failure of lnet ping to an OSS. On the OSS we see:

    # lnetctl peer show --nid n1-ib0@o2ib
    peer:
        - primary nid: xxx.xxx.xxx.17@o2ib
          Multi-Rail: True
          peer ni:
            - nid: xxx.xxx.xxx.17@o2ib
              state: NA
            - nid: xxx.xxx.xxx.182@o2ib
              state: NA
    # lnetctl peer show --nid n2-ib0@o2ib
    peer:
        - primary nid: xxx.xxx.xxx.17@o2ib
          Multi-Rail: True
          peer ni:
            - nid: xxx.xxx.xxx.17@o2ib
              state: NA
            - nid: xxx.xxx.xxx.182@o2ib
              state: NA

where n1 has an ipaddr ending in 182 and n2 has an ipaddr ending in 17 - that is, two distinct clients end up merged under one primary nid on the OSS. The result in the logs is lots of timeouts, put NAKs, mount failures and general chaos, including plenty of the following message:

    kernel: LustreError: 21309:0:(events.c:450:server_bulk_callback()) event type 3, status -61, desc ffff9c46e0303200

The logs led us to believe there were IB problems, but the fabric was found to be clean and responsive between the affected client nodes and OSS servers. We are planning to turn off discovery going forward. I'll leave a few clients drained for a while in case there is info you might need.

FYI, rebooting the client does not change the behavior, but rebooting the server clears it. Manually deleting the incorrect peer nid on the server and re-adding the correct info for the missing peer also clears it. Also, the clients are running Socket Direct, but only one IPoIB interface is configured and in use by lnet.
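
A minimal sketch of the two workarounds described above, assuming the standard lnetctl syntax in 2.12 (the NIDs are the placeholders used above, and the exact --prim_nid/--nid flags should be checked against the installed lnetctl):

    # turn off dynamic discovery on a running node
    lnetctl set discovery 0

    # on the OSS: drop the NID that was wrongly merged under the .17 peer,
    # then re-create the missing peer with its own primary NID
    lnetctl peer del --prim_nid xxx.xxx.xxx.17@o2ib --nid xxx.xxx.xxx.182@o2ib
    lnetctl peer add --prim_nid xxx.xxx.xxx.182@o2ib

Note that lnetctl set discovery only changes the running configuration; to make it persistent it would normally also go into the boot-time LNet configuration. |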
| Comments |
| Comment by Peter Jones [ 19/Apr/19 ] |
|
Thanks Ruth |
| Comment by Peter Jones [ 01/May/19 ] |
|
Sonia, could you please assist with this issue? Thanks, Peter |
| Comment by Mike Timmerman [ 20/Jun/19 ] |
|
Sonia,
This bug was reported as HP-235 and ...
Thanks.
Mike |
| Comment by Sonia Sharma (Inactive) [ 25/Jun/19 ] |
|
Hi Mike, I tried to reproduce this on my local VM setup (with tcp) and could not reproduce the issue. When I ran "lnetctl peer show", it showed the nids of different peers under their own primary nids, as expected. Can you please list here how you have configured the nodes, and the exact steps you follow, starting from node configuration, that lead to the point where you see the nids of different peers showing up under the primary nid of one peer? Thanks
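
For reference, a sketch of commands that could be used to capture the configuration and peer state on each node, assuming the usual lnetctl subcommands in 2.12 (the output file name is arbitrary):

    # dump the running LNet configuration as YAML
    lnetctl export > /tmp/lnet-config.yaml

    # detailed view of the local NIs and of the peer table
    lnetctl net show -v
    lnetctl peer show -v

    # global settings, which on 2.12 should include whether discovery is enabled
    lnetctl global show

Comparing these dumps from an affected client/OSS pair against a healthy pair may be easier than trying to reproduce the problem from scratch. |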
| Comment by Mike Timmerman [ 25/Jun/19 ] |
|
I have asked Ruth Klundt <rklundt@sandia.gov> and Steve Champion <stephen.champion@hpe.com> to weigh in on this, as they were the ones who configured the system and discovered the problem.
Thanks. |
| Comment by Ruth Klundt (Inactive) [ 25/Jun/19 ] |
|
The software stack is described above. 2 MDS nodes, 1 MDT each. 40 OSS nodes, 2 zfs OSTs each.

Module params:

servers:
    options lnet networks=o2ib0(ib0)
    options ko2iblnd map_on_demand=16
    options ko2iblnd timeout=100
    options ko2iblnd credits=512

clients:
    options lnet live_router_check_interval=60
    options lnet dead_router_check_interval=60
    options lnet check_routers_before_use=1
    options ko2iblnd timeout=100
    options ko2iblnd credits=512

Steps before seeing this issue - basically reboot 2500+ nodes, some of them multiple times. Out of the 2500+ clients I found the error on 4 pairs of nodes, one pair on each of 4 distinct servers out of the 40 OSS nodes.

Some relevant items: The servers and clients are running the Mellanox stack as listed above. As I said above, the servers have only one port into the IB network. The clients also have only one port, which appears as two due to the use of the Mellanox Socket Direct feature. Only one of the ports is configured for use by lnet, and in fact only one (ib0) has an IPoIB address configured. So if the code is detecting another port automatically via some IB call and assuming it can be used, that is an error. I can imagine that bad assumption leading to the addition of an inappropriate nid from some list to the wrong client under high-load conditions, exposing a concurrency flaw not previously seen.

It seems unlikely to be reproducible on a VM with tcp, and I'm afraid that a reproducer would have to come from HPE internal testing, if it is possible at all. The machine where this occurred is in production with Dynamic Discovery disabled. It's likely that someone familiar with Dynamic Discovery is going to have to look at the code and decipher how that feature is doing the automatic detection. My point here is that nothing on this machine should have been detected as multi-rail. If you have other more specific questions about the node configuration I'll be glad to answer what I can.
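
A minimal sketch of how the client-side options above, together with disabling Dynamic Discovery as mentioned above, would typically be laid out in a modprobe configuration file. The file path, the networks line and the lnet_peer_discovery_disabled parameter are assumptions to be checked against the installed version:

    # /etc/modprobe.d/lustre.conf on a client (hypothetical path)
    options lnet networks=o2ib0(ib0)              # assumed; only the server networks line is listed above
    options lnet live_router_check_interval=60
    options lnet dead_router_check_interval=60
    options lnet check_routers_before_use=1
    options lnet lnet_peer_discovery_disabled=1   # disables Dynamic Discovery at module load (2.11+)
    options ko2iblnd timeout=100
    options ko2iblnd credits=512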
|
| Comment by Stephen Champion [ 06/Jul/19 ] |
|
I talked to Amir about this at LUG. Amir pointed to queued patches, and suggested that the patches already submitted may address it, the problem being insertion of the wrong nid into a peer's list of NIs. I think there is a fair chance of finding this through code inspection by someone more familiar with the peer discovery process. My suspicion is that a peer nid is held by an unlocked reference while a peer ni is inserted. I suspect that the number of peer entries on the host is a factor, so reproducing this may not be possible without a large cluster. |
| Comment by Peter Jones [ 13/Jul/19 ] |
|
Amir, should we look to include the fix for this in b2_12? Peter |
| Comment by Amir Shehata (Inactive) [ 26/Aug/19 ] |
|
Yes. I added it to the list of patches we ought to port to b2_12 here |
| Comment by Peter Jones [ 12/Oct/19 ] |
|
The fix is included in 2.12.3, which is in release testing ATM. |