[LU-16530] OOM on routers with a faulty link/interface with 1 node Created: 03/Feb/23  Updated: 26/Jul/23

Status: Open
Project: Lustre
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Major
Reporter: Etienne Aujames Assignee: Cyril Bordage
Resolution: Unresolved Votes: 0
Labels: LNet, ko2iblnd, lnet, router
Environment:

Production, Lustre 2.12.7 on router and computes, Lustre 2.12.9 + patches on servers
peer_credits = 42
InfiniBand (MOFED 5.4 on router and computes, MOFED 4.7 on servers)


Attachments: Text File vmcore-dmesg_hide_router272a_20221209_152308_1.txt     Text File vmcore-dmesg_hide_router272a_20221220_213634_1.txt    
Severity: 3
Rank (Obsolete): 9223372036854775807

 Description   

An LNet router crashes regularly with OOM on a compute partition at the CEA.
Each time, the router complains about a compute node (RDMA timeouts) and then crashes with OOM.
This issue seems to be linked to a defective compute rack or InfiniBand interface, but that should not cause the LNet router to crash.

Environment:

x32         infiniband    x12       infiniband    ~ x100
computes    <--o2ib1-->   routers   <--o2ib0-->   servers

peer_credits = 42
discovery = 0
health_sensitivity = 0
transaction_timeout = 50
retry_count = 0

Router RAM: 48 GB
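
For reference, the tunables above roughly map to the following module options (a sketch only; the exact production configuration, including the routing setup, may differ):

options lnet lnet_peer_discovery_disabled=1 lnet_health_sensitivity=0 lnet_transaction_timeout=50 lnet_retry_count=0
options ko2iblnd peer_credits=42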

Kdumps information:
On the peer interface (lnet_peer_ni) of the faulty compute:
tx credits: ~ -4500
I read the message tx queue (lpni_txq) and sorted the messages by source NID: for 69 NIDs I counted 42 messages (the peer_credits value) blocked in the tx queue.

For a peer interface with a server NID that has 42 messages blocked on tx:
peer buffer credits: ~ -17000
On the peer router queue (lpni_rtrq), the messages seem to be linked to a different kib_conn (kib_rx.rx_conn) every 42 messages.
These connections are in the disconnected state, with ibc_list and ibc_sched_list unlinked (poison values inside), but the QP and CQ are not freed.
A QP takes 512 pages and a CQ takes 256 pages, ~3 MB per connection.

So this looks like a connection leak.
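
This kind of inspection can be done on the vmcore with crash(8) along these lines (a sketch; the addresses are placeholders and the member names assume the 2.12 structure layouts):

# credits and queue heads of a lnet_peer_ni
crash> struct lnet_peer_ni.lpni_txcredits,lpni_rtrcredits,lpni_txq,lpni_rtrq <lpni_addr>
# walk a blocked queue and print the source NID of each queued message
crash> list -H <lpni_txq_addr> -o lnet_msg.msg_list -s lnet_msg.msg_hdr.src_nid
# state and list linkage of a connection referenced by the queued Rx buffers
crash> struct kib_conn.ibc_state,ibc_list,ibc_sched_list <conn_addr>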

Analysis
Here is what I understood from the lnet/ko2iblnd sources:

1. The compute node has an issue and does not answer (or only partially answers) the router.
2. Messages from the servers to the compute node are queued and the peer tx credits go negative.
3. When a server peer interface has more than 42 messages blocked on tx, peer_buffer_credits goes negative (by default, peer_credits == peer_buffer_credits). In that case, new messages from the server are queued in lpni_rtrq.
4. After that, the server is not able to send any message to the router because peer_buffer_credits < 0. All messages sent from the server to the router time out (RDMA timeout). (A sketch of how to observe steps 2-4 live follows this list.)
5. The server disconnects/reconnects to the router, resets its tx credits and resends its messages.
6. On the router, the old connection is marked as disconnected but not freed, because the old Rx messages are not cleaned up and still reference it.
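
The credit exhaustion in steps 2-4 can be observed live on the router with something like this (a sketch, assuming the field names printed by the 2.12 lnetctl):

watch -n 5 'lnetctl peer show -v | grep -e "nid:" -e available_tx_credits -e available_rtr_credits -e min_rtr_credits'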

Can someone help me with this?
I am not used to debugging LNet/ko2iblnd.



 Comments   
Comment by Peter Jones [ 03/Feb/23 ]

Cyril

Can you please advise?

Thanks

Peter

Comment by Etienne Aujames [ 01/Mar/23 ]

Hi,

We successfully reproduced the issue on a test filesystem with InfiniBand:

Configuration

  • 1 MDT/MDS
  • 5 OSS/OST
  • 2 Clients
  • 1 router IB <--> IB

Lustre 2.12.7 LTS on all nodes.

LNet configuration:

options lnet lnet_peer_discovery_disabled=1 lnet_health_sensitivity=0
options ko2iblnd peer_credits=42

servers use o2ib50
clients use o2ib51
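
The routing itself is the usual single-router setup; a plausible sketch of it (interface names and the router NID are assumptions):

# router (two IB interfaces, forwarding enabled)
options lnet networks="o2ib50(ib0),o2ib51(ib1)" forwarding="enabled"
# servers
options lnet networks="o2ib50(ib0)" routes="o2ib51 <router>@o2ib50"
# clients
options lnet networks="o2ib51(ib0)" routes="o2ib50 <router>@o2ib51"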

Reproducer

  1. Mount the clients
  2. Do some IO with the clients (I used multithreaded fio; see the example after the list)
  3. Spam client1 with lnet ping from all the servers
    clush -w@servers "while true; do seq 1 100 | xargs -P100 -I{} lnetctl ping client1@o2ib51; done"
    
  4. Add a delay (2 s) rule on client1 for the incoming traffic from *@o2ib50
    ssh client1 lctl net_delay_add -s "*.o2ib50" -d "o2ib51" -l 2 --rate=1
    

That's it!
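
For step 2, the IO load was just multithreaded fio; a hypothetical invocation along these lines (the mount point and job parameters are placeholders, the exact job used is not recorded):

fio --name=repro --directory=/mnt/lustre --rw=randwrite --bs=1M --size=1G --numjobs=16 --time_based --runtime=600 --group_reporting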

On the router:

  • The tx credits value is at its minimum for the client1 peer_ni (available_tx_credits = -peer_buffer_credit_param * server_nodes = -42 * 5 = -210)
  • The available_rtr_credits (peer_buffer_credits) keep decreasing on all the server peer_nis
  • The number of QPs keeps increasing (see the sketch after this list)
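
The QP growth can be followed on the router with the iproute2 rdma tool, when it is available (a sketch; requires a reasonably recent iproute2):

watch -n 10 'rdma resource show'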

client2 is not able to communicate with the servers. All the server Rx peer_nis are saturated on the router (peer_buffer_credits < 0).

The servers keep trying to reconnect to the clients.

Remarks
A drop rule or an InfiniBand device reset on client1 does not reproduce the issue: communication errors are detected by the router, the peer_ni is set down on the router (see the auto_down feature), and the messages are dropped.
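
For reference, the drop rule tried was of the same form as the delay rule above, e.g. (a sketch, the exact rule is not recorded here):

ssh client1 lctl net_drop_add -s "*@o2ib50" -d "*@o2ib51" -r 1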

I tried increasing peer_buffer_credits and setting lnet_health_sensitivity; this does not change the behavior.

Comment by Cyril Bordage [ 01/Mar/23 ]

Hello Etienne,

thank you for the reproducer. I will take a look at it when I am back in one week.

Comment by Etienne Aujames [ 30/Mar/23 ]

Hi Cyril,

Have you had time to look into this issue?

Comment by Cyril Bordage [ 30/Mar/23 ]

Hello Etienne,

I did take a look but then got pulled onto something else… Sorry about that. I plan to work on it again very soon.

Thank you.

Comment by Cyril Bordage [ 25/Apr/23 ]

Hello Etienne,

do you have logs of your tests? Is your setup still available?

Thank you.

Comment by Etienne Aujames [ 27/Apr/23 ]

Hi Cyril,

I can't get you a debug_log (maybe some dmesg if you want).
The setup is no longer available because the issue was reproduced on a router of the cluster (reproducing it needs a node with 2 IB interfaces on different networks).
I tried to reproduce it with tcp <-> ib, but without success.

Comment by Cyril Bordage [ 27/Apr/23 ]

Hello Etienne,

yes, dmesg could be useful.

Thank you.

Comment by Etienne Aujames [ 26/Jul/23 ]

Hi Cyril,

Sorry for the delay.

I have attached 2 dmesg logs (see the attachments above):

Those are logs from 2 crashes of the router in production.

The situation was stabilized by changing the CPU of the faulty client node.

Comment by Etienne Aujames [ 26/Jul/23 ]

Here is some context for the logs:
