[LU-7676] OSS Servers stuck in connecting/disconnect loop Created: 16/Jan/16  Updated: 22/Sep/16  Resolved: 22/Sep/16

Status: Resolved
Project: Lustre
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Minor
Reporter: Mahmoud Hanafi Assignee: Oleg Drokin
Resolution: Duplicate Votes: 0
Labels: None

Attachments: Text File out.1452910586.gz    
Issue Links:
Related
is related to LU-7054 ib_cm scalling issue when lustre clie... Resolved
is related to LU-7569 IB leaf switch caused LNet routers to... Resolved
Severity: 1
Rank (Obsolete): 9223372036854775807

 Description   

We have had several OSS servers get into a state of disconnecting and reconnecting with clients. Sometimes they clear up and then re-enter the same state later. Even after a reboot they will enter the same state.

Attaching Lustre debug dump. Please advise on what additional info is needed for debugging.



 Comments   
Comment by Mahmoud Hanafi [ 16/Jan/16 ]

uploaded the following files to ftp.whamcloud.com/uploads/LU7676/

out.nbp2-oss20.1452914508.gz
out.nbp2-oss20.1452913997.gz
out.nbp2-oss18.1452914592.gz
varlogmessages.gz

Comment by Mahmoud Hanafi [ 16/Jan/16 ]

Not sure why some of the fields were blank....
Affected Version: 2.5.3 Server.

Comment by Oleg Drokin [ 16/Jan/16 ]

Hm, these are strange messages on the servers:

Jan 15 19:13:24 nbp2-oss18 kernel: LustreError: 13505:0:(events.c:452:server_bulk_callback()) event type 5, status -103, desc ffff881b1a708000
Jan 15 19:13:24 nbp2-oss18 kernel: Lustre: nbp2-OST0075: Bulk IO read error with 9a6a5394-9d0c-107d-b924-82de647f4613 (at 10.151.27.95@o2ib), client will retry: rc -110

So something causes those bulk transfers to get aborted. rc -110 is ETIMEDOUT (-103 is ECONNABORTED, connection aborted).
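
For reference, those numbers are just the standard Linux errno values, nothing Lustre-specific; a tiny user-space check confirms the mapping:

#include <errno.h>
#include <stdio.h>
#include <string.h>

/* Print the errno values seen in the log lines above; these are plain
 * Linux errno numbers, not Lustre-specific codes. */
int main(void)
{
        printf("ECONNABORTED = %d (%s)\n", ECONNABORTED, strerror(ECONNABORTED));
        printf("ETIMEDOUT    = %d (%s)\n", ETIMEDOUT, strerror(ETIMEDOUT));
        return 0;
}

On Linux this prints ECONNABORTED = 103 and ETIMEDOUT = 110, matching the -103 and -110 in the messages.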

And then this:

Jan 15 19:13:24 nbp2-oss18 kernel: Lustre: 21888:0:(service.c:2050:ptlrpc_server_handle_request()) @@@ Request took longer than estimated (192:4547s); client may timeout.  req@ffff881b110a2400 x1522760681448372/t0(0) o3->9a6a5394-9d0c-107d-b924-82de647f4613@10.151.27.95@o2ib:0/0 lens 488/432 e 0 to 0 dl 1452909414 ref 1 fl Complete:/0/ffffffff rc 0/-1

So this means server threads are spending a lot of time processing this request. o3 is READ, so all those bulk timeouts are probably causing read RPCs to fail, and to take a long time at that. When that happens, the client whose request got stuck like that will be complaining about server unresponsiveness and will be reconnecting.

So the root cause is somewhere in the bulk IO errors.

In the log attached to this ticket we can see stuff like:

00000800:00000100:4.0F:1452910553.468033:0:2802:0:(o2iblnd_cb.c:2903:kiblnd_cm_callback()) 10.151.54.85@o2ib: UNREACHABLE -110
00000800:00000100:4.0:1452910553.563037:0:2802:0:(o2iblnd_cb.c:2903:kiblnd_cm_callback()) 10.151.0.196@o2ib: UNREACHABLE -110
00000800:00000100:4.0:1452910553.563045:0:2802:0:(o2iblnd_cb.c:2072:kiblnd_peer_connect_failed()) Deleting messages for 10.151.0.196@o2ib: connection failed

Assuming this is one of the clients, I imagine you are just having some sort of a network problem where some of the messages cannot get through?

Comment by Mahmoud Hanafi [ 16/Jan/16 ]

We haven't been able to identify any network issues. As far as we can tell the network is fine.

What do you make of these messages? The downward slide of the servers is preceded by these:

Jan 15 19:10:09 nbp2-oss20 kernel: Lustre: 22081:0:(niobuf.c:285:ptlrpc_abort_bulk()) Unexpectedly long timeout: desc ffff880494ed2000
Jan 15 19:10:09 nbp2-oss20 kernel: Lustre: 22081:0:(niobuf.c:285:ptlrpc_abort_bulk()) Skipped 5 previous similar messages
Jan 15 19:10:22 nbp2-oss18 kernel: Lustre: 21874:0:(niobuf.c:285:ptlrpc_abort_bulk()) Unexpectedly long timeout: desc ffff881b124e4000
Jan 15 19:10:22 nbp2-oss18 kernel: Lustre: 21874:0:(niobuf.c:285:ptlrpc_abort_bulk()) Skipped 9 previous similar messages

Comment by Oleg Drokin [ 16/Jan/16 ]

That unexpectedly long timeout is more of the same.
The network/network driver/network card is slow to unregister the buffers we ask it to unregister. Slow as in it takes over 300 seconds to unregister such buffers (this is what triggers the message).

I think this is another sign of an unhealthy network/card/driver. It's not normal for a connection to a peer to fail with ETIMEDOUT (-110)/UNREACHABLE as seen in the last snippet in my previous comment.
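
To illustrate the kind of check that produces that "Unexpectedly long timeout" warning: a simplified sketch of a wait loop that warns each time buffer unlinking overruns a ~300 second window. This is not the actual ptlrpc_abort_bulk() code; the descriptor type and the bulk_desc_*() helpers are hypothetical.

#include <linux/types.h>
#include <linux/wait.h>
#include <linux/jiffies.h>
#include <linux/printk.h>

/* Hypothetical descriptor type and helpers, for illustration only. */
struct bulk_desc;
void bulk_desc_request_unlink(struct bulk_desc *desc); /* ask the network layer to unlink */
bool bulk_desc_unlinked(struct bulk_desc *desc);       /* true once the buffers are released */

#define LONG_UNLINK_SECS 300    /* assumed threshold, per the ~300s mentioned above */

static void wait_for_bulk_unlink(struct bulk_desc *desc, wait_queue_head_t *wq)
{
        bulk_desc_request_unlink(desc);

        /* Wait in LONG_UNLINK_SECS slices; every expiry means the network
         * stack still has not released the registered buffers. */
        while (wait_event_timeout(*wq, bulk_desc_unlinked(desc),
                                  LONG_UNLINK_SECS * HZ) == 0)
                pr_warn("Unexpectedly long timeout: desc %p\n", desc);
}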

Comment by Bob Ciotti [ 16/Jan/16 ]

We also see many messages like this:
out.nbp2-oss18.1452913951.gz.denum:
00000800:00000200:15.0:1452913946.993806:0:21340:0:(o2iblnd.c:1898:kiblnd_pool_alloc_node()) Another thread is allocating new TX pool, waiting 1024 HZs for her to complete.trips = 83498830

This was part of a patch generated in https://jira.hpdd.intel.com/browse/LU-7054
http://review.whamcloud.com/#/c/16470/2/lnet/klnds/o2iblnd/o2iblnd.c
but we still see a large number of "complete.trips" going through. I had assumed that the "waiting HZs" of 1024 would slow this down, or does it simply schedule other threads if one is waiting, without sleeping (unclear to me)? In the traces I've looked at, I don't see any new pools being successfully created (or any indication of how long pool creation took to complete).

You must forgive me, I'm grasping a little from memory... I seem to recall that there was some competition between the freeing (unregister) and pool allocation; is it possible that something slow in the deallocation prevents new pools from being created?

Also, since I'm not familiar with this code (and I'm looking at this on my Apple Watch), the "schedule_timeout(interval)" mapped to an inline null function, so I couldn't decipher it yet.

Comment by Bob Ciotti [ 16/Jan/16 ]

We still have two production filesystems down. This is a critical problem.

We are going to try to run jobs on the remaining filesystems, but there were issues doing this earlier, so it's risky.

We are going to investigate network issues. We have found no HW problems.

Assuming that it's not a network problem, do you have any suggestions as to where we should look? Debug settings? Other information we can provide to you? Mahmoud said that the traces uploaded cover everything from boot to encountering the issue.

Comment by Amir Shehata (Inactive) [ 16/Jan/16 ]

I looked through the log file attached and I see 442 instances of connection races, which occur when two nodes attempt to connect to each other at the same time. This can result in a flurry of reconnects, which can consume memory. A prototype patch was written to address the same issue at another site. I'm in the process of porting it to NASA's branch and I'll push it later today for you to try.
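
For background on what a "connection race" is here: both sides have an active connect outstanding to each other at the same time, and the code has to pick exactly one of the two attempts to survive. One common way to do that is a deterministic tie-break on the node IDs; a rough, hypothetical sketch of the idea (not the o2iblnd code and not the contents of the patch):

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical illustration of deterministic tie-breaking for a
 * connection race; both peers apply the same rule, so exactly one of
 * the two simultaneous connection attempts is kept. */
static bool accept_racing_connect(uint64_t my_nid, uint64_t peer_nid)
{
        /* Keep the connect initiated by the higher NID: that side's
         * attempt wins and the other side abandons its own. */
        return peer_nid > my_nid;
}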

Comment by Amir Shehata (Inactive) [ 17/Jan/16 ]

Ported the patch here:
http://review.whamcloud.com/#/c/18025/

Comment by Liang Zhen (Inactive) [ 17/Jan/16 ]

I checked my original patch; it seems I forgot to call set_current_state() before schedule_timeout(), so it can't really help because the current thread wouldn't sleep. I have updated the patch uploaded by Amir (http://review.whamcloud.com/#/c/16470/), and I also ported it to 2_5_fe (http://review.whamcloud.com/18026).
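
For anyone following along, the underlying kernel behaviour is that schedule_timeout() only actually sleeps if the task state was changed beforehand; with the state still TASK_RUNNING it returns almost immediately, so the wait loop spins. A minimal sketch of the difference (the surrounding o2iblnd loop is omitted):

#include <linux/sched.h>

/* Minimal sketch of the bug described above: schedule_timeout() without
 * a preceding set_current_state() does not put the task to sleep. */
static void wait_for_pool_sketch(long interval)
{
        /* Buggy: task state is still TASK_RUNNING, so this returns almost
         * immediately and the caller effectively busy-loops. */
        schedule_timeout(interval);

        /* Fixed: declare the sleep first; now the task really waits up to
         * 'interval' jiffies (or until it is woken up). */
        set_current_state(TASK_INTERRUPTIBLE);
        schedule_timeout(interval);
}

The kernel also provides the schedule_timeout_interruptible()/schedule_timeout_uninterruptible() wrappers, which do the set_current_state() call internally.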

Comment by Doug Oucharek (Inactive) [ 18/Jan/16 ]

Liang: is your patch in addition to the one Amir ported or is it a replacement for it?

Comment by Doug Oucharek (Inactive) [ 18/Jan/16 ]

NASA: Please apply both patches being discussed here: http://review.whamcloud.com/#/c/18025/ and http://review.whamcloud.com/18026.

Comment by Jay Lan (Inactive) [ 18/Jan/16 ]

Patch 18026 missed a newline at line 1909.

Comment by Jay Lan (Inactive) [ 18/Jan/16 ]

It looks like http://review.whamcloud.com/#/c/18025/ is a backport of patch LU-7569 http://review.whamcloud.com/#/c/17892/ that I have been asking for. Thanks.

Comment by Jay Lan (Inactive) [ 07/Apr/16 ]

We need a b2_7_fe backport also. At the moment we plan to stop running 2.7.1 until we receive the backport.

Comment by Mahmoud Hanafi [ 22/Sep/16 ]

Need to add the NASA label.

Comment by Peter Jones [ 22/Sep/16 ]

Actually, the fix landed under LU-7569.
