[LU-7676] OSS Servers stuck in connecting/disconnect loop Created: 16/Jan/16 Updated: 22/Sep/16 Resolved: 22/Sep/16 |
|
| Status: | Resolved |
| Project: | Lustre |
| Component/s: | None |
| Affects Version/s: | None |
| Fix Version/s: | None |
| Type: | Bug | Priority: | Minor |
| Reporter: | Mahmoud Hanafi | Assignee: | Oleg Drokin |
| Resolution: | Duplicate | Votes: | 0 |
| Labels: | None | ||
| Attachments: | |
| Issue Links: | |
| Severity: | 1 |
| Rank (Obsolete): | 9223372036854775807 |
| Description |
|
We have had several OSSes get into a state of disconnecting and reconnecting with clients. Sometimes they clear up and then re-enter the same state later. Even after a reboot they will enter the same state. Attaching a Lustre debug dump. Please advise on what additional info is needed for debugging. |
| Comments |
| Comment by Mahmoud Hanafi [ 16/Jan/16 ] |
|
Uploaded the following files to ftp.whamcloud.com/uploads/LU7676/: out.nbp2-oss20.1452914508.gz |
| Comment by Mahmoud Hanafi [ 16/Jan/16 ] |
|
Not sure why some of the fields were blank... |
| Comment by Oleg Drokin [ 16/Jan/16 ] |
|
Hm, this is a strange message on the servers:

Jan 15 19:13:24 nbp2-oss18 kernel: LustreError: 13505:0:(events.c:452:server_bulk_callback()) event type 5, status -103, desc ffff881b1a708000
Jan 15 19:13:24 nbp2-oss18 kernel: Lustre: nbp2-OST0075: Bulk IO read error with 9a6a5394-9d0c-107d-b924-82de647f4613 (at 10.151.27.95@o2ib), client will retry: rc -110

So something causes those bulk transfers to get aborted. rc -110 is ETIMEDOUT (connection timed out); -103 is ECONNABORTED (connection aborted). And then this:

Jan 15 19:13:24 nbp2-oss18 kernel: Lustre: 21888:0:(service.c:2050:ptlrpc_server_handle_request()) @@@ Request took longer than estimated (192:4547s); client may timeout. req@ffff881b110a2400 x1522760681448372/t0(0) o3->9a6a5394-9d0c-107d-b924-82de647f4613@10.151.27.95@o2ib:0/0 lens 488/432 e 0 to 0 dl 1452909414 ref 1 fl Complete:/0/ffffffff rc 0/-1

This means server threads are spending a lot of time processing these requests. o3 is READ, so all those bulk timeouts are probably causing read RPCs to fail, and to take a long time doing so. When that happens, the client whose request got stuck like that will complain about server unresponsiveness and will keep reconnecting. So the root cause is somewhere in the bulk IO errors. In the log attached to this ticket we can see things like:

00000800:00000100:4.0F:1452910553.468033:0:2802:0:(o2iblnd_cb.c:2903:kiblnd_cm_callback()) 10.151.54.85@o2ib: UNREACHABLE -110
00000800:00000100:4.0:1452910553.563037:0:2802:0:(o2iblnd_cb.c:2903:kiblnd_cm_callback()) 10.151.0.196@o2ib: UNREACHABLE -110
00000800:00000100:4.0:1452910553.563045:0:2802:0:(o2iblnd_cb.c:2072:kiblnd_peer_connect_failed()) Deleting messages for 10.151.0.196@o2ib: connection failed

Assuming these are some of the clients, I imagine you are just having some sort of network problem where some of the messages cannot get through?
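For reference, those negative return codes are ordinary Linux errno values; a minimal stand-alone C snippet (illustrative only, not Lustre code) to decode them:

#include <stdio.h>
#include <string.h>

/* Decode the error codes quoted from the server logs above.
 * On Linux, errno 110 is ETIMEDOUT and errno 103 is ECONNABORTED. */
int main(void)
{
        printf("-110: %s\n", strerror(110)); /* "Connection timed out" */
        printf("-103: %s\n", strerror(103)); /* "Software caused connection abort" */
        return 0;
}
|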
| Comment by Mahmoud Hanafi [ 16/Jan/16 ] |
|
We haven't been able to identify any network issues. As far as we can tell the network is fine. What do you make of these messages? The downward slide of the servers is preceded by these:

Jan 15 19:10:09 nbp2-oss20 kernel: Lustre: 22081:0:(niobuf.c:285:ptlrpc_abort_bulk()) Unexpectedly long timeout: desc ffff880494ed2000 |
| Comment by Oleg Drokin [ 16/Jan/16 ] |
|
That unexpectedly long timeout is more of the same. I think this is another sign of an unhealthy network/card/driver. It's not normal for a connection to a peer to fail with ETIMEDOUT (-110)/UNREACHABLE as seen in the last snippet in my previous comment. |
| Comment by Bob Ciotti [ 16/Jan/16 ] |
|
We also see many messages like this: This was part of a patch generated in https://jira.hpdd.intel.com/browse/LU-7054. You must forgive me, I'm grasping a little from memory... I seem to recall that there was some competition between the freeing (unregister) and pool allocation; is it possible that something slow in the deallocation prevents new pools from being created? Also, since I'm not familiar with this code (and I'm looking at this on my Apple Watch) |
| Comment by Bob Ciotti [ 16/Jan/16 ] |
|
We still have two production filesystems down. This is a critical problem. We are going to try to run jobs on the remaining filesystems, but there were issues doing this earlier, so it is risky. We are going to investigate network issues; we have found no HW problems so far. Assuming that it's not a network problem, do you have any suggestions as to where we should look? Debug settings? Other information we can provide to you? Mahmoud said that the traces uploaded cover from boot to encountering the issue. |
| Comment by Amir Shehata (Inactive) [ 16/Jan/16 ] |
|
I looked through the attached log file and I see 442 instances of connection races, which occur when two nodes attempt to connect to each other at the same time. This can result in a flurry of reconnects, which can consume memory. There is a prototype patch that was done to address the same issue at another site. I'm in the process of porting it to NASA's branch and I'll push it later today for you to try. |
| Comment by Amir Shehata (Inactive) [ 17/Jan/16 ] |
|
Ported the patch here: |
| Comment by Liang Zhen (Inactive) [ 17/Jan/16 ] |
|
I checked my original patch; it seems I forgot to call set_current_state() before schedule_timeout(), so it can't really help because the current thread wouldn't actually sleep. I have updated the patch uploaded by Amir (http://review.whamcloud.com/#/c/16470/), and I also ported it to 2_5_fe (http://review.whamcloud.com/18026).
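To illustrate why that matters, here is a minimal sketch of the kernel sleep idiom in question (illustrative only, not the actual patch code; example_backoff is a hypothetical helper):

#include <linux/sched.h>

/* Hypothetical helper, not from the patch: shows the required pairing of
 * set_current_state() with schedule_timeout(). */
static void example_backoff(void)
{
        /* Mark the task as sleeping *before* scheduling; otherwise the task
         * stays TASK_RUNNING and schedule_timeout() returns almost
         * immediately, so the intended back-off just spins. */
        set_current_state(TASK_INTERRUPTIBLE);
        schedule_timeout(HZ);   /* sleep for roughly one second */
}
|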
| Comment by Doug Oucharek (Inactive) [ 18/Jan/16 ] |
|
Liang: is your patch in addition to the one Amir ported or is it a replacement for it? |
| Comment by Doug Oucharek (Inactive) [ 18/Jan/16 ] |
|
NASA: Please apply both patches being discussed here: http://review.whamcloud.com/#/c/18025/ and http://review.whamcloud.com/18026. |
| Comment by Jay Lan (Inactive) [ 18/Jan/16 ] |
|
Patch 18026 missed a newline at line 1909. |
| Comment by Jay Lan (Inactive) [ 18/Jan/16 ] |
|
It looks like http://review.whamcloud.com/#/c/18025/ is a backport of patch |
| Comment by Jay Lan (Inactive) [ 07/Apr/16 ] |
|
We need a b2_7_fe backport also. ATM we plan to stop running 2.7.1 until we receive the backport. |
| Comment by Mahmoud Hanafi [ 22/Sep/16 ] |
|
Need to add NASA label. |
| Comment by Peter Jones [ 22/Sep/16 ] |
|
Actually the fix landed under |