
OSS Servers stuck in connecting/disconnect loop

Details

    • Type: Bug
    • Resolution: Duplicate
    • Priority: Minor

    Description

      We have had several OSSes start to get into a state of disconnecting and reconnecting with clients. Sometimes they clear up and then re-enter the same state later. Even after a reboot they get back into the same state.

      Attaching a Lustre debug dump. Please advise on what additional info is needed for debugging.

      Attachments

        Issue Links

          Activity

            [LU-7676] OSS Servers stuck in connecting/disconnect loop
            liang Liang Zhen (Inactive) added a comment - - edited

            I checked my original patch; it seems I forgot to call set_current_state() before schedule_timeout(), so it can't really help because the current thread never actually sleeps. I have updated the patch uploaded by Amir (http://review.whamcloud.com/#/c/16470/), and I also ported it to 2_5_fe (http://review.whamcloud.com/18026).
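            For reference, a minimal sketch of the kernel idiom being described here (illustrative only, not the actual patch; sleep_for() is a made-up helper name):

            #include <linux/sched.h>

            /* Without the set_current_state() call, schedule_timeout() returns almost
             * immediately because the task is still TASK_RUNNING, so the caller spins
             * instead of sleeping. */
            static void sleep_for(long interval)
            {
                    set_current_state(TASK_INTERRUPTIBLE);  /* mark the task as about to sleep */
                    schedule_timeout(interval);             /* now really sleeps up to 'interval' jiffies */
            }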

            ashehata Amir Shehata (Inactive) added a comment -

            Ported the patch here: http://review.whamcloud.com/#/c/18025/

            ashehata Amir Shehata (Inactive) added a comment -

            I looked through the attached log file and see 442 instances of connection races, which occur when two nodes attempt to reconnect to each other at the same time. This can result in a flurry of reconnects, which can consume memory. A prototype patch addressing the same issue has already been done for another site. I'm in the process of porting it to NASA's branch and I'll push it later today for you to try.


            Bob.C Bob Ciotti (Inactive) added a comment -

            We still have two production filesystems down. This is a critical problem.

            We are going to try to run jobs on the remaining filesystems, but there were issues doing this earlier, so it is risky.

            We are going to investigate network issues; we have found no hardware problems so far.

            Assuming it's not a network problem, do you have any suggestions as to where we should look? Debug settings? Other information we can provide to you? Mahmoud said that the traces uploaded cover from boot until the issue was encountered.

            Bob.C Bob Ciotti (Inactive) added a comment - - edited

            We also see many messages like this:
            out.nbp2-oss18.1452913951.gz.denum:
            00000800:00000200:15.0:1452913946.993806:0:21340:0:(o2iblnd.c:1898:kiblnd_pool_alloc_node()) Another thread is allocating new TX pool, waiting 1024 HZs for her to complete.trips = 83498830

            This was part of a patch generated in https://jira.hpdd.intel.com/browse/LU-7054
            http://review.whamcloud.com/#/c/16470/2/lnet/klnds/o2iblnd/o2iblnd.c
            but we still see a large number of "complete.trips" passes. I had assumed that the "waiting 1024 HZs" would slow this down, or does it simply schedule other threads while one is waiting rather than sleep (unclear to me)? In any case, in the traces I've looked at I don't see any new pools being successfully created (or any indication of how long pool creation took to complete).

            You must forgive me, I'm grasping a little from memory... I seem to recall that there was some competition between the freeing (unregister) and pool allocation; is it possible that something slow in the deallocation prevents new pools from being created?

            Also, since I'm not familiar with this code (and I'm looking at it on my Apple Watch), "schedule_timeout(interval)" mapped to an inline null function for me, so I couldn't decipher it yet.
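            To make the behavior being discussed concrete, here is a simplified sketch of an allocate-or-wait loop of the kind described above. The names (struct pool_set, ps_increasing, wait_for_pool_growth) are illustrative, not the actual o2iblnd code: if another thread is already growing the TX pool, the caller is supposed to back off and retry, and without set_current_state() the "wait" returns immediately, which would be consistent with the huge trips counts in the log above.

            #include <linux/sched.h>
            #include <linux/spinlock.h>
            #include <linux/printk.h>

            /* Illustrative pool-set state; field names are hypothetical. */
            struct pool_set {
                    spinlock_t ps_lock;
                    int        ps_increasing;   /* set while another thread grows the pool */
            };

            static void wait_for_pool_growth(struct pool_set *ps, long interval)
            {
                    unsigned long trips = 0;

                    spin_lock(&ps->ps_lock);
                    while (ps->ps_increasing) {
                            trips++;
                            spin_unlock(&ps->ps_lock);

                            /* Without set_current_state(), schedule_timeout() returns
                             * right away and this loop spins, driving 'trips' into the
                             * millions as in the log above. */
                            set_current_state(TASK_INTERRUPTIBLE);
                            schedule_timeout(interval);

                            spin_lock(&ps->ps_lock);
                    }
                    spin_unlock(&ps->ps_lock);

                    if (trips != 0)
                            pr_info("waited for TX pool growth, trips = %lu\n", trips);
            }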

            green Oleg Drokin added a comment -

            That unexpectedly long timeout is more of the same.
            The network/network driver/network card is slow in unregistering the buffers we are trying to unregister. Slow as in it takes over 300 seconds to unregister such buffers (this is what triggers the message).

            I think this is another sign of an unhealthy network/card/driver. It's not normal for a connection to a peer to fail with ETIMEDOUT (-110)/UNREACHABLE as seen in the last snippet in my previous comment.

            mhanafi Mahmoud Hanafi added a comment - - edited

            We haven't been able to identify any network issues. As far as we can tell, the network is fine.

            What do you make of these messages? The downward slide of the servers is preceded by them:

            Jan 15 19:10:09 nbp2-oss20 kernel: Lustre: 22081:0:(niobuf.c:285:ptlrpc_abort_bulk()) Unexpectedly long timeout: desc ffff880494ed2000
            Jan 15 19:10:09 nbp2-oss20 kernel: Lustre: 22081:0:(niobuf.c:285:ptlrpc_abort_bulk()) Skipped 5 previous similar messages
            Jan 15 19:10:22 nbp2-oss18 kernel: Lustre: 21874:0:(niobuf.c:285:ptlrpc_abort_bulk()) Unexpectedly long timeout: desc ffff881b124e4000
            Jan 15 19:10:22 nbp2-oss18 kernel: Lustre: 21874:0:(niobuf.c:285:ptlrpc_abort_bulk()) Skipped 9 previous similar messages

            green Oleg Drokin added a comment -

            Hm, this is a strange message on the servers:

            Jan 15 19:13:24 nbp2-oss18 kernel: LustreError: 13505:0:(events.c:452:server_bulk_callback()) event type 5, status -103, desc ffff881b1a708000
            Jan 15 19:13:24 nbp2-oss18 kernel: Lustre: nbp2-OST0075: Bulk IO read error with 9a6a5394-9d0c-107d-b924-82de647f4613 (at 10.151.27.95@o2ib), client will retry: rc -110
            

            So something causes those bulk transfers to get aborted. rc -110 is again ETIMEDOUT; -103 is ECONNABORTED (connection aborted).
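            As a quick sanity check on those values, a small userspace snippet (illustrative only; the kernel returns the errno negated):

            #include <errno.h>
            #include <stdio.h>
            #include <string.h>

            int main(void)
            {
                    /* On Linux, ETIMEDOUT is 110 and ECONNABORTED is 103. */
                    printf("%d: %s\n", ETIMEDOUT,    strerror(ETIMEDOUT));
                    printf("%d: %s\n", ECONNABORTED, strerror(ECONNABORTED));
                    return 0;
            }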

            And then this:

            Jan 15 19:13:24 nbp2-oss18 kernel: Lustre: 21888:0:(service.c:2050:ptlrpc_server_handle_request()) @@@ Request took longer than estimated (192:4547s); client may timeout.  req@ffff881b110a2400 x1522760681448372/t0(0) o3->9a6a5394-9d0c-107d-b924-82de647f4613@10.151.27.95@o2ib:0/0 lens 488/432 e 0 to 0 dl 1452909414 ref 1 fl Complete:/0/ffffffff rc 0/-1
            

            So this means server threads are spending a lot of time processing this request. o3 is READ, so all those bulk timeouts are probably causing read RPCs to fail, and to take a long time at that. When that happens, the client whose request got stuck like that will complain about server unresponsiveness and will reconnect.

            So the root cause is somewhere in the bulk IO errors.

            In the log attached to this ticket we can see stuff like:

            00000800:00000100:4.0F:1452910553.468033:0:2802:0:(o2iblnd_cb.c:2903:kiblnd_cm_callback()) 10.151.54.85@o2ib: UNREACHABLE -110
            00000800:00000100:4.0:1452910553.563037:0:2802:0:(o2iblnd_cb.c:2903:kiblnd_cm_callback()) 10.151.0.196@o2ib: UNREACHABLE -110
            00000800:00000100:4.0:1452910553.563045:0:2802:0:(o2iblnd_cb.c:2072:kiblnd_peer_connect_failed()) Deleting messages for 10.151.0.196@o2ib: connection failed
            

            Assuming this is one of the clients, I imagine you are just having some sort of a network problem where some of the messages cannot get through?


            mhanafi Mahmoud Hanafi added a comment -

            Not sure why some of the fields were blank....
            Affected Version: 2.5.3 Server.


            mhanafi Mahmoud Hanafi added a comment -

            Uploaded the following files to ftp.whamcloud.com/uploads/LU7676/

            out.nbp2-oss20.1452914508.gz
            out.nbp2-oss20.1452913997.gz
            out.nbp2-oss18.1452914592.gz
            out.nbp2-oss18.1452914592.gz
            varlogmessages.gz


            People

              Assignee: green Oleg Drokin
              Reporter: mhanafi Mahmoud Hanafi
              Votes: 0
              Watchers: 11

              Dates

                Created:
                Updated:
                Resolved: