LU-12772: bulk timeout after 2.12.2 clients upgrade

Details

    • Type: Bug
    • Resolution: Duplicate
    • Priority: Critical
    • Affects Version: Lustre 2.12.2
    • Severity: 2

    Description

      After upgrading the clients to 2.12.2 (with 2.10.8 servers) we started to see large numbers of bulk I/O timeouts.

      client side

      Sep 16 16:36:59 r323i3n6 kernel: [1568677019.825837] Lustre: nbp2-OST0008-osc-ffff9ad887dc1800: Connection to nbp2-OST0008 (at 10.151.26.105@o2ib) was lost; in progress operations using this service will wait for recovery to complete
      Sep 16 16:37:01 r585i7n2 kernel: [1568677021.115645] Lustre: nbp2-OST0008-osc-ffff90c85d05e000: Connection to nbp2-OST0008 (at 10.151.26.105@o2ib) was lost; in progress operations using this service will wait for recovery to complete
      Sep 16 16:37:01 r311i0n3 kernel: [1568677021.371165] Lustre: nbp2-OST0094-osc-ffff976bea358800: Connection to nbp2-OST0094 (at 10.151.26.105@o2ib) was lost; in progress operations using this service will wait for recovery to complete
      Sep 16 16:37:01 r311i4n9 kernel: [1568677021.578522] Lustre: nbp2-OST0094-osc-ffff9c68adf2d000: Connection to nbp2-OST0094 (at 10.151.26.105@o2ib) was lost; in progress operations using this service will wait for recovery to complete

      server side

      [90158.366440] LustreError: 30777:0:(ldlm_lib.c:3239:target_bulk_io()) @@@ timeout on bulk WRITE after 300+0s  req@ffff8affe46eb450 x1644657589074304/t0(0) o4->825a80a7-da45-880e-35d1-4a750d2cf7f0@10.151.16.212@o2ib:502/0 lens 2168/448 e 0 to 0 dl 1568676837 ref 1 fl Interpret:/2/0 rc 0/0
      

      Attachments

        Issue Links

          Activity

            [LU-12772] bulk timeout after 2.12.2 clients upgrade
            mhanafi Mahmoud Hanafi added a comment - - edited

            I have not been able to get a reproducer. Other than the specific I/O request that times out over and over (for days), other I/O requests complete just fine. On a client in this state a 'sync' will just hang.
            An eviction from the OST will break the cycle of timeouts.
            We can run 2.10.8 clients but since we don't have a reproducer it won't help.

            Some nodes get stuck again in repeated timeouts after a reboot. I have looked at the jobs running across those nodes and there is nothing common about them.


            ashehata Amir Shehata (Inactive) added a comment -

            I looked at the logs. The server logs show 637678 instances of "no credits". I also see 24350 instances of more than 30 credits being returned in one shot.

            In the server logs I see:

            nbp2-oss18_4:00000100:00100000:15.0:1569279965.578558:0:30761:0:(service.c:1939:ptlrpc_server_handle_req_in()) got req x1645173752822896
            nbp2-oss18_4:  req@ffff97c961286850 x1645173752822896/t0(0) o3->4149294c-5096-99e7-e20f-15e803fd47f8@10.151.3.144@o2ib:36/0 lens 488/0 e 0 to 0 dl 1569280371 ref 1 fl New:/2/ffffffff rc 0/-1
            nbp2-oss18_4:00002000:00100000:15.0:1569279965.578568:0:30761:0:(ofd_dev.c:2422:ofd_prolong_extent_locks()) Prolong locks for req ffff97c961286850 with x1645173752822896 ext(4096->8191)
            nbp2-oss18_4:00000100:00100000:15.0:1569279965.578575:0:30761:0:(service.c:2089:ptlrpc_server_handle_request()) Handling RPC pname:cluuid+ref:pid:xid:nid:opc ll_ost_io01_113:4149294c-5096-99e7-e20f-15e803fd47f8+6:0:x1645173752822896:12345-10.151.3.144@o2ib:3
            nbp2-oss18_5:00010000:00020000:23.0:1569280165.581313:0:30761:0:(ldlm_lib.c:3239:target_bulk_io()) @@@ timeout on bulk READ after 200+0s  req@ffff97c961286850 x1645173752822896/t0(0) o3->4149294c-5096-99e7-e20f-15e803fd47f8@10.151.3.144@o2ib:36/0 lens 488/432 e 0 to 0 dl 1569280371 ref 1 fl Interpret:/2/0 rc 0/0
            nbp2-oss18_5:00010000:00000080:23.0:1569280165.695547:0:30761:0:(ldlm_lib.c:2883:target_committed_to_req()) @@@ not sending last_committed update (0/1)  req@ffff97c961286850 x1645173752822896/t0(0) o3->4149294c-5096-99e7-e20f-15e803fd47f8@10.151.3.144@o2ib:36/0 lens 488/432 e 0 to 0 dl 1569280371 ref 1 fl Interpret:/2/ffffffff rc -110/-1
            nbp2-oss18_5:00000100:00100000:23.0:1569280165.695553:0:30761:0:(service.c:2139:ptlrpc_server_handle_request()) Handled RPC pname:cluuid+ref:pid:xid:nid:opc ll_ost_io01_113:4149294c-5096-99e7-e20f-15e803fd47f8+6:0:x1645173752822896:12345-10.151.3.144@o2ib:3 Request processed in 200116977us (200117000us total) trans 0 rc -110/-999
            

            Looks like the bulk read for that particular rpc request times out after 200 seconds. I see this multiple times.

            I'm still suspecting the mismatch in the connection attributes as well as the HIW issue discussed on the call.

            Do you have a system with both clients and servers on 2.10.8? I'm wondering if we'd see the same behavior.

            If we're able to reproduce this issue on the test system, that would be very helpful. We can experiment with the different parameters.


            mhanafi Mahmoud Hanafi added a comment -

            We updated the clients to 2.12 but didn't see these timeouts. Only after rebooting the servers into 2.10.8 did we see the timeouts. But moving one filesystem back to 2.10.6 didn't make the timeouts go away on that filesystem; only a reboot of the clients stops the timeouts. I am not convinced these are actual "network" timeouts. The clients and server get out of sync, and the client keeps retrying the same bulk I/O while the server times it out. The last set of debug logs I uploaded gives a good trace of the RPC traffic.
            This is exactly like https://jira.whamcloud.com/browse/LU-12543. When a user's job finishes, their process can't be killed because of an open file (or a lock). Only an eviction or a reboot will clear up the client.


            ashehata Amir Shehata (Inactive) added a comment -

            Here is a port of 36073 based on NASA's b2_10_8:

            0001-LU-12621-b_2_10_8-port.patch

            Mahmoud, just to clarify, did the timeout you posted above start happening after any changes?


            mhanafi Mahmoud Hanafi added a comment -

            I noticed that the clients keep timing out the same bulk request over and over. It seems to be stuck.

            Sep 23 15:45:53 r401i6n8 kernel: [1569278753.126871] Lustre: nbp2-OST00d9-osc-ffff9947f4303800: Connection to nbp2-OST00d9 (at 10.151.26.114@o2ib) was lost; in progress operations using this service will wait for recovery to complete
            Sep 23 15:45:53 r401i6n8 kernel: [1569278753.162870] Lustre: nbp2-OST00d9-osc-ffff9947f4303800: Connection restored to 10.151.26.114@o2ib (at 10.151.26.114@o2ib)
            Sep 23 15:55:59 r401i6n8 kernel: [1569279359.335922] Lustre: 3467:0:(client.c:2134:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1569278753/real 1569278753]  req@ffff99555d201680 x1645173752822896/t0(0) o3->nbp2-OST00d9-osc-ffff9947f4303800@10.151.26.114@o2ib:6/4 lens 488/4536 e 6 to 1 dl 1569279359 ref 2 fl Rpc:X/2/ffffffff rc -11/-1
            Sep 23 15:55:59 r401i6n8 kernel: [1569279359.335922] Lustre: nbp2-OST00d9-osc-ffff9947f4303800: Connection to nbp2-OST00d9 (at 10.151.26.114@o2ib) was lost; in progress operations using this service will wait for recovery to complete
            Sep 23 15:55:59 r401i6n8 kernel: [1569279359.371921] Lustre: nbp2-OST00d9-osc-ffff9947f4303800: Connection restored to 10.151.26.114@o2ib (at 10.151.26.114@o2ib)
            Sep 23 16:06:05 r401i6n8 kernel: [1569279965.540809] Lustre: 3467:0:(client.c:2134:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1569279359/real 1569279359]  req@ffff99555d201680 x1645173752822896/t0(0) o3->nbp2-OST00d9-osc-ffff9947f4303800@10.151.26.114@o2ib:6/4 lens 488/4536 e 6 to 1 dl 1569279965 ref 2 fl Rpc:X/2/ffffffff rc -11/-1
            Sep 23 16:06:05 r401i6n8 kernel: [1569279965.540809] Lustre: nbp2-OST00d9-osc-ffff9947f4303800: Connection to nbp2-OST00d9 (at 10.151.26.114@o2ib) was lost; in progress operations using this service will wait for recovery to complete
            Sep 23 16:06:05 r401i6n8 kernel: [1569279965.576807] Lustre: nbp2-OST00d9-osc-ffff9947f4303800: Connection restored to 10.151.26.114@o2ib (at 10.151.26.114@o2ib)
            Sep 23 16:16:11 r401i6n8 kernel: [1569280571.753676] Lustre: 3467:0:(client.c:2134:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1569279965/real 1569279965]  req@ffff99555d201680 x1645173752822896/t0(0) o3->nbp2-OST00d9-osc-ffff9947f4303800@10.151.26.114@o2ib:6/4 lens 488/4536 e 6 to 1 dl 1569280571 ref 2 fl Rpc:X/2/ffffffff rc -11/-1
            Sep 23 16:16:11 r401i6n8 kernel: [1569280571.753676] Lustre: nbp2-OST00d9-osc-ffff9947f4303800: Connection to nbp2-OST00d9 (at 10.151.26.114@o2ib) was lost; in progress operations using this service will wait for recovery to complete
            Sep 23 16:16:11 r401i6n8 kernel: [1569280571.789675] Lustre: nbp2-OST00d9-osc-ffff9947f4303800: Connection restored to 10.151.26.114@o2ib (at 10.151.26.114@o2ib)
            Sep 23 16:26:17 r401i6n8 kernel: [1569281177.958608] Lustre: 3467:0:(client.c:2134:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1569280571/real 1569280571]  req@ffff99555d201680 x1645173752822896/t0(0) o3->nbp2-OST00d9-osc-ffff9947f4303800@10.151.26.114@o2ib:6/4 lens 488/4536 e 6 to 1 dl 1569281177 ref 2 fl Rpc:X/2/ffffffff rc -11/-1
            Sep 23 16:26:17 r401i6n8 kernel: [1569281177.958608] Lustre: nbp2-OST00d9-osc-ffff9947f4303800: Connection to nbp2-OST00d9 (at 10.151.26.114@o2ib) was lost; in progress operations using this service will wait for recovery to complete
            Sep 23 16:26:17 r401i6n8 kernel: [1569281177.994606] Lustre: nbp2-OST00d9-osc-ffff9947f4303800: Connection restored to 10.151.26.114@o2ib (at 10.151.26.114@o2ib)
            

            I uploaded the client-side and server-side logs for the above:

            uploads/LU-12772/client.tgz
            uploads/LU-12772/server.tgz


            jaylan Jay Lan (Inactive) added a comment -

            > 1. https://review.whamcloud.com/#/c/36073/

            The above suggested patch does not apply to 2.10.8 (and b2_10). Could you provide a backport or a list of prerequisite commits?

            ashehata Amir Shehata (Inactive) added a comment - - edited

            For record keeping, the question was: what does this mean?

            [73978.224013] LNet: 232:0:(o2iblnd_cb.c:2341:kiblnd_passive_connect()) Can't accept conn from 10.151.26.122@o2ib (version 12): max_frags 256 too large (32 wanted)

            When two peers negotiate the maximum number of fragments that can be transferred between them, one peer can have a larger default max_frags, in this case 256. If the other peer supports fewer, say 32, it rejects the initial connection attempt and returns a response saying that the maximum it can support is 32. The active side then reconnects with 32, and at that point the connection should succeed.

            The above message is printed during this negotiation attempt.
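
            As a minimal standalone sketch of that handshake (made-up names such as struct peer and passive_accept, purely for illustration; this is not the ko2iblnd code):

            #include <stdio.h>

            struct peer {
                const char *name;
                int max_frags;    /* largest fragment count this side accepts */
            };

            /* Passive side: reject a request that exceeds what we support and
             * report back the maximum we can actually handle. */
            static int passive_accept(const struct peer *p, int requested, int *negotiated)
            {
                if (requested > p->max_frags) {
                    printf("%s: max_frags %d too large (%d wanted)\n",
                           p->name, requested, p->max_frags);
                    *negotiated = p->max_frags;    /* hint carried in the reject */
                    return -1;                     /* first connection attempt fails */
                }
                *negotiated = requested;
                return 0;
            }

            int main(void)
            {
                struct peer server = { "server", 32 };
                int frags = 256;    /* active side's larger default */
                int hint;

                /* the first attempt is rejected, the retry uses the returned hint */
                while (passive_accept(&server, frags, &hint) != 0)
                    frags = hint;    /* reconnect with the peer's maximum */

                printf("connection established with max_frags=%d\n", frags);
                return 0;
            }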


            ashehata Amir Shehata (Inactive) added a comment -

            As discussed on the call, here is the relevant comment from Andreas regarding statfs:

            I'm working on patch https://review.whamcloud.com/35380 "LU-12368 obdclass: don't send multiple statfs RPCs" that adds a statfs mutex that will avoid the large majority of concurrent OST_STATFS RPCs being sent that are consuming the credits. While this is a new patch, it is a relatively straightforward client-only change, unlike the statfs proxy patches which are relatively large and change the protocol and need to be installed on both the client and server. This also includes a client-side tunable "llite.*.statfs_max_age" that will allow increasing the statfs cache age from the default 1s to something larger (e.g. 10s or 60s) on compute nodes to further reduce the number of RPCs being sent.

            The statfs mutex patch will only allow a single process to send the OST_STATFS RPC per target, and other threads will block until the first thread completes (they would be blocked waiting for their own RPCs to complete anyway, so this does not negatively impact performance), and then use the cached statfs data from the first thread. This will avoid up to 95/96 (ratio of CPU cores) of OST_STATFS RPCs, but each MDT+OST will still get one RPC. Conversely, the statfs proxy patch would avoid 129/130 (ratio of OSTs to OSTs+MDTs) of RPCs, but each thread will still send one MDT an RPC. The improvement is very comparable for this system (98% vs. 99% reduction in RPCs). The two patches are orthogonal, so they can both be used in master.
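
            A rough userspace sketch of that single-flight idea (pthreads, with invented names such as cached_statfs_get and a fake fetch_from_server; this is not the LU-12368 patch itself):

            #include <pthread.h>
            #include <stdio.h>
            #include <time.h>

            struct cached_statfs {
                pthread_mutex_t lock;
                time_t stamp;           /* when the cache was last refreshed */
                time_t max_age;         /* analogous to llite.*.statfs_max_age */
                long long free_blocks;  /* cached result */
            };

            /* Stand-in for sending one OST_STATFS RPC to a target. */
            static long long fetch_from_server(void)
            {
                return 123456789;
            }

            static long long cached_statfs_get(struct cached_statfs *c)
            {
                long long val;

                /* Only one thread sends the RPC; the others block here and
                 * then reuse the freshly cached value instead of sending
                 * their own concurrent RPCs. */
                pthread_mutex_lock(&c->lock);
                if (time(NULL) - c->stamp > c->max_age) {
                    c->free_blocks = fetch_from_server();
                    c->stamp = time(NULL);
                }
                val = c->free_blocks;
                pthread_mutex_unlock(&c->lock);
                return val;
            }

            int main(void)
            {
                struct cached_statfs c = {
                    .lock = PTHREAD_MUTEX_INITIALIZER,
                    .stamp = 0,
                    .max_age = 10,  /* e.g. raised from 1s to 10s on compute nodes */
                };

                printf("free blocks: %lld\n", cached_statfs_get(&c));
                return 0;
            }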

            mhanafi Mahmoud Hanafi added a comment - - edited

            What does this mean?
            [73978.224013] LNet: 232:0:(o2iblnd_cb.c:2341:kiblnd_passive_connect()) Can't accept conn from 10.151.26.122@o2ib (version 12): max_frags 256 too large (32 wanted)


            ashehata Amir Shehata (Inactive) added a comment -

            If peer_credits_hiw is not set, it defaults to half of peer_credits. However, in the current code, if the connection queue depth (which is based on peer_credits) gets reduced, peer_credits_hiw is not reduced with it. The primary purpose of peer_credits_hiw is to return credits on the connection when they accumulate. If that value is too high, credits are not returned promptly, which can lead to the "no credits" situation and eventually the timeouts you're seeing.

            LU-12621 and LU-12569 resolve this issue. Basically these two patches fix the core problem in LU-10213. I'll add a comment on LU-10213 to clarify.

            Both LU-12621 and LU-12569 are planned to land on master and be ported back to 2.12.3. So the three patches which need to be applied to 2.12.2 and 2.10.8 are:

            1. https://review.whamcloud.com/#/c/36073/
            2. https://review.whamcloud.com/#/c/35578/
            3. https://review.whamcloud.com/#/c/34427

            These three patches should correctly calculate the queue depth, adjust the hiw, and keep the CQE the same between the peers.

            LU-12569 was intended to fix the case where the queue depth < hiw. In that case the credits get depleted and the algorithm which uses the hiw is never triggered. The patch fixes this by:

            #define IBLND_CREDITS_HIGHWATER(t, conn) ((conn->ibc_version) == IBLND_MSG_VERSION_1 ? \
                       IBLND_CREDIT_HIGHWATER_V1 : \
                       min(t->lnd_peercredits_hiw, (__u32)conn->ibc_queue_depth - 1))
            

            The hiw is managed as follows:

            if (!tunables->lnd_peercredits_hiw)
                    tunables->lnd_peercredits_hiw = peer_credits_hiw;

            if (tunables->lnd_peercredits_hiw < net_tunables->lct_peer_tx_credits / 2)
                    tunables->lnd_peercredits_hiw = net_tunables->lct_peer_tx_credits / 2;

            if (tunables->lnd_peercredits_hiw >= net_tunables->lct_peer_tx_credits)
                    tunables->lnd_peercredits_hiw = net_tunables->lct_peer_tx_credits - 1;
            

            In your case you don't set the HIW explicitly, so tunables->lnd_peercredits_hiw starts out as 0, which results in tunables->lnd_peercredits_hiw = net_tunables->lct_peer_tx_credits / 2.

            The LU-12569 patch would result in the hiw value being set to conn->ibc_queue_depth - 1. IMHO this violates the expectations in the system: the algorithm will be a lot less aggressive in returning the credits than the standard default, which I'm not sure is what we'd want. In fact, based on the many "no credits" messages we see in the logs, I suspect we'd need to be more aggressive in returning the credits. One option, now that we know the queue depth is negotiated to 42, would be to explicitly set peer_credits_hiw to 21. I have no hard recommendation; I would try 21 and observe the system. If there are still a lot of "no credits" messages, I would decrease the hiw further.
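
            If that experiment is attempted, one way to express it (assuming the tunable is set through the same modprobe-style options shown in the configs later in this ticket) would be:

            options ko2iblnd peer_credits_hiw=21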

            In summary, I think the issue is potentially a combination of the hiw credit issue and the non-matching CQE.


            mhanafi Mahmoud Hanafi added a comment -

            We see lots of queue depth warnings, both client and server side. They show up only on EDR- and HDR-connected hosts.

            Sep 18 16:41:32 elrtr1 kernel: [1568850092.715421] LNet: 43868:0:(o2iblnd.c:941:kiblnd_create_conn()) peer 10.141.3.13@o2ib417 - queue depth reduced from 63 to 42  to allow for qp creation
            

            The two patches you listed are LU-12621 and LU-12569. But there is a note in LU-12569 that you're going to rework the patch as part of LU-10213. We have that in our 2.10.8 and 2.12.2. And there is a note at the end of LU-10213 that it should be reverted for 2.12.x.

            Remember we are also struggling with LU-11644. Our workaround of setting +net for it is still working.

            It is not exactly clear which patches we should apply.

            Here is our server config:
            options ko2iblnd require_privileged_port=0 use_privileged_port=0
            options ko2iblnd ntx=251072 credits=125536 fmr_pool_size=62769 
            options ko2iblnd timeout=100 retry_count=7 peer_timeout=0 map_on_demand=32 peer_credits=63 concurrent_sends=63
            #lnet
            options lnet networks=o2ib(ib1) 
            options lnet routes="o2ib233 10.151.26.[80-94]@o2ib; o2ib313 10.151.25.[195-197,202-205,222]@o2ib 10.151.26.[60,127,140-144,150-154]@o2ib; o2ib417 10.151.26.[148,149]@o2ib 10.151.25.[167-170]@o2ib"
            options lnet dead_router_check_interval=60 live_router_check_interval=30
            options lnet avoid_asym_router_failure=1 check_routers_before_use=1 small_router_buffers=65536 large_router_buffers=8192
            options ptlrpc at_max=600 at_min=200
             

            Client side

            options ko2iblnd require_privileged_port=0 use_privileged_port=0
            options ko2iblnd timeout=100 retry_count=7 peer_timeout=0 map_on_demand=32 peer_credits=63 concurrent_sends=63
            options ko2iblnd credits=64
            #lnet
            options lnet networks=o2ib(ib1)
            options lnet avoid_asym_router_failure=1 check_routers_before_use=1 small_router_buffers=65536 large_router_buffers=8192
            options ptlrpc at_max=600 at_min=200
            

            Routers

            options ko2iblnd require_privileged_port=0 use_privileged_port=0
            options ko2iblnd timeout=150 retry_count=7 map_on_demand=32 peer_credits=63 concurrent_sends=63
            options ko2iblnd ntx=32768 credits=32768 fmr_pool_size=8193 
            #lnet
            options lnet networks="o2ib(ib0,ib1),o2ib417(ib2,ib3)" forwarding=enabled
            options lnet lnet_peer_discovery_disabled=1
            options lnet avoid_asym_router_failure=1 check_routers_before_use=1 small_router_buffers=65536 large_router_buffers=8192
            options ptlrpc at_max=600 at_min=200
            

            People

              Assignee: ashehata Amir Shehata (Inactive)
              Reporter: mhanafi Mahmoud Hanafi
              Votes: 0
              Watchers: 7
