LU-14293: Poor lnet/ksocklnd(?) performance on 2x100G bonded ethernet

Details

    • Type: Bug
    • Resolution: Won't Fix
    • Priority: Major
    • Affects Version: Lustre 2.12.6
    • Severity: 3
    Description

      During performance testing of a new Lustre file system, we discovered that read/write performance isn't where we would expect. As an example, the block-level read performance of the system is just over 65GB/s, but in scaling tests we can only get to around 30GB/s for reads. Writes are slightly better, but still only in the 35GB/s range. At single-node scale, we seem to cap out at a few GB/s.

      After going through every tuning we could find, we're doing slightly better, but still far behind where performance should be. We've played with various ksocklnd parameters (nconnds, nscheds, tx/rx buffer size, etc.), with very little change. Current tunings that may be relevant: credits=2560, peer_credits=63, max_rpcs_in_flight=32.
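      For reference, these tunings are applied roughly as follows (the modprobe.d path and the osc wildcard are shown for illustration; only the values above come from our config):

      # /etc/modprobe.d/lustre.conf (sketch)
      options ksocklnd credits=2560 peer_credits=63

      # client-side RPC concurrency, set at runtime
      lctl set_param osc.*.max_rpcs_in_flight=32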

      Network configuration on the servers is 2x 100G ethernet bonded together (active/active) using kernel bonding (not ksocklnd bonding).
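      For illustration, an active/active kernel bond of this sort is typically built along these lines (the interface names, address, and exact mode/hash policy here are placeholders, not our production config):

      # sketch: LACP bond of the two 100G ports, hashing on L3+L4 so that
      # multiple TCP streams can spread across both links
      nmcli con add type bond con-name bond0 ifname bond0 \
          bond.options "mode=802.3ad,miimon=100,xmit_hash_policy=layer3+4"
      nmcli con add type ethernet con-name bond0-p1 ifname ens1f0 master bond0
      nmcli con add type ethernet con-name bond0-p2 ifname ens1f1 master bond0
      nmcli con mod bond0 ipv4.method manual ipv4.addresses 192.168.1.10/24
      nmcli con up bond0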

      iperf between two nodes gets nearly line rate at ~98Gb/s and iperf from two nodes to a single node can push ~190Gb/s, consistent with what would be expected from the kernel bonding.
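      The iperf runs were nothing exotic; something along these lines (hostname and stream counts are illustrative):

      # on the receiving node
      iperf -s
      # from a sending node: single stream vs. several parallel streams
      iperf -c oss01 -t 30 -P 1
      iperf -c oss01 -t 30 -P 6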

      lnet selftest shows rates of ~2.5GB/s (20Gb/s) for node-to-node tests. I'm not sure if this is a bug in lnet selftest or a real reflection of the performance.
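      For reference, the node-to-node selftest is driven by a script roughly like the sketch below (NIDs, transfer size, and concurrency are placeholders):

      # both nodes need the lnet_selftest module loaded
      modprobe lnet_selftest
      export LST_SESSION=$$
      lst new_session brw_rate
      lst add_group servers 192.168.1.10@tcp
      lst add_group clients 192.168.1.11@tcp
      lst add_batch bulk
      lst add_test --batch bulk --concurrency 8 --from clients --to servers \
          brw write check=simple size=1M
      lst run bulk
      lst stat servers clients & sleep 30; kill $!
      lst end_session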

      We found the following related tickets/mailing list discussions which seem to be very similar to what we're seeing, but with no resolutions:

      http://lists.lustre.org/pipermail/lustre-discuss-lustre.org/2019-August/016630.html

      https://jira.whamcloud.com/browse/LU-11415

      https://jira.whamcloud.com/browse/LU-12815 (maybe performance limiting, but I doubt it for what we're seeing)

       

      Any help or suggestions would be awesome.

      Thanks!

      • Jeff


          Activity

            nilesj Jeff Niles added a comment -

            Sort of. It used 12 of the 24 configured threads. I've since reduced this, but wanted to mention what I was seeing in testing.

            I performed quite a few more tests today with the LU-12815 patch applied and various tunings, and have some good news. With the patch, we can see nearly line rate with lnet selftest (11.5-12.0GB/s, up from ~2.5GB/s). Current tunings:

            options ksocklnd sock_timeout=100 credits=2560 peer_credits=63 conns_per_peer=8 nscheds=12
            

            8 conns_per_peer seemed to give the best performance, and nscheds had to be increased because I noticed that the 6 default threads were all 100% pegged during an lnet selftest.

            Unfortunately, this isn't reflected in the single-node IOR numbers. While we saw a ~5x increase in the lnet selftest numbers, we're only seeing a 2x increase in IOR numbers. IOR writes went from ~5GB/s to 9.8GB/s and reads went from ~1.3GB/s to 2.6GB/s on a file-per-OST test (12 OSTs, 6 OSSs). I'm really trying to understand the brutal read disparity and hoping you all have some thoughts. The writes seem to prove that we can at least push that bandwidth over the network, but is there something about the read path that's different from a networking perspective?
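            For context, the file-per-OST run is along the lines of the sketch below (directory, task count, and transfer/block sizes are illustrative, not the exact job):

            # one file per task, each file striped to a single OST
            lfs setstripe -c 1 /lustre/test/ior
            mpirun -np 12 ior -a POSIX -F -w -r -t 1m -b 16g -o /lustre/test/ior/file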


            ashehata Amir Shehata (Inactive) added a comment -

            Jeff, when you say "it only uses half", do you mean there are half as many threads as what you configure nscheds to? If so, that's how it's supposed to work. The idea is not to consume all the cores with lnd threads, so that other processes can use the system as well.


            simmonsja James A Simmons added a comment -

            I did a backport of the LU-12815 work for 2.12 and we have full use of our Ethernet network.

            nilesj Jeff Niles added a comment -

            Amir,

            Are you talking about the socknal_sd01_xx threads? If so, the work did span all of them. I just swapped to a patched server/client with LU-12815 included, and it seems that with the default 6 they were all being used, but if I increase `nscheds` to 24 (just matching the core count), it only uses half of them. Really interesting behavior.


            ashehata Amir Shehata (Inactive) added a comment -

            Jeff, another data point: when you switched to MR with virtual interfaces, was the load distributed to all the socklnd worker threads?
            The reason I'm interested in this is that work is assigned to the different CPTs by hashing the NID. The hash function gets us to one of the CPTs, and then we pick one of the threads in that pool. If we have a single NID, we'll always get hashed to the same CPT, and therefore we will not be utilizing all the worker threads. This could be another factor in the performance issue you're seeing.

            If you could confirm the socklnd worker thread usage, that'll be great.
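            Something like the following would show it (just a sketch; the schedulers are the socknal_sd* kernel threads):

            # per-thread CPU usage and the CPU each ksocklnd scheduler last ran on
            ps -eLo pid,tid,psr,pcpu,comm | grep socknal_sd
            # or watch them live during the selftest
            top -H -b -n 1 | grep socknal_sd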

            thanks

            adilger Andreas Dilger added a comment - edited

            Jeff, this is exactly why the socklnd conns_per_peer parameter was being added - because the single-socket performance is just unable to saturate the network on high-speed Ethernet connections. This is not a problem for o2iblnd except for OPA.

            nilesj Jeff Niles added a comment -

            Just to toss a quick update out: tested the multirail virtual interface setup and can get much better rates from single node -> single node with lnet_selftest. Can't really test a full file system run without huge effort to deploy that across the system, so shelving that for now.

            Is this a common problem on 100G ethernet, or are there just not many 100G eth based systems deployed?

            Path forward: we're going to attempt to move to a 2.14 (2.13.latest, I guess) server with an LU-12815 patch and test the conns-per-peer feature. This is the quickest path forward to test, rather than re-deploying without kernel bonding. I'll update with how this goes tomorrow.


            ashehata Amir Shehata (Inactive) added a comment -

            These changes are coming in two parts:
            1) Remove the socklnd bonding code, since it's not really needed and removing it simplifies the code.
            2) Add the LU-12815 patch on top of that, which adds the conns-per-peer feature.

            The LU-12815 changes build on #1, so unfortunately the entire series needs to be ported over once it lands if you wish to use it in 2.12.


            simmonsja James A Simmons added a comment -

            Do we really only need a port of https://review.whamcloud.com/#/c/41056 or is the whole patch series needed?


            simmonsja James A Simmons added a comment -

            Note there is a huge difference between 2.12 and master for ksocklnd. The port of LU-12815 is pretty nasty.

            nilesj Jeff Niles added a comment -

            Amir,

            The simplest test is between two nodes that reside on the same switch. CPT configuration is the default; in this case two partitions, because we have two sockets on these nodes.

            > lctl get_param cpu_partition_table
            cpu_partition_table=
            0 : 0 2 4 6 8 10 12 14 16 18 20 22
            1 : 1 3 5 7 9 11 13 15 17 19 21 23
            

            Top output shows 6 of the 12 threads contributing, all from one socket. We tried playing with the value of nscheds, which seems to default to 6. We attempted to set it to 24 to match core count, and while we did get 24 threads, it didn't make a difference.

            21751 root 20 0 0 0 0 R 20.9 0.0 49:09.76 socknal_sd00_00
             21754 root 20 0 0 0 0 S 17.9 0.0 49:17.12 socknal_sd00_03
             21756 root 20 0 0 0 0 S 17.5 0.0 49:12.60 socknal_sd00_05
             21753 root 20 0 0 0 0 S 16.9 0.0 49:12.37 socknal_sd00_02
             21752 root 20 0 0 0 0 S 16.2 0.0 49:09.85 socknal_sd00_01
             21755 root 20 0 0 0 0 S 16.2 0.0 49:14.87 socknal_sd00_04
            

            I don't believe that LU-12815 is the issue because when I run an lnet selftest from two or more nodes to a single node, I still only get ~2.5GB/s. Basically the bandwidth gets split across the two or more nodes and they each only see their portion of the 2.5 GB/s. My understanding of that LU is that it only helps in single connection applications; I would think that running an lnet selftest from multiple nodes to a single node would get me around that issue. Please let me know if this thinking is wrong.

            That being said, my plan this morning is to test the system after completely removing the bond. I'm planning on using a single link rather than both, and will test it both standalone and with MR using logical interfaces.
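            For that MR test, the LNet config will be roughly the sketch below (interface names are placeholders; the same form works whether the members are physical ports or logical interfaces):

            # sketch: non-bonded Multi-Rail config with both interfaces on the tcp net
            lnetctl lnet configure
            lnetctl net add --net tcp --if ens1f0,ens1f1
            lnetctl net show -v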

            Andreas,

            The 30/35GB/s numbers are from a system-wide IOR, so across more than a single host. I used them as an example, but probably shouldn't have, to avoid expanding the scope of the ticket to an entire cluster. To simplify things: single-node IOR sees slightly less than the 2.5GB/s of an lnet selftest, so I've been focusing on single node-to-node performance for debugging. I mentioned the system-wide numbers mainly to show that scaling doesn't help, even with hundreds of clients.

            The individual CPU usage during a node to node test is fairly balanced across the cores. We don't seem to utilize any single core more than 35%.

            The command line for iperf is really basic. Six TCP connections are needed to fully utilize the 100G link, with -P 1 producing a little over 20Gb/s. That does match the 2.5GB/s number we're seeing out of lnet selftest, but it doesn't explain why we still only see 2.5GB/s when running a test from multiple lnet selftest "clients" to a single "server", since that should produce multiple TCP connections. Maybe our understanding here is backwards. I'll be testing with multiple virtual multirail interfaces today, which I guess will test this theory.

            Thanks for all the help!

            • Jeff

            People

              Assignee: ashehata Amir Shehata (Inactive)
              Reporter: nilesj Jeff Niles