Lustre / LU-6325

CPT bound ptlrpcd's are unimplemented

Details

    • Type: Improvement
    • Resolution: Fixed
    • Priority: Minor
    • Fix Version/s: Lustre 2.8.0
    • None
    • 3
    • 17711

    Description

      ptlrpcd_select_pc() has the comment:

      #ifdef CFS_CPU_MODE_NUMA
      # warning "fix this code to use new CPU partition APIs"
      #endif

      In our own experimentation on large NUMA systems, we found substantial benefits from confining the existing ptlrpcds to the NUMA node originating the IO, using the taskset command to set affinity. Unfortunately, this only works for one node at a time.

      To obtain the best case for all nodes, we need to create ptlrpcds confined to each node and to select ptlrpcds on the same node.

      We plan to submit a patch against master to complete this.
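
      As a rough illustration of the goal described above (per-node ptlrpcd pools with same-node selection), here is a minimal userspace C model. It is only a sketch: the node count, pool size, and the names struct pool and select_worker() are hypothetical and are not Lustre APIs.

      #include <stdatomic.h>
      #include <stdio.h>

      #define NR_NODES 4   /* assumed NUMA node count */
      #define PER_NODE 8   /* assumed ptlrpcd threads per node */

      /* One worker pool per NUMA node; a cursor gives round-robin selection. */
      struct pool { int worker_id[PER_NODE]; atomic_uint cursor; };
      static struct pool pools[NR_NODES];

      /* Pick a worker confined to the node that originated the IO. */
      static int select_worker(int numa_node)
      {
              struct pool *p = &pools[numa_node];
              unsigned int idx = atomic_fetch_add(&p->cursor, 1) % PER_NODE;

              return p->worker_id[idx];
      }

      int main(void)
      {
              for (int n = 0; n < NR_NODES; n++)
                      for (int i = 0; i < PER_NODE; i++)
                              pools[n].worker_id[i] = n * PER_NODE + i;

              /* IO submitted from node 2 always lands on node 2's workers. */
              for (int i = 0; i < 3; i++)
                      printf("node 2 -> worker %d\n", select_worker(2));
              return 0;
      }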


          Activity

            pjones Peter Jones added a comment -

            Landed for 2.8


            gerrit Gerrit Updater added a comment -

            Oleg Drokin (oleg.drokin@intel.com) merged in patch http://review.whamcloud.com/13972/
            Subject: LU-6325 ptlrpc: make ptlrpcd threads cpt-aware
            Project: fs/lustre-release
            Branch: master
            Current Patch Set:
            Commit: 2686b25c301f055a15d13f085f5184e6f5cbbe13


            schamp Stephen Champion added a comment -

            I've done a performance evaluation with and without the current revision of http://review.whamcloud.com/#/c/13972/, and made the data available for review at https://docs.google.com/spreadsheets/d/1d_4-rvk6ja3msnZJFzT3L-A42jg_pNz7Ki6j1riMN_g. This data was collected using IOR on a 16-socket E5-4650 UV 2000 partition using a single FDR IB port. The IB card is adjacent to node 3. Node 2 is also adjacent to node 3, and node 8 is the most distant node.

            Although there is a clear benefit, the read rates are not necessarily informative: since each IOR thread performs every read to the same address space and Intel Data Direct I/O is implemented on these processors, a read may only make it to the L3 cache of a processor and never be committed to memory on the node originating the IO. This can be effectively utilized by someone developing applications for NUMA architectures, more so with this patch, but it is not the normal case we expect from end-user applications.

            So we are principally looking at write speeds to see the effect on transfers from local memory to the file system. For reference, a generic two-socket E5-2660 client was able to achieve 3.4 GB/s writes on the same file system.

            We can see consistent, albeit moderate, gains for IO originating from all nodes. A critical benefit not shown here is that the impact of IO to a Lustre filesystem on other nodes is substantially reduced, which is an important improvement for reproducible job performance on a shared system.


            gerrit Gerrit Updater added a comment -

            Oleg Drokin (oleg.drokin@intel.com) merged in patch http://review.whamcloud.com/14049/
            Subject: LU-6325 libcfs: shortcut to create CPT from NUMA topology
            Project: fs/lustre-release
            Branch: master
            Current Patch Set:
            Commit: dd9533737c28bd47a4b10d15ed6a4f0b3353765a


            liang Liang Zhen (Inactive) added a comment -

            Thanks Olaf, I will look into it soon.


            olaf Olaf Weber (Inactive) added a comment -

            I've updated http://review.whamcloud.com/13972 with an implementation of the proposed changes. The new tunable is num_ptlrpcd_partners. The ptlrpcd threads on a CPT are divided into groups of partner threads of size num_ptlrpcd_partners + 1. So an alternative name for the tunable would be ptlrpcd_partner_group_size (with settings at +1 compared to the current implementation).

            • the default for num_ptlrpcd_partners is 1, which matches the pair policy that is the current default.
            • if num_ptlrpcd_partners == -1, then all ptlrpcd threads in a CPT will be partners of each other (i.e., a single partner group per CPT).
            • if num_ptlrpcd_partners + 1 is larger than the number of ptlrpcd threads we want to create on a CPT, then no additional ptlrpcd threads are created, and they all go into a single partner group.
            • if the number of ptlrpcd threads for a CPT is larger than, but not a multiple of, num_ptlrpcd_partners + 1, then the number of threads will be increased to make it a multiple.
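            The grouping rules in the comment above can be summarized with a small model. The following is only a sketch of the arithmetic under those rules; group_threads() and its signature are hypothetical names, not the patch's code: partners == -1 gives one group per CPT, a group size larger than the thread count collapses to a single group, and otherwise the thread count is rounded up to a multiple of num_ptlrpcd_partners + 1.

            #include <stdio.h>

            /* Model of the partner-group sizing rules described above. */
            static void group_threads(int nthreads, int partners,
                                      int *group_size, int *adj_nthreads)
            {
                    if (partners < 0) {             /* -1: one group per CPT */
                            *group_size = nthreads;
                            *adj_nthreads = nthreads;
                            return;
                    }
                    *group_size = partners + 1;
                    if (*group_size >= nthreads) {  /* single group, no extra threads */
                            *group_size = nthreads;
                            *adj_nthreads = nthreads;
                            return;
                    }
                    /* Round the thread count up to a multiple of the group size. */
                    *adj_nthreads = ((nthreads + *group_size - 1) / *group_size) *
                                    *group_size;
            }

            int main(void)
            {
                    int gs, nt;

                    group_threads(10, 3, &gs, &nt); /* 10 threads, groups of 4 -> 12 threads */
                    printf("group_size=%d nthreads=%d\n", gs, nt);
                    group_threads(6, -1, &gs, &nt); /* -1 -> a single group of 6 */
                    printf("group_size=%d nthreads=%d\n", gs, nt);
                    return 0;
            }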

            gerrit Gerrit Updater added a comment -

            Liang Zhen (liang.zhen@intel.com) uploaded a new patch: http://review.whamcloud.com/14049
            Subject: LU-6325 libcfs: easy config for NUMA-match CPT table
            Project: fs/lustre-release
            Branch: master
            Current Patch Set: 1
            Commit: badb5d5181bd7b4b5628add5117e70536240d5bf


            olaf Olaf Weber (Inactive) added a comment -

            I don't think it is going to be a problem if threads check from different subsets of the queues. But if that is suspected to be an issue, then we should probably make the partner relation commutative (as it is today) and enforce that if ptlrpcd_max_partners > 0 then max_ptlrpcds == N * ptlrpcd_max_partners for some N, and increase a too-low setting of max_ptlrpcds if necessary.

            The limit on the number of partners is to avoid a thundering herd pounding on the queue, which is a struct ptlrpc_request_set protected by spinlocks. Another way to address this is to have one request queue per partner set (this assumes commutative partnership). Or even have one request queue per CPT and drop partnership as a separate concept. In that case only a single wake_up() call needs to be made when a request is queued. But this will not scale well on a large system when CPTs are not used.

            Easier NUMA configuration would certainly be welcome.

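            To make the "one request queue per CPT" alternative mentioned above concrete, here is a minimal userspace sketch using pthreads rather than Lustre's primitives; struct cpt_queue, worker(), and queue_request() are illustrative names only. The point is simply that every worker in a CPT waits on the same queue, so queuing a request needs only a single wake-up.

            #include <pthread.h>
            #include <stdio.h>
            #include <unistd.h>

            /* One shared queue per CPT (modelled here as a single queue). */
            struct cpt_queue {
                    pthread_mutex_t lock;
                    pthread_cond_t  wake;
                    int             pending;
            };

            static struct cpt_queue q = {
                    PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, 0
            };

            static void *worker(void *arg)
            {
                    long id = (long)arg;

                    pthread_mutex_lock(&q.lock);
                    for (;;) {
                            while (q.pending == 0)
                                    pthread_cond_wait(&q.wake, &q.lock);
                            q.pending--;
                            pthread_mutex_unlock(&q.lock);
                            printf("worker %ld handled a request\n", id);
                            pthread_mutex_lock(&q.lock);
                    }
                    return NULL;
            }

            static void queue_request(void)
            {
                    pthread_mutex_lock(&q.lock);
                    q.pending++;
                    pthread_cond_signal(&q.wake);   /* a single wake-up per request */
                    pthread_mutex_unlock(&q.lock);
            }

            int main(void)
            {
                    pthread_t tids[4];

                    for (long i = 0; i < 4; i++)
                            pthread_create(&tids[i], NULL, worker, (void *)i);
                    for (int i = 0; i < 8; i++)
                            queue_request();
                    sleep(1);       /* let the workers drain the queue, then exit */
                    return 0;
            }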

            liang Liang Zhen (Inactive) added a comment -

            I like this way better; it will greatly simplify the code. One thing we should also consider is how to wake partners in ptlrpcd_add_rqset(). Assuming we choose the next 2 threads as partners, ptlrpcd_add_rqset() wakes up thread(N+1) and thread(N+2). If both thread(N+1) and thread(N+2) check the previous two threads, then thread(N+1) will take requests from thread(N-1) and thread(N). This is probably not a problem, or we can enforce the number of partners to be a power of 2, so each thread knows which subset it should check. Which way sounds better to you?
            Anyway, I think we should go ahead based on our current agreement.

            Another thing: I think we should provide an easy way to ask libcfs to create a CPT table that matches the NUMA topology. In the current libcfs, users have to check the NUMA topology, or set cpu_npartitions to the number of NUMA nodes, which is inconvenient. I can work out a patch to improve this.

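            The power-of-two idea above can be shown with a couple of bit operations: if the partner-group size is a power of two, each thread can derive its own subset purely from its index, so there is no ambiguity about which queues to check. A minimal sketch with illustrative values (not Lustre code):

            #include <stdio.h>

            int main(void)
            {
                    int group_size = 4;     /* assumed; must be a power of two */

                    for (int tid = 0; tid < 8; tid++) {
                            /* First thread index in tid's subset. */
                            int first = tid & ~(group_size - 1);

                            printf("thread %d checks queues %d..%d\n",
                                   tid, first, first + group_size - 1);
                    }
                    return 0;
            }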

            olaf Olaf Weber (Inactive) added a comment -

            Liang, your proposal is reasonable. But since we're considering how this part of the code ought to work, I'd like to try to think this through once more.

            The LOCAL/ROUND policy mechanism queues work for processing by different threads. The partner mechanism then enables idle threads to steal work that was queued for another thread. Limiting the number of partners reduces the number of queues to scan in ptlrpcd_check() and the number of threads to wake in ptlrpc_set_add_new_req().

            It is in this context that pinning a ptlrpcd thread to a specific CPU makes sense, when combined with the code in ptlrpcd_select_pc() that avoids queuing work on the thread for the current CPU. But that other CPU is unlikely to be idle either. Ultimately I'm not convinced that Lustre should attempt this kind of fine-grained load-balancing across CPUs.

            The alternative approach is to limit the binding of ptlrpcd threads to NUMA nodes and/or CPTs. It seems to me that on the systems where we care about these things, we want the CPT boundaries to match the NUMA boundaries anyway. Therefore I'd say that only binding to CPT boundaries should be good enough.

            So I end up with binding ptlrpcd threads to CPTs, but not to individual CPUs. The code in ptlrpcd_select_pc() that avoids "the ptlrpcd for the current CPU" goes away, because there no longer is such a thing. For partner policy, only implement what you called the CPT_PARTNER policy. By default create one ptlrpcd thread per CPU in a CPT, and all threads in the same CPT are its partners.

            If a ptlrpcd_max_partners tunable is implemented, then one thing to consider is how the partner sets are computed. Currently the "partner of" relation is commutative: if A is a partner of B, then B is a partner of A. Setting things up like that is tricky: it means partitioning the ptlrpcd threads for a CPT into a number of cliques, with each clique containing the right number of threads. This feels like more work than we should bother with.

            Since the only point of the partner relation is to define which additional queues a ptlrpcd thread may take work from, I'd suggest that threads always look at the queues for the next threads (with wrap). For example, if max_ptlrpcds = 5, and ptlrpcd_max_partners = 2, then the partner relation is like this:

               0 -> 1, 2
               1 -> 2, 3
               2 -> 3, 4
               3 -> 4, 0
               4 -> 0, 1
            

            On a system with multiple CPTs this would be the relation between the ptlrpcd threads within each CPT.
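            The wrap-around relation above reduces to "thread i's partners are the next ptlrpcd_max_partners thread indexes, modulo the thread count". A minimal sketch that reproduces the relation for max_ptlrpcds = 5 and ptlrpcd_max_partners = 2; the variable names mirror the tunables discussed above, but this is not the patch's code:

            #include <stdio.h>

            int main(void)
            {
                    int max_ptlrpcds = 5;
                    int ptlrpcd_max_partners = 2;

                    for (int i = 0; i < max_ptlrpcds; i++) {
                            printf("%d ->", i);
                            /* Partners are the next indexes, wrapping around. */
                            for (int j = 1; j <= ptlrpcd_max_partners; j++)
                                    printf(" %d", (i + j) % max_ptlrpcds);
                            printf("\n");
                    }
                    return 0;
            }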

            For implementation this would mean code much like the current patchset, but in addition:

            • LIOD_BIND and related code is removed (this includes removing the new ptlrpcd_ctl.pc_cpu field).
            • pdb_policy_t and related code is removed (this includes removing the ptlrpcd_bind_policy tunable).
            • pdl_policy_t and related code is removed.

            People

              Assignee: liang Liang Zhen (Inactive)
              Reporter: schamp Stephen Champion
              Votes: 0
              Watchers: 14
