LU-6325: CPT bound ptlrpcd's are unimplemented

Details

    • Type: Improvement
    • Resolution: Fixed
    • Priority: Minor
    • Fix Version/s: Lustre 2.8.0
    • Labels: None
    • Severity: 3
    • 17711

    Description

      ptlrpcd_select_pc() has the comment:

          #ifdef CFS_CPU_MODE_NUMA
          #warning "fix this code to use new CPU partition APIs"
          #endif

      In our own experimentation on large NUMA systems, we found substantial benefits from confining the existing ptlrpcd's to the NUMA node originating the IO, using the taskset command to set their CPU affinity. Unfortunately, this only works for one node at a time.

      To obtain the best case for all nodes, we need to create ptlrpcd's confined to each node and to select a ptlrpcd on the same node as the IO being issued.

      We plan to submit a patch against master to complete this.
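
      For illustration, here is a minimal user-space sketch of the selection policy described above: one pool of ptlrpcd-like workers per CPT (NUMA node), with each request handed to a worker in the pool of the node that issued it. All structure and function names below are hypothetical; this is a sketch of the idea, not the actual Lustre code.

/*
 * Hypothetical sketch: per-CPT worker pools with node-local selection.
 * Compiles as ordinary user-space C; none of these names come from Lustre.
 */
#include <stdio.h>
#include <stdlib.h>

struct ptlrpcd_pool {
        int pp_cpt;        /* CPT (NUMA node) this pool is bound to */
        int pp_nthreads;   /* number of workers bound to that node */
        int pp_rr;         /* round-robin cursor within the pool */
};

static struct ptlrpcd_pool *pools;
static int ncpts;

/* Pick a worker on the caller's own CPT; spread load round-robin inside it. */
static int ptlrpcd_select(int current_cpt)
{
        struct ptlrpcd_pool *pool;

        if (current_cpt < 0 || current_cpt >= ncpts)
                current_cpt = 0;                /* fall back to the first pool */

        pool = &pools[current_cpt];
        return pool->pp_rr++ % pool->pp_nthreads;
}

int main(void)
{
        int cpt;

        ncpts = 4;                              /* pretend there are 4 NUMA nodes */
        pools = calloc(ncpts, sizeof(*pools));
        for (cpt = 0; cpt < ncpts; cpt++) {
                pools[cpt].pp_cpt = cpt;
                pools[cpt].pp_nthreads = 2;     /* two workers per node */
        }

        /* IO submitted from node 3 is serviced only by workers bound to node 3. */
        printf("node 3 -> worker %d of node 3's pool\n", ptlrpcd_select(3));
        printf("node 3 -> worker %d of node 3's pool\n", ptlrpcd_select(3));

        free(pools);
        return 0;
}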


          Activity

            pjones Peter Jones added a comment -

            Landed for 2.8


            gerrit Gerrit Updater added a comment -

            Oleg Drokin (oleg.drokin@intel.com) merged in patch http://review.whamcloud.com/13972/
            Subject: LU-6325 ptlrpc: make ptlrpcd threads cpt-aware
            Project: fs/lustre-release
            Branch: master
            Current Patch Set:
            Commit: 2686b25c301f055a15d13f085f5184e6f5cbbe13

            schamp Stephen Champion added a comment -

            I've done a performance evaluation with and without the current revision of http://review.whamcloud.com/#/c/13972/, and made the data available for review at https://docs.google.com/spreadsheets/d/1d_4-rvk6ja3msnZJFzT3L-A42jg_pNz7Ki6j1riMN_g. This data was collected using IOR on a 16-socket E5-4650 UV 2000 partition using a single FDR IB port. The IB card is adjacent to node 3; node 2 is also adjacent to node 3, and node 8 is the most distant node.

            Although there is a clear benefit, the read rates are not necessarily informative: since each IOR thread performs every read to the same address space and Intel Data Direct I/O is implemented on these processors, a read may only make it to the L3 cache of a processor and never be committed to memory on the node originating the IO. This can be exploited by someone developing applications for NUMA architectures, more so with this patch, but it is not the normal case we expect from end-user applications.

            So we are principally looking at write speeds to see the effect on transfers from local memory to the file system. For reference, a generic two-socket E5-2660 client was able to achieve 3.4 GB/s writes on the same file system.

            We can see consistent, albeit moderate, gains for IO originating from all nodes. A critical benefit not shown here is that the impact that IO to a Lustre filesystem has on the other nodes is substantially reduced, which is an important improvement to the reproducible performance of jobs on a shared system.

            gerrit Gerrit Updater added a comment -

            Oleg Drokin (oleg.drokin@intel.com) merged in patch http://review.whamcloud.com/14049/
            Subject: LU-6325 libcfs: shortcut to create CPT from NUMA topology
            Project: fs/lustre-release
            Branch: master
            Current Patch Set:
            Commit: dd9533737c28bd47a4b10d15ed6a4f0b3353765a

            liang Liang Zhen (Inactive) added a comment -

            thanks Olaf, I will look into it soon.

            olaf Olaf Weber (Inactive) added a comment -

            I've updated http://review.whamcloud.com/13972 with an implementation of the proposed changes. The new tunable is num_ptlrpcd_partners. The ptlrpcd threads on a CPT are divided into groups of partner threads of size num_ptlrpcd_partners + 1, so an alternative name for the tunable would be ptlrpcd_partner_group_size (with settings at +1 compared to the current implementation).

            • The default for num_ptlrpcd_partners is 1, which matches the pair policy that is the current default.
            • If num_ptlrpcd_partners == -1, then all ptlrpcd threads in a CPT will be partners of each other (i.e., a single partner group per CPT).
            • If num_ptlrpcd_partners + 1 is larger than the number of ptlrpcd threads we want to create on a CPT, then no additional ptlrpcd threads are created, and they all go into a single partner group.
            • If the number of ptlrpcd threads for a CPT is smaller than and not a multiple of num_ptlrpcd_partners + 1, then the number of threads will be increased to make it a multiple.
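
            To make the sizing rules above concrete, here is a minimal sketch of one reading of them (a hypothetical helper, not the patch code): -1 puts every ptlrpcd thread on a CPT into a single partner group, asking for fewer threads than a full group yields one smaller group with no extra threads, and larger thread counts are rounded up to a multiple of the partner group size.

/*
 * Hypothetical sketch of the partner-group sizing rules described above.
 * Given the number of ptlrpcd threads we would like on a CPT and the
 * num_ptlrpcd_partners tunable, return how many threads to actually create
 * so that they divide evenly into partner groups of num_partners + 1.
 */
#include <stdio.h>

static int ptlrpcd_threads_for_cpt(int nwanted, int num_partners)
{
        int group_size;

        if (num_partners == -1)
                return nwanted;         /* one partner group holding every thread */

        group_size = num_partners + 1;
        if (nwanted <= group_size)
                return nwanted;         /* a single, possibly short, partner group */

        /* Round up so the threads divide evenly into partner groups. */
        return ((nwanted + group_size - 1) / group_size) * group_size;
}

int main(void)
{
        /* Default pair policy (num_partners = 1): 7 wanted threads become 8. */
        printf("%d\n", ptlrpcd_threads_for_cpt(7, 1));
        /* Groups of 4 requested but only 3 threads wanted: one group of 3. */
        printf("%d\n", ptlrpcd_threads_for_cpt(3, 3));
        /* -1: every thread on the CPT partners with every other thread. */
        printf("%d\n", ptlrpcd_threads_for_cpt(5, -1));
        return 0;
}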

            People

              Assignee: liang Liang Zhen (Inactive)
              Reporter: schamp Stephen Champion
              Votes: 0
              Watchers: 14
