Details

    • Type: New Feature
    • Resolution: Fixed
    • Priority: Minor
    • Fix Version/s: Lustre 2.15.0

    Description

      Now that NVIDIA has made the official release of GPUDirect Storage (GDS), we are able to release the GDS feature integration for Lustre that has been under development and testing in conjunction with NVIDIA for some time.

      This feature provides the following (a usage sketch follows the list):

      1. Use direct bulk I/O with GPU workloads.
      2. Select the interface nearest the GPU for optimal performance.
      3. Integrate GPU selection criteria into the LNet multi-rail selection algorithm.
      4. Handle I/O smaller than 4K in a manner that works with the GPU Direct workflow.
      5. Use the memory registration/deregistration mechanism provided by the nvidia-fs driver.
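
      As a rough illustration, the feature can be exercised from user space with NVIDIA's gdsio tool (the same tool used in the test comments below); the directory, GPU index and NUMA node in this sketch are placeholders:

      # Minimal sketch: GPUDirect Storage write against a Lustre directory.
      # -x 0 selects GDS mode, -d the GPU index, -n the NUMA node to bind to,
      # -I 1/0 write/read, -w threads, -s file size, -i I/O size, -T run time (s).
      GDSIO=/usr/local/cuda-11.4/gds/tools/gdsio    # adjust to the local CUDA install
      $GDSIO -D /lustre/client/gdsio-test -d 0 -n 0 \
             -w 32 -s 1G -i 1M -x 0 -I 1 -T 60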

      A performance comparison between GPU and CPU workloads is attached; bandwidth is in GB/s.

       

      Attachments

        Issue Links

          Activity

            [LU-14798] NVIDIA GPUDirect Storage Support

            sihara Shuichi Ihara added a comment -

            > Ihara, your comments are about the GPU <> CPU/RAM config, not about GPU <> IB,
            > and NUMA nodes are about GPU <> CPU/RAM access.

            You can find this information in NVIDIA's DGX-A100 or SuperPOD material; e.g. see page 10 of
            https://hotchips.org/assets/program/tutorials/HC2020.NVIDIA.MichaelHouston.v02.pdf
            Again, GPU0, GPU1 and mlx5_0 are under the same PCI switch attached to NUMA node 3, and GPU4, GPU5 and mlx5_6 are under the same PCI switch attached to NUMA node 7. Our test configuration was certainly correct.


            shadow Alexey Lyashkov added a comment -

            Ihara, your comments are about the GPU <> CPU/RAM config, not about GPU <> IB,
            and NUMA nodes are about GPU <> CPU/RAM access.

            Did you read https://docs.nvidia.com/gpudirect-storage/configuration-guide/index.html ?
            If yes, can I ask you to look at the examples around lspci -tv | egrep -i "nvidia | micron" or nvidia-smi topo -mp
            and understand how they differ from the info you provided? That output doesn't say anything about NUMA nodes (the CPU config); it is about the PCI bus configuration. An AMD CPU system may have 2-4 NUMA nodes but 8 PCIe root complexes, so the IB card and GPU may be on the SAME NUMA node yet under different PCIe root complexes, which limits P2P transfers.
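
            (For concreteness, a sketch of the checks being suggested; the grep terms and output are system-dependent, and for a Lustre client the ConnectX HCAs, rather than the NVMe devices in the guide's example, are the relevant peers:)

            # Show the PCIe tree to see which root complex / PCIe switch each
            # GPU and each HCA hangs off:
            lspci -tv | egrep -i "nvidia|mellanox"

            # Show the PCIe-only connection matrix; PIX means two devices share a
            # single PCIe bridge/switch, SYS means the path crosses the CPU interconnect.
            nvidia-smi topo -mp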

            But I don't see a reason to continue the discussion, as Whamcloud is in a hurry to land the patches before all tests and discussion are finished. So I think Whamcloud is not interested in this discussion.


            sihara Shuichi Ihara added a comment -

            My setup is the fully NUMA-aware configuration I mentioned above.
            The tested GPUs and IB interfaces are located on the same NUMA nodes; see below.

            root@dgxa100:~# nvidia-smi 
            Tue Aug 10 00:15:20 2021       
            +-----------------------------------------------------------------------------+
            | NVIDIA-SMI 470.57.02    Driver Version: 470.57.02    CUDA Version: 11.4     |
            |-------------------------------+----------------------+----------------------+
            | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
            | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
            |                               |                      |               MIG M. |
            |===============================+======================+======================|
            |   0  NVIDIA A100-SXM...  On   | 00000000:07:00.0 Off |                    0 |
            | N/A   27C    P0    53W / 400W |      0MiB / 40536MiB |      0%      Default |
            |                               |                      |             Disabled |
            +-------------------------------+----------------------+----------------------+
            |   1  NVIDIA A100-SXM...  On   | 00000000:0F:00.0 Off |                    0 |
            | N/A   26C    P0    54W / 400W |      0MiB / 40536MiB |      0%      Default |
            |                               |                      |             Disabled |
            +-------------------------------+----------------------+----------------------+
            |   2  NVIDIA A100-SXM...  On   | 00000000:47:00.0 Off |                    0 |
            | N/A   27C    P0    52W / 400W |      0MiB / 40536MiB |      0%      Default |
            |                               |                      |             Disabled |
            +-------------------------------+----------------------+----------------------+
            |   3  NVIDIA A100-SXM...  On   | 00000000:4E:00.0 Off |                    0 |
            | N/A   26C    P0    51W / 400W |      0MiB / 40536MiB |      0%      Default |
            |                               |                      |             Disabled |
            +-------------------------------+----------------------+----------------------+
            |   4  NVIDIA A100-SXM...  On   | 00000000:87:00.0 Off |                    0 |
            | N/A   31C    P0    53W / 400W |      0MiB / 40536MiB |      0%      Default |
            |                               |                      |             Disabled |
            +-------------------------------+----------------------+----------------------+
            |   5  NVIDIA A100-SXM...  On   | 00000000:90:00.0 Off |                    0 |
            | N/A   31C    P0    58W / 400W |      0MiB / 40536MiB |      0%      Default |
            |                               |                      |             Disabled |
            +-------------------------------+----------------------+----------------------+
            |   6  NVIDIA A100-SXM...  On   | 00000000:B7:00.0 Off |                    0 |
            | N/A   31C    P0    55W / 400W |      0MiB / 40536MiB |      0%      Default |
            |                               |                      |             Disabled |
            +-------------------------------+----------------------+----------------------+
            |   7  NVIDIA A100-SXM...  On   | 00000000:BD:00.0 Off |                    0 |
            | N/A   31C    P0    54W / 400W |      0MiB / 40536MiB |      0%      Default |
            |                               |                      |             Disabled |
            +-------------------------------+----------------------+----------------------+
                                                                                           
            +-----------------------------------------------------------------------------+
            | Processes:                                                                  |
            |  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
            |        ID   ID                                                   Usage      |
            |=============================================================================|
            |  No running processes found                                                 |
            +-----------------------------------------------------------------------------+
            

            The GPU indexes I selected were 0, 1, 4 and 5 (see the "-d X" option in the test script I ran with gdsio).
            Those GPUs' PCI bus IDs can be identified from the "nvidia-smi" output above.

            GPU index	PCIBus-ID
            0		00000000:07:00.0
            1		00000000:0F:00.0
            4		00000000:87:00.0
            5		00000000:90:00.0
            

            Those PCI devices' NUMA nodes are 3 or 7, as shown below. That's why I used "-n 3" or "-n 7" with gdsio.

            root@dgxa100:~# cat /sys/bus/pci/drivers/nvidia/0000\:07\:00.0/numa_node 
            3
            root@dgxa100:~# cat /sys/bus/pci/drivers/nvidia/0000\:0f\:00.0/numa_node 
            3
            root@dgxa100:~# cat /sys/bus/pci/drivers/nvidia/0000\:87\:00.0/numa_node 
            7
            root@dgxa100:~# cat /sys/bus/pci/drivers/nvidia/0000\:90\:00.0/numa_node 
            7
            

            And two IB interfaces (ibp12s0 and ibp141s0) were configured for LNet.

            root@dgxa100:~# lnetctl net show 
            net:
                - net type: lo
                  local NI(s):
                    - nid: 0@lo
                      status: up
                - net type: o2ib
                  local NI(s):
                    - nid: 172.16.167.67@o2ib
                      status: up
                      interfaces:
                          0: ibp12s0
                    - nid: 172.16.178.67@o2ib
                      status: up
                      interfaces:
                          0: ibp141s0
            
            

            Those IB interfaces' PCI buses are 0000:0c:00.0 (ibp12s0) and 0000:8d:00.0 (ibp141s0).

            root@dgxa100:~# ls -l /sys/class/net/ibp12s0/device /sys/class/net/ibp141s0/device
            lrwxrwxrwx 1 root root 0 Aug  9 18:45 /sys/class/net/ibp12s0/device -> ../../../0000:0c:00.0
            lrwxrwxrwx 1 root root 0 Aug  9 15:01 /sys/class/net/ibp141s0/device -> ../../../0000:8d:00.0
            

            ibp12s0 corresponds to mlx5_0 and ibp141s0 to mlx5_6, and their NUMA nodes are likewise 3 and 7, as shown below.

            root@dgxa100:~# for a in /sys/class/infiniband/*/device; do
            > ls -l $a
            > done
            lrwxrwxrwx 1 root root 0 Aug  6 17:36 /sys/class/infiniband/mlx5_0/device -> ../../../0000:0c:00.0 <- ibp12s0
            lrwxrwxrwx 1 root root 0 Aug  6 17:36 /sys/class/infiniband/mlx5_1/device -> ../../../0000:12:00.0
            lrwxrwxrwx 1 root root 0 Aug  6 17:36 /sys/class/infiniband/mlx5_10/device -> ../../../0000:e1:00.0
            lrwxrwxrwx 1 root root 0 Aug  6 17:36 /sys/class/infiniband/mlx5_11/device -> ../../../0000:e1:00.1
            lrwxrwxrwx 1 root root 0 Aug  6 17:36 /sys/class/infiniband/mlx5_2/device -> ../../../0000:4b:00.0
            lrwxrwxrwx 1 root root 0 Aug  6 17:36 /sys/class/infiniband/mlx5_3/device -> ../../../0000:54:00.0
            lrwxrwxrwx 1 root root 0 Aug  6 17:36 /sys/class/infiniband/mlx5_4/device -> ../../../0000:61:00.0
            lrwxrwxrwx 1 root root 0 Aug  6 17:36 /sys/class/infiniband/mlx5_5/device -> ../../../0000:61:00.1
            lrwxrwxrwx 1 root root 0 Aug  6 17:36 /sys/class/infiniband/mlx5_6/device -> ../../../0000:8d:00.0 <- ibp141s0
            lrwxrwxrwx 1 root root 0 Aug  6 17:36 /sys/class/infiniband/mlx5_7/device -> ../../../0000:94:00.0
            lrwxrwxrwx 1 root root 0 Aug  6 17:36 /sys/class/infiniband/mlx5_8/device -> ../../../0000:ba:00.0
            lrwxrwxrwx 1 root root 0 Aug  6 17:36 /sys/class/infiniband/mlx5_9/device -> ../../../0000:cc:00.0
            
            root@dgxa100:~# cat /sys/class/infiniband/mlx5_0/device/numa_node 
            3
            root@dgxa100:~# cat /sys/class/infiniband/mlx5_6/device/numa_node 
            7
            

            So GPU ids 0 and 1 as well as IB interface ibp12s0 (mlx5_0) are located on the same NUMA node 3, and GPU ids 4 and 5 and IB interface ibp141s0 (mlx5_6) are located on NUMA node 7.
            In fact, the DGX-A100 has 8 x GPU, 8 x IB interfaces and PCI switches between GPU (or IB) <-> CPU in the above setting. I've been testing multiple GPUs and IB interfaces; one of the GDS I/O benefits is that it can eliminate the bandwidth limitation at the PCI switches, since every GPU talks to storage through its closest IB interface.
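
            (As a cross-check of the placements above, the same sysfs attributes can be read in one pass; a small sketch:)

            # Print the NUMA node of every GPU bound to the nvidia driver and of
            # every mlx5 device, using the sysfs paths shown above.
            for d in /sys/bus/pci/drivers/nvidia/0000:*; do
                echo "GPU $(basename $d): numa_node=$(cat $d/numa_node)"
            done
            for d in /sys/class/infiniband/mlx5_*; do
                echo "$(basename $d): numa_node=$(cat $d/device/numa_node)"
            done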

            pjones Peter Jones added a comment -

            Landed for 2.15


            "Oleg Drokin <green@whamcloud.com>" merged in patch https://review.whamcloud.com/44111/
            Subject: LU-14798 lustre: Support RDMA only pages
            Project: fs/lustre-release
            Branch: master
            Current Patch Set:
            Commit: 29eabeb34c5ba2cffdb5353d108ea56e0549665b


            "Oleg Drokin <green@whamcloud.com>" merged in patch https://review.whamcloud.com/44110/
            Subject: LU-14798 lnet: add LNet GPU Direct Support
            Project: fs/lustre-release
            Branch: master
            Current Patch Set:
            Commit: a7a889f77cec3ad44543fd0b33669521e612097d

            shadow Alexey Lyashkov added a comment - edited

            @Ihara - you ran a different test than I showed. My test chooses a SINGLE CPU + GPU pair that is near the IB card; you chose a different number of GPUs with unknown distances. And what is the distance between CPU and GPU? Can you please attach lspci output so we can understand it.

            PS. NUMA awareness isn't applicable to GPU <> IB communication; that is based on the PCIe root complex configuration. NUMA applies only to CPU <> local memory access.
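
            (To illustrate the distinction, a hedged sketch; the counts are system-dependent:)

            # NUMA nodes exposed by the CPUs:
            lscpu | grep -i "NUMA node(s)"

            # PCIe root buses (roughly one per host bridge / root complex):
            ls -d /sys/devices/pci????:??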

            sihara Shuichi Ihara added a comment - edited

            Due to client (DGX-A100) availability, sorry for the delay in posting the test results comparing patches LU-14795 and LU-14798.
            Here are the test results in detail.

            Tested Hardware
            1 x AI400x (23 x NVMe)
            1 x NVIDIA DGX-A100
            

            The DGX-A100 supports up to 8 x GPU against 8 x IB-HDR200 and 2 x CPU. In my testing, 2 x IB-HDR200 and either 2 or 4 GPUs were used for GDS I/O. This is a fully NUMA-aware (GPU and IB-HDR200 on the same NUMA node) and symmetric configuration.

            The test cases are "thr=32, mode=0 (GDS I/O), op=1/0 (write/read) and iosize=16KB/1MB" with gdsio, using the script below (the first gdsio invocation is the 2 x GPU case, the second the 4 x GPU case).

            GDSIO=/usr/local/cuda-11.4/gds/tools/gdsio
            TARGET=/lustre/ai400x/client/gdsio
            
            mode=$1
            op=$2
            thr=$3
            iosize=$4
            
            $GDSIO -T 60 \
            	-D $TARGET/md0 -d 0 -n 3 -w $thr -s 1G -i $iosize -x $mode -I $op \
            	-D $TARGET/md4 -d 4 -n 7 -w $thr -s 1G -i $iosize -x $mode -I $op
            
            $GDSIO -T 60 \
            	-D $TARGET/md0 -d 0 -n 3 -w $thr -s 1G -i $iosize -x $mode -I $op \
            	-D $TARGET/md1 -d 1 -n 3 -w $thr -s 1G -i $iosize -x $mode -I $op \
            	-D $TARGET/md4 -d 4 -n 7 -w $thr -s 1G -i $iosize -x $mode -I $op \
            	-D $TARGET/md5 -d 5 -n 7 -w $thr -s 1G -i $iosize -x $mode -I $op 
            

            2 x GPU, 2 x IB-HDR200 (bandwidth in GB/s)

            		iosize=16k			iosize=1m
            		Write		Read		Write		Read
            LU-14795	 0.968215	 2.3704		35.3331	 	35.5543
            LU-14798  	 0.979587        2.24632        34.7941         34.0566
            

            4 x GPU, 2 x IB-HDR200 (bandwidth in GB/s)

            		iosize=16k			iosize=1m
            		Write		Read		Write		Read
            LU-14795	 1.05208	 2.62914	34.8957	 	37.4645
            LU-14798  	 1.28675         2.53229        36.0412         39.2747
            

            I saw that patch LU-14798 was ~5% slower than LU-14795 for 16K and 1M reads in the 2 x GPU case, but I didn't see a 23% drop.
            However, patch LU-14795 was overall slower than LU-14798 in the 4 x GPU, 2 x HDR200 case (22% slower for 16K writes in particular).
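
            (For reference, the 22% figure matches the 16K write numbers above if LU-14795 is taken as the baseline: (1.28675 - 1.05208) / 1.05208 ≈ 22.3%; relative to LU-14798 the gap is about 18%.)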


            sihara Shuichi Ihara added a comment -

            shadow please check LU-14795, which fails to build for me with the latest GDS code that is part of CUDA 11.4.1. Patch LU-14798 built fine against both CUDA 11.4 and 11.4.1 without any changes.


            shadow Alexey Lyashkov added a comment -

            Any news on replicating the issue?


            People

              Assignee:
              ashehata Amir Shehata (Inactive)
              Reporter:
              ashehata Amir Shehata (Inactive)
              Votes:
              0
              Watchers:
              15

              Dates

                Created:
                Updated:
                Resolved: