LU-12748: parallel readahead needs to be optimized at a high number of processes


Details

    • Type: Bug
    • Resolution: Fixed
    • Priority: Minor
    • Fix Version/s: Lustre 2.14.0
    • Branch: master
    • Severity: 3

    Description

      Parallel readahead is enabled by default in master and contributes significantly to sequential read performance.
      However, when the number of I/O threads is increased (e.g. NP=NCPU), read performance drops and ends up lower than with parallel readahead disabled. It needs tuning and optimization.
      Here are the test configuration and results.

      Client
      2 x Platinum 8160 CPU @ 2.10GHz, 192GB memory, 2 x IB-EDR (multi-rail)
      CentOS 7.6 (3.10.0-957.27.2.el7.x86_64)
      OFED-4.5
      
      for i in 6 12 24 48; do
              size=$((768/i))
              /work/tools/mpi/gcc/openmpi/2.1.1/bin/mpirun --allow-run-as-root -np $i \
                      /work/tools/bin/ior -w -r -t 1m -b ${size}g -e -F -vv -o /scratch0/file | tee ior-1n${i}p-${VER}.log
      done
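
      The loop keeps the aggregate transfer fixed at 768 GiB while the per-rank block size scales down (128, 64, 32 and 16 GiB for 6, 12, 24 and 48 ranks respectively), so every thread count reads the same total amount of data through file-per-process I/O (-F). ${VER} is assumed to be set in the environment to tag each log with the branch under test.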
      

      Summary of read performance (MB/s)

      branch            thr=6   thr=12  thr=24  thr=48
      b2_12             9,965   14,551  17,177  18,152
      master            15,252  16,026  17,842  16,991
      master (pRA=off)  10,253  14,489  17,839  18,658

      pRA=off: parallel readahead disabled (llite.*.read_ahead_async_file_threshold_mb=0)
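
      For reproducibility, a minimal sketch of how the pRA=off runs can be driven on the client with lctl get_param/set_param, the standard interfaces for llite tunables; the parameter name is the one quoted above, and the save-and-restore wrapper is only an assumed convenience, not part of the original test.

      # Record the current async (parallel) readahead file-size threshold (MB).
      saved=$(lctl get_param -n llite.*.read_ahead_async_file_threshold_mb | head -1)

      # Disable parallel readahead, as in the "master (pRA=off)" rows above.
      lctl set_param llite.*.read_ahead_async_file_threshold_mb=0

      # ... run the IOR loop from the Description ...

      # Restore the previously recorded value afterwards.
      lctl set_param llite.*.read_ahead_async_file_threshold_mb=$saved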


            People

              Assignee: Wang Shilong (wshilong) (Inactive)
              Reporter: Shuichi Ihara (sihara)
