Lustre / LU-13293

Readahead doesn't work well for non-stride SSF


Details

    • Type: Bug
    • Resolution: Fixed
    • Priority: Major
    • Affects Version/s: Lustre 2.14.0
    • Fix Version/s: Lustre 2.14.0
    • None
    • master
    • 3

    Description

      The workload is SSF (single shared file, non-stride, xfer=4KB) from 8 clients with 128 processes in total. From the client's point of view this is aggregatable I/O that should be merged into large RPCs, but readahead does not appear to be working well: there are a lot of readahead misses and many small RPCs are sent to the servers, as the IOR run and statistics below show.

      [root@ec01 ~]# salloc --nodes=8 --ntasks-per-node=16 mpirun --allow-run-as-root /work/tools/bin/ior -b 1G -o /scratch/dir/file -a POSIX -w -r -t 4k -e -C -Q 17 -vv
      
      
      Max Write: 13082.37 MiB/sec (13717.85 MB/sec)
      Max Read:  854.17 MiB/sec (895.67 MB/sec)
      
      [root@ec01 ~]# lctl get_param llite.*.read_ahead_stats
      llite.scratch-ffff96ef4843c800.read_ahead_stats=
      snapshot_time             1582596230.643029314 secs.nsecs
      hits                      460552 samples [pages]
      misses                    3733752 samples [pages]
      readpage not consecutive  16 samples [pages]
      miss inside window        69 samples [pages]
      read but discarded        9991 samples [pages]
      zero size window          371 samples [pages]
      failed to reach end       3733526 samples [pages]
      async readahead           65 samples [pages]
      failed to fast read       3733782 samples [pages]
      [root@ec01 ~]# lctl get_param osc.*.rpc_stats
      osc.scratch-OST0000-osc-ffff96ef4843c800.rpc_stats=
      snapshot_time:         1582596234.368149548 (secs.nsecs)
      read RPCs in flight:  0
      write RPCs in flight: 0
      pending write pages:  0
      pending read pages:   0
      
      			read			write
      pages per rpc         rpcs   % cum % |       rpcs   % cum %
      1:		    257335  74  74   |          0   0   0
      2:		     71811  20  94   |          0   0   0
      4:		     16834   4  99   |          0   0   0
      8:		       653   0  99   |          0   0   0
      16:		         5   0  99   |          0   0   0
      32:		         0   0  99   |          0   0   0
      64:		         0   0  99   |          0   0   0
      128:		         0   0  99   |          0   0   0
      256:		         0   0  99   |          0   0   0
      512:		         0   0  99   |          0   0   0
      1024:		         0   0  99   |          1   0   0
      2048:		         1   0  99   |          1   0   1
      4096:		        17   0 100   |        134  98 100
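
      The rpc_stats histogram above shows that roughly 74% of the read RPCs carried a single 4KiB page and about 99% carried 8 pages or fewer, while 98% of the write RPCs were full 4096-page (16MiB) RPCs, which matches the large gap between write and read bandwidth. To compare counters between runs it helps to reset the statistics on each client before the read phase and sample them right after it. The commands below are only a minimal sketch, assuming the usual lctl set_param/get_param interface and that writing 0 to these stats files clears them; the output file names are illustrative.

      # run on every client (or via pdsh): reset readahead and RPC counters
      lctl set_param llite.*.read_ahead_stats=0
      lctl set_param osc.*.rpc_stats=0

      # read-only pass of the same IOR job (the file was written in the earlier run)
      salloc --nodes=8 --ntasks-per-node=16 mpirun --allow-run-as-root \
          /work/tools/bin/ior -b 1G -o /scratch/dir/file -a POSIX -r -t 4k -e -C -Q 17 -vv

      # snapshot the counters produced by this read phase only
      lctl get_param llite.*.read_ahead_stats > ra_stats.after_read
      lctl get_param osc.*.rpc_stats > rpc_stats.after_read

      If most read RPCs still fall into the 1-8 page buckets after such an isolated read pass, even with llite.*.max_read_ahead_mb and llite.*.max_read_ahead_per_file_mb at their defaults, that points at the readahead window collapsing for this access pattern rather than at cache pressure from the preceding write phase.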
      


          People

            Assignee: Wang Shilong (wshilong, Inactive)
            Reporter: Shuichi Ihara (sihara)
            Votes: 0
            Watchers: 3

            Dates

              Created:
              Updated:
              Resolved: