Lustre / LU-12043

Improve Lustre single-thread read performance

    Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: Lustre 2.14.0

      Description

      There are several efforts here and there that try to improve single-thread read
      performance.

      This ticket was opened to track a simple enough patch that improves performance as much
      as possible.

      Here is the whole history:

      Currently, for sequential read IO, we grow the readahead window
      very quickly until @max_readahead_per_file pages are cached.
      For the following command:

      dd if=/mnt/lustre/file of=/dev/null bs=1M

      We will do something like the following:
      ...
      64M bytes cached.
      fast io for 16M bytes
      readahead extra 16M to fill up window.
      fast io for 16M bytes
      readahead extra 16M to fill up window.
      ....
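
      The loop above can be modeled with a small simulation (a sketch, not actual
      Lustre code; the 64M window and 16M refill chunk are taken from the trace
      above, and `simulate_sync` is a hypothetical name):

      ```c
      #include <stdio.h>

      #define MB      (1ULL << 20)
      #define WINDOW  (64 * MB)   /* cached readahead window from the trace */
      #define REFILL  (16 * MB)   /* chunk consumed before a blocking refill */

      /* Model of the current behavior: the reader copies cached pages
       * (fast IO) until 16M are used up, then stalls while a synchronous
       * readahead refills the window.  Returns the number of stalls. */
      static unsigned long long simulate_sync(unsigned long long total_bytes)
      {
          unsigned long long cached = WINDOW;
          unsigned long long stalls = 0;

          while (total_bytes > 0) {
              cached -= REFILL;        /* fast IO for 16M bytes */
              total_bytes -= REFILL;
              stalls++;                /* blocking readahead to refill window */
              cached += REFILL;
          }
          return stalls;
      }

      int main(void)
      {
          printf("blocking refills for a 256M read: %llu\n",
                 simulate_sync(256 * MB));
          return 0;
      }
      ```

      Every 16M of fast IO is followed by a stall, no matter how large the
      cached window is, which is why growing @max_readahead_per_file alone
      does not help.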

      In this way, we can only use fast IO for 16M bytes before
      falling back to the non-fast IO path. This is also the reason
      why increasing @max_readahead_per_file does not improve
      performance: the value only changes how much memory we cache.
      During my testing, whatever value I used, I could only get
      2 GB/s for a single-thread read.

      Actually, we could do better: once more than 16M bytes of
      readahead pages have been consumed, submit another readahead
      request in the background. Ideally, we could then always use
      fast IO. I did a quick test with fake IO on my limited
      PC server:
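
      Under the same assumptions, the patched behavior can be sketched as
      follows: every time 16M of readahead pages have been consumed, another
      readahead request is submitted in the background, so the window refills
      while the reader keeps copying cached pages. The names, sizes, and the
      assumption that background readahead completes before the window empties
      are illustrative, not the actual patch:

      ```c
      #include <stdio.h>

      #define MB       (1ULL << 20)
      #define WINDOW   (64 * MB)  /* cached readahead window */
      #define TRIGGER  (16 * MB)  /* consumed bytes that kick off background RA */

      /* Model of the patched behavior: readahead is submitted asynchronously
       * every TRIGGER bytes consumed, and we assume it completes before the
       * window empties.  Returns the number of reader stalls. */
      static unsigned long long simulate_async(unsigned long long total_bytes)
      {
          unsigned long long cached = WINDOW;
          unsigned long long consumed_since_ra = 0;
          unsigned long long stalls = 0;

          while (total_bytes > 0) {
              unsigned long long step = 1 * MB;  /* one read() worth of copy */

              if (cached == 0) {                 /* would block on real IO */
                  stalls++;
                  cached += TRIGGER;
              }
              cached -= step;
              total_bytes -= step;
              consumed_since_ra += step;

              if (consumed_since_ra >= TRIGGER) {
                  /* submit background readahead; the reader keeps doing
                   * fast IO from pages that are already cached */
                  cached += TRIGGER;
                  consumed_since_ra = 0;
              }
          }
          return stalls;
      }

      int main(void)
      {
          printf("reader stalls for a 256M read: %llu\n",
                 simulate_async(256 * MB));
          return 0;
      }
      ```

      In this model the reader never stalls, which is the source of the
      throughput gain measured below.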

      Without patch vs. patched:
      ~2.0 GB/s vs ~3.0 GB/s

      So we gain at least a 50% performance improvement, and I
      suppose we could get even more, maybe over 4 GB/s, with the patch.


              People

              • Assignee:
                wshilong Wang Shilong
                Reporter:
                wshilong Wang Shilong
              • Votes:
                1
                Watchers:
                14
