
[LU-11657] Prefetch whole ZFS block into client cache on random read

Details

    • Type: Improvement
    • Resolution: Unresolved
    • Priority: Minor
    • Affects Version/s: None
    • Fix Version/s: Lustre 2.13.0
    • Labels: None

    Description

      When doing random read IOPS to a ZFS-backed OST, the ZFS code will read the whole ZFS block from disk in order to verify the data checksum over the whole block. We may as well align the client read to the ZFS blocksize and fetch the whole ZFS block into client RAM, so that the client can re-use those blocks if the file is small enough to fit into RAM. This avoids extra server IO whose results would otherwise be discarded.
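
      For illustration, a minimal userspace sketch of the idea (the real change would belong in the client read path, not in applications); block_aligned_read() is a hypothetical helper and the 1 MiB ZFS_RECORDSIZE is only an assumed blocksize:

          #include <stdlib.h>
          #include <string.h>
          #include <unistd.h>

          /* Assumed on-disk blocksize; must be a power of two for the masks. */
          #define ZFS_RECORDSIZE ((off_t)1 << 20)

          /*
           * Read 'len' bytes at 'off', but issue the read for the whole
           * enclosing ZFS block(s). A buffered read keeps the surplus pages
           * in the client cache for later re-use. Short reads and partial
           * error handling are simplified for brevity.
           */
          static ssize_t block_aligned_read(int fd, char *buf, size_t len, off_t off)
          {
                  off_t start = off & ~(ZFS_RECORDSIZE - 1);      /* round down */
                  off_t end = (off + len + ZFS_RECORDSIZE - 1) &
                              ~(ZFS_RECORDSIZE - 1);              /* round up */
                  char *block = malloc(end - start);
                  ssize_t rc;

                  if (block == NULL)
                          return -1;
                  /* One large read pulls the whole block(s) into client cache. */
                  rc = pread(fd, block, end - start, start);
                  if (rc > off - start) {
                          size_t have = rc - (off - start);
                          memcpy(buf, block + (off - start), have < len ? have : len);
                  }
                  free(block);
                  return rc < 0 ? -1 : (ssize_t)len;
          }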


          Activity


            adilger Andreas Dilger added a comment:

            There was a proposal to use "lfs ladvise" to tell the OST what blocksize to use when writing a file with random chunks, so that the on-disk blocksize matches the application chunk size.

            Perhaps something similar can be used for reading the file?

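            Such a read-side hint might build on the existing ladvise machinery. A minimal sketch, assuming the llapi_ladvise() call and LU_LADVISE_WILLREAD advice from liblustreapi; the prefetch_enclosing_block() helper, and the idea of applying the advice to a ZFS-block-aligned extent, are hypothetical:

                #include <sys/types.h>
                #include <lustre/lustreapi.h>

                /*
                 * Ask the OST to prefetch the whole ZFS block around a random
                 * read offset, reusing the existing "willread" advice. The
                 * blocksize is passed in by the caller here; a new advice
                 * type could instead carry the real on-disk blocksize.
                 */
                static int prefetch_enclosing_block(int fd, off_t off, size_t blocksize)
                {
                        off_t start = off - (off % (off_t)blocksize);
                        struct llapi_lu_ladvise adv = {
                                .lla_advice = LU_LADVISE_WILLREAD,
                                .lla_start  = start,
                                .lla_end    = start + blocksize,  /* end of the aligned extent */
                        };

                        /* flags = 0: synchronous advice; one advice record */
                        return llapi_ladvise(fd, 0, 1, &adv);
                }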

            wshilong Wang Shilong (Inactive) added a comment:

            Andreas,

            Will there be some interface to control this, perhaps something like LU-11416?

            I am a little worried that if we always do this, it could cause many pages to be read but then discarded if the file size is large.

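            One possible guard against that read-and-discard churn, sketched below, would be to expand reads only when the whole file plausibly fits in client RAM; should_expand_reads() and the cache_limit_bytes threshold are hypothetical names, not an existing interface:

                #include <stdbool.h>
                #include <sys/stat.h>
                #include <unistd.h>

                /*
                 * Hypothetical gating heuristic: only align random reads to
                 * the ZFS blocksize when the file is small enough that the
                 * prefetched blocks are likely to be re-used from cache.
                 */
                static bool should_expand_reads(int fd, size_t cache_limit_bytes)
                {
                        struct stat st;

                        if (fstat(fd, &st) != 0)
                                return false;
                        /* Large files would just churn the client cache. */
                        return (size_t)st.st_size <= cache_limit_bytes;
                }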

            People

              Assignee: wc-triage WC Triage
              Reporter: adilger Andreas Dilger
              Votes: 0
              Watchers: 3

              Dates

                Created:
                Updated: