LU-13802: New i/o path: Buffered i/o as DIO


Details

    • Type: Bug
    • Resolution: Unresolved
    • Priority: Major
    Description

      As Andreas noted in LU-13798, the faster DIO path makes it interesting to switch from buffered i/o to direct i/o at larger sizes.

      This is actually pretty easy:
      If the buffered i/o meets the alignment requirements for DIO (the buffer is page aligned and the i/o size is a multiple of page size), you can simply set the DIO flag internally in Lustre, and the kernel will direct the i/o to the direct i/o code.  (In newer kernels, this does not require manipulating the O_DIRECT flag on the file, which is good because that's likely unsafe.)

      If the buffered i/o is not valid as direct i/o, the usual "fall back to buffered i/o" mechanism (implemented as part of LU-4198) happens automatically (just return 0 instead of -EINVAL).
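
      As a rough sketch of the check involved (the helper name and signature here are my own placeholders, not the actual Lustre code), the decision reduces to page alignment of the user buffer and the i/o size, with the unaligned case simply staying on the buffered path:

      #include <linux/mm.h>   /* PAGE_SIZE */
      #include <linux/uio.h>  /* struct iov_iter, iov_iter_alignment() */

      /* Placeholder sketch: can this buffered i/o be driven as DIO?
       * iov_iter_alignment() ORs together the segment addresses and
       * lengths, so one mask check covers "buffer is page aligned"
       * and "i/o size is a multiple of page size". */
      static bool bio_is_dio_capable(const struct iov_iter *iter)
      {
              return (iov_iter_alignment(iter) & (PAGE_SIZE - 1)) == 0;
      }

      /* If this returns true, set the DIO flag internally and let the
       * direct i/o code take the i/o; if the DIO attempt then returns 0
       * rather than -EINVAL, the existing LU-4198 fallback re-drives it
       * as ordinary buffered i/o. */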

       

      The question, then, is how to decide when to switch from buffered i/o to DIO.  I have a proposed solution that I haven't implemented yet*, which I'll describe here.
      *(I have done BIO (buffered i/o) as DIO, but I used a simple "Try all BIO as DIO" patch, not intelligent switching.)

      Essentially, direct i/o performance is a function of how much parallelism we can get by splitting the i/o, and the sync time of the back end storage.

      For example, on my flash back end, I see a benefit from switching 1 MiB BIO to 4x256 KiB DIO (1.9 GiB/s instead of 1.3 GiB/s).  But a spinning disk back end would require a much larger size for this change to make sense.

       

      So the basic question to answer is: what size of i/o do we submit?  How small, and into how many chunks, do we split up the i/o?

      Note that if our submitted i/o size at the higher levels is larger than the stripe size or RPC size, it's automatically split on those boundaries, so if we start submitting at very large sizes, we split on those boundaries instead.

      Here's my thinking.

      We have two basic tunables, one of which has separate values for rotational and non-rotational backends.

      The tunables are "preferred minimum i/o size" and "desired submission concurrency" (I'm not proud of the name of the second one, open to suggestions...).
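
      To make those two knobs concrete, here is a minimal sketch of how they might be carried around (the struct and field names are placeholders, not proposed parameter names; the rotational/non-rotational split is discussed further below):

      /* Placeholder sketch of the proposed tunables. */
      struct hybrid_io_tunables {
              /* "Preferred minimum i/o size": smallest DIO chunk worth
               * submitting; spinning disk wants a much larger value
               * than flash, hence two fields. */
              unsigned long   hit_pref_min_nonrot;    /* e.g. 256 KiB */
              unsigned long   hit_pref_min_rot;       /* larger, see below */

              /* "Desired submission concurrency": how many chunks to
               * aim for once the i/o is large enough. */
              unsigned int    hit_concurrency;        /* e.g. 8, maybe 16 */
      };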

       

      So, consider a situation where we have a preferred minimum size of 256 KiB and a desired submission concurrency of 8.

      If we do a 256 KiB BIO, that is done as buffered i/o.  If we do a 400 KiB BIO, still buffered.  But if we do a 512 KiB BIO, we split it into two 256 KiB DIOs.  A 700 KiB BIO is 2x256 KiB + 188 KiB DIOs.  (These thresholds may be too small.)

      Now, consider larger sizes.  1 MiB becomes 4x256 KiB.  Then 2 MiB becomes 8x256 KiB submissions.

      But at larger sizes, the desired submission concurrency comes into play.  Consider 4 MiB.  4 MiB/8 = 512 KiB.  So we split 4 MiB into 8x512 KiB.  This model prevents us from submitting many tiny i/os once the i/o size is large enough.
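
      In code form, the rule described above is just "chunk size = max(preferred minimum, total size / concurrency), and stay buffered below two full chunks".  A self-contained sketch (function and parameter names are placeholders):

      #include <linux/kernel.h>       /* max_t() */

      /* Placeholder sketch: returns the DIO chunk size for a buffered
       * i/o of 'count' bytes, or 0 to leave it on the buffered path.
       * With pref_min = 256 KiB and concurrency = 8 this reproduces the
       * examples above: 400 KiB stays buffered, 700 KiB becomes
       * 2x256 KiB + 188 KiB, 2 MiB becomes 8x256 KiB, and 4 MiB
       * becomes 8x512 KiB. */
      static size_t hybrid_dio_chunk(size_t count, size_t pref_min,
                                     unsigned int concurrency)
      {
              /* Not worth switching until there are at least two full
               * preferred-minimum chunks in the i/o. */
              if (count < 2 * pref_min)
                      return 0;

              /* Aim for 'concurrency' submissions, but never go below
               * the preferred minimum; any remainder goes out as one
               * final, smaller chunk. */
              return max_t(size_t, pref_min, count / concurrency);
      }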

      Note that I have not tested this much yet - I think 8 might be low for submission concurrency and 16 might be more desirable.  Basically, this is "try to cut the i/o into this many RPCs", so perhaps concurrency is the wrong word...?

      Also, as I noted earlier, the preferred i/o size will be very different for spinning disk vs non-rotational media.  So we will need two values for this (I am thinking we default the rotational value to some multiple of the non-rotational one and let people override), and we will also need to make this info available on the client.
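
      One hedged way to express that default (the multiplier and the way the client learns whether the backend is rotational are both open questions, so everything here is a placeholder):

      #include <linux/types.h>        /* bool */

      #define HYBRID_ROT_MULTIPLE     8       /* made-up default multiplier */

      /* Placeholder sketch: pick the preferred minimum for a backend.
       * 'backend_rotational' has to come from the server somehow -
       * that is exactly the question to raise in the comments. */
      static unsigned long hybrid_pref_min(bool backend_rotational,
                                           unsigned long pref_min_nonrot,
                                           unsigned long pref_min_rot_override)
      {
              if (!backend_rotational)
                      return pref_min_nonrot;

              return pref_min_rot_override ?:
                     pref_min_nonrot * HYBRID_ROT_MULTIPLE;
      }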

      I'll ask about that in a comment.  I've also got some benchmark info I can share later - but, basically, buffered i/o through this path performs exactly like DIO through this path.

            People

              Assignee: paf0186 Patrick Farrell (Inactive)
              Reporter: paf0186 Patrick Farrell (Inactive)
              Votes: 0
              Watchers: 13
