Lustre / LU-16724

generic page pool


Details

    • Type: Improvement
    • Resolution: Fixed
    • Priority: Minor
    • Fix Version/s: Lustre 2.16.0

    Description

      While looking at the unaligned DIO work in LU-13805, Andreas pointed out that we should consider how we get pages for unaligned DIO, for the already existing client-side encryption, and for our future compression feature (LU-10092).

      All three features need to allocate individual pages of memory for what are essentially bounce buffers.  The encryption feature uses a page pool, which seems like a good idea for the others.

      This ticket is to try to settle on one way to do this and ideally have only a single page pool (if that's our solution).
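      For context, a page pool in this sense is just a lock-protected cache of free pages with a get/put interface, so callers reuse pages instead of hitting the kernel allocator on every request.  The sketch below is purely illustrative; the lbb_* ("lustre bounce buffer") names are hypothetical and not taken from the existing encryption pool code:

      /* Illustrative sketch only: a minimal shared bounce-buffer pool.
       * (Pool setup with spin_lock_init()/INIT_LIST_HEAD() is omitted.) */
      #include <linux/list.h>
      #include <linux/spinlock.h>
      #include <linux/gfp.h>
      #include <linux/mm.h>

      struct lbb_pool {
              spinlock_t       lbp_lock;   /* protects lbp_pages and lbp_nr */
              struct list_head lbp_pages;  /* cached free pages, linked via page->lru */
              unsigned long    lbp_nr;     /* number of pages currently cached */
      };

      /* Take one page from the pool, falling back to the kernel allocator
       * when the pool is empty. */
      static struct page *lbb_get_page(struct lbb_pool *pool)
      {
              struct page *page = NULL;

              spin_lock(&pool->lbp_lock);
              if (!list_empty(&pool->lbp_pages)) {
                      page = list_first_entry(&pool->lbp_pages, struct page, lru);
                      list_del_init(&page->lru);
                      pool->lbp_nr--;
              }
              spin_unlock(&pool->lbp_lock);

              if (page == NULL)
                      page = alloc_page(GFP_NOFS);
              return page;
      }

      /* Return a page to the pool so the next caller can reuse it instead
       * of allocating again. */
      static void lbb_put_page(struct lbb_pool *pool, struct page *page)
      {
              spin_lock(&pool->lbp_lock);
              list_add(&page->lru, &pool->lbp_pages);
              pool->lbp_nr++;
              spin_unlock(&pool->lbp_lock);
      }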

      sebastien, can you talk about why the page pool was chosen?  Was it performance or functionality?

      I assume page pool performance is similar to or better than getting pages from the kernel allocator.  My only concern is wasting memory by keeping too large a pool of pre-allocated pages.  In theory, we could need several GiB of bounce buffers if there are several GiB of outstanding IO.

      However, it looks like there are shrinkers for the page pools, so if the pages are unused, the pool can be responsive to memory pressure.
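      On the memory-pressure point: in the kernel this is normally done by registering a struct shrinker whose count_objects/scan_objects callbacks report and release idle pages.  Continuing the hypothetical lbb_* sketch above (note that the exact registration call differs across kernel versions):

      #include <linux/shrinker.h>

      static struct lbb_pool lbb_pool;  /* the shared pool from the sketch above */

      /* Tell the VM how many cached pages could be released. */
      static unsigned long lbb_shrink_count(struct shrinker *s,
                                            struct shrink_control *sc)
      {
              return lbb_pool.lbp_nr;
      }

      /* Free up to sc->nr_to_scan idle pages back to the kernel. */
      static unsigned long lbb_shrink_scan(struct shrinker *s,
                                           struct shrink_control *sc)
      {
              unsigned long freed = 0;

              spin_lock(&lbb_pool.lbp_lock);
              while (freed < sc->nr_to_scan && !list_empty(&lbb_pool.lbp_pages)) {
                      struct page *page;

                      page = list_first_entry(&lbb_pool.lbp_pages, struct page, lru);
                      list_del_init(&page->lru);
                      lbb_pool.lbp_nr--;
                      __free_page(page);
                      freed++;
              }
              spin_unlock(&lbb_pool.lbp_lock);

              return freed;
      }

      static struct shrinker lbb_shrinker = {
              .count_objects = lbb_shrink_count,
              .scan_objects  = lbb_shrink_scan,
              .seeks         = DEFAULT_SEEKS,
      };

      /* Registered at setup time; whether that is register_shrinker() or the
       * newer shrinker_alloc()/shrinker_register() depends on the kernel
       * version, so the call is omitted here. */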

      So my first guess at a solution is this:
      Have a single shared page pool ("lustre bounce buffers" or similar), with some fairly small pre-allocated size, then it can grow and shrink as needed.  Because page pools respond to kernel memory pressure, we shouldn't have the problem of holding on to unneeded memory.

      So, we have one page pool for all our bounce buffers, and we don't worry too much about starting size except to keep it small, since the required size will be highly variable.  If a customer is not using any of these features, the total need is zero (so we should keep the cost small in this case).  If they are doing heavy IO while using one or more of these features, they could easily require several GiB at once.  So we pay the cost of allocating the necessary memory the first time it's used, then we don't have to pay that cost again unless the kernel asks us to shrink it.
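      To make the "pay the allocation cost once" idea concrete, here is how any of the three features might borrow a batch of bounce pages from the shared pool for a single I/O and hand them back on completion; the pages stay cached for the next I/O until the shrinker reclaims them.  Again, this continues the hypothetical lbb_* sketch and is not existing code:

      #include <linux/errno.h>

      /* Borrow npages bounce pages from the shared pool for one request.
       * On failure, give back whatever was already taken. */
      static int lbb_get_bounce_pages(struct lbb_pool *pool, struct page **pages,
                                      unsigned int npages)
      {
              unsigned int i;

              for (i = 0; i < npages; i++) {
                      pages[i] = lbb_get_page(pool);
                      if (pages[i] == NULL)
                              goto unwind;
              }
              return 0;

      unwind:
              while (i-- > 0)
                      lbb_put_page(pool, pages[i]);
              return -ENOMEM;
      }

      /* Release the bounce pages once the I/O has completed; they remain in
       * the pool for reuse rather than going straight back to the kernel. */
      static void lbb_put_bounce_pages(struct lbb_pool *pool, struct page **pages,
                                       unsigned int npages)
      {
              unsigned int i;

              for (i = 0; i < npages; i++)
                      lbb_put_page(pool, pages[i]);
      }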

      How does this sound?  sebastien, adilger, ablagodarenko


            People

              Assignee: paf0186 Patrick Farrell (Inactive)
              Reporter: paf0186 Patrick Farrell (Inactive)
              Votes: 0
              Watchers: 7
