Affects Version/s: None
Fix Version/s: None
LU-13802 covers the code for switching between the BIO and DIO paths, allowing BIO which meets the requirements for DIO to use the DIO path when appropriate.
The problem is that the requirements for DIO are sometimes hard to meet: the i/o must be both page and size aligned. This ticket is about how to do unaligned DIO, so that we can send any BIO through the DIO path.
This cannot be done with the existing Lustre i/o path. There are a few minor issues, but the central problem is that if an i/o is unaligned, we no longer have a 1-to-1 mapping between a page on the client and a page in the file/on the server. (Buffered i/o creates this 1-to-1 mapping by copying into an aligned buffer.) This 1-to-1 mapping could possibly be removed, but that would require a significant rework of the Lustre i/o path.
So one option is to create a new DIO path that permits unaligned i/o from userspace all the way to disk.
The other option comes from the following observation:
When doing buffered i/o, about 20% of the time is spent allocating the buffer and doing the memcpy() into it. Of the remaining 80%, something like 70% is page tracking of various kinds.
Because each page in the page cache can be accessed from multiple threads, including being flushed at any time by various threads (memory pressure via kswapd, lock cancellation, writeout...), it has to sit on various lists, hold references on (effectively) the file it belongs to, etc.
This tracking work, not allocation and memcpy, is where most of the time goes.
So if we implement a simple buffering scheme - allocate an aligned buffer, copy data to (or from) that buffer, and then do a normal DIO write (or read) from (or to) it - this can be hugely faster than buffered i/o.
If we use the normal DIO path (i.e., synchronous write, and pages are not kept after a read), this remains a buffer rather than a cache, so the DIO path can stay lockless.
Also, if we implement this correctly, we have a number of excellent options for speeding this up:
- Move allocation (if buffers are not pre-allocated) and the memcpy from the user thread to the ptlrpcd threads handling RPC submission. This lets us do these operations in parallel, which should dramatically improve speed.
- Use pre-allocated buffers.
- Potentially, since we control the entire copying path, we could enable the FPU and use vectorized memory copies. (Various aspects of the buffered i/o path in the kernel mean the FPU has to be turned on and off for each page, and the cost of that outweighs the benefit of a vectorized memcpy.)