Details

    • Type: Technical task
    • Resolution: Unresolved
    • Priority: Minor

    Description

      Writeback on close piggybacks the dirty data for DoM files onto the close RPC when the data fits into the inline buffer. This should significantly improve write performance for small files.

      I have had this idea for a while, and I have seen this problem in my recent tests. Writeback for small files is really slow: no matter how large I set the MDC's max_rpcs_in_flight, it simply maxes out. Small RPCs are expensive. An alternative solution would be a compound RPC that merges those small RPCs, but that would introduce more issues. The easier solution is writeback on close.
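
      To make the idea concrete, here is a minimal standalone sketch of the intended close path. It is illustrative only, not the Lustre client code: dom_file, DOM_INLINE_MAX, the 64 KiB cap, and the send_* helpers are all hypothetical.

      /*
       * Illustrative sketch: at close time, if a DoM file's dirty bytes fit in
       * the close RPC's inline buffer, send a single close RPC that carries the
       * data; otherwise flush through the normal write path first, then close.
       */
      #include <stdbool.h>
      #include <stddef.h>
      #include <stdio.h>

      #define DOM_INLINE_MAX (64 * 1024)          /* assumed inline-buffer cap */

      struct dom_file {
          const char *dirty;                      /* pending (unwritten) bytes */
          size_t      dirty_len;
      };

      /* Stand-ins for the two RPC paths. */
      static void send_close_rpc(const void *inline_data, size_t len)
      {
          (void)inline_data;
          printf("close RPC, %zu inline bytes\n", len);
      }

      static void send_write_rpcs(const void *data, size_t len)
      {
          (void)data;
          printf("write RPC(s) for %zu bytes\n", len);
      }

      /* Close path: one RPC for small files, write + close for large ones. */
      static void dom_close(struct dom_file *f)
      {
          if (f->dirty_len > 0 && f->dirty_len <= DOM_INLINE_MAX) {
              send_close_rpc(f->dirty, f->dirty_len);      /* data piggybacked */
          } else {
              if (f->dirty_len > 0)
                  send_write_rpcs(f->dirty, f->dirty_len); /* too big to inline */
              send_close_rpc(NULL, 0);                     /* plain close */
          }
          f->dirty_len = 0;
      }

      int main(void)
      {
          struct dom_file small = { .dirty = "hello", .dirty_len = 5 };
          dom_close(&small);   /* one RPC instead of write + close */
          return 0;
      }

      The point of the design is that a small file's entire writeback collapses into the close RPC the client has to send anyway, so the write path costs no extra round trip.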


          Activity

            [LU-11428] Writeback on close for DoM

            pfarrell Patrick Farrell (Inactive) added a comment:

            So this would cut the RPC count in half and should reduce the RPC processing time for the write, since the data is inline rather than transferred by RDMA, but how big will this effect be relative to the write itself? It sounds like in your testing, Jinshan, you were unable to keep the MDS busy because of the max_rpcs_in_flight limit.

            It seems like we should do this, but also raise the RPC-in-flight limit, unless the MDS CPU/disk was fully busy (which it sounds like it wasn't). Raising the RPC-in-flight limit for the MDS might be a cheap win here.

            Jinshan Jinshan Xiong added a comment:

            That's pretty much what the OSC is doing right now. In the close handling on the MDC layer, it will call routines like cl_page_make_ready() to clear the page's Dirty bit and put the page into writeback state. After the close RPC completes, it will clear the page's Writeback bit.

            I didn't realize there was anything special about this, but I'm pretty sure there will be some issues when it is implemented. We can discuss them further at that time.
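
            The page-state handling described above amounts to a dirty -> writeback -> clean transition driven by the close RPC. The following standalone sketch models just that transition; the names (dom_page, dom_page_make_ready, dom_page_complete) are hypothetical stand-ins rather than the CLIO API, with cl_page_make_ready() referenced only in the comments.

            /*
             * Minimal page-state sketch: before the close RPC is sent, each
             * dirty page moves from "dirty" to "writeback" (the role of
             * cl_page_make_ready() in the real client); once the close RPC
             * completes, the writeback state is cleared.
             */
            #include <assert.h>
            #include <stdio.h>

            enum dom_page_state {
                DOM_PAGE_CLEAN,
                DOM_PAGE_DIRTY,
                DOM_PAGE_WRITEBACK,
            };

            struct dom_page {
                enum dom_page_state state;
            };

            /* Analogue of cl_page_make_ready(): dirty -> writeback before the RPC. */
            static void dom_page_make_ready(struct dom_page *pg)
            {
                assert(pg->state == DOM_PAGE_DIRTY);
                pg->state = DOM_PAGE_WRITEBACK;
            }

            /* Called from close-RPC completion: writeback -> clean on success,
             * back to dirty on error (an assumed policy for this sketch). */
            static void dom_page_complete(struct dom_page *pg, int rpc_rc)
            {
                assert(pg->state == DOM_PAGE_WRITEBACK);
                pg->state = (rpc_rc == 0) ? DOM_PAGE_CLEAN : DOM_PAGE_DIRTY;
            }

            int main(void)
            {
                struct dom_page pg = { .state = DOM_PAGE_DIRTY };

                dom_page_make_ready(&pg);   /* close path stages the page */
                /* ... close RPC with inline data is sent and replied to here ... */
                dom_page_complete(&pg, 0);  /* reply handler clears writeback state */

                printf("final state: %s\n",
                       pg.state == DOM_PAGE_CLEAN ? "clean" : "not clean");
                return 0;
            }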

            jgmitter Joseph Gmitter (Inactive) added a comment:

            Jinshan,

            Any thoughts on the above?

            Thanks.

            Joe

            tappro Mikhail Pershin added a comment:

            Yes, this optimisation is on my list too, though I haven't thought through the details so far. The solution with an inline data buffer on close looks easier than read-on-open because we can allocate a buffer of the needed size, up to a reasonable maximum. The problem will probably be the buffer preparation on the client. Do you have an idea how to combine that with CLIO?
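
            As a hedged sketch of the buffer-sizing idea above: allocate the close request's inline area to match the dirty size, up to a cap, and fall back to normal writeback beyond it. The struct layout, names, and the 64 KiB cap are assumptions, not the Lustre wire format.

            #include <stdlib.h>
            #include <string.h>

            #define DOM_INLINE_CAP (64 * 1024)   /* assumed "reasonable maximum" */

            struct dom_close_req {
                size_t data_len;                 /* bytes of piggybacked file data */
                char   data[];                   /* flexible array: sized per request */
            };

            /*
             * Build a close request carrying @len bytes of dirty data inline,
             * or return NULL if the data is too large or allocation fails
             * (the caller then uses the normal writeback path).
             */
            static struct dom_close_req *dom_close_req_alloc(const void *dirty,
                                                             size_t len)
            {
                struct dom_close_req *req;

                if (len > DOM_INLINE_CAP)
                    return NULL;                 /* fall back to separate write RPCs */

                req = malloc(sizeof(*req) + len);
                if (req == NULL)
                    return NULL;

                req->data_len = len;
                memcpy(req->data, dirty, len);
                return req;
            }

            int main(void)
            {
                const char payload[] = "small DoM file contents";
                struct dom_close_req *req =
                        dom_close_req_alloc(payload, sizeof(payload));

                /* req != NULL here: the payload is well under DOM_INLINE_CAP. */
                free(req);
                return 0;
            }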

            People

              tappro Mikhail Pershin
              Jinshan Jinshan Xiong
              Votes: 0
              Watchers: 9
