[LU-10191] FLR2: Server Local Client (SLC)

Details

    • Type: New Feature
    • Resolution: Unresolved
    • Priority: Minor

Description

When mounting a client locally on the OSS or MDS, it would be desirable to have a local IO path for bulk writes from the OSC to obdfilter, rather than sending the data via ptlrpc->lnet->ptlrpc, since this would speed up IO and significantly reduce the CPU usage of local IO. It makes sense to implement this initially only for bulk IO (if that is easier), since bulk IO typically has the highest memory-copy overhead, and to leave locking/metadata on the normal RPC paths so they are handled consistently with other clients (avoiding potential hard-to-find bugs).
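
As a rough illustration of the intended short-circuit, here is a minimal userspace sketch. All names (slc_*, local_ofd_write, etc.) are invented for illustration; none of them exist in Lustre, and the real implementation would live in the OSC/OFD kernel code:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>
    #include <string.h>

    struct slc_page_vec {            /* stand-in for the bulk page array */
            void   **pages;
            size_t   npages;
    };

    /* Stand-in for the node's own NID; value invented for illustration. */
    static const char *slc_local_nid = "192.168.1.10@tcp";

    static bool slc_target_is_local(const char *target_nid)
    {
            return strcmp(target_nid, slc_local_nid) == 0;
    }

    static int local_ofd_write(const char *t, struct slc_page_vec *pv)
    {
            printf("local path: %zu pages to %s, no LNet hop\n", pv->npages, t);
            return 0;
    }

    static int ptlrpc_bulk_write_path(const char *t, struct slc_page_vec *pv)
    {
            printf("RPC path: %zu pages to %s via ptlrpc/lnet\n", pv->npages, t);
            return 0;
    }

    int slc_bulk_write(const char *target_nid, struct slc_page_vec *pv)
    {
            /* Fast path: the target is served by this node, so skip the
             * ptlrpc->lnet->ptlrpc round trip and the 0@lo memcpy. */
            if (slc_target_is_local(target_nid))
                    return local_ofd_write(target_nid, pv);

            /* Normal path: remote targets go through the RPC stack. */
            return ptlrpc_bulk_write_path(target_nid, pv);
    }

    int main(void)
    {
            struct slc_page_vec pv = { .pages = NULL, .npages = 256 };

            slc_bulk_write("192.168.1.10@tcp", &pv);   /* local fast path */
            slc_bulk_write("10.0.0.2@o2ib", &pv);      /* remote RPC path */
            return 0;
    }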

Any modifying RPCs to the local OST should be synchronous by default, or possibly use commit-on-share, so that they do not need to be replayed if the server restarts. This implies that it is preferable to schedule lfs mirror resync so that it reads from the local OSS and writes to a remote OSS. It might be desirable to allow this functionality to be disabled, either for testing purposes (e.g. a local client mount in test scripts) or when local performance matters more than avoiding a wait for recovery to time out.
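
A minimal sketch of that policy, again with invented names (the slc_force_sync tunable is hypothetical, not an existing Lustre parameter):

    #include <stdbool.h>

    /* Hypothetical tunable: default on, so local modifying RPCs commit
     * before the reply and never need replay; tests could turn it off. */
    static bool slc_force_sync = true;

    enum slc_rq_mode { SLC_RQ_ASYNC, SLC_RQ_SYNC };

    enum slc_rq_mode slc_modifying_rq_mode(bool target_is_local)
    {
            if (target_is_local && slc_force_sync)
                    return SLC_RQ_SYNC;   /* nothing to replay on restart */
            return SLC_RQ_ASYNC;
    }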

It should be possible to enable this mode automatically at mount time based on the client NID, rather than having e.g. a mount option that forces a "local mount", since it would apply only to targets on the same OSS/MDS node and not to remote targets.
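
One way this detection could look, as a sketch under the assumption that both NID lists are available at mount time (all names invented): intersect the client's NIDs with the NIDs each target is served from, and enable the local path per target only on a match.

    #include <stdbool.h>
    #include <stddef.h>
    #include <string.h>

    /* Enable the local IO path for a target only if one of the client's
     * NIDs matches a NID the target is served from; remote targets on
     * the same mount keep the normal RPC path. */
    bool slc_enable_for_target(const char *const *client_nids, size_t nc,
                               const char *const *target_nids, size_t nt)
    {
            for (size_t i = 0; i < nc; i++)
                    for (size_t j = 0; j < nt; j++)
                            if (strcmp(client_nids[i], target_nids[j]) == 0)
                                    return true;
            return false;
    }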

A further optimization would be to avoid caching read data in the llite layer, so the same data is not cached twice on the node. The OSS would also cache this data, and the OSS cache has the advantage that it can be shared with other clients, though it has a higher access overhead than the VFS page cache (depending on the IO size).
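
A sketch of such a policy; the size cutoff and all names are assumptions, not existing Lustre behavior:

    #include <stdbool.h>
    #include <stddef.h>

    /* Assumed cutoff: small reads still use the cheap VFS page cache,
     * while large local reads leave caching to the OSS so the data is
     * held once per node and can also be shared with other clients. */
    static size_t slc_cache_bypass_bytes = 64 * 1024;

    bool slc_read_bypasses_llite_cache(bool target_is_local, size_t bytes)
    {
            return target_is_local && bytes >= slc_cache_bypass_bytes;
    }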

Activity

adilger Andreas Dilger added a comment -

I don't think this is fixed by LU-12722; that just allows local mounting of the client.

This ticket is more about having a direct transfer of data from the local client mount to the local storage (probably OSC->OFD?) rather than doing a memcpy() of the bulk data over the 0@lo interface.

bzzz Alex Zhuravlev added a comment -

Implemented in LU-12722.

pfarrell Patrick Farrell (Inactive) added a comment -

Local client exclusion from recovery is being done by bzzz under LU-12722.
adilger Andreas Dilger added a comment (edited) -

Note that we can use the llite.*.client_type file to indicate that this is a "local_server" or similar. For better or worse, the current content is "local client" (and it used to contain "remote client" for ancient LL_SBI_RMT_CLIENT mounts, before patch v2_8_54_0-73-g9d06de3 landed). There are sanity.sh test_125 and test_126 that check for local client mounts, but those checks could potentially just be removed.

People

    Assignee: wc-triage WC Triage
    Reporter: adilger Andreas Dilger
    Votes: 0
    Watchers: 18
