
LU-15642: restore server read/write latency measurements

Details

    • Type: Bug
    • Resolution: Fixed
    • Priority: Minor
    • Fix Version/s: Lustre 2.16.0
    • Affects Version/s: Lustre 2.15.0
    • Labels: None
    • Severity: 3

    Description

      The patch https://review.whamcloud.com/46075 "LU-12585 obdfilter: Use actual I/O bytes in stats" changed the measurement point of the read/write latency stats, so that they now include the network round-trip time, where they previously contained only the local filesystem IO time. This causes the reported latency with 46075 applied to be much higher (tens or hundreds of milliseconds) than the pre-patch latency (hundreds of microseconds on flash storage).

      While it may be necessary to account for the actual read/write bytes after the RPC transfer is complete, the code should account for the IO latency after the IO is complete, as it did before, rather than after the RPC is complete. The RPC stats at the OST level and on the client will include the full RPC latency, and the ofd stats should only account for the storage latency.
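
      Each stats line reports the sample count, unit, min, max, sum, and
      sum-of-squares, so the mean latency is simply sum divided by samples.
      A quick sketch for checking it, assuming the usual lctl stats line
      layout (field 2 is the sample count, field 7 the sum in microseconds):

        # mean read/write latency in usec from the ofd stats (sum / samples)
        lctl get_param -n obdfilter.*.stats |
            awk '$1 == "read" || $1 == "write" { printf "%s avg %.0f usec\n", $1, $7/$2 }'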


          Activity


            "Oleg Drokin <green@whamcloud.com>" merged in patch https://review.whamcloud.com/46833/
            Subject: LU-15642 obdclass: use consistent stats units
            Project: fs/lustre-release
            Branch: master
            Current Patch Set:
            Commit: b515c6ec2ab84598c77c65eb78f1afd5e67b1ede

            gerrit Gerrit Updater added a comment - "Oleg Drokin <green@whamcloud.com>" merged in patch https://review.whamcloud.com/46833/ Subject: LU-15642 obdclass: use consistent stats units Project: fs/lustre-release Branch: master Current Patch Set: Commit: b515c6ec2ab84598c77c65eb78f1afd5e67b1ede

            Andreas Dilger added a comment:
            Note: the above patch does not fix the IO latency stats; it just includes some improvements made while I was looking at this code.

            "Andreas Dilger <adilger@whamcloud.com>" uploaded a new patch: https://review.whamcloud.com/46833
            Subject: LU-15642 obdclass: use consistent stats units
            Project: fs/lustre-release
            Branch: master
            Current Patch Set: 1
            Commit: a171d269253076d7c6c3e827ce116cab142add83

            gerrit Gerrit Updater added a comment - "Andreas Dilger <adilger@whamcloud.com>" uploaded a new patch: https://review.whamcloud.com/46833 Subject: LU-15642 obdclass: use consistent stats units Project: fs/lustre-release Branch: master Current Patch Set: 1 Commit: a171d269253076d7c6c3e827ce116cab142add83

            Andreas Dilger added a comment:
            Steve, there are already stats that include the RPC network transfer time:

            # lctl get_param obdfilter.*.stats | egrep "read|write|="
            obdfilter.testfs-OST0003.stats=
            read_bytes                26 samples [bytes] 1048576 4194304 104857600 433207581343744
            write_bytes               25 samples [bytes] 4194304 4194304 104857600 439804651110400
            read                      26 samples [usecs] 269 34432 83969 2362054419
            write                     25 samples [usecs] 546 23458 87802 1197733848
            # lctl get_param ost.OSS.ost_io.stats | egrep "read|write"
            ost_read                  26 samples [usec] 2238 101196 469557 22435172467
            ost_write                 25 samples [usec] 6709 106630 774811 36953324201
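
            From those two files one can also put a number on the gap: a
            rough sketch, assuming the line layout shown above (field 2 is
            the sample count, field 7 the sum in microseconds) and the OST
            name testfs-OST0003:

            # average storage-side vs. full-RPC read latency (sum / samples)
            ofd=$(lctl get_param -n obdfilter.testfs-OST0003.stats | awk '$1 == "read" { print $7/$2 }')
            rpc=$(lctl get_param -n ost.OSS.ost_io.stats | awk '$1 == "ost_read" { print $7/$2 }')
            echo "read: ${ofd} usec in obdfilter, ${rpc} usec for the full ost_io RPC"

            With the numbers above that works out to roughly 3.2 ms versus
            18 ms per read; the difference is time spent in request queueing
            and the network transfer rather than in the storage IO itself.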

            Steve Crusan added a comment:
            > While it may be necessary to account for the actual read/write bytes after the RPC transfer is complete, the code should account for the IO latency after the IO is complete, as it did before, rather than after the RPC is complete. The RPC stats at the OST level and on the client will include the full RPC latency, and the ofd stats should only account for the storage latency.

            I don't know how much work this is, or whether it is the best way to do things, but I think it might be useful to have both sets of counters (including for metadata operations), as sketched after this list:

            • I/O completion
            • I/O completion + RPC completion (<countername>_rtt, or maybe an extra field that needs to be enabled via a tunable?)

            You can certainly collect the per-client counters (via llite), but it is much more difficult to collect and aggregate all of the client data than to have an overall server-side average for general use, such as identifying network congestion outside of the Lustre servers' control.
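
            For illustration only, a hypothetical stats layout with both of
            the suggested counters; the <countername>_rtt names do not exist
            in Lustre, and the _rtt values below simply reuse the ost_io
            numbers from the earlier comment:

            obdfilter.testfs-OST0003.stats=
            read                      26 samples [usecs] 269 34432 83969 2362054419
            read_rtt                  26 samples [usecs] 2238 101196 469557 22435172467
            write                     25 samples [usecs] 546 23458 87802 1197733848
            write_rtt                 25 samples [usecs] 6709 106630 774811 36953324201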

            People

              Assignee: Andreas Dilger
              Reporter: Andreas Dilger
              Votes: 0
              Watchers: 4
