
add OST/MDT performance statistics to obd_statfs

Details

    • Type: Improvement
    • Resolution: Unresolved
    • Priority: Major

    Description

      In order to facilitate the transfer of OST and MDT performance statistics to userspace applications, such as global NRS scheduling, SCR checkpoint scheduling, QOS and allocation decisions on the MDS, etc., it is useful to transport them via obd_statfs to the clients.

      The statistics should include a peak value and a decaying average of the current value for each of: read IOPS, write IOPS, read KiB/s, and write KiB/s.

      The OSS and MDS already collect these statistics for presentation via /proc, and it should be possible to include them in struct obd_statfs in newly-added fields at the end of the struct.

      The stats should be fetched and printed with the lfs df --stats command for all targets, but not necessarily for regular statfs() requests. With LU-10018 "MDT as a statfs() proxy", the MDT_STATFS request now has an mdt_body in the request which can be used to request different behaviour for the RPC.
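      As a rough illustration only (the names and units below are a sketch, not existing Lustre symbols; units follow the KiB/s wording above), the eight metrics could be carried as new __u32 fields appended to struct obd_statfs:

      #include <linux/types.h>

      /* illustrative layout only -- not the actual obd_statfs definition */
      struct obd_statfs_perf_sketch {
              __u32 os_read_iops_peak;   /* peak read RPCs/sec since stats reset */
              __u32 os_write_iops_peak;  /* peak write RPCs/sec since stats reset */
              __u32 os_read_bw_peak;     /* peak read bandwidth, KiB/s */
              __u32 os_write_bw_peak;    /* peak write bandwidth, KiB/s */
              __u32 os_read_iops_avg;    /* decaying average of current read RPCs/sec */
              __u32 os_write_iops_avg;   /* decaying average of current write RPCs/sec */
              __u32 os_read_bw_avg;      /* decaying average of current read bandwidth, KiB/s */
              __u32 os_write_bw_avg;     /* decaying average of current write bandwidth, KiB/s */
      };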

          Activity

            adilger Andreas Dilger added a comment -

            The statfs data is cached at each level of the stack, so that reading from multiple "kbytesfree", "kbytestotal", "filesfree", etc. parameters doesn't generate separate RPCs for each one.

            I don't think it matters which one you use. Depending on where the stats are available, you could fill in the stats at one level and they will be accessible up the stack.

            Each of the main lprocfs stats structures has its own timestamp since it was last reset (e.g. lprocfs_stats.ls_init), which is printed by lprocfs_stats_header(), so that should be used instead of the mount time. This ensures the time range of the stats matches the values that are accumulated there.
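            For illustration, a minimal sketch (not Lustre code; the helper name is hypothetical) of deriving a since-reset average rate from a cumulative counter and the stats-init timestamp described above:

            #include <time.h>

            /* average rate since the stats were last reset/initialized */
            static unsigned long long avg_rate_since_reset(unsigned long long counter_sum,
                                                           time_t stats_init, time_t now)
            {
                    time_t elapsed = now - stats_init;

                    return elapsed > 0 ? counter_sum / elapsed : 0;
            }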

            As for the decay factor, I don't have a fixed number in mind. We typically try to avoid hard-coding constants into the code, but I'm not sure whether this needs to be configurable or not. We need both the decay factor and the decay interval. If we calculate the stats every 5s to determine "instantaneous" peak IOPS/BW, and want them to decay by 0.4 after 1 minute, then we need a decay factor satisfying 0.6 = a^(60/5); a = 245/256 works out to about 41% decay after a minute and 93% decay after 5 minutes, which seems reasonable.
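            To make the numbers concrete, here is a standalone sketch (not Lustre code) of a fixed-point update using a = 245/256 per 5s interval; running it shows roughly 41% decay after 12 intervals (1 minute) and roughly 93% after 60 intervals (5 minutes):

            #include <stdio.h>

            #define DECAY_NUM 245ULL    /* a = 245/256 per sampling interval */
            #define DECAY_DEN 256ULL

            /* new_avg = old_avg * a + sample * (1 - a), kept in integer math */
            static unsigned long long decay_avg(unsigned long long old_avg,
                                                unsigned long long sample)
            {
                    return (old_avg * DECAY_NUM +
                            sample * (DECAY_DEN - DECAY_NUM)) / DECAY_DEN;
            }

            int main(void)
            {
                    unsigned long long avg = 1000;  /* e.g. a 1000 IOPS running average */
                    int i;

                    for (i = 0; i < 12; i++)        /* 12 x 5s = 1 minute, no new load */
                            avg = decay_avg(avg, 0);
                    printf("after 1 min: %llu\n", avg);  /* ~586: about 41% decayed */

                    for (i = 0; i < 48; i++)        /* 60 intervals total = 5 minutes */
                            avg = decay_avg(avg, 0);
                    printf("after 5 min: %llu\n", avg);  /* about 93% decayed */

                    return 0;
            }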

            georgezhaojobs George Zhao added a comment - edited

            I'm trying to populate and cache the new fields in obd_statfs:

            __u32           os_read_bytes_peak;
            __u32           os_write_bytes_peak;
            __u32           os_read_io_peak;
            __u32           os_write_io_peak;
            __u32           os_read_bytes_avg;
            __u32           os_write_bytes_avg;
            __u32           os_read_io_avg;
            __u32           os_write_io_avg;
            

            If I understood correctly, the logic should be in ofd_statfs()->tgt_statfs_internal().

            What confuses me is that I found tgd_osfs cached in tg_grants_data, and obd_osfs cached in obd_device. Which one should I use?
            Another question: I'm going to add an obd_device argument to tgt_statfs_internal() so I can get at obd_stats. Please correct me if I'm wrong.

            Two other questions need design decisions:

            1. To get the first average value, how about  lprocfs_counter.lc_sum/(current_time - mount_time)? Where can I get the mount_time for ofd/mdt?
            2. For decaying average, what's the decay factor a? Is it configurable?
              time_delta = current_time - obd->obd_osfs_age
              new_avg = new_sample/time_delta
              new_d_avg = old_d_avg * a + new_avg * (1-a)

            adilger Andreas Dilger added a comment -

            Actually, it should be possible to increase the size of the obd_statfs structure in the STATFS RPC relatively easily, so long as the nodes handling it do not try to access beyond the actual size requested/replied. I don't think a 16-bit value would be granular enough, no matter what units are chosen.

            georgezhaojobs George Zhao added a comment -

            In order to reuse the current polling mechanism, I suppose we need to figure out how to fit 8 metrics into 7 (or fewer) fields.

            Maybe compress "average read/write io count" into one u32 field? Each 16-bit half would max out at 65535 per 5 sec interval. Does that sound acceptable?
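            A minimal sketch of that packing (the helper names are made up here; each count is clamped at the 16-bit limit):

            #include <stdint.h>

            /* pack two per-interval IO counts into one 32-bit field,
             * clamping each at the 16-bit maximum of 65535 */
            static inline uint32_t pack_rw_iocount(uint32_t reads, uint32_t writes)
            {
                    if (reads > 0xffff)
                            reads = 0xffff;
                    if (writes > 0xffff)
                            writes = 0xffff;
                    return (reads << 16) | writes;
            }

            static inline void unpack_rw_iocount(uint32_t packed,
                                                 uint32_t *reads, uint32_t *writes)
            {
                    *reads = packed >> 16;
                    *writes = packed & 0xffff;
            }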

            adilger Andreas Dilger added a comment -

            1. "The MDS is already polling the OSTs at 5s intervals" - this is done via the "LOD->OSP" code on the MDS:
              • lod_qos_statfs_update()->lod_statfs_and_check()
            2. Yes.
            3. Hmm, yes, except when I wrote this ticket many years ago there were more than 8 reserved fields, and now there are only 7 left. All fields need to fit into u32 values comfortably, so units should be chosen carefully. If KB/s is used, this would only give a 4 TB/s peak, and that may not be large enough in the future (see the quick calculation after this list).
            4. The time interval is already configurable via lod.*.qos_maxage. Note, however, that every MDT (10-100 today) needs to fetch this information from every OST (8-1000+), so it shouldn't be done too frequently. That part is mostly irrelevant here, though, since the statfs() data will be fetched on demand at the client.
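            A quick standalone calculation of the ceiling a u32 rate field gives for two candidate units (the MiB/s line is just one possible alternative, not something proposed here):

            #include <stdio.h>

            int main(void)
            {
                    /* a u32 field holds values up to about 2^32 */
                    unsigned long long kib_ceiling = (1ULL << 32) << 10;  /* bytes/s if unit is KiB/s */
                    unsigned long long mib_ceiling = (1ULL << 32) << 20;  /* bytes/s if unit is MiB/s */

                    printf("KiB/s unit: ~%llu TiB/s max\n", kib_ceiling >> 40);  /* 4 */
                    printf("MiB/s unit: ~%llu PiB/s max\n", mib_ceiling >> 50);  /* 4 */
                    return 0;
            }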
            georgezhaojobs George Zhao added a comment -

            Before starting the implementation, I want to clarify a few design considerations. Please correct me if a question doesn't make sense.

            1. Where can I find this logic? "The MDS is already polling the OSTs at 5s intervals".
            2. Where to calculate: does each target maintain its own metrics and fill in obd_statfs in ofd_statfs() and mdt_statfs()? Also, each target would keep one previous stats snapshot.
            3. struct obd_statfs Changes: Are we adding 8 fields (peak, avg) X (read, write) X (io, bandwidth)? 
            4. Time Window: 5 seconds, or make it configurable?

            Any guidance or suggestions on these points would be greatly appreciated.


            adilger Andreas Dilger added a comment -

            The MDS is already polling the OSTs at 5s intervals in order to fetch the free blocks and inode counters to make QOS object allocation decisions. Including the RPC performance counters in obd_statfs will not add significant overhead to this operation.

            Returning a 5-second running average for the "current" performance, and "peak" performance ever seen since mount seems reasonable, though I'm open to suggestions.


            nrutman Nathan Rutman added a comment -

            I suppose you can get instantaneous rates for the last 5 seconds if you only record the stats when called by the MDT. I think 60-second averages are more useful so we don't have to poll statfs so often; I suppose we could only record the stats if the last record is more than 60 seconds old, so we would effectively have 60-second epochs.
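            A small sketch of that gating (all names here are hypothetical, not existing Lustre symbols):

            #include <stdbool.h>
            #include <time.h>

            #define STATS_EPOCH_SEC 60      /* minimum age before refreshing the published rates */

            struct rate_snapshot {
                    time_t             rs_last_time;        /* when the rates were last refreshed */
                    unsigned long long rs_last_read_bytes;  /* raw byte counter at that time */
                    unsigned long long rs_read_bw;          /* published read bandwidth, KiB/s */
            };

            /* fold the raw counter into the published rate only once per epoch */
            static bool maybe_update_rates(struct rate_snapshot *rs,
                                           unsigned long long read_bytes, time_t now)
            {
                    time_t delta = now - rs->rs_last_time;

                    if (delta < STATS_EPOCH_SEC)
                            return false;   /* still inside the current epoch */

                    rs->rs_read_bw = (read_bytes - rs->rs_last_read_bytes) / 1024 / delta;
                    rs->rs_last_read_bytes = read_bytes;
                    rs->rs_last_time = now;
                    return true;
            }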


            adilger Andreas Dilger added a comment -

            We already track stats on the OST and MDT for RPCs, read/write calls with min/max duration, and read_bytes/write_bytes with sums. It should be fairly straightforward to use the existing stats counters to generate peak performance and decaying average performance, either directly or by doing simple delta calculations when statfs is called (e.g. save the last time and last stats and do a simple rate calculation over the past minute or whatever). The MDS is already calling statfs in the background every 5s, so that is often enough to keep this updated.

            bzzz Alex Zhuravlev added a comment - edited

            Do I understand correctly that the OFD should track performance on its own? Something like a separate thread (or timer-driven callback) collecting stats from the OSD and maintaining a history of average/peak throughput and RPC rate?

            AFAIU, we don't track the average for the last few seconds, just the average since start or reset.

            People

              Assignee: wc-triage WC Triage
              Reporter: adilger Andreas Dilger