Details

    • Type: Improvement
    • Resolution: Fixed
    • Priority: Major
    • Fix Version/s: Lustre 2.12.0

    Description

      One thing that Lustre has been missing for a long time is I/O profiling. Lustre does support I/O profiling per process and per client, but it doesn't support profiling I/O per job, which is the common use case in practice.

      It would be desirable to add per-JobID I/O profiling to the client, llite in particular, and it would also be useful to provide tools that accumulate those stats from multiple clients and plot them accordingly.
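
      As an illustration only (not part of the original request), the per-job statistics that already exist on the servers could be the starting point for such tooling: a collector would poll these files on every OSS/MDS and aggregate by jobid. The scheduler variable below is just an example:

          # tag client RPCs with the job scheduler's job ID
          lctl set_param jobid_var=SLURM_JOB_ID

          # sample the accumulated per-job statistics on the servers
          lctl get_param obdfilter.*.job_stats
          lctl get_param mdt.*.job_stats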

          Activity

            [LU-10698] Specify complex JobIDs for Lustre

            Oleg Drokin (oleg.drokin@intel.com) merged in patch https://review.whamcloud.com/31691/
            Subject: LU-10698 obdclass: allow specifying complex jobids
            Project: fs/lustre-release
            Branch: master
            Current Patch Set:
            Commit: 6488c0ec57de2d188bd15e502917b762e3a9dd1d

            gerrit Gerrit Updater added a comment

            Andreas Dilger (andreas.dilger@intel.com) uploaded a new patch: https://review.whamcloud.com/31691
            Subject: LU-10698 obdclass: allow specifying complex jobids
            Project: fs/lustre-release
            Branch: master
            Current Patch Set: 1
            Commit: 5a7b3fd923cec7d4acc199b9f205b3ea8483c495

            gerrit Gerrit Updater added a comment

            Jinshan, all of what you propose can be done in userspace. You can translate all procname.uid-formatted JobIDs to "unknown", or you can leave them out of the database you use for mining. What you can't do is take "unknown" stats from Lustre and translate them back into "rsync.12345" on 6 different nodes.

            My understanding from what I've seen of the management side of our Lustre products is that they accumulate each job, score it in a number of ways, and keep it in a database for deeper investigation. I'm not sure what the limits are concerning what is kept in the DB, for how long, and at what timescales.

            I do know that this is an area of active development, as the performance penalties incurred by JobID are not as harsh as they used to be due to the cache. So we've moved from a case where JobID is off by default to one where it can be on by default.

            bevans Ben Evans (Inactive) added a comment

            Hi Evans,

            We don't just want to discard procname.uid records; we also need to accumulate them, because we want to know how much I/O comes from anonymous jobs so that we can decide whether to start an investigation.

            Can you please summarize how you and your customers use the job_stats information? It sounds like that data will be kept in memory and never collected?

            Jinshan Jinshan Xiong added a comment

            Jinshan, if this is something you care about in a database, simply pre-process it on insertion to ignore procname.uid style entries.

            If, on the other hand, you want this information, it can't be generated from "unknown".

            bevans Ben Evans (Inactive) added a comment

            ... so it would be useful to have a way to debug that.

            True; in that case the admin should clear job_stats, set jobid_var to procname_uid, and then monitor the output of job_stats in real time to figure out who the 'bad guy' is.
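
            As a sketch of that workflow (the wildcarded parameter paths are illustrative; adjust them for the OST/MDT targets of interest):

                # reset the accumulated per-job statistics on the servers
                lctl set_param obdfilter.*.job_stats=clear mdt.*.job_stats=clear

                # fall back to process-name/UID tagging on the clients
                lctl set_param jobid_var=procname_uid

                # watch which process/user is generating the load
                watch -n 10 "lctl get_param obdfilter.*.job_stats"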

            It boils down to whether job_stats is mainly for monitoring or for auditing. Cray's customer would like to use it for monitoring, but I think we should use it for auditing. Obviously we don't work for the same customer, lol.

            Jinshan Jinshan Xiong added a comment

            Ben, at the same time, the proposed "cluster ID" functionality could be implemented in a similar manner rather than adding a special-case handler for the cluster. Something like jobid_name="clustername.%j" since the cluster name will be constant for the lifetime of the node and can just be set as a static string from the POV of the kernel.
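
            For example (assuming the proposed escape handling, where %j would expand to the value of the scheduler variable named by jobid_var; the cluster name and variable below are illustrative):

                lctl set_param jobid_var=SLURM_JOB_ID
                lctl set_param jobid_name='cluster52.%j'
                # an RPC from Slurm job 1234 would then be tagged "cluster52.1234"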

            I don't think the implementation would be too complex: basically a scan for '%' in the string, then a switch statement that replaces each escape with the known value (length-limited to the output buffer).

            Jinshan, as for dumping all unknown RPCs into a single bucket, that is OK if they don't take up much of the resources, but, as you write, more work is needed if they do take up a lot of the resources, so it would be useful to have a way to debug that. You're replacing the case that works well for Cray but not well for you with one that works for you but not for Cray (and IMHO will work badly for you as soon as you want to debug what is causing a lot of "unknown" traffic). I think we can have a solution that works for both of you and doesn't add too much complexity.

            adilger Andreas Dilger added a comment

            Hi Andreas,

            It seems like a good suggestion, but it probably won't be accomplished in a short time.

            One thing I want to clarify is that providing too much information is not necessarily good. For example, collecting and saving the status of every disk drive in a cluster is not useful at all, because the useful information is completely drowned out. We should only care about the situations where some components are not working properly, like some OSTs being in degraded mode, and it should be a separate procedure to figure out which drives are not working.

            So in this case, if some workloads are running without a proper jobid setting, I tend to think it's not good practice to fall back to 'procname.uid', because:
            1. it may be difficult to extract useful information from a very long list of stats;
            2. if the anonymous workload can be accumulated into a single entry, it's easier to know how much resource it has consumed. If that is small, it will probably simply be ignored; otherwise a separate procedure will be performed to figure out which anonymous jobs consumed that much resource.

            I hope this makes some sense.

            Jinshan Jinshan Xiong added a comment
            bevans Ben Evans (Inactive) added a comment - edited

            I'm not a big fan of reinventing printf just for jobids. We had a similar proposal within Cray for LU-9789, and it never got implemented because of the complexity and because a simple "good enough" solution would work for pretty much everyone.


            I can understand that if hundreds of nodes are generating unlabelled RPCs, then using procname_uid could result in a lot of "rsync.1234", "rsync.2345", "ls.5678", "cp.9876", etc. entries if there are many active users, but otherwise this still provides useful information about which commands are generating a lot of I/O traffic. The reason "procname.uid" was chosen as the fallback when JOBENV can't be found is that the same user running on different nodes without an actual JobID will still generate the same jobid string, unlike embedding the PID or another unique identifier (which would be useless after the process exits anyway).

            One option would be to allow userspace to specify a fallback jobid if obd_jobid_var is not found. This could be a more expressive syntax for the primary/fallback than just "disabled", "procname_uid", and "nodelocal" that can be specified today. For example, interpreting "%proc.%uid" as "process name" '.' "user id", but allowing just "%proc", just "%uid", and also maybe "%gid", "%nid", "%pid", and other fields as desired (filtering out any unknown '%' and other escape characters). This could instead use a subset of the escapes from core dump filenames in format_corename(), to minimize the effort for sysadmins (e.g. %e=executable, %p=PID (and friends?), %u=UID, %g=GID, %h=hostname, %n=NID). It isn't clear to me yet whether PID is useful for JobID, but it isn't hard to implement and maybe there is a case for it.

            Unknown strings would just be copied literally, so you could set:

                lctl set_param jobid_var=PBS_JOBID
                lctl set_param jobid_name='%e.%u:%g_%n'
            

            or to get Jinshan's desired behaviour just set:

                lctl set_param jobid_name='unknown'
            

            This implies that if "JOBENV" is not found then "jobid_name" would be used as a fallback (which doesn't happen today), and would be interpreted as needed.

            Using "jobid_var=nodelocal" would keep "jobid_name" as a literal string as it is today, while allowing the kernel to generate useful jobids directly, similar to core dump filenames. My preference would be to keep "jobid_name=%e.%u" as the default if jobstats is enabled, since this is what we currently have, and is at least providing some reasonable information to users that didn't set anything in advance.

            adilger Andreas Dilger added a comment

            People

              Assignee: Jinshan Xiong
              Reporter: Jinshan Xiong
              Votes: 0
              Watchers: 10
