Details

    • Type: Improvement
    • Resolution: Fixed
    • Priority: Major
    • Fix Version: Lustre 2.12.0

    Description

      One thing that Lustre has been missing for a long time is per-job I/O profiling. Lustre does support I/O profiling per process and per client, but it doesn't support profiling I/O per job, which is the most common case in practice.

      It would be desirable to add I/O profiling by JobID to the client (llite in particular), and to provide tools that accumulate those stats from multiple clients and plot them accordingly.

          Activity

            [LU-10698] Specify complex JobIDs for Lustre

            Ben, at the same time, the proposed "cluster ID" functionality could be implemented in a similar manner rather than adding a special-case handler for the cluster. Something like jobid_name="clustername.%j" since the cluster name will be constant for the lifetime of the node and can just be set as a static string from the POV of the kernel.

            I don't think the implementation would be too complex: basically a scan for '%' in the string, then a switch statement that replaces each escape with a known value (length-limited to the output buffer).
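
            For illustration, here is a minimal userspace sketch of that scan-and-replace idea, assuming a hypothetical jobid_expand() helper and a %j escape for the job value (neither is existing Lustre code):

                #include <stdio.h>

                /* Hypothetical sketch: expand a template such as "clustername.%j"
                 * into an output buffer, truncating at the buffer size.  'jobval'
                 * stands in for the value found via the jobid_var environment
                 * variable. */
                static void jobid_expand(const char *tmpl, const char *jobval,
                                         char *out, size_t outlen)
                {
                        size_t used = 0;

                        while (*tmpl != '\0' && used < outlen - 1) {
                                if (*tmpl != '%') {
                                        out[used++] = *tmpl++;
                                        continue;
                                }
                                tmpl++;                         /* skip '%' */
                                if (*tmpl == '\0')
                                        break;                  /* trailing '%': stop */
                                switch (*tmpl++) {
                                case 'j':                       /* job value from jobid_var */
                                        used += snprintf(out + used, outlen - used,
                                                         "%s", jobval);
                                        break;
                                case '%':                       /* literal percent sign */
                                        out[used++] = '%';
                                        break;
                                default:                        /* unknown escape: drop it */
                                        break;
                                }
                                if (used > outlen - 1)          /* clamp after truncation */
                                        used = outlen - 1;
                        }
                        out[used] = '\0';
                }

                int main(void)
                {
                        char buf[64];

                        jobid_expand("clustername.%j", "12345", buf, sizeof(buf));
                        printf("%s\n", buf);                    /* "clustername.12345" */
                        return 0;
                }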

            Jinshan, as for dumping all unknown RPCs into a single bucket, that is OK if they don't take up much of the resources, but as you write, more work is needed if they do take up a lot of the resources, so it would be useful to have a way to debug that. You're replacing a case that works well for Cray but not well for you with one that works for you but not for Cray (and IMHO will work badly for you as soon as you want to debug what is causing a lot of "unknown" traffic). I think we can have a solution that works for both of you without adding too much complexity.

            adilger Andreas Dilger added a comment

            Hi Andreas,

            It seems like a good suggestion, but it probably won't be accomplished in a short time.

            One thing I want to clarify is that providing too much information is not necessarily good. For example, collecting and saving the status of all disk drives in a cluster is not useful at all, because the useful information is completely flooded. We should only care about situations where some components are not working properly, such as some OSTs being in degraded mode, and it should be a separate procedure to figure out which drives are not working.

            So in this case, if some workloads are running without a proper jobid setting, I tend to think it's not good practice to fall back to 'procname.uid' because:
            1. it may be difficult to extract useful information from a very long list of stats;
            2. if anonymous workloads can be accumulated into a single entry, it's easier to know how much resource they have consumed. If it's little, it can probably just be ignored; otherwise a separate procedure can be performed to figure out which anonymous jobs consumed that much resource.

            I hope this will make some sense.

            Jinshan Jinshan Xiong added a comment
            bevans Ben Evans (Inactive) added a comment - edited

            I'm not a big fan of reinventing printf just for jobids. We had a similar proposal within Cray for LU-9789, and it never got implemented due to complexity and because a simple "good enough" solution would work for pretty much everyone.

            I can understand that if hundreds of nodes are generating unlabelled RPCs, then using procname_uid could result in a lot of "rsync.1234", "rsync.2345", "ls.5678", "cp.9876", etc. kinds of results if there are many active users, but otherwise this still provides useful information about which commands are generating a lot of IO traffic. The reason "procname_uid" was chosen as the fallback when JOBENV can't be found is that the same user running on different nodes without an actual JobID is likely to still generate the same jobid string, unlike embedding the PID or another unique identifier (which would be useless after the process exits anyway).

            One option would be to allow userspace to specify a fallback jobid if obd_jobid_var is not found. This could be a more expressive syntax for the primary/fallback than just the "disabled", "procname_uid", and "nodelocal" values that can be specified today. For example, interpreting "%proc.%uid" as "process name" '.' "user id", but also allowing just "%proc" or just "%uid", and maybe "%gid", "%nid", "%pid", and other fields as desired (filtering out any unknown '%' and other escape characters). This could instead use a subset of escapes from core filenames in format_corename(), to minimize the effort for sysadmins (e.g. %e=executable, %p=PID (and friends?), %u=UID, %g=GID, %h=hostname, %n=NID). It isn't clear to me yet whether PID is useful for a JobID, but it isn't hard to implement and maybe there is a case for it.

            Unknown strings would just be copied literally, so you could set:

                lctl set_param jobid_var=PBS_JOBID
                lctl set_param jobid_name='%e.%u:%g_%n'
            

            or to get Jinshan's desired behaviour just set:

                lctl set_param jobid_name='unknown'
            

            This implies that if "JOBENV" is not found then "jobid_name" would be used as a fallback (which doesn't happen today), and would be interpreted as needed.

            Using "jobid_var=nodelocal" would keep "jobid_name" as a literal string as it is today, while allowing the kernel to generate useful jobids directly, similar to core dump filenames. My preference would be to keep "jobid_name=%e.%u" as the default if jobstats is enabled, since this is what we currently have, and is at least providing some reasonable information to users that didn't set anything in advance.
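
            To make the proposed fallback concrete, here is a rough userspace sketch under those assumptions; the build_jobid() and expand_escape() helpers are invented for the example and only handle a subset of the escapes mentioned above:

                #include <stdio.h>
                #include <stdlib.h>
                #include <unistd.h>

                /* Hypothetical sketch of the proposed behaviour: use the variable
                 * named by jobid_var if it is set in the environment, otherwise
                 * expand jobid_name as a template with a format_corename()-like
                 * escape subset (%e=executable, %u=UID, %g=GID, %h=hostname). */
                static void expand_escape(char esc, const char *exe,
                                          char *out, size_t len)
                {
                        char host[64] = "";

                        switch (esc) {
                        case 'e':
                                snprintf(out, len, "%s", exe);
                                break;
                        case 'u':
                                snprintf(out, len, "%u", (unsigned int)getuid());
                                break;
                        case 'g':
                                snprintf(out, len, "%u", (unsigned int)getgid());
                                break;
                        case 'h':
                                gethostname(host, sizeof(host) - 1);
                                snprintf(out, len, "%s", host);
                                break;
                        default:                        /* unknown escape: drop it */
                                out[0] = '\0';
                                break;
                        }
                }

                static void build_jobid(const char *jobid_var, const char *jobid_name,
                                        const char *exe, char *out, size_t outlen)
                {
                        const char *env = getenv(jobid_var);
                        size_t used = 0;

                        if (env != NULL) {              /* primary source: scheduler env */
                                snprintf(out, outlen, "%s", env);
                                return;
                        }
                        /* fallback: interpret jobid_name as a template */
                        while (*jobid_name != '\0' && used < outlen - 1) {
                                if (*jobid_name == '%' && jobid_name[1] != '\0') {
                                        char piece[64];

                                        expand_escape(jobid_name[1], exe,
                                                      piece, sizeof(piece));
                                        used += snprintf(out + used, outlen - used,
                                                         "%s", piece);
                                        jobid_name += 2;
                                } else {
                                        out[used++] = *jobid_name++;
                                }
                                if (used > outlen - 1)  /* clamp after truncation */
                                        used = outlen - 1;
                        }
                        out[used] = '\0';
                }

                int main(void)
                {
                        char jobid[64];

                        /* with no PBS_JOBID set, "%e.%u" yields e.g. "rsync.1234" */
                        build_jobid("PBS_JOBID", "%e.%u", "rsync", jobid, sizeof(jobid));
                        printf("%s\n", jobid);
                        return 0;
                }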

            adilger Andreas Dilger added a comment

            Jinshan Xiong (jinshan.xiong@gmail.com) uploaded a new patch: https://review.whamcloud.com/31500
            Subject: LU-10698 obdclass: cleanup jobid implementation
            Project: fs/lustre-release
            Branch: master
            Current Patch Set: 1
            Commit: 3ef0ceb1c27fb02ec8c93b53333365d3fb36cd27

            gerrit Gerrit Updater added a comment

            I know Darshan is able to profile I/O by intercepting glibc calls, but it would still be better if Lustre could support this natively.

            Jinshan Jinshan Xiong added a comment

            People

              Assignee: Jinshan Xiong
              Reporter: Jinshan Xiong
              Votes: 0
              Watchers: 10
