[LU-10698] Specify complex JobIDs for Lustre Created: 22/Feb/18 Updated: 07/Feb/24 Resolved: 31/Aug/18 |
|
| Status: | Resolved |
| Project: | Lustre |
| Component/s: | None |
| Affects Version/s: | None |
| Fix Version/s: | Lustre 2.12.0 |
| Type: | Improvement | Priority: | Major |
| Reporter: | Jinshan Xiong | Assignee: | Jinshan Xiong |
| Resolution: | Fixed | Votes: | 0 |
| Labels: | None | ||
| Issue Links: |
|
| Description |
|
One thing that Lustre has been missing for a long time is per-job I/O profiling. Lustre supports I/O profiling per process and per client, but not per job, which is the common case in practice. It would be desirable to add I/O profiling by JobID to the client (llite in particular), and also to provide tools that accumulate those stats from multiple clients and plot them. |
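As a sketch of the "accumulate stats from multiple clients" tooling the description asks for, the snippet below parses a job_stats dump and sums per-job write bytes across several dumps. The YAML-ish sample text and field names are assumptions modeled on typical job_stats output, not a guaranteed format:

```python
# Hypothetical aggregator for Lustre job_stats dumps collected from
# multiple clients/servers. The SAMPLE text and its field layout are
# assumptions for illustration only.
import re
from collections import defaultdict

SAMPLE = """
job_stats:
- job_id:          rsync.1234
  write_bytes:     { samples: 10, unit: bytes, min: 4096, max: 1048576, sum: 10485760 }
- job_id:          cp.9876
  write_bytes:     { samples: 2, unit: bytes, min: 4096, max: 8192, sum: 12288 }
"""

def parse_job_stats(text):
    """Yield (job_id, bytes_written) pairs from one job_stats dump."""
    job = None
    for line in text.splitlines():
        m = re.match(r"-\s*job_id:\s*(\S+)", line.strip())
        if m:
            job = m.group(1)
            continue
        m = re.search(r"write_bytes:.*sum:\s*(\d+)", line)
        if m and job:
            yield job, int(m.group(1))

def aggregate(dumps):
    """Sum write bytes per job across dumps from multiple clients."""
    totals = defaultdict(int)
    for dump in dumps:
        for job, nbytes in parse_job_stats(dump):
            totals[job] += nbytes
    return dict(totals)

print(aggregate([SAMPLE, SAMPLE]))  # each job's sum, doubled
```

From here, plotting per-job totals over time is a straightforward extension (e.g. feeding `aggregate()` results into any plotting library).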
| Comments |
| Comment by Jinshan Xiong [ 22/Feb/18 ] |
|
I know Darshan can profile I/O by intercepting glibc calls, but it would still be better if Lustre supported this natively. |
| Comment by Gerrit Updater [ 02/Mar/18 ] |
|
Jinshan Xiong (jinshan.xiong@gmail.com) uploaded a new patch: https://review.whamcloud.com/31500 |
| Comment by Andreas Dilger [ 06/Mar/18 ] |
|
I can understand that if hundreds of nodes are generating unlabelled RPCs, then using procname_uid could produce a lot of "rsync.1234", "rsync.2345", "ls.5678", "cp.9876", etc. results when there are many active users, but otherwise this still provides useful information about which commands are generating a lot of I/O traffic. The reason "procname.uid" was chosen as the fallback when JOBENV can't be found is that the same user running on different nodes without an actual JobID will still generate the same jobid string, unlike embedding a PID or another unique identifier (which would be useless after the process exits anyway).

One option would be to allow userspace to specify a fallback jobid when obd_jobid_var is not found. This could be a more expressive syntax for the primary/fallback than just the "disabled", "procname_uid", and "nodelocal" values that can be specified today: for example, interpreting "%proc.%uid" as "process name" '.' "user id", while also allowing just "%proc", just "%uid", and perhaps "%gid", "%nid", "%pid", and other fields as desired (filtering out any unknown '%' and other escape characters). This could instead use a subset of the escapes used for core filenames in format_corename(), to minimize the effort for sysadmins (e.g. %e=executable, %p=PID (and friends?), %u=UID, %g=GID, %h=hostname, %n=NID). It isn't clear to me yet whether PID is useful for a JobID, but it isn't hard to implement and maybe there is a case for it. Unknown strings would just be copied literally, so you could set:

lctl set_param jobid_var=PBS_JOBID
lctl set_param jobid_name='%e.%u:%g_%n'

or, to get Jinshan's desired behaviour, just set:

lctl set_param jobid_name='unknown'

This implies that if "JOBENV" is not found then "jobid_name" would be used as a fallback (which doesn't happen today) and would be interpreted as needed. Using "jobid_var=nodelocal" would keep "jobid_name" as a literal string, as it is today, while allowing the kernel to generate useful jobids directly, similar to core dump filenames. My preference would be to keep "jobid_name=%e.%u" as the default if jobstats is enabled, since this is what we currently have, and it at least provides some reasonable information for users who didn't set anything in advance. |
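The proposed escape expansion can be illustrated with a small userspace sketch. The escape set and the context fields below are assumptions modeled on the proposal (and on format_corename()), not Lustre's actual implementation:

```python
# Illustrative sketch of "%"-escape expansion for jobid_name.
# The ESCAPES table and ctx fields are hypothetical, for demonstration.

ESCAPES = {
    "e": lambda ctx: ctx["comm"],      # executable (process) name
    "u": lambda ctx: str(ctx["uid"]),  # user id
    "g": lambda ctx: str(ctx["gid"]),  # group id
    "h": lambda ctx: ctx["hostname"],  # hostname
    "j": lambda ctx: ctx["jobid"],     # scheduler JobID, if any
}

def expand_jobid_name(fmt, ctx):
    """Expand known %-escapes; unknown escapes and literals pass through."""
    out, i = [], 0
    while i < len(fmt):
        if fmt[i] == "%" and i + 1 < len(fmt):
            fn = ESCAPES.get(fmt[i + 1])
            if fn:
                out.append(fn(ctx))
                i += 2
                continue
        out.append(fmt[i])
        i += 1
    return "".join(out)

ctx = {"comm": "rsync", "uid": 1234, "gid": 100,
       "hostname": "c0-42", "jobid": "12345"}
print(expand_jobid_name("%e.%u:%g_%h", ctx))  # rsync.1234:100_c0-42
```

A kernel-side version would be the same single pass over the format string, length-limited to the output buffer.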
| Comment by Ben Evans (Inactive) [ 06/Mar/18 ] |
|
I'm not a big fan of reinventing printf for just jobids. We had a similar proposal within Cray for |
| Comment by Jinshan Xiong [ 07/Mar/18 ] |
|
Hi Andreas, this seems like a good suggestion, but it probably won't be accomplished in a short time. One thing I want to clarify is that providing too much information is not necessarily good. For example, collecting and saving the status of all disk drives in a cluster is not useful at all, because the genuinely useful information gets completely flooded. We should only care about situations where some components are not working properly, such as some OSTs running in degraded mode, and figuring out which drives are at fault should be a separate procedure. So in this case, if some workloads are running without a proper jobid setting, I tend to think it's not good practice to fall back to 'procname.uid'. I hope this makes some sense. |
| Comment by Andreas Dilger [ 07/Mar/18 ] |
|
Ben, at the same time, the proposed "cluster ID" functionality could be implemented in a similar manner, rather than adding a special-case handler for the cluster: something like jobid_name="clustername.%j", since the cluster name will be constant for the lifetime of the node and can just be set as a static string from the kernel's point of view. I don't think the implementation would be too complex: basically a scan for '%' in the string, then a switch statement that replaces each escape with a known value (length-limited to the output buffer).

Jinshan, as for dumping all unknown RPCs into a single bucket, that is OK if they don't take up much of the resources, but as you write, more work is needed if they do take up a lot, so it would be useful to have a way to debug that. You're replacing the case that works well for Cray but not for you with one that works for you but not for Cray (and IMHO it will work badly for you as soon as you want to debug what is causing a lot of "unknown" traffic). I think we can have a solution that works for both of you without adding too much complexity. |
| Comment by Jinshan Xiong [ 07/Mar/18 ] |
True; in that case the admin should clear job_stats, set jobid_var to procname_uid, and then monitor the job_stats output in real time to figure out who the 'bad' guy is. It boils down to whether job_stats is mainly for monitoring or for auditing. Cray's customers would like to use it for monitoring, but I think we should use it for auditing. Obviously we don't work for the same customer lol. |
| Comment by Ben Evans (Inactive) [ 07/Mar/18 ] |
|
Jinshan, if this is something you care about in a database, simply pre-process it on insertion to ignore procname.uid-style entries. If, on the other hand, you want this information, it can't be recovered from "unknown". |
| Comment by Jinshan Xiong [ 07/Mar/18 ] |
|
Hi Evans, it's not just a matter of discarding procname.uid records; we also need to accumulate them, because we want to know how much I/O comes from anonymous jobs so we can decide whether to start an investigation. Can you please summarize how you and your customers use the job_stats information? It sounds like that data is kept in memory and never collected? |
| Comment by Ben Evans (Inactive) [ 07/Mar/18 ] |
|
Jinshan, all of what you propose can be done in userspace. You can translate all procname.uid-formatted JobIDs to "unknown", and you can leave them out of the database you use for mining. What you can't do is take "unknown" stats from Lustre and translate them back into "rsync.12345" on 6 different nodes. My understanding from the management side of our Lustre products is that they accumulate each job, score it in a number of ways, and keep it in a database for deeper investigation. I'm not sure what the limits are on what is kept in the DB, for how long, and at what timescales. I do know that this is an area of active development; the performance penalties incurred by JobID are not as harsh as they used to be thanks to the cache, so we've moved from JobID being off by default to it being able to be on by default. |
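The userspace pre-processing described above can be sketched briefly: collapse procname.uid-style JobIDs into a single "unknown" bucket on insertion while still summing their I/O, which serves both the auditing view (how much traffic is anonymous) and the database-hygiene view. The record shape and the procname.uid pattern are assumptions for illustration:

```python
# Hypothetical pre-processing step before inserting job_stats records
# into a database: fold procname.uid-style JobIDs into "unknown".
import re

# A JobID like "cp.9876" looks like the procname_uid fallback; real
# scheduler JobIDs (e.g. numeric PBS/SLURM ids) don't match this pattern.
PROCNAME_UID = re.compile(r"^[\w-]+\.\d+$")

def preprocess(records):
    """Fold anonymous records into 'unknown'; return per-job byte totals."""
    totals = {}
    for job_id, nbytes in records:
        key = "unknown" if PROCNAME_UID.match(job_id) else job_id
        totals[key] = totals.get(key, 0) + nbytes
    return totals

records = [("rsync.1234", 4096), ("cp.9876", 8192), ("12345", 65536)]
print(preprocess(records))  # {'unknown': 12288, '12345': 65536}
```

As Ben notes, this direction is lossy: once records are folded into "unknown", the original per-process identities cannot be recovered.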
| Comment by Gerrit Updater [ 20/Mar/18 ] |
|
Andreas Dilger (andreas.dilger@intel.com) uploaded a new patch: https://review.whamcloud.com/31691 |
| Comment by Gerrit Updater [ 09/Apr/18 ] |
|
Oleg Drokin (oleg.drokin@intel.com) merged in patch https://review.whamcloud.com/31691/ |