[LU-15368] Monitoring changelog_size is slowing down changelogs operations Created: 13/Dec/21  Updated: 16/Dec/21

Status: Open
Project: Lustre
Component/s: None
Affects Version/s: Lustre 2.12.7
Fix Version/s: None

Type: Bug Priority: Major
Reporter: Stephane Thiell Assignee: Peter Jones
Resolution: Unresolved Votes: 0
Labels: None
Environment:

CentOS 7.9


Severity: 3
Rank (Obsolete): 9223372036854775807

 Description   

We have noticed that monitoring changelog_size on the MDS can significantly slow down changelog operations and eventually even the MDS in general.

We have a script that monitors changelog_size every minute to display the size per MDT with Grafana (we also have a Grafana alert when a changelog_size grows too much). The script basically does the following:

[root@fir-md1-s3 ~]# lctl get_param mdd.*-MDT*.changelog_size
mdd.fir-MDT0002.changelog_size=305538455808

Recently, a user launched massive jobs writing many small files at 30K+ creates/s for hours, resulting in a backlog of changelogs on MDT0002: changelog_size increased dramatically and the monitoring script took longer and longer to read the value of changelog_size (more than 1 minute). This resulted in a slowdown of changelog commits as seen from Robinhood, and eventually a generally high load on the MDS and slow operations (up to 5s for a chownat).

[root@fir-md1-s3 ~]# uptime
 09:35:21 up 25 days, 23:30,  2 users,  load average: 101.05, 97.57, 100.55

I even noticed one watchdog backtrace at some point:

Dec 13 08:49:02 fir-md1-s3 kernel: Pid: 24707, comm: mdt00_062 3.10.0-1160.45.1.el7_lustre.pl1.x86_64 #1 SMP Wed Nov 10 23:41:33 PST 2021
Dec 13 08:49:02 fir-md1-s3 kernel: Call Trace:
Dec 13 08:49:02 fir-md1-s3 kernel:  [<ffffffffa7798387>] call_rwsem_down_write_failed+0x17/0x30
Dec 13 08:49:02 fir-md1-s3 kernel:  [<ffffffffc0ecf68f>] llog_cat_id2handle+0x7f/0x620 [obdclass]
Dec 13 08:49:02 fir-md1-s3 kernel:  [<ffffffffc0ed0778>] llog_cat_cancel_records+0x128/0x3d0 [obdclass]
Dec 13 08:49:02 fir-md1-s3 kernel:  [<ffffffffc1b01328>] llog_changelog_cancel_cb+0x1d8/0x5b0 [mdd]
Dec 13 08:49:02 fir-md1-s3 kernel:  [<ffffffffc0eca5af>] llog_process_thread+0x85f/0x1a70 [obdclass]
Dec 13 08:49:02 fir-md1-s3 kernel:  [<ffffffffc0ecb87c>] llog_process_or_fork+0xbc/0x450 [obdclass]
Dec 13 08:49:02 fir-md1-s3 kernel:  [<ffffffffc0ed0c59>] llog_cat_process_cb+0x239/0x250 [obdclass]
Dec 13 08:49:02 fir-md1-s3 kernel:  [<ffffffffc0eca5af>] llog_process_thread+0x85f/0x1a70 [obdclass]
Dec 13 08:49:02 fir-md1-s3 kernel:  [<ffffffffc0ecb87c>] llog_process_or_fork+0xbc/0x450 [obdclass]
Dec 13 08:49:02 fir-md1-s3 kernel:  [<ffffffffc0ecd5e1>] llog_cat_process_or_fork+0x1e1/0x360 [obdclass]
Dec 13 08:49:02 fir-md1-s3 kernel:  [<ffffffffc0ecd78e>] llog_cat_process+0x2e/0x30 [obdclass]
Dec 13 08:49:02 fir-md1-s3 kernel:  [<ffffffffc1affa34>] llog_changelog_cancel.isra.16+0x54/0x1c0 [mdd]
Dec 13 08:49:02 fir-md1-s3 kernel:  [<ffffffffc1b02110>] mdd_changelog_llog_cancel+0xd0/0x270 [mdd]
Dec 13 08:49:03 fir-md1-s3 kernel:  [<ffffffffc1b05f33>] mdd_changelog_clear+0x653/0x7d0 [mdd]
Dec 13 08:49:03 fir-md1-s3 kernel:  [<ffffffffc1b08153>] mdd_iocontrol+0x163/0x540 [mdd]
Dec 13 08:49:03 fir-md1-s3 kernel:  [<ffffffffc198684c>] mdt_iocontrol+0x5ec/0xb00 [mdt]
Dec 13 08:49:03 fir-md1-s3 kernel:  [<ffffffffc19871e4>] mdt_set_info+0x484/0x490 [mdt]
Dec 13 08:49:03 fir-md1-s3 kernel:  [<ffffffffc125b89a>] tgt_request_handle+0xada/0x1570 [ptlrpc]
Dec 13 08:49:03 fir-md1-s3 kernel:  [<ffffffffc120073b>] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc]
Dec 13 08:49:03 fir-md1-s3 kernel:  [<ffffffffc12040a4>] ptlrpc_main+0xb34/0x1470 [ptlrpc]
Dec 13 08:49:03 fir-md1-s3 kernel:  [<ffffffffa74c5e61>] kthread+0xd1/0xe0
Dec 13 08:49:03 fir-md1-s3 kernel:  [<ffffffffa7b95ddd>] ret_from_fork_nospec_begin+0x7/0x21
Dec 13 08:49:03 fir-md1-s3 kernel:  [<ffffffffffffffff>] 0xffffffffffffffff
Dec 13 08:49:03 fir-md1-s3 kernel: LustreError: dumping log to /tmp/lustre-log.1639414143.24707
Dec 13 08:49:11 fir-md1-s3 kernel: LNet: Service thread pid 24707 completed after 369.43s. This indicates the system was overloaded (too many service threads, or there were not enough hardware resources).

Just stopping the monitoring script (thus no longer reading changelog_size on the MDS) resolved the issue: changelog commit rates immediately went up and the server load dropped back to a reasonable level.

I wanted to raise this issue so it can be improved in the future. It's important for us that we can monitor the size of the changelogs stored on each MDT. It would be nice to have a more efficient way of doing so. Thanks!



 Comments   
Comment by Andreas Dilger [ 14/Dec/21 ]

Taking a quick look at the patch http://review.whamcloud.com/16416 "LU-7156 mdd: add changelog_size to procfs" that added the changelog_size parameter, it looks like this is reprocessing each logfile to see if it can be cleaned up. However, if there are a large number of log records, and the changelog_size file is accessed frequently, this extra scanning overhead may be substantial.

It would make sense to put some limit on the amount of re-scanning done (e.g. do it once when the parameter is first accessed after a remount, or at most once per hour, or similar), so that repeated access to this parameter does not hurt MDS performance. Without that cleanup processing, checking the sizes of the llog files themselves should be very fast (stat-ing a few thousand llog files should take milliseconds on a flash MDT, maybe 10s on an HDD MDT).
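
In rough kernel-C terms, the throttling could look something like the sketch below. This is only an illustration, not the actual mdd code: compute_changelog_size() is a hypothetical placeholder for the existing llog traversal, and the one-hour interval is just the example above.

/* Sketch: throttle the expensive llog traversal so frequent reads of
 * changelog_size reuse a cached value.  compute_changelog_size() is a
 * hypothetical stand-in for the existing scan. */
#define CL_SIZE_MAX_AGE		3600	/* recompute at most once per hour */

static DEFINE_MUTEX(cl_size_mutex);
static u64	cl_size_cached;
static time64_t	cl_size_stamp;		/* 0 means "never computed" */

static u64 changelog_size_cached_get(struct mdd_device *mdd)
{
	time64_t now = ktime_get_seconds();

	mutex_lock(&cl_size_mutex);
	if (cl_size_stamp == 0 || now - cl_size_stamp > CL_SIZE_MAX_AGE) {
		cl_size_cached = compute_changelog_size(mdd); /* hypothetical */
		cl_size_stamp = now;
	}
	mutex_unlock(&cl_size_mutex);

	return cl_size_cached;
}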

Comment by Li Xi [ 14/Dec/21 ]

do it once when the parameter is first accessed after a remount, or at most once per hour, or similar

So that means the changelog size that is read might be a stale value gathered an hour or so ago?

I think there might be two different cases when reading the changelog size. One is estimation, which does not need a precise number and which indeed fits well into this "async" mechanism.

The other is when a precise number is wanted, for example when we need to check whether the changelog is shrinking.

To support the second use case properly, I think it makes sense to add a write operation to mdd_changelog_size, i.e.

1) If reading from this proc entry, it will return the cached/async value of the changelog size.
2) If writing to this entry, it will rescan the changelog and refresh the value.

Comment by Andreas Dilger [ 15/Dec/21 ]

I'm not suggesting caching the Changelog size for such a long time, only avoiding the processing and attempted cleanup of the Changelog files on every access. Even with an out-of-control Changelog producer, there should be at most a few thousand llog files to stat, so this shouldn't take minutes to finish.

What might be useful is to prevent multiple readers of changelog_size from processing the catalogs/llogs at the same time (e.g. if polling happens every 60s, but it takes 90s to process all of the logs for some reason). There should be a high-level mutex taken once llog traversal starts, and any reader arriving afterward should either block on the mutex, and then return the same size calculated by the first thread, or potentially return the previous size that was computed.
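
A rough sketch of that serialization, again with a hypothetical compute_changelog_size() standing in for the llog traversal; this variant shows the "return the previous size" behaviour for a reader that arrives while a scan is already running:

/* Sketch: collapse concurrent changelog_size readers onto one scan.
 * A reader that cannot take the mutex returns the previous result
 * instead of starting a second traversal. */
static DEFINE_MUTEX(cl_scan_mutex);
static u64 cl_size_last;

static u64 changelog_size_read_once(struct mdd_device *mdd)
{
	u64 size;

	if (!mutex_trylock(&cl_scan_mutex))
		return cl_size_last;		/* a scan is already in progress */

	size = compute_changelog_size(mdd);	/* hypothetical, expensive */
	cl_size_last = size;
	mutex_unlock(&cl_scan_mutex);

	return size;
}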

A further possible optimization would be to incrementally update a cached size (after the first read from the file traverses all of the logs). This could be a percpu variable that is updated by the llog process for each record, or only incremented on a whole-file basis (e.g. add the size of whole llogs when they are finished, subtract the size of whole llogs when they are destroyed).
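
For the whole-file variant, the accounting could look something like the following sketch; where exactly the add/subtract hooks would be called in the llog code is left open, so this is only an illustration of the idea:

/* Sketch: incremental whole-file accounting, so reading changelog_size
 * becomes a constant-time lookup instead of a catalog walk. */
static atomic64_t cl_size_bytes = ATOMIC64_INIT(0);

/* hook: a plain changelog llog has been filled/closed (size now known) */
static void changelog_llog_account_add(u64 llog_bytes)
{
	atomic64_add(llog_bytes, &cl_size_bytes);
}

/* hook: a plain changelog llog was destroyed after its records were cleared */
static void changelog_llog_account_sub(u64 llog_bytes)
{
	atomic64_sub(llog_bytes, &cl_size_bytes);
}

/* the changelog_size read path then just returns the counter */
static u64 changelog_size_fast_read(void)
{
	return atomic64_read(&cl_size_bytes);
}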

Comment by Etienne Aujames [ 15/Dec/21 ]

Hello,

At the CEA, changelog_size is used to monitor how well we keep up with de-queuing changelogs with Robinhood (I don't know if this is the same for Stephane Thiell).
For this purpose the CEA also uses the difference between the current changelog id and the first changelog user's id (changelog_users) to monitor the delay between Robinhood and the MDT and to trigger an alert.

This is necessary for the following reasons:

  • Robinhood stops processing records: Robinhood bug or client lag
  • Robinhood stops processing records because of a Lustre bug: LU-14158, LU-15280
  • Unable to clear changelogs efficiently (too many changelogs): LU-14688, LU-14606
  • Jobs create too many changelogs and we can't keep up.

So we need to monitor changelog usage before reaching the maximum number of changelog_catalog indexes (64767).
Also, when the changelog "follow" option is not used with several changelog users, the larger the gap between cl_users, the more the changelog processing slows down. But we can't use the "follow" option because of LU-15280.

So for this purpose we don't really need the exact "changelog_size", but rather a kind of "changelog_usage".
This usage can easily be found by retrieving "llh_count" from the changelog_catalog header (percent_usage = 100 * llh_count / LLOG_HDR_BITMAP_SIZE(llh)). An upper approximation of changelog_size can then be computed as 11M * llh_count (in practice the plain llog size can be limited to 2.1M if MDT space is low).
For now I use debugfs to dump changelog_catalog and llog_reader to retrieve llh_count.
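
As a concrete illustration of that estimate (userspace arithmetic only: the 64767 catalog-index maximum is the value quoted above, the ~11M per-plain-llog bound is the approximation above, and the llh_count value is just an example):

/* Illustration of the "changelog_usage" estimate from llh_count. */
#include <stdio.h>

#define CATALOG_MAX_IDX		64767ULL		/* max changelog_catalog indexes */
#define PLAIN_LLOG_MAX_BYTES	(11ULL * 1024 * 1024)	/* ~11M upper bound per plain llog */

int main(void)
{
	unsigned long long llh_count = 26000;	/* example value read from the catalog header */

	printf("catalog usage: %llu%%\n", 100 * llh_count / CATALOG_MAX_IDX);
	printf("changelog_size upper bound: ~%llu bytes\n",
	       llh_count * PLAIN_LLOG_MAX_BYTES);
	return 0;
}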

What do you think about adding a "changelog_usage" procfs?

Comment by Etienne Aujames [ 15/Dec/21 ]

For the changelog_size performance issue, I think the patch https://review.whamcloud.com/#/c/43264/5/lustre/mdd/mdd_device.c ("LU-14606 llog: hide ENOENT for cancelling record") might help.
This patch replaces "llog_cat_cancel_records" with "RETURN(LLOG_DEL_RECORD);" in llog_changelog_cancel_cb.
The problem with llog_cat_cancel_records is that it calls llog_cat_id2handle for each plain llog record.
llog_cat_id2handle has to walk many cathandle->u.chd.chd_head entries (the llog_handle cache) while holding the llog_handle lock when there are a lot of changelog entries. This consumes a lot of CPU time and conflicts directly with llog_cat_size_cb.

Comment by Stephane Thiell [ 15/Dec/21 ]

Thanks for the feedback! We are indeed using changelog_size to monitor the de-queuing of changelogs per MDT with Robinhood (and also lauditd in some cases; we have multiple readers).

We use SSDs on the MDTs and they are not IO bound when this happens; in fact, sar shows a drop in IOPS when it occurs. It looks more like lock contention to me: at some point a single core is at 100%, so I believe Etienne's last comment is spot on. I will add the patch from LU-14606 to our 2.12.7 when I get a chance (thanks Etienne for backporting it to b2_12). Note that this probably won't be before early January 2022, as we're trying to minimize changes on our systems before the winter closure.

I'm not a big fan of a separate changelog_usage proc entry, which could add some confusion. The best option would be to make the current proc entry efficient enough, and from what I'm hearing in this ticket, that looks doable.

Comment by Andreas Dilger [ 15/Dec/21 ]

In theory it should also be possible to monitor changelog usage just from the difference between the current and the last used changelog ID? That should be a good proxy for the changelog size (about 160 bytes per record), and would not slow down because of scanning the logs to compute the size.
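
For example, a back-of-the-envelope version of that proxy (the ~160 bytes/record figure is the rough value above; the index values below are made up for illustration):

/* Illustration: approximate the changelog backlog from the gap between
 * the current changelog index and the oldest registered user's index. */
#include <stdio.h>

#define AVG_CHANGELOG_REC_BYTES	160ULL	/* rough per-record size */

static unsigned long long backlog_bytes(unsigned long long cur_idx,
					unsigned long long oldest_user_idx)
{
	return (cur_idx - oldest_user_idx) * AVG_CHANGELOG_REC_BYTES;
}

int main(void)
{
	/* example indexes, as would be read from mdd.*.changelog_users */
	printf("~%llu bytes behind\n",
	       backlog_bytes(1910000000ULL, 1900000000ULL));
	return 0;
}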
