Details
- Bug
- Resolution: Unresolved
- Major
- None
- Lustre 2.4.0
- 2347
Description
The ZFS transaction sync operation passively waits for a txg commit instead of actively requesting that a TXG commit start before waiting. This causes each synchronous operation to wait up to 5s for the normal txg flush.
Until we have implemented Lustre ZIL support (LU-4009), it makes sense for transaction handles that are marked as synchronous to start a TXG commit instead of passively waiting. This might impact aggregate performance, but DNE operations will be relatively rare and should not noticeably impact normal operations. That is especially true for flash-based ZFS devices, since the IOPS to update the überblock copies at the start of each block device do not need multiple mechanical full-platter seeks, so the cost of forcing a commit is relatively low.
As an optimization, there could be a "batch wait and reschedule" for TXG sync, as there is for jbd2 since commit v2.6.28-5737-ge07f7183a486 "jbd2: improve jbd2 fsync batching", so that multiple active threads can join the same TXG before it is closed for commit, without waiting unnecessarily between commits.
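The jbd2-style batching idea above can be sketched in a few lines. This is a hypothetical model, not ZFS or jbd2 code: the class name `TxgBatcher`, the 3:1 moving-average weighting, and the wait policy are all illustrative assumptions about how such a heuristic might look.

```python
class TxgBatcher:
    """Hypothetical sketch of jbd2-style fsync batching for TXG commits.

    The idea: track how long commits typically take, and when a lone thread
    requests a sync, wait roughly one average commit time so other threads
    can join the same TXG before it is closed for commit.
    """

    def __init__(self):
        self.avg_commit_ns = 0  # running estimate of commit duration

    def record_commit(self, duration_ns):
        # Exponential moving average weighting history 3:1 (an assumed
        # weighting, chosen only for illustration).
        if self.avg_commit_ns == 0:
            self.avg_commit_ns = duration_ns
        else:
            self.avg_commit_ns = (3 * self.avg_commit_ns + duration_ns) // 4

    def wait_before_commit_ns(self, concurrent_waiters):
        # If other threads are already queued, commit immediately so nobody
        # waits between commits; otherwise pause about one average commit
        # time to let more threads join this TXG.
        if concurrent_waiters > 1 or self.avg_commit_ns == 0:
            return 0
        return self.avg_commit_ns
```

The key property is the one the jbd2 commit describes: batching only delays a lone syncing thread, and never adds waiting when multiple threads are already queued.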
Attachments
Issue Links
- is related to LU-4009 Add ZIL support to osd-zfs (Open)
We do have a couple of options for forcing more frequent TXG syncs, which I agree probably makes sense for Lustre on flash pools. The easiest option that exists today is to reduce the zfs "zfs_txg_timeout" kmod option, which ensures a txg sync happens at least every N seconds (default 5). It would be easy to tweak that option to accept a time in milliseconds for experimentation. My major concern is that a full TXG sync is a pretty heavyweight operation, and we want to allow as much async batching as possible, even on flash. Forcing constant, frequent TXG syncs would also probably hurt resilver and scrub speeds, which run as part of the txg_sync process and expect typical multi-second TXG sync times. While I'm sure we could account for this, it seems like it could badly throw off the jbd2-style batching algorithm.
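For experimenting with the timeout, the `zfs_txg_timeout` module parameter can be changed at runtime by writing to its file under `/sys/module/zfs/parameters/` (this requires root on a real system). The helper names below are illustrative; the function takes the path as a parameter so it can be exercised against any writable file.

```python
from pathlib import Path

# Real location of the ZFS module parameter on Linux; overridable for testing.
ZFS_TXG_TIMEOUT = Path("/sys/module/zfs/parameters/zfs_txg_timeout")

def set_txg_timeout(seconds, param_path=ZFS_TXG_TIMEOUT):
    """Set the maximum interval between TXG syncs (requires root)."""
    param_path.write_text(f"{seconds}\n")

def get_txg_timeout(param_path=ZFS_TXG_TIMEOUT):
    """Read back the current txg sync interval in seconds."""
    return int(param_path.read_text().strip())
```

The same effect can be had at module load time with `options zfs zfs_txg_timeout=N` in modprobe configuration; as the comment notes, values below one second would need the option to accept milliseconds.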
Another option might be for Lustre to call the ZFS txg_kick() function when it needs a TXG sync to happen immediately. You could call this for the relevant DNE operations, and even do your own intelligent batching at the Lustre layer. We don't currently export this symbol, but we could.
You can always dump the /proc/spl/kstat/zfs/&lt;pool&gt;/txgs proc file to get an idea of how often TXG syncs are happening and how long they are taking.
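A small parser can turn that kstat dump into per-TXG sync times. This is an illustrative sketch: the column layout of the txgs kstat varies between ZFS versions, so the code locates the header line by its `stime` column name rather than assuming fixed positions, and the sample layout in the test is assumed, not authoritative.

```python
def parse_txg_sync_times(txgs_text):
    """Extract per-TXG sync times (the 'stime' column, in nanoseconds)
    from the text of /proc/spl/kstat/zfs/<pool>/txgs.

    The first line of a kstat file is a numeric kstat header; the column
    names follow on a later line, so we search for the 'stime' column
    instead of hard-coding its position.
    """
    header = None
    sync_times = []
    for line in txgs_text.strip().splitlines():
        fields = line.split()
        if header is None:
            if "stime" in fields:          # the column-name line
                header = fields
                stime_idx = fields.index("stime")
            continue
        if len(fields) == len(header):     # skip malformed/partial rows
            sync_times.append(int(fields[stime_idx]))
    return sync_times
```

Averaging the returned values over a run gives a quick picture of whether forced syncs are keeping sync times short or piling up heavyweight commits.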