Doug asked me to put more details here. Would it make sense to have a picture?
First of all, how the current commit mechanism works and how it is used by Lustre. Although we say "start
and stop a transaction", our Lustre transactions actually join an existing DMU transaction, and that one is
committed as a whole or discarded. Also, only the final state of the DMU transaction is subject to commit,
not some intermediate state. Lustre heavily depends on these semantics to improve internal concurrency.
Let's consider a very simple use case - object precreation. Lustre maintains the last assigned ID in a
single slot. It doesn't matter when a transaction that updated the slot stops - only the final state of the slot
will be committed. If we followed the "normal" rules (as ZPL does to support ZIL), Lustre would have
to lock the slot, start the transaction, update the slot, close the transaction and release the slot. Such
a stream of transactions is linear by definition and can be put into ZIL for subsequent replay - the transaction
stop gives us the actual order the slot was updated in. That also means zero concurrency, and thus
poor performance for file creation. To improve concurrency and performance Lustre does the reverse:
start the transaction, lock the slot, update the slot, release the slot, stop the transaction. This means, though,
that the stop doesn't give us any information on the ordering - the order transactions get into ZIL can mismatch
the order the slot was updated in.
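A minimal sketch of the two orderings, just to illustrate the point - tx_start()/tx_stop() are hypothetical stand-ins for the DMU calls, not real names:

#include <pthread.h>
#include <stdint.h>

static pthread_mutex_t slot_lock = PTHREAD_MUTEX_INITIALIZER;
static uint64_t last_id;                /* the single "last assigned ID" slot */

static void tx_start(void) { /* join the currently open DMU transaction */ }
static void tx_stop(void)  { /* declare our part of the transaction done */ }

/* ZPL-style: the lock covers the whole transaction, so the order of
 * tx_stop() calls matches the order of slot updates - ZIL can replay it,
 * but everything is fully serialized. */
uint64_t precreate_serialized(void)
{
        uint64_t id;

        pthread_mutex_lock(&slot_lock);
        tx_start();
        id = ++last_id;
        tx_stop();
        pthread_mutex_unlock(&slot_lock);
        return id;
}

/* Lustre-style: the lock is held only around the slot update, so many
 * transactions run concurrently, but tx_stop() order no longer tells us
 * the order the slot was updated in. */
uint64_t precreate_concurrent(void)
{
        uint64_t id;

        tx_start();
        pthread_mutex_lock(&slot_lock);
        id = ++last_id;
        pthread_mutex_unlock(&slot_lock);
        tx_stop();
        return id;
}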
This is a problem partly because at the OSD level we see absolute values, not logical operations: we see new
object IDs, or we see a new bitmap (in the case of llogs), etc. So what would happen if we started to store operations
instead of values? Say, for object precreation again, we'd introduce an increment operation. Sometimes
we need to reset that value (when we start a new sequence). And even worse - the whole point of increment
is to not store absolute values, yet we need the absolute values because they have already been returned to the client
and used in LOVEA, etc. And this is the case with very simple logic - just a single value; we haven't even gotten to llog yet.
We'd also need a brand new mechanism to pass these special operations down through the stack, etc.
Hence I tend to think this is way too complicated to even think through all the details.
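Just to make the "operations vs. values" contrast concrete, here is a rough sketch of the two record shapes this would require - none of these structures exist today, the names are made up:

#include <stdint.h>

/* what osd-zfs sees today: the absolute final value of the slot */
struct rec_value {
        uint64_t objid;                 /* new last-assigned ID */
};

/* what an operation-based log would need instead */
enum rec_op_type {
        REC_OP_INCREMENT,               /* bump the last ID by some count */
        REC_OP_RESET,                   /* a new sequence starts here */
};

struct rec_op {
        enum rec_op_type op;
        uint64_t arg;                   /* count for INCREMENT, base for RESET */
        /*
         * Problem: the absolute ID has already been returned to the client
         * and stored in LOVEA, so replaying increments must still reproduce
         * exactly the same absolute values - and this is the simplest case,
         * a single counter; llog bitmaps would be much worse.
         */
};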
If the problem is only with the ordering, then why don't we solve that problem? If we know the order in which specific
updates were made to an object, then we can replay the updates in that order again. But what if this order doesn't
match the transactions the updates were made in? The transactions are needed to keep the filesystem
consistent through the replay. Say we have two transactions T1 and T2 modifying the same object. T1 got
into ZIL before T2, but T2 modified the object first. In the worst case T1 and T2 modified two objects, but
in opposite orders, making them dependent on each other. The TXG mechanism solved this problem because the TXG
was a single commit unit. We'd have to do something similar - start T1, detect the dependency, put T1 on hold,
start T2, apply the updates in the correct order, stop T1 and T2. That doesn't sound trivial. What if ZIL got many transactions
in between T1 and T2, given that we routinely run thousands of threads on an MDT? Are they all supposed to join the same
big transaction with T1 and T2? What if DMU doesn't let us put all of them in due to a TXG commit timeout or
a changed pool property resulting in bigger overhead?
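To make the crossed dependency concrete, here is the worst case as a timeline (just an illustration, not taken from real code):

    thread 1 (T1)                      thread 2 (T2)
    -------------                      -------------
    tx_start()                         tx_start()
                                       update object A      <- A touched by T2 first
    update object A
    update object B                                         <- B touched by T1 first
                                       update object B
    tx_stop()  -> T1 gets into ZIL
                                       tx_stop()  -> T2 gets into ZIL

Replaying in ZIL order (T1, then T2) gets object A wrong; replaying T2 first gets object B wrong. Within a TXG this never matters because both transactions commit, or disappear, as a single unit.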
Here is where snapshots come in - the only reason for the transaction is to keep the filesystem consistent, so
what if we implement our own commit points using snapshots? Essentially we mimic a TXG: take a snapshot,
apply the updates in the order we need, discard the snapshot if all the updates succeeded, roll back to the
snapshot otherwise. If the system crashes during replay, we'll find the snapshot, roll back to it and can
repeat again. In this scheme there is zero need to modify Lustre core code; everything (except optimizations
like 8K writes to update a single bit in the llog header) is done within osd-zfs.
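The control flow would be roughly like this - the osd_snapshot_*() and osd_replay_*() helpers below are hypothetical stand-ins for whatever osd-zfs would actually call, only the shape of the loop matters:

#include <stdbool.h>
#include <stddef.h>

/* hypothetical helpers, not existing APIs */
bool osd_snapshot_exists(const char *name);
int  osd_snapshot_create(const char *name);
int  osd_snapshot_rollback(const char *name);
int  osd_snapshot_destroy(const char *name);
void *osd_next_update(void);            /* next logged update, in the required order */
int  osd_replay_one(void *update);      /* apply one logged update */

int osd_replay_all(void)
{
        void *u;
        int rc;

        /*
         * The snapshot is our commit point.  If a previous replay attempt
         * crashed, the snapshot is still there: roll back to it so we start
         * from a consistent state; otherwise take it now.
         */
        if (osd_snapshot_exists("osd-replay"))
                rc = osd_snapshot_rollback("osd-replay");
        else
                rc = osd_snapshot_create("osd-replay");
        if (rc != 0)
                return rc;

        while ((u = osd_next_update()) != NULL) {
                rc = osd_replay_one(u);
                if (rc != 0) {
                        /* undo the partial replay; it can be retried later */
                        osd_snapshot_rollback("osd-replay");
                        return rc;
                }
        }

        /* every update applied in the required order: drop the commit point */
        return osd_snapshot_destroy("osd-replay");
}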
Aurelien,
the previous development for ZIL integration was too complex to land and maintain. The main problem was that while Lustre locks individual data structures during updates (e.g. each log file, directory, etc), it does not hold a single global lock across the whole filesystem update since that would cause a lot of contention. Since all of these updates are done within a single filesystem transaction (TXG for ZFS), there is no problem if they are applied to the on-disk data structures in slightly different orders because they will either commit to disk or be lost as a single unit.
With ZIL, each RPC update needs to be atomic across all of the files/directories/logs that are modified, which caused a number of problems with the implementation in ZFS and the Lustre code.
One idea that I had for a better approach for Lustre (though incompatible with the current ZPL ZIL usage) is, instead of trying to log the disk blocks being modified to the ZIL, to log the whole RPC request to the ZIL (at most 64KB of data). If the server doesn't crash, the RPC would be discarded from the ZIL on TXG commit. If the server does crash, the recovery steps would be to re-process the RPCs in the ZIL at the Lustre level to regenerate the filesystem changes. That avoids the issues with unordered updates to disk. For large bulk IOs the data would still be written through to the disk blocks, and the RPC would use the data from the filesystem rather than doing another bulk transfer from the client (since ZIL RPCs would be considered "sync" and the client may not preserve the data in its memory).
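A rough sketch of what such a "whole RPC" record might look like - this is not an existing ZIL itx type, and all the names here are hypothetical:

#include <stdint.h>

#define OSD_ZIL_RPC_MAX (64 * 1024)     /* at most 64KB per record */

struct osd_zil_rpc_rec {
        uint64_t transno;               /* Lustre transaction number */
        uint32_t opcode;                /* e.g. MDS_REINT, OST_WRITE */
        uint32_t rpc_len;               /* length of the embedded request */
        /* For bulk writes the data is written through to the regular disk
         * blocks; the record only references it, so replay reads it back
         * from the dataset instead of re-transferring it from the client. */
        uint8_t  rpc_body[];            /* the request itself, <= 64KB */
};

/*
 * Normal path: the record is dropped from the ZIL when its TXG commits.
 * Crash path:  on mount, walk the ZIL and re-process each rpc_body through
 * the regular Lustre request handlers to regenerate the filesystem changes,
 * which sidesteps the unordered-update problem entirely.
 */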