Type: New Feature
Affects Version/s: None
Fix Version/s: None
Erasure coding provides a more space-efficient method for adding data redundancy than mirroring, at a somewhat higher computational cost. It would typically be used to add redundancy to large, longer-lived files in order to minimize space overhead. For example, RAID-6 10+2 adds only 20% space overhead while allowing two OST failures, compared to mirroring, which adds 100% overhead for single-failure redundancy or 200% overhead for double-failure redundancy. Erasure coding can provide redundancy against an arbitrary number of drive failures (e.g. any 3 drives in a group of 16) at a fraction of the overhead.
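The overhead comparison above is simple arithmetic; the snippet below only illustrates the space/redundancy trade-off (it is not Lustre code, and the 13+3 grouping is one possible reading of "3 drives in a group of 16"):

```python
# Space overhead: k data + m parity erasure coding costs m/k extra space,
# while N extra mirror copies cost N times the file size.

def ec_overhead(data_stripes, parity_stripes):
    """Fractional space overhead of a k+m erasure-coded layout."""
    return parity_stripes / data_stripes

def mirror_overhead(extra_copies):
    """Fractional space overhead of N extra full mirror copies."""
    return float(extra_copies)

print(f"RAID-6 10+2: {ec_overhead(10, 2):.0%}")   # 20% overhead, 2 failures
print(f"13+3 group:  {ec_overhead(13, 3):.0%}")   # ~23% overhead, 3 failures
print(f"1 mirror:    {mirror_overhead(1):.0%}")   # 100% overhead, 1 failure
```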
It would be possible to implement delayed erasure coding on striped files in a similar manner to Phase 1 mirrored files, by storing the parity stripes in a separate component of the file, with a layout that indicates the erasure coding algorithm, the number of data and parity stripes, the stripe_size (which should probably match the file stripe size), etc. The encoding would be similar to RAID-4, with dedicated "data" stripes (the traditional Lustre RAID-0 file layout) in the primary component and one or more "parity" stripes stored in a separate parity component, unlike RAID-5/6, which interleave the parity with the data. For widely-striped files, there could be separate parity stripes for different sets of file stripes (e.g. 10x 12+3 for a 120-stripe file), so that data+parity could use all of the OSTs in the filesystem without two failures landing within a single parity group. For very large files, it would be possible to split the parity component into smaller extents to reduce the parity reconstruction overhead for sub-file overwrites. Erasure coding could also be added after the fact to existing RAID-0 striped files, after the initial file write, or when migrating a file from an active storage tier to an archive tier.
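The per-group layout described above (e.g. 10x 12+3 for a 120-stripe file) can be sketched as follows; the names and structure here are hypothetical illustrations, not the actual Lustre layout format:

```python
# Sketch (not Lustre code): mapping the stripes of a widely-striped file
# into fixed parity groups, so each group can tolerate its own set of
# OST failures independently of the other groups.

def parity_groups(stripe_count, data_per_group, parity_per_group):
    """Return a list of (data_stripe_indices, parity_slot_names) per group."""
    assert stripe_count % data_per_group == 0, "uneven groups not handled here"
    groups = []
    for g in range(stripe_count // data_per_group):
        data = list(range(g * data_per_group, (g + 1) * data_per_group))
        parity = [f"p{g}.{i}" for i in range(parity_per_group)]
        groups.append((data, parity))
    return groups

groups = parity_groups(120, 12, 3)
# 10 groups of 12 data + 3 parity; any 3 failures within one group survive
```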
Reads from an erasure-coded file would normally use only the primary RAID-0 component (unless data verification on read was also desired), as with non-redundant files. If a stripe in the primary component fails, the client would read the surviving data stripes and one or more parity stripes from the parity component and reconstruct the data from parity on the fly, and/or depend on the resync tool to reconstruct the failed stripe from parity.
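For the single-parity case, on-the-fly reconstruction reduces to an XOR of the surviving stripes with the parity stripe; multi-failure codes (e.g. Reed-Solomon) generalize this. A minimal sketch, not the client implementation:

```python
def xor_blocks(blocks):
    """XOR a list of equal-length byte buffers together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

data = [b"aaaa", b"bbbb", b"cccc"]     # RAID-0 data stripes
parity = xor_blocks(data)              # single parity stripe

# stripe 1 is lost: rebuild it from the surviving data stripes plus parity
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
```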
Writes to an erasure-coded file would mark the parity component stale over the extent of the data component that was modified, as with a regular mirrored file, and writes would continue to the primary RAID-0 striped file. The main difference from an FLR mirrored file is that writes would always need to go to the primary data component, and the parity component would always be marked stale. It would not be possible to write to an erasure-coded file that has a failure in a primary stripe without first reconstructing that stripe from parity.
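The stale-marking behaviour could look roughly like the following toy model (purely illustrative; the real mechanism is the FLR layout's stale state on the component, and the class and method names here are invented):

```python
class ParityComponent:
    """Toy model of a parity component's stale-extent state."""
    def __init__(self):
        self.stale_extents = []        # (start, end) byte ranges needing re-encode

    def mark_stale(self, start, end):
        """Called on write: record the modified extent as stale."""
        self.stale_extents.append((start, end))

    def resync(self, reencode):
        """Re-encode parity only for modified regions, then clear the list."""
        for start, end in self.stale_extents:
            reencode(start, end)
        self.stale_extents.clear()

pc = ParityComponent()
pc.mark_stale(0, 1 << 20)              # a 1 MiB write to the data component
```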
Erasure coding will make it possible to add full redundancy to large files or whole filesystems without using full mirroring. Striped Lustre files will be able to store redundancy in parity components that permit recovery from a specified number of OST failures (e.g. 3 OST failures per 12 stripes, or 4 OST failures per 24 stripes), in a manner similar to RAID-4 with fixed parity stripes.
The actual parity generation will be done with the lfs mirror resync tool in userspace. The Lustre client will do normal reads from the RAID-0 data component unless there is an OST failure or other error reading from a data stripe. Support will also be added for data reconstruction from the data and parity components, leveraging existing functionality for reading mirrored files.
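A sketch of what a resync-style parity-generation pass does, assuming single XOR parity (a real k+m code would emit m parity units per stripe row; the function name is illustrative, not the actual lfs mirror resync internals):

```python
def generate_parity(data_stripes):
    """XOR the corresponding bytes of each equal-length data stripe."""
    parity = bytearray(len(data_stripes[0]))
    for stripe in data_stripes:
        for i, byte in enumerate(stripe):
            parity[i] ^= byte
    return bytes(parity)

stripes = [b"AAAA", b"BBBB", b"CCCC"]  # stripe units read from the data component
parity = generate_parity(stripes)      # written to the parity component
```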
To avoid losing redundancy on erasure-coded files that are modified, the Mirrored File Writes functionality could be used during writes to such files. Changes would be merged into the erasure coded component after the file is closed, using the Phase 1 ChangeLog consumer, and then the mirror component can be dropped.
The lfs mirror resync tool needs to be updated to generate the erasure code for the RAID-0 striped file, storing the parity in a separate component from the main data stripes. There are CPU-optimized implementations of the erasure coding algorithms available, so the majority of the work would be integrating these optimized routines into the Lustre kernel modules and userspace tools, rather than developing the encoding algorithms themselves.
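Those optimized libraries implement arithmetic over GF(2^8), the core of Reed-Solomon-style codes capable of multiple parity stripes. A bitwise reference version for illustration (production code uses table-driven or SIMD routines instead; the 0x11d polynomial is a common Reed-Solomon choice, assumed here):

```python
def gf_mul(a, b):
    """Multiply two bytes in GF(2^8) modulo the polynomial 0x11d."""
    product = 0
    for _ in range(8):
        if b & 1:
            product ^= a
        carry = a & 0x80
        a = (a << 1) & 0xff
        if carry:
            a ^= 0x1d          # reduce modulo x^8 + x^4 + x^3 + x^2 + 1
        b >>= 1
    return product

# (x+1)*(x^2+x+1) = x^3+1 in GF(2^8), i.e. 3 * 7 = 9
assert gf_mul(3, 7) == 9
```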