[LU-10911] FLR2: Erasure coding Created: 13/Apr/18  Updated: 28/Jun/23

Status: Open
Project: Lustre
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: New Feature Priority: Minor
Reporter: Andreas Dilger Assignee: Zhenyu Xu
Resolution: Unresolved Votes: 0
Labels: FLR2

Attachments: Microsoft Word Erasure Coding HDL.docx    
Issue Links:
Related
is related to LU-12649 Tracker for ongoing FLR improvements Open
is related to LUDOC-463 Add feature documentation for Erasure... Open
is related to LU-16837 interop: client skip unknown componen... Resolved
Sub-Tasks:
Key
Summary
Type
Status
Assignee
LU-12186 EC: add necessary structure to adopt ... Technical task Open Zhenyu Xu  
LU-12187 EC: erasure coding layout handling Technical task Open Zhenyu Xu  
LU-12188 EC: user tool to setup erasure coding... Technical task Open Zhenyu Xu  
LU-12189 EC: import isa-l library in Lustre build Technical task Resolved James A Simmons  
LU-12668 EC: resync parity components Technical task Open Zhenyu Xu  
LU-12669 EC: recover data from parity code Technical task Open WC Triage  
Severity: 3
Rank (Obsolete): 9223372036854775807

 Description   

Overview

Erasure coding provides a more space-efficient method for adding data redundancy than mirroring, at a somewhat higher computational cost. This would typically be used to add redundancy to large, longer-lived files in order to minimize space overhead. For example, RAID-6 10+2 adds only 20% space overhead while allowing two OST failures, compared to mirroring, which adds 100% overhead for single-failure redundancy or 200% overhead for double-failure redundancy. Erasure coding can add redundancy for an arbitrary number of drive failures (e.g. any 3 drives in a group of 16) at a fraction of the overhead.

It would be possible to implement delayed erasure coding on striped files in a similar manner to Phase 1 mirrored files, by storing the parity stripes in a separate component in the file, with a layout that indicates the erasure coding algorithm, the number of data and parity stripes, the stripe_size (which should probably match the file stripe size), etc. The encoding would be similar to RAID-4, with specific "data" stripes (the traditional Lustre RAID-0 file layout) in the primary component, and one or more "parity" stripes stored in a separate parity component, unlike RAID-5/6, which interleave the parity. For widely-striped files, there could be separate parity stripes for different sets of file stripes (e.g. 10x 12+3 for a 120-stripe file), so that data+parity would be able to use all of the OSTs in the filesystem without double failures landing within a single parity group. For very large files, it would be possible to split the parity component into smaller extents to reduce the parity reconstruction overhead for sub-file overwrites. Erasure coding could also be added after-the-fact to existing RAID-0 striped files, after the initial file write, or when migrating a file from an active storage tier to an archive tier.
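As a rough illustration of the parity-group idea, the mapping from data stripes to parity groups is simple integer arithmetic. The following sketch uses hypothetical names and the example geometry above (a 120-stripe file protected as 10 groups of 12 data + 3 parity); it is not the actual Lustre layout code.

#include <stdio.h>

struct ec_geometry {
        unsigned int data_stripes;      /* data stripes per parity group, e.g. 12 */
        unsigned int parity_stripes;    /* parity stripes per group, e.g. 3 */
};

/* parity group that protects a given data stripe index */
static unsigned int ec_parity_group(const struct ec_geometry *geo,
                                    unsigned int stripe_index)
{
        return stripe_index / geo->data_stripes;
}

int main(void)
{
        struct ec_geometry geo = { .data_stripes = 12, .parity_stripes = 3 };
        unsigned int file_stripes = 120;
        unsigned int groups = file_stripes / geo.data_stripes;

        printf("%u-stripe file -> %u parity groups, %u parity stripes total\n",
               file_stripes, groups, groups * geo.parity_stripes);
        printf("data stripe 57 is protected by parity group %u\n",
               ec_parity_group(&geo, 57));
        return 0;
}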

Reads from an erasure-coded file would normally use only the primary RAID-0 component (unless data verification on read was also desired), as with non-redundant files. If a stripe in the primary component fails, the client would read the surviving data stripes and one or more parity stripes and reconstruct the missing data from parity on the fly, and/or depend on the resync tool to reconstruct the failed stripe from parity.
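For the on-the-fly reconstruction case, a minimal userspace sketch using the ISA-L library (which LU-12189 imports into the Lustre build) could look like the following. The parameters and helper name are illustrative assumptions, not the actual client read path.

#include <stdlib.h>
#include <isa-l/erasure_code.h>

/*
 * Recover one lost data fragment (index 'lost' < k) from k surviving
 * fragments.  'survivors' holds the surviving fragment buffers and
 * 'survivor_idx' their original indices (0..k+p-1), both of length k.
 */
static int ec_recover_one(int k, int p, int len,
                          unsigned char **survivors, const int *survivor_idx,
                          int lost, unsigned char *out)
{
        int m = k + p, i, j, rc = -1;
        unsigned char *encode = malloc(m * k);
        unsigned char *sub = malloc(k * k);
        unsigned char *inv = malloc(k * k);
        unsigned char *gftbls = malloc(32 * k);
        unsigned char *recover[1] = { out };

        if (!encode || !sub || !inv || !gftbls)
                goto out;

        /* the same Reed-Solomon matrix that was used to generate the parity */
        gf_gen_rs_matrix(encode, m, k);

        /* k x k matrix built from the rows of the surviving fragments */
        for (i = 0; i < k; i++)
                for (j = 0; j < k; j++)
                        sub[k * i + j] = encode[k * survivor_idx[i] + j];

        if (gf_invert_matrix(sub, inv, k) != 0)
                goto out;

        /* row 'lost' of the inverse gives the decode coefficients */
        ec_init_tables(k, 1, &inv[k * lost], gftbls);
        ec_encode_data(len, k, 1, gftbls, survivors, recover);
        rc = 0;
out:
        free(encode); free(sub); free(inv); free(gftbls);
        return rc;
}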

Writes to an erasure-coded file would mark the parity component stale over the extent matching the modified region of the data component, as with a regular mirrored file, and writes would continue to the primary RAID-0 striped component. The main difference from an FLR mirrored file is that writes would always need to go to the primary data component, and the parity component would always be marked stale. It would not be possible to write to an erasure-coded file that has a failed primary stripe without first reconstructing it from parity.
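A hypothetical helper showing how the stale extent on the parity component could be derived from the modified byte range, by rounding out to full parity-group boundaries; the names and the rounding policy are illustrative assumptions, not the actual FLR stale-marking code.

#include <stdint.h>
#include <stdio.h>

static void ec_stale_extent(uint64_t write_start, uint64_t write_end,
                            uint64_t stripe_size, unsigned int data_stripes,
                            uint64_t *stale_start, uint64_t *stale_end)
{
        uint64_t chunk = stripe_size * data_stripes;    /* one full parity row */

        *stale_start = (write_start / chunk) * chunk;
        *stale_end = ((write_end + chunk - 1) / chunk) * chunk;
}

int main(void)
{
        uint64_t start, end;

        /* a 1 MiB write at offset 5 MiB, 1 MiB stripes, 12+3 layout */
        ec_stale_extent(5 << 20, 6 << 20, 1 << 20, 12, &start, &end);
        printf("mark parity stale for [%llu, %llu)\n",
               (unsigned long long)start, (unsigned long long)end);
        return 0;
}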

Space Efficient Data Redundancy

Erasure coding will make it possible to add full redundancy to large files or whole filesystems without resorting to full mirroring. This will allow striped Lustre files to store redundancy in parity components that allow recovery from a specified number of OST failures (e.g. 3 OST failures per 12 stripes, or 4 OST failures per 24 stripes), in a manner similar to RAID-4 with fixed parity stripes.

Required Lustre Functionality

Erasure Coded File Read

The actual parity generation will be done with the lfs mirror resync tool in userspace. The Lustre client will do normal reads from the RAID-0 data component unless there is an OST failure or other error reading from a data stripe. Support will be added for reconstructing data from the remaining data and parity components, leveraging the existing functionality for reading mirrored files.

Erasure Coded File Write

To avoid losing redundancy on erasure-coded files that are modified, the Mirrored File Writes functionality could be used during writes to such files. Changes would be merged into the erasure-coded component after the file is closed, using the Phase 1 ChangeLog consumer, and the mirror component could then be dropped.
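A heavily simplified sketch of such a ChangeLog consumer is shown below. The llapi_changelog_* calls are the standard liblustreapi interface; the resync step is a hypothetical placeholder for invoking lfs mirror resync on the affected file.

#include <stdio.h>
#include <lustre/lustreapi.h>

/* hypothetical placeholder: look up the path by FID and run "lfs mirror resync" */
static void resync_by_fid(const struct lu_fid *fid)
{
        printf("would resync [%#llx:0x%x:0x%x]\n",
               (unsigned long long)fid->f_seq, fid->f_oid, fid->f_ver);
}

int main(int argc, char **argv)
{
        const char *mdt = argc > 1 ? argv[1] : "lustre-MDT0000";
        struct changelog_rec *rec;
        void *ctx;
        int rc;

        rc = llapi_changelog_start(&ctx, CHANGELOG_FLAG_BLOCK, mdt, 0);
        if (rc < 0)
                return 1;

        /* watch for close records and queue the file for a parity resync */
        while ((rc = llapi_changelog_recv(ctx, &rec)) == 0) {
                if (rec->cr_type == CL_CLOSE)
                        resync_by_fid(&rec->cr_tfid);
                llapi_changelog_free(&rec);
        }

        llapi_changelog_fini(&ctx);
        return rc < 0 ? 1 : 0;
}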

External Components

Erasure Coded Resync Tool

The lfs mirror resync tool needs to be updated to generate the erasure code for the striped file, storing the parity in a separate component from the main RAID-0 striped file. CPU-optimized implementations of the erasure coding algorithms are already available, so the majority of the work is integrating these optimized routines into the Lustre kernel modules and userspace tools rather than developing the encoding algorithms themselves.
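For reference, a minimal userspace sketch of the parity-generation step using the CPU-optimized ISA-L routines (the example parameters are assumptions, and this is not the actual resync tool code):

#include <stdlib.h>
#include <isa-l/erasure_code.h>

#define K   10          /* data stripes in the parity group */
#define P   2           /* parity stripes */
#define LEN (1 << 20)   /* bytes encoded per stripe chunk */

int main(void)
{
        unsigned char *data[K], *parity[P];
        unsigned char encode[(K + P) * K], gftbls[32 * K * P];
        int i;

        for (i = 0; i < K; i++)
                if (!(data[i] = calloc(1, LEN)))        /* stripe data read back from the OSTs */
                        return 1;
        for (i = 0; i < P; i++)
                if (!(parity[i] = malloc(LEN)))         /* parity to write to the parity component */
                        return 1;

        /* build the Reed-Solomon encode matrix and expand the multiplication tables */
        gf_gen_rs_matrix(encode, K + P, K);
        ec_init_tables(K, P, &encode[K * K], gftbls);

        /* compute all parity chunks for this parity group in one call */
        ec_encode_data(LEN, K, P, gftbls, data, parity);

        return 0;
}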



 Comments   
Comment by Nathan Rutman [ 24/Feb/20 ]

Is this still planned for 2.14? Any progress? This ticket doesn't seem to get updated; am I looking in the wrong place?

Comment by Andreas Dilger [ 24/Feb/20 ]

The plan is still to get this into 2.14. There are patches in Gerrit that could probably be refreshed. As always, review of the patches would be welcome.

Comment by Andreas Dilger [ 24/Feb/20 ]

The patches are in Gerrit under the sub-tasks linked above. LU-12186 thru LU-12189 and LU-12668 and LU-12669 (I think these last two are still finishing development).

Comment by Alexey Lyashkov [ 03/Mar/20 ]

Andreas - can the correct Gerrit links be provided in the tickets?
https://review.whamcloud.com/34678 isn't valid anymore.
The same goes for the others.

Comment by Gerrit Updater [ 25/Mar/20 ]

[ignore this, patch pushed under wrong ticket #]

Comment by Gerrit Updater [ 25/Mar/20 ]

[ignore this, patch pushed under wrong ticket #]

Comment by Andreas Dilger [ 23/Apr/20 ]

Bobijam,
since we are very close to the end of the 2.14 feature landing window, it makes sense to submit the patches initially so that they are conditionally compiled under #ifdef ISAL_ENABLED, so that they can be landed and verified not to cause any problems with the current master code (i.e. the code is mostly a no-op initially). Then, patches can be landed to enable ISAL_ENABLED during the build, and tests should be conditional on this support (so there needs to be some way to detect it in userspace).

That will ensure that the EC code is included as part of the 2.14 release, and gives us more time to improve the build system, fix EC bugs, etc. We would want to have the #ifdef ISAL_ENABLED checks for this code anyway, so that Lustre can still build if ISA-L is not available/usable on some systems. We shouldn't leave it disabled for a long time, because untested code is going to break quickly, but the 2.14 feature landing window is supposed to close on April 30 (already 2 weeks late), and I think there are still changes that need to be finished before this feature is ready. Those can still be worked on after the feature is landed to master, before the 2.14 final release.
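(For illustration only, a minimal self-contained example of the conditional-compilation pattern described above, with hypothetical function names:)

#include <errno.h>
#include <stdio.h>

#ifdef ISAL_ENABLED
static int lov_ec_encode(void)
{
        /* the real patch would call the ISA-L encode routines here */
        return 0;
}
#else
static int lov_ec_encode(void)
{
        return -EOPNOTSUPP;     /* erasure coding not compiled in */
}
#endif

int main(void)
{
        printf("lov_ec_encode() -> %d\n", lov_ec_encode());
        return 0;
}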

Comment by Zhenyu Xu [ 23/Apr/20 ]

Yes, great insight. ISAL_ENABLED could be used to protect pre-EC file behavior and smooth the transition.

Comment by Alexey Lyashkov [ 23/Apr/20 ]

Andreas, I'm confused. You are OK with landing untested / buggy code?

Comment by Alexey Lyashkov [ 09/Jul/20 ]

Can someone provide a better HLD than the one attached? This document only covers some userspace tools and some common structure changes, but it doesn't describe anything about parity calculation - especially the case where a rewrite doesn't cover whole data stripes and the old data needs to be read in order to recalculate parity. There is no failure scenario in the document and no recovery handling, even though recovery looks very complex in this case. It doesn't describe how parity rewrites with stale old data will be avoided when two parity updates are in flight (the CR lock permits this). The lock protection for parity between nodes is poorly described, e.g. when two nodes write in parallel to halves of the same data stripe.
There is no description of compatibility with old clients.

Can the design document be updated to address these questions?

Comment by James A Simmons [ 17/Mar/21 ]

Just an update. We have moved the flr branch to the latest master and have been running the normal sanity tests. Currently we are fixing various bugs we are encountering.

Comment by James A Simmons [ 22/Apr/21 ]

I just did a rebase to the latest master and I get a build error with the latest code due to the landing of LU-12142. For lov_io_lru_reserve() we use lov_foreach_io_layout(), and lov_io_fault_store() uses lov_io_layout_at(). Both functions have changed to handle both LCT_DATA and LCT_CODE types. The question is: is it safe to just pass LCT_DATA in both cases, or do we need to examine every component to see which LCT_* type we have?

Comment by Zhenyu Xu [ 23/Apr/21 ]

I think it's OK to just pass LCT_DATA in both cases; parity code pages won't be cached after the EC IO since they are ephemeral, and a later EC IO could use other parity components.

Comment by James A Simmons [ 04/May/21 ]

In my testing I'm seeing:

kernel: Lustre: DEBUG MARKER: == sanity test 130g: FIEMAP (overstripe file) ======================================================== 14:15:49 (1620152149)
kernel: Lustre: 42446:0:(osd_handler.c:1938:osd_trans_start()) lustre-MDT0000: credits 19393 > trans_max 9984
kernel: Lustre: 42446:0:(osd_handler.c:1867:osd_trans_dump_creds())  create: 300/1200/0, destroy: 1/4/0
kernel: Lustre: 42446:0:(osd_handler.c:1867:osd_trans_dump_creds()) Skipped 4001 previous similar messages
kernel: Lustre: 42446:0:(osd_handler.c:1874:osd_trans_dump_creds())  attr_set: 3/3/0, xattr_set: 304/148/0
kernel: Lustre: 42446:0:(osd_handler.c:1874:osd_trans_dump_creds()) Skipped 4001 previous similar messages
kernel: Lustre: 42446:0:(osd_handler.c:1884:osd_trans_dump_creds())  write: 1501/12910/0, punch: 0/0/0, quota 4/4/0
kernel: Lustre: 42446:0:(osd_handler.c:1884:osd_trans_dump_creds()) Skipped 4001 previous similar messages
kernel: Lustre: 42446:0:(osd_handler.c:1891:osd_trans_dump_creds())  insert: 301/5116/0, delete: 2/5/0
kernel: Lustre: 42446:0:(osd_handler.c:1891:osd_trans_dump_creds()) Skipped 4001 previous similar messages
kernel: Lustre: 42446:0:(osd_handler.c:1898:osd_trans_dump_creds()) Skipped 4001 previous similar messages
kernel: Pid: 42446, comm: mdt03_001 3.10.0-1160.15.2.el7.x86_64 #1 SMP Thu Jan 21 16:15:07 EST 2021
kernel: Call Trace:
kernel: [<0>] libcfs_call_trace+0x90/0xf0 [libcfs]
kernel: [<0>] osd_trans_start+0x4bb/0x4e0 [osd_ldiskfs]

Comment by Andreas Dilger [ 04/May/21 ]
 kernel: Lustre: 42446:0:(osd_handler.c:1938:osd_trans_start()) lustre-MDT0000: credits 19393 > trans_max 9984

That is probably introduced by patches from LU-14134, possibly combined with large write RPCs. It isn't really fatal, but annoying and should be fixed.

There is a prototype patch in LU-14641 that would be useful to test if you can reproduce this easily.

Comment by James A Simmons [ 28/Jun/23 ]

An outside party has contacted our group at ORNL, so we pushed the current prototype to them for early review. This project is at the beta code stage.

Comment by Alexey Lyashkov [ 28/Jun/23 ]

James, can you share some comments about recovery with FLR2? How is it planned to determine which stripes are good and which are outdated and need to be reconstructed?
