[LU-7207] HSM: Add Archive UUID to delete changelog records Created: 24/Sep/15  Updated: 25/Apr/19

Status: Open
Project: Lustre
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Minor
Reporter: Robert Read (Inactive) Assignee: WC Triage
Resolution: Unresolved Votes: 0
Labels: None

Issue Links:
Related
is related to LU-6866 MDT file migration is incompatible wi... Resolved
is related to LU-10092 PCC: Lustre Persistent Client Cache Resolved
is related to LU-11376 Special file/dir to represent DAOS Co... Resolved
Severity: 3
Rank (Obsolete): 9223372036854775807

 Description   

HSM tools currently store an external identifier (such as a UUID) in an EA when a file is archived. The identifier is used to identify the file in the backend archive, and there may be more than one identifier if the file has been archived to multiple backends. Currently, different tools do this independently and do not coordinate their EA names or formats.

When a file is deleted, the EA is no longer available, so it would be helpful to include the identifier(s) in the Delete changelog record. I suggest we define a standard name and format for the HSM archive EA, and include this data as-is in the delete changelog record.
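For illustration, here is a minimal sketch of the status quo: a copytool storing its backend identifier in a user xattr at archive time. The xattr name user.hsm.archive_uuid is a made-up placeholder; no standard name exists, which is exactly the problem this ticket describes.

#include <string.h>
#include <sys/types.h>
#include <sys/xattr.h>

/* Placeholder name -- each tool currently picks its own. */
#define ARCHIVE_UUID_XATTR "user.hsm.archive_uuid"

/* Record the backend identifier on the file at archive time. */
int store_archive_uuid(const char *path, const char *uuid)
{
        return setxattr(path, ARCHIVE_UUID_XATTR, uuid, strlen(uuid), 0);
}

/* Read it back, assuming an ASCII identifier. */
int load_archive_uuid(const char *path, char *buf, size_t len)
{
        ssize_t rc = getxattr(path, ARCHIVE_UUID_XATTR, buf, len - 1);

        if (rc < 0)
                return -1;
        buf[rc] = '\0';
        return 0;
}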

One possible format would be to use JSON to encode a list of endpoint and archive id. Here is a strawman example to begin the discussion:

{
  "replicas": [
    {
      "endpoint": "s3://my-bucket/archive",
      "id": "UUID"
    },
    {
      "endpoint": "wos://address",
      "id": "OID"
    }
  ]
}

Alternatively, to save space the endpoint could just be an index that refers to a specific endpoint in the local configuration.



 Comments   
Comment by Li Xi (Inactive) [ 28/Sep/15 ]

This is really a good idea. Vote +1

Comment by Li Xi (Inactive) [ 12/Nov/15 ]

Is anybody working on this, or planning to? If not, I'd like to at least do a little research on it.

Comment by Andreas Dilger [ 08/Dec/15 ]

It is my goal that the HSM archive xattr also be usable as a component in a composite file (http://wiki.lustre.org/Layout_Enhancement#2.1._Composite_Layouts). That would mean there is no need to have an HSM structure that allows multiple archive IDs to be expressed directly, since this could be handled by the composite layouts rather than as a separate xattr.

The main reason to put the HSM archive ID into a composite file is to allow partial file restore to be implemented. The HSM archive component would typically cover the whole file (though it could also cover a subset if that were really needed, by specifying the extents of the component, and possibly allowing an "offset" of the archived file within the component). Partial restores from tape would get separate OST-based components that "mirror" the archive copy (i.e. overlapping extents) for the part of the file that is restored.

Also, having a binary data structure along the lines of lov_mds_md would be easier to manage in the kernel, and in particular the structure needs to have a unique magic value at the start so that the component type can be identified (e.g. HSM archive component vs. RAID-0 on OST(s) vs. RAID-N parity).

My strawman would allow the direct use of older HSM xattrs as a sub-layout, to ease converting over existing files, something like:

struct lov_hsm_attrs_v1 {
        __u32 hsm_magic;                /* LOV_MAGIC_HSM_V1, replaces hsm_compat */
        __u32 hsm_flags;                /* HS_* states from enum hsm_states */
        __u64 hsm_arch_id;              /* integer archive number the data is in */
        __u64 hsm_arch_ver;             /* data version of file in archive */
};

The new HSM sub-layout that includes the archive UUID would look something like:

struct lov_hsm_attrs_v2 {
        __u32 hsm_magic;                /* LOV_MAGIC_HSM_V2, replaces hsm_compat */
        __u32 hsm_flags;                /* HS_* states from enum hsm_states */
        __u64 hsm_arch_id;              /* integer archive number the data is in */
        __u64 hsm_arch_ver;             /* data version of file in archive */
        __u16 hsm_file_id_len;          /* length of archive-unique hsm_file_id */
        __u16 hsm_padding2;
        unsigned char hsm_file_id[0];   /* identifier for file data within the hsm_arch_id archive */
};
  • hsm_magic is LOV_MAGIC_HSM_V2 = 0x45320BD0 ("4532" => "HSM2")
  • hsm_flags is a mask of the HS_* flags from enum hsm_states*
  • hsm_arch_id would continue to be as it is today - an integer identifier for the archive in which the data exists. Normally this would be a small integer that is an index in a table to identify which copytool should be used, but might map directly to some other identifier (e.g. tape volume?) in some implementations.
  • hsm_arch_ver is a hash that identifies the version of data stored in the archived file. There is no relationship assumed between different hsm_arch_ver values, other than equality indicating that the data is identical.
  • hsm_file_id_len is the length of hsm_file_id in bytes.
  • hsm_file_id is an archive-specific identifier for the file in the archive identified by hsm_arch_id. (Open question: should this be ASCII, with a trailing NUL? Or is, say, a binary 16-byte UUID preferable to a 36-byte ASCII one, to save space inside the inode, with one of { HS_UUID | HS_U64 | HS_ASCII | HS_BIN } set so the identifier could be formatted correctly for printing?)

* enum hsm_states should be renamed enum hsm_flags to match the comment at struct hsm_attrs (or vice versa in the rest of the code), and enum hsm_flags should be used for all of the variables and functions that hold HS_* values.
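To make the variable-length xattr concrete, here is a minimal sketch of building such a value in userspace, assuming the strawman struct above. The magic value is the one proposed in this comment; the helper itself is purely illustrative.

#include <linux/types.h>
#include <stdlib.h>
#include <string.h>

#define LOV_MAGIC_HSM_V2 0x45320BD0     /* "4532" => "HSM2", per above */

/* Illustrative only: allocate and fill the fixed header plus the
 * variable-length archive identifier in the flexible array member. */
struct lov_hsm_attrs_v2 *hsm_attrs_build(__u32 flags, __u64 arch_id,
                                         __u64 arch_ver,
                                         const void *file_id, __u16 id_len)
{
        struct lov_hsm_attrs_v2 *attrs;

        attrs = calloc(1, sizeof(*attrs) + id_len);
        if (attrs == NULL)
                return NULL;

        attrs->hsm_magic = LOV_MAGIC_HSM_V2;
        attrs->hsm_flags = flags;
        attrs->hsm_arch_id = arch_id;
        attrs->hsm_arch_ver = arch_ver;
        attrs->hsm_file_id_len = id_len;
        memcpy(attrs->hsm_file_id, file_id, id_len);
        return attrs;
}

The on-disk xattr size is then sizeof(*attrs) + hsm_file_id_len, which is where the binary-UUID versus ASCII-UUID space question above comes in.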

Comment by Andreas Dilger [ 09/Dec/15 ]

For composite layout access by userspace, "lfs getstripe" will be updated as part of the PFL project to format composite layouts in YAML format, so this can be consumed directly by user tools if desired, something like below (still open to suggestions on this):

$ lfs getstripe -v /mnt/lustre/file
"/mnt/lustre/file":
  fid: "[0x200000400:0x2c3:0x0]"
  composite_header:
    composite_magic: 0x0BDC0BD0
    composite_size:  536
    composite_gen:   6
    composite_flags: 0
    component_count: 3
  components:
    - component_id:     2
      component_flags:  stale, version
      component_start:  0
      component_end:    18446744073709551615
      component_offset: 152
      component_size:   48
      sub_layout:
        hsm_magic:      0x45320BD0
        hsm_flags:      [ exists, archived ] 
        hsm_arch_id:    1
        hsm_arch_ver:   0xabcd1234
        hsm_uuid_len:   16
        hsm_uuid:      e60649ac-b4e3-453f-88c7-611e78c38d5a
    - component_id:     3
      component_flags:  0
      component_start:  20971520
      component_end:    216777216
      component_offset: 208
      component_size:   144
      sub_layout:
        lmm_magic:        0x0BD30BD0
        lmm_pattern:      1
        lmm_stripe_size:  1048576
        lmm_stripe_count: 4
        lmm_stripe_index: 0
        lmm_layout_gen:   0
        lmm_pool:         flash
        lmm_obj:
          - 0: { lmm_ost: 0, lmm_fid: "[0x100000000:0x2:0x0]" }
          - 1: { lmm_ost: 1, lmm_fid: "[0x100010000:0x3:0x0]" }
          - 2: { lmm_ost: 2, lmm_fid: "[0x100020000:0x4:0x0]" }
          - 3: { lmm_ost: 3, lmm_fid: "[0x100030000:0x4:0x0]" }
    - component_id:     4
      component_flags:  0
      component_start:  3355443200
      component_end:    3367108864
      component_offset: 352
      component_size:   144
      sub_layout:
        lmm_magic:        0x0BD30BD0
        lmm_pattern:      1
        lmm_stripe_size:  4194304
        lmm_stripe_count: 4
        lmm_stripe_index: 5
        lmm_pool:         flash
        lmm_layout_gen:   0
        lmm_obj:
          - 0: { lmm_ost: 5, lmm_fid: "[0x100050000:0x2:0x0]" }
          - 1: { lmm_ost: 6, lmm_fid: "[0x100060000:0x2:0x0]" }
          - 2: { lmm_ost: 7, lmm_fid: "[0x100070000:0x3:0x0]" }
          - 3: { lmm_ost: 0, lmm_fid: "[0x100000000:0x3:0x0]" }

This describes a file that was originally written (as a normal RAID-0 file), then archived (creating component_id #2 on the same file), after which two disjoint parts of the file (at offsets 21MB and 3.3GB) were read back in from tape to create component_ids #3 and #4. The actual policy decision of when to read in partial files is up to the policy engine and copytool, and outside the scope of the on-disk format.

Comment by Robert Read (Inactive) [ 09/Dec/15 ]

Will it be possible to support multiple hsm sub_layouts per component?

UUID has a specific meaning, and not all of the identifiers will be UUIDs, so the field should be a bit more generic, such as hsm_identifier or hsm_data. (I know we've excessively abused "UUID" in Lustre since forever, but there's no reason to continue doing that.)

YAML output is great, but I'd expect copytools would be using the API to retrieve the layout data and update the identifiers.

Note, we'll still need to use user xattrs to store UUIDs until this work is completed, so the original idea here would still be a useful interim solution.

Comment by Andreas Dilger [ 10/Dec/15 ]

My initial thought wouldn't be to store multiple UUIDs per component, but rather to store each archive copy in a separate component, possibly expanding the lov_hsm_attrs_v2 to store an "archive date" so that this could be used for storing multiple versions of the file (in-filesystem versions would store the timestamps on the OST objects as they do now). That makes archive copies and in-filesystem copies more alike.

The main difference, besides performance, would be that we can't randomly update the archive data copy, though we could do clever things like create new components for parts of the file being written, so long as they are block aligned.

Comment by Andreas Dilger [ 10/Dec/15 ]

To continue my previous comments, it would be possible to store multiple UUIDs (or whatever we want to call them) in a single struct lov_hsm_attrs_v2, like objects in a lov_mds_md, if the count of such entries is also stored. They would all have to be in the same archive.
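Purely as a sketch of that multi-identifier variant (the names here are invented for discussion, not from any patch):

/* Sketch only: multiple identifiers per xattr, analogous to the
 * object array in lov_mds_md.  All entries share one archive. */
struct lov_hsm_file_id {
        __u16 hfi_len;                  /* bytes used in hfi_id */
        unsigned char hfi_id[62];       /* fixed slot for one identifier */
};

struct lov_hsm_attrs_multi {
        __u32 hsm_magic;                /* hypothetical new magic */
        __u32 hsm_flags;                /* HS_* states */
        __u64 hsm_arch_id;              /* single archive for all entries */
        __u64 hsm_arch_ver;             /* data version in the archive */
        __u16 hsm_id_count;             /* number of entries that follow */
        __u16 hsm_padding;
        struct lov_hsm_file_id hsm_ids[0];
};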

I would hope that if we are adding a new xattr format, we would just use the same lov_hsm_attrs_v2 struct (whatever we decide it should look like) for the existing "hsm" xattr until composite layouts are ready, to avoid having three different ways to store this data. Depending on release schedules, it may be that both are ready in the same release and we don't need to handle both access styles.

Comment by Frank Zago (Inactive) [ 08/Jan/16 ]

IMO the UUID should be stored as an opaque binary array. If it is ASCII, then that limits its format (or length), and the tools have to do back-and-forth conversions, as is currently done with FIDs.

YAML output is nice, but there isn't a decent library to read/extract it in C. I'd prefer JSON (which one can still parse as YAML) or XML. Due to YAML's complexity, the Python YAML parser is also slower than the JSON parser. Or we could add --xml / --json / --yaml options to lfs, allowing the user to choose.
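For illustration, parsing the strawman JSON from the description is short in C with an existing library. A sketch using cJSON (one example library, not a proposed dependency):

#include <stdio.h>
#include <cjson/cJSON.h>

/* Walk the "replicas" array from the strawman JSON and print each
 * endpoint/identifier pair.  Error handling kept minimal; cJSON's
 * accessors are NULL-safe, so a parse failure just prints nothing. */
void print_replicas(const char *text)
{
        cJSON *root = cJSON_Parse(text);
        cJSON *replicas = cJSON_GetObjectItemCaseSensitive(root, "replicas");
        cJSON *replica;

        cJSON_ArrayForEach(replica, replicas) {
                cJSON *ep = cJSON_GetObjectItemCaseSensitive(replica, "endpoint");
                cJSON *id = cJSON_GetObjectItemCaseSensitive(replica, "id");

                if (cJSON_IsString(ep) && cJSON_IsString(id))
                        printf("%s -> %s\n", ep->valuestring, id->valuestring);
        }
        cJSON_Delete(root);
}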

Comment by Robert Read (Inactive) [ 24/Feb/16 ]

I agree: the external key should be opaque data, interpreted by the data mover associated with the archive ID for that file.

Getting back to the original intention of this ticket: no matter how or where the key is stored, we still need to ensure the data is available after a file has been deleted. The original proposal here was to add the key to the changelog. Another option is to retain deleted inodes that have the archived flag set in a pending directory (much like what is currently done for open-unlinked and migrated files). The data mover would be able to access the extended attributes directly using the FID, and since the remove operation already clears the archive flag, a periodic garbage collector could detect which inodes can be removed safely. There could also be a timeout (a week?) to clean up old files regardless of the archive flag, just to ensure they don't accumulate indefinitely.
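A sketch of that garbage collector (the pending directory itself is hypothetical; llapi_hsm_state_get is the existing liblustreapi call for reading the HS_* flags):

#include <dirent.h>
#include <limits.h>
#include <stdio.h>
#include <unistd.h>
#include <lustre/lustreapi.h>

/* Sketch: scan a hypothetical pending directory of deleted-but-archived
 * files and drop any entry whose archive flag the data mover has
 * already cleared (i.e. the backend copy is gone). */
void gc_pending(const char *pending_dir)
{
        DIR *dir = opendir(pending_dir);
        struct dirent *ent;
        struct hsm_user_state hus;
        char path[PATH_MAX];

        if (dir == NULL)
                return;
        while ((ent = readdir(dir)) != NULL) {
                if (ent->d_name[0] == '.')
                        continue;
                snprintf(path, sizeof(path), "%s/%s", pending_dir,
                         ent->d_name);
                if (llapi_hsm_state_get(path, &hus) == 0 &&
                    !(hus.hus_states & HS_ARCHIVED))
                        unlink(path);
        }
        closedir(dir);
}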

Comment by Andreas Dilger [ 25/Feb/16 ]

If the timeout for these records is a week, then I don't think it is practical to keep them as unlinked inodes in the PENDING directory; far too many inodes may be created and deleted in that period, and PENDING may get too large. In that case I think it is more practical to store the UUID in the ChangeLog record.

In newer releases it is possible to add extensible fields to ChangeLog records as needed, and the lifetime of those records will be exactly as long as needed. They will only consume a few bytes in a block of the log, rather than an inode, and will not increase the size of the PENDING directory.

Comment by Robert Read (Inactive) [ 25/Feb/16 ]

I agree that adding it to the changelog is more compact, but the advantage of the pending-directory approach is that it decouples how the external HSM metadata is stored from Lustre internals, and provides more flexibility for the HSM tools.

A week was just a suggestion. Obviously the TTL should be tunable and default to not retaining them at all. If the system is working properly then files should only be retained long enough to process the queue, and if things are not working properly then the directory could be flushed.

Comment by Nathan Rutman [ 23/Jun/16 ]

+1 for the UnlinkedArchived directory. The only files that live there will be ones that were archived, so presumably not your thousands of scratch files, but rather only ones you wanted to keep. (This also seems a yummy way to implement undelete, if one were to track the path somehow.) Mainly it means that you don't have to know ahead of time what format (or even what EA) the backend may be using to track its IDs.
I'll throw in another thought: it would be nice to send a tombstone request to the coordinator queue at every unlink. This would allow the copytool to do its thing without depending on Robinhood. E.g. the copytool could delete the archive copy, or put it on a delayed-delete list, etc. This has all the same problems (we still need to know the backend ID mapping), except that presumably the reaction time will be fast, with no "pending for a week" issues. It also starts moving away from RBH dependence, which IMHO is a good thing.

Comment by Andreas Dilger [ 11/Nov/17 ]

Henri, what is the xattr name used by the RBH copytool, and in what format does it store the archive identifier in the xattr (ASCII/binary, any archive prefix, etc.)?

Comment by Frank Zago (Inactive) [ 13/Nov/17 ]

It's configurable in Robinhood. For instance, Cray's copytool stores it in "trusted.tascon.uuid", and the format is ASCII.

Comment by Li Xi [ 14/Mar/19 ]

I guess not many people use Lustre itself as the HSM storage. But compared to relying on other types of filesystems or storage, Lustre should have good support for using Lustre itself as HSM storage. Compared to other HSM solutions, it would be much easier for the primary Lustre and the HSM Lustre to work well together. Thus, instead of using the "informal" way of saving the UUIDs of the HSM into a Lustre file xattr, the primary Lustre should save the FID on the HSM Lustre into the archived file. The target FID on the HSM Lustre could be saved as an inline field in an extended version of XATTR_NAME_HSM, since a FID is only 128 bits.
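A sketch of what that could look like (the struct name and layout here are invented for illustration; the first four fields mirror the current struct hsm_attrs, and struct lu_fid is the existing 128-bit Lustre FID):

/* Sketch: XATTR_NAME_HSM payload extended with an inline FID of the
 * archived copy on the HSM Lustre filesystem.  Names are invented. */
struct hsm_attrs_lustre {
        __u32 hsm_compat;
        __u32 hsm_flags;
        __u64 hsm_arch_id;
        __u64 hsm_arch_ver;
        struct lu_fid hsm_arch_fid;     /* 128-bit FID on the HSM Lustre */
};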

Comment by Andreas Dilger [ 16/Mar/19 ]

Yes, the "archive UUID" stored from the archive into the Lustre file is archive-specific, and in the Lustre-on-Lustre case it would likely be the remote FID. That said, rather than just using an arbitrary xattr, or changing the XATTR_NAME_HSM, there are benefits to putting the archive UUID as part of a composite layout (PFL/FLR) in the file.

There are several benefits to storing the HSM identifier in a composite layout:

  • allow storing multiple different archive objects on the same file as mirrors/versions
  • allow partial restore of a file via PFL/FLR components
  • allow partial archive of a file, or archive in multiple parts if there are single-object size limits in the archive

One candidate for this is patch https://review.whamcloud.com/33755 "LU-11376 lov: new foreign LOV format", which is just a generic Lustre layout, but even with this it would need some infrastructure changes for the code to understand this component type in the context of HSM.
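For reference, the foreign layout in that patch is an opaque, type-tagged blob along these lines (field names as in the proposed lov_foreign_md; the HSM-specific type value is hypothetical), so an archive identifier could simply be carried in lfm_value:

struct lov_foreign_md {
        __u32 lfm_magic;                /* LOV_MAGIC_FOREIGN */
        __u32 lfm_length;               /* length of lfm_value */
        __u32 lfm_type;                 /* e.g. a new HSM foreign type */
        __u32 lfm_flags;                /* type-specific flags */
        char  lfm_value[];              /* e.g. "s3://my-bucket/archive/UUID" */
};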
