
HSM: Add Archive UUID to delete changelog records

Details

    • Type: Bug
    • Resolution: Unresolved
    • Priority: Minor

    Description

      HSM tools currently store an external identifier (such as a UUID) in an EA (extended attribute) when a file is archived. The identifier is used to identify the file in the backend archive, and there may be more than one identifier if the file has been archived to multiple backends. Currently, different tools do this independently and do not coordinate their EA names or formats.

      When a file is deleted, the EA is no longer available, so it would be helpful to include the identifier(s) in the delete changelog record. I suggest we define a standard name and format for the HSM archive EA, and that this data be included as-is in the delete changelog record.

      One possible format would be to use JSON to encode a list of endpoints and archive IDs. Here is a strawman example to begin the discussion:

      {
        "replicas": [
          {
            "endpoint": "s3://my-bucket/archive",
            "id": "UUID"
          },
          {
            "endpoint": "wos://address",
            "id": "OID"
          }
        ]
      }
      

      Alternatively, to save space the endpoint could just be an index that refers to a specific endpoint in the local configuration.
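
      As a rough illustration, here is how a copytool or policy engine might read such a standardized EA before it is copied into the delete changelog record. This is a sketch only: "trusted.hsm.archives" is a made-up xattr name standing in for whatever name and format gets standardized.

      /* Sketch only: read a hypothetical standardized HSM archive EA.
       * "trusted.hsm.archives" is an assumed name, not an existing Lustre xattr. */
      #include <stdio.h>
      #include <stdlib.h>
      #include <sys/xattr.h>

      #define HSM_ARCHIVE_EA "trusted.hsm.archives"   /* hypothetical standard name */

      static char *read_archive_ea(const char *path)
      {
          ssize_t len = getxattr(path, HSM_ARCHIVE_EA, NULL, 0);
          char *buf;

          if (len < 0)
              return NULL;                /* EA absent or not readable */

          buf = malloc(len + 1);
          if (buf == NULL)
              return NULL;

          len = getxattr(path, HSM_ARCHIVE_EA, buf, len);
          if (len < 0) {
              free(buf);
              return NULL;
          }
          buf[len] = '\0';                /* JSON blob, e.g. {"replicas": [...]} */
          return buf;
      }

      int main(int argc, char **argv)
      {
          char *json = argc > 1 ? read_archive_ea(argv[1]) : NULL;

          if (json != NULL) {
              printf("%s\n", json);
              free(json);
          }
          return 0;
      }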


          Activity


            adilger Andreas Dilger added a comment -

            Yes, the "archive UUID" stored from the archive into the Lustre file is archive-specific, and in the Lustre-on-Lustre case it would likely be the remote FID. That said, rather than just using an arbitrary xattr, or changing the XATTR_NAME_HSM, there are benefits to putting the archive UUID as part of a composite layout (PFL/FLR) in the file.

            There are several benefits to storing the HSM identifier in a composite layout:

            • allow storing multiple different archive objects on the same file as mirrors/versions
            • allow partial restore of a file via PFL/FLR components
            • allow partial archive of a file, or archive in multiple parts if there are single-object size limits in the archive

            One candidate for this is patch https://review.whamcloud.com/33755 "LU-11376 lov: new foreign LOV format", which is just a generic Lustre layout, but even with this it would need some infrastructure changes for the code to understand this component type in the context of HSM.
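
            A rough sketch of what a per-component archive reference could look like (all names here are made up for illustration; the actual encoding would live in the free-form value of a foreign/HSM layout component):

            /* Sketch only: a hypothetical HSM reference carried per layout
             * component, so each mirror or extent can point at its own
             * archive object. None of these names exist in Lustre today. */
            #include <stdint.h>

            struct hsm_component_ref {
                uint32_t hcr_archive_id;  /* which archive backend */
                uint64_t hcr_offset;      /* start of the file extent covered */
                uint64_t hcr_length;      /* length of the file extent covered */
                uint16_t hcr_uuid_len;    /* length of the opaque identifier */
                char     hcr_uuid[];      /* opaque archive identifier (UUID, FID, ...) */
            };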

            lixi_wc Li Xi added a comment -

            I guess not many people use Lustre itself as the HSM storage. But compared to relying on other types of file systems or storage, Lustre should have good support for using Lustre itself as the HSM storage. Compared to other HSM solutions, it would be much easier for the primary Lustre and the HSM Lustre to work well together. Thus, instead of using the "informal" way of saving the UUIDs of the HSM into a Lustre file xattr, the primary Lustre should save the FID on the HSM Lustre into the archived file. The target FID on the HSM Lustre could be saved as an inline field in an extended version of XATTR_NAME_HSM, since a FID is only 128 bits.
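
            For illustration, an extended HSM xattr with the remote FID inline could look roughly like this (hsm_attrs_v2 and lu_fid_wire are made-up names; the existing fields are only approximated):

            /* Sketch only: hypothetical extension of the "hsm" xattr that
             * carries the FID of the copy on an HSM Lustre backend inline. */
            #include <stdint.h>

            struct lu_fid_wire {                 /* 128-bit Lustre FID */
                uint64_t f_seq;
                uint32_t f_oid;
                uint32_t f_ver;
            };

            struct hsm_attrs_v2 {
                uint32_t hsm_compat;             /* compatibility/version field */
                uint32_t hsm_flags;              /* HSM state flags */
                uint64_t hsm_arch_id;            /* archive number */
                uint64_t hsm_arch_ver;           /* archive version */
                struct lu_fid_wire hsm_arch_fid; /* new: FID on the HSM Lustre */
            };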


            fzago Frank Zago (Inactive) added a comment -

            It's configurable in Robinhood. For instance, Cray's copytool stores it in "trusted.tascon.uuid", and the format is ASCII.


            adilger Andreas Dilger added a comment -

            Henri, what is the xattr name used by the RBH copytool, and in what format does it store the archive identifier in the xattr (ASCII/binary, any archive prefix, etc.)?


            nrutman Nathan Rutman added a comment -

            +1 for the UnlinkedArchived directory. The only files that live there will be ones that were archived, so presumably not your thousands of scratch files, but rather only ones you wanted to keep. (This also seems a yummy way to implement undelete, if one were to track the path somehow.) Mainly it means that you don't have to know ahead of time what format (or even what EA) the backend may be using to track its IDs.
            I'll throw in another thought - it would be nice to send a tombstone request to the coordinator queue at every unlink. This would allow the copytool to do its thing without depending on Robinhood. E.g. the copytool could delete the archive copy, or could put it on a delayed-delete list, etc. This has all the same problems (it still needs to know the backend ID mapping), except that presumably the reaction time will be fast, with no "pending for a week" issues. It also starts moving away from RBH dependence, which IMHO is a good thing.

            rread Robert Read added a comment -

            I agree adding it to the changelog is more compact, but the advantage of this approach is that it decouples how the external HSM metadata is stored from Lustre internals, and provides more flexibility for the HSM tools.

            A week was just a suggestion. Obviously the TTL should be tunable and default to not retaining them at all. If the system is working properly then files should only be retained long enough to process the queue, and if things are not working properly then the directory could be flushed.


            adilger Andreas Dilger added a comment -

            If the timeout for these records is a week, then I don't think it is practical to keep this in unlinked inodes in the PENDING directory. Otherwise, there may be far too many inodes created and deleted in that period and PENDING may get too large. In that case I think it is more practical to store the UUID into the ChangeLog record.

            In newer releases it is possible to add extensible fields to ChangeLog records as needed, and the lifetime of those records will be exactly as needed. They will only consume a few bytes in a block in the log, and not an inode or increase in the size of the PENDING directory.
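
            As a sketch of that idea, an optional extension appended to unlink records could look something like the following (CLF_ARCHIVE_UUID and changelog_ext_archive are made-up names, shown only to illustrate the extensible-field approach, not actual Lustre code):

            /* Sketch only: a hypothetical optional changelog extension that
             * carries the archive identifier(s) in unlink records. A reader
             * would check the flag bit in cr_flags and, if set, locate this
             * blob after the other variable-size parts of the record. */
            #include <stdint.h>

            #define CLF_ARCHIVE_UUID 0x0080       /* made-up cr_flags bit */

            struct changelog_ext_archive {
                uint32_t cr_archive_id;           /* archive backend number */
                uint16_t cr_uuid_len;             /* length of the identifier */
                char     cr_uuid[];               /* opaque identifier from the EA */
            };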

            rread Robert Read added a comment -

            I agree, the external key should be opaque data and interpreted by the data mover associated with the archive ID for that file.

            Getting back to the original intention of this ticket, no matter how or where the key is stored, we still need to ensure the data is available after a file has been deleted. The original proposal here was to add the key to the changelog. Another option is to retain deleted inodes with the archived flag set in a pending directory (much like what is currently done for open-unlinked and migrated files). The data mover would be able to access the extended attributes directly using the FID, and since the remove operation already clears the archive flag, a periodic garbage collector could detect which inodes could be removed safely. There could also be a timeout (a week?) to clean up old files regardless of the archive flag, just to ensure they don't collect indefinitely.
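
            To make the "access by FID" part concrete, the data mover could fetch the EA through the .lustre/fid namespace on a client, roughly as below (sketch only; the EA name is the hypothetical one from the description, and the pending directory itself would be MDS-internal):

            /* Sketch only: read the archive EA of a retained inode by FID via
             * <mountpoint>/.lustre/fid/<FID>. */
            #include <stdio.h>
            #include <sys/xattr.h>

            static ssize_t archive_ea_by_fid(const char *mnt, const char *fid,
                                             char *buf, size_t buflen)
            {
                char path[4096];

                snprintf(path, sizeof(path), "%s/.lustre/fid/%s", mnt, fid);
                return getxattr(path, "trusted.hsm.archives", buf, buflen);
            }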


            fzago Frank Zago (Inactive) added a comment -

            IMO the UUID should be stored as an opaque binary array. If it is ASCII, then it limits its format (or length), and the tools have to do back and forth conversions like is currently done with FIDs.

            YAML output is nice, but there isn't a decent library to read/extract it in C. I'd prefer JSON (which one can still parse as YAML) or XML. Due to YAML's complexity, the Python YAML parser is also slower than the JSON parser. Or we could have an --xml / --json / --yaml option to lfs, allowing the user to choose.
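
            As an aside, the strawman JSON from the description is straightforward to walk from C, for example with json-c (one possible library choice, not a requirement):

            /* Sketch only: walk the strawman {"replicas": [...]} blob with json-c. */
            #include <stdio.h>
            #include <json-c/json.h>

            static void print_replicas(const char *blob)
            {
                struct json_object *root = json_tokener_parse(blob);
                struct json_object *replicas;

                if (root == NULL)
                    return;

                if (json_object_object_get_ex(root, "replicas", &replicas)) {
                    size_t i, n = json_object_array_length(replicas);

                    for (i = 0; i < n; i++) {
                        struct json_object *rep = json_object_array_get_idx(replicas, i);
                        struct json_object *ep, *id;

                        if (json_object_object_get_ex(rep, "endpoint", &ep) &&
                            json_object_object_get_ex(rep, "id", &id))
                            printf("%s -> %s\n", json_object_get_string(ep),
                                   json_object_get_string(id));
                    }
                }
                json_object_put(root);   /* drop our reference */
            }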


            adilger Andreas Dilger added a comment -

            To continue my previous comments, it would be possible to store multiple UUIDs (or whatever we want to call them) in a single struct lov_hsm_attr_v2, like objects in a lov_mds_md, if the count of such entries is also stored. They would have to be in the same archive.

            I would hope that if we are adding a new xattr format that we would just use the same lov_hsm_attr_v2 struct (whatever we decide it to look like) for the existing "hsm" xattr until composite layouts are ready, to avoid having three different ways to store this data. Depending on release schedules it may be that they are ready in the same release and we don't need to handle both access styles.
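
            A provisional sketch of what such a counted layout could look like (lov_hsm_attr_v2 and lov_hsm_entry are placeholder names here; the fields are illustrative only):

            /* Sketch only: an "hsm" xattr carrying a count of entries, in the
             * spirit of how lov_mds_md carries a count of objects. */
            #include <stdint.h>

            struct lov_hsm_entry {
                uint64_t lhe_arch_ver;        /* version of this archived copy */
                uint16_t lhe_uuid_len;        /* length of the identifier used */
                char     lhe_uuid[64];        /* opaque identifier, fixed size here */
            };

            struct lov_hsm_attr_v2 {
                uint32_t lha_compat;          /* compatibility/version field */
                uint32_t lha_flags;           /* HSM state flags */
                uint64_t lha_arch_id;         /* archive number (same for all entries) */
                uint16_t lha_entry_count;     /* number of entries that follow */
                struct lov_hsm_entry lha_entries[];
            };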


            People

              Assignee: wc-triage WC Triage
              Reporter: rread Robert Read
              Votes: 0
              Watchers: 19