
DoM/HSM: hsm_release fails after hsm_restore

Details

    • Type: Bug
    • Resolution: Fixed
    • Priority: Minor
    • Fix Version: Lustre 2.16.0
    • Affects Version: Lustre 2.12.0

    Description

       There is an issue when releasing a file striped with DoM after an hsm_restore.

      To reproduce:

      1) create a file with a 1st component on MDT:

      lfs setstripe -E 1M -L mdt -E -1 -S 4M -c -1 /mnt/lustre/domfile

      2) archive and release the file (requires HSM to be set up):
       

      lfs hsm_archive /mnt/lustre/domfile
      # (wait for archive to complete)
      lfs hsm_release /mnt/lustre/domfile

      3) restore the file

      lfs hsm_restore /mnt/lustre/domfile
      # or cat /mnt/lustre/domfile

      4) release the file => FAILS 

      lfs hsm_release /mnt/lustre/domfile
      
      Cannot send HSM request (use of /mnt/lustre/domfile): Device or resource busy

       
      It may be that something is wrong with the data version stored in the HSM EA.
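
      For reference, a hedged way to watch the version that the release check compares: the copy recorded in the HSM EA is not directly visible from userspace, but the live value can be read with lfs data_version (sketch only, using the reproducer's path).

      lfs data_version -n /mnt/lustre/domfile   # after the archive completes, before release
      lfs hsm_release /mnt/lustre/domfile
      lfs hsm_restore /mnt/lustre/domfile
      lfs data_version -n /mnt/lustre/domfile   # after restore; per the comments below, for a DoM
                                                # file the copy in the HSM EA no longer matches this
                                                # value, so the release in step 4 fails with EBUSY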


          Activity

            [LU-12031] DoM/HSM: hsm_release fails after hsm_restore

            adilger Andreas Dilger added a comment -

            Ben - see LU-9961

            beevans Ben Evans (Inactive) added a comment -

            It would be nice if, instead of all the playing around with temp files, we could just restore to a stripe and, once completed, mark it as primary. We should also be able to restore all the other layout information as well and mark it as secondary.

            tappro Mikhail Pershin added a comment -

            What is actually bad about DoM release/restore is that the DoM stripe is not really archived and not really restored, even though it 'looks' that way. On archive the DoM data is read and stored in the archive, but unlike an OST object the data in the inode is not truncated and stays untouched. Upon restore the DoM data is read from the archive and written into the volatile file's inode, but on layout swap that copy is discarded along with the volatile file, and the original data in the original inode simply becomes visible again because the layout says it exists. So the DoM data stays in the inode the whole time and its copy in the archive is just lost along with the volatile file. That means there is no sense in archiving data that is always kept in the inode on disk. Therefore I tend to return to the first solution, where the DoM stripe is either not released or just removed in favor of the first OST stripe, if one exists.

            tappro Mikhail Pershin added a comment -

            Ben, unlike the inode_version, the data_version would not be changed by an xattr set; that is why I was trying to introduce it. As on an OST, it would change only on data modifications: write, truncate and fallocate. That solves the problem of metadata operations affecting the data_version, though it requires a separate xattr to store it.

            beevans Ben Evans (Inactive) added a comment -

            I think it's much more sinister than that. In the non-DoM case, the data_version is calculated for each portion of the file (all on OSTs), then combined into a single data version and written to an xattr on the MDT. For DoM, the act of writing the HSM data_version to an xattr would itself cause the data_version on the MDT to change, unless we can predict what the "next" DoM data_version is so that the HSM xattr agrees with the calculated data_version after the xattr is written. So for a restore->release case it will always be wrong.
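
            A minimal way to see this effect from a client, assuming user xattrs are enabled (the path and xattr name below are only illustrative): on a DoM file, a metadata-only setxattr is expected to change the value reported by lfs data_version.

            # create a small DoM file and write some data into its MDT component
            lfs setstripe -E 1M -L mdt -E -1 -S 4M /mnt/lustre/dom_demo
            dd if=/dev/urandom of=/mnt/lustre/dom_demo bs=64k count=1
            lfs data_version -n /mnt/lustre/dom_demo   # note the value
            # metadata-only change: set a user xattr, no data is written
            setfattr -n user.demo -v 1 /mnt/lustre/dom_demo
            lfs data_version -n /mnt/lustre/dom_demo   # expected to differ, per the analysis above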

            adilger Andreas Dilger added a comment -

            Mike, in case it is helpful to you, newer ext4 code has a "swap data" operation that is meant to allow swapping a "volatile" file into the boot loader inode. This could be used to swap data between two DoM files if needed.

            That said, your recent comments indicate that it isn't the DoM data swap that is the main obstacle, but the ordering problem of the data version. IMHO, a content-based hash is probably still too expensive if the data version is used regularly. That would make inode operations that need 1KB/inode into data operations that need (possibly) 1MB/inode, or at least 64KB/inode. There was some discussion recently on whether the data version should be used for NFS file modification tracking, so doing a DoM checksum on every file access would be punishing. Storing a separate xattr would be much more efficient.

            Maybe I'm missing something, but is it not possible to store the "original" object version in the swapped MDT inode? This might mess with recovery, but if the volatile file is gone it would be pretty clear that the layout swap could not be replayed in any case. We could also special-case the replay operation for layout swap to take this into consideration.

            tappro Mikhail Pershin added a comment - edited

            Well, it looks like having a separate data_version is not the solution here. The main problem is the restore process, which uses a volatile file for the data transfer. When the transfer is complete, the file data version is calculated, including the DoM stripe version, and is stored in the HSM extended attribute. The next step is a layout swap between the volatile file and the original one, which cannot keep the same DoM stripe because that is inode data. So after the layout swap the data versions of the DoM stripe and of the file itself always differ from the one stored in the HSM xattr, which is just copied to the original file from the volatile one. I can't find a way to do this correctly without compromising the whole process. We can't just allow the DoM stripe data version to be different, because that would hide possible real problems. From that point, any further attempt to work around it looks no less 'hacky' than the solution proposed initially.

            Another idea could be a different approach to calculating the data_version for the DoM stripe, e.g. making it content-based rather than transaction-based, like a checksum of the DoM data, considering that it is not big and that the data_version is mostly used by HSM, so the operation is rare enough not to affect performance. That also means we don't need to store it anywhere, which is another plus, since a separate 'data_version' could only be stored as a new xattr.
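
            As a rough illustration of the content-based idea (not an implementation), a "version" for the DoM component could be derived from a checksum of just that region, so it only changes when the data itself changes:

            # checksum only the DoM region (1 MiB in the reproducer's layout);
            # a metadata-only update such as setxattr leaves this value unchanged
            dd if=/mnt/lustre/domfile bs=1M count=1 2>/dev/null | cksum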


            tappro Mikhail Pershin added a comment -

            Ben, yes, the version could be corrected by userspace tools, though I wouldn't call that a fix.

            While I proposed a possible solution above with a non-released DoM component and have a patch for it, I am not confident in it. It looks 'hacky', with all those layout manipulations just to avoid a wrong data version. The correct solution would be a separate 'data_version' maintained for a DoM file in addition to the inode_version. I am figuring out how difficult that would be.

            beevans Ben Evans (Inactive) added a comment -

            Would adding LU-13384 and performing Restore->Archive (without data copy)->Release fix most of this in userspace? We would regenerate the correct version during the fake archive step and be able to release cleanly.

            This probably won't address the DoM stripe issue; changing HSM to use file layouts would probably be the best way to handle that.
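
            A minimal sketch of that userspace sequence, assuming a plain re-archive stands in for the LU-13384 "archive without data copy" step (which does not exist yet):

            lfs hsm_restore /mnt/lustre/domfile
            lfs hsm_archive /mnt/lustre/domfile   # with LU-13384 this would skip the data copy
            lfs hsm_action /mnt/lustre/domfile    # poll until the archive request completes
            lfs hsm_release /mnt/lustre/domfile   # expected to succeed once the stored version is refreshed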

            tappro Mikhail Pershin added a comment - edited

            This issue is bigger than it seems at first sight. The initial problem was a DoM data version mismatch after restore: when a file with a DoM stripe is restored, the HSM xattr stores the data version, but that same setxattr operation changes the inode version, which is also used as the data version for the MDT stripe. So it is simply impossible to store the current data version for a DoM file.
            The proposed solution is:
            1. if there is a next component after the DoM one, don't restore the DoM stripe but delete it; the data will go to the next component. This makes sense because if the next component is already in use, the DoM stripe has lost most of its benefits and there is little point in keeping it.
            2. if the DoM stripe is the only stripe in use, it is worth keeping, and such a file can always be considered non-released. In general that is OK because these files do not consume much space. The question is whether that would be handled nicely by HSM software; I suppose it should be, since we can already set the norelease flag on selected files, which is technically the same thing (see the sketch after this list).
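
            A hedged illustration of point 2: a DoM-only file can already be pinned today by marking it norelease, which is effectively the behaviour proposed here (the path below is only an example).

            lfs setstripe -E 1M -L mdt /mnt/lustre/dom_only   # file with a single DoM component
            lfs hsm_set --norelease /mnt/lustre/dom_only
            lfs hsm_state /mnt/lustre/dom_only                # shows the norelease flag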

            Meanwhile, while working on this issue I found another one, related to VOLATILE file handling. Such files are usually used as temporary files to copy data into and then swap layouts with the original file, e.g. during HSM release and restore. The problem is that they can be created with a DoM layout, which cannot be swapped because we swap layouts but not inode data. Such VOLATILE files can get their layout from the striping saved during the archive operation or from the default layout. In any case, it is unsafe to create a VOLATILE file with a DoM layout and that should be prohibited. LU-13515 was created for that.


            tappro Mikhail Pershin added a comment -

            Yes, there is a problem with the data version mismatch. I will investigate.

            People

              Assignee: tappro Mikhail Pershin
              Reporter: cealustre CEA
              Votes: 0
              Watchers: 18
