[LU-8039] Running lfs hsm commands results in "No data available" Created: 19/Apr/16  Updated: 19/Apr/16

Status: Open
Project: Lustre
Component/s: None
Affects Version/s: Lustre 2.8.0
Fix Version/s: None

Type: Bug Priority: Major
Reporter: Tatsushi Takamura Assignee: WC Triage
Resolution: Unresolved Votes: 0
Labels: HSM

Severity: 3
Project: HSM
Rank (Obsolete): 9223372036854775807

Description

When multiple lfs hsm commands are run against the same file in parallel, the file can become inaccessible.

Here is a reproducer:

# cd /lustre
# echo aaa > file1
# cat file1
aaa
# while true; do lfs hsm_archive file1; lfs hsm_release file1; lfs hsm_restore file1; lfs hsm_remove file1; done
# while true; do lfs hsm_archive file1; lfs hsm_release file1; lfs hsm_restore file1; lfs hsm_remove file1; done   (run in a second terminal)
<wait for a while>
# lfs hsm_state file1
file1: (0x00000005) released exists, archive_id:1
# cat file1
cat: file1: No data available

The lfs hsm_state output shows the file flagged as released and exists (0x00000005) but not archived, so the file data was released even though it had never been archived; reading it therefore fails with "No data available" (ENODATA).
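
For convenience, the same race can be driven from a single script. The following is a minimal sketch, assuming a Lustre client mounted at /lustre and a copytool serving archive_id 1; the race_loop helper, the 2>/dev/null redirections, and the one-second polling interval are illustrative additions, not part of the original report:

#!/bin/bash
# Single-script version of the two-terminal reproducer above (sketch only).
set -u
cd /lustre || exit 1
echo aaa > file1

race_loop() {
    while true; do
        lfs hsm_archive file1 2>/dev/null
        lfs hsm_release file1 2>/dev/null
        lfs hsm_restore file1 2>/dev/null
        lfs hsm_remove  file1 2>/dev/null
    done
}

# Start two competing command streams, mimicking the two terminals.
race_loop & pid1=$!
race_loop & pid2=$!

# Stop once hsm_state reports the file as released without archived,
# i.e. the data was dropped although no copy exists in the archive.
while :; do
    state=$(lfs hsm_state file1)
    if echo "$state" | grep -q released && ! echo "$state" | grep -q archived; then
        break
    fi
    sleep 1
done

kill "$pid1" "$pid2"
wait 2>/dev/null

lfs hsm_state file1   # e.g. "(0x00000005) released exists, archive_id:1"
cat file1             # expected to fail with "No data available"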

