Details

    • Technical task
    • Resolution: Duplicate
    • Blocker
    • Lustre 2.6.0
    • Lustre 2.5.0
    • Patches submitted to autotest
    • 9548

    Description

      from https://maloo.whamcloud.com/test_sets/0afc2c56-fc86-11e2-8ce2-52540035b04c

      This sanity-hsm test_21 failure seems to be hitting a lot right now.
      "Wrong block number" is one of the errors seen.

      test_21 	
      
          Error: 'wrong block number'
          Failure Rate: 33.00% of last 100 executions [all branches] 
      
      == sanity-hsm test 21: Simple release tests == 23:18:20 (1375510700)
      2+0 records in
      2+0 records out
      2097152 bytes (2.1 MB) copied, 0.353933 s, 5.9 MB/s
       sanity-hsm test_21: @@@@@@ FAIL: wrong block number 
        Trace dump:
        = /usr/lib64/lustre/tests/test-framework.sh:4202:error_noexit()
      

      Attachments

      Issue Links

      Activity

            [LU-3700] sanity-hsm test_21 Error: 'wrong block number'

            bfaccini Bruno Faccini (Inactive) added a comment -

            Andreas, I am not sure I understand you. Do you mean that on ZFS, to get the correct st_blocks from "stat -c %b" just after make_small(), we need to run "cancel_lru_locks osc" in between?


            adilger Andreas Dilger added a comment -

            Hmm, looking at the test more closely, now that we have a test failure, it seems the test itself is defective. The blocks count == 1 should not be true after ARCHIVE (as it was checked before the 8467 patch), only after RELEASE. The failed test reported:

            # error "$f: wrong size after archive: $size != $orig_size"
            
            wrong block number after archive:  4103 != 1
            

            so it is setting "orig_size" == 1.

            That the test passes on ldiskfs is likely just a race, because it is a bit faster and the client does not have the actual on-disk blocks count, so it will return "st_blocks=1" to userspace to avoid bugs in tar/rsync that think st_blocks=0 means the file has no data in it. The zfs test is a bit slower to archive, and the client gets the actual st_blocks value back, and (incorrectly) fails this check.

            I suspect that even with make_small using conv=fsync, the client still has the original blocks count cached, so what is needed to fix this bug is to flush the locks on the client before getting orig_blocks:

            cancel_lru_locks osc

            That way, it returns an accurate value for st_blocks for later testing.
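
            For illustration, a minimal sketch of where such a flush could sit relative to the make_small/stat lines quoted later in this ticket (the placement is an assumption, not the submitted patch):

                    # sketch: flush client DLM locks after writing the file so the
                    # following stat returns the real on-disk blocks count rather
                    # than a cached/estimated value
                    local fid=$(make_small $f)
                    cancel_lru_locks osc
                    local orig_size=$(stat -c "%s" $f)
                    local orig_blocks=$(stat -c "%b" $f)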

            Nathaniel, can you please submit a patch ASAP so maybe we can land this by tomorrow.


            utopiabound Nathaniel Clark added a comment -

            There was a failure https://maloo.whamcloud.com/test_sets/8d7a2b66-6134-11e3-bd66-52540035b04c post #8467. It is, so far, unique (and does not seem to be patch-related).


            utopiabound Nathaniel Clark added a comment -

            In case 8467 doesn't fix 21 for ZFS, here is a patch to EXCEPT it: http://review.whamcloud.com/8503


            adilger Andreas Dilger added a comment -

            Nathaniel, can you please submit a separate patch to disable this test for ZFS only? That can be tested, and possibly landed, in parallel with 8467 if that patch does not give us any relief.

            bfaccini Bruno Faccini (Inactive) added a comment - edited

            Patch is at http://review.whamcloud.com/8467.
            But I wonder how, even for ZFS, a released file's st_blocks can be anything other than 1 after my patch #7776 to force it, as part of LU-3864.


            utopiabound Nathaniel Clark added a comment -

            I'll work up and test a patch per Andreas's comment.


            adilger Andreas Dilger added a comment -

            It would make sense to me that "lfs hsm_release" would cause all of the DLM locks to be revoked from the client, so any stat from the client would return st_blocks == 1. This should be visible in the debug logs, if this test is run with at least +dlmtrace enabled.

            I think in the short term it makes sense to fix sanity-hsm.sh test_21 to enable full debug for this test (using debugsave() and debugrestore()), and print the actual block number that is returned. Maybe it is as simple as ZFS returning 2 with an external xattr or something, which might even happen with ldiskfs? Probably it makes sense to also allow some small margin, like 5 blocks or so. This test is also bad because there are two places that print "wrong block number", and it isn't even clear which one is failing.
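
            For illustration only, a rough sketch of what that extra debugging could look like (debugsave() and debugrestore() are existing test-framework.sh helpers; the exact placement and the echo of the blocks count are assumptions):

                    # sketch: capture full debug (including dlmtrace) around the
                    # check and print the raw blocks count before comparing it
                    debugsave
                    $LCTL set_param debug=-1

                    local blocks=$(stat -c "%b" $f)
                    echo "$f: st_blocks after archive = $blocks"

                    debugrestore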

            I think it makes sense to submit a patch to change this immediately to the following:

                    local fid=$(make_small $f)
                    local orig_size=$(stat -c "%s" $f)
                    local orig_blocks=$(stat -c "%b" $f)
            
                    check_hsm_flags $f "0x00000000"
                    $LFS hsm_archive $f || error "could not archive file"
                    wait_request_state $fid ARCHIVE SUCCEED
            
                    local blocks=$(stat -c "%b" $f)
                    [ $blocks -eq $orig_blocks ] || error "$f: wrong blocks after archive: $blocks != $orig_blocks"
                    local size=$(stat -c "%s" $f)
                    [ $size -eq $orig_size ] || error "$f: wrong size after archive: $size != $orig_size"
            
                    # Release and check states
                    $LFS hsm_release $f || error "$f: could not release file"
                    check_hsm_flags $f "0x0000000d"
            
                    blocks=$(stat -c "%b" $f)
            [ $blocks -le 5 ] || error "$f: too many blocks after release: $blocks > 5"
                    size=$(stat -c "%s" $f)
            [ $size -eq $orig_size ] || error "$f: wrong size after release: $size != $orig_size"
            

            Maybe this will allow ZFS to pass, but even if it doesn't then we will have more information to debug the problem.


            bfaccini Bruno Faccini (Inactive) added a comment -

            Could it be that there is some timing delay, more likely with ZFS, between hsm_release and st_blocks becoming 1?
            I will set up a ZFS platform and run sanity-hsm/test_21 in a loop to reproduce.
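
            For reference, one way such a reproduction loop could look (ONLY= is the usual way to restrict a Lustre test script to a single subtest; the iteration count here is arbitrary):

                    # sketch: repeat sanity-hsm subtest 21 until it fails
                    for i in $(seq 1 50); do
                            echo "=== iteration $i ==="
                            ONLY=21 bash sanity-hsm.sh || break
                    done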

            adilger Andreas Dilger added a comment -

            Failed again on ZFS: https://maloo.whamcloud.com/test_sessions/58a20a92-5759-11e3-8d5c-52540035b04c

            utopiabound Nathaniel Clark added a comment -

            This has been happening on ZFS pretty regularly:
            https://maloo.whamcloud.com/test_sets/5af74a0a-575e-11e3-8d5c-52540035b04c
            https://maloo.whamcloud.com/test_sets/6b9c8f22-5741-11e3-a296-52540035b04c

            People

              bfaccini Bruno Faccini (Inactive)
              keith Keith Mannthey (Inactive)
              Votes: 0
              Watchers: 9
