Lustre / LU-6203

sanity-hsm test 251: FAIL: Copytool failed to stop in 20s


    Description

      sanity-hsm test 251 failed as follows:

      CMD: shadow-26vm10 pkill -INT -x lhsmtool_posix
      CMD: shadow-26vm10 pgrep -x lhsmtool_posix
      shadow-26vm10: 7902
      Copytool still running on shadow-26vm10
      CMD: shadow-26vm10 pgrep -x lhsmtool_posix
      shadow-26vm10: 7902
      Copytool still running on shadow-26vm10
      CMD: shadow-26vm10 pgrep -x lhsmtool_posix
      shadow-26vm10: 7902
      Copytool still running on shadow-26vm10
      CMD: shadow-26vm10 pgrep -x lhsmtool_posix
      shadow-26vm10: 7902
      Copytool still running on shadow-26vm10
      CMD: shadow-26vm10 pgrep -x lhsmtool_posix
      shadow-26vm10: 7902
      Copytool still running on shadow-26vm10
      CMD: shadow-26vm10 pgrep -x lhsmtool_posix
      shadow-26vm10: 7902
      Copytool still running on shadow-26vm10
      CMD: shadow-26vm10 pgrep -x lhsmtool_posix
      shadow-26vm10: 7902
      Copytool still running on shadow-26vm10
      CMD: shadow-26vm10 pgrep -x lhsmtool_posix
      shadow-26vm10: 7902
      Copytool still running on shadow-26vm10
      CMD: shadow-26vm10 pgrep -x lhsmtool_posix
      shadow-26vm10: 7902
      Copytool still running on shadow-26vm10
      CMD: shadow-26vm10 pgrep -x lhsmtool_posix
      shadow-26vm10: 7902
       sanity-hsm test_251: @@@@@@ FAIL: Copytool failed to stop in 20s ... 
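
      For reference, the failure comes from the copytool-death wait added to copytool_cleanup() by LU-5622 (see the commit quoted in the comments below). The snippet here is only a minimal sketch of the kind of loop that produces the output above, not the exact sanity-hsm code; the host name and the 20s limit are taken from the log, and do_node()/error() are the usual test-framework.sh helpers:

        agent=shadow-26vm10
        do_node $agent "pkill -INT -x lhsmtool_posix"
        for ((i = 0; i < 20; i++)); do
            # pgrep exits non-zero once no lhsmtool_posix process is left
            do_node $agent "pgrep -x lhsmtool_posix" || break
            echo "Copytool still running on $agent"
            sleep 1
        done
        do_node $agent "pgrep -x lhsmtool_posix" &&
            error "Copytool failed to stop in 20s ..."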
      

      Maloo reports:
      https://testing.hpdd.intel.com/test_sets/49281842-a9ef-11e4-8c6f-5254006e85c2
      https://testing.hpdd.intel.com/test_sets/4cac5380-aa11-11e4-a5c6-5254006e85c2


          Activity

            yong.fan nasf (Inactive) added a comment - Another failure instance on b2_5: https://testing.hpdd.intel.com/test_sets/465bb228-bbfa-11e4-a79b-5254006e85c2

            bfaccini Bruno Faccini (Inactive) added a comment -

            Concerning the "https://testing.hpdd.intel.com/test_sets/97fd06d8-ac1c-11e4-992b-5254006e85c2" case, the 'Copytool failed to stop in 20s ...' errors/symptoms there look more like an additional consequence of an earlier problem within the same sub-test, or even in preceding sub-tests, because cleanup()/copytool_cleanup() is executed as a trap, set in copytool_cleanup(), upon exit (see the sketch at the end of this comment):

            _ The 1st sub-test reported to have failed with 'Copytool failed to stop in 20s ...' is test_33, but it had already failed with "sanity-hsm test_33: @@@@@@ FAIL: request on 0x200007931:0x2b:0x0 is not SUCCEED on mds1" while waiting for the archive to complete/succeed.

            _ The 2nd sub-test reported to have failed with 'Copytool failed to stop in 20s ...' is test_60, but it had already failed with "sanity-hsm test_60: @@@@@@ FAIL: Timed out waiting for progress update!" while waiting for a progress update during the archive.

            _ The 3rd sub-test reported to have failed with 'Copytool failed to stop in 20s ...' is test_70, but it runs just after test_60 and its first command is a copytool_cleanup, which most likely hit the same problem as the preceding sub-test.

            _ The 4th sub-test reported to have failed with 'Copytool failed to stop in 20s ...' is test_71, but it runs just after test_70 and its first command is a copytool_cleanup, which most likely hit the same problem as the two preceding sub-tests.

            _ The 5th sub-test reported to have failed with 'Copytool failed to stop in 20s ...' is test_103, and according to its specific logs it is the only one that seems to have triggered the same scenario (a huge delay during lock flush/cancel processing) that I described in my previous update. So it may be another candidate for the same change as test_251.

            After more Lustre debug log reading, it seems that the "huge delay during lock flush/cancel processing" that appears to be the root cause of the problem is mainly on the OSS side, after the client has handled the blocking callback and sent its lock cancel back to the OSS. The thread handling it on the OSS can then spend multiple tens of seconds in ldlm_request_cancel()->ldlm_lock_cancel()->ldlm_cancel_callback()-> ... and very probably tgt_blocking_ast()->tgt_sync()->dt_object_sync()->osd_object_sync(). So is this finally some kind of ZFS performance issue?
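
            As an illustration of the trap mechanism mentioned above, here is a minimal standalone bash sketch (this is not the sanity-hsm code, just the general pattern): because the cleanup is registered as an EXIT trap, it still runs after the sub-test has already failed for another reason, so a copytool that is stuck because of that earlier problem surfaces a second time as 'Copytool failed to stop in 20s'.

                #!/bin/bash
                cleanup() {
                    echo "stopping copytool"
                    # the pkill -INT -x lhsmtool_posix call and the 20s pgrep wait loop
                    # would sit here; if the copytool is stuck, this is where the second
                    # "Copytool failed to stop in 20s" failure gets reported
                }
                trap cleanup EXIT
                echo "FAIL: request is not SUCCEED on mds1"    # primary failure of the sub-test
                exit 1                                         # the EXIT trap fires and cleanup still runs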

            yujian Jian Yu added a comment -

            The failure started affecting patch review testing on Lustre b2_5 patches:
            https://testing.hpdd.intel.com/test_sets/66e27944-acde-11e4-872a-5254006e85c2

            yujian Jian Yu added a comment -

            The zfs full group test session was not run on the master branch. Maybe that is why we did not find failure instances for the master branch in Maloo.

            In the following report on the b2_5 branch, many sub-tests failed with this issue:
            https://testing.hpdd.intel.com/test_sets/97fd06d8-ac1c-11e4-992b-5254006e85c2


            bfaccini Bruno Faccini (Inactive) added a comment - http://review.whamcloud.com/13646 implements an early lock cancel solution to speed up copytool death.
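
            The general idea, as a hedged sketch only (this is not the content of change 13646; cancel_lru_locks and do_node are standard test-framework.sh helpers, and $agent stands for the copytool node): cancel the client-side OSC locks, which also forces the dirty-data flush, before the copytool is signalled, so its in-flight request no longer has to wait for a lock flush/cancel on the OSS.

                # drop the client's OSC locks (forcing the dirty-page flush) first,
                # then ask the copytool to die; nothing is left pending on the OSS
                cancel_lru_locks osc
                do_node $agent "pkill -INT -x lhsmtool_posix"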

            gerrit Gerrit Updater added a comment -

            Faccini Bruno (bruno.faccini@intel.com) uploaded a new patch: http://review.whamcloud.com/13646
            Subject: LU-6203 tests: early lock cancel to allow early copytool death
            Project: fs/lustre-release
            Branch: master
            Current Patch Set: 1
            Commit: cc1f9bc9c062e07515fe6c08e358a703cee116ae

            bfaccini Bruno Faccini (Inactive) added a comment - - edited

            Andreas: no, not on master. Based on a Maloo reports search, the latest test_251 sub-test failures on master occurred about a year ago; at that time my patch for LU-5622 was far from being integrated, and they were linked to LU-3852.

            There have been only 8 failures, between 2014-12-30 07:31:46 UTC and 2015-01-30 14:05:49 UTC, after my patch for LU-5622 was integrated, all on the b2_5 (2 occurrences) or b_ieel2_0 (6 occurrences) branches, and only when using zfs targets. But there are also frequent successes for b2_5/b_ieel2_0 branches using zfs!

            I have still not been able to reproduce the problem running with b2_5 build #112, which was reported to trigger it.
            I have also analyzed the logs of the different failures, and it appears that:
            _ the copytool PID still reported as alive is either the one running the archive action or the main one.
            _ each time, the copytool log shows that at the time of the kill the archive action had a slow start: the last log line is "processing file ...", whereas the "archiving ..."/"saving stripe info of ..."/"start copy of ..." log lines are present in successful run logs.
            _ the agent debug log shows that the PID running the archive action has been stuck waiting for an OST_GETATTR request to be replied to, for a variable period of time but each time exceeding the 20s allowed for copytool death.
            _ during that time, the OSS concerned has been trying/waiting to cancel a contending lock held by the client that created the file being archived.

            So could this be the consequence of some ZFS/network configuration or performance issue causing the file's dirty page flush, occurring at lock cancel time, to take more than 20s under some circumstances?
            I think a fix for these failures could be either to raise the timer waiting for copytool death (40s?), or to ensure the dirty data/blocks from file creation have been flushed before starting the archive operation (with "cancel_lru_locks osc"?), as sketched just below.
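
            A sketch of that second option (an assumption about how it could look, not an actual patch; $DIR/$tdir/$tfile, $LFS and error are the usual sanity-hsm/test-framework names):

                dd if=/dev/zero of=$DIR/$tdir/$tfile bs=1M count=1 ||
                    error "dd failed"
                # flush the new file's dirty pages and drop the client's OSC locks
                # before the archive, so the OSS has no lock cancel/sync pending
                # when the copytool is later asked to stop
                cancel_lru_locks osc
                $LFS hsm_archive $DIR/$tdir/$tfile ||
                    error "hsm_archive request failed"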

            Also, I wonder whether the priority of this ticket should be kept as Blocker?


            adilger Andreas Dilger added a comment - Bruno, Yu Jian, is this also happening on master, or only on b2_5?

            bfaccini Bruno Faccini (Inactive) added a comment -

            Yu Jian, thanks for all this research work already!
            I will try to reproduce with a ZFS-only configuration and also have a look at the logs of the different cases you pointed to.

            yujian Jian Yu added a comment -

            The zfs full group test session was not run on the master branch, so for now we do not know whether the failure exists on master or not.

            yujian Jian Yu added a comment -

            This is a regression failure introduced by the following commit in Lustre b2_5 build #112:

            Commit 97fc8c8caf41e9d74cdb1e373f19c907ed8481b2 by Oleg Drokin
            
            LU-5622 tests: check/wait for copytool death
            
            Seems that copytool death/kill may take more time so
            this condition must be handled in sanity-hsm copytool_cleanup()
            function to avoid situations where copytool will then not be
            restarted, but only signaled, in next copytool_setup().
            
            This patch is back-ported from the following one:
            Lustre-commit: 6facf3953b170832200ca9c111398da8feecd281
            Lustre-change: http://review.whamcloud.com/11922
            
            Signed-off-by: Bruno Faccini <bruno.faccini@intel.com>
            Change-Id: Ia817936eb030386dbe539ec8d5297812f4b6fff2
            Reviewed-on: http://review.whamcloud.com/12967
            Tested-by: Jenkins
            Tested-by: Maloo <hpdd-maloo@intel.com>
            Reviewed-by: James Nunez <james.a.nunez@intel.com>
            Reviewed-by: Henri Doreau <henri.doreau@cea.fr>
            Reviewed-by: Oleg Drokin <oleg.drokin@intel.com>
            

            Hi Bruno,
            Could you please take a look at the failure? Thank you.


            People

              bfaccini Bruno Faccini (Inactive)
              yujian Jian Yu