Lustre / LU-7144

Interop 2.7.0<->master- sanity-scrub test_14: (6) Some entry under /lost+found should be repaired

Details

    • Type: Bug
    • Resolution: Fixed
    • Priority: Minor
    • Fix Version/s: Lustre 2.9.0, Lustre 2.8.0
    • Affects Version/s: None
    • Environment: Client: 2.7.0
      Server: lustre-master build# 3166, RHEL 7
    • Severity: 3
    • Rank: 9223372036854775807

    Description

      This issue was created by maloo for Saurabh Tandan <saurabh.tandan@intel.com>

      This issue relates to the following test suite run: https://testing.hpdd.intel.com/test_sets/c84cd21a-514d-11e5-9f68-5254006e85c2.

      The sub-test test_14 failed with the following error:

      (6) Some entry under /lost+found should be repaired
      

      Test log:

      Starting ost1:   /dev/lvm-Role_OSS/P1 /mnt/ost1
      CMD: shadow-18vm3 mkdir -p /mnt/ost1; mount -t lustre   		                   /dev/lvm-Role_OSS/P1 /mnt/ost1
      CMD: shadow-18vm3 PATH=/usr/lib64/lustre/tests:/usr/lib/lustre/tests:/usr/lib64/lustre/tests:/opt/iozone/bin:/opt/iozone/bin:/usr/lib64/lustre/tests/mpi:/usr/lib64/lustre/tests/racer:/usr/lib64/lustre/../lustre-iokit/sgpdd-survey:/usr/lib64/lustre/tests:/usr/lib64/lustre/utils/gss:/usr/lib64/lustre/utils:/usr/lib64/qt-3.3/bin:/usr/lib64/openmpi/bin:/usr/bin:/bin:/usr/sbin:/sbin::/sbin:/bin:/usr/sbin: NAME=autotest_config sh rpc.sh set_default_debug \"vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck\" \"all -lnet -lnd -pinger\" 4 
      CMD: shadow-18vm3 e2label /dev/lvm-Role_OSS/P1 2>/dev/null
      Started lustre-OST0000
      Starting client: shadow-18vm5.shadow.whamcloud.com:  -o user_xattr,flock shadow-18vm12@tcp:/lustre /mnt/lustre
      CMD: shadow-18vm5.shadow.whamcloud.com mkdir -p /mnt/lustre
      CMD: shadow-18vm5.shadow.whamcloud.com mount -t lustre -o user_xattr,flock shadow-18vm12@tcp:/lustre /mnt/lustre
      CMD: shadow-18vm3 /usr/sbin/lctl get_param -n osd-ldiskfs.lustre-OST0000.oi_scrub
      /usr/lib64/lustre/tests/sanity-scrub.sh: line 1076: [: -gt: unary operator expected
       sanity-scrub test_14: @@@@@@ FAIL: (6) Some entry under /lost+found should be repaired 
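
      The shell error just above the FAIL line ("[: -gt: unary operator expected" at sanity-scrub.sh line 1076) is the usual symptom of a numeric test on a variable that turned out to be empty: the 2.7.0 client-side script apparently could not parse the expected repair counter out of the master server's oi_scrub output, so the test collapsed to "[ -gt 0 ]". A minimal sketch of the failure mode and a defensive variant, in plain bash (the variable name is illustrative, not taken from sanity-scrub.sh):

        #!/bin/bash
        # Failure mode: the counter parsed from the oi_scrub output is empty.
        repaired=""                      # e.g. the grep/awk over the server output matched nothing
        if [ $repaired -gt 0 ]; then     # expands to: [ -gt 0 ]  -> "[: -gt: unary operator expected"
            echo "entries under /lost+found were repaired"
        fi

        # Quoting the variable and defaulting it to 0 avoids the shell error,
        # although the underlying parsing mismatch would still need fixing:
        if [ "${repaired:-0}" -gt 0 ]; then
            echo "entries under /lost+found were repaired"
        else
            echo "no repaired entries reported (or the counter could not be parsed)"
        fi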
      

      Console:

      03:53:27:Lustre: DEBUG MARKER: == sanity-scrub test 14: OI scrub can repair objects under lost+found == 03:53:14 (1441079594)
      03:53:27:Lustre: DEBUG MARKER: grep -c /mnt/lustre' ' /proc/mounts
      03:53:27:Lustre: DEBUG MARKER: lsof -t /mnt/lustre
      03:53:27:Lustre: DEBUG MARKER: umount /mnt/lustre 2>&1
      03:53:27:Lustre: DEBUG MARKER: mkdir -p /mnt/lustre
      03:53:27:Lustre: DEBUG MARKER: mount -t lustre -o user_xattr,flock shadow-18vm12@tcp:/lustre /mnt/lustre
      03:53:27:LustreError: 11-0: lustre-OST0000-osc-ffff88007952b400: operation ost_connect to node 10.1.4.213@tcp failed: rc = -16
      03:53:27:Lustre: DEBUG MARKER: /usr/sbin/lctl mark  sanity-scrub test_14: @@@@@@ FAIL: \(6\) Some entry under \/lost+found should be repaired 
      03:53:27:Lustre: DEBUG MARKER: sanity-scrub test_14: @@@@@@ FAIL: (6) Some entry under /lost+found should be repaired
      

      Attachments

        Issue Links

          Activity

            [LU-7144] Interop 2.7.0<->master- sanity-scrub test_14: (6) Some entry under /lost+found should be repaired

            Another instance found for interop tag 2.7.66 - EL6.7 Server/2.7.1 Client, build# 3316
            https://testing.hpdd.intel.com/test_sets/535a0f2e-cc98-11e5-b80c-5254006e85c2
            Another instance found for interop tag 2.7.66 - EL6.7 Server/2.5.5 Client, build# 3316
            https://testing.hpdd.intel.com/test_sets/ad6dd9b2-cc9f-11e5-963e-5254006e85c2
            Another instance found for interop tag 2.7.66 - EL7 Server/2.5.5 Client, build# 3316
            https://testing.hpdd.intel.com/test_sets/781e3562-cc46-11e5-901d-5254006e85c2

            The reason is that the test scripts run on the client. Although the related patch (http://review.whamcloud.com/17520/) has landed on master (b2_8), the tested clients are b2_7- or b2_5-based and the corresponding patches have NOT landed on those branches yet, so we still hit this failure.

            yong.fan nasf (Inactive) added a comment
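
            As a side note, the usual way test scripts cope with this kind of skew is to gate version-dependent checks on the Lustre version involved and skip them when the combination does not support the check. The snippet below only sketches that pattern in plain bash; the helper and the version values are illustrative and are not taken from the patches referenced in this ticket.

                # Illustrative version-gating sketch (not the actual fix).
                version_code() {
                    # Convert "major.minor.patch" into one comparable integer, e.g. 2.7.0 -> 0x020700.
                    local major minor patch
                    IFS='.' read -r major minor patch <<< "$1"
                    echo $(( (major << 16) | (minor << 8) | ${patch:-0} ))
                }

                client_version=$(version_code "2.7.0")   # hypothetical: version of the node running the script
                need_version=$(version_code "2.8.0")     # hypothetical: first version carrying the updated check

                if [ "$client_version" -lt "$need_version" ]; then
                    echo "SKIP: client-side test script predates the updated lost+found check"
                    exit 0
                fi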

            Andreas Dilger (andreas.dilger@intel.com) uploaded a new patch: http://review.whamcloud.com/18399
            Subject: LU-7144 tests: print client/server versions for tests
            Project: fs/lustre-release
            Branch: master
            Current Patch Set: 1
            Commit: 5280fe64f9bcdf1587e84883a693c24ba240aefe

            gerrit Gerrit Updater added a comment
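
            For triage, it helps to have the client and server Lustre versions in the test output, which is what the patch above is about. A quick manual way to collect the same information (the patch itself may gather it differently) is to query the build version on each node involved, for example:

                # On the node running the test script:
                lctl lustre_build_version

                # On the OSS from this run (hostname taken from the log above):
                ssh shadow-18vm3 "lctl lustre_build_version"

            With both versions visible it is immediately obvious when a b2_7/b2_5 client-side script is being run against a master server, which is the combination behind this failure.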

            Another instance found for interop tag 2.7.66 - EL6.7 Server/2.7.1 Client, build# 3316
            https://testing.hpdd.intel.com/test_sets/535a0f2e-cc98-11e5-b80c-5254006e85c2

            Another instance found for interop tag 2.7.66 - EL6.7 Server/2.5.5 Client, build# 3316
            https://testing.hpdd.intel.com/test_sets/ad6dd9b2-cc9f-11e5-963e-5254006e85c2

            Another instance found for interop tag 2.7.66 - EL7 Server/2.5.5 Client, build# 3316
            https://testing.hpdd.intel.com/test_sets/781e3562-cc46-11e5-901d-5254006e85c2

            standan Saurabh Tandan (Inactive) added a comment (edited)

            We have the patch http://review.whamcloud.com/#/c/17521/ for b2_7_fe.

            yong.fan nasf (Inactive) added a comment

            Encountered the same issue for tag 2.7.66, interop config EL6.7 Server/2.7.1 Client
            Server: master, build# 3316; Client: b2_7_fe/34
            Test log:

            Starting ost1:   /dev/lvm-Role_OSS/P1 /mnt/ost1
            CMD: shadow-3vm7 mkdir -p /mnt/ost1; mount -t lustre   		                   /dev/lvm-Role_OSS/P1 /mnt/ost1
            CMD: shadow-3vm7 PATH=/usr/lib64/lustre/tests:/usr/lib/lustre/tests:/usr/lib64/lustre/tests:/opt/iozone/bin:/opt/iozone/bin:/usr/lib64/lustre/tests/mpi:/usr/lib64/lustre/tests/racer:/usr/lib64/lustre/../lustre-iokit/sgpdd-survey:/usr/lib64/lustre/tests:/usr/lib64/lustre/utils/gss:/usr/lib64/lustre/utils:/usr/lib64/qt-3.3/bin:/usr/lib64/openmpi/bin:/usr/bin:/bin:/usr/sbin:/sbin::/sbin:/bin:/usr/sbin: NAME=autotest_config sh rpc.sh set_default_debug \"vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck\" \"all -lnet -lnd -pinger\" 4 
            CMD: shadow-3vm7 e2label /dev/lvm-Role_OSS/P1 2>/dev/null
            Started lustre-OST0000
            Starting client: shadow-3vm5.shadow.whamcloud.com:  -o user_xattr,flock shadow-3vm12@tcp:/lustre /mnt/lustre
            CMD: shadow-3vm5.shadow.whamcloud.com mkdir -p /mnt/lustre
            CMD: shadow-3vm5.shadow.whamcloud.com mount -t lustre -o user_xattr,flock shadow-3vm12@tcp:/lustre /mnt/lustre
            CMD: shadow-3vm7 /usr/sbin/lctl get_param -n osd-ldiskfs.lustre-OST0000.oi_scrub
            /usr/lib64/lustre/tests/sanity-scrub.sh: line 1076: [: -gt: unary operator expected
             sanity-scrub test_14: @@@@@@ FAIL: (6) Some entry under /lost+found should be repaired 
            
            standan Saurabh Tandan (Inactive) added a comment

            Another instance found on:
            Server: master, build# 3276 , RHEL 6.7
            Client: 2.7.1 , b2_7_fe/34
            https://testing.hpdd.intel.com/test_sets/9a5d0066-a592-11e5-a14a-5254006e85c2

            standan Saurabh Tandan (Inactive) added a comment

            Another instance for EL6.7 Server/EL6.7 Client - ZFS
            Master, build# 3270
            Failed to run any tests on sanity-benchmark.
            no label for lustre-ost5/ost5
            https://testing.hpdd.intel.com/test_sets/7f526b00-a275-11e5-bdef-5254006e85c2
            Tests ran on : 2015-12-12

            I do not think that instance is related to this sanity-scrub interoperability issue.

            yong.fan nasf (Inactive) added a comment

            Another instance for EL7.1 Server/EL7.1 Client - ZFS
            Master, build# 3264
            https://testing.hpdd.intel.com/test_sets/2cac0c80-a135-11e5-83b8-5254006e85c2

            standan Saurabh Tandan (Inactive) added a comment

            Another instance for EL6.7 Server/EL6.7 Client - ZFS
            Master, build# 3270
            Failed to run any tests on sanity-benchmark.

            no label for lustre-ost5/ost5
            

            https://testing.hpdd.intel.com/test_sets/7f526b00-a275-11e5-bdef-5254006e85c2
            Tests ran on : 2015-12-12

            standan Saurabh Tandan (Inactive) added a comment

            Another instance for the following interop config, but the tests ran before the patch landed.
            Server: Master, Build# 3266, Tag 2.7.64
            Client: 2.5.5, b2_5_fe/62
            https://testing.hpdd.intel.com/test_sets/b61e3db8-9fcc-11e5-a33d-5254006e85c2

            standan Saurabh Tandan (Inactive) added a comment

            People

              Assignee: yong.fan nasf (Inactive)
              Reporter: maloo Maloo
              Votes: 0
              Watchers: 10
