Lustre / LU-3214

Interop 2.3.0<->2.4 failure on test suite replay-dual test_14b: FAIL: after 2610580 > before 2610576

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Minor
    • Resolution: Duplicate
    • Affects Version/s: None
    • Fix Version/s: None
    • Labels:
      None
    • Environment:
      client: 2.3.0
      server: tag-2.3.64
    • Severity:
      3
    • Rank (Obsolete):
      7846

      Description

      This issue was created by maloo for sarah <sarah@whamcloud.com>

      This issue relates to the following test suite run: http://maloo.whamcloud.com/test_sets/059f2c0c-a7c9-11e2-b3cc-52540035b04c.

      The sub-test test_14b failed with the following error:

      after 2610580 > before 2610576

      == replay-dual test 14b: delete ost orphans if gap occured in objids due to VBR == 15:51:12 (1366239072)
      Waiting for orphan cleanup...
      osp.lustre-OST0000-osc-MDT0000.old_sync_processed
      osp.lustre-OST0001-osc-MDT0000.old_sync_processed
      osp.lustre-OST0002-osc-MDT0000.old_sync_processed
      osp.lustre-OST0003-osc-MDT0000.old_sync_processed
      osp.lustre-OST0004-osc-MDT0000.old_sync_processed
      osp.lustre-OST0005-osc-MDT0000.old_sync_processed
      Waiting for local destroys to complete
      Filesystem           1K-blocks      Used Available Use% Mounted on
      fat-amd-1@tcp:/lustre
                            59057088   2610576  53446512   5% /mnt/lustre
      total: 5 creates in 0.03 seconds: 169.85 creates/second
      total: 5 creates in 0.03 seconds: 190.03 creates/second
      Failing mds1 on fat-amd-1
      Stopping /mnt/mds1 (opts:) on fat-amd-1
      reboot facets: mds1
      Failover mds1 to fat-amd-1
      15:51:34 (1366239094) waiting for fat-amd-1 network 900 secs ...
      15:51:34 (1366239094) network interface is UP
      mount facets: mds1
      Starting mds1: -o user_xattr,acl  /dev/sdb1 /mnt/mds1
      Started lustre-MDT0000
      client-5: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 62 sec
      client-15: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 62 sec
      affected facets: mds1
      fat-amd-1: *.lustre-MDT0000.recovery_status status: COMPLETE
       - unlinked 0 (time 1366239160 ; total 0 ; last 0)
      total: 5 unlinks in 0 seconds: inf unlinks/second
       - unlinked 0 (time 1366239160 ; total 0 ; last 0)
      total: 5 unlinks in 0 seconds: inf unlinks/second
      Starting client: client-5.lab.whamcloud.com: -o flock,user_xattr,acl fat-amd-1@tcp:/lustre /mnt/lustre2
      Waiting for orphan cleanup...
      osp.lustre-OST0000-osc-MDT0000.old_sync_processed
      osp.lustre-OST0001-osc-MDT0000.old_sync_processed
      osp.lustre-OST0002-osc-MDT0000.old_sync_processed
      osp.lustre-OST0003-osc-MDT0000.old_sync_processed
      osp.lustre-OST0004-osc-MDT0000.old_sync_processed
      osp.lustre-OST0005-osc-MDT0000.old_sync_processed
      Waiting for local destroys to complete
      before 2610576, after 2610580
       replay-dual test_14b: @@@@@@ FAIL: after 2610580 > before 2610576 
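The assertion that trips here compares the filesystem's used-block count (from `df`) taken before the MDS failover against the count taken after recovery and orphan cleanup; if usage grew, orphan OST objects were not destroyed. A minimal sketch of that comparison, with the two values taken from the log above (variable names are illustrative, not the test suite's actual code):

```shell
#!/bin/sh
# Used 1K-blocks before the failover and after orphan cleanup,
# as reported by df in the log above.
before=2610576
after=2610580

# Orphan cleanup should free (or at least not grow) used space;
# used blocks increasing after recovery indicates leaked orphans.
if [ "$after" -gt "$before" ]; then
    echo "FAIL: after $after > before $before"
else
    echo "PASS: after $after <= before $before"
fi
```

In the failed run, `after` exceeds `before` by 4 blocks, producing exactly the `FAIL: after 2610580 > before 2610576` message in the log.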
      

               People

               • Assignee: wc-triage WC Triage
               • Reporter: maloo Maloo
               • Votes: 0
               • Watchers: 3
