LU-9601

recovery-mds-scale test_failover_mds: test_failover_mds returned 1


Details

    • Type: Bug
    • Resolution: Unresolved
    • Priority: Major
    • Fix Version/s: None
    • Affects Version/s: Lustre 2.10.0, Lustre 2.10.1, Lustre 2.11.0, Lustre 2.12.0, Lustre 2.10.4, Lustre 2.10.5
    • Component/s: None
    • Labels: trevis, failover
    • Environment:
        clients: SLES12, master branch, v2.9.58, b3591
        servers: EL7, ldiskfs, master branch, v2.9.58, b3591
    • Severity: 3

    Description

      https://testing.hpdd.intel.com/test_sessions/e6b87235-1ff0-4e96-a53f-ca46ffe5ed7e

      From suite_log:

      CMD: trevis-38vm1,trevis-38vm5,trevis-38vm6,trevis-38vm7,trevis-38vm8 PATH=/usr/lib64/lustre/tests:/usr/lib/lustre/tests:/usr/lib64/lustre/tests:/opt/iozone/bin:/opt/iozone/bin:/usr/lib64/lustre/tests/mpi:/usr/lib64/lustre/tests/racer:/usr/lib64/lustre/../lustre-iokit/sgpdd-survey:/usr/lib64/lustre/tests:/usr/lib64/lustre/utils/gss:/usr/lib64/lustre/utils:/usr/lib64/mpi/gcc/openmpi/bin:/sbin:/usr/sbin:/usr/local/sbin:/root/bin:/usr/local/bin:/usr/bin:/bin:/usr/games:/usr/sbin:/sbin::/sbin:/bin:/usr/sbin: NAME=autotest_config sh rpc.sh check_logdir /shared_test/autotest2/2017-05-24/051508-70323187606440 
      trevis-38vm1: trevis-38vm1: executing check_logdir /shared_test/autotest2/2017-05-24/051508-70323187606440
      trevis-38vm7: trevis-38vm7.trevis.hpdd.intel.com: executing check_logdir /shared_test/autotest2/2017-05-24/051508-70323187606440
      trevis-38vm8: trevis-38vm8.trevis.hpdd.intel.com: executing check_logdir /shared_test/autotest2/2017-05-24/051508-70323187606440
      pdsh@trevis-38vm1: trevis-38vm6: mcmd: connect failed: No route to host
      pdsh@trevis-38vm1: trevis-38vm5: mcmd: connect failed: No route to host
      CMD: trevis-38vm1 uname -n
      CMD: trevis-38vm5 uname -n
      pdsh@trevis-38vm1: trevis-38vm5: mcmd: connect failed: No route to host
      
       SKIP: recovery-double-scale  SHARED_DIRECTORY should be specified with a shared directory which is accessable on all of the nodes
      Stopping clients: trevis-38vm1,trevis-38vm5,trevis-38vm6 /mnt/lustre (opts:)
      CMD: trevis-38vm1,trevis-38vm5,trevis-38vm6 running=\$(grep -c /mnt/lustre' ' /proc/mounts);
      

      and

      pdsh@trevis-38vm1: trevis-38vm5: mcmd: connect failed: No route to host
      pdsh@trevis-38vm1: trevis-38vm6: mcmd: connect failed: No route to host
       auster : @@@@@@ FAIL: clients environments are insane! 
        Trace dump:
        = /usr/lib64/lustre/tests/test-framework.sh:4952:error()
        = /usr/lib64/lustre/tests/test-framework.sh:1736:sanity_mount_check_clients()
        = /usr/lib64/lustre/tests/test-framework.sh:1741:sanity_mount_check()
        = /usr/lib64/lustre/tests/test-framework.sh:3796:setupall()
        = auster:114:reset_lustre()
        = auster:217:run_suite()
        = auster:234:run_suite_logged()
        = auster:298:run_suites()
        = auster:334:main()
      

    People

      Assignee: Zhenyu Xu (bobijam)
      Reporter: James Casper (jcasper)