  Lustre / LU-7657

sanity-krb5 test_151 fails with 'mount with default flavor should have failed'


Details

    • Type: Bug
    • Resolution: Cannot Reproduce
    • Priority: Minor
    • Fix Version/s: None
    • Affects Version/s: Lustre 2.8.0, Lustre 2.9.0
    • Component/s: None
    • Environment: eagle cluster with Lustre tag 2.7.65
    • Severity: 3

    Description

      When the sanity-krb5 test suite is run on Lustre systems with a separate MGS and MDS on the same node, or with a combined MGS/MDS, test_151 fails with:

      'mount with default flavor should have failed' 
      

      This does not happen when the MGS and MDS run on separate nodes. From the test log, we can see that the complaint is that the MDS device is already mounted:

      Starting mgs:   /dev/vda3 /lustre/scratch/mdt0
      pdsh@eagle-51vm6: eagle-51vm1: ssh exited with exit code 1
      pdsh@eagle-51vm6: eagle-51vm1: ssh exited with exit code 1
      Started scratch-MDT0000
      Starting mds1:   /dev/vda3 /lustre/scratch/mdt0
      eagle-51vm1: mount.lustre: according to /etc/mtab /dev/vda3 is already mounted on /lustre/scratch/mdt0
      pdsh@eagle-51vm6: eagle-51vm1: ssh exited with exit code 17
      Start of /dev/vda3 on mds1 failed 17
      eagle-51vm1: mgc.*.mgs_server_uuid in FULL state after 0 sec
       sanity-krb5 test_151: @@@@@@ FAIL: mount with default flavor should have failed
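
      For reference, the failure can be reproduced by running only this test from lustre/tests (a sketch, assuming the usual test-framework configuration, e.g. cfg/local.sh, is already set up for the cluster):

      # run only test 151 of the krb5 suite against the configured cluster
      cd lustre/tests
      ONLY=151 bash sanity-krb5.sh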
      

      It seems like the test code expects to start the MGS and MDS separately. From test_151:

        stopall

        # start gss daemon on mgs node
        combined_mgs_mds || start_gss_daemons $mgs_HOST "$LSVCGSSD -v"

        # start mgs
        start mgs $(mgsdevname 1) $MDS_MOUNT_OPTS

        # mount mgs with default flavor, in current framework it means mgs+mdt1.
        # the connection of mgc of mdt1 to mgs is expected fail.
        DEVNAME=$(mdsdevname 1)
        start mds1 $DEVNAME $MDS_MOUNT_OPTS
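
      With a combined MGS/MDS, or an MGS and MDS sharing the same device on one node, both start calls above resolve to the same backing device, so the second mount is refused by mount.lustre instead of failing at flavor negotiation as the test expects. A minimal illustration, assuming the test-framework helpers and the device layout shown in the log above:

        # illustration only: on the failing configurations both helpers return
        # the same device (/dev/vda3 in the log above), so "start mds1" hits
        # "already mounted" rather than the expected connection failure
        echo "mgs device:  $(mgsdevname 1)"     # -> /dev/vda3
        echo "mds1 device: $(mdsdevname 1)"     # -> /dev/vda3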

      Should this test be skipped for a combined MGS/MDS setup?
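
      If so, one possible guard, sketched with the existing combined_mgs_mds and skip helpers from test-framework.sh (a suggestion only, not a tested patch), would be to bail out at the top of the test:

        test_151() {
                # sketch: skip on combined MGS/MDS setups, where the MGS and
                # MDT cannot be started and mounted independently
                combined_mgs_mds &&
                        skip "test_151 needs a separate MGS and MDS" && return 0

                # ... existing test_151 body unchanged ...
        }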

      Logs are at:
      https://testing.hpdd.intel.com/test_sets/a5ea3b0c-b987-11e5-80e0-5254006e85c2
      https://testing.hpdd.intel.com/test_sets/a92042f8-b987-11e5-b318-5254006e85c2
      https://testing.hpdd.intel.com/test_sets/af1bee50-b987-11e5-825c-5254006e85c2
      https://testing.hpdd.intel.com/test_sets/b06dfcbc-b987-11e5-b318-5254006e85c2

            People

              Assignee: WC Triage
              Reporter: James Nunez (Inactive)
              Votes: 0
              Watchers: 4
