[LU-7000] recovery-mds-scale test_failover_mds: Restart of mds1 failed Created: 13/Aug/15  Updated: 05/Oct/15  Resolved: 05/Oct/15

Status: Resolved
Project: Lustre
Component/s: None
Affects Version/s: Lustre 2.8.0
Fix Version/s: None

Type: Bug Priority: Minor
Reporter: Maloo Assignee: WC Triage
Resolution: Duplicate Votes: 0
Labels: zfs
Environment:

client and server: lustre-master build#3120 RHEL7.1 zfs


Issue Links:
Duplicate
Severity: 3
Rank (Obsolete): 9223372036854775807

 Description   

This issue was created by maloo for sarah_lw <wei3.liu@intel.com>

This issue relates to the following test suite run: https://testing.hpdd.intel.com/test_sets/fa8c01ae-37cd-11e5-a40a-5254006e85c2.

The sub-test test_failover_mds failed with the following error:

Restart of mds1 failed!

Test log:

Failing mds1 on onyx-31vm3
+ pm -h powerman --off onyx-31vm3
Command completed successfully
reboot facets: mds1
+ pm -h powerman --on onyx-31vm3
Command completed successfully
Failover mds1 to onyx-31vm7
17:41:19 (1438130479) waiting for onyx-31vm7 network 900 secs ...
17:41:19 (1438130479) network interface is UP
CMD: onyx-31vm7 hostname
mount facets: mds1
CMD: onyx-31vm7 zpool list -H lustre-mdt1 >/dev/null 2>&1 ||
			zpool import -f -o cachefile=none -d /dev/lvm-Role_MDS lustre-mdt1
onyx-31vm7: cannot import 'lustre-mdt1': no such pool available
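The log shows the mount step on the failover node running a check-then-import: if `zpool list` does not find the pool already imported, it attempts a forced `zpool import` from the shared device directory, and that import fails because the pool is not yet visible on onyx-31vm7. As a diagnostic aid, here is a minimal sketch of a retry wrapper around the same check-then-import command from the log; this is not the test framework's actual code, and the retry count and sleep interval are illustrative assumptions:

#!/bin/bash
# Sketch: retry the pool import on the failover node until the shared
# devices become visible. Pool name and device directory are taken
# from the log above; 10 tries at 5 s apart are assumptions.
POOL=lustre-mdt1
DEVDIR=/dev/lvm-Role_MDS

for i in $(seq 1 10); do
    # Already imported? Nothing to do.
    zpool list -H "$POOL" >/dev/null 2>&1 && exit 0
    # -f clears a stale hostid left by the failed node;
    # cachefile=none keeps the import out of /etc/zfs/zpool.cache.
    zpool import -f -o cachefile=none -d "$DEVDIR" "$POOL" && exit 0
    echo "import of $POOL failed, retrying ($i/10)..."
    sleep 5
done
echo "cannot import $POOL from $DEVDIR" >&2
exit 1

If the import still fails after the retries, the pool labels were likely never visible on the failover node at all (for example, the shared LVM devices were not presented to onyx-31vm7), which matches the "no such pool available" error above.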


 Comments   
Comment by Jian Yu [ 05/Oct/15 ]

This is a duplicate of TEI-3975.
