[LU-10312] recovery-mds-scale test_failover_ost: Restart of ost1 failed! Created: 01/Dec/17  Updated: 28/Mar/18

Status: Open
Project: Lustre
Component/s: None
Affects Version/s: Lustre 2.10.2
Fix Version/s: None

Type: Bug Priority: Minor
Reporter: James Casper Assignee: WC Triage
Resolution: Unresolved Votes: 0
Labels: None
Environment:

onyx, failover
servers: el7.4, ldiskfs, branch b2_10, v2.10.2.RC1, b50
clients: el7.4, branch b2_10, v2.10.2.RC1, b50


Severity: 3
Rank (Obsolete): 9223372036854775807

Description

session: https://testing.hpdd.intel.com/test_sessions/75ac9c2c-e42e-41a1-b4f3-ece3cef93272
test set: https://testing.hpdd.intel.com/test_sets/6df5cb5a-d67a-11e7-8027-52540065bddc

This may be the same issue as LU-9707, but that failure occurred on mds1.

From test_log:

onyx-42vm5: e2label: No such file or directory while trying to open /dev/lvm-Role_OSS/P1
onyx-42vm5: Couldn't find valid filesystem superblock.
Starting ost1: -o loop /dev/lvm-Role_OSS/P1 /mnt/lustre-ost1
CMD: onyx-42vm5 mkdir -p /mnt/lustre-ost1; mount -t lustre -o loop /dev/lvm-Role_OSS/P1 /mnt/lustre-ost1
onyx-42vm5: mount: /dev/lvm-Role_OSS/P1: failed to setup loop device: No such file or directory
Start of /dev/lvm-Role_OSS/P1 on ost1 failed 32
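
The log suggests the backing LVM device was not present (or not yet active) on the failover OSS node when the loop mount was attempted. A minimal diagnostic sketch for reproducing/triaging this by hand is below; the node name and device path are taken from the test log above, and the exact commands are an assumption, not a verified procedure from the test framework:

    # Hypothetical manual check on the failover OSS node (onyx-42vm5)
    ssh onyx-42vm5 '
      # Is the logical volume known and active? Activate the VG if not.
      lvs lvm-Role_OSS/P1 || vgchange -ay lvm-Role_OSS
      # Confirm the device node actually exists before the loop mount.
      ls -l /dev/lvm-Role_OSS/P1
      # If the device is present, check the ldiskfs label/superblock,
      # which is what e2label failed to read in the test log.
      e2label /dev/lvm-Role_OSS/P1
    '

If the device node only appears after activating the volume group, the failure is likely a race between OST failover and LVM activation on the secondary node rather than filesystem corruption.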
