[LU-9267] lustre-initialization-1 failed: Test system failed to start single suite, so abandoning all hope and giving up Created: 28/Mar/17  Updated: 23/Mar/18

Status: Open
Project: Lustre
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Minor
Reporter: Maloo Assignee: WC Triage
Resolution: Unresolved Votes: 0
Labels: None

Issue Links:
Related
is related to LU-10828 OBD devices and exports not cleaned u... Closed
Severity: 3
Rank (Obsolete): 9223372036854775807

 Description   

This issue was created by maloo for nasf <fan.yong@intel.com>

Please provide additional information about the failure here.

This issue relates to the following test suite run: https://testing.hpdd.intel.com/test_sets/5fb16200-139f-11e7-8920-5254006e85c2.

The test logs show that:

02:13:05:onyx-31vm3: cannot import 'lustre-mdt2': more than one matching pool
02:13:05:onyx-31vm3: import by numeric ID instead
02:13:41:Tests running for 75 minutes, 0 restarts, current suite:test lustre-initialization-1:lustre-initialization_1
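The "more than one matching pool" error means two ZFS pools named lustre-mdt2 are visible to the node (e.g. a stale pool left over from an earlier run), and ZFS refuses to pick one by name. As a rough sketch of the workaround the error message itself suggests (the GUID below is a placeholder, not taken from these logs):

```shell
# List importable pools; when two pools share the name "lustre-mdt2",
# each entry also shows a unique numeric pool GUID.
zpool import

# Import the intended pool by its numeric GUID instead of by name
# (GUID here is hypothetical for illustration).
zpool import 1234567890123456789 lustre-mdt2
```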


 Comments   
Comment by Andreas Dilger [ 28/Mar/17 ]

Fan Yong, are you sure this isn't caused by the snapshot patches? Having multiple pools with the same name definitely seems like a snapshot problem.

I have the same problem at home (with ldiskfs) when I make a dd copy of the MDT to a backup device and then try to mount the disk by label without first changing the label on the MDT backup.
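For the ldiskfs case described above, relabeling the dd backup removes the ambiguity, since mount-by-label then matches only one device. A minimal sketch, assuming an ext4-backed MDT (device paths and the label are hypothetical examples):

```shell
# After dd-copying the MDT, give the backup a distinct filesystem label
# so that mounting by label no longer matches two devices.
e2label /dev/sdb1 lustre-mdt0-bak

# Verify the label was changed
e2label /dev/sdb1
```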

Comment by nasf (Inactive) [ 21/Jun/17 ]

A Lustre snapshot has to be triggered manually. Nobody triggers a snapshot in lustre-initialization-1, and node-provisioning-1 runs just before lustre-initialization-1, so the system should be clean when lustre-initialization-1 starts. So I do not think it is related to the patch.

Comment by Minh Diep [ 23/Mar/18 ]

Is this still an issue?

Generated at Sat Feb 10 02:24:40 UTC 2024 using Jira 9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c.