[LU-2003] conf-sanity 21d @@@@@@ FAIL: import is not in FULL state Created: 21/Sep/12 Updated: 06/Jan/15 Resolved: 06/Jan/15 |
|
| Status: | Resolved |
| Project: | Lustre |
| Component/s: | None |
| Affects Version/s: | Lustre 2.3.0, Lustre 2.4.0 |
| Fix Version/s: | None |
| Type: | Bug | Priority: | Minor |
| Reporter: | James A Simmons | Assignee: | Jian Yu |
| Resolution: | Cannot Reproduce | Votes: | 0 |
| Labels: | tests | ||
| Environment: |
MDS and MGS are the same server, but two different disks are used for the MDT and MGT. |
||
| Severity: | 3 |
| Rank (Obsolete): | 10090 |
| Description |
|
Test conf-sanity 21d failed with the error: conf-sanity test_21d: @@@@@@ FAIL: import is not in FULL state |
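The failure comes from the test framework's import-state check: the MDT's OSC import to the OST must reach FULL within the allowed window. Below is a minimal sketch of that poll, assuming the parameter name and 140-second timeout shown in the logs further down; it is an illustration, not the actual test-framework.sh code. |
{code}
#!/bin/bash
# Run on the MDS. Parameter name taken from the failure message;
# the 140s timeout matches the value reported in the logs.
PARAM=osc.lustre-OST0001-osc-MDT0000.ost_server_uuid
MAX=140

for ((i = 0; i < MAX; i++)); do
    # Output looks like "lustre-OST0001_UUID FULL"; field 2 is the state.
    state=$(lctl get_param -n $PARAM 2>/dev/null | awk '{print $2}')
    [ "$state" = "FULL" ] && echo "import FULL after ${i}s" && exit 0
    sleep 1
done
echo "import is not in FULL state (have ${state:-unknown})" >&2
exit 1
{code}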
| Comments |
| Comment by Peter Jones [ 21/Sep/12 ] |
|
Yujian, could you please look into this one? Thanks, Peter |
| Comment by James Nunez (Inactive) [ 10/Sep/13 ] |
|
From dmesg on the OSS:
[ 1207.629766] Lustre: DEBUG MARKER: == conf-sanity test 21d: start mgs then ost and then mds == 10:53:04 (1348239184)
[ 1467.507155] Lustre: DEBUG MARKER: rpc : @@@@@@ FAIL: can't put import for osc.lustre-OST0001-osc-MDT0000.ost_server_uuid into FULL state after 140 sec, have DISCONN
[ 1469.733721] Lustre: DEBUG MARKER: conf-sanity test_21d: @@@@@@ FAIL: import is not in FULL state |
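When the import is stuck in DISCONN like this, the full import record on the MDS can show why. A hedged sketch of manual diagnosis, with the device name taken from the log line above: |
{code}
# Dump the import record for the stuck OSC (state, connect flags,
# connection attempt counters, etc.)
lctl get_param osc.lustre-OST0001-osc-MDT0000.import

# Ask ptlrpc to retry the connection immediately
lctl --device lustre-OST0001-osc-MDT0000 recover
{code}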
| Comment by James Nunez (Inactive) [ 18/Dec/13 ] |
|
After several runs of the full conf-sanity suite and of conf-sanity test 21d alone, I'm able to reproduce this error. I have a separate but co-located MGS and MDS, as in the original setup. Results for this failure are at https://maloo.whamcloud.com/test_sessions/cc509aae-6806-11e3-a01f-52540035b04c |
| Comment by Jian Yu [ 16/May/14 ] |
|
I'm sorry for the late reply. I've manually run conf-sanity test 21d alone, with separate MGS and MDT, on the latest master branch more than 10 times, but did not reproduce the failure. I'll do more experiments. |
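For reference, one way to repeat the single subtest, assuming the standard in-tree test scripts and an already-configured test cluster (the path varies by install): |
{code}
cd lustre/tests   # or /usr/lib64/lustre/tests on an installed system
for i in $(seq 1 10); do
    ONLY=21d sh conf-sanity.sh || break
done
{code}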
| Comment by Jian Yu [ 19/May/14 ] |
|
I still cannot reproduce the failure on the master branch (build #2052) after running conf-sanity from test 0 to test 22 more than 10 times. I also failed to reproduce the failure on Lustre 2.4.3. |
| Comment by James Nunez (Inactive) [ 20/May/14 ] |
|
James (Simmons), have you seen this error recently in your testing? Over the past two days I've tried to hit this error on the latest master, running the full conf-sanity suite and conf-sanity test 21d alone, and can't trigger it. As you can see, Jian Yu can't trigger the problem either. If you haven't seen this error, please let me know whether you are comfortable closing this ticket. Thanks, |
| Comment by James A Simmons [ 20/May/14 ] |
|
Actually, I haven't gotten around to in-depth testing of 2.6 in the last few months because of the 2.5 testing I have been doing. Please keep this open, since I know it was failing consistently for me on 2.4. |
| Comment by John Fuchs-Chesney (Inactive) [ 19/Dec/14 ] |
|
Hello James, would you like us to keep this ticket open? Thanks, |
| Comment by James A Simmons [ 19/Dec/14 ] |
|
Let me test this for b2_5 and master first. |
| Comment by Jian Yu [ 06/Jan/15 ] |
|
Hi James, I just ran conf-sanity test 21 ten times on the latest Lustre b2_5 and master builds separately. All of the test runs passed. Here are the reports: MGS and MDS are the same node, but the MGT and MDT use different disk partitions. |
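For the record, a sketch of that topology: one node running both MGS and MDS, with the MGT and MDT on separate partitions. The device names and mount points below are placeholders, not taken from the actual test configuration: |
{code}
# Format the MGT and the MDT on different partitions of the same node
mkfs.lustre --mgs /dev/sdb1
mkfs.lustre --fsname=lustre --mdt --index=0 \
            --mgsnode=$(hostname)@tcp /dev/sdb2

# Mount both targets on the combined MGS/MDS node
mount -t lustre /dev/sdb1 /mnt/mgt
mount -t lustre /dev/sdb2 /mnt/mdt
{code}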
| Comment by James A Simmons [ 06/Jan/15 ] |
|
Just confirmed that this now passes on master. I now need to test b2_5. |
| Comment by James A Simmons [ 06/Jan/15 ] |
|
Finished testing 2.5 and the problem is gone. This ticket can be closed. |
| Comment by Peter Jones [ 06/Jan/15 ] |
|
ok - thanks James! |