Details
- Type: Bug
- Resolution: Fixed
- Priority: Minor
- Affects Version: Lustre 2.10.0
- Environment: A Lustre configuration with the MDS and the MGS on separate nodes
Description
conf-sanity test_82b fails when Lustre is configured with the MGS and MDS on separate nodes, with:

    conf-sanity test_82b: @@@@@@ FAIL: /usr/sbin/lctl pool_add scratch.test_82b scratch-OST[32b4,06a8,7070] failed
From the console log, it looks like the OST pools are never created:

    eagle-48vm1: Warning, pool scratch.test_82b not found
    eagle-48vm1: Pool scratch.test_82b not found
    eagle-48vm1: Pool scratch.test_82b not found
    eagle-48vm1: Pool scratch.test_82b not found
    eagle-48vm1: pool_add: No such file or directory
    pdsh@eagle-48vm6: eagle-48vm1: ssh exited with exit code 2
    conf-sanity test_82b: @@@@@@ FAIL: /usr/sbin/lctl pool_add scratch.test_82b scratch-OST[32b4,06a8,7070] failed
test 82b checks that you can create OST pools with the --ost-list option. For OST pools to work with the MDS and the MGS on separate nodes, there must be a client running on the MGS node. The problem is that all clients and servers are stopped at the beginning of the test and then only a single client is started. That single client may or may not be the one on the MGS.
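For reference, the pool operations the test performs look like the following when run by hand. This is a sketch, not the test code; the filesystem name scratch and the OST indices are taken from the failure message above, and pool_new/pool_add/pool_list are the standard lctl pool subcommands:

    # Pool creation goes through the MGS, so run this where the MGS
    # (and a client mount) is available:
    lctl pool_new scratch.test_82b

    # Add the selected OSTs to the pool by target name:
    lctl pool_add scratch.test_82b scratch-OST[32b4,06a8,7070]

    # Confirm the pool and its members are visible:
    lctl pool_list scratch.test_82b

In the failing runs, pool_add reports "No such file or directory" because the pool created on the MGS is never reflected where the command is checked, which matches the missing-client-on-MGS explanation above.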
From conf-sanity test 82b:
    # Setup Lustre filesystem.
    start_mgsmds || error "start_mgsmds failed"
    for i in $(seq $OSTCOUNT); do
        start ost$i $(ostdevname $i) $OST_MOUNT_OPTS ||
            error "start ost$i failed"
    done

    mount_client $MOUNT || error "mount client $MOUNT failed"
One solution to this problem is to restart all clients in this test, so that the client on the MGS node is brought back up before the pool is created.
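A minimal sketch of such a fix, using existing test-framework.sh helpers (mount_client, combined_mgs_mds, and zconf_mount are real helpers; the exact placement and the use of $mgs_HOST here are assumptions, not the landed patch):

    # Sketch only: after starting the servers, make sure the MGS node
    # also has a client mount, since OST pool operations need one there.
    mount_client $MOUNT || error "mount client $MOUNT failed"
    if ! combined_mgs_mds; then
        # Assumption: $mgs_HOST names the separate MGS node.
        zconf_mount $mgs_HOST $MOUNT ||
            error "mount client on MGS failed"
    fi

Restarting all clients (rather than mounting only on the MGS) would achieve the same effect at the cost of remounting nodes that do not strictly need it.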
Attachments
Issue Links
- is related to: LU-8688 All Lustre test suites should run/PASS with separate MDS and MGS (Open)