[LU-8136] sanity-hsm test_9 fails with 'request on 0x200000405:0x4:0x0 is not SUCCEED on mds1' Created: 12/May/16 Updated: 08/Nov/18 Resolved: 08/Nov/18 |
|
| Status: | Resolved |
| Project: | Lustre |
| Component/s: | None |
| Affects Version/s: | Lustre 2.9.0 |
| Fix Version/s: | Lustre 2.9.0 |
| Type: | Bug | Priority: | Minor |
| Reporter: | James Nunez (Inactive) | Assignee: | WC Triage |
| Resolution: | Fixed | Votes: | 0 |
| Labels: | None | ||
| Environment: | autotest review-dne |
| Severity: | 3 |
| Rank (Obsolete): | 9223372036854775807 |
| Description |
|
sanity-hsm test 9 fails with 'request on 0x200000405:0x4:0x0 is not SUCCEED on mds1'. The last thing seen in the test log before the failure is:

CMD: trevis-5vm4 /usr/sbin/lctl get_param -n mdt.lustre-MDT0000.hsm.actions | awk '/'0x200000405:0x4:0x0'.*action='ARCHIVE'/ {print \$13}' | cut -f2 -d=
CMD: trevis-5vm4 /usr/sbin/lctl get_param -n mdt.lustre-MDT0000.hsm.actions | awk '/'0x200000405:0x4:0x0'.*action='ARCHIVE'/ {print \$13}' | cut -f2 -d=
CMD: trevis-5vm4 /usr/sbin/lctl get_param -n mdt.lustre-MDT0000.hsm.actions | awk '/'0x200000405:0x4:0x0'.*action='ARCHIVE'/ {print \$13}' | cut -f2 -d=
CMD: trevis-5vm4 /usr/sbin/lctl get_param -n mdt.lustre-MDT0000.hsm.actions | awk '/'0x200000405:0x4:0x0'.*action='ARCHIVE'/ {print \$13}' | cut -f2 -d=
Update not seen after 200s: wanted 'SUCCEED' got 'STARTED'
sanity-hsm test_9: @@@@@@ FAIL: request on 0x200000405:0x4:0x0 is not SUCCEED on mds1
Trace dump:
= /usr/lib64/lustre/tests/test-framework.sh:4769:error()
= /usr/lib64/lustre/tests/sanity-hsm.sh:766:wait_request_state()
= /usr/lib64/lustre/tests/sanity-hsm.sh:1010:test_9()
= /usr/lib64/lustre/tests/test-framework.sh:5033:run_one()
= /usr/lib64/lustre/tests/test-framework.sh:5072:run_one_logged()
= /usr/lib64/lustre/tests/test-framework.sh:4919:run_test()
= /usr/lib64/lustre/tests/sanity-hsm.sh:1016:main()
Dumping lctl log to /logdir/test_logs/2016-05-11/lustre-reviews-el7-x86_64--review-dne-part-2--1_7_1__38816__-70227460739120-004004/sanity-hsm.test_9.*.1462941864.log
Is this the same or similar issue as in ... ?

So far, this test is only failing in review-dne-* test groups. Test 9 started failing with this failure in the past two days; 7 failures. Here are the failures: |
| Comments |
| Comment by Bruno Faccini (Inactive) [ 13/May/16 ] |
|
James,

CT registers with 1st MDT:
00000100:00100000:0.0:1462941602.244264:0:5563:0:(service.c:2070:ptlrpc_server_handle_request()) Handling RPC pname:cluuid+ref:pid:xid:nid:opc mdt00_001:50b3d422-923b-abd9-9810-4f7c10608c4b+8:17919:x1533991955961472:12345-10.9.4.43@tcp:59
00000100:00100000:0.0:1462941602.244278:0:5563:0:(service.c:2120:ptlrpc_server_handle_request()) Handled RPC pname:cluuid+ref:pid:xid:nid:opc mdt00_001:50b3d422-923b-abd9-9810-4f7c10608c4b+8:17919:x1533991955961472:12345-10.9.4.43@tcp:59 Request procesed in 13us (26us total) trans 0 rc 0/0
00000100:00100000:0.0:1462941602.244280:0:5563:0:(nrs_fifo.c:241:nrs_fifo_req_stop()) NRS stop fifo request from 12345-10.9.4.43@tcp, seq: 308
00000100:00100000:0.0:1462941602.245002:0:6662:0:(events.c:351:request_in_callback()) peer: 12345-10.9.4.42@tcp
00000100:00100000:0.0:1462941602.245007:0:5563:0:(service.c:1922:ptlrpc_server_handle_req_in()) got req x1534001997146880
00000100:00100000:0.0:1462941602.245012:0:5563:0:(nrs_fifo.c:179:nrs_fifo_req_get()) NRS start fifo request from 12345-10.9.4.42@tcp, seq: 309
Client sends hsm_archive request to MDT/CDT:
00000100:00100000:0.0:1462941602.245014:0:5563:0:(service.c:2070:ptlrpc_server_handle_request()) Handling RPC pname:cluuid+ref:pid:xid:nid:opc mdt00_001:bed34d71-9fc0-26f6-02c9-3e3df67d2b69+38:22988:x1534001997146880:12345-10.9.4.42@tcp:58
00000040:00080000:0.0:1462941602.245032:0:5563:0:(llog_cat.c:735:llog_cat_process_cb()) processing log 0x15:1:0 at index 1 of catalog 0x8:10
00000040:00080000:0.0:1462941602.245138:0:5563:0:(llog_osd.c:696:llog_osd_write_rec()) added record [0x1:0x15:0x0]: idx: 4, 136 off8736
00000100:00100000:0.0:1462941602.245149:0:5563:0:(service.c:2120:ptlrpc_server_handle_request()) Handled RPC pname:cluuid+ref:pid:xid:nid:opc mdt00_001:bed34d71-9fc0-26f6-02c9-3e3df67d2b69+38:22988:x1534001997146880:12345-10.9.4.42@tcp:58 Request procesed in 135us (148us total) trans 0 rc 0/0
00000100:00100000:0.0:1462941602.245152:0:5563:0:(nrs_fifo.c:241:nrs_fifo_req_stop()) NRS stop fifo request from 12345-10.9.4.42@tcp, seq: 309
00000040:00080000:0.0:1462941602.245161:0:12867:0:(llog_cat.c:735:llog_cat_process_cb()) processing log 0x15:1:0 at index 1 of catalog 0x8:10
00000040:00100000:0.0:1462941602.245165:0:12867:0:(llog.c:211:llog_cancel_rec()) Canceling 2 in log 0x15:1
00000040:00100000:0.0:1462941602.245172:0:12867:0:(llog.c:211:llog_cancel_rec()) Canceling 3 in log 0x15:1
MDT/CDT sends archive request to CT:
00000100:00100000:0.0:1462941602.245214:0:12867:0:(client.c:1589:ptlrpc_send_new_req()) Sending RPC pname:cluuid:pid:xid:nid:opc hsm_cdtr:lustre-MDT0000_UUID:12867:1533991886170496:10.9.4.43@tcp:107
00000100:00100000:0.0:1462941602.245224:0:12867:0:(client.c:2287:ptlrpc_set_wait()) set ffff8800482b46c0 going to sleep for 11 seconds
00000100:00100000:0.0:1462941602.245250:0:6662:0:(events.c:351:request_in_callback()) peer: 12345-10.9.4.43@tcp
00000100:00100000:0.0:1462941602.245353:0:5563:0:(service.c:1922:ptlrpc_server_handle_req_in()) got req x1533991955961504
00000100:00100000:0.0:1462941602.245361:0:5563:0:(nrs_fifo.c:179:nrs_fifo_req_get()) NRS start fifo request from 12345-10.9.4.43@tcp, seq: 310
CT registers with 2nd MDT:
00000100:00100000:0.0:1462941602.245363:0:5563:0:(service.c:2070:ptlrpc_server_handle_request()) Handling RPC pname:cluuid+ref:pid:xid:nid:opc mdt00_001:50b3d422-923b-abd9-9810-4f7c10608c4b+5:17919:x1533991955961504:12345-10.9.4.43@tcp:59
00000100:00100000:0.0:1462941602.245377:0:5563:0:(service.c:2120:ptlrpc_server_handle_request()) Handled RPC pname:cluuid+ref:pid:xid:nid:opc mdt00_001:50b3d422-923b-abd9-9810-4f7c10608c4b+5:17919:x1533991955961504:12345-10.9.4.43@tcp:59 Request procesed in 14us (127us total) trans 0 rc 0/0
00000100:00100000:0.0:1462941602.245379:0:5563:0:(nrs_fifo.c:241:nrs_fifo_req_stop()) NRS stop fifo request from 12345-10.9.4.43@tcp, seq: 310
Agent answers the MDT/CDT archive request (after trashing it because of some race during CT start?):
00000100:00100000:0.0:1462941602.245679:0:12867:0:(client.c:1997:ptlrpc_check_set()) Completed RPC pname:cluuid:pid:xid:nid:opc hsm_cdtr:lustre-MDT0000_UUID:12867:1533991886170496:10.9.4.43@tcp:107
00000040:00080000:0.0:1462941602.245689:0:12867:0:(llog_cat.c:735:llog_cat_process_cb()) processing log 0x15:1:0 at index 1 of catalog 0x8:10
00000001:02000400:0.0:1462941602.356340:0:13401:0:(debug.c:335:libcfs_debug_mark_buffer()) DEBUG MARKER: /usr/sbin/lctl get_param -n mdt.lustre-MDT0000.hsm.actions | awk '/'0x200000405:0x4:0x0'.*action='ARCHIVE'/ {print $13}' | cut -f2 -d=
By the way, even if the possible racy situation will need to be investigated further, we may already avoid it by moving the CT start to the beginning of sanity-hsm/test_9() instead of its current position (see the sketch after the code below):

test_9() {
	mkdir -p $DIR/$tdir
	local f=$DIR/$tdir/$tfile
	local fid=$(copy_file /etc/passwd $f)

	# we do not use the default one to be sure
	local new_an=$((HSM_ARCHIVE_NUMBER + 1))
	copytool_cleanup
	copytool_setup $SINGLEAGT $MOUNT $new_an

	$LFS hsm_archive --archive $new_an $f
	wait_request_state $fid ARCHIVE SUCCEED

	check_hsm_flags $f "0x00000009"

	copytool_cleanup
}
run_test 9 "Use of explicit archive number, with dedicated copytool"
I will push a first patch in this direction and also continue to investigate to better understand the possible implications of the DNE configuration on CT startup. |
| Comment by Gerrit Updater [ 17/May/16 ] |
|
Faccini Bruno (bruno.faccini@intel.com) uploaded a new patch: http://review.whamcloud.com/20258 |
| Comment by Gu Zheng (Inactive) [ 01/Jun/16 ] |
|
Hit a similar problem on our local autotest: |
| Comment by Gerrit Updater [ 02/Jun/16 ] |
|
Oleg Drokin (oleg.drokin@intel.com) merged in patch http://review.whamcloud.com/20258/ |
| Comment by Peter Jones [ 16/Jun/16 ] |
|
Landed for 2.9 |
| Comment by Dmitry Eremin (Inactive) [ 28/Dec/16 ] |
|
The same failure now happens with test_12*, test_33-36, test_57-58, test_110*, test_222* https://testing.hpdd.intel.com/test_sets/7c0d8752-cc8d-11e6-9816-5254006e85c2 https://testing.hpdd.intel.com/test_sets/85ff554c-ccd8-11e6-9296-5254006e85c2
|
| Comment by Bruno Faccini (Inactive) [ 28/Dec/16 ] |
|
Well, maybe the delay and the CT registration verification should be generalized (in copytool_setup()?), as John H had already suggested when commenting on my first patch. A rough sketch of what that could look like is below.
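This is only a sketch, assuming the mdt.*.hsm.agents parameter lists the registered copytools one per line (each line starting with "uuid="), and reusing the existing wait_update_facet()/error() helpers from test-framework.sh; the wait_copytools_registered name and the 20s timeout are illustrative only:

# illustrative helper, to be called at the end of copytool_setup():
# wait until the (single) copytool agent shows up on every MDT
wait_copytools_registered() {
	local i

	for ((i = 0; i < MDSCOUNT; i++)); do
		local mdt=$(printf "%s-MDT%04x" $FSNAME $i)

		wait_update_facet mds$((i + 1)) \
			"$LCTL get_param -n mdt.$mdt.hsm.agents | grep -c uuid" \
			"1" 20 ||
			error "no copytool registered with $mdt after 20s"
	done
}

This way every sub-test using copytool_setup() would benefit from the check, not just test_9. |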
| Comment by Bruno Faccini (Inactive) [ 28/Dec/16 ] |
|
Well, having a look at the recent failed auto-test logs, it seems that their problem is not the one tracked in this ticket (the CT failing to register with all MDTs in too short a time, requiring a delay to be added). Each of these tests' main logs contains:

CMD: trevis-35vm7 /usr/sbin/lctl get_param -n mdt.lustre-MDT0000.hsm.actions | awk '/'0x200000405:0xf:0x0'.*action='RESTORE'/ {print \$13}' | cut -f2 -d=
Changed after 16s: from 'SUCCEED
FAILED' to ''

which shows that the SUCCEED request state may not have been detected due to some (new?) issue in the wait_request_state() function when interpreting the "hsm/actions" proc file output.
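A minimal, self-contained illustration of what seems to go wrong (hand-made output, not taken from the actual logs): when the awk/cut pipeline in wait_request_state() matches more than one action record for the same FID, its output spans several lines and can never compare equal to the single expected state string.

# stand-in for a multi-line result of the awk/cut pipeline
out=$'SUCCEED\nFAILED'

[ "$out" = "SUCCEED" ] && echo match || echo "no match"   # prints "no match"

# filtering the output before the comparison, as in the change quoted in the
# next comment, makes the wanted state detectable even with several records
echo "$out" | uniq | grep SUCCEED                          # prints "SUCCEED"

(The $'...' quoting is just bash shorthand for a two-line string.) |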
| Comment by Bruno Faccini (Inactive) [ 29/Dec/16 ] |
|
Dmitry,

[root@eagle-31 lustre-release]# diff -urpN /usr/lib64/lustre/tests/sanity-hsm.sh.bfi /usr/lib64/lustre/tests/sanity-hsm.sh.bfi+
--- /usr/lib64/lustre/tests/sanity-hsm.sh.bfi 2016-12-28 16:41:09.000000000 +0000
+++ /usr/lib64/lustre/tests/sanity-hsm.sh.bfi+ 2016-12-29 10:40:08.000000000 +0000
@@ -724,7 +724,8 @@ wait_request_state() {
local mds=mds$(($mdtidx + 1))
local cmd="$LCTL get_param -n ${MDT_PREFIX}${mdtidx}.hsm.actions"
- cmd+=" | awk '/'$fid'.*action='$request'/ {print \\\$13}' | cut -f2 -d="
+ cmd+=" | awk '/'$fid'.*action='$request'/ {print \\\$13}' |\
+ cut -f2 -d= | uniq | grep $state"
wait_result $mds "$cmd" $state 200 ||
error "request on $fid is not $state on $mds"
[root@eagle-31 lustre-release]#
What do you think? |
| Comment by Dmitry Eremin (Inactive) [ 29/Dec/16 ] |
|
Thanks Bruno. I don't understand how my patch can affect this functionality, but I will look into it. I parallelize only regular I/O; other I/O should use the old pipeline. Even with parallel I/O we should not have multiple requests.
|
| Comment by Dmitry Eremin (Inactive) [ 29/Dec/16 ] |
|
Thanks Bruno, you are absolutely right! HSM reads the data in 4MB chunks, even for small files, so we can get many callbacks if it is read in small portions. Currently the size of the test file is less than the stripe size and we always get a single reply. But if the size of the test file were more than the stripe size, or the reads were done in small portions as happens with my patch, we would get several replies and the tests would fail. Your fix resolves this issue. Thanks again.
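As a back-of-the-envelope illustration of the above (the 16MB file size is made up; the 4MB chunk size is the one mentioned in this comment), the number of progress callbacks grows with the file size:

# illustrative only: roughly one progress update per chunk read
file_size=$((16 * 1024 * 1024))   # hypothetical 16MB test file
chunk_size=$((4 * 1024 * 1024))   # 4MB HSM read chunk
echo $(( (file_size + chunk_size - 1) / chunk_size ))   # -> 4 progress updates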
|
| Comment by Bruno Faccini (Inactive) [ 29/Dec/16 ] |
|
Hello Dmitry, I am happy to have helped. |
| Comment by Bruno Faccini (Inactive) [ 30/Dec/16 ] |
|
Last, concerning the generalized need for a delay to allow the CT to register with all MDTs: after reviewing the recent auto-test failures, I still think it is not required. As part of this ticket it was only needed for sanity-hsm/test_9, because it is the only sub-test that runs copytool_setup() and "lfs hsm_archive" in a row, without any other command in between to give enough time for the CT to fully register. But anyway, I will push a patch to implement this in copytool_setup(). |
| Comment by Gerrit Updater [ 30/Dec/16 ] |
|
Faccini Bruno (bruno.faccini@intel.com) uploaded a new patch: https://review.whamcloud.com/24542 |
| Comment by Andreas Dilger [ 06/Nov/18 ] |
|
It looks like this test is still failing occasionally, and there is an unlanded patch for this ticket. |
| Comment by Quentin Bouget [ 06/Nov/18 ] |
|
The only recent test failures I could find targeted test_9A and they didn't seem to relate to the issue described in this ticket (they all happened for the same patch and many other tests failed on those runs). Am I missing something? |
| Comment by Andreas Dilger [ 08/Nov/18 ] |
|
Sorry, I didn't see that the failures were related to another issue. I've abandoned the old patch. |