[LU-3325] recovery-mds-scale test_failover_mds: tar: Cannot write: Input/output error
| Created: | 13/May/13 | Updated: | 12/Aug/14 | Resolved: | 12/Aug/14 |
|
| Status: | Resolved |
| Project: | Lustre |
| Component/s: | None |
| Affects Version/s: | Lustre 2.4.0, Lustre 2.4.1, Lustre 2.6.0, Lustre 2.5.1 |
| Fix Version/s: | None |
| Type: | Bug | Priority: | Critical |
| Reporter: | Jian Yu | Assignee: | Hongchao Zhang |
| Resolution: | Fixed | Votes: | 0 |
| Labels: | LB | ||
| Environment: | Lustre Branch: master |
| Issue Links: |
|
| Severity: | 3 |
| Rank (Obsolete): | 8209 |
| Description |
|
After running recovery-mds-scale test_failover_mds for about 9 hours (MDS failed over 38 times), client load on one of the clients failed as follows:
tar: etc/ConsoleKit/seats.d/00-primary.seat: Cannot write: Input/output error
tar: etc/ConsoleKit/seats.d/00-primary.seat: Cannot utime: Cannot send after transport endpoint shutdown
tar: Exiting with failure status due to previous errors

Console log on the client (client-32vm6) showed that:
17:52:45:Lustre: DEBUG MARKER: mds1 has failed over 38 times, and counting...
17:52:46:Lustre: Evicted from MGS (at 10.10.4.198@tcp) after server handle changed from 0x9510c59bdf7b69c8 to 0xa78bdb1c4b1a185f
17:52:46:Lustre: MGC10.10.4.198@tcp: Connection restored to MGS (at 10.10.4.198@tcp)
18:00:41:Lustre: lustre-OST0005-osc-ffff880037c72c00: Connection to lustre-OST0005 (at 10.10.4.199@tcp) was lost; in progress operations using this service will wait for recovery to complete
18:00:41:LustreError: 11-0: lustre-OST0004-osc-ffff880037c72c00: Communicating with 10.10.4.199@tcp, operation ost_write failed with -107.
18:00:41:LustreError: Skipped 3 previous similar messages
18:00:41:LustreError: 11-0: lustre-OST0000-osc-ffff880037c72c00: Communicating with 10.10.4.199@tcp, operation ost_write failed with -107.
18:00:41:LustreError: Skipped 1 previous similar message
18:00:41:Lustre: lustre-OST0000-osc-ffff880037c72c00: Connection to lustre-OST0000 (at 10.10.4.199@tcp) was lost; in progress operations using this service will wait for recovery to complete
18:00:41:Lustre: Skipped 1 previous similar message
18:00:41:Lustre: lustre-OST0006-osc-ffff880037c72c00: Connection to lustre-OST0006 (at 10.10.4.199@tcp) was lost; in progress operations using this service will wait for recovery to complete
18:00:41:Lustre: 7109:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1368234025/real 1368234025] req@ffff88006dbdbc00 x1434661777629520/t0(0) o103->lustre-OST0005-osc-ffff880037c72c00@10.10.4.199@tcp:17/18 lens 328/224 e 0 to 1 dl 1368234036 ref 1 fl Rpc:X/0/ffffffff rc 0/-1
18:00:41:Lustre: 7109:0:(client.c:1868:ptlrpc_expire_one_request()) Skipped 3 previous similar messages
18:00:41:LustreError: 11-0: lustre-OST0001-osc-ffff880037c72c00: Communicating with 10.10.4.199@tcp, operation ost_write failed with -107.
18:00:41:LustreError: Skipped 3 previous similar messages
18:00:41:LustreError: 167-0: lustre-OST0005-osc-ffff880037c72c00: This client was evicted by lustre-OST0005; in progress operations using this service will fail.
18:00:41:LustreError: Skipped 1 previous similar message
18:00:41:Lustre: 7109:0:(llite_lib.c:2503:ll_dirty_page_discard_warn()) lustre: dirty page discard: 10.10.4.198@tcp:10.10.4.202@tcp:/lustre/fid: [0x200000401:0x1ce3c:0x0]/ may get corrupted (rc -108)
18:00:41:Lustre: 7109:0:(llite_lib.c:2503:ll_dirty_page_discard_warn()) Skipped 3 previous similar messages
18:00:41:Lustre: lustre-OST0001-osc-ffff880037c72c00: Connection restored to lustre-OST0001 (at 10.10.4.199@tcp)

Console log on OSS (client-32vm4) showed that:
17:52:50:Lustre: DEBUG MARKER: mds1 has failed over 38 times, and counting...
17:53:12:LustreError: 167-0: lustre-MDT0000-lwp-OST0000: This client was evicted by lustre-MDT0000; in progress operations using this service will fail.
17:53:12:LustreError: Skipped 3 previous similar messages
17:53:12:Lustre: lustre-MDT0000-lwp-OST0002: Connection restored to lustre-MDT0000 (at 10.10.4.198@tcp)
18:00:36:Lustre: 5788:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1368234022/real 1368234024] req@ffff880068743800 x1434661750825740/t0(0) o104->lustre-OST0005@10.10.4.201@tcp:15/16 lens 296/224 e 0 to 1 dl 1368234033 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1
18:00:36:Lustre: 5788:0:(client.c:1868:ptlrpc_expire_one_request()) Skipped 105 previous similar messages
18:00:36:LustreError: 138-a: lustre-OST0005: A client on nid 10.10.4.201@tcp was evicted due to a lock blocking callback time out: rc -107
18:00:36:LustreError: Skipped 1 previous similar message
18:00:36:LustreError: 9765:0:(ldlm_lockd.c:709:ldlm_handle_ast_error()) ### client (nid 10.10.4.201@tcp) returned 0 from blocking AST ns: filter-lustre-OST0005_UUID lock: ffff880069496c00/0x15300103a13ef60b lrc: 4/0,0 mode: PR/PR res: 13012/0 rrc: 2 type: EXT [0->18446744073709551615] (req 0->18446744073709551615) flags: 0x10020 nid: 10.10.4.201@tcp remote: 0x413d78011f8a8246 expref: 156 pid: 9910 timeout: 4329187148 lvb_type: 1
18:00:36:LustreError: 9765:0:(ldlm_lockd.c:709:ldlm_handle_ast_error()) Skipped 17 previous similar messages
18:00:36:LustreError: 9867:0:(ldlm_lib.c:2706:target_bulk_io()) @@@ Eviction on bulk GET req@ffff880048f78c00 x1434661777625888/t0(0) o4->c3d0e929-541a-b945-bf7d-9ade70c3a652@10.10.4.201@tcp:0/0 lens 488/448 e 1 to 0 dl 1368234065 ref 1 fl Interpret:/0/0 rc 0/0
18:00:36:Lustre: lustre-OST0005: Bulk IO write error with c3d0e929-541a-b945-bf7d-9ade70c3a652 (at 10.10.4.201@tcp), client will retry: rc -107
18:00:36:LustreError: 7431:0:(ldlm_lockd.c:2348:ldlm_cancel_handler()) ldlm_cancel from 10.10.4.201@tcp arrived at 1368234033 with bad export cookie 1526721388779448426
18:00:36:LustreError: 7431:0:(ldlm_lockd.c:2348:ldlm_cancel_handler()) Skipped 3 previous similar messages
18:00:36:LustreError: 7431:0:(ldlm_lockd.c:2348:ldlm_cancel_handler()) ldlm_cancel from 10.10.4.201@tcp arrived at 1368234034 with bad export cookie 1526721388779448426
18:00:36:LustreError: 7431:0:(ldlm_lockd.c:2348:ldlm_cancel_handler()) Skipped 1 previous similar message
18:00:36:LustreError: 9427:0:(ldlm_lib.c:2706:target_bulk_io()) @@@ Eviction on bulk GET req@ffff880048f78400 x1434661777625880/t0(0) o4->c3d0e929-541a-b945-bf7d-9ade70c3a652@10.10.4.201@tcp:0/0 lens 488/448 e 1 to 0 dl 1368234065 ref 1 fl Interpret:/0/0 rc 0/0
18:00:36:Lustre: lustre-OST0004: Bulk IO write error with c3d0e929-541a-b945-bf7d-9ade70c3a652 (at 10.10.4.201@tcp), client will retry: rc -107
18:00:36:LustreError: 138-a: lustre-OST0000: A client on nid 10.10.4.201@tcp was evicted due to a lock blocking callback time out: rc -107
18:00:36:LustreError: Skipped 1 previous similar message
18:00:36:LustreError: 5063:0:(ldlm_lockd.c:709:ldlm_handle_ast_error()) ### client (nid 10.10.4.201@tcp) returned 0 from blocking AST ns: filter-lustre-OST0000_UUID lock: ffff880069496800/0x15300103a13ef619 lrc: 4/0,0 mode: PR/PR res: 13204/0 rrc: 2 type: EXT [0->18446744073709551615] (req 0->18446744073709551615) flags: 0x10020 nid: 10.10.4.201@tcp remote: 0x413d78011f8a8270 expref: 178 pid: 9910 timeout: 4329190079 lvb_type: 1
18:00:36:LustreError: 5063:0:(ldlm_lockd.c:709:ldlm_handle_ast_error()) Skipped 13 previous similar messages
18:00:36:LustreError: 9425:0:(ldlm_lib.c:2706:target_bulk_io()) @@@ Eviction on bulk GET req@ffff88002d861800 x1434661777625848/t0(0) o4->c3d0e929-541a-b945-bf7d-9ade70c3a652@10.10.4.201@tcp:0/0 lens 488/448 e 1 to 0 dl 1368234065 ref 1 fl Interpret:/0/0 rc 0/0
18:00:36:Lustre: lustre-OST0000: Bulk IO write error with c3d0e929-541a-b945-bf7d-9ade70c3a652 (at 10.10.4.201@tcp), client will retry: rc -107
18:00:36:LustreError: 7431:0:(ldlm_lockd.c:2348:ldlm_cancel_handler()) ldlm_cancel from 10.10.4.201@tcp arrived at 1368234035 with bad export cookie 1526721388779166123
18:00:36:LustreError: 7431:0:(ldlm_lockd.c:2348:ldlm_cancel_handler()) Skipped 1 previous similar message
18:00:36:LustreError: 138-a: lustre-OST0006: A client on nid 10.10.4.201@tcp was evicted due to a lock blocking callback time out: rc -107
18:00:36:LustreError: Skipped 1 previous similar message
18:00:36:LustreError: 9930:0:(ldlm_lockd.c:709:ldlm_handle_ast_error()) ### client (nid 10.10.4.201@tcp) returned 0 from blocking AST ns: filter-lustre-OST0006_UUID lock: ffff88005eee0800/0x15300103a13ef5e1 lrc: 4/0,0 mode: PR/PR res: 12947/0 rrc: 2 type: EXT [0->18446744073709551615] (req 0->18446744073709551615) flags: 0x10020 nid: 10.10.4.201@tcp remote: 0x413d78011f8a81c8 expref: 168 pid: 9910 timeout: 4329191220 lvb_type: 1
18:00:36:LustreError: 9930:0:(ldlm_lockd.c:709:ldlm_handle_ast_error()) Skipped 13 previous similar messages
18:00:36:LustreError: 7431:0:(ldlm_lock.c:2433:ldlm_lock_dump_handle()) ### ### ns: filter-lustre-OST0006_UUID lock: ffff880069496a00/0x15300103a13ef612 lrc: 4/0,0 mode: PR/PR res: 12948/0 rrc: 2 type: EXT [0->18446744073709551615] (req 0->18446744073709551615) flags: 0x10020 nid: 10.10.4.201@tcp remote: 0x413d78011f8a825b expref: 161 pid: 9910 timeout: 4329188939 lvb_type: 1
18:00:37:LustreError: 9429:0:(ldlm_lib.c:2706:target_bulk_io()) @@@ Eviction on bulk GET req@ffff880058037800 x1434661777628724/t0(0) o4->c3d0e929-541a-b945-bf7d-9ade70c3a652@10.10.4.201@tcp:0/0 lens 488/448 e 0 to 0 dl 1368234045 ref 1 fl Interpret:/0/0 rc 0/0
18:00:37:LustreError: 9429:0:(ldlm_lib.c:2706:target_bulk_io()) Skipped 3 previous similar messages
18:00:37:Lustre: lustre-OST0006: Bulk IO write error with c3d0e929-541a-b945-bf7d-9ade70c3a652 (at 10.10.4.201@tcp), client will retry: rc -107
18:00:37:Lustre: Skipped 3 previous similar messages
18:00:37:LustreError: 7431:0:(ldlm_lock.c:2433:ldlm_lock_dump_handle()) ### ### ns: filter-lustre-OST0002_UUID lock: ffff88005eee0200/0x15300103a13ef5f6 lrc: 4/0,0 mode: PR/PR res: 13043/0 rrc: 2 type: EXT [0->18446744073709551615] (req 0->18446744073709551615) flags: 0x10020 nid: 10.10.4.201@tcp remote: 0x413d78011f8a8207 expref: 164 pid: 9910 timeout: 4329189806 lvb_type: 1
18:00:38:LustreError: 5064:0:(ldlm_lockd.c:709:ldlm_handle_ast_error()) ### client (nid 10.10.4.201@tcp) returned 0 from blocking AST ns: filter-lustre-OST0001_UUID lock: ffff88005eee0400/0x15300103a13ef5ef lrc: 4/0,0 mode: PR/PR res: 13011/0 rrc: 2 type: EXT [0->18446744073709551615] (req 0->18446744073709551615) flags: 0x10020 nid: 10.10.4.201@tcp remote: 0x413d78011f8a81f2 expref: 188 pid: 9942 timeout: 4329191830 lvb_type: 1
18:00:38:LustreError: 5064:0:(ldlm_lockd.c:709:ldlm_handle_ast_error()) Skipped 13 previous similar messages
18:00:38:LustreError: 7431:0:(ldlm_lockd.c:2348:ldlm_cancel_handler()) ldlm_cancel from 10.10.4.201@tcp arrived at 1368234037 with bad export cookie 1526721388779166137
18:00:38:LustreError: 7431:0:(ldlm_lockd.c:2348:ldlm_cancel_handler()) Skipped 6 previous similar messages

Maloo report: https://maloo.whamcloud.com/test_sets/053120d2-bb19-11e2-8824-52540035b04c |
| Comments |
| Comment by Jian Yu [ 13/May/13 ] |
|
This issue also occurred on master build #1481 (after running for 12 hours, with the MDS failing over 48 times): |
| Comment by Keith Mannthey (Inactive) [ 13/May/13 ] |
|
I see the tests report over 75% failure rates across all the branches. Do we know that this is a valid test? Under what conditions has this test passed? |
| Comment by Keith Mannthey (Inactive) [ 13/May/13 ] |
|
Also, as noted in a client log: osc.lustre-OST0000-osc-*.ost_server_uuid in FULL state after 0 sec. Maybe the OSTs get full during this test? I would think the I/O error on a client would be OK if the OST is full. |
| Comment by Andreas Dilger [ 13/May/13 ] |
|
Keith, "FULL" is the connection state, not that the OST is out of space. Yu Jian, there are a steady stream of failures for this test in the past. Could you go through at least the past several weeks of tests and triage the failures. We're trying to determine if this is a new type of failure, or just one that has existed a long time that we haven't noticed before. |
| Comment by Hongchao Zhang [ 14/May/13 ] |
|
In https://maloo.whamcloud.com/test_sets/053120d2-bb19-11e2-8824-52540035b04c, the client was evicted from the OST at 18:00 because a lock blocking AST timed out, but the debug logs related to the eviction have been overwritten.
| Comment by Jian Yu [ 14/May/13 ] |
The patch for gathering debug logs is in http://review.whamcloud.com/6013. |
| Comment by Jian Yu [ 14/May/13 ] |
Here are all of the recovery-mds-scale test_failover_mds reports on the master branch: recovery-mds-scale test_failover_mds
The test was not performed on every master build. Before 2013-04-09, the test was not really run due to |
| Comment by Jodi Levi (Inactive) [ 14/May/13 ] |
|
Lowering priority from blocker, but still want to continue digging into this issue. |
| Comment by Jian Yu [ 15/May/13 ] |
Patch Set 10: recovery-mds-scale test_failover_mds passed for 24 hours (MDS failed over 96 times). Re-triggering the test to try to reproduce the failure. |
| Comment by Jian Yu [ 16/May/13 ] |
With Patch Set 11 in http://review.whamcloud.com/6013, recovery-mds-scale test_failover_mds again passed for 24 hours (MDS failed over 96 times). The issue cannot be reproduced in manually triggered test runs but occurs constantly in autotest failover test group runs. |
| Comment by Jian Yu [ 18/May/13 ] |
|
Lustre Tag: v2_4_0_RC1
PTLDEBUG=-1
The issue occurred on recovery-mds-scale test_failover_ost: |
| Comment by Jian Yu [ 20/May/13 ] |
|
Lustre Tag: v2_4_0_RC1
The issue occurred on recovery-mds-scale test_failover_ost again: |
| Comment by Jian Yu [ 20/May/13 ] |
|
Lustre Tag: v2_4_0_RC1
The issue occurred on recovery-mds-scale test_failover_mds: |
| Comment by Jian Yu [ 28/May/13 ] |
|
Lustre Tag: v2_4_0_RC2
The issue occurred on recovery-mds-scale test_failover_mds: |
| Comment by Hongchao Zhang [ 18/Jun/13 ] |
|
This issue is similar to https://jira.hpdd.intel.com/browse/LU-1499, which is caused by a lock AST callback timing out. Has the issue occurred again recently? How about running the test with a bigger timeout value (currently it's 10), since the load of the test is a little heavy? |
| Comment by Jian Yu [ 18/Jun/13 ] |
Yes, still occurs on master branch:
The default value of the TIMEOUT variable in cfg/local.sh is 20. Since the issue can only be reproduced in autotest runs, the value of the TIMEOUT variable in the test configuration file used by autotest needs to be changed; however, this will affect all of the test sessions performed by autotest. |
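For a manually triggered run, the timeout can be raised for that run only, since cfg/local.sh normally just provides a default (TIMEOUT=${TIMEOUT:-20}) that an exported environment variable overrides. The sketch below assumes that convention; the install path, duration, and timeout value are illustrative, and the cluster/failover node configuration is assumed to be set up elsewhere:

```bash
#!/bin/bash
# Sketch only: run recovery-mds-scale with a larger Lustre TIMEOUT for a
# single session. Because cfg/local.sh normally sets TIMEOUT=${TIMEOUT:-20},
# exporting the variable overrides the default for this run without editing
# the configuration file that autotest shares across all test sessions.
# Cluster and failover settings (mds_HOST, failover partners, PDSH, ...)
# are assumed to be configured elsewhere, as for any failover test run.

export NAME=local                  # use the cfg/local.sh configuration
export TIMEOUT=60                  # larger timeout than the default of 20
export DURATION=$((24 * 60 * 60))  # illustrative 24-hour run
export ONLY=failover_mds           # run only the MDS failover load test

cd /usr/lib64/lustre/tests         # illustrative test-suite install path
bash recovery-mds-scale.sh
```

This keeps the shared autotest configuration untouched while still exercising the MDS failover load with a less aggressive timeout.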
| Comment by Hongchao Zhang [ 20/Jun/13 ] |
|
According to these occurrences, there is no I/O RPC during the time interval between the blocking AST being sent and the eviction. |
| Comment by Hongchao Zhang [ 24/Jun/13 ] |
|
The debug patch is tracked at http://review.whamcloud.com/#/c/6747/ |
| Comment by Jian Yu [ 16/Sep/13 ] |
|
Lustre Tag: v2_4_1_RC2
The issue occurred on recovery-mds-scale test_failover_ost: |
| Comment by Jian Yu [ 06/Dec/13 ] |
|
Lustre Build: http://build.whamcloud.com/job/lustre-b2_4/63/
The issue occurred on recovery-mds-scale test_failover_ost: |
| Comment by Jian Yu [ 17/Jan/14 ] |
|
Lustre Build: http://build.whamcloud.com/job/lustre-b2_5/13/
The issue occurred on recovery-mds-scale test_failover_mds: |
| Comment by Jian Yu [ 07/Feb/14 ] |
|
More instances on the Lustre b2_5 branch: |
| Comment by Jian Yu [ 09/Mar/14 ] |
|
Lustre Build: http://build.whamcloud.com/job/lustre-b2_5/40/ (2.5.1 RC2)
The same failure occurred: |
| Comment by Jian Yu [ 01/Apr/14 ] |
|
This is blocking recovery-mds-scale testing. |
| Comment by Hongchao Zhang [ 04/Jun/14 ] |
|
Okay, will check it along with |
| Comment by Peter Jones [ 12/Aug/14 ] |
|
As per Jian Yu, this ticket can be closed as a duplicate of |