[LU-6810] Interop 2.7.0<->master sanity-lfsck test_0: test failed to respond and timed out Created: 08/Jul/15  Updated: 28/Feb/20  Resolved: 28/Feb/20

Status: Resolved
Project: Lustre
Component/s: None
Affects Version/s: Lustre 2.8.0
Fix Version/s: None

Type: Bug Priority: Minor
Reporter: Maloo Assignee: WC Triage
Resolution: Cannot Reproduce Votes: 0
Labels: None
Environment:

server: lustre-master build # 3092 RHEL6.6
client: 2.7.0


Severity: 3
Rank (Obsolete): 9223372036854775807

Description

This issue was created by maloo for sarah_lw <wei3.liu@intel.com>

This issue relates to the following test suite run: https://testing.hpdd.intel.com/test_sets/65d6ad44-250c-11e5-bf7b-5254006e85c2.

The sub-test test_0 failed with the following error:

test failed to respond and timed out

MDS console

18:55:33:Lustre: DEBUG MARKER: == sanity-lfsck test 0: Control LFSCK manually == 18:55:30 (1436208930)
18:55:33:Lustre: 15845:0:(osd_handler.c:920:osd_trans_start()) lustre-MDT0000: too many transaction credits (278 > 256)
18:55:33:Lustre: 15845:0:(osd_handler.c:925:osd_trans_start())   create: 3/12, destroy: 0/0
18:55:33:Lustre: 15845:0:(osd_handler.c:930:osd_trans_start())   attr_set: 2/2, xattr_set: 4/30
18:55:33:Lustre: 15845:0:(osd_handler.c:937:osd_trans_start())   write: 8/48, punch: 0/0, quota 2/2
18:55:33:Lustre: 15845:0:(osd_handler.c:942:osd_trans_start())   insert: 5/84, delete: 0/0
18:55:33:Lustre: 15845:0:(osd_handler.c:947:osd_trans_start())   ref_add: 0/0, ref_del: 0/0
18:55:33:Pid: 15845, comm: mdt00_002
18:55:33:
18:55:33:Call Trace:
18:55:33: [<ffffffffa0490875>] libcfs_debug_dumpstack+0x55/0x80 [libcfs]
18:55:33: [<ffffffffa0d1c4ce>] osd_trans_start+0x63e/0x660 [osd_ldiskfs]
18:55:33: [<ffffffffa089db6b>] top_trans_start+0x67b/0x940 [ptlrpc]
18:55:33: [<ffffffffa0fd26f1>] lod_trans_start+0x61/0x70 [lod]
18:55:33: [<ffffffffa107d094>] mdd_trans_start+0x14/0x20 [mdd]
18:55:33: [<ffffffffa10690b1>] mdd_create+0xc21/0x1760 [mdd]
18:55:33: [<ffffffffa0f26498>] mdo_create+0x18/0x50 [mdt]
18:55:33: [<ffffffffa0f2e90a>] mdt_reint_open+0x1fea/0x2d80 [mdt]
18:55:33: [<ffffffffa060b4bc>] ? upcall_cache_get_entry+0x29c/0x880 [obdclass]
18:55:33: [<ffffffffa0f17d4d>] mdt_reint_rec+0x5d/0x200 [mdt]
18:55:33: [<ffffffffa0efcc0b>] mdt_reint_internal+0x62b/0xbf0 [mdt]
18:55:33: [<ffffffffa0efd3c6>] mdt_intent_reint+0x1f6/0x430 [mdt]
18:55:33: [<ffffffffa0efb684>] mdt_intent_policy+0x494/0xc40 [mdt]
18:55:33: [<ffffffffa07d06a7>] ldlm_lock_enqueue+0x127/0x9d0 [ptlrpc]
18:55:33: [<ffffffffa07fec2b>] ldlm_handle_enqueue0+0x51b/0x14c0 [ptlrpc]
19:56:08:********** Timeout by autotest system **********


Comments
Comment by Andreas Dilger [ 28/Feb/20 ]

Closing this old bug; the failure has not been seen in a long time.
