[LU-2342] replay-single test_20b: @@@@@@ FAIL: after 6912 > before 6784 Created: 08/Jan/12  Updated: 01/Oct/14  Resolved: 21/Mar/13

Status: Resolved
Project: Lustre
Component/s: None
Affects Version/s: Lustre 2.4.0
Fix Version/s: Lustre 2.4.0

Type: Bug Priority: Blocker
Reporter: Li Wei (Inactive) Assignee: Nathaniel Clark
Resolution: Fixed Votes: 0
Labels: HB, zfs

Severity: 3
Rank (Obsolete): 2858

 Description   

Commit: 1c0dfc1ae9637cbfe5dabc0ea67c29633b5b04ec (Dec 17, 2011)
Maloo: https://maloo.whamcloud.com/test_sets/7762668c-318d-11e1-9c6d-5254004bbbd3

== replay-single test 20b: write, unlink, eviction, replay, (test mds_cleanup_orphans) == 19:32:44 (1325043164)
/mnt/lustre/f20b
lmm_stripe_count:   1
lmm_stripe_size:    1048576
lmm_stripe_offset:  0
	obdidx		 objid		objid		 group
	     0	          1150	        0x47e	             0

stat: cannot read file system information for `/mnt/lustre': Interrupted system call
10000+0 records in
10000+0 records out
40960000 bytes (41 MB) copied, 2.43149 s, 16.8 MB/s
Failing mds1 on node fat-intel-3vm3
Stopping /mnt/mds1 (opts:)
affected facets: mds1
Failover mds1 to fat-intel-3vm3
19:33:10 (1325043190) waiting for fat-intel-3vm3 network 900 secs ...
19:33:10 (1325043190) network interface is UP
Starting mds1: -o user_xattr,acl  lustre-mdt1/mdt1 /mnt/mds1
fat-intel-3vm3: debug=0x33f0404
fat-intel-3vm3: subsystem_debug=0xffb7e3ff
fat-intel-3vm3: debug_mb=32
Started lustre-MDT0000
affected facets: mds1
fat-intel-3vm3: *.lustre-MDT0000.recovery_status status: COMPLETE
Waiting for orphan cleanup...
before 6784, after 6912
UUID                   1K-blocks        Used   Available Use% Mounted on
lustre-MDT0000_UUID     29443456        5760    29435648   0% /mnt/lustre[MDT:0]
lustre-OST0000_UUID      2031872         896     1994112   0% /mnt/lustre[OST:0]
lustre-OST0001_UUID      2031872         896     2028032   0% /mnt/lustre[OST:1]
lustre-OST0002_UUID      2031872         896     1823872   0% /mnt/lustre[OST:2]
lustre-OST0003_UUID      2031872        1024     1799424   0% /mnt/lustre[OST:3]
lustre-OST0004_UUID      2031872         896     1996160   0% /mnt/lustre[OST:4]
lustre-OST0005_UUID      2032000        1024     2027904   0% /mnt/lustre[OST:5]
lustre-OST0006_UUID      2032000        1280     2027776   0% /mnt/lustre[OST:6]

filesystem summary:     14223360        6912    13697280   0% /mnt/lustre

osp.lustre-OST0000-osp-MDT0000.sync_changes=0
osp.lustre-OST0000-osp-MDT0000.sync_in_flight=0
osp.lustre-OST0000-osp-MDT0000.sync_in_progress=0
osp.lustre-OST0001-osp-MDT0000.sync_changes=0
osp.lustre-OST0001-osp-MDT0000.sync_in_flight=0
osp.lustre-OST0001-osp-MDT0000.sync_in_progress=0
osp.lustre-OST0002-osp-MDT0000.sync_changes=0
osp.lustre-OST0002-osp-MDT0000.sync_in_flight=0
osp.lustre-OST0002-osp-MDT0000.sync_in_progress=0
osp.lustre-OST0003-osp-MDT0000.sync_changes=0
osp.lustre-OST0003-osp-MDT0000.sync_in_flight=0
osp.lustre-OST0003-osp-MDT0000.sync_in_progress=0
osp.lustre-OST0004-osp-MDT0000.sync_changes=0
osp.lustre-OST0004-osp-MDT0000.sync_in_flight=0
osp.lustre-OST0004-osp-MDT0000.sync_in_progress=0
osp.lustre-OST0005-osp-MDT0000.sync_changes=0
osp.lustre-OST0005-osp-MDT0000.sync_in_flight=0
osp.lustre-OST0005-osp-MDT0000.sync_in_progress=0
osp.lustre-OST0006-osp-MDT0000.sync_changes=0
osp.lustre-OST0006-osp-MDT0000.sync_in_flight=0
osp.lustre-OST0006-osp-MDT0000.sync_in_progress=0
osp.lustre-OST0000-osp-MDT0000.prealloc_status=0
osp.lustre-OST0001-osp-MDT0000.prealloc_status=0
osp.lustre-OST0002-osp-MDT0000.prealloc_status=0
osp.lustre-OST0003-osp-MDT0000.prealloc_status=0
osp.lustre-OST0004-osp-MDT0000.prealloc_status=0
osp.lustre-OST0005-osp-MDT0000.prealloc_status=0
osp.lustre-OST0006-osp-MDT0000.prealloc_status=0
 replay-single test_20b: @@@@@@ FAIL: after 6912 > before 6784 
Dumping lctl log to /logdir/test_logs/2011-12-27/lustre-dev-el6-x86_64-zfs__277__-7fa5674d2740/replay-single.test_20b.*.1325043197.log


 Comments   
Comment by Li Wei (Inactive) [ 16/Jan/12 ]

Commit: 526c43ec2e47ead878f0df552b74c78b4fc79d1f (Jan 13, 2012)
Maloo: https://maloo.whamcloud.com/test_sets/45a13d50-4072-11e1-ac07-5254004bbbd3

Comment by Li Wei (Inactive) [ 02/Feb/12 ]

Commit: faefc49f0854987d29639437064e81bbc4556774 (Feb 1, 2012)
Maloo: https://maloo.whamcloud.com/test_sets/0b2c6ac8-4d79-11e1-a8f4-5254004bbbd3

Comment by Li Wei (Inactive) [ 07/Feb/12 ]

Commit: faefc49f0854987d29639437064e81bbc4556774 (Feb 1, 2012)
Maloo: https://maloo.whamcloud.com/test_sets/325d0fc6-4e50-11e1-88dd-5254004bbbd3

Comment by Li Wei (Inactive) [ 19/Feb/12 ]

Commit: f42d375dfb2f30f64440ee8bc9f78a9a3e9a9adc (Feb 17, 2012)
Maloo: https://maloo.whamcloud.com/test_sets/899b6ff0-5b0f-11e1-8801-5254004bbbd3

Comment by Li Wei (Inactive) [ 07/Mar/12 ]

Commit: 3cf946177abe53ba791203006432272c6c7e798f (Mar 5, 2012)
Maloo: https://maloo.whamcloud.com/test_sets/fcc187c8-65d5-11e1-92b1-5254004bbbd3

Comment by Alex Zhuravlev [ 16/Nov/12 ]

Hmm, is this still happening? If not, I suggest closing it.

Comment by Li Wei (Inactive) [ 16/Nov/12 ]

My bad, forgot to post the latest failure after moving this from Orion.

https://maloo.whamcloud.com/test_sets/2221d300-2f94-11e2-bd52-52540035b04c

Comment by Andreas Dilger [ 03/Dec/12 ]

Still failing:
https://maloo.whamcloud.com/test_sets/78d57236-3bcb-11e2-b98e-52540035b04c

Comment by Mikhail Pershin [ 24/Feb/13 ]

Doesn't this occur because gap handling is disabled? I bet it does. In that case this test should just be disabled, because it is not expected to pass.

Comment by Jodi Levi (Inactive) [ 06/Mar/13 ]

Nathaniel,
Could you discuss with Mike whether this test needs to be disabled for ZFS?

Comment by Nathaniel Clark [ 08/Mar/13 ]

In the test, the +20 added to the before value is, if I understand it correctly, an allowance for the logs. I think (from my work on replay-ost-single tests 6/7, LU-2903) that the logs are larger on ZFS (up to ~256 KB, instead of the 20 assumed here or the 40 in replay-ost-single).
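
For reference, a minimal sketch of the kind of before/after check described above; the margin value and the awk parsing of the lfs df output are assumptions for illustration, not the actual replay-single code:

# Sketch: compare space used before and after MDS failover/recovery,
# allowing a fixed slack for log growth (assumed to be the "+20" above).
MARGIN=20

before=$(lfs df /mnt/lustre | awk '/filesystem summary/ { print $4 }')
# ... fail the MDS, wait for recovery and orphan cleanup ...
after=$(lfs df /mnt/lustre | awk '/filesystem summary/ { print $4 }')

if [ "$after" -gt $((before + MARGIN)) ]; then
    echo "FAIL: after $after > before $before"
fi

If the ZFS logs can really grow by up to ~256 KB, a 20-block slack in a check like this would be far too small, which is consistent with the 128-block difference (6912 - 6784) seen in the log above.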

Comment by Nathaniel Clark [ 10/Mar/13 ]

http://review.whamcloud.com/5666

Comment by Mikhail Pershin [ 11/Mar/13 ]

I hit the same issue in LU-2059: the config logs are hardcoded as 40 blocks, but sometimes they take 44 even on ldiskfs. I tried not to guess and instead read the log size via debugfs, but I don't know how to do the same with ZFS.
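
A rough sketch of that approach for ldiskfs follows; the device path, the llog path under /CONFIGS, and the parsing of the debugfs output are guesses for illustration only, and as noted there is no obvious equivalent for a ZFS-backed MDT:

# Sketch (ldiskfs only): measure the config llog size with debugfs instead
# of hardcoding 40 blocks.  Paths and output parsing are assumptions.
MDSDEV=/dev/lustre-mdt1              # hypothetical MDT device
LLOG=/CONFIGS/lustre-MDT0000         # hypothetical config llog path

size_bytes=$(debugfs -c -R "stat $LLOG" "$MDSDEV" 2>/dev/null |
             sed -n 's/.*Size: *\([0-9][0-9]*\).*/\1/p' | head -1)
blocks=$(( (size_bytes + 1023) / 1024 ))   # bytes -> 1K blocks
echo "config llog uses ~$blocks 1K blocks (rather than an assumed 40)"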

Comment by Peter Jones [ 21/Mar/13 ]

Landed for 2.4
