[LU-2342] replay-single test_20b: @@@@@@ FAIL: after 6912 > before 6784 Created: 08/Jan/12 Updated: 01/Oct/14 Resolved: 21/Mar/13 |
|
| Status: | Resolved |
| Project: | Lustre |
| Component/s: | None |
| Affects Version/s: | Lustre 2.4.0 |
| Fix Version/s: | Lustre 2.4.0 |
| Type: | Bug | Priority: | Blocker |
| Reporter: | Li Wei (Inactive) | Assignee: | Nathaniel Clark |
| Resolution: | Fixed | Votes: | 0 |
| Labels: | HB, zfs | ||
| Severity: | 3 |
| Rank (Obsolete): | 2858 |
| Description |
|
Commit: 1c0dfc1ae9637cbfe5dabc0ea67c29633b5b04ec (Dec 17, 2011)

== replay-single test 20b: write, unlink, eviction, replay, (test mds_cleanup_orphans) == 19:32:44 (1325043164)
/mnt/lustre/f20b
lmm_stripe_count:   1
lmm_stripe_size:    1048576
lmm_stripe_offset:  0
        obdidx           objid          objid            group
             0            1150          0x47e                0
stat: cannot read file system information for `/mnt/lustre': Interrupted system call
10000+0 records in
10000+0 records out
40960000 bytes (41 MB) copied, 2.43149 s, 16.8 MB/s
Failing mds1 on node fat-intel-3vm3
Stopping /mnt/mds1 (opts:)
affected facets: mds1
Failover mds1 to fat-intel-3vm3
19:33:10 (1325043190) waiting for fat-intel-3vm3 network 900 secs ...
19:33:10 (1325043190) network interface is UP
Starting mds1: -o user_xattr,acl lustre-mdt1/mdt1 /mnt/mds1
fat-intel-3vm3: debug=0x33f0404
fat-intel-3vm3: subsystem_debug=0xffb7e3ff
fat-intel-3vm3: debug_mb=32
Started lustre-MDT0000
affected facets: mds1
fat-intel-3vm3: *.lustre-MDT0000.recovery_status status: COMPLETE
Waiting for orphan cleanup...
before 6784, after 6912
UUID                 1K-blocks   Used  Available Use% Mounted on
lustre-MDT0000_UUID   29443456   5760   29435648   0% /mnt/lustre[MDT:0]
lustre-OST0000_UUID    2031872    896    1994112   0% /mnt/lustre[OST:0]
lustre-OST0001_UUID    2031872    896    2028032   0% /mnt/lustre[OST:1]
lustre-OST0002_UUID    2031872    896    1823872   0% /mnt/lustre[OST:2]
lustre-OST0003_UUID    2031872   1024    1799424   0% /mnt/lustre[OST:3]
lustre-OST0004_UUID    2031872    896    1996160   0% /mnt/lustre[OST:4]
lustre-OST0005_UUID    2032000   1024    2027904   0% /mnt/lustre[OST:5]
lustre-OST0006_UUID    2032000   1280    2027776   0% /mnt/lustre[OST:6]
filesystem summary:   14223360   6912   13697280   0% /mnt/lustre
osp.lustre-OST0000-osp-MDT0000.sync_changes=0
osp.lustre-OST0000-osp-MDT0000.sync_in_flight=0
osp.lustre-OST0000-osp-MDT0000.sync_in_progress=0
osp.lustre-OST0001-osp-MDT0000.sync_changes=0
osp.lustre-OST0001-osp-MDT0000.sync_in_flight=0
osp.lustre-OST0001-osp-MDT0000.sync_in_progress=0
osp.lustre-OST0002-osp-MDT0000.sync_changes=0
osp.lustre-OST0002-osp-MDT0000.sync_in_flight=0
osp.lustre-OST0002-osp-MDT0000.sync_in_progress=0
osp.lustre-OST0003-osp-MDT0000.sync_changes=0
osp.lustre-OST0003-osp-MDT0000.sync_in_flight=0
osp.lustre-OST0003-osp-MDT0000.sync_in_progress=0
osp.lustre-OST0004-osp-MDT0000.sync_changes=0
osp.lustre-OST0004-osp-MDT0000.sync_in_flight=0
osp.lustre-OST0004-osp-MDT0000.sync_in_progress=0
osp.lustre-OST0005-osp-MDT0000.sync_changes=0
osp.lustre-OST0005-osp-MDT0000.sync_in_flight=0
osp.lustre-OST0005-osp-MDT0000.sync_in_progress=0
osp.lustre-OST0006-osp-MDT0000.sync_changes=0
osp.lustre-OST0006-osp-MDT0000.sync_in_flight=0
osp.lustre-OST0006-osp-MDT0000.sync_in_progress=0
osp.lustre-OST0000-osp-MDT0000.prealloc_status=0
osp.lustre-OST0001-osp-MDT0000.prealloc_status=0
osp.lustre-OST0002-osp-MDT0000.prealloc_status=0
osp.lustre-OST0003-osp-MDT0000.prealloc_status=0
osp.lustre-OST0004-osp-MDT0000.prealloc_status=0
osp.lustre-OST0005-osp-MDT0000.prealloc_status=0
osp.lustre-OST0006-osp-MDT0000.prealloc_status=0
replay-single test_20b: @@@@@@ FAIL: after 6912 > before 6784
Dumping lctl log to /logdir/test_logs/2011-12-27/lustre-dev-el6-x86_64-zfs__277__-7fa5674d2740/replay-single.test_20b.*.1325043197.log |
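For context, the failing assertion compares the used-block count reported by `lfs df` before and after the eviction/replay/orphan-cleanup cycle. A minimal sketch of that comparison is below; the variable names (BEFORE, AFTER, MARGIN) are illustrative, not the exact test-framework code, and the 20-block slack reflects the "+ 20" allowance discussed later in the comments:

```shell
#!/bin/sh
# Illustrative sketch of the space check replay-single test_20b performs.
# BEFORE/AFTER/MARGIN are hypothetical names, not the test script's own.
BEFORE=6784   # 1K blocks used before eviction/replay (from "lfs df")
AFTER=6912    # 1K blocks used after mds_cleanup_orphans completes
MARGIN=20    # slack allowed for log records

if [ "$AFTER" -gt $((BEFORE + MARGIN)) ]; then
    echo "FAIL: after $AFTER > before $BEFORE"
else
    echo "PASS: space returned after orphan cleanup"
fi
# prints: FAIL: after 6912 > before 6784
```

With the values from this run, 6912 exceeds 6784 + 20, so the check reports the FAIL line seen in the log above.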
| Comments |
| Comment by Li Wei (Inactive) [ 16/Jan/12 ] |
|
Commit: 526c43ec2e47ead878f0df552b74c78b4fc79d1f (Jan 13, 2012) |
| Comment by Li Wei (Inactive) [ 02/Feb/12 ] |
|
Commit: faefc49f0854987d29639437064e81bbc4556774 (Feb 1, 2012) |
| Comment by Li Wei (Inactive) [ 07/Feb/12 ] |
|
Commit: faefc49f0854987d29639437064e81bbc4556774 (Feb 1, 2012) |
| Comment by Li Wei (Inactive) [ 19/Feb/12 ] |
|
Commit: f42d375dfb2f30f64440ee8bc9f78a9a3e9a9adc (Feb 17, 2012) |
| Comment by Li Wei (Inactive) [ 07/Mar/12 ] |
|
Commit: 3cf946177abe53ba791203006432272c6c7e798f (Mar 5, 2012) |
| Comment by Alex Zhuravlev [ 16/Nov/12 ] |
|
Hmm, is this still happening? If not, I suggest we close it. |
| Comment by Li Wei (Inactive) [ 16/Nov/12 ] |
|
My bad, forgot to post the latest failure after moving this from Orion. https://maloo.whamcloud.com/test_sets/2221d300-2f94-11e2-bd52-52540035b04c |
| Comment by Andreas Dilger [ 03/Dec/12 ] |
|
Still failing: |
| Comment by Mikhail Pershin [ 24/Feb/13 ] |
|
Doesn't this occur because gap handling is disabled? I bet it does. In that case the test should just be disabled, because it is not supposed to pass. |
| Comment by Jodi Levi (Inactive) [ 06/Mar/13 ] |
|
Nathaniel, |
| Comment by Nathaniel Clark [ 08/Mar/13 ] |
|
In the test, the +20 on the before value is, if I understand it correctly, for the logs. I think (from my work on replay-ost-single test 6/7 - |
| Comment by Nathaniel Clark [ 10/Mar/13 ] |
| Comment by Mikhail Pershin [ 11/Mar/13 ] |
|
I met the same issue in |
| Comment by Peter Jones [ 21/Mar/13 ] |
|
Landed for 2.4 |