[LU-3512] sanity test 180c: failed in developers testing Created: 26/Jun/13  Updated: 13/Oct/21  Resolved: 13/Oct/21

Status: Resolved
Project: Lustre
Component/s: None
Affects Version/s: Lustre 2.5.0
Fix Version/s: None

Type: Bug Priority: Minor
Reporter: Alexey Lyashkov Assignee: WC Triage
Resolution: Low Priority Votes: 0
Labels: None
Environment:

RHEL6, tests run from the build directory


Issue Links:
Related
is related to LU-10903 SLES validation: sanity test_180c: Ca... Resolved
Severity: 3
Rank (Obsolete): 8836

 Description   
== sanity test 180c: test huge bulk I/O size on obdfilter, don't LASSERT == 16:30:19 (1372253419)
New object id is 0x4
getattr: object id 0x4
getattr: object id 4, mode 106666
error: test_brw: #2 - No space left on device on write
Print status every operation
test_brw: writing 10x16384 pages (obj 0x4, off 0): Wed Jun 26 16:30:19 2013
test_brw: write number 1 @ 4:0 for 67108864
obecho_create_test failed: 4
 sanity test_180c: @@@@@@ FAIL: test_180c failed with 4 
  Trace dump:
  = /Users/shadow/work/lustre/work/WC-review/ldlm-stack/lustre/tests/test-framework.sh:4064:error_noexit()
  = /Users/shadow/work/lustre/work/WC-review/ldlm-stack/lustre/tests/test-framework.sh:4091:error()
  = /Users/shadow/work/lustre/work/WC-review/ldlm-stack/lustre/tests/test-framework.sh:4330:run_one()
  = /Users/shadow/work/lustre/work/WC-review/ldlm-stack/lustre/tests/test-framework.sh:4363:run_one_logged()
  = /Users/shadow/work/lustre/work/WC-review/ldlm-stack/lustre/tests/test-framework.sh:4233:run_test()
  = sanity.sh:9636:main()
Dumping lctl log to /tmp/test_logs/1372251610/sanity.test_180c.*.1372253420.log
Dumping logs only on local client.
FAIL 180c (2s)

The default local.sh config doesn't provide enough space for this test to run. I think the test needs to be fixed to scale its I/O sizes to the OST size, as other tests already do.
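As a minimal sketch of what that sizing could look like (following the pattern other sanity.sh tests use; the variable names and the lfs df parsing here are illustrative assumptions, not the actual patch):

    # cap the write so 10 BRWs of $pages 4KB pages stay well below the
    # free space reported for OST0000 (column 4 of "lfs df" output is
    # Available in KB; the awk pattern is an assumption)
    local ost_free_kb=$($LFS df $MOUNT | awk '/OST:0]/ { print $4 }')
    local pages=16384                     # 64MB per BRW in 4KB pages
    if (( 10 * pages * 4 >= ost_free_kb / 2 )); then
            # shrink so the test uses at most half the available space
            pages=$((ost_free_kb / (10 * 4 * 2)))
    fi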



 Comments   
Comment by Alexey Lyashkov [ 26/Jun/13 ]

Also, the test forgets to remove its object after a failure, so it causes many subsequent test failures, for example:

== sanity test 200: OST pools == 16:31:06 (1372253466)
Creating new pool
Pool lustre.cea1 created
Adding targets to pool
OST lustre-OST0000_UUID added to pool lustre.cea1
Setting pool on directory /mnt/lustre/d200.pools/dir_tst
Checking pool on directory /mnt/lustre/d200.pools/dir_tst
Testing relative path works well
Setting pool on directory dir_tst
Setting pool on directory ./dir_tst
Setting pool on directory ../dir_tst
Setting pool on directory ../dir_tst/dir_tst
Checking files allocation from directory pool
touch: cannot touch `/mnt/lustre/d200.pools/dir_tst/file-1': No space left on device
Comment by Oleg Drokin [ 02/Jul/13 ]

So if the fs size is smaller, we need to reduce the amount of written data (if we need to write that much in the first place).
Also, of course, the cleanup needs to be fixed.

Comment by Andreas Dilger [ 02/Jul/13 ]

For this test it would be enough to do only one BRW at the 64MB size instead of 10. It should clean up the object at the end (via a trap) if the test fails.
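
As a sketch of that shape of fix (stack_trap is the test-framework.sh helper in current trees, and a plain trap would serve in 2013-era code; the device name $echo_dev, the destroy arguments, and the awk field below are illustrative assumptions):

    # create one echo object, arrange its destruction even on failure,
    # then do a single 64MB BRW (16384 x 4KB pages) instead of ten
    local id=$($LCTL --device $echo_dev create 1 |
               awk '/object id/ { print $6 }')
    stack_trap "$LCTL --device $echo_dev destroy $id" EXIT
    $LCTL --device $echo_dev test_brw 1 w v 16384 $id ||
            error "test_brw failed"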

Comment by parinay v kondekar (Inactive) [ 29/Jul/13 ]

Xyratex-bug-id - MRP-1241

Comment by nasf (Inactive) [ 07/May/16 ]

Another failure instance on Master:
https://testing.hpdd.intel.com/test_sets/91cbecde-13c7-11e6-9b34-5254006e85c2
