[LU-3052] Interop 1.8.9<->2.4 failure on test suite parallel-scale test_metabench: Disk quota exceeded Created: 28/Mar/13  Updated: 22/Jun/16  Resolved: 22/Jun/16

Status: Resolved
Project: Lustre
Component/s: None
Affects Version/s: Lustre 2.4.0, Lustre 1.8.9
Fix Version/s: None

Type: Bug Priority: Blocker
Reporter: Maloo Assignee: Niu Yawei (Inactive)
Resolution: Won't Fix Votes: 0
Labels: mn8, yuc2
Environment:

client: 1.8.9
server: lustre-master build #1338


Issue Links:
Duplicate
is duplicated by LU-3930 1.8.9<->2.4.1 interop: parallel-scale... Closed
Severity: 3
Rank (Obsolete): 7446

 Description   

This issue was created by maloo for sarah <sarah@whamcloud.com>

This issue relates to the following test suite run: https://maloo.whamcloud.com/test_sets/7ee8f8aa-9490-11e2-93c6-52540035b04c.

The sub-test test_metabench failed with the following error:

metabench failed! 1

Metadata Test <no-name> on 03/23/2013 at 11:46:13

Rank   0 process on node client-27vm5.lab.whamcloud.com
Rank   1 process on node client-27vm6.lab.whamcloud.com
Rank   2 process on node client-27vm5.lab.whamcloud.com
Rank   3 process on node client-27vm6.lab.whamcloud.com
Rank   4 process on node client-27vm5.lab.whamcloud.com
Rank   5 process on node client-27vm6.lab.whamcloud.com
Rank   6 process on node client-27vm5.lab.whamcloud.com
Rank   7 process on node client-27vm6.lab.whamcloud.com

[03/23/2013 11:46:13] FATAL error on process 0
Proc 0: cannot create component TIME_CREATE_007.000 in /mnt/lustre/d0.metabench/TIME_CREATE_007.000: Disk quota exceeded
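
For reference, when a create fails with "Disk quota exceeded" like this, the quota state can be inspected from a client with standard lfs commands (a minimal sketch; the user and mount point below are assumptions based on the test setup):

# Block/inode usage and limits for the test user on the Lustre mount
# (substitute the actual quota user and mount point).
lfs quota -u $USER /mnt/lustre

# Per-target free space and free inodes, to rule out a genuinely full MDT/OST.
lfs df /mnt/lustre
lfs df -i /mnt/lustre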


 Comments   
Comment by Peter Jones [ 28/Mar/13 ]

Niu

Could you please comment on this one?

Thanks

Peter

Comment by Niu Yawei (Inactive) [ 08/Apr/13 ]

Looks like all creations failed with -EDQUOT, but unfortunately there isn't any useful log.

I noticed that all tests in performance-sanity.sh failed for the same reason. Sarah, could you reproduce it with D_TRACE enabled on the MDS and collect the MDS log? Thanks.
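
A minimal sketch of how that could be done by hand on the MDS with standard lctl commands (the debug buffer size and dump path here are only suggestions; the autotest runs later in this ticket use PTLDEBUG=-1 and DEBUG_SIZE=150 instead):

# On the MDS: enable the trace flag, enlarge the debug buffer, and clear it
# before reproducing the failure (PTLDEBUG=-1 would enable all debug flags).
lctl set_param debug=+trace
lctl set_param debug_mb=150
lctl clear

# ... re-run the failing test (e.g. metabench) from the 1.8.9 client ...

# Dump the accumulated kernel debug log so it can be attached to the ticket.
lctl dk /tmp/mds-debug.log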

Comment by Jian Yu [ 14/Aug/13 ]

Lustre client build: http://build.whamcloud.com/job/lustre-b1_8/258/ (1.8.9-wc1)
Lustre server build: http://build.whamcloud.com/job/lustre-b2_4/31/

All of the following tests hit the "Disk quota exceeded" failure.

sanity-benchmark test iozone:
https://maloo.whamcloud.com/test_sets/2219b5a0-0485-11e3-90ba-52540035b04c
parallel-scale-nfsv3 test metabench:
https://maloo.whamcloud.com/test_sets/f906045e-048c-11e3-90ba-52540035b04c
parallel-scale-nfsv4 test metabench:
https://maloo.whamcloud.com/test_sets/5c847d7a-048e-11e3-90ba-52540035b04c
performance-sanity:
https://maloo.whamcloud.com/test_sets/fe62bd4a-048a-11e3-90ba-52540035b04c

Comment by Niu Yawei (Inactive) [ 29/Aug/13 ]

Hi, yujian
Looks like there isn't a single line of log in these Maloo links; do you know why there isn't any log? If possible, could you reproduce it manually and try to get some logs on the MDS? Thanks.

Comment by Jian Yu [ 29/Aug/13 ]

Looks like there isn't a single line of log in these Maloo links; do you know why there isn't any log?

I think it's caused by the Maloo crash issue that occurred last weekend.

If possible, could you reproduce it manually and try to get some logs on the MDS?

Let's wait for the test results of patch set 4 in http://review.whamcloud.com/7340:

Test-Parameters: fortestonly \
envdefinitions=SLOW=yes,ENABLE_QUOTA=yes,PTLDEBUG=-1,DEBUG_SIZE=150,ONLY=3 \
clientdistro=el6 serverdistro=el6 clientarch=x86_64 \
serverarch=x86_64 clientjob=lustre-b1_8 clientbuildno=258 \
serverjob=lustre-b2_4 serverbuildno=40 mdtcount=1 \
testlist=performance-sanity

Comment by Jian Yu [ 01/Sep/13 ]

Hi Niu,

Here is the test report with logs gathered: https://maloo.whamcloud.com/test_sessions/f19c10e4-1190-11e3-8029-52540035b04c

Comment by Niu Yawei (Inactive) [ 02/Sep/13 ]

Sigh, another typo: 'lfs df' was replaced with 'lfs_df' without changing the number accordingly.
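
For context, a minimal sketch of the class of bug being described, assuming the test setup derives its quota limit from free-space output with awk (the field number, variable names, and limits below are illustrative, not the actual test-framework code):

# 'lfs df' prints a header, per-target lines, and a summary line; suppose the
# available-KB value is taken from column 4 of the summary line.
FREE=$(lfs df /mnt/lustre | awk '/filesystem_summary/ { print $4 }')

# If 'lfs df' is swapped for a wrapper ('lfs_df') whose output has a different
# column layout, the same field index silently picks up the wrong value, and
# the quota limit derived from it can end up far too small, so every create in
# the benchmark fails with EDQUOT.
BLK_LIMIT=$((FREE / 2))
lfs setquota -u $USER -b 0 -B $BLK_LIMIT -i 0 -I 0 /mnt/lustre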

Comment by Niu Yawei (Inactive) [ 02/Sep/13 ]

b1_8: http://review.whamcloud.com/7520

Comment by Niu Yawei (Inactive) [ 22/Jun/16 ]

The patch for b1_8 was abandoned, so this one can be closed as "Won't Fix".
