[LU-9851] parallel-scale-nfsv3 test_metabench: metabench failed! 1

| Created: | 09/Aug/17 | Updated: | 16/Jan/23 | Resolved: | 16/Jan/23 |
| Status: | Resolved |
| Project: | Lustre |
| Component/s: | None |
| Affects Version/s: | Lustre 2.10.1, Lustre 2.11.0, Lustre 2.12.0, Lustre 2.13.0, Lustre 2.12.1, Lustre 2.12.3, Lustre 2.14.0, Lustre 2.12.4 |
| Fix Version/s: | None |
| Type: | Bug | Priority: | Major |
| Reporter: | James Casper | Assignee: | WC Triage |
| Resolution: | Cannot Reproduce | Votes: | 0 |
| Labels: | None |
| Environment: | Trevis2, full |
| Severity: | 3 |
| Rank (Obsolete): | 9223372036854775807 |
| Description |
https://testing.hpdd.intel.com/test_sessions/a723507c-c588-4c9c-acc9-f4838394b4e9

From test_log:

[08/05/2017 08:07:44] Entering time_file_creation with proc_id = 7
[trevis-53vm1.trevis.hpdd.intel.com][[38258,1],6][btl_tcp_frag.c:215:mca_btl_tcp_frag_recv]
[08/05/2017 08:07:49] FATAL error on process 7
Proc 7: Could not create file number [496] named [8f4mh.4yJ]: Disk quota exceeded
mca_btl_tcp_frag_recv: readv failed: Connection reset by peer (104)
[trevis-53vm2.trevis.hpdd.intel.com][[38258,1],5][btl_tcp_frag.c:215:mca_btl_tcp_frag_recv]
mca_btl_tcp_frag_recv: readv failed: Connection reset by peer (104)
--------------------------------------------------------------------------
mpirun has exited due to process rank 7 with PID 32222 on node trevis-53vm2
exiting improperly. There are two reasons this could occur:

1. this process did not call "init" before exiting, but others in the job
did. This can cause a job to hang indefinitely while it waits for all
processes to call "init". By rule, if one process calls "init", then ALL
processes must call "init" prior to termination.

2. this process called "init", but exited without calling "finalize".
By rule, all processes that call "init" MUST call "finalize" prior to
exiting or it will be considered an "abnormal termination"

This may have caused other processes in the application to be terminated
by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------

parallel-scale-nfsv3 test_metabench: @@@@@@ FAIL: metabench failed! 1

Trace dump:
= /usr/lib64/lustre/tests/test-framework.sh:4980:error()
= /usr/lib64/lustre/tests/functions.sh:380:run_metabench()
= /usr/lib64/lustre/tests/parallel-scale-nfs.sh:103:test_metabench()
= /usr/lib64/lustre/tests/test-framework.sh:5256:run_one()
= /usr/lib64/lustre/tests/test-framework.sh:5295:run_one_logged()
= /usr/lib64/lustre/tests/test-framework.sh:5142:run_test()
= /usr/lib64/lustre/tests/parallel-scale-nfs.sh:105:main()
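For context (not from the ticket itself): metabench drives parallel file creation, so EDQUOT here can come from either block or inode limits on the Lustre filesystem backing the NFS export. Below is a minimal triage sketch; the mount point LUSTRE_MNT and user TEST_USER are hypothetical placeholders, not values from this ticket.

# Hedged triage sketch: when metabench reports "Disk quota exceeded"
# during file creation, check both block and inode quotas on the
# Lustre filesystem that backs the NFS export.
LUSTRE_MNT=${LUSTRE_MNT:-/mnt/lustre}   # hypothetical client mount
TEST_USER=${TEST_USER:-quota_usr}       # hypothetical test user

# Per-user usage vs. limits; a '*' in the output marks an exceeded limit.
lfs quota -u "$TEST_USER" "$LUSTRE_MNT"

# Filesystem-wide inode headroom; metabench creates many small files,
# so inode exhaustion can surface as EDQUOT even with free blocks.
lfs df -i "$LUSTRE_MNT"

Because the workload is creation-heavy, the '*' marker in the lfs quota output is what distinguishes inode exhaustion from block exhaustion.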
| Comments |
| Comment by James Nunez (Inactive) [ 18/Oct/18 ] |
From the client test_log:

log: --------------------------------------------------------------------------
log: 0: onyx-45vm5.onyx.whamcloud.com starting at Sat Oct 13 14:46:11 2018
log: rank 0: open(f9208) error: Disk quota exceeded
log: --------------------------------------------------------------------------
log: MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
log: with errorcode 1.

The report with this error in the client test_log was closed as a duplicate of this ticket. After performance-sanity test 3 fails, we see tests 4, 5, 6, 7, and 8 fail with the same "Disk quota exceeded" errors. Here's an example of this failure with interop testing between master servers and 2.10.5 clients:
https://testing.whamcloud.com/test_sets/8bf8f794-cf0d-11e8-ad90-52540065bddc
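One way to read the cascade (an assumption, not a conclusion from the logs): once test 3 leaves the user over quota, every subsequent open()/create on the same mount keeps returning EDQUOT until usage drops or the limits are raised, which would explain tests 4 through 8 failing identically. A small probe sketch, reusing the hypothetical NFS_MNT placeholder:

# Hedged probe sketch: confirm the later failures are leftover quota
# state rather than an NFS-level problem.
NFS_MNT=${NFS_MNT:-/mnt/nfs}   # hypothetical NFS client mount point
for i in 1 2 3; do
    # each create fails with "Disk quota exceeded" while the user
    # remains over quota
    touch "$NFS_MNT/edquot_probe.$i" || echo "probe $i: still over quota"
done
rm -f "$NFS_MNT"/edquot_probe.*   # remove any probes that did succeed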
| Comment by Jian Yu [ 15/Aug/19 ] |
+1 on master branch: https://testing.whamcloud.com/test_sets/a5d93d9a-bf1c-11e9-98c8-52540065bddc |
| Comment by Andreas Dilger [ 16/Jan/23 ] |
Looks like the quota error has not been hit for a long time; I suspect the test was actually exceeding the quota on the test system.
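If that suspicion is right, the remedy on a test system is housekeeping rather than a code change. A hedged cleanup sketch, using the same assumed variables as above (in lfs setquota, a limit of 0 means unlimited):

# Hedged cleanup sketch: clear the test user's limits and remove
# leftover files so later tests do not inherit an over-quota state.
lfs setquota -u "$TEST_USER" -b 0 -B 0 -i 0 -I 0 "$LUSTRE_MNT"
rm -rf "$LUSTRE_MNT"/d0.metabench*        # hypothetical leftover test dirs
lfs quota -u "$TEST_USER" "$LUSTRE_MNT"   # verify usage and limits are sane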