Lustre / LU-9851

parallel-scale-nfsv3 test_metabench: metabench failed! 1

    Details

    • Type: Bug
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: Lustre 2.10.1, Lustre 2.11.0, Lustre 2.12.0, Lustre 2.12.1, Lustre 2.12.3, Lustre 2.12.4, Lustre 2.13.0
    • Fix Version/s: None
    • Environment:
      Trevis2, full
      server: RHEL 7.4, zfs, branch master, v2.10.51, b3620
      client: RHEL 7.4, branch master, v2.10.51, b3620
    • Severity: 3

      Description

      https://testing.hpdd.intel.com/test_sessions/a723507c-c588-4c9c-acc9-f4838394b4e9

      From test_log:

      [08/05/2017 08:07:44] Entering time_file_creation with proc_id = 7
      [trevis-53vm1.trevis.hpdd.intel.com][[38258,1],6][btl_tcp_frag.c:215:mca_btl_tcp_frag_recv] [08/05/2017 08:07:49] FATAL error on process 7
      Proc 7: Could not create file number [496] named [8f4mh.4yJ]: Disk quota exceeded
      mca_btl_tcp_frag_recv: readv failed: Connection reset by peer (104)
      [trevis-53vm2.trevis.hpdd.intel.com][[38258,1],5][btl_tcp_frag.c:215:mca_btl_tcp_frag_recv] mca_btl_tcp_frag_recv: readv failed: Connection reset by peer (104)
      --------------------------------------------------------------------------
      mpirun has exited due to process rank 7 with PID 32222 on
      node trevis-53vm2 exiting improperly. There are two reasons this could occur:
      
      1. this process did not call "init" before exiting, but others in
      the job did. This can cause a job to hang indefinitely while it waits
      for all processes to call "init". By rule, if one process calls "init",
      then ALL processes must call "init" prior to termination.
      
      2. this process called "init", but exited without calling "finalize".
      By rule, all processes that call "init" MUST call "finalize" prior to
      exiting or it will be considered an "abnormal termination"
      
      This may have caused other processes in the application to be
      terminated by signals sent by mpirun (as reported here).
      --------------------------------------------------------------------------
       parallel-scale-nfsv3 test_metabench: @@@@@@ FAIL: metabench failed! 1 
        Trace dump:
        = /usr/lib64/lustre/tests/test-framework.sh:4980:error()
        = /usr/lib64/lustre/tests/functions.sh:380:run_metabench()
        = /usr/lib64/lustre/tests/parallel-scale-nfs.sh:103:test_metabench()
        = /usr/lib64/lustre/tests/test-framework.sh:5256:run_one()
        = /usr/lib64/lustre/tests/test-framework.sh:5295:run_one_logged()
        = /usr/lib64/lustre/tests/test-framework.sh:5142:run_test()
        = /usr/lib64/lustre/tests/parallel-scale-nfs.sh:105:main()
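
      The immediate cause shown in the log is rank 7 hitting EDQUOT ("Disk quota exceeded") in its file-creation loop and exiting without calling MPI_Finalize(), which is exactly what mpirun then reports as an abnormal termination. A minimal C sketch of that failure pattern follows; the file names, counts, and error handling here are hypothetical, not metabench's actual source:

      /* Sketch of the failure pattern: each rank creates files until
       * open() fails, then exits the way the log shows rank 7 did,
       * i.e. without calling MPI_Finalize(). */
      #include <errno.h>
      #include <fcntl.h>
      #include <mpi.h>
      #include <stdio.h>
      #include <stdlib.h>
      #include <string.h>
      #include <unistd.h>

      int main(int argc, char **argv)
      {
          int rank;
          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);

          for (int i = 0; i < 1000; i++) {
              char name[64];
              snprintf(name, sizeof(name), "rank%d.file%d", rank, i);
              int fd = open(name, O_CREAT | O_EXCL | O_WRONLY, 0644);
              if (fd < 0) {
                  /* EDQUOT is what rank 7 hit at file number 496 */
                  fprintf(stderr, "FATAL error on process %d\n", rank);
                  fprintf(stderr,
                          "Proc %d: Could not create file number [%d] named [%s]: %s\n",
                          rank, i, name, strerror(errno));
                  exit(1);   /* skips MPI_Finalize(); triggers mpirun's complaint */
              }
              close(fd);
          }

          MPI_Finalize();
          return 0;
      }

      Exiting through MPI_Abort(MPI_COMM_WORLD, 1) instead of exit(1) would let mpirun tear the job down without the init/finalize complaint, but the quota exhaustion on the NFS-exported mount would remain the error to chase.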
      

    People

    • Assignee: wc-triage WC Triage
    • Reporter: casperjx James Casper (Inactive)
    • Votes: 0
    • Watchers: 3
