Lustre / LU-19659

parallel-scale-nfsv3 test_iorssf: FAIL: ior failed! 1


Details

    • Type: Bug
    • Resolution: Unresolved
    • Priority: Medium
    • Affects Version/s: Lustre 2.18.0, Lustre 2.17.0
    • Fix Version/s: None
    • Severity: 3

    Description

      This issue was created by maloo for jianyu <yujian@whamcloud.com>

      This issue relates to the following test suite run: https://testing.whamcloud.com/test_sets/4115889f-d0d9-48ca-98b5-2df02fc60f1e

      test_iorssf failed with the following error:

      Results: 
      
      access    bw(MiB/s)  IOPS       Latency(s)  block(KiB) xfer(KiB)  open(s)    wr/rd(s)   close(s)   total(s)   iter
      ------    ---------  ----       ----------  ---------- ---------  --------   --------   --------   --------   ----
      Commencing write performance test: Wed Sep  3 16:51:13 2025
      WARNING: inconsistent file size by different tasks
      WARNING: Expected aggregate file size       = 25165824
      WARNING: Stat() of aggregate file size      = 23527424
      WARNING: Using actual aggregate bytes moved = 25165824
      write     154.81     1277.47    0.003075    6144       1024.00    0.006928   0.018787   0.132599   0.155028   0   
      Verifying contents of the file(s) just written.
      Wed Sep  3 16:51:13 2025
      
      WARNING: Incorrect data on write (1 errors found).
      
      Used Time Stamp 1756918273 (0x68b87201) for Data Signature
      Commencing read performance test: Wed Sep  3 16:51:13 2025
      
      read      604.06     627.71     0.006366    6144       1024.00    0.001990   0.038234   0.000046   0.039731   0   
      Max Write: 154.81 MiB/sec (162.33 MB/sec)
      Max Read:  604.06 MiB/sec (633.40 MB/sec)
      
      Summary of all tests:
      Operation   Max(MiB)   Min(MiB)  Mean(MiB)     StdDev   Max(OPs)   Min(OPs)  Mean(OPs)     StdDev    Mean(s) Stonewall(s) Stonewall(MiB) Test# #Tasks tPN reps fPP reord reordoff reordrand seed segcnt   blksiz    xsize aggs(MiB)   API RefNum
      write         154.81     154.81     154.81       0.00     154.81     154.81     154.81       0.00    0.15503         NA            NA     0      4   2    1   0     1        1         0    0      1  6291456  1048576      24.0 POSIX      0
      read          604.06     604.06     604.06       0.00     604.06     604.06     604.06       0.00    0.03973         NA            NA     0      4   2    1   0     1        1         0    0      1  6291456  1048576      24.0 POSIX      0
      Finished            : Wed Sep  3 16:51:13 2025
      --------------------------------------------------------------------------
      Primary job  terminated normally, but 1 process returned
      a non-zero exit code. Per user-direction, the job has been aborted.
      --------------------------------------------------------------------------
      --------------------------------------------------------------------------
      mpirun detected that one or more processes exited with non-zero status, thus causing
      the job to be terminated. The first process to do so was:
      
        Process name: [[20902,1],0]
        Exit code:    2
      --------------------------------------------------------------------------
      [onyx-151vm1.onyx.whamcloud.com:897430] 3 more processes have sent help message help-mtl-ofi.txt / OFI call fail
      [onyx-151vm1.onyx.whamcloud.com:897430] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
       parallel-scale-nfsv3 test_iorssf: @@@@@@ FAIL: ior failed! 1 
      

      Test session details:
      clients: https://build.whamcloud.com/job/lustre-master/4650 - 4.18.0-553.71.1.el8_10.x86_64
      servers: https://build.whamcloud.com/job/lustre-master/4650 - 4.18.0-553.71.1.el8_lustre.x86_64

      <<Please provide additional information about the failure here>>
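
      For reference, the size figures in the warnings are consistent with the summary line: the expected aggregate is 4 tasks x 1 segment x 6291456-byte blocks = 25165824 bytes (24 MiB), while stat() on the shared file returned 23527424 bytes, i.e. 1638400 bytes less, so at least one task's view of the shared file did not yet include all of the other tasks' writes at verification time.

      A rough standalone reproduction sketch (an approximation, not the exact command generated by the test framework; the IOR parameters are taken from the summary line above, and the client host names and $NFS_MOUNT path are placeholders for the NFS-mounted Lustre export):

      mpirun -np 4 --host client1:2,client2:2 \
          ior -a POSIX -w -r -W -C \
              -b 6m -t 1m -s 1 -i 1 \
              -o $NFS_MOUNT/iorData

      Here -W enables the post-write verification that reported the data error, and -C reorders tasks for read-back (reord=1 in the summary), so each task checks blocks written by a different task.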

      VVVVVVV DO NOT REMOVE LINES BELOW, Added by Maloo for auto-association VVVVVVV
      parallel-scale-nfsv3 test_iorssf - ior failed! 1


          People

            Assignee: WC Triage
            Reporter: Maloo
