  Lustre / LU-1977

Test failure on test suite sanity, subtest test_228b


Details

    • Type: Bug
    • Resolution: Fixed
    • Priority: Minor
    • Fix Version/s: Lustre 2.4.0
    • Affects Version/s: Lustre 2.3.0
    • Severity: 3
    • 7545

    Description

      This issue was created by maloo for yujian <yujian@whamcloud.com>

      This issue relates to the following test suite run: https://maloo.whamcloud.com/test_sets/ff130926-0241-11e2-ab94-52540035b04c.

      The sub-test test_228b failed with the following error:

      == sanity test 228b: idle OI blocks can be reused after MDT restart == 01:46:50 (1348044410)
      fail_loc=0x80001002
      open(/mnt/lustre/d0.sanity/d228/t-9288) error: Input/output error
      total: 9288 creates in 81.11 seconds: 114.51 creates/second
      fail_loc=0
      CMD: fat-intel-2 sync
      CMD: fat-intel-2 debugfs -c -R \"stat oi.16.63\" /dev/sdc5
      fat-intel-2: debugfs 1.42.3.wc3 (15-Aug-2012)
      fat-intel-2: /dev/sdc5: catastrophic mode - not reading inode or group bitmaps
       - unlinked 0 (time 1348044498 ; total 0 ; last 0)
      unlink(/mnt/lustre/d0.sanity/d228/t-9288) error: No such file or directory
      total: 9288 unlinks in 11 seconds: 844.363647 unlinks/second
      CMD: fat-intel-2 grep -c /mnt/mds1' ' /proc/mounts
      Stopping /mnt/mds1 (opts:) on fat-intel-2
      CMD: fat-intel-2 umount -d /mnt/mds1
      CMD: fat-intel-2 lsmod | grep lnet > /dev/null && lctl dl | grep ' ST '
      CMD: fat-intel-2 mkdir -p /mnt/mds1
      CMD: fat-intel-2 test -b /dev/sdc5
      Starting mds1:   /dev/sdc5 /mnt/mds1
      CMD: fat-intel-2 mkdir -p /mnt/mds1; mount -t lustre   		                   /dev/sdc5 /mnt/mds1
      CMD: fat-intel-2 PATH=/usr/lib64/lustre/tests:/usr/lib/lustre/tests:/usr/lib64/lustre/tests:/opt/iozone/bin:/opt/iozone/bin:/usr/lib64/lustre/tests/mpi:/usr/lib64/lustre/tests/racer:/usr/lib64/lustre/../lustre-iokit/sgpdd-survey:/usr/lib64/lustre/tests:/usr/lib64/lustre/utils/gss:/usr/lib64/lustre/utils:/usr/lib64/openmpi/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin::/sbin NAME=ncli sh rpc.sh set_default_debug \"vfstrace rpctrace dlmtrace neterror ha config ioctl super\" \"all -lnet -lnd -pinger\" 48 
      CMD: fat-intel-2 e2label /dev/sdc5 2>/dev/null
      Started lustre-MDT0000
      df: `/mnt/lustre': Cannot send after transport endpoint shutdown
      df: no file systems processed
       sanity test_228b: @@@@@@ FAIL: Fail to df.
      

      Info required for matching: sanity 228b

      Lustre Build: http://build.whamcloud.com/job/lustre-b2_3/19
      USE_OFD=yes
      OSTFSTYPE=zfs
      LOAD_MODULES_REMOTE=true
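
      To reproduce against a comparable setup, a minimal sketch of re-running just this
      subtest with the same settings as the failing run (ONLY/NAME are standard Lustre
      test-framework conventions; NAME=ncli, the tests path, and the three environment
      settings above come from this report, everything else is an assumption):

        # Re-run only sanity subtest 228b with the configuration noted in this report.
        # Adjust the path and the ncli config file to the local test setup.
        cd /usr/lib64/lustre/tests
        export OSTFSTYPE=zfs              # OSTs backed by ZFS, as in the failing run
        export USE_OFD=yes                # OFD-based OSTs, as in the failing run
        export LOAD_MODULES_REMOTE=true   # load modules on remote nodes as well
        NAME=ncli ONLY=228b sh sanity.sh  # run just test_228b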


          People

            Assignee: WC Triage (wc-triage)
            Reporter: Maloo (maloo)
