LU-9925

mount.lustre: /dev/sda has not been formatted with mkfs.lustre or the backend filesystem type is not supported by this tool


    Details

    • Type: Bug
    • Resolution: Not a Bug
    • Priority: Critical
    • Affects Version/s: Lustre 2.10.0
    • Environment: Lustre: Build Version: 2.10.0_62_ge1d3a0e

    Description

      We are occasionally seeing errors such as the following when trying to mount a Lustre target:

      # mount -t lustre /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_disk15 /mnt/testfs-OST0000
      mount.lustre: /dev/sda has not been formatted with mkfs.lustre or the backend filesystem type is not supported by this tool
      # echo $?
      19
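
      Exit status 19 corresponds to errno ENODEV ("No such device") on Linux. As a quick cross-check (assuming the kernel headers are installed at their usual location):

      # grep -w ENODEV /usr/include/asm-generic/errno-base.h
      #define ENODEV          19      /* No such device */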
      
      This is after the target had previously been formatted and "registered" (i.e. given its initial mount):

      # mkfs.lustre --ost --mgsnode=10.14.82.168@tcp0 --mgsnode=10.14.82.169@tcp0 --failnode=10.14.82.171@tcp0 --index=0 --mkfsoptions=-J size=2048 --backfstype=ldiskfs --fsname=testfs /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_disk15
      # mount -t lustre /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_disk15 /mnt/testfs-OST0000
      # umount /dev/sda
      

      After the above sequence, the kernel log contains:

      Aug 29 06:01:44 lotus-44vm17.lotus.hpdd.lab.intel.com kernel: LDISKFS-fs (sda): file extents enabled, maximum tree depth=5
      Aug 29 06:01:44 lotus-44vm17.lotus.hpdd.lab.intel.com kernel: LDISKFS-fs (sda): mounted filesystem with ordered data mode. Opts: errors=remount-ro
      Aug 29 06:01:44 lotus-44vm17.lotus.hpdd.lab.intel.com kernel: LDISKFS-fs (sda): file extents enabled, maximum tree depth=5
      Aug 29 06:01:44 lotus-44vm17.lotus.hpdd.lab.intel.com kernel: LDISKFS-fs (sda): mounted filesystem with ordered data mode. Opts: ,errors=remount-ro,no_mbcache,nodelalloc
      Aug 29 06:01:44 lotus-44vm17.lotus.hpdd.lab.intel.com kernel: Lustre: testfs-OST0000: new disk, initializing
      Aug 29 06:01:44 lotus-44vm17.lotus.hpdd.lab.intel.com kernel: Lustre: srv-testfs-OST0000: No data found on store. Initialize space
      Aug 29 06:01:44 lotus-44vm17.lotus.hpdd.lab.intel.com kernel: Lustre: testfs-OST0000: Imperative Recovery not enabled, recovery window 300-900
      Aug 29 06:01:45 lotus-44vm17.lotus.hpdd.lab.intel.com kernel: Lustre: Failing over testfs-OST0000
      Aug 29 06:01:46 lotus-44vm17.lotus.hpdd.lab.intel.com kernel: Lustre: server umount testfs-OST0000 complete
      
      So it sure seems as if the format and registration were successful.
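
      One way to confirm that from userspace, without modifying anything on disk, would be to read the Lustre label back; a sketch using the same device path as above (tunefs.lustre --dryrun only prints the current target configuration and writes nothing):

      # tunefs.lustre --dryrun /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_disk15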

      However, occasionally (as noted above), when we subsequently [re-]mount that target with the command:

      # mount -t lustre /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_disk15 /mnt/testfs-OST0000
      
      we get an error (also as noted at the top of this report):

      mount.lustre: /dev/sda has not been formatted with mkfs.lustre or the backend filesystem type is not supported by this tool
      # echo $?
      19
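
      Since the message indicates that mount.lustre could not identify the backing filesystem, it may also be worth checking what a low-level blkid probe of the device reports at the moment the error fires; a sketch using standard util-linux blkid options:

      # blkid -p -s TYPE -o value /dev/sda

      On a healthy ldiskfs target this would typically print "ext4"; empty output would suggest that the superblock probe itself is failing at that point.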
      
      We are certain we have the correct device(s) on this node:

      # ls -l /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_disk15
      lrwxrwxrwx 1 root root 9 Aug 29 08:50 /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_disk15 -> ../../sda
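
      For completeness, a failing attempt can also be wrapped in strace to capture exactly which opens, reads and ioctls mount.lustre issues against the device before giving up. The invocation below is only illustrative (output file name chosen arbitrarily); compare the attached mount-ENODEV-*.strace files:

      # strace -f -tt -o mount-ENODEV.strace mount -t lustre /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_disk15 /mnt/testfs-OST0000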
      
      Attachments

        1. dmesg-host.txt.bz2
          90 kB
        2. dmesg-vm.txt.bz2
          12 kB
        3. lustre-long-mount.dk
          761 kB
        4. mount-ENODEV-1504228032.4.strace
          343 kB
        5. mount-ENODEV-1504228037.75.strace
          357 kB
        6. mount-ENODEV-1504228041.99.strace
          368 kB
        7. mount-ENODEV-1504228045.08.strace
          283 kB
        8. mount-ENODEV-1504228051.03.strace
          369 kB
        9. mount-ENODEV-1504228057.35.strace
          359 kB
        10. mount-ENODEV-1504228061.53.strace
          287 kB

          People

            Assignee: wc-triage WC Triage
            Reporter: brian Brian Murrell (Inactive)
