LU-6620

Cannot re-use a loop device for an MDT once it is unmounted from Lustre; throws error: device is busy


Details

    • Type: Bug
    • Resolution: Cannot Reproduce
    • Priority: Major
    • Fix Version/s: None
    • Affects Version/s: Lustre 2.7.0
    • Environment: Scientific Linux release 6.6 (Carbon)
      [root@localhost ~]# uname -r
      2.6.32.431.5.1.el6_lustre

    Description

      Hi,
      Prerequisites to reproduce this bug:
      1. A single Scientific Linux (release 6.6) VM with at least 1 GB of memory and
      50 GB of disk space.
      2. A Lustre 2.7.51 setup up and running on the above VM; in my case all
      Lustre components are configured on the same VM.
      3. Two extra MDTs of 20 GB each added to the Lustre setup; the MDTs were configured on two loop devices (a setup sketch follows below).
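      For context, this is roughly how the two extra MDTs were created on loop devices. The backing-file paths are taken from the losetup output below; the sizes, indices, fsname and MGS NID are assumptions for this single-node setup, not the exact commands used:

      # create 20 GB sparse backing files (paths assumed from the losetup output below)
      dd if=/dev/zero of=/home/MGS_MDT bs=1M count=1 seek=20479
      dd if=/dev/zero of=/home/MGS_MDT3 bs=1M count=1 seek=20479
      # attach them to free loop devices
      losetup /dev/loop7 /home/MGS_MDT
      losetup /dev/loop6 /home/MGS_MDT3
      # format as additional MDTs (fsname/index/MGS NID are assumptions)
      mkfs.lustre --mdt --fsname=lustre --index=1 --mgsnode=$(hostname -i)@tcp /dev/loop7
      mkfs.lustre --mdt --fsname=lustre --index=2 --mgsnode=$(hostname -i)@tcp /dev/loop6
      # mount the new MDTs
      mkdir -p /mnt/mds2 /mnt/mds3
      mount -t lustre /dev/loop7 /mnt/mds2
      mount -t lustre /dev/loop6 /mnt/mds3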
      ===================================
      Steps to reproduce the issue :
      ===================================
      1. Run a dd command to generate some I/O on the Lustre filesystem
      (dd if=/dev/zero of=/mnt/lustre/test bs=512M count=10).
      2. Once the I/O completes, stop the Lustre filesystem; I executed the
      llmountcleanup.sh script (../lustre-release/lustre/tests/llmountcleanup.sh)
      to unmount/stop Lustre.
      3. Then I unmounted the two extra MDTs manually using the commands
      <umount -f /mnt/mds2>
      <umount -f /mnt/mds3>
      4. Later, when I remounted Lustre with the llmount.sh script, Lustre mounted successfully. But as soon as I tried to bring back the two loop devices on which both MDTs had previously been configured, losetup started throwing a "device is busy" message.
      =========================
      Busy message on the command prompt when trying to bring back the two additional MDTs on the same loop devices used before for the MDTs:
      ------------------------------------------------------------------
      [root@localhost ~]# losetup /dev/loop7 /home/MGS_MDT
      losetup: /dev/loop7: device is busy
      [root@localhost ~]#
      [root@localhost ~]#
      [root@localhost ~]# losetup /dev/loop6 /home/MGS_MDT3
      losetup: /dev/loop6: device is busy
      ==============================================
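      A few checks that could help pin down what still holds the loop devices (suggested diagnostics, not output captured from this system):

      losetup -a                 # list all attached loop devices and their backing files
      losetup /dev/loop7         # show what /dev/loop7 is currently bound to
      fuser -v /dev/loop7        # user-space processes still holding the device, if any
      lsof /dev/loop7            # same, with more detail
      dmsetup ls                 # device-mapper targets, in case one is stacked on the loop device
      lctl dl                    # Lustre devices still registered with the kernel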

      Before the Test
      =============
      lfs df -h
      UUID bytes Used Available Use% Mounted on
      lustre-MDT0000_UUID 7.2G 435.8M 6.2G 6% /mnt/lustre[MDT:0]
      lustre-MDT0001_UUID 15.0G 869.1M 13.1G 6% /mnt/lustre[MDT:1]
      lustre-MDT0002_UUID 15.0G 869.1M 13.1G 6% /mnt/lustre[MDT:2]
      lustre-OST0000_UUID 14.9G 441.4M 13.7G 3% /mnt/lustre[OST:0]
      lustre-OST0001_UUID 14.9G 441.4M 13.7G 3% /mnt/lustre[OST:1]

      filesystem summary: 29.9G 882.8M 27.5G 3% /mnt/lustre

      ===================================================
      Unmounting Lustre:
      ===================
      [root@localhost ~]# sh /var/lib/jenkins/jobs/Lustre-New-Test/workspace/default/lustre-release/lustre/tests/llmountcleanup.sh
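      Checks that could confirm the cleanup actually released everything before remounting (suggestions, not output from this run):

      lctl dl                                  # should list nothing once all Lustre devices are stopped
      mount -t lustre                          # no Lustre mounts should remain
      losetup -a                               # loop devices set up by llmount.sh should be detached
      lsmod | grep -E 'lustre|lnet|ldiskfs'    # modules may still be loaded; lustre_rmmod unloads them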
      ==============================
      Unmounting the additional MDTs
      ==============================
      [root@localhost ~]# umount -f /mnt/mds2
      [root@localhost ~]# umount -f /mnt/mds3
      [root@localhost ~]#
      ====================================================
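      What I would expect to be needed after the forced unmounts so the loop devices can be reused later; the losetup -d calls are an assumption about the missing step, not something that was run here:

      # after the forced unmounts, detach the backing files so the devices can be reused;
      # while a loop device is still attached, re-running losetup on it fails with "device is busy"
      losetup -d /dev/loop7
      losetup -d /dev/loop6
      losetup -a        # confirm both devices are free again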

      Mounting Lustre again:
      =========================
      [root@localhost ~]# sh /var/lib/jenkins/jobs/Lustre-New-Test/workspace/default/lustre-release/lustre/tests/llmount.sh

      [root@localhost ~]# lfs df -h

      UUID bytes Used Available Use% Mounted on
      lustre-MDT0000_UUID 7.2G 435.7M 6.2G 6% /mnt/lustre[MDT:0]
      lustre-OST0000_UUID 14.9G 441.1M 13.7G 3% /mnt/lustre[OST:0]
      lustre-OST0001_UUID 14.9G 441.1M 13.7G 3% /mnt/lustre[OST:1]

      filesystem summary: 29.9G 882.2M 27.5G 3% /mnt/lustre
      ======================
      Trying to bring back the additional MDTs using the same loop devices used before; this is where the losetup "device is busy" errors shown above appear.
      ======================
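      Once the loop devices are actually free, this is how I would expect to bring the two extra MDTs back (device names and backing files taken from the losetup attempts above; mount points are the ones used earlier):

      losetup /dev/loop7 /home/MGS_MDT
      losetup /dev/loop6 /home/MGS_MDT3
      mount -t lustre /dev/loop7 /mnt/mds2
      mount -t lustre /dev/loop6 /mnt/mds3
      lfs df -h         # MDT0001 and MDT0002 should reappear in the listing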
      Attaching /var/log/messages
      Attaching dmesg output
      Attaching kernel.log

      ===========================
      Thanks,
      Paramita Varma

      Attachments

        1. dmesg-for-deviceBusy.txt
          81 kB
        2. kernel.log
          1.65 MB
        3. LogMessagesfor-DeviceBusy
          804 kB


          People

            Assignee: WC Triage
            Reporter: Paramita Varma (Inactive)
