[LU-6620] Cannot re-use loop device for MDT once unmounted from Lustre; throws error: device is busy Created: 19/May/15  Updated: 05/Aug/20  Resolved: 05/Aug/20

Status: Resolved
Project: Lustre
Component/s: None
Affects Version/s: Lustre 2.7.0
Fix Version/s: None

Type: Bug Priority: Major
Reporter: Paramita varma (Inactive) Assignee: WC Triage
Resolution: Cannot Reproduce Votes: 0
Labels: mdt
Environment:

Scientific Linux release 6.6 (Carbon)
[root@localhost ~]# uname -r
2.6.32-431.5.1.el6_lustre


Attachments: HTML File LogMessagesfor-DeviceBusy     Text File dmesg-for-deviceBusy.txt     Text File kernel.log    
Epic/Theme: Lustre-2.5.2, test
Severity: 3
Epic: client, metadata, mount, server, test

 Description   

Hi,
Prerequisites to reproduce this bug:
1. A single Scientific Linux (release 6.6) VM with at least 1 GB of memory and
50 GB of disk space.
2. A Lustre 2.7.51 setup up and running on the above VM; in my case all
Lustre components are configured on the same VM.
3. Two extra MDTs of 20 GB each added to the Lustre setup; the MDTs were configured on two loop devices (a sketch of such a setup is shown below).
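A minimal sketch of how such loop-backed MDTs can be created (the backing-file name follows the losetup output further below; the sparse-file size, fsname, MDT index and MGS NID are assumptions, not values taken from this report):

dd if=/dev/zero of=/home/MGS_MDT3 bs=1M count=0 seek=20480   # 20 GB sparse backing file (size assumed)
losetup /dev/loop6 /home/MGS_MDT3                            # attach it to a loop device
mkfs.lustre --fsname=lustre --mgsnode=$(hostname)@tcp --mdt --index=2 /dev/loop6   # format as an MDT (index assumed)
mkdir -p /mnt/mds3
mount -t lustre /dev/loop6 /mnt/mds3                         # bring the MDT online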
===================================
Steps to reproduce the issue :
===================================
1. Run a dd command to generate some I/O on the Lustre filesystem
(dd if=/dev/zero of=/mnt/lustre/test bs=512M count=10).
2. Once the I/O has completed, stop the Lustre filesystem. I ran the
llmountcleanup.sh script (../lustre-release/lustre/tests/llmountcleanup.sh)
to unmount/stop Lustre.
3. Then I unmounted the two extra MDTs manually using the commands
< umount -f /mnt/mds2>
< umount -f /mnt/mds3>
4. Later, when I remounted Lustre with the llmount.sh script, Lustre mounted successfully. But as soon as I tried to bring back the two loop devices on which the MDTs had previously been configured, losetup started throwing a "device is busy" message.
=========================
Busy message on the command prompt when trying to bring back the two additional MDTs on the same loop devices used before for the MDTs:
------------------------------------------------------------------
[root@localhost ~]# losetup /dev/loop7 /home/MGS_MDT
losetup: /dev/loop7: device is busy
[root@localhost ~]#
[root@localhost ~]#
[root@localhost ~]# losetup /dev/loop6 /home/MGS_MDT3
losetup: /dev/loop6: device is busy
==============================================
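A diagnostic sketch (not part of the original report) for checking what still holds the loop devices before re-attaching them; it assumes only stock util-linux and psmisc tools are available:

losetup -a                        # list loop devices that are still attached to a backing file
grep loop /proc/mounts            # check whether any of them is still mounted somewhere
fuser -v /dev/loop6 /dev/loop7    # show processes that still hold the devices open
dmesg | tail -n 50                # look for related LDISKFS/Lustre messages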

Before the Test
=============
lfs df -h
UUID bytes Used Available Use% Mounted on
lustre-MDT0000_UUID 7.2G 435.8M 6.2G 6% /mnt/lustre[MDT:0]
lustre-MDT0001_UUID 15.0G 869.1M 13.1G 6% /mnt/lustre[MDT:1]
lustre-MDT0002_UUID 15.0G 869.1M 13.1G 6% /mnt/lustre[MDT:2]
lustre-OST0000_UUID 14.9G 441.4M 13.7G 3% /mnt/lustre[OST:0]
lustre-OST0001_UUID 14.9G 441.4M 13.7G 3% /mnt/lustre[OST:1]

filesystem summary: 29.9G 882.8M 27.5G 3% /mnt/lustre

===================================================
Unmounting Lustre:
===================
[root@localhost ~]# sh /var/lib/jenkins/jobs/Lustre-New-Test/workspace/default/lustre-release/lustre/tests/llmountcleanup.sh
==============================
Unmounting the additional MDTs
==============================
[root@localhost ~]# umount -f /mnt/mds2
[root@localhost ~]# umount -f /mnt/mds3
[root@localhost ~]#
====================================================
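A possible cleanup sequence (an assumption, not what was actually run here): after unmounting the extra MDTs, explicitly detach the loop devices so they are free for reuse.

umount /mnt/mds2
umount /mnt/mds3
losetup -d /dev/loop7             # detach the backing file from the loop device
losetup -d /dev/loop6
losetup -a                        # loop6/loop7 should no longer be listed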

Mounting Lustre again:
=========================
[root@localhost ~]# sh /var/lib/jenkins/jobs/Lustre-New-Test/workspace/default/lustre-release/lustre/tests/llmount.sh

[root@localhost ~]# lfs df -h

UUID bytes Used Available Use% Mounted on
lustre-MDT0000_UUID 7.2G 435.7M 6.2G 6% /mnt/lustre[MDT:0]
lustre-OST0000_UUID 14.9G 441.1M 13.7G 3% /mnt/lustre[OST:0]
lustre-OST0001_UUID 14.9G 441.1M 13.7G 3% /mnt/lustre[OST:1]

filesystem summary: 29.9G 882.2M 27.5G 3% /mnt/lustre
======================
Trying to bring back the additional MDTs using the same loop devices as before (see the losetup "device is busy" output above):
=============================================
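A workaround sketch (an assumption, not something verified in this report): rather than re-using a hard-coded loop device that llmount.sh may already have claimed for its own targets, attach the backing file to the first free loop device and mount the MDT from there.

LOOPDEV=$(losetup -f)                 # print the first unused loop device
losetup "$LOOPDEV" /home/MGS_MDT3     # attach the MDT backing file to it
mount -t lustre "$LOOPDEV" /mnt/mds3  # mount the MDT from the new loop device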
Attaching /var/log/messages
Attaching dmesg output
Attaching kernel.log

===========================
Thanks,
Paramita Varma

