Details
- Type: Bug
- Resolution: Fixed
- Priority: Minor
- Affects Version/s: Upstream, Lustre 2.15.0
- Environment: Lustre filesystem with ZFS as the backend filesystem.
Description
HPE bug-id: LUS-10650
When the snapshot filesystem is already mounted on the server and `lctl snapshot_mount` is issued again, the snapshot filesystem gets unmounted from the server.
```
[root@cslmo4702 ~]# mount -t lustre
pool-mds65/mdt65 on /data/mdt65 type lustre (ro,svname=testfs-MDT0000,mgs,osd=osd-zfs)
pool-mds65/mdt65@snap_test_fo on /mnt/snap_test_fo_MDT0000 type lustre (ro,svname=36cd4520-MDT0000,nomgs,rdonly_dev,mgs,osd=osd-zfs)
[root@cslmo4702 ~]# lctl snapshot_list -F testfs
filesystem_name: testfs
snapshot_name: snap_test_fo
snapshot_fsname: 36cd4520
modify_time: Tue Dec 7 13:35:05 2021
create_time: Tue Dec 7 13:35:05 2021
status: mounted
[root@cslmo4702 ~]# lctl snapshot_mount -F testfs -n snap_test_fo
Can't mount the snapshot snap_test_fo: No such process
[root@cslmo4702 ~]# lctl snapshot_list -F testfs
filesystem_name: testfs
snapshot_name: snap_test_fo
snapshot_fsname: 36cd4520
modify_time: Tue Dec 7 13:35:05 2021
create_time: Tue Dec 7 13:35:05 2021
status: not mount
[root@cslmo4702 ~]# mount -t lustre
pool-mds65/mdt65 on /data/mdt65 type lustre (ro,svname=testfs-MDT0000,mgs,osd=osd-zfs)
```
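Until the bug is fixed, a scripted workaround is to check the snapshot status and skip `lctl snapshot_mount` when the snapshot is already mounted. A minimal sketch, assuming `lctl snapshot_list` accepts a `-n <snapshot>` filter like `snapshot_mount` does (fsname and snapshot name are taken from the session above):

```sh
#!/bin/sh
# Workaround sketch (not part of Lustre): avoid re-issuing snapshot_mount
# when the snapshot is already mounted, which would trigger the unmount bug.
FSNAME=testfs
SNAP=snap_test_fo

# Pull the "status:" field for this snapshot from the snapshot_list output.
status=$(lctl snapshot_list -F "$FSNAME" -n "$SNAP" 2>/dev/null |
         awk '/^status:/ { print $2 }')

if [ "$status" = "mounted" ]; then
    echo "snapshot $SNAP is already mounted, nothing to do"
else
    lctl snapshot_mount -F "$FSNAME" -n "$SNAP"
fi
```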
If we try the same operation with the `mount -t lustre` command instead of `lctl snapshot_mount`, it works correctly: EALREADY is returned and the snapshot stays mounted.
```
[root@cslmo4702 ~]# mount -t lustre -o rdonly_dev pool-mds65/mdt65@snap_test_fo /mnt/snap_test_fo_MDT0000
mount.lustre: mount pool-mds65/mdt65@snap_test_fo at /mnt/snap_test_fo_MDT0000 failed: Operation already in progress
The target service is already running. (pool-mds65/mdt65@snap_test_fo)
```
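The expectation is that `lctl snapshot_mount` behaves the same way: if the snapshot is already mounted, report that and leave the mount alone instead of unmounting it. A minimal shell sketch of that idempotent check (illustrative only, not the actual lctl implementation), using the device and mount point from the session above:

```sh
#!/bin/sh
# Illustrative sketch of the expected idempotent behaviour (not lctl code).
DEV="pool-mds65/mdt65@snap_test_fo"
MNT="/mnt/snap_test_fo_MDT0000"

# If the snapshot is already mounted at the target, report it and return
# without touching the existing mount (mount.lustre reports EALREADY here).
if grep -qs " ${MNT} lustre " /proc/mounts; then
    echo "The target service is already running. (${DEV})" >&2
    exit 0
fi

mount -t lustre -o rdonly_dev "${DEV}" "${MNT}"
```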