Details
- Type: Bug
- Resolution: Unresolved
- Priority: Minor
- Affects Version/s: master branch
Description
Mounting a Lustre target read-only can be useful for disaster recovery, for example when the underlying device is compromised. Gael's presentation "Lustre on clown drives" from LAD 2023 illustrates this kind of use case.
Reproducer:
[root@dev ~]# ~eaujames/lustre-release/lustre/tests/llmount.sh
...
[root@dev ~]# lctl lustre_build_version
Lustre version: 2.16.51_103_gb4748cb
[root@dev lustre]# cd /mnt/lustre
[root@dev lustre]# printf "%s\n" testfile{001..500} | xargs -I{} -P20 dd if=/dev/zero of={} count=1 bs=1M
...
[root@dev lustre]# mount -tlustre
/dev/mapper/mds1_flakey on /mnt/lustre-mds1 type lustre (rw,svname=lustre-MDT0000,mgs,osd=osd-ldiskfs,user_xattr,errors=remount-ro)
/dev/mapper/ost1_flakey on /mnt/lustre-ost1 type lustre (rw,svname=lustre-OST0000,mgsnode=10.0.2.7@tcp,osd=osd-ldiskfs)
/dev/mapper/ost2_flakey on /mnt/lustre-ost2 type lustre (rw,svname=lustre-OST0001,mgsnode=10.0.2.7@tcp,osd=osd-ldiskfs)
10.0.2.7@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project)
[root@dev lustre]# umount /mnt/lustre-ost1
[root@dev lustre]# mount -tlustre /dev/mapper/ost1_flakey /mnt/lustre-ost1
[root@dev lustre]# umount /mnt/lustre-ost1
[root@dev lustre]# mount -tlustre -o ro /dev/mapper/ost1_flakey /mnt/lustre-ost1
lt-mount.lustre: mount /dev/mapper/ost1_flakey at /mnt/lustre-ost1 failed: Read-only file system
Dmesg:
[96293.670279] Lustre: server umount lustre-OST0000 complete
[96294.779957] LustreError: lustre-OST0000-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107
[96294.782162] Lustre: lustre-OST0000-osc-MDT0000: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[96294.782166] Lustre: Skipped 1 previous similar message
[96294.782312] LustreError: 31685:0:(ldlm_lib.c:1094:target_handle_connect()) lustre-OST0000: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
[96304.569130] LDISKFS-fs (dm-7): file extents enabled, maximum tree depth=5
[96304.569368] LDISKFS-fs (dm-7): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc
[96304.569375] LustreError: 397:0:(osd_handler.c:8403:osd_mount()) lustre-OST0000-osd: failed to set lma on /dev/mapper/ost1_flakey root inode
[96304.570132] LustreError: 397:0:(obd_config.c:777:class_setup()) setup lustre-OST0000-osd failed (-30)
[96304.570979] LustreError: 397:0:(obd_mount.c:193:lustre_start_simple()) lustre-OST0000-osd setup error -30
[96304.571830] LustreError: 397:0:(tgt_mount.c:2204:server_fill_super()) Unable to start osd on /dev/mapper/ost1_flakey: -30
[96304.572607] LustreError: 397:0:(super25.c:171:lustre_fill_super()) llite: Unable to mount <unknown>: rc = -30
[96327.969756] LustreError: 31686:0:(ldlm_lib.c:1094:target_handle_connect()) lustre-OST0000: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
[96327.973558] LustreError: 31686:0:(ldlm_lib.c:1094:target_handle_connect()) Skipped 6 previous similar messages
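For context, the rc = -30 in the osd_mount()/class_setup() messages above is -EROFS on Linux, i.e. the same "Read-only file system" error that mount.lustre prints. The short user-space sketch below is not Lustre code (file name and xattr name are arbitrary); it only illustrates the failure mode: writing an xattr on a filesystem mounted read-only fails with EROFS, which is what the log shows when osd_mount() tries to set the LMA on the target's root inode.

/* erofs_demo.c - illustration only, not Lustre code.
 * Shows that errno 30 is EROFS ("Read-only file system") and that
 * writing an xattr on a read-only filesystem fails with this errno. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/xattr.h>

int main(int argc, char **argv)
{
        printf("errno %d = %s\n", EROFS, strerror(EROFS));

        /* Optional: pass a path on a filesystem mounted with -o ro;
         * the xattr write then fails with EROFS. */
        if (argc > 1 && setxattr(argv[1], "user.demo", "1", 1, 0) < 0)
                printf("setxattr(%s): %s\n", argv[1], strerror(errno));
        return 0;
}

Run against a file on a read-only mount, both lines print "Read-only file system", matching the -30 that propagates from osd_mount() up through class_setup() to lustre_fill_super() in the dmesg output.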
Attachments
Issue Links
- is related to: LU-15873 Make mounting with "-o rdonly_dev" work (Resolved)