Details
- Type: Bug
- Resolution: Fixed
- Priority: Major
- Affects Version/s: Lustre 2.12.3, Lustre 2.12.4
- Fix Version/s: None
- Environment: CentOS 7, kernel: 3.10.0-1062.18.1.el7.x86_64
- Severity: 3
Description
We have a Lustre setup consisting of two MDTs and eleven OSTs distributed over six physical servers. All of them run CentOS 7 with kernel 3.10.0-1062.18.1.el7.x86_64 and Lustre 2.12.4. An update to the latest kernel and Lustre release was made after this issue showed up; before, we were using 3.10.0-957.12.1.el7.x86_64 with Lustre 2.12.3. The update did not change the situation.
We have been using ZFS-based snapshots for a long time (years) as part of our backup procedure. Snapshots are created every six hours and kept for one month (thinned out after two days); once per week one of the snapshots is mounted and used to create a backup on tape. So far, this has worked very reliably.
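For reference, the rotation boils down to the standard lctl snapshot commands run on the MGS. This is only a sketch using the fsname meteo0 and the timestamp naming scheme from the example further down; the cron wrapper that does the actual thinning is ours and not shown, and the snapshot name in the destroy line is just an example of an expired snapshot:
mgs# lctl snapshot_create --fsname meteo0 --name 2020-04-18-120016
mgs# lctl snapshot_list --fsname meteo0
mgs# lctl snapshot_destroy --fsname meteo0 --name 2020-03-18-120016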
Last month we activated changelog readers for both MDTs. The changelogs are consumed by a robinhood instance. After activating the changelogs, mounting snapshots still worked as expected.
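Registration was done per MDT on the respective MDS, roughly along these lines (a sketch; the meteo0-MDT* device names and the host prompts are assumed from our setup, and robinhood then consumes the changelogs under the cl<N> reader id that changelog_register returns):
mds0# lctl --device meteo0-MDT0000 changelog_register
mds1# lctl --device meteo0-MDT0001 changelog_register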
Now the following problem showed up: creating, mounting, and destroying snapshots using the lctl snapshot_* commands still works as usual, but actually mounting the snapshot with a client hangs forever.
Example:
mgs# lctl snapshot_mount --fsname meteo0 --name 2020-04-18-120016
mounted the snapshot 2020-04-18-120016 with fsname 086c9ea3
mgs# mount -t lustre -o ro 10.153.52.41@tcp:/086c9ea3 /mnt
No further error messages are shown. The mount command never returns (I waited 30 minutes to be sure; the logs indicate a recovery period of 15 minutes). Mounting the actual filesystem to which this snapshot belongs is not a problem. I also verified that really all devices are included in /etc/ldev.conf and that ZFS snapshots of all devices are created and mounted. Also, all involved machines can reach each other with lctl ping.
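The checks I ran for that were roughly the following (a sketch; the host prompts are assumptions about our naming, the NID is the MGS NID from the mount command above):
mgs# cat /etc/ldev.conf
mgs# lctl snapshot_list --fsname meteo0 --name 2020-04-18-120016
mds0# zfs list -t snapshot
client# lctl ping 10.153.52.41@tcp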
Kernel logs from the MGS:
[Sa Apr 18 14:50:11 2020] Lustre: 086c9ea3: root_squash is set to 65534:65534
[Sa Apr 18 14:50:11 2020] Lustre: 086c9ea3: nosquash_nids set to 10.153.52.[26-41]@tcp 10.153.52.46@tcp 10.153.52.48@tcp 10.153.52.128@tcp 10.163.54.[1-2]@tcp 0@lo
No further error messages on the MGS. Once I press CTRL-C I get the following messages on the MGS:
[Sa Apr 18 14:50:29 2020] LustreError: 1951:0:(lmv_obd.c:1415:lmv_statfs()) 086c9ea3-MDT0000-mdc-ffff9a81a3000000: can't stat MDS #0: rc = -11
[Sa Apr 18 14:50:29 2020] LustreError: 1951:0:(lov_obd.c:839:lov_cleanup()) 086c9ea3-clilov-ffff9a81a3000000: lov tgt 0 not cleaned! deathrow=0, lovrc=1
[Sa Apr 18 14:50:29 2020] LustreError: 1951:0:(lov_obd.c:839:lov_cleanup()) Skipped 10 previous similar messages
[Sa Apr 18 14:50:30 2020] Lustre: Unmounted 086c9ea3-client
[Sa Apr 18 14:50:30 2020] LustreError: 1951:0:(obd_mount.c:1608:lustre_fill_super()) Unable to mount (-11)
Kernel logs from the MDT are not more helpful (this server also hosts two OSTs):
[Sa Apr 18 14:49:37 2020] Lustre: 086c9ea3-MDT0000: set dev_rdonly on this device
[Sa Apr 18 14:49:37 2020] Lustre: Skipped 1 previous similar message
[Sa Apr 18 14:49:37 2020] Lustre: 086c9ea3-MDT0000: root_squash is set to 65534:65534
[Sa Apr 18 14:49:37 2020] Lustre: 086c9ea3-MDT0000: nosquash_nids set to 10.153.52.[26-41]@tcp 10.153.52.46@tcp 10.153.52.48@tcp 10.153.52.128@tcp 10.163.54.[1-2]@tcp 0@lo
[Sa Apr 18 14:49:37 2020] Lustre: 086c9ea3-MDT0000: Imperative Recovery enabled, recovery window shrunk from 300-900 down to 150-900
[Sa Apr 18 14:49:37 2020] Lustre: Skipped 1 previous similar message
[Sa Apr 18 14:49:37 2020] Lustre: 086c9ea3-MDD0000: changelog on
[Sa Apr 18 14:49:37 2020] Lustre: 086c9ea3-OST0001: set dev_rdonly on this device
[Sa Apr 18 14:49:38 2020] Lustre: 086c9ea3-OST0001: Imperative Recovery enabled, recovery window shrunk from 300-900 down to 150-900
[Sa Apr 18 14:49:42 2020] Lustre: 086c9ea3-OST0001: Connection restored to 10.153.52.30@tcp (at 0@lo)
[Sa Apr 18 14:49:42 2020] Lustre: Skipped 7 previous similar messages
Registering the changelog readers is the only change in configuration I'm aware of, so I tried to deregister them. That does not solve the problem, but it results in additional error messages in the kernel log:
[72695.040157] Lustre: 34765:0:(llog_cat.c:818:llog_cat_process_common()) 543fa513-MDD0000: can't destroy empty log [0x2307:0x1:0x0]: rc = -30
[72695.040161] Lustre: 34765:0:(llog_cat.c:818:llog_cat_process_common()) Skipped 1 previous similar message
[72695.040766] Lustre: 34765:0:(llog_cat.c:894:llog_cat_process_or_fork()) 543fa513-MDD0000: catlog [0x5:0xa:0x0] crosses index zero
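For reference, the deregistration was done per MDT along these lines (a sketch; cl1 stands for whatever reader id changelog_register had returned on that MDT, and the host prompts are assumed):
mds0# lctl --device meteo0-MDT0000 changelog_deregister cl1
mds1# lctl --device meteo0-MDT0001 changelog_deregister cl1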
Attached are files containing the Lustre debug logs taken with lctl debug_kernel:
- mount_server_mgs.txt: logs collected during the call to lctl snapshot_mount on the MGS.
- mount_server_mdt[0|1].txt: logs collected on the two MDTs during the call to lctl snapshot_mount.
- mount_client_mgs.txt: logs from the MGS for the mount on the client.
- mount_client_mdt[0|1].txt: logs from both MDTs for the mount on the client.
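The debug logs were captured roughly like this on each server, once per mount attempt (a sketch; the debug mask shown here enables everything and may differ from what I actually used, and the file name is the MGS example from the list above):
mgs# lctl set_param debug=-1
mgs# lctl clear
mgs# lctl snapshot_mount --fsname meteo0 --name 2020-04-18-120016
mgs# lctl debug_kernel mount_server_mgs.txt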
Additional information: a second Lustre filesystem without changelogs activated runs on the same physical servers (only the MGS is a separate VM). That second filesystem works completely as expected, and its snapshots are mountable. This, and the fact that the actual filesystem is mountable as usual, makes me think that network issues are rather unlikely.
Thanks a lot for looking into that!