Details
Type: Bug
Resolution: Fixed
Priority: Minor
Labels: None
Affects Version/s: Lustre 2.12.4
Environment:
brass
zfs-0.7.11-9.4llnl.ch6.x86_64
lustre-2.12.4_6.chaos-1.ch6.x86_64
(other lustre clusters as well, including those at lustre 2.10.8)
Severity: 3
Description
Many thousands of console log messages like the following appeared on the Lustre OSS nodes after the servers were rebooted while the clients stayed up:
Jun 25 03:45:08 brass21 kernel: LustreError: 27913:0:(tgt_grant.c:758:tgt_grant_check()) lsrza-OST0010: cli ac60c141-9de9-1a2e-5d0d-fd1e525ff506 claims 1703936 GRANT, real grant 0
Jun 25 03:45:08 brass21 kernel: LustreError: 27913:0:(tgt_grant.c:758:tgt_grant_check()) Skipped 237 previous similar messages
Jun 25 03:47:35 brass10 kernel: LustreError: 20031:0:(tgt_grant.c:758:tgt_grant_check()) lsrza-OST0005: cli f6897b82-71ad-5bc7-b60d-554c4cbbcdf7 claims 1703936 GRANT, real grant 0
Jun 25 03:47:35 brass10 kernel: LustreError: 20031:0:(tgt_grant.c:758:tgt_grant_check()) Skipped 433 previous similar messages
This server cluster has 4 MDTs and 18 OSTs.
The number of these messages dropped significantly over time. Rough counts per day for all of brass, in thousands, were:
2020-06-24 469
2020-06-25 417
2020-06-26 39
2020-06-27 27
2020-06-28 16
2020-06-29 19
From what I can see, under Lustre 2.12.4 (at least) each client keeps its own notion of the grant it has been allocated, and when a server is restarted, the server loses all record of the grant it handed out. The two sides then appear to sync up as clients issue new writes using grant they were given but that the server does not know about; eventually the clients use up that "old grant" and are back in sync again.
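If that is what is happening, the number of mismatch messages per client-OST pair should be roughly the stale grant divided by the grant each write consumes. The following is a minimal sketch of that bookkeeping, not the actual tgt_grant.c logic; the 1703936 figure comes from the log above, while the per-write grant spend is an assumed value.

/*
 * Sketch of the resync behaviour described above (illustration only).
 * Assumptions: the restarted server remembers zero grant for the export,
 * the client spends its stale grant in fixed-size writes, and the server
 * notes one mismatch per write while the client's claim exceeds the
 * server's record.
 */
#include <stdio.h>

#define STALE_GRANT 1703936L    /* grant the client still believes it holds (from the log) */
#define WRITE_SPEND   65536L    /* grant consumed per write RPC (assumed value) */

int main(void)
{
        long cli_grant = STALE_GRANT;   /* client-side view, survives the reboot */
        long srv_grant = 0;             /* server-side record, lost in the reboot */
        int mismatches = 0;

        while (cli_grant > 0) {
                long spend = cli_grant < WRITE_SPEND ? cli_grant : WRITE_SPEND;

                /* server-side check: the client claims more grant than recorded */
                if (cli_grant > srv_grant)
                        mismatches++;   /* the "claims N GRANT, real grant 0" case */

                cli_grant -= spend;     /* the write proceeds and burns stale grant */
        }

        printf("writes until the two views agree: %d\n", mismatches);
        return 0;
}

With those assumed numbers the two views reconverge after a couple dozen writes, each of which trips the check once.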
The pattern above seems consistent with that. But why is the number of such messages so large?
There are 18 OSTs, and they report 967 exports, so that works out to roughly (987,000 messages / 18,000 OST-client combinations) = about 55 such messages per OST-client combination. It seems strange that it would take on the order of 55 writes for the grant to be synced up between an OST and a client after some disturbance like a reboot.
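For reference, the per-pair figure is just the summed daily counts divided by the number of OST-client pairs; a quick check using only the numbers quoted above:

/* Back-of-the-envelope check of the message counts quoted in this ticket. */
#include <stdio.h>

int main(void)
{
        /* daily counts, in thousands, from the table above */
        long per_day_k[] = { 469, 417, 39, 27, 16, 19 };
        long total = 0;

        for (int i = 0; i < 6; i++)
                total += per_day_k[i] * 1000;

        long pairs = 18L * 967L;        /* 18 OSTs times 967 reported exports */

        printf("total messages   : %ld\n", total);          /* 987000 */
        printf("OST-client pairs : %ld\n", pairs);           /* 17406 */
        printf("messages per pair: %ld\n", total / pairs);   /* ~56 */
        return 0;
}

Using the exact export count this comes out to about 56 messages per pair, in line with the rough figure above.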