[LU-13766] tgt_grant_check() lsrza-OST000a: cli dfdf1aff-07d9-53b3-5632-c18a78027eb2 claims 1703936 GRANT, real grant 0 Created: 09/Jul/20  Updated: 06/Nov/20  Resolved: 16/Oct/20

Status: Resolved
Project: Lustre
Component/s: None
Affects Version/s: Lustre 2.12.4
Fix Version/s: None

Type: Bug Priority: Minor
Reporter: Olaf Faaland Assignee: Mikhail Pershin
Resolution: Fixed Votes: 0
Labels: llnl
Environment:

brass
zfs-0.7.11-9.4llnl.ch6.x86_64
lustre-2.12.4_6.chaos-1.ch6.x86_64
(other lustre clusters as well including those at lustre 2.10.8)


Issue Links:
Related
is related to LU-12687 Fast ENOSPC on direct I/O Resolved
is related to LU-14125 client starved for grant but OST has ... Reopened
is related to LU-13457 tgt_grant_check() cli claims grant Resolved
Severity: 3
Rank (Obsolete): 9223372036854775807

 Description   

Many thousands of console log messages like the following appeared on the Lustre OSS nodes after the servers were rebooted while clients stayed up:

Jun 25 03:45:08 brass21 kernel: LustreError: 27913:0:(tgt_grant.c:758:tgt_grant_check()) lsrza-OST0010: cli ac60c141-9de9-1a2e-5d0d-fd1e525ff506 claims 1703936 GRANT, real grant 0
Jun 25 03:45:08 brass21 kernel: LustreError: 27913:0:(tgt_grant.c:758:tgt_grant_check()) Skipped 237 previous similar messages
Jun 25 03:47:35 brass10 kernel: LustreError: 20031:0:(tgt_grant.c:758:tgt_grant_check()) lsrza-OST0005: cli f6897b82-71ad-5bc7-b60d-554c4cbbcdf7 claims 1703936 GRANT, real grant 0
Jun 25 03:47:35 brass10 kernel: LustreError: 20031:0:(tgt_grant.c:758:tgt_grant_check()) Skipped 433 previous similar messages

This server cluster has 4 MDTs and 18 OSTs.

The number of these messages dropped significantly over time. Rough counts per day for all of brass, in thousands, were:

2020-06-24 469
2020-06-25 417
2020-06-26 39
2020-06-27 27
2020-06-28 16
2020-06-29 19

From what I can see, under Lustre 2.12.4 (at least) the clients each keep some notion of their allocated grant, while the server loses all record of what it allocated when it is restarted. The two sides then appear to sync back up as the clients issue new writes using grant they were given but that the server no longer knows about; eventually the clients use up that "old grant" and are back in sync again.
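
A minimal sketch of that idea, assuming a deliberately simplified model (grant_check() and its arguments here are illustrative only, not the actual tgt_grant.c logic):

/* Simplified illustration of the grant-check idea only -- not the real
 * tgt_grant.c code.  "recorded" stands in for whatever per-export grant
 * the server tracks; "claimed" is the grant the client says it is
 * consuming for a write. */
#include <stdio.h>

static long long grant_check(const char *cli, long long recorded,
                              long long claimed)
{
        if (claimed > recorded) {
                /* After an OSS reboot the server-side record starts over,
                 * so writes spending pre-reboot grant land here and log
                 * "cli ... claims N GRANT, real grant M". */
                printf("cli %s claims %lld GRANT, real grant %lld\n",
                       cli, claimed, recorded);
                return recorded;        /* honour only what is on record */
        }
        return claimed;
}

int main(void)
{
        /* the client still believes it holds 1703936 bytes of grant,
         * but the restarted server has no record of it */
        grant_check("dfdf1aff-07d9-53b3-5632-c18a78027eb2", 0, 1703936);
        return 0;
}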

The pattern above seems consistent with that. But why is the number of such messages so large?

There are 18 OSTs, and they each report 967 exports, so that works out to roughly (987,000 messages / ~17,400 OST-client combinations) = about 55 such messages per OST-client combination, and since these messages are rate-limited ("Skipped 237 previous similar messages"), each one stands for many more failed grant checks. It still seems strange it would take that many writes for the grant to be synced up between an OST and a client after some disturbance like a reboot.



 Comments   
Comment by Olaf Faaland [ 09/Jul/20 ]

For my records, my local issue is TOSS4826

Comment by Gerrit Updater [ 09/Jul/20 ]

Olaf Faaland-LLNL (faaland1@llnl.gov) uploaded a new patch: https://review.whamcloud.com/39324
Subject: LU-13766 obdclass: add grant fields to export procfile
Project: fs/lustre-release
Branch: b2_12
Current Patch Set: 1
Commit: 46411bc3510234fa7608821f8147244ccecca1d0

Comment by Olaf Faaland [ 09/Jul/20 ]

The patch is for troubleshooting on my systems. If you think it would be a useful change generally, I'll rebase it on master and push it for test and review.

Comment by Peter Jones [ 09/Jul/20 ]

Mike

What do you think about this proposal?

Peter

Comment by Mikhail Pershin [ 10/Jul/20 ]

Usually the server gets grant info from clients upon client reconnect, so it is synced up once clients are connected. In our case it looks like, before the reboot, the server gave out more grant to clients than it actually had, so after the reboot each client reports its own grant and in sum they leave no grant remaining on the server. I tend to think this can be caused by the LU-12687 issue, which is about exactly such behavior - clients may have more grant than the server can handle.
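
A rough sketch of that arithmetic, with made-up numbers and names (reported_grant and ost_grantable are illustrative, not Lustre fields):

/* Illustrative sketch only -- the variable names and numbers here are
 * invented and do not come from the Lustre source. */
#include <stdio.h>

int main(void)
{
        /* grant each reconnecting client reports it still holds (bytes) */
        long long reported_grant[] = { 1703936, 1703936, 1703936 };
        long long ost_grantable = 4000000;   /* what the OST can actually back */
        long long tot_granted = 0;
        size_t i, n = sizeof(reported_grant) / sizeof(reported_grant[0]);

        for (i = 0; i < n; i++)
                tot_granted += reported_grant[i];

        /* If the clients collectively report more grant than the OST can
         * cover, nothing is left to hand out, and later writes that try
         * to spend "old" grant trip tgt_grant_check(). */
        printf("tot_granted=%lld grantable=%lld remaining=%lld\n",
               tot_granted, ost_grantable, ost_grantable - tot_granted);
        return 0;
}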

Comment by Olaf Faaland [ 14/Jul/20 ]

One of the connected clients, rzgenie28, had what appears to be a corrupt (maybe underflowed?) value of cur_grant_bytes for OST0000; that node is also involved in LU-13763:

[root@rzgenie28:~]# lctl get_param -n osc.*OST0000*.cur_grant_bytes
18446744073707847680

Total grant reported by OST0000:

[root@brass5:~]# lctl get_param obdfilter.*OST0000.tot_granted
obdfilter.lsrza-OST0000.tot_granted=1551928458425

Total cur_grant_bytes summed over all clients without rzgenie28, the outlier:

Brass:                1549998554295
Clients - rzgenie28:  18446745561354235904 - 18446744073707847680 = 1487646388224
clients/brass:        1487646388224 / 1549998554295 = 0.959772
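
As a quick check of the underflow guess, the rzgenie28 cur_grant_bytes value can be reinterpreted as a signed 64-bit number; a small standalone C snippet, assuming a two's-complement representation:

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
        /* cur_grant_bytes reported by rzgenie28 for OST0000 */
        uint64_t cur_grant_bytes = 18446744073707847680ULL;

        /* the same 64 bits viewed as a signed value */
        int64_t as_signed = (int64_t)cur_grant_bytes;

        printf("unsigned: %" PRIu64 "\n", cur_grant_bytes);
        printf("signed:   %" PRId64 "\n", as_signed);  /* prints -1703936 */
        return 0;
}

The signed result, -1703936, matches the 1703936 GRANT figure in the tgt_grant_check() messages above, which seems consistent with an underflow.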

Comment by Olaf Faaland [ 15/Jul/20 ]

I confirmed that I can reproduce LU-12687 under Lustre 2.12.15 (probably no surprise to you).

Comment by Mikhail Pershin [ 15/Jul/20 ]

Olaf, I've ported it to b2_12 if needed: https://review.whamcloud.com/39386

Comment by Olaf Faaland [ 15/Jul/20 ]

Thanks, Mikhail. I ported that grant patch for direct io also, and in my local test (using FSTYPE=zfs llmount.sh, and dd oflag=direct) it did not work. Unfortunately, I just got that far yesterday before I had to stop, so I don't know yet why. Our backports look the same to me. Did you test it successfully, or are you waiting for auto testing results for that?

Comment by Mikhail Pershin [ 15/Jul/20 ]

I checked the new patch's tests 64e/f locally and they are working; let's see the Maloo test results.

Comment by Olaf Faaland [ 15/Jul/20 ]

OK, good. Thanks.

Comment by Vladimir Saveliev [ 03/Aug/20 ]

Olaf,

Jun 25 03:45:08 brass21 kernel: LustreError: 27913:0:(tgt_grant.c:758:tgt_grant_check()) lsrza-OST0010: cli ac60c141-9de9-1a2e-5d0d-fd1e525ff506 claims 1703936 GRANT, real grant 0

weren't OSTs mentioned in such messages running out of space by that time by chance?

Comment by Olaf Faaland [ 03/Aug/20 ]

Vladimir,
No, those OSTs had >350T free each.

Comment by Olaf Faaland [ 26/Aug/20 ]

Hi Mike,

Are you able to look at the test failures on https://review.whamcloud.com/#/c/39386/ ?

thanks

Comment by Mikhail Pershin [ 26/Aug/20 ]

Olaf, I am working on that right now; it seems that just taking one patch from master was not enough, and some other related changes are needed.

Comment by Mikhail Pershin [ 30/Aug/20 ]

Olaf, I've found the reason for the failures; the patch should work now.

Comment by Olaf Faaland [ 31/Aug/20 ]

Thanks Mike

Comment by Peter Jones [ 19/Sep/20 ]

The port has landed to b2_12 but I'm holding off closing out the ticket because we've been seeing some test failures that coincide with this landing that we should investigate.

Comment by Peter Jones [ 02/Oct/20 ]

ofaaland Mike has identified that https://review.whamcloud.com/#/c/39518/ is the missing fix needed. We'll be including this in 2.12.6, and you could pick it up sooner if you wish to use the previously mentioned fixes before then.

Comment by Olaf Faaland [ 03/Oct/20 ]

Thanks Peter, Thanks Mike

Comment by Peter Jones [ 16/Oct/20 ]

I believe that everything is landed for this now.
