[LU-5152] Can't enforce block quota when unprivileged user changes group Created: 06/Jun/14 Updated: 24/Sep/20 Resolved: 06/Feb/18 |
|
| Status: | Resolved |
| Project: | Lustre |
| Component/s: | None |
| Affects Version/s: | None |
| Fix Version/s: | Lustre 2.11.0, Lustre 2.12.0, Lustre 2.10.4 |
| Type: | Bug | Priority: | Minor |
| Reporter: | Niu Yawei (Inactive) | Assignee: | Hongchao Zhang |
| Resolution: | Fixed | Votes: | 0 |
| Labels: | None |
| Attachments: |
|
| Issue Links: |
|
| Severity: | 3 |
| Bugzilla ID: | 13493 |
| Rank (Obsolete): | 14216 |
| Description |
|
A quota bug that affects all versions of Lustre was recently revealed: if an unprivileged user belongs to multiple groups and changes one of her files from one group to another, block quota is not enforced against the new group. This situation was never considered in the quota design (from the first quota implementation through the current new quota architecture); such a use case is probably rare in the real world, otherwise it would have been reported earlier. I think we should fix it in the current new quota architecture to make Lustre quota enforcement complete. |
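For illustration (an editorial sketch, not part of the original ticket), a minimal user-space reproducer for this scenario might look like the following. It assumes the caller is an unprivileged member of both groups, that the target group (the placeholder NEW_GID) already has a block hard limit smaller than the file's allocated blocks, and that FILE_PATH points at a file on the filesystem under test. On a local filesystem such as ext4 the chown() call is expected to fail with EDQUOT; the bug described here is that Lustre lets it succeed without charging the block quota.

```c
/*
 * Hedged reproducer sketch: move a file into a group whose block quota
 * would be exceeded and see whether the kernel rejects the change.
 * FILE_PATH and NEW_GID are placeholders; the target group must already
 * have a block hard limit smaller than the file's allocated blocks.
 */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

#define FILE_PATH "/mnt/lustre/quota_test_file"  /* placeholder path  */
#define NEW_GID   ((gid_t)1001)                  /* placeholder group */

int main(void)
{
	/* uid of -1 leaves the owner unchanged; only the group is moved */
	if (chown(FILE_PATH, (uid_t)-1, NEW_GID) == 0) {
		printf("chgrp succeeded: block quota was NOT enforced\n");
		return 0;
	}

	if (errno == EDQUOT)
		printf("chgrp rejected with EDQUOT: quota enforced\n");
	else
		printf("chgrp failed: %s\n", strerror(errno));
	return 1;
}
```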
| Comments |
| Comment by Andreas Dilger [ 06/Jun/14 ] |
|
It makes sense to see what happens with quota for local filesystems. I expect that the chgrp will fail with -EDQUOT, but that should be tested. We shouldn't be making up new semantics if it is possible/reasonable to keep the same behaviour as local filesystems. |
| Comment by Niu Yawei (Inactive) [ 09/Jun/14 ] |
|
Yes, I tested on local ext4; chgrp fails with -EDQUOT. |
| Comment by Ben Evans (Inactive) [ 12/Jan/17 ] |
|
We've encountered this bug at a customer site. |
| Comment by Andrew Perepechko [ 27/Feb/17 ] |
It was reported a long time ago in https://bugzilla.lustre.org/show_bug.cgi?id=13493 |
| Comment by Niu Yawei (Inactive) [ 30/Mar/17 ] |
|
The proposal is attached. |
| Comment by Andreas Dilger [ 05/Apr/17 ] |
|
Niu, thanks for writing up the design. Some general comments:
The main question is whether this can be fixed more easily by just having the MDS check the target group's quota before changing the inode's group locally. The window for a user to exceed the grant would typically be small enough that this shouldn't be a big problem, and it would be much simpler to implement. I'm definitely in support of implementing the proposed grant revocation and other cleanups, but I think it will take a long time to implement all of these proposed changes. |
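A minimal sketch of what this simpler MDS-side pre-check could look like, under stated assumptions: mdt_quota_get_group() and struct quota_usage below are hypothetical names introduced for illustration, not actual Lustre symbols, and the real check would sit in the MDT setattr path.

```c
/*
 * Hedged sketch of the simpler approach: before applying the group
 * change, the MDS asks the quota master whether the target group can
 * absorb the file's currently allocated blocks, and fails the setattr
 * with -EDQUOT if not.  All names here are hypothetical placeholders.
 */
#include <errno.h>
#include <stdint.h>

struct quota_usage {
	uint64_t qu_curblocks;     /* blocks currently charged to the ID */
	uint64_t qu_blk_hardlimit; /* block hard limit, 0 = no limit     */
};

/* hypothetical helper: fetch the group's usage/limit from the quota master */
int mdt_quota_get_group(uint32_t gid, struct quota_usage *qu);

int mdt_check_chgrp_quota(uint32_t new_gid, uint64_t file_blocks)
{
	struct quota_usage qu;
	int rc = mdt_quota_get_group(new_gid, &qu);

	if (rc != 0)
		return rc;

	/* no hard limit configured for this group: nothing to enforce */
	if (qu.qu_blk_hardlimit == 0)
		return 0;

	/* would the transferred blocks push the group over its limit? */
	if (qu.qu_curblocks + file_blocks > qu.qu_blk_hardlimit)
		return -EDQUOT;

	return 0;
}
```

One way to read the "window" mentioned above in terms of this sketch: file_blocks is a snapshot, so writes that land after the check but before the group change are not covered, which is the trade-off against the full reservation scheme in the proposal.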
| Comment by Niu Yawei (Inactive) [ 05/Apr/17 ] |
|
Thank you for the review, Andreas.
Right, I agree. That's why I think a reclaim mechanism is necessary, and the reclaim should happen only when a slave is running short of quota limit.
I'm not sure I understand you correctly. Do you mean we should not acquire/consume grants for user/group in the write RPC, but instead use a dedicated new RPC for acquire/release? I think that new RPC should be used for chgrp, but for cached writes we still need to acquire/consume grants for each user/group via the write RPC (a few more sets of 'grant' numbers would need to be packed into the write RPC).
That was my initial idea, but I gave it up due to two considerations:
As you can see, my proposal is simply to offload all of this work to the client. To ensure quota correctness, the additional work is to reserve quota until the whole setattr procedure is done, but that can be achieved by leveraging the existing grant code, and a side benefit is that the problem of cached writes exceeding quota can be solved at the same time. |
| Comment by Andreas Dilger [ 11/Apr/17 ] |
Considering that we already have problems with quotas being spread across OSTs, I think that spreading quotas across all of the clients can become even worse. If each client in a 32000-node system needs 32MB to generate good RPCs, that means 1TB of quota would be needed. Even with only 1MB of quota per client this would be 32GB of quota consumed just to generate a single RPC per client.
I was thinking that the quota/grant acquire could be done by enqueueing the DLM lock on the quota resource FID, and the quota/grant is returned to the client with the LVB data, and the client keeps this LVB updated as quota/grant is consumed. When the lock is cancelled, any remaining quota/grant is returned with the lock.
The MDS would need to track the total reserved quota for the setattr operations, not just checking each one. It would "consume" quota locally (from quota master) for the new user/group for each operation, and that quota would need to be logged in the setattr llog and transferred to the OSTs along with the setattr operations. I don't think the MDS would need to query the OSTs for their quota limits at all, but rather get its own quota. If there is a separate mechanism to reclaim space from OSTs, then that would happen in the background. It would make sense to ensure there is space in the new "llog_setattr64_rec_v2" being added to 2.10 for project quotas to log quota updates for each of user/group/project in case we need it. It is true that there could be a race condition, if the file size is growing quickly while the ownership is being changed, but that is not any different than quota races today for regular writes. |
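As an editorial aside, a quick check of the per-client grant arithmetic from the 32,000-client example earlier in this comment thread:

```latex
% verification of the per-client grant figures quoted above
\begin{align*}
  32{,}000 \times 32\,\mathrm{MB} &= 1{,}024{,}000\,\mathrm{MB} \approx 1\,\mathrm{TB},\\
  32{,}000 \times 1\,\mathrm{MB}  &= 32{,}000\,\mathrm{MB} = 32\,\mathrm{GB}.
\end{align*}
```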
| Comment by Niu Yawei (Inactive) [ 12/Apr/17 ] |
Right, but to implement accurate quota for chgrp and cached writes, I think that's probably the only way we have. It's worth noting that these reserved quotas can be reclaimed when the server is short of quota (usage approaching the limit), and an inactive client (no writes for that user/group from that client) should end up with zero reservation.
My plan was to use a single lock for all IDs (not a per-ID lock), and that lock would never be revoked; I just want to use the existing scalable glimpse mechanism to reclaim 'grant' or to notify when a limit is set or cleared.
I think the major drawback of this method is that it increases quota imbalance unnecessarily: when the setattr is done on the MDT, it acquires a large amount of quota limit from the quota master, and shortly afterwards, when the setattr is synced to the OSTs, the OSTs have to acquire the limit back. (If the OSTs used the limit packed in the setattr log directly, that would introduce more complexity in limit syncing between master and slaves.) If the OSTs acquire the limit in the first place, this kind of thrashing can be avoided. It also requires changing the quota slave to be aware of the setattr log; it would need to scan the setattr log on quota reintegration or on rebalancing. Another thing worth mentioning is that limit reclaim on the OSTs does happen in the background, but the setattr has to wait for the rebalancing to finish (to acquire the limit for the MDT), so the MDT needs to handle this properly to avoid blocking an MDT service thread, and the MDT also needs to glimpse the OST objects to know the current used blocks before the setattr. Having all of this work handled by the client looks better to me.
Yes, as I mentioned in the proposal, I think we can use this opportunity to solve the current problem. It looks to me like both approaches require a lot of development effort, so my opinion is that we should choose the one that solves the problem better while providing a framework that can be reused for other purposes. |
| Comment by Gerrit Updater [ 16/Sep/17 ] |
|
Hongchao Zhang (hongchao.zhang@intel.com) uploaded a new patch: https://review.whamcloud.com/29029 |
| Comment by Gerrit Updater [ 17/Nov/17 ] |
|
Hongchao Zhang (hongchao.zhang@intel.com) uploaded a new patch: https://review.whamcloud.com/30146 |
| Comment by Gerrit Updater [ 06/Feb/18 ] |
|
Oleg Drokin (oleg.drokin@intel.com) merged in patch https://review.whamcloud.com/30146/ |
| Comment by Peter Jones [ 06/Feb/18 ] |
|
Landed for 2.11 |
| Comment by Gerrit Updater [ 07/Feb/18 ] |
|
Minh Diep (minh.diep@intel.com) uploaded a new patch: https://review.whamcloud.com/31210 |
| Comment by Gerrit Updater [ 09/Feb/18 ] |
|
John L. Hammond (john.hammond@intel.com) merged in patch https://review.whamcloud.com/31210/ |
| Comment by Gerrit Updater [ 16/Nov/18 ] |
|
Andreas Dilger (adilger@whamcloud.com) uploaded a new patch: https://review.whamcloud.com/33678 |
| Comment by Gerrit Updater [ 18/Nov/18 ] |
|
Andreas Dilger (adilger@whamcloud.com) uploaded a new patch: https://review.whamcloud.com/33682 |
| Comment by Gerrit Updater [ 21/Nov/18 ] |
|
Andreas Dilger (adilger@whamcloud.com) uploaded a new patch: https://review.whamcloud.com/33705 |
| Comment by Gerrit Updater [ 22/Nov/18 ] |
|
Oleg Drokin (green@whamcloud.com) merged in patch https://review.whamcloud.com/33682/ |
| Comment by Gerrit Updater [ 29/Nov/18 ] |
|
Oleg Drokin (green@whamcloud.com) merged in patch https://review.whamcloud.com/33705/ |