LU-5152: Can't enforce block quota when unprivileged user changes group

    Description

      A quota bug which affects all versions of Lustre was recently revealed:

      If an unprivileged user belongs to multiple groups, block quota is not enforced when she changes a file she owns from one group to another.

      This situation was never considered in the quota design (from the first version of quota through the current new quota). Such a use case is probably rare in the real world; otherwise it would have been reported earlier.

      I think we should fix it in the current new quota architecture to make Lustre quota complete.
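
      A minimal reproduction sketch (the mount point, group names, and limits are illustrative, not from the original report), assuming group quota enforcement is already enabled on the filesystem, a Lustre client mounted at /mnt/lustre, and an unprivileged user who is a member of both grp1 and grp2:

          # As root: give grp2 a small block hard limit; grp1 stays unlimited.
          lfs setquota -g grp2 -b 0 -B 10240 /mnt/lustre    # hard limit of 10240 kbytes (10 MiB)

          # As the unprivileged user: write a 20 MiB file owned by grp1.
          sg grp1 -c 'dd if=/dev/zero of=/mnt/lustre/testfile bs=1M count=20'

          # Move the file into grp2. On affected versions this succeeds and grp2's
          # block usage ends up well above its hard limit, instead of the operation
          # being denied with EDQUOT.
          chgrp grp2 /mnt/lustre/testfile
          lfs quota -g grp2 /mnt/lustre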

          Activity

            [LU-5152] Can't enforce block quota when unprivileged user changes group

            Oleg Drokin (green@whamcloud.com) merged in patch https://review.whamcloud.com/33682/
            Subject: Revert "LU-5152 quota: enforce block quota for chgrp"
            Project: fs/lustre-release
            Branch: b2_10
            Current Patch Set:
            Commit: 83548f2b373884fcec9575417f0bc6de57abbdc2

            Andreas Dilger (adilger@whamcloud.com) uploaded a new patch: https://review.whamcloud.com/33705
            Subject: LU-5152 quota: disable sync chgrp to OSTs
            Project: fs/lustre-release
            Branch: master
            Current Patch Set: 1
            Commit: b9b9c21026c103a414c3bb32004459f26beeecdb

            Andreas Dilger (adilger@whamcloud.com) uploaded a new patch: https://review.whamcloud.com/33682
            Subject: Revert "LU-5152 quota: enforce block quota for chgrp"
            Project: fs/lustre-release
            Branch: b2_10
            Current Patch Set: 1
            Commit: 3f3e9312be341981060ec1b9912e1b93645c94a8

            Andreas Dilger (adilger@whamcloud.com) uploaded a new patch: https://review.whamcloud.com/33678
            Subject: Revert "LU-5152 quota: enforce block quota for chgrp"
            Project: fs/lustre-release
            Branch: master
            Current Patch Set: 1
            Commit: 52b06125e4012fd5b347c5237583b41d0254c2b5

            John L. Hammond (john.hammond@intel.com) merged in patch https://review.whamcloud.com/31210/
            Subject: LU-5152 quota: enforce block quota for chgrp
            Project: fs/lustre-release
            Branch: b2_10
            Current Patch Set:
            Commit: 07412234ec60de20cb8d8e45d755297fe6da2d61

            Minh Diep (minh.diep@intel.com) uploaded a new patch: https://review.whamcloud.com/31210
            Subject: LU-5152 quota: enforce block quota for chgrp
            Project: fs/lustre-release
            Branch: b2_10
            Current Patch Set: 1
            Commit: ccf8091ea0b36c1ff540eb910c9bce268a47d874

            pjones Peter Jones added a comment -

            Landed for 2.11

            Oleg Drokin (oleg.drokin@intel.com) merged in patch https://review.whamcloud.com/30146/
            Subject: LU-5152 quota: enforce block quota for chgrp
            Project: fs/lustre-release
            Branch: master
            Current Patch Set:
            Commit: 8a71fd5061bd073e055e6cbba1d238305e6827bb

            Hongchao Zhang (hongchao.zhang@intel.com) uploaded a new patch: https://review.whamcloud.com/30146
            Subject: LU-5152 quota: enforce block quota for chgrp
            Project: fs/lustre-release
            Branch: master
            Current Patch Set: 1
            Commit: 6457dbdd5f76a5bfd90f6d0383c26eaa67afb2f8

            Hongchao Zhang (hongchao.zhang@intel.com) uploaded a new patch: https://review.whamcloud.com/29029
            Subject: LU-5152 quota: enforce block quota for chgrp
            Project: fs/lustre-release
            Branch: master
            Current Patch Set: 1
            Commit: 65ad2a61d710f1bd410fdf59a96a34497e233de0

            niu Niu Yawei (Inactive) added a comment -

             Considering that we already have problems with quotas being spread across OSTs, I think that spreading quotas across all of the clients can become even worse. If each client in a 32000-node system needs 32MB to generate good RPCs, that means 1TB of quota would be needed. Even with only 1MB of quota per client this would be 32GB of quota consumed just to generate a single RPC per client.

             Right, but to implement accurate quota for chgrp & cached writes, I think that's probably the only way we have. It's worth noting that these reserved quotas can be reclaimed when the server is short of quota (usage approaching the limit), and an inactive client (no user/group writes from that client) should end up with zero reservation.

            I was thinking that the quota/grant acquire could be done by enqueueing the DLM lock on the quota resource FID, and the quota/grant is returned to the client with the LVB data, and the client keeps this LVB updated as quota/grant is consumed. When the lock is cancelled, any remaining quota/grant is returned with the lock.

             My plan was to use a single lock for all IDs (not a per-ID lock), and that lock will never be revoked; I just want to use its existing scalable glimpse mechanism to reclaim 'grant' or to notify that a limit was set or cleared.

            The MDS would need to track the total reserved quota for the setattr operations, not just checking each one. It would "consume" quota locally (from quota master) for the new user/group for each operation, and that quota would need to be logged in the setattr llog and transferred to the OSTs along with the setattr operations. I don't think the MDS would need to query the OSTs for their quota limits at all, but rather get its own quota. If there is a separate mechanism to reclaim space from OSTs, then that would happen in the background.

             I think the major drawback of this method is that it increases quota imbalance unnecessarily: on setattr the MDT acquires a large amount of quota limit from the quota master, and a short time later, when the setattr is synced to the OSTs, the OSTs have to acquire that limit back. (If the OSTs used the limit packed in the setattr log directly, it would introduce more complexity in limit syncing between master & slaves.) If the OSTs acquire the limit in the first place, that kind of thrashing can be avoided.

             And it requires changing the quota slave to be aware of the setattr log; it would need to scan the setattr log on quota reintegration or on rebalancing.

             Another thing that needs to be mentioned is that limit reclaim on the OSTs does happen in the background, but the setattr has to wait for the rebalancing to finish (to acquire the limit for the MDT), so the MDT needs to handle this properly to avoid blocking an MDT service thread; also, the MDT needs to glimpse the OST objects to know the currently used blocks before the setattr. Having all of this work handled by the client looks better to me.

            It is true that there could be a race condition, if the file size is growing quickly while the ownership is being changed, but that is not any different than quota races today for regular writes.

             Yes, as I mentioned in the proposal, I think we can use this opportunity to solve the current problem. It looks to me that both approaches require a lot of development effort, so my opinion is that we should choose the way that solves the problem better, while allowing the same framework to be reused for other purposes.
             BTW, this race window doesn't look that short to me.
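
             As a back-of-the-envelope restatement of the per-client reservation figures discussed above (assuming 32000 clients and the 32 MiB vs. 1 MiB per-client reservations from the example, not new measurements):

                 # Total quota tied up in per-client reservations at the two sizes quoted above.
                 clients=32000
                 echo "$(( clients * 32 / 1024 )) GiB reserved at 32 MiB per client"   # 1000 GiB, roughly the 1TB quoted
                 echo "$(( clients * 1 / 1024 )) GiB reserved at 1 MiB per client"     # 31 GiB, roughly the 32GB quoted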

            People

              hongchao.zhang Hongchao Zhang
              niu Niu Yawei (Inactive)
              Votes: 0
              Watchers: 26
