[LU-4017] Add project quota support feature Created: 27/Sep/13  Updated: 21/Nov/20  Resolved: 09/May/17

Status: Resolved
Project: Lustre
Component/s: None
Affects Version/s: None
Fix Version/s: Lustre 2.10.0

Type: New Feature Priority: Minor
Reporter: Li Xi (Inactive) Assignee: Niu Yawei (Inactive)
Resolution: Fixed Votes: 0
Labels: patch

Attachments: PDF File CLUG2013_Pool-Support-of-Quota_final.pdf     PDF File OST_POOL_Based_Quota_Design.pdf    
Issue Links:
Cloners
is cloned by LU-11023 OST Pool Quotas Resolved
Related
is related to LU-9339 fix RHEL 7.2 project quota build error Resolved
is related to LU-9355 remove obsolete OBD_FL_LOCAL_MASK Resolved
is related to LU-9554 upgrade to Lustre 2.10 breaks quota i... Resolved
is related to LU-12056 tar doesn't support project id Resolved
is related to LU-12160 use-after-free in osd_object_delete() Resolved
is related to LUDOC-202 Lustre Manual Documentation for proje... Resolved
is related to LU-7991 Add project quota for ZFS Resolved
is related to LU-9555 "df /path/to/project" should return p... Resolved
Sub-Tasks:
Key
Summary
Type
Status
Assignee
LU-7514 add project quota support to lfs find... Technical task Resolved WC Triage  
Rank (Obsolete): 10783

 Description   

The OST (or MDT) pool feature enables users to group OSTs together to make object placement more flexible, which is a very useful mechanism for system management. However, quota support for pools is not complete yet, which limits its usefulness. Luckily, the current quota framework is powerful and flexible, which makes it possible to add such an extension.

We believe that quota support for pools will be helpful for a lot of use cases, so we are trying to complete it. With help from the community, we've made some progress. The full patch will be a big one that touches quite a lot of Lustre components. Any advice or feedback about the implementation will be really helpful.



 Comments   
Comment by Li Xi (Inactive) [ 27/Sep/13 ]

Here is where the patch is tracked.
http://review.whamcloud.com/#/c/7418/

Comment by Peter Jones [ 27/Sep/13 ]

Thanks Li Xi. It may be a little while before we assess this feature due to the present focus on the 2.5.0 release, but I am sure that this will be warmly received when our attention switches back to features as the 2.6 release cycle commences.

Comment by Li Xi (Inactive) [ 27/Sep/13 ]

Thanks Peter! No problem. I also need more time to finish and clean up the patch, so that it will be easier for us to review the code.

Comment by Li Xi (Inactive) [ 23/Oct/13 ]

These are the presentation slides which describe pool based quota roughly. They might be useful since the detailed design document is still in preparation.

Comment by Li Xi (Inactive) [ 23/Oct/13 ]

Here are the presentation slides.
http://www.opensfs.org/wp-content/uploads/2013/10/CLUG2013_Pool-Support-of-Quota_final.pdf

Comment by Andreas Dilger [ 25/Nov/13 ]

Any update on the design document? We're hoping that this code will be ready in time for the 2.6 feature freeze.

Comment by Li Xi (Inactive) [ 26/Nov/13 ]

Hi Andreas,

I've just posted the design document. Sorry for the delay. Please check it.

Comment by Niu Yawei (Inactive) [ 28/Nov/13 ]

It looks to me that most of the complications come from "An OST can be a member of multiple pools". I'm not sure if this is important (is there any use case that explains why an OST needs to be shared by multiple pools?). If there isn't any customer using this feature, we could probably change the rule to "An OST can only be a member of a single pool"? I believe that would make things much easier.

Comment by Shuichi Ihara (Inactive) [ 28/Nov/13 ]

Yes, we understand that would be easier.
However, that is quite an important function, and other filesystems can do it today. If an OST can only be a member of one specific OST pool and we set quota on it, that means the number of quota-enabled OST pools is limited to the number of OSTs at most. For example, if we only have 8 OSTs, we can have only 8 specific directories and OST pools (when we create 8 OST pools with a single OST each) for quota setting.

The other point is that a single pool consisting of a single OST doesn't make sense from a performance perspective either. That's why we think an OST should be able to be a member of multiple OST pools, and we should be able to set quota on all of those OST pools.

Comment by Niu Yawei (Inactive) [ 28/Nov/13 ]

However, that is quite an important function, and other filesystems can do it today. If an OST can only be a member of one specific OST pool and we set quota on it, that means the number of quota-enabled OST pools is limited to the number of OSTs at most. For example, if we only have 8 OSTs, we can have only 8 specific directories and OST pools (when we create 8 OST pools with a single OST each) for quota setting.

You mean using OST pools to implement directory quota? I don't think that's a good example; from my perspective, pool quota is different from directory quota, and we may implement real directory quota in the future. Anyway, I just don't quite see the point of sharing the same OST between OST pools; maybe I missed some important use cases.

The other point is that a single pool consisting of a single OST doesn't make sense from a performance perspective either. That's why we think an OST should be able to be a member of multiple OST pools, and we should be able to set quota on all of those OST pools.

A single pool can of course have multiple OSTs; what I'm not sure about is: does "sharing an OST between multiple pools" make sense?

Comment by Shuichi Ihara (Inactive) [ 28/Nov/13 ]

You mean using OST pools to implement directory quota? I don't think that's a good example; from my perspective, pool quota is different from directory quota, and we may implement real directory quota in the future. Anyway, I just don't quite see the point of sharing the same OST between OST pools; maybe I missed some important use cases.

Even today, since lustre-1.8, we can create multiple OST pools using the same OSTs, and these OST pools are assigned to specific directories. If we can have a quota function for these OST pools, we eventually get a directory quota feature, don't we?

Comment by Shuichi Ihara (Inactive) [ 28/Nov/13 ]

"Directory quota" might be some confusions, but anyway, eventually, OST pool are assigned to specific directories. OSTs can belong to multiple pools for multiple directories, today.
So, we are just adding quota to these OST pools (then, that's eventually for specifc direcotries).

Comment by Li Xi (Inactive) [ 28/Nov/13 ]

Since Lustre already provides the ability to share the same OSTs among multiple pools, it seems unnecessary to add an extra restriction right now which would limit how we can use OST pools as well as pool based quota. As Ihara said, the upper limit on the number of pools would be the number of OSTs, which might be far from enough on small systems. Personally, I'd rather use a flexible function in a limited way than use a very limited function. I believe system administrators will figure out the suitable usage of OST pools for their use cases. It is easy to separate OSTs into non-overlapping pools if one wishes.

Based on my personal experience, I don't think the flexibility of OST pools significantly increases the difficulty of implementing pool based quota. The main difficulty, I think, is maintaining compatibility with old systems, which is therefore discussed at length in the design document.

And yeah, 'directory quota' is confusing when it is placed alongside OST pools. Currently, the patch does not support directory quota, i.e. we cannot limit the total disk usage of a directory with the current patch. However, I believe it wouldn't take much effort to add it. I'd like to add per-pool space accounting alongside user/group based accounting to enable it as soon as I get some spare time.

Comment by Andreas Dilger [ 28/Nov/13 ]

Thank you for the good design document.

Allowing pools to share the same OSTs is something that I would prefer to keep in the implementation.

One thing that isn't quite explained is the detail of how the pool quota is identified internally. The quota reimplementation in 2.4 allowed a full 64-bit FID to identify the quota, so that e.g. the parent directory FID could be used to identify a directory quota. If the client is only passing a 16-bit pool identifier to the OSTs, the network protocol will again need to be changed to support directory quotas, and I'd prefer to avoid that.

I'd like to see some detail about how this would integrate with directory/project quotas if they were available. That doesn't mean you need to implement that feature, but I'd like to consider how one could have a directory/project quota on a tree and still be able to specify a pool on which to allocate the files. If two quotas apply to a file, which one takes precedence? Is it even possible to have two quotas on a file?

I also think a 16-bit identifier may be too small, especially if this also starts being used for project/directory quotas. That would be fixed by using a full 128-bit FID for the pool identifier, but there may not be space in the RPC for another FID. Is there room for at least a 32-bit identifier? That would probably be large enough for most uses.

For the "default" pool (ID = 0) is this just the regular user/group quotas? If there is no enforcement of quota on the default pool, then it would be easy for users to specify new files/directories with no pool and bypass the pool quotas entirely. It should be possible to specify a filesystem-wide default pool for any objects that do not specify a pool, so that users cannot bypass pool quotas if enabled.

It doesn't make sense for there to be a separate pool xattr, since there is already space in the lov_mds_md_v3 to store the pool name.

Some thought should be given to how this will integrate into LFSCK so that it can fix up the pool ID on existing files.

For the upgrade process, it makes sense at minimum to document details of how to list the current pool configuration and then use that to recreate the pools again. Better would be to have a script that saves the current pool config to a file and can then use the file to recreate the pool config afterward. An even better option would be to have a separate config record which contains the name-to-ID mapping for the pool that could be added at the end of the config log for existing configs, so that the config does not need to be rewritten at all, just added to. For new pools this record would be written when the pool is first created.

Comment by Li Xi (Inactive) [ 29/Nov/13 ]

Hi Andreas,

Thank you very much for your advice. It is really helpful!

I'd like to explain more about the idea of directory/project quota using pool based quota. Currently, an object on an OST can only consume more disk space if it acquires enough quota for both its user and its group. In order to do so, all objects get and save their UID/GID on disk. With the current patch, from the point of view of quota, we can consider OST pools as if they were different file systems: different user/group quotas can be set on different pools, and different space usage can be reported for different pools. However, the current patch does not change the fact that all accounting and limits are based on users and groups. Since quotas for projects and directories are eagerly required, we'd like to set a space limit on the entire pool. That means an object can only consume more disk space if 1) its user does not exceed the disk usage limit in the pool, 2) its group does not exceed the disk usage limit in the pool, and 3) its pool does not exceed an overall usage limit. We can continue considering pools as if they were different file systems, and furthermore we get the ability to set their total disk space, which makes them look even more like separate virtual file systems.

In order to do so, the total disk usage of each pool should be accounted just as the disk usage of each user/group is accounted, and the overall usage limit of a pool should be enforced just as the limit of each user/group is enforced. Obviously, the pool ID of an OST object should be saved just as its UID/GID is saved. Luckily, most of this work is already finished in the current patch: the pool ID is already saved for objects on both MDTs and OSTs, and I don't think it needs much work to complete quotas for entire pools. With the ability to limit the total space usage of pools, we can easily set space usage limits on directories/projects, since we have a really flexible pool feature. And I believe many more people will be interested in using OST pools with this new feature.

The current 16-bit pool ID means that we can define at most 65536 pools. I think that will be sufficient for a very long time even if pools are used to set quotas on directories/projects. However, more bits are certainly better if possible.

Personally, I don't like saving the pool ID in an extra extended attribute either, since we already have XATTR_NAME_LOV, and I think it is possible to extract the pool ID from XATTR_NAME_LOV on the MDT. However, objects on OSTs do not have that extended attribute (correct me if I am wrong), which means we have to add a new extended attribute anyway. For simplicity of the code, I just added it to all objects. Do you have a better idea?

Thanks!

Comment by Niu Yawei (Inactive) [ 29/Nov/13 ]

One thing that isn't quite explained is the detail of how the pool quota is identified internally. The quota reimplementation in 2.4 allowed a full 64-bit FID to identify the quota, so that e.g. the parent directory FID could be used to identify a directory quota. If the client is only passing a 16-bit pool identifier to the OSTs, the network protocol will again need to be changed to support directory quotas, and I'd prefer to avoid that.

Andreas, there are only 16 bits for the pool ID in the quota FID; see lquota_generate_fid().

I'd like to explain more about the idea of directory/project quota using pool based quota. Currently, an object on an OST can only consume more disk space if it acquires enough quota for both its user and its group. In order to do so, all objects get and save their UID/GID on disk. With the current patch, from the point of view of quota, we can consider OST pools as if they were different file systems: different user/group quotas can be set on different pools, and different space usage can be reported for different pools. However, the current patch does not change the fact that all accounting and limits are based on users and groups. Since quotas for projects and directories are eagerly required, we'd like to set a space limit on the entire pool. That means an object can only consume more disk space if 1) its user does not exceed the disk usage limit in the pool, 2) its group does not exceed the disk usage limit in the pool, and 3) its pool does not exceed an overall usage limit. We can continue considering pools as if they were different file systems, and furthermore we get the ability to set their total disk space, which makes them look even more like separate virtual file systems.

Li Xi, it looks like the design doesn't mention how the usage for a user/group on a specific pool is tracked, so I don't quite see how we can know if the user exceeds the pool quota limit. And could you explain "3) its pool does not exceed an overall usage limit" more?

Comment by Li Xi (Inactive) [ 29/Nov/13 ]

Hi Yawei,

The main function for enforcing space limits is osd_declare_inode_qid(). This function invokes osd_declare_qid() twice, first to check that the user's quota is not exceeded, and second to check that the group's quota is not exceeded. When we try to add a usage limit for the entire pool, we can just add a third call to osd_declare_qid() which checks that the space limit of the entire pool is not exceeded. We need to add POOLQUOTA alongside the existing USRQUOTA and GRPQUOTA, and we need to create a quota file for each pool too. But I don't think the work will take too much time. What is your opinion?

I think the attempt to set limits on directories/projects using pool based quota is not a bad idea. There seems to be an efficiency problem if we try to implement 'true' directory/subtree quota, because when we move files/directories from one directory to another, all the disk usage has to be updated immediately, which is not friendly to performance. The current implementation of pool based quota does not have that problem.
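
As an illustration of the enforcement order described above (user limit, then group limit, then a proposed per-pool limit), here is a minimal, self-contained C sketch. The names and signatures are simplified stand-ins, not the real Lustre osd_declare_inode_qid()/osd_declare_qid() prototypes, and the POOLQUOTA type was only a proposal at this point.

/* Simplified sketch only; the real osd_declare_inode_qid()/osd_declare_qid()
 * take an lu_env, a transaction handle and quota identifiers, and POOLQUOTA
 * is a proposed quota type alongside USRQUOTA and GRPQUOTA. */
#include <stdio.h>

enum qtype { USR, GRP, POOL };

/* stand-in for osd_declare_qid(): pretend each quota type has a fixed limit */
static int declare_qid(enum qtype type, unsigned int id,
                       unsigned int pool_id, long long bytes)
{
    long long limit = (type == POOL) ? (1LL << 30) : (1LL << 20); /* fake limits */
    (void)id; (void)pool_id;
    return bytes > limit ? -1 : 0;          /* would be -EDQUOT in real code */
}

/* sketch of osd_declare_inode_qid() extended with a third, per-pool check */
static int declare_inode_qid(unsigned int uid, unsigned int gid,
                             unsigned int pool_id, long long bytes)
{
    if (declare_qid(USR, uid, pool_id, bytes))          /* 1) user limit  */
        return -1;
    if (declare_qid(GRP, gid, pool_id, bytes))          /* 2) group limit */
        return -1;
    return declare_qid(POOL, pool_id, pool_id, bytes);  /* 3) pool limit  */
}

int main(void)
{
    printf("%d\n", declare_inode_qid(1000, 1000, 7, 4096));
    return 0;
}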

Comment by Niu Yawei (Inactive) [ 29/Nov/13 ]

The main function for enforcing space limits is osd_declare_inode_qid(). This function invokes osd_declare_qid() twice, first to check that the user's quota is not exceeded, and second to check that the group's quota is not exceeded. When we try to add a usage limit for the entire pool, we can just add a third call to osd_declare_qid() which checks that the space limit of the entire pool is not exceeded. We need to add POOLQUOTA alongside the existing USRQUOTA and GRPQUOTA, and we need to create a quota file for each pool too. But I don't think the work will take too much time. What is your opinion?

I'm a little confused: the current design/implementation of pool quota is quota per user per pool, not quota per pool, right? My question was where the space usage of a user per pool is stored.

Does the quota per pool (POOLQUOTA) you mentioned above explain "3) its pool does not exceed an overall usage limit"? It's in your plan but not in the current design, am I understanding right?

Comment by Li Xi (Inactive) [ 29/Nov/13 ]

Yeah, it is a little bit confusing here. You are right, the current patch only supports quotas per user/group per pool, not quota for an entire pool. The per-pool space usage of users is stored as quota files on the OSDs, one quota file per pool for each user/group. Actually, I didn't change much code in this part, because the existing code is really flexible and extensible.

And it is correct that I have not implemented POOLQUOTA or quotas for entire pools; that is still being designed.

Comment by Andreas Dilger [ 29/Nov/13 ]

The main problem with storing a separate xattr for the pool name is that this will cause the xattr to overflow the space on the OST inode and allocate a separate block for the pool xattr. That will really hurt performance, especially if this xattr needs to be read from disk for each new file access.

As for pool vs project quota, I'd like to plan for the future once there is project quota, so that the protocol does not need to change again later. I think for project quota it makes sense that this can be set when the file is first created (maybe inherited from the parent directory), but we do not need to track this when files are moved to another directory. In that regard, the current pool proposal is fine for project quota as well, if the same OSTs can be part of multiple projects (pools), since this allows users to use all of the OSTs (if they are in the pool) without loss of bandwidth.

I think if there can be many projects then 65536 is too small a limit. I haven't looked at the protocol yet to see if there is room for a 32-bit value or not.

One thing that would be needed is some way to change the project of a file after it is created. I don't know if that should be done by "lfs migrate" to change the OSTs, or just setxattr to change the pool name in the LOV xattr?

Comment by Niu Yawei (Inactive) [ 02/Dec/13 ]

Yeah, it is a little bit confusing here. You are right, the current patch only supports quotas per user/group per pool, not quota for an entire pool. The per-pool space usage of users is stored as quota files on the OSDs, one quota file per pool for each user/group. Actually, I didn't change much code in this part, because the existing code is really flexible and extensible.

Hmm, the quota accounting files are created by the underlying backend filesystem, and the space usage is tracked/updated by the backend filesystem as well. Where is the accounting info for each pool tracked/updated? In the backend filesystem (ldiskfs/zfs) or in the OSD?

I think if there can be many projects then 65536 is too small a limit. I haven't looked at the protocol yet to see if there is room for a 32-bit value or not.

I agree, 65536 looks a little bit small as a limit on the number of projects; I was thinking that a directory FID could be better than a pool ID for this purpose.
Another reason I think pool quota isn't suited for project quota is that setting an inode limit on an OST pool sounds weird to me; however, project quota may have the requirement of setting an inode limit on project files.

My personal thoughts are:

  • If an OST can only belong to one pool and there is no inode limit for OST pool quota, pool quota will be simpler: far fewer compatibility problems, and no need to track usage for each pool at all.
  • Implementing directory quota for the project quota use case could be better than implementing a complex (OST) pool quota (with usage tracking for each pool, an inode limit, and all compatibility problems resolved) for that purpose.
Comment by Li Xi (Inactive) [ 02/Dec/13 ]

I don't know this in detail, so please correct me if I am wrong. I think current Lustre disk usage and limits are tracked and enforced at the QSD layer. The inode/kbyte usage and limits are maintained by the QSD itself without any help from ldiskfs. I think it is one of the reasons why the current quota framework is flexible and powerful, and I guess it is why 'lfs quotacheck' no longer works?

As Andreas mentioned, the pool feature does not have to be limited to OSTs. We might want to have pools for both OSTs and MDTs in the future once DNE is widely used. In that regard, setting both space and inode limits on pools seems straightforward. When MDT support for pools is ready, it will need little (if any) work to enforce inode limits on pools.

The current quota framework is really powerful, which makes it not so hard to implement quota support for pools. Yawei, do you already have a good idea for implementing directory quota? I would be very happy to discuss it.

Comment by Niu Yawei (Inactive) [ 02/Dec/13 ]

I don't know this in detail, so please correct me if I am wrong. I think current Lustre disk usage and limits are tracked and enforced at the QSD layer. The inode/kbyte usage and limits are maintained by the QSD itself without any help from ldiskfs. I think it is one of the reasons why the current quota framework is flexible and powerful, and I guess it is why 'lfs quotacheck' no longer works?

Quota enforcement (limits) is done in the OSD, but quota accounting (usage) is done in the backend filesystem. The reason we no longer need quotacheck is that ldiskfs (and zfs) now always enable quota accounting by default.

As Andreas mentioned, the pool feature does not have to be limited to OSTs. We might want to have pools for both OSTs and MDTs in the future once DNE is widely used. In that regard, setting both space and inode limits on pools seems straightforward. When MDT support for pools is ready, it will need little (if any) work to enforce inode limits on pools.

That means if we want to set quota for a project, we have to create two pools (one MDT pool and one OST pool), then set inode and block limit separately?

The current quota framework is really powerful, which makes it not so hard to implement quota support for pools. Yawei, do you already have a good idea for implementing directory quota? I would be very happy to discuss it.

I haven't thought about directory quota carefully, but I think the major work for directory quota is usage tracking for each directory, and if we can address the problem of usage tracking for each pool, then we might be able to address usage tracking for directories in the same way.

Comment by Andreas Dilger [ 02/Dec/13 ]

I've always thought that MDTs might be in the same pool as OSTs, so there would be no need for a separate pool. It would also be possible to have MDT-only pools if a pool only relates to the namespace (e.g. DNE striped directory selection from an MDT pool). The difference between an OST and an MDT has already started disappearing with the Data-on-MDT project.

I also think it will be easier to consider a "project" quota instead of a "directory" quota. The "project" quota would be inherited from the parent directory as one expects, but the main difference is that moving a file out of a project directory does not remove it from the project accounting. This avoids a lot of implementation complexity in tracking when files are removed from the project. My understanding is that this matches the semantics of the ext3/4 project quota implementation. It would make sense to check what the semantics are for XFS project quotas.

As for accounting of the pool quota - Niu is correct. While the Lustre quota code handles granting of quota to servers, it is the underlying OSD quota accounting that tracks all of the space usage.

Comment by Li Xi (Inactive) [ 02/Dec/13 ]

Thank you, Andreas and Yawei, for correcting me! I will check the codes for more detail.

After a quick glance at the code, I think XFS project quota and ext4 subtree quota share the same idea. An internal attribute, the project ID (or subtree ID for ext4), is set on the inode to mark the file as a member of a project (or subtree). This ID is inherited from the parent directory and is kept unchanged when the file is moved out of its former directory, which resembles the pool attribute in Lustre. This makes me believe that the Lustre pool feature is a good basis for implementing project-like quota.
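
As a concrete reference for the semantics described above, here is a small C sketch using the fsxattr ioctls as they exist in current kernels (FS_IOC_FSGETXATTR/FS_IOC_FSSETXATTR from <linux/fs.h>); the path and project ID are only examples. Setting FS_XFLAG_PROJINHERIT on a directory makes newly created children inherit its project ID, which is the inheritance behaviour referred to in the comment.

/* Sketch: mark a directory as belonging to project 1000 and make new
 * files created under it inherit that ID.  Error handling kept minimal. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>

int main(void)
{
    struct fsxattr fsx;
    int fd = open("/mnt/fs/projdir", O_RDONLY | O_DIRECTORY); /* example path */

    if (fd < 0)
        return 1;
    if (ioctl(fd, FS_IOC_FSGETXATTR, &fsx) < 0)   /* read the current attributes */
        return 1;
    fsx.fsx_projid = 1000;                        /* example project ID */
    fsx.fsx_xflags |= FS_XFLAG_PROJINHERIT;       /* new children inherit the ID */
    if (ioctl(fd, FS_IOC_FSSETXATTR, &fsx) < 0)
        return 1;
    return close(fd);
}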

Comment by Li Xi (Inactive) [ 02/Dec/13 ]

BTW, as far as I know, GPFS only supports at most 10000 independent filesets. Maybe I missed something, but it seems to me that 65536 pools is quite sufficient even for project quota.

Comment by Shuichi Ihara (Inactive) [ 04/Dec/13 ]

Andreas, Niu
Thank you so much for all of your reviews and advice. Are we now on the same page and able to move forward to the next stage? Please advise.

Comment by Niu Yawei (Inactive) [ 04/Dec/13 ]

I think we may define three types of quota:

1. pool quota

This kind of quota limits usage on real devices. We now have only two default quota pools, the MD pool and the DT pool: all MDTs belong to the MD pool and all OSTs belong to the DT pool, and these two default pools cover the whole filesystem (MD manages the whole-fs inode limit, DT manages the whole-fs block limit).

If a user wants a more specific quota than the whole-fs quota, she can create a quota pool and set a limit for that specific pool. For instance, suppose a Lustre filesystem has several fast OSTs with SSD drives in the backend; besides the whole-fs limit, to set a more restrictive limit for those fast OSTs, one may create a 'fast' pool containing all of these OSTs and set a smaller limit on it. Once the small-file-on-MDT feature is done, MDTs will consume block limit too (block quota is ignored on MDTs for now), and a user may want a smaller block limit on MDTs compared with OSTs; then she can create a 'small file' pool containing all MDTs and set a smaller block limit on it.

2. directory quota / project quota / ...

These kinds of quotas limit usage on an object (directory, project, ...). They are harder to implement than pool quota, because usage tracking for each directory/project has to be done (unlike pool quota, which can naturally leverage the backend fs quota usage). This kind of quota is usually for limiting the total size of a directory or project, so the quota identity should be the directory FID (or the parent directory FID of the project) rather than a UID/GID, and directory/project quota should only be valid for the default MD/DT pools imho.

3. per user directory quota / per user project quota / ...

These kinds of quotas limit a specific user on an object (directory, project, ...). This is harder than the previous two types, and it looks like the current quota framework can't support it well.

It looks to me that you want the 3rd one, but the design/implementation is mixed with the 1st one? Anyway, I think we could start from the easy one (the 1st, pool quota) if you want.

Comment by Gerrit Updater [ 28/Jul/15 ]

Wang Shilong (wshilong@ddn.com) uploaded a new patch: http://review.whamcloud.com/15761
Subject: LU-4017 e2fsprogs: clean up codes for adding new quota type
Project: tools/e2fsprogs
Branch: master-lustre
Current Patch Set: 1
Commit: 41427ba05d0715e836d496352da3c6b5fcbc57ab

Comment by Gerrit Updater [ 28/Jul/15 ]

Wang Shilong (wshilong@ddn.com) uploaded a new patch: http://review.whamcloud.com/15762
Subject: LU-4017 e2fsprogs: add project quota support
Project: tools/e2fsprogs
Branch: master-lustre
Current Patch Set: 1
Commit: 1360f25370938ee90bdbf00251e244235c44199b

Comment by Gerrit Updater [ 28/Jul/15 ]

Wang Shilong (wshilong@ddn.com) uploaded a new patch: http://review.whamcloud.com/15763
Subject: LU-4017 e2fsprogs: add project feature
Project: tools/e2fsprogs
Branch: master-lustre
Current Patch Set: 1
Commit: ed5ed77aa6daa3cc8eb9f5a4556cc1a139d0f6ba

Comment by Gerrit Updater [ 28/Jul/15 ]

Wang Shilong (wshilong@ddn.com) uploaded a new patch: http://review.whamcloud.com/15764
Subject: LU-4017 e2fsprogs: add inherit flags for project quota
Project: tools/e2fsprogs
Branch: master-lustre
Current Patch Set: 1
Commit: 8a28e38abb5c8a52dd1ae38824b32a8baecc00dd

Comment by Andreas Dilger [ 28/Aug/15 ]

I've pushed an updated version of e2fsprogs-1.42.13.wc3 to the master-lustre-test repository and this has been able to build successfully. If this passes testing I'll push it over to master-lustre and you can rebase these patches. It removes lfsck support, which has been causing build problems and is no longer supported.

http://review.whamcloud.com/#/c/16121/

Comment by Gerrit Updater [ 31/Dec/15 ]

Wang Shilong (wshilong@ddn.com) uploaded a new patch: http://review.whamcloud.com/17770
Subject: LU-4017 ldiskfs: add project quota support
Project: fs/lustre-release
Branch: master
Current Patch Set: 1
Commit: 673024e842df487b9974a8d1fa0b070377c3b344

Comment by James A Simmons [ 31/Dec/15 ]

Adding all that new quota code to ldiskfs will never be accepted by the upstream kernel people. I think it would be better to abstract it into the osd-ldiskfs layer instead. Also, what about ZFS? Perhaps in that case we can abstract it into its own layer or integrate it with the lquota code.

Comment by Shuichi Ihara (Inactive) [ 31/Dec/15 ]

NOTE: the project quota inode field is already reserved in the ext4 superblock in the upstream kernel.
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/fs/ext4?id=8b4953e13f4c5d9a3c869f5fca7d51e1700e7db0
All patches have been reviewed by the ext4 maintainers. There is still some remaining work (e.g. revised kernel/e2fsprogs patches with several fixes, and xfstests support for both xfs/ext4), but the project itself is going well and we are keeping consensus with the xfs/ext4 developers.
We don't think it is far from being finished, so we are implementing project quota based on the same quota framework that uid/gid quota uses with ext4 quota.
Right now, we see several phases for this feature: 1) new inode proj_id support in ldiskfs, 2) project quota implementation in lustre/ldiskfs, 3) zfs support.

Comment by Andreas Dilger [ 31/Dec/15 ]

James, the upstream ext4 support for project quota is very close to landing. Also, we are just in the process of adding inode quotas to ZFS, and it would be straightforward to add project quotas to ZFS in a similar manner. In the current osd-zfs code we had inode accounting separate from the core ZFS code and it didn't work very well, which is the reason we are moving it into the core ZFS code now.

Comment by James A Simmons [ 31/Dec/15 ]

That is excellent news. It will be a lot more work to carry around these patches but at least down the road it will be standard code in ext4.

Comment by Shuichi Ihara (Inactive) [ 04/Jan/16 ]

The patch set is big because we need to backport kernel patches against all linux distributions supported today. Shilong ported these patches against RHEL6 and SLES11 as well.
There is a big question here: how long will Lustre keep RHEL6 and SLES11 server support? We know lustre-2.8 should support RHEL6/SLES11 servers, but will lustre-2.9 or later also support RHEL6/SLES11 servers, or only RHEL7/SLES12?

The project quota feature is not ready today, but it will be in 2.9 or 2.10. We really want to know what linux distributions will be supported at that point.
Otherwise, we would have to continue maintenance work for RHEL6/SLES11 for a long while, only for these kernels to drop off the support list in lustre-2.9 or 2.10.

Comment by Peter Jones [ 04/Jan/16 ]

Ihara

I don't think that you need to worry about RHEL6/SLES11 - both of those are scheduled to be dropped in Lustre 2.9 - http://wiki.lustre.org/Release_2.9.0 . I would focus efforts on the latest RHEL 7.x, though the incremental work to also support the latest SLES12 SPx should not be much, and having it would be well-received in some quarters.

Peter

Comment by Andreas Dilger [ 08/Jan/16 ]

News from Ted Ts'o on the ext4 developer concall today was that the project quota feature was going to land in the upstream kernel during this merge window. He was wondering if Li Xi had updated xfstests to test the project quota feature with ext4 yet, so that he didn't have to re-do this work himself? If yes, please repost the xfstests patches to the linux-ext4 mailing list.

Comment by Li Xi (Inactive) [ 08/Jan/16 ]

Hi Andreas,

Shilong Wang has added some tests to xfstests. And he will push the patch soon.

Thanks!

Comment by Gerrit Updater [ 23/Feb/16 ]

Wang Shilong (wshilong@ddn.com) uploaded a new patch: http://review.whamcloud.com/18568
Subject: LU-4017 e2fsprogs: always read full inode structure
Project: tools/e2fsprogs
Branch: master
Current Patch Set: 1
Commit: 386ce6245651edabedb9413a3d4a4f981870e239

Comment by Gerrit Updater [ 23/Feb/16 ]

Wang Shilong (wshilong@ddn.com) uploaded a new patch: http://review.whamcloud.com/18569
Subject: LU-4017 e2fsprogs: always read full inode structure
Project: tools/e2fsprogs
Branch: master-lustre
Current Patch Set: 1
Commit: a52ebc45e9b50607ea322bf98819843780a29c18

Comment by Gerrit Updater [ 24/Feb/16 ]

Wang Shilong (wshilong@ddn.com) uploaded a new patch: http://review.whamcloud.com/18598
Subject: LU-4017 e2fsprogs: always read full inode structure
Project: tools/e2fsprogs
Branch: master-lustre
Current Patch Set: 1
Commit: 8950918d4a326d75ca899585873b0fda84150e48

Comment by Gerrit Updater [ 14/Mar/16 ]

Wang Shilong (wshilong@ddn.com) uploaded a new patch: http://review.whamcloud.com/18894
Subject: LU-4017 quota: cleanup codes of quota for new type
Project: fs/lustre-release
Branch: master
Current Patch Set: 1
Commit: 3daf9324d738e1c6179c868838d2d1eebb256a96

Comment by Gerrit Updater [ 14/Mar/16 ]

Wang Shilong (wshilong@ddn.com) uploaded a new patch: http://review.whamcloud.com/18895
Subject: LU-4017 e2fsprogs: add [ch/ls]attr support for project quota
Project: tools/e2fsprogs
Branch: master-lustre
Current Patch Set: 1
Commit: f2e9f5413db33c8da0cdf334c6c60e7ae3422fe7

Comment by Gerrit Updater [ 27/Apr/16 ]

Wang Shilong (wshilong@ddn.com) uploaded a new patch: http://review.whamcloud.com/19809
Subject: LU-4017 pools: generate pool ID for OST pools
Project: fs/lustre-release
Branch: master
Current Patch Set: 1
Commit: dc5eb611b45beef03b33f7fe18aa65952c83d66d

Comment by Gerrit Updater [ 27/Apr/16 ]

Wang Shilong (wshilong@ddn.com) uploaded a new patch: http://review.whamcloud.com/19810
Subject: LU-4017 pools: add pool ID to pool_new operations
Project: fs/lustre-release
Branch: master
Current Patch Set: 1
Commit: 240ca95e6f4ec340da3357573a867cbac53828d9

Comment by Gerrit Updater [ 27/Apr/16 ]

Wang Shilong (wshilong@ddn.com) uploaded a new patch: http://review.whamcloud.com/19811
Subject: LU-4017 pools: add pool ID support to lov/lod
Project: fs/lustre-release
Branch: master
Current Patch Set: 1
Commit: a6d0738c5da3308a5b2d34aab21f61cbb656867a

Comment by Gerrit Updater [ 28/Apr/16 ]

Wang Shilong (wshilong@ddn.com) uploaded a new patch: http://review.whamcloud.com/19843
Subject: LU-4017 quota: redefine LL_MAXQUOTAS for Lustre
Project: fs/lustre-release
Branch: master
Current Patch Set: 1
Commit: 504a235ba0eb40e7eb72523eae0ea77e4e389026

Comment by Gerrit Updater [ 02/Jun/16 ]

Oleg Drokin (oleg.drokin@intel.com) merged in patch http://review.whamcloud.com/19843/
Subject: LU-4017 quota: redefine LL_MAXQUOTAS for Lustre
Project: fs/lustre-release
Branch: master
Current Patch Set:
Commit: 5522990660248930108e84c89bc7e5807bda9ea0

Comment by Gerrit Updater [ 09/Jun/16 ]

Andreas Dilger (andreas.dilger@intel.com) merged in patch http://review.whamcloud.com/18569/
Subject: LU-4017 e2fsprogs: always read full inode structure
Project: tools/e2fsprogs
Branch: master-lustre
Current Patch Set:
Commit: bad5ce5ca531b65116edfc9c79abc65b1c6ab9ed

Comment by Gerrit Updater [ 09/Jun/16 ]

Andreas Dilger (andreas.dilger@intel.com) merged in patch http://review.whamcloud.com/15761/
Subject: LU-4017 e2fsprogs: clean up codes for adding new quota type
Project: tools/e2fsprogs
Branch: master-lustre
Current Patch Set:
Commit: f57f262a8c0020ec5f031567c090815798a29122

Comment by Gerrit Updater [ 09/Jun/16 ]

Andreas Dilger (andreas.dilger@intel.com) merged in patch http://review.whamcloud.com/15763/
Subject: LU-4017 e2fsprogs: add project feature
Project: tools/e2fsprogs
Branch: master-lustre
Current Patch Set:
Commit: 2a6d0ebb428b79563a9723c4143c3bb14001d94a

Comment by Gerrit Updater [ 12/Jul/16 ]

Wang Shilong (wshilong@ddn.com) uploaded a new patch: http://review.whamcloud.com/21255
Subject: LU-4017 debugfs: add support for the project id field
Project: tools/e2fsprogs
Branch: master-lustre
Current Patch Set: 1
Commit: 9532ee37c5c0a10f5f73aea453f71487b6b1b923

Comment by Gerrit Updater [ 19/Jul/16 ]

Andreas Dilger (andreas.dilger@intel.com) merged in patch http://review.whamcloud.com/15762/
Subject: LU-4017 e2fsprogs: add project quota support
Project: tools/e2fsprogs
Branch: master-lustre
Current Patch Set:
Commit: 79a4709b30487d533eca12889704f6d7f62140d0

Comment by Gerrit Updater [ 19/Jul/16 ]

Andreas Dilger (andreas.dilger@intel.com) merged in patch http://review.whamcloud.com/15764/
Subject: LU-4017 e2fsprogs: add inherit flags for project quota
Project: tools/e2fsprogs
Branch: master-lustre
Current Patch Set:
Commit: 68c2da290cb52a05a54a52f73ef87afc00a8c934

Comment by Gerrit Updater [ 09/Aug/16 ]

Andreas Dilger (andreas.dilger@intel.com) merged in patch http://review.whamcloud.com/18895/
Subject: LU-4017 e2fsprogs: add [ch/ls]attr support for project quota
Project: tools/e2fsprogs
Branch: master-lustre
Current Patch Set:
Commit: b4dd0d87920d016d5d5a12d3990fc6560ec69e22

Comment by Gerrit Updater [ 09/Aug/16 ]

Andreas Dilger (andreas.dilger@intel.com) merged in patch http://review.whamcloud.com/21255/
Subject: LU-4017 debugfs: add support for the project id field
Project: tools/e2fsprogs
Branch: master-lustre
Current Patch Set:
Commit: 7c9b7ec3c8b22a85ce5ccb6b140faeceb50cc426

Comment by Gerrit Updater [ 25/Nov/16 ]

Wang Shilong (wshilong@ddn.com) uploaded a new patch: http://review.whamcloud.com/23951
Subject: LU-4017 quota: save pool ID to mds objects
Project: fs/lustre-release
Branch: master
Current Patch Set: 1
Commit: 1a0b0a49b947a51eec23f62b40c5b414ceb980c1

Comment by Gerrit Updater [ 25/Nov/16 ]

Wang Shilong (wshilong@ddn.com) uploaded a new patch: http://review.whamcloud.com/23953
Subject: LU-4017 quota: enforce project quota limits
Project: fs/lustre-release
Branch: master
Current Patch Set: 1
Commit: d52457cc9254a346e887145737e1647e7627b25f

Comment by Gerrit Updater [ 25/Nov/16 ]

Wang Shilong (wshilong@ddn.com) uploaded a new patch: http://review.whamcloud.com/23954
Subject: LU-4017 quota: enable project quota limits
Project: fs/lustre-release
Branch: master
Current Patch Set: 1
Commit: df85bb52952ed4d3c21da9ef107072628ddc755b

Comment by Gerrit Updater [ 25/Nov/16 ]

Wang Shilong (wshilong@ddn.com) uploaded a new patch: http://review.whamcloud.com/23952
Subject: LU-4017 quota: save pool ID to OST objects
Project: fs/lustre-release
Branch: master
Current Patch Set: 1
Commit: daf67bad764a6e77190906017e973314b5f3dc3c

Comment by Gerrit Updater [ 25/Nov/16 ]

Wang Shilong (wshilong@ddn.com) uploaded a new patch: http://review.whamcloud.com/23945
Subject: LU-4017 ldiskfs: export __ext4_ioctl_setproject for lustre
Project: fs/lustre-release
Branch: master
Current Patch Set: 1
Commit: 4c140deb360caee730f0eb1712ff2e4df4193072

Comment by Gerrit Updater [ 25/Nov/16 ]

Wang Shilong (wshilong@ddn.com) uploaded a new patch: http://review.whamcloud.com/23946
Subject: LU-4017 quota: add project quota support to system header
Project: fs/lustre-release
Branch: master
Current Patch Set: 1
Commit: c579aaa933efd4f6424b0cb2a6f1d76640e8fb73

Comment by Gerrit Updater [ 25/Nov/16 ]

Wang Shilong (wshilong@ddn.com) uploaded a new patch: http://review.whamcloud.com/23947
Subject: LU-4017 quota: add project quota support for Lustre
Project: fs/lustre-release
Branch: master
Current Patch Set: 1
Commit: 659583670aafe694af34589dc6ebf53093780809

Comment by Gerrit Updater [ 25/Nov/16 ]

Wang Shilong (wshilong@ddn.com) uploaded a new patch: http://review.whamcloud.com/23948
Subject: LU-4017 quota: generate pool ID for OST pools
Project: fs/lustre-release
Branch: master
Current Patch Set: 1
Commit: 712c9d20f1fa91563d24904b1a85872a7f879cc8

Comment by Gerrit Updater [ 25/Nov/16 ]

Wang Shilong (wshilong@ddn.com) uploaded a new patch: http://review.whamcloud.com/23949
Subject: LU-4017 quota: add pool ID to pool_new operations
Project: fs/lustre-release
Branch: master
Current Patch Set: 1
Commit: 50874468389deaa025400a1096ab2fa7dcda1982

Comment by Gerrit Updater [ 25/Nov/16 ]

Wang Shilong (wshilong@ddn.com) uploaded a new patch: http://review.whamcloud.com/23950
Subject: LU-4017 quota: add pool ID support to lov/lod
Project: fs/lustre-release
Branch: master
Current Patch Set: 1
Commit: 1edf3c4d8a8509d583f5f6b2316ce99a552701c4

Comment by Gerrit Updater [ 25/Nov/16 ]

Wang Shilong (wshilong@ddn.com) uploaded a new patch: http://review.whamcloud.com/23955
Subject: LU-4017 quota: add project quota support to utils
Project: fs/lustre-release
Branch: master
Current Patch Set: 1
Commit: c86567ceec9080b34a7d56a1cf929f0d7848451b

Comment by Gerrit Updater [ 25/Nov/16 ]

Wang Shilong (wshilong@ddn.com) uploaded a new patch: http://review.whamcloud.com/23956
Subject: LU-4017 quota: skip inode number check for project inode
Project: fs/lustre-release
Branch: master
Current Patch Set: 1
Commit: 4209a077b5615f536bddf2928e636444c84dae3f

Comment by Gerrit Updater [ 06/Mar/17 ]

Wang Shilong (wshilong@ddn.com) uploaded a new patch: https://review.whamcloud.com/25812
Subject: LU-4017 quota: Add setting/getting project support
Project: fs/lustre-release
Branch: master
Current Patch Set: 1
Commit: 3b7f895335ed5f72a8158732eee068fb8f4d92a1

Comment by Gerrit Updater [ 27/Mar/17 ]

Wang Shilong (wshilong@ddn.com) uploaded a new patch: https://review.whamcloud.com/26202
Subject: LU-4017 quota: add setting/getting project id function
Project: fs/lustre-release
Branch: master
Current Patch Set: 1
Commit: 75099cb3091f9ca857a391483421b918969a82e9

Comment by Gerrit Updater [ 06/Apr/17 ]

Wang Shilong (wshilong@ddn.com) uploaded a new patch: https://review.whamcloud.com/26411
Subject: LU-4017 quota: extend sanity-quota to test project quota
Project: fs/lustre-release
Branch: master
Current Patch Set: 1
Commit: e2c522ebed4f4a1347c45f8838222d773a12bfea

Comment by Gerrit Updater [ 09/Apr/17 ]

Wang Shilong (wshilong@ddn.com) uploaded a new patch: https://review.whamcloud.com/26464
Subject: LU-4017 quota: add project id support to lfs find
Project: fs/lustre-release
Branch: master
Current Patch Set: 1
Commit: 01cac6f88ad0a7130ec662ea8dd7b9de99ffa269

Comment by Gerrit Updater [ 09/Apr/17 ]

Wang Shilong (wshilong@ddn.com) uploaded a new patch: https://review.whamcloud.com/26463
Subject: LU-4017 quota: add project inherit attributes
Project: fs/lustre-release
Branch: master
Current Patch Set: 1
Commit: 019ef13a015d5b072a9d831879b93ebc0aa9f278

Comment by Gerrit Updater [ 13/Apr/17 ]

Wang Shilong (wshilong@ddn.com) uploaded a new patch: https://review.whamcloud.com/26577
Subject: LU-4017 quota: cleanup to improve quota codes
Project: fs/lustre-release
Branch: master
Current Patch Set: 1
Commit: b0de28a9c2bac140e7f55fe076abc489b85ac638

Comment by Gerrit Updater [ 13/Apr/17 ]

Oleg Drokin (oleg.drokin@intel.com) merged in patch https://review.whamcloud.com/17770/
Subject: LU-4017 ldiskfs: add project quota support
Project: fs/lustre-release
Branch: master
Current Patch Set:
Commit: 91fbc94f3eabe9a3587265d7f0b60f1b1e87b575

Comment by Gerrit Updater [ 13/Apr/17 ]

Oleg Drokin (oleg.drokin@intel.com) merged in patch https://review.whamcloud.com/18894/
Subject: LU-4017 quota: cleanup codes of quota for new type
Project: fs/lustre-release
Branch: master
Current Patch Set:
Commit: a00a07567d4909251e58900a9e5ea27157960fd4

Comment by Gerrit Updater [ 13/Apr/17 ]

Oleg Drokin (oleg.drokin@intel.com) merged in patch https://review.whamcloud.com/23946/
Subject: LU-4017 quota: add project quota support to system header
Project: fs/lustre-release
Branch: master
Current Patch Set:
Commit: 5839fd5d6e5d24d16af26a7bc36eef773289dbbe

Comment by Gerrit Updater [ 13/Apr/17 ]

Oleg Drokin (oleg.drokin@intel.com) merged in patch https://review.whamcloud.com/23947/
Subject: LU-4017 quota: add project quota support for Lustre
Project: fs/lustre-release
Branch: master
Current Patch Set:
Commit: 792be6ca54810b04bdc4fd4f61e4b05fc701e587

Comment by Gerrit Updater [ 13/Apr/17 ]

Oleg Drokin (oleg.drokin@intel.com) merged in patch https://review.whamcloud.com/23945/
Subject: LU-4017 ldiskfs: export __ext4_ioctl_setproject for lustre
Project: fs/lustre-release
Branch: master
Current Patch Set:
Commit: 5a0094765d8d5fbfe361ef5518ad44c3fd336f07

Comment by Gerrit Updater [ 13/Apr/17 ]

Oleg Drokin (oleg.drokin@intel.com) merged in patch https://review.whamcloud.com/25812/
Subject: LU-4017 quota: add project id support
Project: fs/lustre-release
Branch: master
Current Patch Set:
Commit: 97fbb61dbe261a389897779ae376cfc3808442cf

Comment by Gerrit Updater [ 13/Apr/17 ]

Wang Shilong (wshilong@ddn.com) uploaded a new patch: https://review.whamcloud.com/26581
Subject: LU-4017 tune2fs: fix BUGs of tuning project quota
Project: tools/e2fsprogs
Branch: master-lustre
Current Patch Set: 1
Commit: 0ee8257e2b538ce9d74e20511ba0a5c71d3cf542

Comment by James A Simmons [ 13/Apr/17 ]

I just noticed that with this work it's no longer possible to have patchless kernels on the lustre server side.

Comment by Bob Glossman (Inactive) [ 13/Apr/17 ]

Does support for this new feature only exist on el7 servers with ldiskfs?

Comment by James A Simmons [ 13/Apr/17 ]

Yes, it appears this is not supported on any SLES systems either. Project quota appears to only be supported on RHEL systems with ldiskfs.

Comment by Peter Jones [ 13/Apr/17 ]

Yes. That is true at this point. We are scoping out what is required to extend this work further.

Comment by Gerrit Updater [ 13/Apr/17 ]

Oleg Drokin (oleg.drokin@intel.com) merged in patch https://review.whamcloud.com/26202/
Subject: LU-4017 quota: add setting/getting project id function
Project: fs/lustre-release
Branch: master
Current Patch Set:
Commit: 39f63cf54c624d89439b5b473035c1afe35e10fa

Comment by Gerrit Updater [ 13/Apr/17 ]

Oleg Drokin (oleg.drokin@intel.com) merged in patch https://review.whamcloud.com/23954/
Subject: LU-4017 quota: enable project quota limits
Project: fs/lustre-release
Branch: master
Current Patch Set:
Commit: 1398ed438568d2a07f09a59da8b7b23ff04ed4ea

Comment by Gerrit Updater [ 13/Apr/17 ]

Oleg Drokin (oleg.drokin@intel.com) merged in patch https://review.whamcloud.com/23955/
Subject: LU-4017 quota: add project quota support to lfs quota/setquota
Project: fs/lustre-release
Branch: master
Current Patch Set:
Commit: 04fb37c5d93162e5e268baf78f796a0140de403f

Comment by Shuichi Ihara (Inactive) [ 14/Apr/17 ]

Yes. That is true at this point. We are scoping out what is required to extend this work further.

Exactly. Since all the ldiskfs (ext4) and kernel patches were merged into the upstream linux kernel (linux-4.5), once the lustre server supports the 4.5 kernel or above, we won't need a patched server for project quota anymore. In the meantime, the ldiskfs patches are still backportable to other kernels, e.g. SLES12SP2, as future work.
Also, once zfs supports project id, project quota can be extended to support osd-zfs.

Comment by James A Simmons [ 14/Apr/17 ]

Since RHEL tends to use 5-year-old stacks, we are looking at 2021 before we no longer need patched kernels.

Comment by Gerrit Updater [ 14/Apr/17 ]

Wang Shilong (wshilong@ddn.com) uploaded a new patch: https://review.whamcloud.com/26621
Subject: LU-4017 tune2fs: fix BUGs of tuning project quota
Project: tools/e2fsprogs
Branch: master-lustre
Current Patch Set: 1
Commit: e51dd021fa64a916e03df8dadf7def598e1f2529

Comment by Gerrit Updater [ 25/Apr/17 ]

Andreas Dilger (andreas.dilger@intel.com) merged in patch https://review.whamcloud.com/26581/
Subject: LU-4017 tune2fs: fix BUGs of tuning project quota
Project: tools/e2fsprogs
Branch: master-lustre
Current Patch Set:
Commit: a21925594cd18569f6618c02ea7b28c4afab477b

Comment by Gerrit Updater [ 02/May/17 ]

Oleg Drokin (oleg.drokin@intel.com) merged in patch https://review.whamcloud.com/26463/
Subject: LU-4017 quota: add project inherit attributes
Project: fs/lustre-release
Branch: master
Current Patch Set:
Commit: 2e92c57d71384d50f06baf5b6591d2809c8288b2

Comment by Gerrit Updater [ 02/May/17 ]

Oleg Drokin (oleg.drokin@intel.com) merged in patch https://review.whamcloud.com/26464/
Subject: LU-4017 quota: add project id support to lfs find
Project: fs/lustre-release
Branch: master
Current Patch Set:
Commit: 3dad616e09fc2a89174900a4d4dbb60308650418

Comment by Gerrit Updater [ 05/May/17 ]

Oleg Drokin (oleg.drokin@intel.com) merged in patch https://review.whamcloud.com/26411/
Subject: LU-4017 quota: extend to test project quota
Project: fs/lustre-release
Branch: master
Current Patch Set:
Commit: 0eff45318562be759517bc7702365dfd450e518d

Comment by Gerrit Updater [ 09/May/17 ]

Oleg Drokin (oleg.drokin@intel.com) merged in patch https://review.whamcloud.com/26577/
Subject: LU-4017 quota: cleanup to improve quota codes
Project: fs/lustre-release
Branch: master
Current Patch Set:
Commit: 4ce3219eb8e6a07c5c37e4b425b29195488005c3

Comment by Peter Jones [ 09/May/17 ]

Landed for 2.10

Comment by nasf (Inactive) [ 25/Jul/17 ]

How do we handle the project ID attribute during MDT file-level backup/restore? Originally, we could do that via tar/getfattr for backup, then untar/setfattr for restore, but it seems the project ID cannot be handled like that, right? Or did I miss something?

Comment by Li Xi (Inactive) [ 25/Jul/17 ]

Hi Fanyong,

I think backup/restore using tar will simply discard the project ID. The project ID is accessed via an ioctl named EXT4_IOC_FSGETXATTR (FS_IOC_FSGETXATTR), and in order to support project ID backup, an ioctl of that type needs to be called. As far as we've tested, that is not supported by tar; we need to push a patch to add that support.

 

BTW, I think there are better ways to backup MDT files than tar.
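
As a rough illustration of the point above, a backup tool that wanted to preserve the project ID would need to do something like the following C sketch: read the ID from the source file with FS_IOC_FSGETXATTR (the generic spelling of EXT4_IOC_FSGETXATTR) and re-apply it to the restored file with FS_IOC_FSSETXATTR. The file paths are placeholders only.

/* Sketch: copy the project ID from a backed-up file to its restored copy. */
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>

/* returns 0 on success, -1 on error; paths are placeholders */
static int copy_projid(const char *src, const char *dst)
{
    struct fsxattr sx, dx;
    int sfd = open(src, O_RDONLY), dfd = open(dst, O_RDONLY), rc = -1;

    if (sfd < 0 || dfd < 0)
        goto out;
    if (ioctl(sfd, FS_IOC_FSGETXATTR, &sx) < 0 ||
        ioctl(dfd, FS_IOC_FSGETXATTR, &dx) < 0)
        goto out;
    dx.fsx_projid = sx.fsx_projid;          /* carry the project ID over */
    rc = ioctl(dfd, FS_IOC_FSSETXATTR, &dx);
out:
    if (sfd >= 0) close(sfd);
    if (dfd >= 0) close(dfd);
    return rc;
}

int main(void)
{
    return copy_projid("/backup/file", "/mnt/mdt/file") ? 1 : 0;
}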

Comment by nasf (Inactive) [ 25/Jul/17 ]

I am thinking about how to backup/restore the system across different backends, such as from ldiskfs to ZFS or the reverse. I do not know whether there is a better solution for that.

Comment by Andreas Dilger [ 26/Jul/17 ]

One option would be to expose the projid value as a virtual xattr (e.g. "trusted.projid" or similar) from ext4/ldiskfs, so that it can be backed up and restored via getfattr/setfattr. This is also a problem for regular ext4 filesystems, so I would ask on the linux-ext4 mailing list to see what the agreement is there for implementing this.
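
For example, if such a virtual xattr existed, reading it would be a plain getxattr() call as in the sketch below; note that the "trusted.projid" name and its value format are only the suggestion from the comment above, not a confirmed ldiskfs/ext4 interface.

/* Sketch: read a hypothetical "trusted.projid" virtual xattr, which could
 * then be backed up and restored like any other extended attribute. */
#include <stdio.h>
#include <sys/xattr.h>

int main(void)
{
    char buf[32];
    ssize_t n = getxattr("/mnt/mdt/somefile", "trusted.projid",
                         buf, sizeof(buf) - 1);   /* path and name are examples */

    if (n < 0) {
        perror("getxattr");
        return 1;
    }
    buf[n] = '\0';
    printf("project id: %s\n", buf);              /* value format is hypothetical */
    return 0;
}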

Comment by Nathan Rutman [ 24/Oct/17 ]

Description of this bug and early design docs talk about "pool quotas", as in the ability to administer quotas based on Lustre OST pools. The final landed feature seems to be about project quotas, not pool quotas - am I correct in stating that the as-landed feature cannot support pool quotas? I.e. I can't place a special quota on a set of SSD OSTs?

Comment by Wang Shilong (Inactive) [ 25/Oct/17 ]

Hi Nathan Rutman,

Your understanding is right; the as-landed feature is project quota, not 'pool quota'.
