[LU-4017] Add project quota support feature Created: 27/Sep/13 Updated: 21/Nov/20 Resolved: 09/May/17 |
|
| Status: | Resolved |
| Project: | Lustre |
| Component/s: | None |
| Affects Version/s: | None |
| Fix Version/s: | Lustre 2.10.0 |
| Type: | New Feature | Priority: | Minor |
| Reporter: | Li Xi (Inactive) | Assignee: | Niu Yawei (Inactive) |
| Resolution: | Fixed | Votes: | 0 |
| Labels: | patch |
| Attachments: |
|
| Issue Links: |
|
| Sub-Tasks: |
|
| Rank (Obsolete): | 10783 |
| Description |
|
The OST (or MDT) pool feature enables users to group OSTs together to make object placement more flexible, which is a very useful mechanism for system management. However, quota support for pools is not yet complete, which limits its use. Fortunately, the current quota framework is powerful and flexible, which makes it possible to add such an extension. We believe that pool support for quota will be helpful for a lot of use cases, so we are trying to complete it. With help from the community, we've made some progress. The full patch will be a big one involving quite a lot of Lustre components. Any advice or feedback about the implementation would be very helpful. |
| Comments |
| Comment by Li Xi (Inactive) [ 27/Sep/13 ] |
|
Here is where the patch is tracked. |
| Comment by Peter Jones [ 27/Sep/13 ] |
|
Thanks Li Xi. It may be a little while until we assess this feature due to the present focus on the 2.5.0 release, but I am sure that this will be warmly received when our attention switches back to features again as the 2.6 release cycle commences. |
| Comment by Li Xi (Inactive) [ 27/Sep/13 ] |
|
Thanks Peter! No problem. I also need more time to finish and clean up the patch, so that it will be easier for us to review the code. |
| Comment by Li Xi (Inactive) [ 23/Oct/13 ] |
|
These are the presentation slides, which roughly describe pool-based quota. They might be useful since the detailed design document is still in preparation. |
| Comment by Li Xi (Inactive) [ 23/Oct/13 ] |
|
Here are the presentation slides. |
| Comment by Andreas Dilger [ 25/Nov/13 ] |
|
Any update on the design document? We're hoping that this code will be ready in time for the 2.6 feature freeze. |
| Comment by Li Xi (Inactive) [ 26/Nov/13 ] |
|
Hi Andreas, I've just posted the design document. Sorry for the delay. Please check it. |
| Comment by Niu Yawei (Inactive) [ 28/Nov/13 ] |
|
It looks to me that most of the complications come from "An OST can be a member of multiple pools". I'm not sure if this is important (is there any use case that explains why an OST needs to be shared by multiple pools?). If there isn't any customer using this feature, we could probably change the rule to "An OST can only be a member of a single pool"? I believe that would make things much easier. |
| Comment by Shuichi Ihara (Inactive) [ 28/Nov/13 ] |
|
Yes, we understand that would be easier. The other point is that a single pool consisting of a single OST doesn't make sense from a performance perspective either. That's why we think an OST should be able to be a member of multiple OST pools, and we should be able to set quota on all of the OST pools. |
| Comment by Niu Yawei (Inactive) [ 28/Nov/13 ] |
You mean use OST pools to implement directory quota? I don't think that's a good example. From my perspective, pool quota is different from directory quota, and we may implement real directory quota in the future. Anyway, I just don't quite see the point of sharing the same OST between OST pools; maybe I missed some important use cases.
A single pool can of course have multiple OSTs. What I'm not sure about is: does "sharing an OST between multiple pools" make sense? |
| Comment by Shuichi Ihara (Inactive) [ 28/Nov/13 ] |
Even today, since lustre-1.8, we can make multiple OST pools using the same OSTs, and these OST pools are assigned to specific directories. If we can have a quota function for these OST pools, we can eventually have a directory quota feature, can't we? |
| Comment by Shuichi Ihara (Inactive) [ 28/Nov/13 ] |
|
"Directory quota" might be some confusions, but anyway, eventually, OST pool are assigned to specific directories. OSTs can belong to multiple pools for multiple directories, today. |
| Comment by Li Xi (Inactive) [ 28/Nov/13 ] |
|
Since Lustre already provides the ability to share the same OSTs among multiple pools, it seems unnecessary to add an extra restriction now which would limit how we can use OST pools as well as pool-based quota. As Ihara said, the upper limit on the number of pools would then be the number of OSTs, which might be far from enough on small systems. Personally, I'd rather use a flexible feature in a limited way than use a very limited feature. I believe system administrators will figure out the suitable usages of OST pools for their use cases, and it is easy to separate OSTs into non-intersecting pools if one wishes.

Based on my personal experience, I don't think the flexibility of OST pools significantly increases the difficulty of implementing pool-based quota. The main difficulty, I think, is maintaining compatibility with old systems, which is why that is discussed at length in the design document.

And yes, 'directory quota' is confusing when placed next to OST pools. Currently the patch does not support directory quota, i.e. we cannot limit the total disk usage of directories with the current patch. However, I believe it doesn't need much effort to add it. I'd like to add space accounting for pools alongside the user/group-based accounting to enable it as soon as I get some spare time. |
| Comment by Andreas Dilger [ 28/Nov/13 ] |
|
Thank you for the good design document. Allowing pools to share the same OSTs is something that I would prefer to keep in the implementation.

One thing that isn't quite explained is the detail of how the pool quota is identified internally. The quota reimplementation in 2.4 allowed a full 64-bit FID to identify the quota, so that e.g. the parent directory FID could be used to identify a directory quota. If the client is only passing a 16-bit pool identifier to the OSTs, the network protocol will again need to be changed to support directory quotas, and I'd prefer to avoid that. I'd like to see some detail about how this would integrate with directory/project quotas if they were available. That doesn't mean you need to implement that feature, but I'd like to consider how one could have a directory/project quota on a tree and still be able to specify a pool on which to allocate the files. If two quotas apply to a file, which one takes precedence? Is it even possible to have two quotas on a file?

I also think a 16-bit identifier may be too small, especially if this also starts being used for project/directory quotas. That would be fixed by using a full 128-bit FID for the pool identifier, but there may not be space in the RPC for another FID. Is there room for at least a 32-bit identifier? That would probably be large enough for most uses.

For the "default" pool (ID = 0), is this just the regular user/group quotas? If there is no enforcement of quota on the default pool, then it would be easy for users to create new files/directories with no pool and bypass the pool quotas entirely. It should be possible to specify a filesystem-wide default pool for any objects that do not specify a pool, so that users cannot bypass pool quotas if enabled.

It doesn't make sense for there to be a separate pool xattr, since there is already space in the lov_mds_md_v3 to store the pool name. Some thought should be given to how this will integrate into LFSCK so that it can fix up the pool ID on existing files.

For the upgrade process, it makes sense at minimum to document how to list the current pool configuration and then use that to recreate the pools again. Better would be a script that saves the current pool config to a file and can then use the file to recreate the pool config afterward. An even better option would be a separate config record containing the name-to-ID mapping for the pool that could be added at the end of the config log for existing configs, so that the config does not need to be rewritten at all, just appended to. For new pools this record would be written when the pool is first created. |
| Comment by Li Xi (Inactive) [ 29/Nov/13 ] |
|
Hi Andreas, Thank you very much for your advice. It is really helpful! I'd like to explain more about the idea of directory/project quota using pool-based quota.

Currently, an object on an OST can only consume more disk space if it acquires enough quota for both its user and its group. In order to do so, all objects get and save their unique UID/GID on disk. With the current patch, from the view of quota, we can consider OST pools as if they were different file systems: different quotas for users/groups can be set on different pools, and different space usages can be obtained for different pools. In general, however, the current patch does not change the fact that all accounting and limits are based on users and groups.

Since quotas for projects and directories are eagerly required, we'd like to be able to set a space limit on an entire pool. That means an object can only consume more disk space if 1) its user does not exceed the disk usage limit in the pool, 2) its group does not exceed the disk usage limit in the pool, and 3) its pool does not exceed an overall usage limit. We can then continue considering pools as if they were different file systems, and furthermore we gain the ability to cap their total disk space, which makes them look even more like separate virtual file systems. In order to do so, the total disk usage of each pool should be accounted just like the disk usage of each user/group is accounted, and the overall usage limit of a pool should be enforced just like the limit of each user/group is enforced. Obviously the pool ID of an OST object should be saved just like its UID/GID is saved. Luckily, most of that work has already been finished in the current patch: the pool ID is already saved for objects on both MDTs and OSTs, and I don't think it needs much work to complete quotas for entire pools. With the ability to limit the total space usage of pools, we can set space usage limits on directories/projects easily, since we have a really flexible pool feature. And I believe many more people will be interested in trying OST pools with this new feature.

The current 16-bit pool ID means that we can define at most 65536 pools. I think that will be sufficient for a very long time, even if pools are used to set quotas on directories/projects. However, more bits are of course always better if possible.

Personally, I don't like saving the pool ID into an extra extended attribute either, since we already have XATTR_NAME_LOV, and I think it is possible to extract the pool ID from XATTR_NAME_LOV on the MDT. However, objects on OSTs do not have that extended attribute (correct me if I am wrong), which means we have to add a new extended attribute anyway. For simplicity of the code, I just added it to all objects. Do you have any better idea? Thanks! |
| Comment by Niu Yawei (Inactive) [ 29/Nov/13 ] |
Andreas, there are only 16 bits for the pool ID in the quota FID; see lquota_generate_fid().
LiXi, it looks like the design doesn't mention how the usage for a user/group on a specific pool is tracked, so I don't quite see how we can know if a user exceeds the pool quota limit. And could you explain "3) its pool does not exceed an overall usage limit" in more detail? |
| Comment by Li Xi (Inactive) [ 29/Nov/13 ] |
|
Hi Yawei, The main function for enforcing space limits is osd_declare_inode_qid(). This function invokes osd_declare_qid() twice: first to check that the quota of the user is not exceeded, and second to check that the quota of the group is not exceeded. To add a usage limit for the entire pool, we can just add a third call to osd_declare_qid(), which would check that the space limit of the entire pool is not exceeded. We need to add POOLQUOTA alongside the existing USRQUOTA and GRPQUOTA, and we need to create a quota file for each pool too. But I don't think the work will take too much time. What is your opinion?

I think the attempt to set limits on directories/projects using pool-based quota is not a bad idea. There would be an efficiency problem if we tried to implement 'true' directory/subtree quota, because when files/directories are moved from one directory to another, all of their disk usage would have to be updated immediately, which is not friendly for performance. The current implementation of pool-based quota does not have that problem. |
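[Editorial sketch] A minimal, standalone illustration of the three-check idea described above (user, then group, then the proposed pool-wide check). The names here (sketch_declare_qid, SK_POOLQUOTA, ...) are hypothetical stand-ins, not the real osd-ldiskfs osd_declare_inode_qid()/osd_declare_qid() signatures, and POOLQUOTA is the proposed new quota type, not an existing kernel constant:

{code:c}
/*
 * Sketch only, not the real osd-ldiskfs code: the actual
 * osd_declare_inode_qid()/osd_declare_qid() take Lustre-specific
 * arguments (lu_env, osd_thandle, ...).
 */
#include <stdio.h>

enum sketch_qtype { SK_USRQUOTA, SK_GRPQUOTA, SK_POOLQUOTA };

/* stand-in for osd_declare_qid(): reserve space against one (type, id) */
static int sketch_declare_qid(enum sketch_qtype type, unsigned int id,
			      long long space)
{
	/* real code would look up the quota slave entry for (type, id)
	 * and reserve 'space' bytes, returning -EDQUOT on overflow */
	printf("reserve %lld bytes for type=%d id=%u\n", space, type, id);
	return 0;
}

/* stand-in for osd_declare_inode_qid(): all three checks must pass */
static int sketch_declare_inode_qid(unsigned int uid, unsigned int gid,
				    unsigned int poolid, long long space)
{
	int rc;

	rc = sketch_declare_qid(SK_USRQUOTA, uid, space);	/* 1) user  */
	if (rc)
		return rc;
	rc = sketch_declare_qid(SK_GRPQUOTA, gid, space);	/* 2) group */
	if (rc)
		return rc;
	/* 3) proposed third check: the pool as a whole */
	return sketch_declare_qid(SK_POOLQUOTA, poolid, space);
}

int main(void)
{
	return sketch_declare_inode_qid(1000, 1000, 7, 1 << 20);
}
{code}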
| Comment by Niu Yawei (Inactive) [ 29/Nov/13 ] |
I'm a little confused. The current design/implementation of pool quota is quota per user per pool, not quota per pool, right? My question was where the space usage of a user per pool is stored. The per-pool quota (POOLQUOTA) you mentioned above is what explains "3) its pool does not exceed an overall usage limit"? It's in your plan but not in the current design, am I understanding right? |
| Comment by Li Xi (Inactive) [ 29/Nov/13 ] |
|
Yeah, it is a little bit confusing here. You are right: the current patch only supports quotas per user/group per pool, not quota for an entire pool. The space usages of users per pool are stored as quota files on the OSDs, one quota file per pool for each user/group. Actually, I didn't change much code in this part, because the existing code is really flexible and extensible. And it is correct that I have not implemented POOLQUOTA or quotas for entire pools; that is still being designed. |
| Comment by Andreas Dilger [ 29/Nov/13 ] |
|
The main problem with storing a separate xattr for the pool name is that this will cause the xattr to overflow the space in the OST inode and allocate a separate block for the pool xattr. That will really hurt performance, especially if this xattr needs to be read from disk for each new file access.

As for pool vs. project quota, I'd like to plan for the future once there is project quota, so that the protocol does not need to change again. I think for project quota it makes sense that this can be set when the file is first created (maybe inherited from the parent directory), but we do not need to track this when files are moved to another directory. In that regard, the current pool proposal is fine for project quota as well, if the same OSTs can be part of multiple projects (pools), since this allows users to use all of the OSTs (if they are in the pool) without loss of bandwidth. I think if there can be many projects then 65536 is too small a limit. I haven't looked at the protocol yet to see if there is room for a 32-bit value or not.

One thing that would be needed is some way to change the project of a file after it is created. I don't know if that should be done by "lfs migrate" to change the OSTs, or just setxattr to change the pool name in the LOV xattr? |
| Comment by Niu Yawei (Inactive) [ 02/Dec/13 ] |
Hmm, the quota accounting files are created by the underlying backend filesystem, and the space usage is tracked/updated by the backend filesystem as well. Where is the accounting info for each pool tracked/updated? In the backend filesystem (ldiskfs/zfs) or in the OSD?
I agree, 65536 looks a little bit small as a limit on projects. I was thinking that a directory FID could be better than a pool ID for this purpose. My personal thoughts are:
|
| Comment by Li Xi (Inactive) [ 02/Dec/13 ] |
|
I don't know this in detail, so please correct me if I am wrong. I think the current disk usages and limits in Lustre are tracked and enforced in the QSD layer; the usages and limits of inodes/kbytes are maintained by the QSD itself without any help from ldiskfs. I think that is one of the reasons why the current quota framework is flexible and powerful, and I guess it is why 'lfs quotacheck' is no longer needed?

As Andreas mentioned, it is not necessary that the pool feature be limited to OSTs. We might want pools for both OSTs and MDTs in the future, as soon as DNE is widely used. In that regard, setting both space and inode limits on pools seems straightforward: once MDT support for pools is ready, it needs little (if any) work to enforce inode limits on pools. The current quota framework is really powerful, which makes it not so hard to implement quota support for pools.

Yawei, do you already have a good idea of how to implement directory quota? I would be very happy to discuss it. |
| Comment by Niu Yawei (Inactive) [ 02/Dec/13 ] |
Quota enforcement (limit) is in OSD, but quota accounting (usage) is in backend filesystem. The reason we no longer need quotacheck is that ldiskfs (and zfs) now always enable quota accounting by default.
That means if we want to set quota for a project, we have to create two pools (one MDT pool and one OST pool), then set inode and block limit separately?
I haven't thought about directory quota carefully, but I think the major work for directory quota is usage tracking for each directory, and if we can address the problem of usage tracking for each pool, then we might be able to address usage tracking for directories in the same way. |
| Comment by Andreas Dilger [ 02/Dec/13 ] |
|
I've always thought that MDTs might be in the same pool as OSTs, so there is no need for a separate pool. It would also be possible to have MDT-only pools if a pool relates only to the namespace (e.g. DNE striped directory selection from an MDT pool). The difference between an OST and an MDT has already started disappearing with the Data-on-MDT project.

I also think it will be easier to consider a "project" quota instead of a "directory" quota. The "project" quota would be inherited from the parent directory as one expects, but the main difference is that moving a file out of a project directory does not remove it from the project accounting. This avoids a lot of implementation complexity in tracking when files are removed from a project. My understanding is that this matches the semantics of the ext3/4 project quota implementation. It would make sense to check what the semantics are for XFS project quotas.

As for accounting of the pool quota - Niu is correct. While the Lustre quota code handles granting of quota to servers, it is the underlying OSD quota accounting that tracks all of the space usage. |
| Comment by Li Xi (Inactive) [ 02/Dec/13 ] |
|
Thank you, Andreas and Yawei, for correcting me! I will check the code in more detail. After a quick glimpse at the code, I think the project quota of XFS and the subtree quota of ext4 share the same idea: an internal attribute, the project ID (or subtree ID for ext4), is set on the inode to mark the file as a member of a project (or subtree). This ID is inherited from the parent directory and is kept unchanged when a file is moved out of its former directory, which looks like the pool attribute of Lustre. This makes me believe that the pool feature of Lustre is a good basis for implementing project-like quota. |
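[Editorial sketch] For reference, a small example of how such an inherited project ID is established on a directory today, assuming headers that define FS_IOC_FSGETXATTR/FS_IOC_FSSETXATTR, struct fsxattr and FS_XFLAG_PROJINHERIT (<linux/fs.h> from Linux 4.5+; older ext4/XFS used the EXT4_IOC_*/XFS_IOC_* names):

{code:c}
/* Sketch only: mark a directory with a project ID and the inherit flag so
 * that new files created inside it receive the same project ID. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/fs.h>

int main(int argc, char **argv)
{
	struct fsxattr fsx;
	int fd;

	if (argc != 3) {
		fprintf(stderr, "usage: %s <directory> <projid>\n", argv[0]);
		return 1;
	}

	fd = open(argv[1], O_RDONLY | O_DIRECTORY);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (ioctl(fd, FS_IOC_FSGETXATTR, &fsx) < 0) {
		perror("FS_IOC_FSGETXATTR");
		return 1;
	}

	/* set the project ID and ask for it to be inherited by children */
	fsx.fsx_projid = (__u32)strtoul(argv[2], NULL, 0);
	fsx.fsx_xflags |= FS_XFLAG_PROJINHERIT;
	if (ioctl(fd, FS_IOC_FSSETXATTR, &fsx) < 0) {
		perror("FS_IOC_FSSETXATTR");
		return 1;
	}
	close(fd);
	return 0;
}
{code}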
| Comment by Li Xi (Inactive) [ 02/Dec/13 ] |
|
BTW, as far as I know, GPFS only supports 10000 independent file sets at most. Maybe I missed something, but it seems to me that 65536 pools is pretty sufficient even for project quota. |
| Comment by Shuichi Ihara (Inactive) [ 04/Dec/13 ] |
|
Andreas, Niu |
| Comment by Niu Yawei (Inactive) [ 04/Dec/13 ] |
|
I think we may define three types of quota:

1. Pool quota. This kind of quota limits usage on real devices. We now have only two default quota pools, the MD pool and the DT pool: all MDTs belong to the MD pool and all OSTs belong to the DT pool, and these two default pools cover the whole filesystem (MD manages the whole-fs inode limit, DT manages the whole-fs block limit). If a user wants a quota more specific than the whole-fs quota, she can create a quota pool and set a limit on that specific pool. For instance, a Lustre filesystem may have several fast OSTs backed by SSDs; besides the whole-fs limit, to set a more restrictive limit on those fast OSTs, one may create a 'fast' pool containing them and set a smaller limit on it. Once the small-file-on-MDT feature is done, MDTs will consume block limit too (block quota is ignored on MDTs for now), and a user may want a smaller block limit on MDTs compared to OSTs; she could then create a 'small file' pool containing all MDTs and set a smaller block limit on it.

2. Directory quota / project quota / ... These kinds of quotas limit usage on an object (directory, project, ...). They are harder to implement than pool quota, because usage tracking for each directory/project has to be done (unlike pool quota, which can naturally leverage the backend filesystem's quota accounting). Such a quota usually limits the total size of a directory or project, so the quota identity should be the directory FID (or the parent directory FID of the project), not a UID/GID, and directory/project quota should only be valid for the default MD/DT pools, imho.

3. Per-user directory quota / per-user project quota / ... These kinds of quotas limit a specific user on an object (directory, project, ...). This is harder than the previous two, and it looks like the current quota framework can't support it well.

It looks to me that you want the 3rd one, but the design/implementation is mixed with the 1st one? Anyway, I think we may start with the easy one (the 1st, pool quota) if you want. |
| Comment by Gerrit Updater [ 28/Jul/15 ] |
|
Wang Shilong (wshilong@ddn.com) uploaded a new patch: http://review.whamcloud.com/15761 |
| Comment by Gerrit Updater [ 28/Jul/15 ] |
|
Wang Shilong (wshilong@ddn.com) uploaded a new patch: http://review.whamcloud.com/15762 |
| Comment by Gerrit Updater [ 28/Jul/15 ] |
|
Wang Shilong (wshilong@ddn.com) uploaded a new patch: http://review.whamcloud.com/15763 |
| Comment by Gerrit Updater [ 28/Jul/15 ] |
|
Wang Shilong (wshilong@ddn.com) uploaded a new patch: http://review.whamcloud.com/15764 |
| Comment by Andreas Dilger [ 28/Aug/15 ] |
|
I've pushed an updated version of e2fsprogs-1.42.13.wc3 to the master-lustre-test repository and this has been able to build successfully. If this passes testing I'll push it over to master-lustre and you can rebase these patches. It removes lfsck support, which has been causing build problems and is no longer supported. |
| Comment by Gerrit Updater [ 31/Dec/15 ] |
|
Wang Shilong (wshilong@ddn.com) uploaded a new patch: http://review.whamcloud.com/17770 |
| Comment by James A Simmons [ 31/Dec/15 ] |
|
Adding all that new quota code to ldiskfs will never be accepted by the upstream kernel people. I think it would be better to abstract it into the osd-ldiskfs layer instead. Also, what about ZFS? Perhaps in that case we can abstract it into its own layer or integrate it with the lquota code. |
| Comment by Shuichi Ihara (Inactive) [ 31/Dec/15 ] |
|
Note: the project quota inode field is already reserved in the ext4 superblock in the upstream kernel. |
| Comment by Andreas Dilger [ 31/Dec/15 ] |
|
James, the upstream ext4 support for project quota is very close to landing. Also, we are just in the process of adding inode quotas to ZFS, and it would be straightforward to add project quotas to ZFS in a similar manner. In the current osd-zfs code we had inode accounting separate from the core ZFS code and it didn't work very well, which is the reason we are moving it into the core ZFS code now. |
| Comment by James A Simmons [ 31/Dec/15 ] |
|
That is excellent news. It will be a lot more work to carry around these patches but at least down the road it will be standard code in ext4. |
| Comment by Shuichi Ihara (Inactive) [ 04/Jan/16 ] |
|
The patch set is big because we need to backport the kernel patches to all currently supported Linux distributions. Shilong ported these patches to RHEL6 and SLES11 as well. The project quota feature is not ready today, but it will be in 2.9 or 2.10. We really want to know which Linux distributions will be supported at that point. |
| Comment by Peter Jones [ 04/Jan/16 ] |
|
Ihara, I don't think that you need to worry about RHEL6/SLES11 - both of those are scheduled to be dropped in Lustre 2.9 - http://wiki.lustre.org/Release_2.9.0 . I would focus efforts on the latest RHEL 7.x, though supporting the latest SLES12 SPx should not be much incremental work and having this would be well-received in some quarters. Peter |
| Comment by Andreas Dilger [ 08/Jan/16 ] |
|
News from Ted Ts'o on the ext4 developer concall today was that the project quota feature was going to land in the upstream kernel during this merge window. He was wondering if Li Xi had updated xfstests to test the project quota feature with ext4 yet, so that he didn't have to re-do this work himself? If yes, please repost the xfstests patches to the linux-ext4 mailing list. |
| Comment by Li Xi (Inactive) [ 08/Jan/16 ] |
|
Hi Andreas, Shilong Wang has added some tests to xfstests. And he will push the patch soon. Thanks! |
| Comment by Gerrit Updater [ 23/Feb/16 ] |
|
Wang Shilong (wshilong@ddn.com) uploaded a new patch: http://review.whamcloud.com/18568 |
| Comment by Gerrit Updater [ 23/Feb/16 ] |
|
Wang Shilong (wshilong@ddn.com) uploaded a new patch: http://review.whamcloud.com/18569 |
| Comment by Gerrit Updater [ 24/Feb/16 ] |
|
Wang Shilong (wshilong@ddn.com) uploaded a new patch: http://review.whamcloud.com/18598 |
| Comment by Gerrit Updater [ 14/Mar/16 ] |
|
Wang Shilong (wshilong@ddn.com) uploaded a new patch: http://review.whamcloud.com/18894 |
| Comment by Gerrit Updater [ 14/Mar/16 ] |
|
Wang Shilong (wshilong@ddn.com) uploaded a new patch: http://review.whamcloud.com/18895 |
| Comment by Gerrit Updater [ 27/Apr/16 ] |
|
Wang Shilong (wshilong@ddn.com) uploaded a new patch: http://review.whamcloud.com/19809 |
| Comment by Gerrit Updater [ 27/Apr/16 ] |
|
Wang Shilong (wshilong@ddn.com) uploaded a new patch: http://review.whamcloud.com/19810 |
| Comment by Gerrit Updater [ 27/Apr/16 ] |
|
Wang Shilong (wshilong@ddn.com) uploaded a new patch: http://review.whamcloud.com/19811 |
| Comment by Gerrit Updater [ 28/Apr/16 ] |
|
Wang Shilong (wshilong@ddn.com) uploaded a new patch: http://review.whamcloud.com/19843 |
| Comment by Gerrit Updater [ 02/Jun/16 ] |
|
Oleg Drokin (oleg.drokin@intel.com) merged in patch http://review.whamcloud.com/19843/ |
| Comment by Gerrit Updater [ 09/Jun/16 ] |
|
Andreas Dilger (andreas.dilger@intel.com) merged in patch http://review.whamcloud.com/18569/ |
| Comment by Gerrit Updater [ 09/Jun/16 ] |
|
Andreas Dilger (andreas.dilger@intel.com) merged in patch http://review.whamcloud.com/15761/ |
| Comment by Gerrit Updater [ 09/Jun/16 ] |
|
Andreas Dilger (andreas.dilger@intel.com) merged in patch http://review.whamcloud.com/15763/ |
| Comment by Gerrit Updater [ 12/Jul/16 ] |
|
Wang Shilong (wshilong@ddn.com) uploaded a new patch: http://review.whamcloud.com/21255 |
| Comment by Gerrit Updater [ 19/Jul/16 ] |
|
Andreas Dilger (andreas.dilger@intel.com) merged in patch http://review.whamcloud.com/15762/ |
| Comment by Gerrit Updater [ 19/Jul/16 ] |
|
Andreas Dilger (andreas.dilger@intel.com) merged in patch http://review.whamcloud.com/15764/ |
| Comment by Gerrit Updater [ 09/Aug/16 ] |
|
Andreas Dilger (andreas.dilger@intel.com) merged in patch http://review.whamcloud.com/18895/ |
| Comment by Gerrit Updater [ 09/Aug/16 ] |
|
Andreas Dilger (andreas.dilger@intel.com) merged in patch http://review.whamcloud.com/21255/ |
| Comment by Gerrit Updater [ 25/Nov/16 ] |
|
Wang Shilong (wshilong@ddn.com) uploaded a new patch: http://review.whamcloud.com/23951 |
| Comment by Gerrit Updater [ 25/Nov/16 ] |
|
Wang Shilong (wshilong@ddn.com) uploaded a new patch: http://review.whamcloud.com/23953 |
| Comment by Gerrit Updater [ 25/Nov/16 ] |
|
Wang Shilong (wshilong@ddn.com) uploaded a new patch: http://review.whamcloud.com/23954 |
| Comment by Gerrit Updater [ 25/Nov/16 ] |
|
Wang Shilong (wshilong@ddn.com) uploaded a new patch: http://review.whamcloud.com/23952 |
| Comment by Gerrit Updater [ 25/Nov/16 ] |
|
Wang Shilong (wshilong@ddn.com) uploaded a new patch: http://review.whamcloud.com/23945 |
| Comment by Gerrit Updater [ 25/Nov/16 ] |
|
Wang Shilong (wshilong@ddn.com) uploaded a new patch: http://review.whamcloud.com/23946 |
| Comment by Gerrit Updater [ 25/Nov/16 ] |
|
Wang Shilong (wshilong@ddn.com) uploaded a new patch: http://review.whamcloud.com/23947 |
| Comment by Gerrit Updater [ 25/Nov/16 ] |
|
Wang Shilong (wshilong@ddn.com) uploaded a new patch: http://review.whamcloud.com/23948 |
| Comment by Gerrit Updater [ 25/Nov/16 ] |
|
Wang Shilong (wshilong@ddn.com) uploaded a new patch: http://review.whamcloud.com/23949 |
| Comment by Gerrit Updater [ 25/Nov/16 ] |
|
Wang Shilong (wshilong@ddn.com) uploaded a new patch: http://review.whamcloud.com/23950 |
| Comment by Gerrit Updater [ 25/Nov/16 ] |
|
Wang Shilong (wshilong@ddn.com) uploaded a new patch: http://review.whamcloud.com/23955 |
| Comment by Gerrit Updater [ 25/Nov/16 ] |
|
Wang Shilong (wshilong@ddn.com) uploaded a new patch: http://review.whamcloud.com/23956 |
| Comment by Gerrit Updater [ 06/Mar/17 ] |
|
Wang Shilong (wshilong@ddn.com) uploaded a new patch: https://review.whamcloud.com/25812 |
| Comment by Gerrit Updater [ 27/Mar/17 ] |
|
Wang Shilong (wshilong@ddn.com) uploaded a new patch: https://review.whamcloud.com/26202 |
| Comment by Gerrit Updater [ 06/Apr/17 ] |
|
Wang Shilong (wshilong@ddn.com) uploaded a new patch: https://review.whamcloud.com/26411 |
| Comment by Gerrit Updater [ 09/Apr/17 ] |
|
Wang Shilong (wshilong@ddn.com) uploaded a new patch: https://review.whamcloud.com/26464 |
| Comment by Gerrit Updater [ 09/Apr/17 ] |
|
Wang Shilong (wshilong@ddn.com) uploaded a new patch: https://review.whamcloud.com/26463 |
| Comment by Gerrit Updater [ 13/Apr/17 ] |
|
Wang Shilong (wshilong@ddn.com) uploaded a new patch: https://review.whamcloud.com/26577 |
| Comment by Gerrit Updater [ 13/Apr/17 ] |
|
Oleg Drokin (oleg.drokin@intel.com) merged in patch https://review.whamcloud.com/17770/ |
| Comment by Gerrit Updater [ 13/Apr/17 ] |
|
Oleg Drokin (oleg.drokin@intel.com) merged in patch https://review.whamcloud.com/18894/ |
| Comment by Gerrit Updater [ 13/Apr/17 ] |
|
Oleg Drokin (oleg.drokin@intel.com) merged in patch https://review.whamcloud.com/23946/ |
| Comment by Gerrit Updater [ 13/Apr/17 ] |
|
Oleg Drokin (oleg.drokin@intel.com) merged in patch https://review.whamcloud.com/23947/ |
| Comment by Gerrit Updater [ 13/Apr/17 ] |
|
Oleg Drokin (oleg.drokin@intel.com) merged in patch https://review.whamcloud.com/23945/ |
| Comment by Gerrit Updater [ 13/Apr/17 ] |
|
Oleg Drokin (oleg.drokin@intel.com) merged in patch https://review.whamcloud.com/25812/ |
| Comment by Gerrit Updater [ 13/Apr/17 ] |
|
Wang Shilong (wshilong@ddn.com) uploaded a new patch: https://review.whamcloud.com/26581 |
| Comment by James A Simmons [ 13/Apr/17 ] |
|
I just noticed that with this work it is no longer possible to have patchless kernels on the Lustre server side. |
| Comment by Bob Glossman (Inactive) [ 13/Apr/17 ] |
|
Does support for this new feature only exist on el7 servers with ldiskfs? |
| Comment by James A Simmons [ 13/Apr/17 ] |
|
Yes, it appears this is not supported on any SLES systems either. Project quota appears to be supported only on RHEL systems with ldiskfs. |
| Comment by Peter Jones [ 13/Apr/17 ] |
|
Yes. That is true at this point. We are scoping out what is required to extend this work further. |
| Comment by Gerrit Updater [ 13/Apr/17 ] |
|
Oleg Drokin (oleg.drokin@intel.com) merged in patch https://review.whamcloud.com/26202/ |
| Comment by Gerrit Updater [ 13/Apr/17 ] |
|
Oleg Drokin (oleg.drokin@intel.com) merged in patch https://review.whamcloud.com/23954/ |
| Comment by Gerrit Updater [ 13/Apr/17 ] |
|
Oleg Drokin (oleg.drokin@intel.com) merged in patch https://review.whamcloud.com/23955/ |
| Comment by Shuichi Ihara (Inactive) [ 14/Apr/17 ] |
Exactly. Since all the ldiskfs (ext4) and kernel patches were merged into the upstream Linux kernel (linux-4.5), once the Lustre server supports the 4.5 kernel or above, we won't need a patched server for project quota anymore. In the meantime, the ldiskfs patches are still backportable to other kernels, e.g. SLES12 SP2, in future work. |
| Comment by James A Simmons [ 14/Apr/17 ] |
|
Since RHEL tends to use 5-year-old stacks, we are looking at 2021 before we no longer need patched kernels. |
| Comment by Gerrit Updater [ 14/Apr/17 ] |
|
Wang Shilong (wshilong@ddn.com) uploaded a new patch: https://review.whamcloud.com/26621 |
| Comment by Gerrit Updater [ 25/Apr/17 ] |
|
Andreas Dilger (andreas.dilger@intel.com) merged in patch https://review.whamcloud.com/26581/ |
| Comment by Gerrit Updater [ 02/May/17 ] |
|
Oleg Drokin (oleg.drokin@intel.com) merged in patch https://review.whamcloud.com/26463/ |
| Comment by Gerrit Updater [ 02/May/17 ] |
|
Oleg Drokin (oleg.drokin@intel.com) merged in patch https://review.whamcloud.com/26464/ |
| Comment by Gerrit Updater [ 05/May/17 ] |
|
Oleg Drokin (oleg.drokin@intel.com) merged in patch https://review.whamcloud.com/26411/ |
| Comment by Gerrit Updater [ 09/May/17 ] |
|
Oleg Drokin (oleg.drokin@intel.com) merged in patch https://review.whamcloud.com/26577/ |
| Comment by Peter Jones [ 09/May/17 ] |
|
Landed for 2.10 |
| Comment by nasf (Inactive) [ 25/Jul/17 ] |
|
How should the project ID attribute be handled for MDT file-level backup/restore? Originally, we could do that via tar/getfattr for backup, then untar/setfattr for restore, but it seems the project ID cannot be handled like that, right? Or did I miss something? |
| Comment by Li Xi (Inactive) [ 25/Jul/17 ] |
|
Hi Fanyong, I think backup/restore using tar will simply discard the project ID. The project ID is accessed through an ioctl named EXT4_IOC_FSGETXATTR (FS_IOC_FSGETXATTR), and in order to support project ID backup, an ioctl of that type needs to be called. As far as we've tested, that is not supported by tar, so we need to push a patch to add that support.
BTW, I think there are better ways to back up MDT files than tar. |
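[Editorial sketch] A minimal illustration of how a backup/restore tool could read and reapply the project ID using the FS_IOC_FSGETXATTR/FS_IOC_FSSETXATTR ioctls mentioned above, assuming headers (<linux/fs.h> from Linux 4.5+) that define struct fsxattr and these ioctl names; error handling is reduced to the minimum:

{code:c}
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/fs.h>

/* read the project ID of 'path' (backup side) */
static int get_projid(const char *path, __u32 *projid)
{
	struct fsxattr fsx;
	int fd = open(path, O_RDONLY);
	int rc;

	if (fd < 0)
		return -1;
	rc = ioctl(fd, FS_IOC_FSGETXATTR, &fsx);
	if (rc == 0)
		*projid = fsx.fsx_projid;
	close(fd);
	return rc;
}

/* set the project ID of 'path' (restore side): read-modify-write fsxattr */
static int set_projid(const char *path, __u32 projid)
{
	struct fsxattr fsx;
	int fd = open(path, O_RDONLY);
	int rc;

	if (fd < 0)
		return -1;
	rc = ioctl(fd, FS_IOC_FSGETXATTR, &fsx);
	if (rc == 0) {
		fsx.fsx_projid = projid;
		rc = ioctl(fd, FS_IOC_FSSETXATTR, &fsx);
	}
	close(fd);
	return rc;
}

int main(int argc, char **argv)
{
	__u32 projid = 0;

	if (argc > 1 && get_projid(argv[1], &projid) == 0)
		printf("%s: project ID %u\n", argv[1], projid);
	if (argc > 2)
		return set_projid(argv[2], projid);
	return 0;
}
{code}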
| Comment by nasf (Inactive) [ 25/Jul/17 ] |
|
I am thinking about how to backup/restore the system between different backends, such as from ldiskfs to ZFS or the reverse. I do not know whether there is a better solution for that. |
| Comment by Andreas Dilger [ 26/Jul/17 ] |
|
One option would be to expose the projid value as a virtual xattr (e.g. "trusted.projid" or similar) from ext4/ldiskfs, so that it can be backed up and restored via getfattr/setfattr. This is also a problem for regular ext4 filesystems, so I would ask on the linux-ext4 mailing list to see what the agreement is there for implementing this. |
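[Editorial sketch] As a hypothetical illustration only: "trusted.projid" is just the example name suggested above, not an xattr that ldiskfs currently exposes. If such a virtual xattr existed, a backup tool could round-trip the project ID with plain getxattr()/setxattr(), the same calls getfattr/setfattr use:

{code:c}
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/xattr.h>

int main(int argc, char **argv)
{
	char buf[32];
	ssize_t len;

	if (argc < 2)
		return 1;

	/* backup: read the project ID as text from the hypothetical xattr */
	len = getxattr(argv[1], "trusted.projid", buf, sizeof(buf) - 1);
	if (len < 0) {
		perror("getxattr trusted.projid");
		return 1;
	}
	buf[len] = '\0';
	printf("%s: project ID %s\n", argv[1], buf);

	/* restore: write the same value back onto the restored copy */
	if (argc > 2 &&
	    setxattr(argv[2], "trusted.projid", buf, strlen(buf), 0) < 0) {
		perror("setxattr trusted.projid");
		return 1;
	}
	return 0;
}
{code}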
| Comment by Nathan Rutman [ 24/Oct/17 ] |
|
Description of this bug and early design docs talk about "pool quotas", as in the ability to administer quotas based on Lustre OST pools. The final landed feature seems to be about project quotas, not pool quotas - am I correct in stating that the as-landed feature cannot support pool quotas? I.e. I can't place a special quota on a set of SSD OSTs? |
| Comment by Wang Shilong (Inactive) [ 25/Oct/17 ] |
|
Hi Nathan Rutman, your understanding is right: the as-landed feature is project quota, not 'pool quota'. |