Details
Type: New Feature
Resolution: Unresolved
Priority: Minor
Description
For multi-tenancy configuration on a Lustre filesystem, it would be preferable to allow multiple Tenants, each using its own user authentication service owned by the Tenant, e.g. one LDAP directory per tenant.
In this scenario, Tenants may have many tens or hundreds of users and groups defined, and these will be highly dynamic, with users and groups being added and removed over time. These UIDs, GIDs, and PROJIDs may overlap across tenants, and we cannot dictate which UID/GID/PROJID ranges each tenant allocates from.
To support such an environment, the storage administrator could create an individual mapping for each and every ID used by a Tenant. However, this would be extremely onerous for the Storage Administrator to maintain and keep up to date, and the Storage Administrator may not have any access to, or knowledge of, the Tenant IDs (working more as a Cloud Service Provider, separate from the team managing the Tenant). As well, having hundreds of thousands or millions of different IDs to map at the server level would put strain on server resources (memory, CPU).
Instead, could we add a new configuration option to a nodemap to specify an ID mapping offset start and end, so that every Tenant-local ID would automatically be mapped to a new value "UID+OFFSET" on the storage, without any explicit mapping being set?
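To illustrate the idea, here is a minimal C sketch of such offset-based mapping, assuming a hypothetical per-nodemap offset range. The struct and function names (tenant_offset, map_client_to_fs, map_fs_to_client) are illustrative only, not the actual Lustre nodemap code:
{code:c}
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical per-nodemap offset range; names are illustrative only,
 * not the actual Lustre nodemap structures. */
struct tenant_offset {
	uint32_t start;		/* first canonical ID reserved for this tenant */
	uint32_t count;		/* number of canonical IDs reserved */
};

/* Tenant-local ID -> canonical (on-storage) ID: simply add the offset. */
static bool map_client_to_fs(const struct tenant_offset *off,
			     uint32_t client_id, uint32_t *fs_id)
{
	if (client_id >= off->count)
		return false;	/* outside the range granted to this tenant */
	*fs_id = off->start + client_id;
	return true;
}

/* Canonical ID -> tenant-local ID: subtract the offset if in range. */
static bool map_fs_to_client(const struct tenant_offset *off,
			     uint32_t fs_id, uint32_t *client_id)
{
	if (fs_id < off->start || fs_id - off->start >= off->count)
		return false;	/* canonical ID belongs to another tenant */
	*client_id = fs_id - off->start;
	return true;
}

int main(void)
{
	struct tenant_offset tenant1 = { .start = 1000000, .count = 1000000 };
	uint32_t fs_id, client_id;

	if (map_client_to_fs(&tenant1, 1000, &fs_id))
		printf("tenant UID 1000 -> canonical UID %u\n", fs_id);
	if (map_fs_to_client(&tenant1, fs_id, &client_id))
		printf("canonical UID %u -> tenant UID %u\n", fs_id, client_id);
	return 0;
}
{code}
No per-user mapping rules are needed: any tenant-local ID inside the granted range maps deterministically to a canonical ID, and the reverse mapping recovers the original value.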
Then a storage administrator could choose to allocate 1M canonical UIDs to a Tenant by specifying this property:
Tenant1: canonical UID = UID + 1,000,000
Tenant2: canonical UID = UID + 2,000,000
Tenant3: ...
This way each tenant will have a distinct canonical UID/GID/PROJID range on the storage, without any effort to maintain mapping rules as users are created or deleted within the Tenants.
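The allocation arithmetic behind this could look like the following sketch; the fixed 1M range size is just the example figure from above, and the loop only shows that a single offset value per tenant yields disjoint canonical ranges:
{code:c}
#include <stdint.h>
#include <stdio.h>

/* Illustrative only: with a fixed per-tenant range size, each tenant's
 * canonical ID range is disjoint from every other tenant's. */
#define RANGE_SIZE 1000000u	/* 1M canonical IDs per tenant (example) */

int main(void)
{
	for (uint32_t tenant = 1; tenant <= 3; tenant++) {
		uint32_t start = tenant * RANGE_SIZE;

		printf("Tenant%u: offset %u, canonical IDs %u..%u\n",
		       tenant, start, start, start + RANGE_SIZE - 1);
	}
	return 0;
}
{code}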
As discussed with adilger and sebastien, all nodemaps will be able to have their UID/GID/PROJID offsets set individually, but by default they will all be set the same. The offset values will also only be applied to the individual FSID values when they are loaded; the original mapped values are what is saved. For ease of use, once the offset is declared, users will not need to add it to every mapping (i.e. the admin will specify CLID:FSID as if the offset did not exist), and the offset will be added automatically in the back end when the nodemap is handled.
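A rough sketch of that behaviour, under the assumption described above (explicit CLID:FSID pairs are stored without the offset, and the offset is only added when the mapping is loaded/applied). The names idmap_entry and idmap_effective_fsid are hypothetical, not the actual nodemap code:
{code:c}
#include <stdint.h>
#include <stdio.h>

/* Hypothetical representation of a saved explicit mapping; the FSID is
 * stored exactly as the admin entered it, WITHOUT the nodemap offset. */
struct idmap_entry {
	uint32_t clid;	/* client-side (tenant-local) ID */
	uint32_t fsid;	/* filesystem ID, saved without the offset */
};

/* The offset is applied only when the saved mapping is loaded into the
 * active nodemap, so the admin never has to include it. */
static uint32_t idmap_effective_fsid(const struct idmap_entry *map,
				     uint32_t offset)
{
	return map->fsid + offset;
}

int main(void)
{
	/* Admin declares "500:600" exactly as if no offset existed. */
	struct idmap_entry map = { .clid = 500, .fsid = 600 };
	uint32_t offset = 1000000;	/* per-nodemap offset (example) */

	printf("stored fsid %u -> effective fsid %u\n",
	       map.fsid, idmap_effective_fsid(&map, offset));
	return 0;
}
{code}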