Details
- Type: Improvement
- Resolution: Won't Fix
- Priority: Minor
- 4900
Description
The lu_object cache is specified to consume 20% of total memory, which limits the number of clients that can be mounted on one node to about 200. We should make it adjustable so that customers have a chance to configure it to their needs.
Attachments
Issue Links
- is related to: LU-19 imperative recovery (Resolved)
Activity
Yes, they should be close, but it doesn't matter if they are handled by different threads on different CPUs, instead of hogging one thread on one CPU for seconds.
We want to rehash (or grow the hash table) because we don't want to allocate a huge number of big hash tables up front. For example, obd_export::exp_lock_hash can hold tens of thousands of locks, although in most cases it holds far fewer, so we should allocate a small hash table when the export is initialized and grow it only if necessary. Since we can have hundreds of thousands of exports on a server, this saves a lot of memory.
BTW: although it is not fully tested, I remember the new cfs_hash can also support non-blocking "shrink" of the hash table; we should probably test and enable it in the future.
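For illustration, the grow-on-demand policy described above could be as simple as the following sketch; the names and thresholds (EXP_LOCK_HASH_START_BITS, EXP_LOCK_HASH_MAX_THETA) are hypothetical, not the actual cfs_hash heuristics:

```c
#include <linux/types.h>

/* start each export's lock hash small and double it only when chains
 * get long; both values below are illustrative, not from cfs_hash */
#define EXP_LOCK_HASH_START_BITS 4	/* 16 hash heads per new export */
#define EXP_LOCK_HASH_MAX_THETA	 2	/* grow once avg chain length > 2 */

static inline bool exp_lock_hash_should_grow(unsigned int nelems,
					     unsigned int nheads)
{
	return nelems > EXP_LOCK_HASH_MAX_THETA * nheads;
}
```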
It will help, but if you are using an evenly distributed hash function, I'd say the time to rehash each first-level bucket will be nearly the same.
I just don't understand the intention of the rehashing feature, BTW.
It's kind of off-topic, but I think we can improve cfs_hash to support rehash-in-bucket in the future (a rough sketch follows the list):
- the user can optionally provide two levels of hash functions
- the first is for bucket hashing (each bucket has one lock and N entries (hlist_head))
- the second is for entry hashing inside the bucket (hashing an element to an hlist_head within that bucket)
- rehashing can only happen within each bucket
- better scalability, because we don't rehash the whole hash table in one batch
- no elements move between buckets, so we don't need an rwlock or lock dancing for bucket locking
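A minimal sketch of what rehash-in-bucket could look like, assuming per-bucket spinlocks and a second-level table of hlist heads; none of these names (tl_bucket, tl_node, hash2) come from the real cfs_hash code:

```c
#include <linux/list.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

struct tl_node {
	struct hlist_node tn_link;
	unsigned long	  tn_key;
};

struct tl_bucket {
	spinlock_t	   tlb_lock;	/* protects this bucket only */
	unsigned int	   tlb_size;	/* # of hlist heads in this bucket */
	struct hlist_head *tlb_heads;	/* second-level table */
};

/* grow the second-level table of a single bucket; no other bucket is
 * blocked, and no element ever moves to a different first-level bucket */
static int tl_bucket_grow(struct tl_bucket *bkt,
			  unsigned int (*hash2)(unsigned long key))
{
	unsigned int i, newsize = bkt->tlb_size * 2;
	struct hlist_head *heads, *old;
	struct hlist_node *tmp;
	struct tl_node *node;

	heads = kcalloc(newsize, sizeof(*heads), GFP_KERNEL);
	if (heads == NULL)
		return -ENOMEM;
	for (i = 0; i < newsize; i++)
		INIT_HLIST_HEAD(&heads[i]);

	spin_lock(&bkt->tlb_lock);
	old = bkt->tlb_heads;
	for (i = 0; i < bkt->tlb_size; i++) {
		hlist_for_each_entry_safe(node, tmp, &old[i], tn_link) {
			hlist_del(&node->tn_link);
			hlist_add_head(&node->tn_link,
				       &heads[hash2(node->tn_key) % newsize]);
		}
	}
	bkt->tlb_heads = heads;
	bkt->tlb_size  = newsize;
	spin_unlock(&bkt->tlb_lock);
	kfree(old);
	return 0;
}
```

Because every element stays in its original first-level bucket, growing one bucket never needs a table-wide lock, which is the scalability point above.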
If the entries can be as small as 4096, I think that is absolutely fine.
I don't know exactly how much memory it consumes - it is prorated by memory size - but after I changed lu_cache_percent from 20 to 1, I could mount 1K mountpoints; it used to be 200 at most.
If the lu_site_init() picks a very small hash table size for clients, say 4096 entries, does that prevent you from mounting 1k clients on a single node? Is the lu_cache hash table the only significant memory user for each client mount? How much memory does the lu_cache hash table use on a server if it uses the default lu_htable_order() value?
Rehashing is way too complex for me. Yes, we can add a parameter to lu_site_init() so that client and server can use different hash table sizes. However, I'm afraid we may still need a way to configure it for special needs - for example, I have to mount 1K mountpoints to test the scalability of IR.
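For what it's worth, a sketch of the tunable being proposed, assuming lu_cache_percent were exposed as a module parameter and lu_site_init() grew a size hint; lu_site_init2() and bits_hint are hypothetical names, not existing interfaces:

```c
#include <linux/module.h>

struct lu_site;
struct lu_device;

/* hypothetical tunable: percentage of total memory the lu_object
 * cache may size its hash tables from (the hard-coded 20% today) */
static unsigned int lu_cache_percent = 20;
module_param(lu_cache_percent, uint, 0644);
MODULE_PARM_DESC(lu_cache_percent,
		 "Percentage of memory to size lu_object cache hash tables");

/* hypothetical variant of lu_site_init() taking an explicit size
 * hint, so a client stack can ask for a small table (e.g. 12 bits =
 * 4096 entries) while servers keep the lu_htable_order() default */
int lu_site_init2(struct lu_site *s, struct lu_device *top,
		  unsigned int bits_hint);
```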
Andreas, yes, it is only for hash tables that can grow, and it has already been on master for a while.
I suspect that growing a hash table with one million entries and millions of elements will take a very long time (probably a few seconds on a busy SMP server) and is too expensive; i.e., if we want to increase the hash entries from 512K to 1M, then we have to (see the sketch after this list):
1) allocate a few megabytes for the new hash heads
2) initialize 1 million hash heads
3) move millions of elements from the old hash lists to the new hash lists
It will be even more expensive if we don't have the rwlock and just lock individual buckets to move elements; in the worst case we have to lock/unlock a different target bucket for each element moved. Although we do relax the CPU while rehashing, so other threads can still access the hash table, I'm still a little nervous about having such heavy operations on servers.
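To make the three steps above concrete, a naive one-shot grow under a single table-wide rwlock might look like this sketch (toy names, not the cfs_hash implementation, which rehashes incrementally):

```c
#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/vmalloc.h>

struct toy_hash {
	rwlock_t	   th_lock;	/* single table-wide rwlock */
	unsigned int	   th_size;
	struct hlist_head *th_heads;
};

/* grow from th_size (e.g. 512K) to 2 * th_size (1M): every element
 * is unhashed and rehashed while writers and readers are locked out */
static int toy_hash_grow(struct toy_hash *h,
			 unsigned int (*hashfn)(struct hlist_node *node,
						unsigned int mask))
{
	unsigned int i, newsize = h->th_size * 2;
	struct hlist_head *heads, *old;
	struct hlist_node *node, *tmp;

	/* 1) allocate a few megabytes of new hash heads (~8MB for 1M) */
	heads = vmalloc(newsize * sizeof(*heads));
	if (heads == NULL)
		return -ENOMEM;
	/* 2) initialize 1 million hash heads */
	for (i = 0; i < newsize; i++)
		INIT_HLIST_HEAD(&heads[i]);

	/* 3) move millions of elements under one write lock */
	write_lock(&h->th_lock);
	old = h->th_heads;
	for (i = 0; i < h->th_size; i++) {
		hlist_for_each_safe(node, tmp, &old[i]) {
			hlist_del(node);
			hlist_add_head(node,
				       &heads[hashfn(node, newsize - 1)]);
		}
	}
	h->th_heads = heads;
	h->th_size  = newsize;
	write_unlock(&h->th_lock);
	vfree(old);
	return 0;
}
```

All readers and writers stall behind th_lock for the entire element move, which is where the multi-second pause on a busy server would come from.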
Another thing to notice is that lu_site does not use the high-level cfs_hash APIs like cfs_hash_find/add/del, which hide the cfs_hash locks; lu_site directly references the cfs_hash locks and low-level bucket APIs so it can use those hash locks to protect its own data, for example counters, the LRU for the shrinker, some waitqs, etc. This means we would need to make some changes to lu_site if we wanted to enable rehashing.
I think there is another option to support growing of lu_site: we can have multiple cfs_hash tables for the lu_site, e.g. 64 hash tables, and hash objects to different hash tables. Any of these hash tables can grow when necessary, so we don't need to worry about a "big rehash" with millions of elements, and the global lock wouldn't be an issue either because we have many of these tables.
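A rough sketch of that layout with hypothetical names (the real lu_site keeps a single hash table today):

```c
#include <linux/types.h>

struct cfs_hash;		/* existing Lustre hash table type */

#define LU_SITE_NHASH	64	/* illustrative sub-table count */

struct lu_site_multi {
	/* each sub-table can grow independently, so a rehash only
	 * ever touches ~1/64th of the site's objects */
	struct cfs_hash *lsm_hash[LU_SITE_NHASH];
};

/* pick the sub-table from a slice of the object's hash; the chosen
 * sub-table consumes the remaining bits internally */
static inline struct cfs_hash *
lsm_select(struct lu_site_multi *m, __u64 hash)
{
	return m->lsm_hash[hash % LU_SITE_NHASH];
}
```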
BTW: shouldn't the caller of lu_site_init() know which stack (server or client) the lu_site is being created for? If so, can we just pass in a flag or similar to indicate that a client stack should use a smaller hash table?
Liang, is this also needed for a hash table that can only grow? Probably yes, but just to confirm. Has the improved hash table code already landed on master?
Unfortunately (I think) there is no way to know, when the lu_cache is set up, whether there is going to be a server or only a client on that node. I also assume that it is not possible/safe to share the lu_cache between mountpoints on the client.
I wonder if we might have some scalable method for hash table resizing that does not need a single rwlock for the whole table? One option is to implement rehash as two independent hash tables; as long as the migration of entries from the old table to the new table is done while locking both the source and target buckets, it should be transparent to the users, and contention should be relatively low (only two of all the buckets are locked at any one time).
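Under those assumptions (per-bucket spinlocks, migration always from the old table to the new one so the lock order is fixed and cannot deadlock), the migration step might look like this sketch; all names are illustrative:

```c
#include <linux/list.h>
#include <linux/spinlock.h>

struct mig_bucket {
	spinlock_t	  mb_lock;
	struct hlist_head mb_head;
};

/* move one chain from the old table to the new one; only these two
 * buckets are held at a time, every other bucket stays available */
static void mig_bucket_move(struct mig_bucket *old_b,
			    struct mig_bucket *new_tbl,
			    unsigned int new_mask,
			    unsigned int (*hashfn)(struct hlist_node *n,
						   unsigned int mask))
{
	struct hlist_node *node, *tmp;

	spin_lock(&old_b->mb_lock);
	hlist_for_each_safe(node, tmp, &old_b->mb_head) {
		struct mig_bucket *new_b = &new_tbl[hashfn(node, new_mask)];

		hlist_del(node);
		/* always old -> new: fixed lock order, no deadlock */
		spin_lock(&new_b->mb_lock);
		hlist_add_head(node, &new_b->mb_head);
		spin_unlock(&new_b->mb_lock);
	}
	spin_unlock(&old_b->mb_lock);
}
```

During the resize window, lookups would presumably need to check the new table first and fall back to the old one, so the resize stays transparent to callers.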
The reason we don't allow rehashing of lu_site, especially on the server side, is that if we want to enable rehash (by passing the CFS_HASH_REHASH flag to cfs_hash_create()), there has to be a single rwlock protecting the whole hash table, which could be significant overhead for such a highly contended hash table.
The memory usage is because a large hash table is allocated at mount time. With this patch and lu_cache_percent set to 1, I can run 1K clients on one node without any problem.
I agree it would be good to have a dynamic hash table size, especially on the server side. Personally, I don't think we need it on clients, because it's not desirable for clients to cache an enormous number of objects.
I'll use this patch for IR testing only.