
LU-569: make lu_object cache size adjustable

Details

    • Type: Improvement
    • Resolution: Won't Fix
    • Priority: Minor

    Description

      The lu_object cache is sized to consume 20% of total memory, which limits the number of clients that can be mounted on one node to 200. We should make it adjustable so that customers can configure it for their needs.
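For illustration, a minimal userspace sketch (not the actual Lustre code) of how a memory-prorated hash-table size might be derived from a tunable percentage, in the spirit of the `lu_cache_percent` tunable and `lu_htable_order()` discussed below; the function name and parameters here are hypothetical:

```c
#include <assert.h>

/* Hypothetical sketch: derive a hash-table "order" (log2 of the entry
 * count) from total memory, a tunable percentage of it, and the size
 * of one hash entry. Lowering the percentage shrinks the per-mount
 * table, which is why more mounts fit on one node. */
static unsigned int htable_order(unsigned long long total_mem_bytes,
                                 unsigned int cache_percent,
                                 unsigned int entry_size)
{
        unsigned long long budget = total_mem_bytes / 100 * cache_percent;
        unsigned long long entries = budget / entry_size;
        unsigned int order = 0;

        /* largest order such that 2^order entries fit in the budget */
        while ((1ULL << (order + 1)) <= entries)
                order++;
        return order;
}
```

With 16 GB of memory and 64-byte entries, dropping the percentage from 20 to 1 reduces the order from 25 to 21, a 16x smaller table per mount.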


          Activity


            hudson Build Master (Inactive) added a comment:

            Integrated in lustre-master » x86_64,server,el5,ofa #285
            LU-569: Make lu_object cache size adjustable

            Oleg Drokin : c8d7c99ec50c81a33eea43ed1c535fa4d65cef23
            Files:

            • lustre/obdclass/lu_object.c

            hudson Build Master (Inactive) added a comment:

            Integrated in lustre-master » i686,server,el6,inkernel #285
            LU-569: Make lu_object cache size adjustable

            Oleg Drokin : c8d7c99ec50c81a33eea43ed1c535fa4d65cef23
            Files:

            • lustre/obdclass/lu_object.c

            hudson Build Master (Inactive) added a comment:

            Integrated in lustre-master » x86_64,client,sles11,inkernel #285
            LU-569: Make lu_object cache size adjustable

            Oleg Drokin : c8d7c99ec50c81a33eea43ed1c535fa4d65cef23
            Files:

            • lustre/obdclass/lu_object.c

            jay Jinshan Xiong (Inactive) added a comment:

            I'll use this patch for IR test only.
            liang Liang Zhen (Inactive) added a comment (edited):

            Yes, they should be close, but that doesn't matter if they are handled by different threads on different CPUs, instead of hogging one thread on one CPU for seconds.

            We want to rehash (or grow the hash table) because we don't want to allocate a huge number of big hash tables up front. For example, obd_class::exp_lock_hash can hold tens of thousands of locks, although in most cases it shouldn't be that many. So we should allocate a small hash table when initializing the export and grow it only if necessary; since a server can have hundreds of thousands of exports, this will save a lot of memory.

            btw: although not fully tested, I remember the new cfs_hash can also support non-blocking "shrink" of the hash table; we probably should test and enable it in the future.

            jay Jinshan Xiong (Inactive) added a comment:

            It will help, but if you are using an evenly distributed hash function, the time for each first-level bucket to be rehashed will be really close. BTW, I just don't understand the intention of the rehashing feature.

            liang Liang Zhen (Inactive) added a comment:

            It's kind of off-topic, but I think we can improve cfs_hash to support rehash-in-bucket in the future:

            • the user can optionally provide two levels of hash functions
              • the first is for the bucket-hash (each bucket has one lock and N entries (hlist_head))
              • the second is for the entry-hash inside the bucket (hashing an element to an hlist_head in that bucket)
            • rehash can only happen within each bucket
              • better scalability, because we don't rehash the whole hash table in one batch
              • no elements move between buckets, so we don't need an rwlock or a lock dance for bucket locking
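A minimal userspace sketch of the two-level scheme described above; the names and hash functions are hypothetical, not the real cfs_hash API, and the per-bucket lock is omitted:

```c
#include <assert.h>
#include <stdlib.h>

/* Two hash levels: the first picks a bucket, the second picks a list
 * head inside that bucket. A rehash only redistributes one bucket's
 * own heads, so no element ever moves between buckets. */

struct node { unsigned long key; struct node *next; };

struct bucket {
        unsigned int nheads;    /* power of two */
        struct node **heads;    /* stands in for hlist_head array */
};

#define NBUCKETS 4u

static unsigned int bucket_of(unsigned long key)        /* level 1 */
{
        return (unsigned int)(key % NBUCKETS);
}

static unsigned int head_of(unsigned long key, unsigned int nheads)
{                                                       /* level 2 */
        return (unsigned int)((key * 2654435761ul) & (nheads - 1));
}

static void bucket_insert(struct bucket *b, struct node *n)
{
        struct node **h = &b->heads[head_of(n->key, b->nheads)];

        n->next = *h;
        *h = n;
}

/* Grow one bucket in place: only this bucket's lock would be needed,
 * and only this bucket's elements are touched. */
static void bucket_rehash(struct bucket *b)
{
        unsigned int old_n = b->nheads;
        struct node **old = b->heads;
        unsigned int i;

        b->nheads = old_n * 2;
        b->heads = calloc(b->nheads, sizeof(*b->heads));
        for (i = 0; i < old_n; i++) {
                struct node *n = old[i];

                while (n != NULL) {
                        struct node *next = n->next;

                        bucket_insert(b, n);
                        n = next;
                }
        }
        free(old);
}

static struct node *lookup(struct bucket *tab, unsigned long key)
{
        struct bucket *b = &tab[bucket_of(key)];
        struct node *n = b->heads[head_of(key, b->nheads)];

        while (n != NULL && n->key != key)
                n = n->next;
        return n;
}
```

Because head_of() only redistributes entries among a bucket's own heads, growing one bucket is invisible to lookups in every other bucket, which is the scalability point made in the comment above.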

            jay Jinshan Xiong (Inactive) added a comment:

            If the entries can be as small as 4096, I think that is absolutely fine.

            I don't know exactly how much memory it consumes - it is prorated by memory size - but after I changed lu_cache_percent from 20 to 1, I could mount 1K mountpoints; it used to be 200 at most.

            adilger Andreas Dilger added a comment:

            If lu_site_init() picks a very small hash table size for clients, say 4096 entries, does that prevent you from mounting 1K clients on a single node? Is the lu_cache hash table the only significant memory user for each client mount? How much memory does the lu_cache hash table use on a server with the default lu_htable_order() value?

            jay Jinshan Xiong (Inactive) added a comment:

            Rehash is way too complex for me. Yes, we can add a parameter to lu_site_init() so that client and server can have different hash table sizes. However, I'm afraid we still need a way to configure it for special needs - for example, I have to mount 1K mountpoints to test the scalability of IR.

            liang Liang Zhen (Inactive) added a comment:

            Andreas, yes, it only supports growing the hash table, and it has been on master for a while.

            I suspect that growing a hash table with one million entries and millions of elements will take a very long time (probably a few seconds on a busy SMP server) and is too expensive. For example, to increase the hash entries from 512K to 1M we have to:

            1) allocate a few megabytes for the hash heads
            2) initialize 1 million hash heads
            3) move millions of elements from the old hash lists to the new ones

            It will be even more expensive if we don't have the rwlock and just lock different buckets to move elements; the worst case is that we have to lock/unlock a different target bucket for each element moved. Although we do relax the CPU while rehashing, so other threads can still access the hash table, I'm still a little nervous about having such heavy operations on servers.

            Another thing to notice is that lu_site does not use the high-level cfs_hash APIs like cfs_hash_find/add/del, which hide the cfs_hash locks; lu_site refers directly to the cfs_hash locks and low-level bucket APIs so that it can use those locks to protect its own data - for example, counters, the LRU for the shrinker, some waitqs, etc. This means we would need to make some changes to lu_site to enable rehash.

            I think there is another option to support growing of lu_site: we can have multiple cfs_hash tables per lu_site, e.g. 64 hash tables, and hash objects to different tables. Any of these tables can grow when necessary, so we don't need to worry about a "big rehash" with millions of elements, and a global lock wouldn't be an issue either, because we have many tables.

            btw: shouldn't the caller of lu_site_init() know which stack (server/client) the lu_site is created for? If so, can we just pass in a flag to indicate that the client stack should use a smaller hash table?
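A hedged sketch of the "many small tables" alternative proposed above; the names here are hypothetical, not the real lu_site or cfs_hash code:

```c
#include <assert.h>

/* Objects hash to one of NTABLES independent tables, each with its
 * own lock and its own size. Growing one table touches only that
 * table's elements, and there is no global lock to contend on. */
#define NTABLES 64u

static unsigned int table_of(unsigned long long hash)
{
        return (unsigned int)(hash & (NTABLES - 1));
}

/* Worst-case elements relinked by growing one sub-table, assuming an
 * even spread, versus rehashing one monolithic table. */
static unsigned long moved_by_one_grow(unsigned long total_elements)
{
        return total_elements / NTABLES;
}
```

With one million elements, growing a single sub-table relinks on the order of 16K elements instead of a million, which is the "big rehash" cost the comment is trying to avoid.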

            People

              Assignee: jay Jinshan Xiong (Inactive)
              Reporter: jay Jinshan Xiong (Inactive)
              Votes: 0
              Watchers: 2
