
make lu_object cache size adjustable

Details

    • Type: Improvement
    • Resolution: Won't Fix
    • Priority: Minor

    Description

      The lu_object cache is currently sized to consume 20% of total memory. This limits a single node to 200 client mounts. We should make it adjustable so that customers have a chance to configure it to suit their needs.
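      For reference, here is a minimal sketch of what such a tunable could look like as a module parameter. The lu_cache_percent name comes from the patch discussed in the comments below; the sizing helper and heuristic shown here are purely illustrative (and assume a current kernel), not the actual implementation.

      #include <linux/kernel.h>       /* max() */
      #include <linux/module.h>
      #include <linux/moduleparam.h>
      #include <linux/mm.h>           /* totalram_pages() */
      #include <linux/log2.h>         /* ilog2() */

      /* Illustrative only: percentage of total RAM the lu_object cache may
       * consume.  The default of 20 matches the hard-coded behaviour above;
       * an administrator mounting many clients could lower it at load time. */
      static unsigned int lu_cache_percent = 20;
      module_param(lu_cache_percent, uint, 0644);
      MODULE_PARM_DESC(lu_cache_percent,
                       "percentage of memory used for the lu_object cache");

      /* Hypothetical helper: derive the hash-table size from the memory
       * budget, so a smaller lu_cache_percent means a smaller per-mount table. */
      static unsigned int lu_htable_bits(void)
      {
              unsigned long budget_pages;

              budget_pages = totalram_pages() * lu_cache_percent / 100;
              return ilog2(max(budget_pages, 1UL));
      }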

          Activity

            Rehash is way too complex for me. Yes, we can add a parameter to lu_site_init() so that the client and server can have different hash table sizes. However, I'm afraid we may still need a way to configure it for special needs; for example, I have to mount 1K mountpoints to test the scalability of IR.

            jay Jinshan Xiong (Inactive) added a comment

            Andreas, yes it's only for a hash table that can grow, and it has already been on master for a while.
            I suspect that growing a hash table with one million entries and millions of elements will take a very long time (probably a few seconds on a busy SMP server) and be too expensive. For example, if we want to increase the hash entries from 512K to 1M, then we have to:
            1) allocate a few megabytes for the hash heads
            2) initialize 1 million hash heads
            3) move millions of elements from the old hash lists to the new ones
            It will be even more expensive if we don't have the rwlock and just lock different buckets to move elements; in the worst case we have to lock/unlock a different target bucket for each element moved. Although we do relax the CPU while rehashing so other threads can still access the hash table, I'm still a little nervous about having that kind of heavy operation on servers.
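            To make the cost concrete, here is a toy user-space version of those three steps (none of these names exist in Lustre; this is only an illustration). In the kernel, step 3 additionally has to take and drop the source and target bucket locks for every element, which is where the few-seconds estimate comes from.

            #include <stdlib.h>

            /* Toy chained hash table, user space only. */
            struct obj {
                    struct obj    *next;
                    unsigned long  key;
            };

            struct table {
                    struct obj   **heads;
                    unsigned int   nheads;   /* power of two */
            };

            static int table_grow(struct table *t, unsigned int new_nheads)
            {
                    struct obj **heads;
                    unsigned int i;

                    heads = calloc(new_nheads, sizeof(*heads)); /* steps 1 + 2 */
                    if (heads == NULL)
                            return -1;

                    for (i = 0; i < t->nheads; i++) {           /* step 3 */
                            struct obj *o = t->heads[i];

                            while (o != NULL) {
                                    struct obj *next = o->next;
                                    unsigned int b = o->key & (new_nheads - 1);

                                    o->next = heads[b];
                                    heads[b] = o;
                                    o = next;
                            }
                    }
                    free(t->heads);
                    t->heads = heads;
                    t->nheads = new_nheads;
                    return 0;
            }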

            Another thing to notice is that lu_site does not use the high-level cfs_hash APIs like cfs_hash_find/add/del, which hide the cfs_hash locks; lu_site refers directly to the cfs_hash locks and low-level bucket APIs so it can use those hash locks to protect its own data, for example counters, the LRU for the shrinker, some waitqs, etc. That means we need to make some changes to lu_site if we want to enable rehash.
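            Schematically, the coupling looks like this (illustrative types only, not the real lu_site or cfs_hash structures): the per-bucket lock exposed by cfs_hash also guards lu_site's private per-bucket state, so it cannot simply be hidden behind generic find/add/del wrappers without reworking lu_site.

            #include <pthread.h>

            /* Illustrative per-bucket layout: one lock covers the hash chain
             * plus lu_site-style private state (LRU for the shrinker, counters). */
            struct bucket {
                    pthread_mutex_t  lock;        /* stands in for the cfs_hash bucket lock */
                    void            *hash_chain;  /* objects hashed to this bucket */
                    void            *lru_head;    /* private LRU used by the shrinker */
                    unsigned long    nr_objects;  /* private counter */
            };

            static void bucket_insert(struct bucket *b, void *obj)
            {
                    pthread_mutex_lock(&b->lock);
                    (void)obj;   /* placeholder: link obj into both the chain and the LRU */
                    b->nr_objects++;
                    pthread_mutex_unlock(&b->lock);
            }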

            I think there is another option to support growing of lu_site: we could have multiple cfs_hash tables for the lu_site, e.g. 64 hash tables, and hash objects to the different tables. Any of these hash tables can grow when necessary, so we don't need to worry about a "big rehash" with millions of elements, and a global lock wouldn't be an issue either because we have many of these hash tables.
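            A sketch of that layout (the names and the count of 64 are illustrative): the top-level structure only routes an object to one of the sub-tables, each sub-table rehashes independently, and no single resize ever touches more than 1/64th of the objects.

            #include <stdint.h>

            #define LU_NR_SUBTABLES 64      /* the "64 hash tables" suggested above */

            struct subtable;                /* an independently growable hash table */

            struct split_site {
                    struct subtable *sub[LU_NR_SUBTABLES];
            };

            /* Pick a sub-table from part of the object's hash; the remaining
             * bits are used for bucket selection inside the chosen table. */
            static struct subtable *site_subtable(struct split_site *s, uint64_t hash)
            {
                    return s->sub[hash % LU_NR_SUBTABLES];
            }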

            btw: shouldn't the caller of lu_site_init() know which stack (server/client) the lu_site is created for? If so, can we just pass in a flag or something to indicate that a client stack should use a smaller hash table?
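            Something like the following would express that hint at lu_site_init() time. This is a hypothetical interface: the current lu_site_init() takes no such argument, and the bucket counts are made up for illustration.

            /* Hypothetical: the caller knows which stack it is building. */
            enum lu_site_kind {
                    LU_SITE_CLIENT,         /* many mounts per node, small table */
                    LU_SITE_SERVER,         /* few sites per node, large table */
            };

            static unsigned int lu_site_default_bits(enum lu_site_kind kind)
            {
                    /* e.g. 1K buckets for a client mount, 1M for a server */
                    return kind == LU_SITE_CLIENT ? 10 : 20;
            }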

            liang Liang Zhen (Inactive) added a comment

            Liang, is this needed also for a hash table that can only grow? Probably yes, but just to confirm. Has the improved hash table code already landed on master?

            Unfortunately (I think) there is no way to know, at the time the lu_cache is set up, whether there is going to be a server or only a client on that node. I also assume that it is not possible/safe to share the lu_cache between mountpoints on the client.

            I wonder if we might have some scalable method for hash table resize that does not need a single rwlock for the whole table? One option is to implement rehash with two independent hash tables: as long as the migration of entries from the old table to the new table is done while locking both the source and target buckets, it should be transparent to the users and have relatively low contention (only two of all the buckets are locked at any one time).
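            A user-space sketch of that migration step (illustrative names, not cfs_hash code): only the source and destination buckets are locked while an element moves. Lookups during the resize would probe the new table first and then the old one, taking one bucket lock at a time so there is no lock-ordering problem with the migrator.

            #include <pthread.h>

            struct entry {
                    struct entry  *next;
                    unsigned long  key;
            };

            struct xtable {
                    pthread_mutex_t *locks;     /* one lock per bucket */
                    struct entry   **heads;
                    unsigned int     nbuckets;  /* power of two */
            };

            /* Move every entry of one old bucket into the new table, holding
             * only the source bucket lock plus one destination bucket lock at
             * a time; all other buckets remain available to readers. */
            static void migrate_bucket(struct xtable *old, struct xtable *new,
                                       unsigned int src)
            {
                    pthread_mutex_lock(&old->locks[src]);
                    while (old->heads[src] != NULL) {
                            struct entry *e = old->heads[src];
                            unsigned int dst = e->key & (new->nbuckets - 1);

                            old->heads[src] = e->next;

                            pthread_mutex_lock(&new->locks[dst]);
                            e->next = new->heads[dst];
                            new->heads[dst] = e;
                            pthread_mutex_unlock(&new->locks[dst]);
                    }
                    pthread_mutex_unlock(&old->locks[src]);
            }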

            adilger Andreas Dilger added a comment

            The reason we don't allow rehashing of lu_site, especially on the server side, is that if we want to enable "rehash" (by passing the CFS_HASH_REHASH flag to cfs_hash_create()), then there has to be a single rwlock protecting the whole hash table, which could be too much overhead for such a high-contention hash table.
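            The locking shape being described is roughly the following (a sketch of the pattern, not the cfs_hash source): every hash operation has to take the table-wide rwlock in read mode before it can even compute its bucket, and the rehash path takes it in write mode, so that one rwlock is touched by every operation on every CPU.

            #include <pthread.h>

            struct rhtable {
                    pthread_rwlock_t  resize_lock;   /* the single table-wide rwlock */
                    pthread_mutex_t  *bucket_locks;  /* per-bucket locks */
                    unsigned int      nbuckets;      /* may change under resize_lock */
            };

            /* Every lookup/insert/delete path: shared rwlock first, then the
             * bucket lock.  The bucket index can only be computed after the
             * read lock is held, because a concurrent rehash (write mode) may
             * swap the bucket array and change nbuckets. */
            static unsigned int rhtable_lock_bucket(struct rhtable *t, unsigned long hash)
            {
                    unsigned int bucket;

                    pthread_rwlock_rdlock(&t->resize_lock);
                    bucket = hash & (t->nbuckets - 1);
                    pthread_mutex_lock(&t->bucket_locks[bucket]);
                    return bucket;
            }

            static void rhtable_unlock_bucket(struct rhtable *t, unsigned int bucket)
            {
                    pthread_mutex_unlock(&t->bucket_locks[bucket]);
                    pthread_rwlock_unlock(&t->resize_lock);
            }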

            liang Liang Zhen (Inactive) added a comment

            The memory usage comes from allocating a large hash table when the filesystem is mounted. With this patch and lu_cache_percent set to 1, I can run 1K clients on one node without any problem.

            I agree it would be good to have a dynamic hash table size, especially on the server side. Personally I don't think we need it on clients, because it's not desirable for clients to cache a huge number of objects.

            jay Jinshan Xiong (Inactive) added a comment

            In addition to just allowing Lustre to consume more memory on the client, I think it is also/more important to determine WHY it is consuming so much memory, and try to reduce the actual memory used. Is it because of too-large hash tables, that could be started at a small size and dynamically grown only as needed? Is it because of other large/static arrays per mountpoint?

            My 1.8 client consumes about 7MB after flushing the LDLM cache (lctl get_param memused). It should be fairly straightforward to run with +malloc debug for a second/third/fourth mount, dump the debug logs, parse them with lustre/tests/leakfinder.pl (which may need some fixing), and determine where all of the memory is being used.

            adilger Andreas Dilger added a comment
            jay Jinshan Xiong (Inactive) added a comment - patch is at: http://review.whamcloud.com/1188

            People

              jay Jinshan Xiong (Inactive)
              jay Jinshan Xiong (Inactive)