Details

    • Type: Task
    • Resolution: Fixed
    • Priority: Blocker

    Description

      Many stripe count test

      The many-stripe-count functional test is intended to show that a DNE2 configuration can handle many MDTs in a single filesystem, and that a single directory can be striped over many MDTs. Because this is being run in a virtualized AWS environment, performance will be measured, but neither performance scaling nor load testing is a primary goal of this test. It is instead a functional scaling test of the ability of the filesystem configuration and directory striping code to handle a large number of MDTs.

      1. Create a filesystem with 128 MDTs, 128 OSTs and at least 128 client mount points (multiple mounts per client)
      2. Create striped directories with stripe count N in 16, 32, 64, 96, 128:
                lfs setdirstripe -c N /mnt/lustre/testN
        

        Note: This command creates a striped directory across N MDTs.

                lfs setdirstripe -D -c N /mnt/lustre/testN
        

        Note: This command sets the default stripe count to N. All directories created within this directory will have this default stripe count applied.

      3. Run mdtest on all client mount points; each thread will create/stat/unlink at least 128k files in the striped test directory. Run this test under a striped directory with a default stripe layout, so that all subdirectories will themselves be striped directories:
                lfs setdirstripe -c N /mnt/lustre/testN
                lfs setdirstripe -D -c N /mnt/lustre/testN
        
      4. Verify that no errors occur and that files are striped evenly across the MDTs (a sketch of the full procedure follows this list).
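
        A minimal sketch of the full procedure, assuming the filesystem is already mounted at /mnt/lustre on every client; the hostfile name, process count, and mdtest flags are illustrative and may vary between MPI/mdtest versions:

                #!/bin/bash
                # Sketch only: for each stripe count, create the striped test
                # directory, give it a matching default layout, snapshot MDT inode
                # usage, run a shared-directory mdtest, and snapshot usage again.
                set -e
                MNT=/mnt/lustre
                for N in 16 32 64 96 128; do
                    DIR=$MNT/test$N
                    lfs setdirstripe -c "$N" "$DIR"      # striped directory across N MDTs
                    lfs setdirstripe -D -c "$N" "$DIR"   # subdirectories inherit this layout
                    lfs getdirstripe "$DIR"              # spot-check the layout while empty
                    lfs df -i "$MNT" > "df-i.$N.before"

                    # One process per client mount point, >=128k files per process,
                    # all operating in the shared striped directory (no
                    # unique-directory-per-task option).
                    mpirun --hostfile clients.txt -np 128 \
                        mdtest -F -n 131072 -d "$DIR"

                    lfs df -i "$MNT" > "df-i.$N.after"
                done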

      Attachments

        1. 20150629-bench.log
          275 kB
        2. 20150629-results.json
          1 kB
        3. 20150701-bench96.log
          76 kB
        4. 20150701-results96.json
          0.3 kB


          Activity

            [LU-6737] many stripe testing of DNE2
            rread Robert Read added a comment -

            My tools are ready, but haven't had a chance to go run the full test yet. Will try to get to this today.


            di.wang Di Wang (Inactive) added a comment -

            Robert, any update for the test? Thanks.
            rread Robert Read added a comment -

            "lfs getdirstripe <dir>" is only printing the stripe info of the one directory so the reason for the long pause was not obvious. I had to use strace to see it reading all the dirents after it prints the stripe info. Yes I'd agree it's a bug. I peaked at the code, and this behavior appears to be buried in the details of the llapi_semantic_traverse().

            Yes, test128/dir-0 has 128 stripes with 128k regular files in one directory. test/dir-0 also 128k regular files in one directory.

            I'll try with 128 unstriped subdirectories for comparison next time, but I suspect scanning that will still be quick.
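
            A minimal sketch, assuming strace is available on the client, of how those extra dirent reads can be confirmed by counting the getdents syscalls that lfs issues:

                # Summarize the syscalls made by lfs getdirstripe on the striped
                # directory; a large getdents/getdents64 count shows it walking
                # every directory entry after printing the stripe info.
                strace -f -c -e trace=getdents,getdents64 \
                    lfs getdirstripe /mnt/scratch/test128/dir-0/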

            rread Robert Read added a comment - "lfs getdirstripe <dir>" is only printing the stripe info of the one directory so the reason for the long pause was not obvious. I had to use strace to see it reading all the dirents after it prints the stripe info. Yes I'd agree it's a bug. I peaked at the code, and this behavior appears to be buried in the details of the llapi_semantic_traverse(). Yes, test128/dir-0 has 128 stripes with 128k regular files in one directory. test/dir-0 also 128k regular files in one directory. I'll try with 128 unstriped subdirectories for comparison next time, but I suspect scanning that will still be quick.

            di.wang Di Wang (Inactive) added a comment -

            If you have enough OSTs (let's say >= 32), then single stripe, otherwise zero stripe.

            I assume /mnt/scratch/test128/dir-0 has 128 stripes? Are all children (131073) under dir-0 regular files? Strange, I did not expect lfs find under a striped directory to be so slow. IMHO, it should be similar to a non-striped directory. Something might be wrong, probably statahead. Could you please collect a client-side -1 debug log? Thanks

            adilger Andreas Dilger added a comment -

            It seems like a bug for "lfs getdirstripe" to scan all the entries in the subdirectory, I think? That should require "-R" to scan subdirectories.

            As for "lfs find", I guess it is doing the readdir on all 128 directory shards, but it would be interesting to compare whether this is slower than e.g. "lfs find" on a directory with 128 subdirs containing an equal number of files (i.e. ~1000/subdir).

            rread Robert Read added a comment -

            As I already have some automation built around mdsrate, I'll use that and ensure all threads use a single, shared directory. I also have a patch to mdsrate that adds support for directory striping, though I'll use the lfs commands to make it explicit.

            I'll add an `lfs df -i` before and after the create step of each test so we can confirm, at least manually, that the files are balanced.

            Do you want the files created with 0-stripes or normally with a single stripe?

            BTW, I've been putting our provisioning tools through their paces today, but thought I'd try one client with 128k files. I noticed that large DNE striping has a pretty big impact on directory scanning performance:

            [ec2-user@client00 ~]$ lfs getdirstripe /mnt/scratch/test/dir-0/
            /mnt/scratch/test/dir-0/
            lmv_stripe_count: 0 lmv_stripe_offset: 0
            
            [ec2-user@client00 ~]$ time lfs find /mnt/scratch/test/dir-0/ |wc -l 
            131073
            
            real	0m0.112s
            user	0m0.038s
            sys	0m0.159s
            
            [ec2-user@client00 ~]$ time lfs getdirstripe /mnt/scratch/test128/dir-0/ 
            /mnt/scratch/test128/dir-0/
            lmv_stripe_count: 128 lmv_stripe_offset: 0
            [stripe details deleted]
            
            [ec2-user@client00 ~]$ time lfs find /mnt/scratch/test128/dir-0 |wc -l
            131073
            
            real	0m43.969s
            user	0m0.053s
            sys	0m43.990s
            

            I noticed this because "lfs getdirstripe" was taking ~40s to return because for some reason lfs reads all the directory entries after printing the striping data. I'll make sure to only do this on empty directories for now.


            di.wang Di Wang (Inactive) added a comment -

            No, it is not required, i.e. lfs setdirstripe will set default stripes on both a normal directory and a striped directory.
            rread Robert Read added a comment -

            Is it required for testN to be a striped directory in order to set the default to be striped? In other words, would the following result in striped subdirectories of testN:

                mkdir /mnt/lustre/testN
                lfs setdirstripe -D -c N /mnt/lustre/testN
            
             
            di.wang Di Wang (Inactive) added a comment -

            lfs setdirstripe -c N /mnt/lustre/testN
            lfs mkdir -c N /mnt/lustre/testN
            

            They are the same, and they will create a striped directory with stripe_count = N. Note: if you do not indicate -i here, the master stripe (stripe 0) will be on the same MDT as its parent.

            lfs setdirstripe -D -c N /mnt/lustre/testN
            

            This will be used to set the default stripe count of testN, i.e. all of the subdirectories under testN will be created with this layout (-c N). The default stripe will also be inherited by these subdirectories.
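
            A short consolidated sketch of the distinction (N is a placeholder stripe count and the subdirectory name is only illustrative):

                N=16                                   # placeholder stripe count
                # Creates /mnt/lustre/test$N itself as a directory striped across
                # N MDTs ("lfs mkdir -c $N /mnt/lustre/test$N" is an equivalent
                # spelling of the same operation):
                lfs setdirstripe -c $N /mnt/lustre/test$N

                # Sets only the default layout: test$N itself is unchanged, but new
                # directories created under it are striped across N MDTs and
                # inherit the same default themselves:
                lfs setdirstripe -D -c $N /mnt/lustre/test$N

                # Confirm what a newly created subdirectory picked up:
                mkdir /mnt/lustre/test$N/subdir
                lfs getdirstripe /mnt/lustre/test$N/subdir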

            rread Robert Read added a comment -

            What is the difference between these three commands?

                    lfs setdirstripe -c N /mnt/lustre/testN
                    lfs setdirstripe -D -c N /mnt/lustre/testN
                    lfs mkdir -c N /mnt/lustre/testN
            

            adilger Andreas Dilger added a comment -

            Since this is mostly intended to be a functional test of how many MDTs the DNE2 code can use, rather than a performance test, there is a lot of leeway in the testing options. I don't have a strong preference for mdtest over mdsrate, with the minor caveat that mdsrate is a Lustre-specific benchmark while mdtest is not. The goal would be to create all of the files in the one striped directory, rather than having each client/thread create its own subdirectory.

            There could be multiple threads per client mountpoint, since even without the multi-slot last_rcvd patches or other workarounds there can be one RPC in flight per MDT, so this would also provide natural scaling at the client as the number of MDTs increases.

            As for determining the MDT load balance, given the large number of files and the fact that these are newly formatted filesystems, I think that lfs df -i before and after each test would be enough to determine whether the created files are roughly evenly distributed across MDTs. Since the MDT selection is done via a hash function, the distribution should be fairly even but not perfectly so. Ideally, if you already have infrastructure in CE to monitor MDS load (e.g. LMT), it would be interesting to see whether the load is distributed evenly across MDSes at runtime, but that is not a requirement for this testing, since it is targeted at testing the limits of MDS and MDT counts.

            "No errors" means at a minimum no application-visible errors. For a purely functional test like this I would also expect that there are no Lustre-level errors either (timeouts, etc). If any Lustre errors are printed during the test run please attach them here, or create a new ticket if they indicate some more serious problem.

            As for 128000 vs 131072, I don't think it really matters - the goal is to create a decent number of files per MDT to ensure a reasonable minimum runtime without creating so many that the tests with low MDT counts take too long. Creating 128 * 128000 = 16M files would likely be too many for a 1-stripe directory, but should be reasonable for 16+ stripes (~1M/MDT at 16 stripes, down to ~128K/MDT at 128 stripes), which is the minimum for this test unless the hash distribution is totally broken.
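
            A minimal sketch of the lfs df -i balance check described above, assuming the usual lfs df -i column layout (UUID, Inodes, IUsed, IFree, IUse%, mount point) and illustrative file names:

                # Snapshot per-MDT inode usage before and after the create phase.
                lfs df -i /mnt/lustre | grep MDT > inodes.before
                # ... run the create phase here ...
                lfs df -i /mnt/lustre | grep MDT > inodes.after

                # Print the per-MDT increase in used inodes (IUsed is column 3,
                # so the "after" IUsed lands in column 9 once the two snapshots
                # are pasted side by side).
                paste inodes.before inodes.after | awk '{printf "%s %d\n", $1, $9 - $3}'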


            People

              Assignee: rread Robert Read
              Reporter: rhenwood Richard Henwood (Inactive)
              Votes: 0
              Watchers: 5
