[LU-6737] many stripe testing of DNE2 Created: 17/Jun/15  Updated: 16/Jul/15  Resolved: 02/Jul/15

Status: Resolved
Project: Lustre
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Task Priority: Blocker
Reporter: Richard Henwood (Inactive) Assignee: Robert Read (Inactive)
Resolution: Fixed Votes: 0
Labels: None

Attachments: Text File 20150629-bench.log     File 20150629-results.json     Text File 20150701-bench96.log     File 20150701-results96.json    
Issue Links:
Blocker
is blocking LU-6858 Demonstrate DNE2 functionality Open
is blocked by LU-6602 ASSERTION( rec->lrh_len <= 8192 ) failed Resolved
Rank (Obsolete): 9223372036854775807

 Description   

Many stripe count test

The many stripe count functional test is intended to show that a DNE2 configuration can handle many MDTs in a single filesystem, and a single directory can be striped over many MDTs. Due to the virtual AWS environment in which this is being tested, while performance will be measured, neither performance scaling nor load testing are primary goals of this test. It is rather a functional scaling test of the ability of the filesystem configuration and directory striping code to handle a large number of MDTs.

  1. Create a filesystem with 128 MDTs, 128 OSTs and at least 128 client mount points (multiple mounts per client)
  2. Create striped directories with stripe count N in 16, 32, 64, 96, 128:
            lfs setdirstripe -c N /mnt/lustre/testN
    

    Note: This command creates a striped directory across N MDTs.

            lfs setdirstripe -D -c N /mnt/lustre/testN
    

    Note: This command sets the default stripe count to N. All directories created within this directory will have this default stripe count applied.

  3. Run mdtest on all client mount points; each thread will create/stat/unlink at least 128k files in the striped test directory. Run this test under a striped directory with a default stripe count set, so that all subdirectories created by the test are themselves striped directories.
            lfs setdirstripe -c N /mnt/lustre/testN
            lfs setdirstripe -D -c N /mnt/lustre/testN
    
  4. No errors should be observed, and striping of files should be balanced across the MDTs (one way to drive a single run and check the balance is sketched below).
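
A minimal driver sketch for one stripe count, assuming the filesystem is mounted at /mnt/lustre and that the mdtest/mdsrate invocation is supplied by the test harness (the stripe count, paths, and file names below are illustrative only):

    N=32
    lfs setdirstripe -c $N /mnt/lustre/test$N        # striped test directory across N MDTs
    lfs setdirstripe -D -c $N /mnt/lustre/test$N     # subdirectories inherit the same stripe count
    lfs df -i > /tmp/inodes.before                   # per-MDT inode usage before the run
    # ... run mdtest/mdsrate from all client mount points against /mnt/lustre/test$N ...
    lfs df -i > /tmp/inodes.after                    # compare with inodes.before to judge balance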


 Comments   
Comment by Robert Read (Inactive) [ 17/Jun/15 ]

Can mdsrate be used instead of mdtest? If mdtest is required, please post a script to run this as I don't have one for mdtest.

Should the stats and unlinks be done on different client nodes than the one that created them?

Just one thread per client mount point?

Please provide a method to determine if directory striping is balanced. Since the test requires unlink, this needs to be integrated into the test loop, and ideally it could be done using MPI, too.

Does "No errors" just mean no application errors? Or does this also mean there should be no lustre messages printed on any any of the consoles during the test run?

Just to be precise, does 128k mean 128000 or 2^17 (131072)?

Comment by Andreas Dilger [ 17/Jun/15 ]

Since this is mostly intended to be a functional test of how many MDTs the DNE2 code can use instead of a performance test, there is a large leeway in terms of the testing options. I don't have a strong preference for mdtest over mdsrate, with the minor caveat that mdsrate is a Lustre-specific benchmark while mdtest is not. The goal would be to create all of the files in the one striped directory, rather than having each client/thread create its own subdirectory.

There could be multiple threads per client mountpoint, since even without the multi-slot last_rcvd patches or other workarounds there can be one RPC in flight per MDT so this would also provide natural scaling at the client as the number of MDTs increases.

As for determining the MDT load balance, given the large numbers of files and the fact that these are newly formatted filesystems, I think that lfs df -i before and after each test would be enough to determine whether the created files are roughly evenly distributed across MDTs or not. Since the MDT selection is done via a hash function, the distribution should be fairly even but not perfectly so. Ideally, if you already have infrastructure in CE to monitor MDS load (e.g. LMT) then it would be interesting to see if the load is distributed evenly across MDSes during runtime, but that is not a requirement for this testing since it is targeted at testing the limits of MDS and MDT counts.
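
One low-tech way to do the lfs df -i comparison described above, assuming the usual lfs df -i column layout (UUID, Inodes, IUsed, IFree, IUse%, mount point) and hypothetical temporary file names:

    lfs df -i /mnt/lustre | grep MDT > /tmp/mdt-inodes.before
    # ... create phase of the test ...
    lfs df -i /mnt/lustre | grep MDT > /tmp/mdt-inodes.after
    # print the per-MDT change in IUsed (3rd field from the before file, 9th after joining)
    paste /tmp/mdt-inodes.before /tmp/mdt-inodes.after | \
        awk '{ printf "%-30s %d\n", $1, $9 - $3 }'

Roughly equal deltas across the MDTs of a striped directory would indicate the hash distribution is behaving as expected.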

"No errors" means at a minimum no application-visible errors. For a purely functional test like this I would also expect that there are no Lustre-level errors either (timeouts, etc). If any Lustre errors are printed during the test run please attach them here, or create a new ticket if they indicate some more serious problem.

As for 128000 vs 131072 I don't think it really matters - the goal is to create a decent number of files per MDT to ensure a reasonable minimum runtime without creating so many that the tests with low MDT counts take too long. Creating 128 * 128000 files = 16M files, which would likely be too many for a 1-stripe directory, but should be reasonable for 16+ stripes (~1M/MDT at 16 stripes, down to ~128K/MDT at 128 stripes) which is the minimum for this test unless the hash distribution is totally broken.

Comment by Robert Read (Inactive) [ 17/Jun/15 ]

What is the difference between these three commands?

        lfs setdirstripe -c N /mnt/lustre/testN
        lfs setdirstripe -D -c N /mnt/lustre/testN
        lfs mkdir -c N /mnt/lustre/testN
Comment by Di Wang [ 17/Jun/15 ]
 
lfs setdirstripe -c N /mnt/lustre/testN
lfs mkdir -c N /mnt/lustre/testN

They are the same; both create a striped directory with stripe_count = N. Note: if you do not specify -i here, the master stripe (stripe 0) will be on the same MDT as its parent.

lfs setdirstripe -D -c N /mnt/lustre/testN

This sets the default stripe count of testN, i.e. all of the subdirectories created under testN will get this layout (-c N). The default stripe setting will also be inherited by those subdirectories.

Comment by Robert Read (Inactive) [ 17/Jun/15 ]

Is it required for testN to be a striped directory in order to set the default to be striped? In other words, would the following result in striped subdirectories of testN:

    mkdir /mnt/lustre/testN
    lfs setdirstripe -D -c N /mnt/lustre/testN
Comment by Di Wang [ 17/Jun/15 ]

No, it is not required; lfs setdirstripe -D will set the default stripes on both a normal directory and a striped directory.
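
A minimal illustration of that behaviour, using the ticket's placeholder names (N stands for the chosen stripe count):

    mkdir /mnt/lustre/testN                       # plain, unstriped parent directory
    lfs setdirstripe -D -c N /mnt/lustre/testN    # default stripe count for new subdirectories
    mkdir /mnt/lustre/testN/sub0                  # created under the default, so it is striped
    lfs getdirstripe /mnt/lustre/testN/sub0       # should report lmv_stripe_count: N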

Comment by Robert Read (Inactive) [ 18/Jun/15 ]

As I already have some automation built around mdsrate, I'll use that and ensure all threads use a single, shared directory. I also have a patch to mdsrate that adds support for directory striping, though I'll use the lfs commands to make it explicit.
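
For reference, one possible shape of such a run against a single shared striped directory, using the mdsrate options that appear in the Lustre test scripts (the MPI launcher arguments, process counts, and file counts below are placeholders and may differ from the actual automation):

    DIR=/mnt/lustre/testN
    mpirun -np 128 --hostfile clients mdsrate --create --dir $DIR --nfiles 131072 --filefmt 'f%d'
    mpirun -np 128 --hostfile clients mdsrate --stat   --dir $DIR --nfiles 131072 --filefmt 'f%d'
    mpirun -np 128 --hostfile clients mdsrate --unlink --dir $DIR --nfiles 131072 --filefmt 'f%d'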

I'll add an `lfs df -i` before and after the create step of each test so we can confirm, at least manually, that the files are balanced.

Do you want the files created with 0-stripes or normally with a single stripe?

BTW, I've been putting our provisioning tools through their paces today, but thought I'd try one client with 128k files. I noticed that large DNE striping has a pretty big impact on directory scanning performance:

[ec2-user@client00 ~]$ lfs getdirstripe /mnt/scratch/test/dir-0/
/mnt/scratch/test/dir-0/
lmv_stripe_count: 0 lmv_stripe_offset: 0

[ec2-user@client00 ~]$ time lfs find /mnt/scratch/test/dir-0/ |wc -l 
131073

real	0m0.112s
user	0m0.038s
sys	0m0.159s

[ec2-user@client00 ~]$ time lfs getdirstripe /mnt/scratch/test128/dir-0/ 
/mnt/scratch/test128/dir-0/
lmv_stripe_count: 128 lmv_stripe_offset: 0
[stripe details deleted]

[ec2-user@client00 ~]$ time lfs find /mnt/scratch/test128/dir-0 |wc -l
131073

real	0m43.969s
user	0m0.053s
sys	0m43.990s

I noticed this because "lfs getdirstripe" was taking ~40s to return; for some reason lfs reads all the directory entries after printing the striping data. I'll make sure to only do this on empty directories for now.

Comment by Andreas Dilger [ 18/Jun/15 ]

It seems like a bug for "lfs getdirstripe" to scan all the entries in the directory; that should require "-R" to scan subdirectories.

As for "lfs find" I guess it is doing the reassure on all 128 directory shards, but it would be interesting to compare if this is slower than e.g. "lfs find" on a directory with 128 subdirs with an equal number of files (i.e. 1000/subdir).

Comment by Di Wang [ 18/Jun/15 ]

If you have enough OSTs (let's say >= 32), then use a single stripe; otherwise zero stripe.

I assume /mnt/scratch/test128/dir-0 has 128 stripes, and all children (131073) under dir-0 are regular files? Strange, I did not expect lfs find under a striped directory to be so slow. IMHO, it should be similar to a non-striped directory. Something might be wrong, probably statahead. Could you please collect a client-side -1 debug log? Thanks
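
One way to capture that on the client, assuming root access (the buffer size and output path are illustrative):

    lctl set_param debug=-1                  # enable all debug flags
    lctl set_param debug_mb=1024             # enlarge the debug buffer so the trace is not overwritten
    lctl clear                               # start from an empty buffer
    lfs find /mnt/scratch/test128/dir-0 > /dev/null
    lctl dk > /tmp/lfs-find-striped.dk       # dump the kernel debug log for attachment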

Comment by Robert Read (Inactive) [ 18/Jun/15 ]

"lfs getdirstripe <dir>" is only printing the stripe info of the one directory so the reason for the long pause was not obvious. I had to use strace to see it reading all the dirents after it prints the stripe info. Yes I'd agree it's a bug. I peaked at the code, and this behavior appears to be buried in the details of the llapi_semantic_traverse().

Yes, test128/dir-0 has 128 stripes with 128k regular files in one directory. test/dir-0 also has 128k regular files in one directory.

I'll try with 128 unstriped subdirectories for comparison next time, but I suspect scanning that will still be quick.

Comment by Di Wang [ 25/Jun/15 ]

Robert, any update for the test? Thanks.

Comment by Robert Read (Inactive) [ 25/Jun/15 ]

My tools are ready, but haven't had a chance to go run the full test yet. Will try to get to this today.

Comment by Robert Read (Inactive) [ 30/Jun/15 ]

Log file and results summary for test run.

Details

  • 8 MDS nodes, each with 16x MDT
  • 8 OSS nodes, each with 16x OST
  • 8 clients, each with 16 mount points
  • all nodes were m3.2xlarge instances
  • 4 test runs, one each in a single shared directory striped across 16, 32, 64, and 128 MDTs
  • mdsrate --create, --stat, --unlink in each directory
  • 128k files per MDT for each run
  • 8 threads per MDT for each run
Comment by Robert Read (Inactive) [ 30/Jun/15 ]

Although this was not intended to be a performance test, I did notice that the stripe allocation policy for striped directories appears to be simplistic. As you can see, it appears to always allocate N sequential targets starting from MDT0. This means usage of MDTs will be very uneven unless all directories are widely striped.

CE is designed to provision targets sequentially on each node, and with this striped directory allocation scheme that results in the initial 16-MDT striped directory using a single MDS, rather than using all of them. In the interest of saving time, I changed the target allocation scheme specifically for this test so that targets were staggered across the servers, which balanced the load across all MDS instances for all test runs.
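
For illustration only, the kind of staggered index assignment described here could look like the following at format time (the fsname, device paths, and node-position variable are hypothetical; this is not necessarily how CE implements it):

    k=0                                   # this MDS node's position, 0..7
    for j in $(seq 0 15); do
        idx=$((k + 8 * j))                # node k gets MDT indices k, k+8, ..., k+120
        mkfs.lustre --mdt --fsname=scratch --index=$idx --mgsnode=$MGS_NID /dev/mdt$j
    done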

Comment by Andreas Dilger [ 30/Jun/15 ]

Robert, you are correct that the current DNE MDT allocation policy is not as balanced as the OST allocation policy. That is an enhancement for the future, including taking MDT space usage into account.

It should be noted that the DNE allocation policy isn't necessarily to always start at MDT0, but rather (I believe by default) it will use the parent directory as the master (stripe 0) and round-robin from there, so if all of the directories are created off the filesystem root they will use MDT0 as a starting point. This can be changed via lfs mkdir -i <master_mdt_idx> -c N to explicitly start the stripe creation on a different MDT, but it isn't as good as an improved MDT allocation policy.
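
As a sketch of working around the current policy from the client side, the master MDT of each test directory could be spread explicitly (the indices and directory names below are illustrative):

    # start each widely striped test directory on a different master MDT
    for i in 0 1 2 3; do
        lfs mkdir -i $((i * 32)) -c 32 /mnt/lustre/test32-$i
    done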

Comment by Robert Read (Inactive) [ 01/Jul/15 ]

Results from the 96 stripe run.

Comment by Richard Henwood (Inactive) [ 02/Jul/15 ]

Thanks for your help, Robert - we've got the data we need.
