[LU-255] use ext4 features by default for newly formatted filesystems Created: 29/Apr/11 Updated: 19/May/11 Resolved: 19/May/11 |
|
| Status: | Closed |
| Project: | Lustre |
| Component/s: | None |
| Affects Version/s: | Lustre 2.1.0 |
| Fix Version/s: | Lustre 2.1.0 |
| Type: | Improvement | Priority: | Major |
| Reporter: | Andreas Dilger | Assignee: | Andreas Dilger |
| Resolution: | Fixed | Votes: | 0 |
| Labels: | None | ||
| Rank (Obsolete): | 5035 |
| Description |
|
There are a number of ext4 features that we should be enabling by default for newly formatted ldiskfs filesystems. In particular, the flex_bg option is important for reducing e2fsck time, as well as for avoiding the "slow first write" issues that have hit a number of customers with fuller OSTs. Using flex_bg would avoid a 10-minute delay at mount time or on each e2fsck run. It would also be useful to enable other features such as huge_file (files > 2TB) and dir_nlink (> 65000 subdirectories) by default. All of these features are enabled by default if we format the filesystem with the option "-t ext4". Alternately, we could enable them individually in enable_default_backfs_features(). See http://events.linuxfoundation.org/slides/2010/linuxcon_japan/linuxcon_jp2010_fujita.pdf for a summary of improvements. While we won't see the 12-hour to 5-minute e2fsck improvement shown there (we already use extents and uninit_bg), the flex_bg feature is definitely still a win. |
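As a sketch of the two approaches described above, the commands below format a small loopback image both ways. This is an illustration only: the image path is hypothetical, the feature list is the one quoted in this ticket, and -F with a sparse file keeps it safe to run without a real device.

```shell
# Create a small sparse image so no real device is touched
truncate -s 512M /tmp/ldiskfs-demo.img

# Option 1: enable the features individually, as enable_default_backfs_features() could
mke2fs -q -F -b 4096 -O extents,uninit_bg,dir_nlink,huge_file,flex_bg /tmp/ldiskfs-demo.img

# Confirm the features actually landed in the superblock
dumpe2fs -h /tmp/ldiskfs-demo.img 2>/dev/null | grep "Filesystem features"

# Option 2: let the ext4 filesystem-type defaults enable them wholesale
mke2fs -q -F -t ext4 /tmp/ldiskfs-demo.img
```

Either way, dumpe2fs -h is a quick check that a given feature (e.g. flex_bg) is present on the resulting filesystem.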
| Comments |
| Comment by Jeremy Filizetti [ 29/Apr/11 ] |
|
I'll be running some testing with ~8 TB and larger LUNs over the next few weeks to see the performance impact of various settings for the groups in a flexible block group; when I have some results I will post them here. My main focus, though, is to alleviate the slow mounts and issues from |
| Comment by Peter Jones [ 02/May/11 ] |
|
Andreas seems to be working on this |
| Comment by Andreas Dilger [ 05/May/11 ] |
|
Jeremy, test RPMs are available via http://review.whamcloud.com/#change,480 if you are able to test them. They are built from the lustre-release repository, so the mkfs.lustre is not directly useful to you if you are testing on 1.8.x. The default parameters for an OST with this patch (assuming a large-enough LUN size and ext4-based ldiskfs) are:

mke2fs -j -b 4096 -L lustre-OSTffff -J size=400 -I 256 -i 262144 -O extents,uninit_bg,dir_nlink,huge_file,flex_bg -G 256 -E resize=4290772992,lazy_journal_init, -F {dev}

For an MDT they are:

mke2fs -j -b 4096 -L lustre-MDTffff -J size=400 -I 512 -i 2048 -O dirdata,uninit_bg,dir_nlink,huge_file,flex_bg -E lazy_journal_init, -F {dev} |
| Comment by Andreas Dilger [ 13/May/11 ] |
|
Oleg, this patch should be included into the 2.1 release - it dramatically speeds up mkfs and should fix (for new filesystems) the slow startup problems seen in |
| Comment by Shuichi Ihara (Inactive) [ 15/May/11 ] |
|
I'm also interested in these patches and just tested the patched RPMs. When I formatted the MDT (16TB), it failed with the following errors. Any advice? The OST format worked well.

Permanent disk data: device size = 14934016MB
mkfs.lustre FATAL: Unable to build fs /dev/mpath/mdt (256)
mkfs.lustre FATAL: mkfs failed 256 |
| Comment by Shuichi Ihara (Inactive) [ 15/May/11 ] |
|
Formatting the MDT also worked when I added --mkfsoptions="-i 4096" to mkfs.lustre... |
| Comment by Andreas Dilger [ 15/May/11 ] |
|
Ihara, thanks for testing. Did you test on 2.x or 1.8? As for the problem hit on the MDT, I agree that the mkfs.lustre command should handle this case better. However, I also think that it doesn't make sense to have a 16TB MDT, because that much space will never be used. One of the changes made in this patch is to reduce the default inode ratio to 2048 bytes per inode, which is still very safe but allows more inodes for a given LUN size. I would recommend simply using a smaller LUN for the MDT; with the new inode ratio, 8TB is enough for the maximum 4B inodes. |
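The "8TB is enough" claim follows directly from the bytes-per-inode arithmetic; a quick check using nothing beyond the -i 2048 ratio quoted above (treating 8TB as 8 TiB):

```shell
# An 8 TiB LUN at a bytes-per-inode ratio of 2048:
# 8 * 2^40 / 2^11 = 2^32 inodes, i.e. the ~4 billion ldiskfs inode maximum
echo $(( 8 * 1024 * 1024 * 1024 * 1024 / 2048 ))   # prints 4294967296
```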
| Comment by Oleg Drokin [ 15/May/11 ] |
|
I wonder how safe it is not to zero the journal? Suppose this is a mkfs on top of a previous ext4 filesystem. Could it happen then that, in certain cases, old transactions from the journal would be picked up? |
| Comment by Andreas Dilger [ 15/May/11 ] |
|
Realistically, it is very unlikely that anything from the internal journal would be re-used in this case. The journal superblock will be rewritten with a new journal transaction ID of 1, marking no outstanding transactions to recover, and when it is mounted the TID will increment from 1. If the node crashed before it had overwritten the journal (unlikely even under relatively low usage), there would still need to be transactions left in the journal that aligned right after the end of the current transaction, and also with the next TID in sequence. In practice I think the chance of this is very low, except in test filesystems that are reformatted repeatedly after a very short lifespan, but if you want I could drop this part of the patch. It avoids 400MB of IO to the device at mke2fs time, though even then this is a small portion compared to the inode table blocks being written. |
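A minimal sketch of the option under discussion, on a loopback image rather than a real device (the image path is illustrative, and -J size is kept small to fit the test image):

```shell
truncate -s 1G /tmp/lazy-demo.img
# -E lazy_journal_init skips zeroing the journal blocks at format time;
# the journal superblock itself is still rewritten, so the TID restarts at 1
mke2fs -q -F -J size=64 -E lazy_journal_init /tmp/lazy-demo.img
# The journal inode exists as usual; only the bulk zeroing was skipped
dumpe2fs -h /tmp/lazy-demo.img 2>/dev/null | grep -i "journal"
```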
| Comment by Shuichi Ihara (Inactive) [ 16/May/11 ] |
|
I'm testing on 2.x (got RPMs from http://review.whamcloud.com/#change,480); there are some test updates. We have an 8TB MDT (changed size from 16TB) and 16TB OSTs; here are the mkfs.lustre times:

      un-patched (sec)   patched (sec)
MDT         3591             3361
OST         1836               15

Formatting the OSTs was dramatically sped up, but I didn't see a big acceleration in formatting the MDT. |
| Comment by Andreas Dilger [ 16/May/11 ] |
|
I suspect that the MDT format time is actually more than 2x as fast per inode, because it is writing 2x as many inodes for the same amount of space (using "-i 2048" for patched, and "-i 4096" for unpatched). Even if mke2fs isn't running faster on the MDT, e2fsck should still run faster there due to flex_bg. |
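The "more than 2x per inode" estimate can be checked from the numbers posted above: 3591s unpatched at -i 4096 versus 3361s patched at -i 2048, the latter writing twice the inodes for the same space:

```shell
# Per-inode throughput ratio: unpatched time divided by the patched time
# normalized to the same inode count, i.e. 3591 / (3361 / 2)
awk 'BEGIN { printf "%.2f\n", 3591 / (3361 / 2) }'   # prints 2.14
```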
| Comment by Shuichi Ihara (Inactive) [ 16/May/11 ] |
|
ah, yes. Is it worth testing "-i 2048" on un-patched to confirm the speedup? I'm also going to run e2fsck on the MDT and OST (at 0%, 50%, and 80% usage) on both un-patched and patched. |
| Comment by Build Master (Inactive) [ 18/May/11 ] |
|
Integrated in Oleg Drokin : eb012d4a10208b26c2d3e795a90f1bb07dde6d91
|
| Comment by Andreas Dilger [ 19/May/11 ] |
|
Patch is landed for 2.1.0, closing bug. |