[LU-1365] Implement ldiskfs LARGEDIR support for e2fsprogs Created: 03/May/12 Updated: 04/Jun/21 Resolved: 11/Feb/19 |
|
| Status: | Resolved |
| Project: | Lustre |
| Component/s: | None |
| Affects Version/s: | Lustre 2.3.0 |
| Fix Version/s: | Lustre 2.12.4 |
| Type: | New Feature | Priority: | Minor |
| Reporter: | Andreas Dilger | Assignee: | Artem Blagodarenko (Inactive) |
| Resolution: | Fixed | Votes: | 0 |
| Labels: | LTS12, e2fsprogs, ldiskfs, patch |
| Attachments: |
|
| Issue Links: |
|
| Story Points: | 3 |
| Rank (Obsolete): | 10210 |
| Description |
|
This INCOMPAT_LARGEDIR feature allows larger directories to be created in ldiskfs, both with directory sizes over 2GB and with a maximum htree depth of 3 instead of the current limit of 2. These features are needed in order to exceed the current limit of approximately 10M entries in a single directory. The INCOMPAT_LARGEDIR feature was added to ldiskfs as part of the pdirops patch. Tasks that need to be completed before INCOMPAT_LARGEDIR can be used include:
- add a parallel-scale.sh test with LARGEDIR and >2GB directories with Lustre using 255-byte names and 10M entries (2GB exceeded at 4M entries, 4GB exceeded at 8M entries); this might be done using a smaller number of hard-linked inodes (nlink_max = 65000), to avoid the overhead of accessing and caching a large number of different inodes
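As a quick illustration (not part of the original description), here is a minimal sketch of how a tool linked against libext2fs from e2fsprogs 1.44 or later could report these two limits; it assumes the ext2fs_has_feature_largedir() helper exported by recent e2fsprogs:

{code}
/* Sketch only: report the directory limits implied by large_dir.
 * Build (assumption): cc -o lgdir lgdir.c -lext2fs -lcom_err */
#include <stdio.h>
#include <ext2fs/ext2fs.h>

int main(int argc, char **argv)
{
	ext2_filsys fs;
	errcode_t err;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <device>\n", argv[0]);
		return 1;
	}

	err = ext2fs_open(argv[1], 0, 0, 0, unix_io_manager, &fs);
	if (err) {
		com_err(argv[0], err, "while opening %s", argv[1]);
		return 1;
	}

	if (ext2fs_has_feature_largedir(fs->super))
		/* large_dir: directory i_size may exceed 2GB and the
		 * htree index may use 3 levels instead of 2 */
		printf("large_dir enabled: >2GB dirs, htree depth 3\n");
	else
		printf("large_dir disabled: 2GB dir limit, htree depth 2\n");

	ext2fs_close(fs);
	return 0;
}
{code}

Enabling the feature itself happens at mke2fs/tune2fs time ("-O large_dir") once the e2fsprogs support tracked in this ticket is in place.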
|
| Comments |
| Comment by Gerrit Updater [ 18/Aug/16 ] |
|
Artem Blagodarenko (artem.blagodarenko@seagate.com) uploaded a new patch: http://review.whamcloud.com/22008 |
| Comment by Gerrit Updater [ 18/Aug/16 ] |
|
Artem Blagodarenko (artem.blagodarenko@seagate.com) uploaded a new patch: http://review.whamcloud.com/22009 |
| Comment by Artem Blagodarenko (Inactive) [ 18/Aug/16 ] |
This patch is enough to enable large_dir: http://review.whamcloud.com/22008

Test 100 in http://review.whamcloud.com/22009 fails when large_dir is not enabled:

creating hard link `/mnt/lustre/d100.conf-sanity/Kv74Cdj3FnDyJrJ9gbgqM0GIrlWtGKKTxTO4N5usmjXjRkDk3DKDhdPqjTq5Fw8JgKh5rADZMb5omdc2ySMqUURJUfIcjE5O2FSTqs2WNtOQKCNqK8vnLM5wawDd26Txd27GwLEagRRA6KipNNUj4NLb711dvwt46hBGuvJfeiN6iir9NMjqiJfcLfXQPOYwheMKBVAtjwauj5sdi3zSdSSzTeyCUIka7p3MHYAiPduo90fQWVA2GPtbvMVJzp0': No space left on device

and passes with the "large_dir" option.

> add parallel-scale.sh test LARGEDIR and >2GB directories with Lustre using 255-byte names and 10M entries (2GB exceeded at 4M entries, 4GB exceeded at 8M entries). This might be done using a smaller number of hard-linked inodes (nlink_max = 65000), to avoid overhead of accessing and caching a large number of different inodes.

Andreas, I can't find the reason why we need to add such a test to parallel-scale.sh. An mdtest run can help to estimate performance. I created a functional test that shows the possibility of creating ">2GB directories with Lustre using 255-byte names and 10M entries", but I believe parallel-scale.sh is not the best place for it, so I placed it in conf-sanity.sh (test_101). conf-sanity.sh test_101 creates 12M hard links. Here are the creation rates on my local testing system:

A testing system on a virtual machine is not ideal, so we also tested 120M hard-link creations on a cluster, this time against ldiskfs directly (this excludes all other code except ldiskfs and can show its possible problems). Here is the graph of the creation rates: |
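A rough sanity check of those thresholds (my arithmetic, not from the ticket), assuming 4-byte-aligned ext4 directory entries and htree leaf blocks averaging about half full after splits:

{code}
% one directory entry with a 255-byte name, rounded to a 4-byte boundary:
\mathrm{rec\_len} = \left\lceil \tfrac{8 + 255}{4} \right\rceil \times 4 = 264 \text{ bytes}
% at roughly 50\% leaf fill the effective cost is about
% 2 \times 264 = 528 bytes per entry, so:
2^{31} / 528 \approx 4.1 \times 10^{6} \text{ entries (2GB)}, \qquad
2^{32} / 528 \approx 8.1 \times 10^{6} \text{ entries (4GB)}
{code}

which is consistent with "2GB exceeded at 4M entries, 4GB exceeded at 8M entries" above.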
| Comment by Gerrit Updater [ 08/Sep/16 ] |
|
Artem Blagodarenko (artem.blagodarenko@seagate.com) uploaded a new patch: http://review.whamcloud.com/22384 |
| Comment by Gerrit Updater [ 17/Nov/16 ] |
|
Niu Yawei (yawei.niu@intel.com) uploaded a new patch: http://review.whamcloud.com/23831 |
| Comment by Gerrit Updater [ 02/Apr/17 ] |
|
Anonymous Coward (jjkky@yahoo.com) uploaded a new patch: https://review.whamcloud.com/26311 |
| Comment by Gerrit Updater [ 02/Apr/17 ] |
|
Anonymous Coward (jjkky@yahoo.com) uploaded a new patch: https://review.whamcloud.com/26312 |
| Comment by Gerrit Updater [ 02/Apr/17 ] |
|
Anonymous Coward (jjkky@yahoo.com) uploaded a new patch: https://review.whamcloud.com/26313 |
| Comment by Artem Blagodarenko (Inactive) [ 03/Apr/17 ] |
|
Andreas, do I need to resend the ext4/e2fsprogs patches to the mailing list again? I see new patches have been uploaded there, but I don't understand the reason. Thanks. |
| Comment by Gerrit Updater [ 05/May/17 ] |
|
Andreas Dilger (andreas.dilger@intel.com) merged in patch https://review.whamcloud.com/23831/ |
| Comment by Andreas Dilger [ 16/Sep/17 ] |
|
This landed in upstream e2fsprogs-1.44 and kernel 4.14. |
| Comment by Gerrit Updater [ 18/Jan/18 ] |
|
Wang Shilong (wshilong@ddn.com) uploaded a new patch: https://review.whamcloud.com/30912 |
| Comment by Gerrit Updater [ 18/Jan/18 ] |
|
Wang Shilong (wshilong@ddn.com) uploaded a new patch: https://review.whamcloud.com/30913 |
| Comment by Artem Blagodarenko (Inactive) [ 19/Nov/18 ] |
|
adilger, I have just attached logs for conf_sanity test_124 and test_125 from a test session in my local environment; both tests pass. I also installed the packages that Maloo built, and the tests started successfully (I haven't waited for them to finish, but they did not fail at startup like in Maloo). I have no idea how to fix the Maloo test session. Do you have any suggestions? |
| Comment by Andreas Dilger [ 27/Nov/18 ] |
|
Reopening this issue while the patch is still unlanded. There also appears to be an issue with e2fsck and directories over 2GB:

Pass 1: Checking inodes, blocks, and sizes
Inode 252 is too big. Truncate? no

Block #524289 (552395) causes directory to be too big. IGNORED.
Block #524290 (552396) causes directory to be too big. IGNORED.
Block #524291 (552397) causes directory to be too big. IGNORED.
Block #524292 (552398) causes directory to be too big. IGNORED.
Block #524293 (552399) causes directory to be too big. IGNORED. |
| Comment by Dongyang Li [ 28/Nov/18 ] |
|
Artem, it looks like process_block() from pass 1 still limits the directory size to 2GB. With large_dir we can end up with a directory larger than 2GB, like the one created in conf_sanity test_125. I also noticed that stat and ls from debugfs show the size of that same directory as a negative value; the reason is that we are just using inode->i_size rather than EXT2_I_SIZE(inode). Can you please fix these in e2fsprogs upstream? Also, please push a patch to gerrit for the master-lustre branch so we can land it from our side. Thanks, DY |
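For reference, a sketch of the bug described above (not the actual fix): e2fsprogs keeps the upper 32 bits of the size in i_size_high, and the EXT2_I_SIZE() macro from ext2fs headers merges the two halves, so code reading only the bare inode->i_size truncates a >2GB directory size, and printing it through a signed 32-bit value makes it appear negative:

{code}
#include <ext2fs/ext2fs.h>

/* Sketch: getting the size of a large_dir directory right.
 * EXT2_I_SIZE() expands to
 *     (i)->i_size | ((__u64)(i)->i_size_high << 32)
 * so it is safe for directories over 2GB, while the bare
 * 32-bit i_size field is not. */
blk64_t dir_size_blocks(ext2_filsys fs, struct ext2_inode *inode)
{
	__u64 size = EXT2_I_SIZE(inode);   /* correct: full 64-bit size */
	/* __u32 size = inode->i_size;        wrong: truncated past 2GB */

	return (size + fs->blocksize - 1) / fs->blocksize;
}
{code}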
| Comment by Gerrit Updater [ 29/Nov/18 ] |
|
Artem Blagodarenko (c17828@cray.com) uploaded a new patch: https://review.whamcloud.com/33756 |
| Comment by Gerrit Updater [ 29/Nov/18 ] |
|
Artem Blagodarenko (c17828@cray.com) uploaded a new patch: https://review.whamcloud.com/33757 |
| Comment by Gerrit Updater [ 10/Dec/18 ] |
|
Artem Blagodarenko (c17828@cray.com) uploaded a new patch: https://review.whamcloud.com/33813 |
| Comment by Gerrit Updater [ 10/Dec/18 ] |
|
Artem Blagodarenko (c17828@cray.com) uploaded a new patch: https://review.whamcloud.com/33814 |
| Comment by Gerrit Updater [ 12/Dec/18 ] |
|
Andreas Dilger (adilger@whamcloud.com) merged in patch https://review.whamcloud.com/33814/ |
| Comment by Gerrit Updater [ 13/Dec/18 ] |
|
Andreas Dilger (adilger@whamcloud.com) merged in patch https://review.whamcloud.com/33813/ |
| Comment by Artem Blagodarenko (Inactive) [ 21/Dec/18 ] |
|
Test conf_sanity 125 passed successfully in my local environment. It took 1.5 hours; full logs are attached to this issue. The creation rate went from "total: 60000 link in 119.82 seconds: 500.76 ops/second" in the first iteration to "total: 60000 link in 123.89 seconds: 484.31 ops/second". I believe the earlier performance drop could be because we passed the full directory pathname to the createmany utility, and a lookup takes more time in a large directory. This time createmany acts on the current directory. |
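To illustrate the design choice in that last sentence (generic POSIX code, not the actual createmany change; link_many() is a hypothetical helper): resolving the target directory once and then linking by a short relative name avoids re-walking the full pathname of a huge directory on every operation:

{code}
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Hypothetical helper: create 'count' hard links to 'src' inside 'dir'.
 * The directory is opened once; each linkat() then resolves only the
 * short relative name instead of the full path for every link. */
int link_many(const char *dir, const char *src, long count)
{
	char name[32];
	long i;
	int dfd = open(dir, O_RDONLY | O_DIRECTORY);

	if (dfd < 0)
		return -1;
	for (i = 0; i < count; i++) {
		snprintf(name, sizeof(name), "link%ld", i);
		if (linkat(AT_FDCWD, src, dfd, name, 0) < 0) {
			perror("linkat");
			close(dfd);
			return -1;
		}
	}
	return close(dfd);
}
{code}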
| Comment by Gerrit Updater [ 30/Jan/19 ] |
|
Oleg Drokin (green@whamcloud.com) merged in patch https://review.whamcloud.com/33757/ |
| Comment by Gerrit Updater [ 30/Jan/19 ] |
|
Oleg Drokin (green@whamcloud.com) merged in patch https://review.whamcloud.com/22009/ |
| Comment by Gerrit Updater [ 11/Feb/19 ] |
|
Oleg Drokin (green@whamcloud.com) merged in patch https://review.whamcloud.com/33756/ |
| Comment by Peter Jones [ 11/Feb/19 ] |
|
Landed for 2.13 |
| Comment by Colin Faber [X] (Inactive) [ 29/May/19 ] |
|
Should this be closed? |
| Comment by Peter Jones [ 29/May/19 ] |
|
We usually leave tickets as RESOLVED rather than CLOSED because then the ticket can be updated when needed (if landed to maintenance branches, say) without the extra email generated by having to go through the states REOPEN then RESOLVED then CLOSED again. |
| Comment by Colin Faber [X] (Inactive) [ 29/May/19 ] |
|
Got it. How do you keep track of where things stand, and when tickets like this are ready to close? |
| Comment by Peter Jones [ 29/May/19 ] |
|
We just consider RESOLVED to mean that the primary task is complete. |
| Comment by Gerrit Updater [ 18/Nov/19 ] |
|
Minh Diep (mdiep@whamcloud.com) uploaded a new patch: https://review.whamcloud.com/36778 |
| Comment by Gerrit Updater [ 18/Nov/19 ] |
|
Minh Diep (mdiep@whamcloud.com) uploaded a new patch: https://review.whamcloud.com/36779 |
| Comment by Gerrit Updater [ 05/Dec/19 ] |
|
Oleg Drokin (green@whamcloud.com) merged in patch https://review.whamcloud.com/36778/ |