Details

    • Type: Improvement
    • Resolution: Fixed
    • Priority: Minor
    • Fix Version: Lustre 2.9.0
    • Labels: None

    Description

      Unlanded patches exist in upstream ZFS that increase the dnode size; these need to be evaluated for their impact (hopefully an improvement) on Lustre metadata performance on ZFS MDTs:

      https://github.com/zfsonlinux/zfs/pull/3542

    Activity

            [LU-8068] Large ZFS Dnode support

            nedbass Ned Bass (Inactive) added a comment

            We're currently testing with the following patch to mitigate the performance impact of metadnode backfilling. It uses a naive heuristic (rescan after 4096 unlinks, at most once per txg), but this is simple and probably achieves 99% of the performance to be gained here.

            https://github.com/LLNL/zfs/commit/050b0e69
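
            For readers who want the shape of that heuristic, here is a minimal sketch in C; the names and structure are illustrative assumptions, not the actual LLNL commit:

                /*
                 * Sketch only: rescan the metadnode for holes once a batch of
                 * unlinks has accumulated, and at most once per txg.
                 */
                #include <stdint.h>

                #define RESCAN_UNLINK_THRESHOLD 4096    /* unlinks before a rescan */

                struct alloc_hint {
                    uint64_t unlinked_count;    /* unlinks since the last rescan */
                    uint64_t last_rescan_txg;   /* txg when we last rescanned */
                };

                static int
                should_rescan(struct alloc_hint *h, uint64_t txg)
                {
                    if (h->unlinked_count < RESCAN_UNLINK_THRESHOLD)
                        return 0;               /* not enough holes to chase yet */
                    if (h->last_rescan_txg == txg)
                        return 0;               /* already rescanned in this txg */
                    h->unlinked_count = 0;
                    h->last_rescan_txg = txg;
                    return 1;                   /* restart the hole scan at 0 */
                }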

            adilger Andreas Dilger added a comment - edited

            The large dnode patch is blocked behind https://github.com/zfsonlinux/zfs/pull/4460, which addresses the performance problem that Alex and Ned identified; currently that patch is only a workaround and needs to be improved before landing. I've described in that ticket what seems to be a reasonable approach for a production-ready solution; to summarize (a rough sketch follows this comment):

            • by default the dnode allocator should just use a counter that continues at the next file offset (as in the existing 4460 patch)
            • if dnodes are being unlinked, a (per-cpu?) counter of unlinked dnodes and the minimum unlinked dnode number should be tracked (these values could be racy since it isn't critical that their values be 100% accurate)
            • when the unlinked dnode counter exceeds some threshold (e.g. 4x the number of inodes created in the previous TXG, or 64x the number of dnodes that fit into a leaf block, or some tunable number of unlinked dnodes specified by userspace), scanning should restart at the minimum unlinked dnode number instead of "0", to avoid scanning a large number of already-allocated dnode blocks

            Alex, in order to move the large dnode patch forward, could you or Nathaniel work on an updated 4460 patch so that we can get on with landing the large dnode patch?
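
            A minimal sketch of the allocator behavior summarized above; all names here are illustrative assumptions, not the actual 4460 patch:

                /*
                 * Sketch: choose where the dnode hole scan should start.
                 * Normally continue forward; once enough dnodes have been
                 * unlinked, restart at the lowest freed dnode instead of 0.
                 */
                #include <stdint.h>

                struct objset_alloc_state {
                    uint64_t next_object;       /* forward-allocation cursor */
                    uint64_t unlinked_count;    /* racy count of unlinked dnodes */
                    uint64_t min_unlinked;      /* lowest unlinked dnode seen */
                    uint64_t rescan_threshold;  /* e.g. 4x creates in prev TXG */
                };

                static uint64_t
                alloc_scan_start(struct objset_alloc_state *s)
                {
                    if (s->unlinked_count > s->rescan_threshold) {
                        uint64_t start = s->min_unlinked;
                        s->unlinked_count = 0;
                        s->min_unlinked = UINT64_MAX;
                        return start;           /* reclaim the freed region */
                    }
                    return s->next_object;      /* common case: keep going forward */
                }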


            bzzz Alex Zhuravlev added a comment

            Ned refreshed the patch to address that performance issue, and now it's doing much better.
            First of all, I'm now able to complete some tests that previously hit OOM (because of the huge memory consumption by the 8K spill blocks, I guess).
            It now makes sense to benchmark on real storage, as the amount of IO with this patch is several times smaller:
            1K per dnode vs (512-byte dnode + 8K spill), or 976MB vs 8300MB per 1M dnodes.
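
            For clarity, the arithmetic behind those totals (per 1M dnodes, taking 1MB = 2^20 bytes):

                1K dnodes:               1,000,000 x 1,024 B          = ~976 MB
                512B dnode + 8K spill:   1,000,000 x (512 + 8,192) B  = ~8,300 MB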


            bzzz Alex Zhuravlev added a comment

            I tried Lustre with that patch (on top of master ZFS):
            before:
            mdt 1 file 500000 dir 1 thr 1 create 21162.48 [ 18998.73, 22999.10]
            after:
            mdt 1 file 500000 dir 1 thr 1 create 18019.70 [ 15999.09, 19999.20]

            osd-zfs was modified to ask for 1K dnodes, as verified with zdb:

                Object  lvl  iblk  dblk  dsize  dnsize  lsize  %full  type
                 10000    1   16K   512      0      1K    512   0.00  ZFS plain file

            Notice the zero dsize, meaning no spill block was allocated.
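
            As a rough illustration of the per-object interface (dmu_object_alloc_dnsize() comes from the pull request; the wrapper and argument choices here are assumptions, not the actual osd-zfs change):

                /*
                 * Sketch: allocate an object with a 1K dnode so the Lustre SAs
                 * fit in the enlarged bonus area instead of an 8K spill block.
                 */
                static uint64_t
                osd_alloc_1k_dnode(objset_t *os, dmu_tx_t *tx)
                {
                    return (dmu_object_alloc_dnsize(os,
                        DMU_OT_PLAIN_FILE_CONTENTS,
                        0,                      /* blocksize: use the default */
                        DMU_OT_SA,              /* bonus holds system attributes */
                        DN_BONUS_SIZE(1024),    /* bonus length for a 1K dnode */
                        1024,                   /* requested dnode size */
                        tx));
                }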


            bzzz Alex Zhuravlev added a comment

            clean zfs/master:
            Created 1000000 in 29414ms in 1 threads - 33997/sec
            Created 1000000 in 20045ms in 2 threads - 49887/sec
            Created 1000000 in 19259ms in 4 threads - 51923/sec
            Created 1000000 in 17284ms in 8 threads - 57856/sec

            zfs/master + large dnodes:
            Created 1000000 in 40618ms in 1 threads - 24619/sec
            Created 1000000 in 28142ms in 2 threads - 35534/sec
            Created 1000000 in 25731ms in 4 threads - 38863/sec
            Created 1000000 in 25244ms in 8 threads - 39613/sec

            bzzz Alex Zhuravlev added a comment - edited

            Hmm, I was using the old version... let me try the new one. This will take some time - the patch doesn't apply to 0.6.5.


            bzzz Alex Zhuravlev added a comment

            Yes, I was about to play with the code, but got confused by that performance issue. And yes, 1K should be more than enough: the LinkEA would be 48+ bytes, the LOVEA is something like 56+, then LMA and VBR (which I'd hope we can put into the ZPL dnode, but in the worst case it's another 24+8 bytes).
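
            A rough check of that budget (the ~832 B figure is an assumption based on DN_BONUS_SIZE(1024) = 1024 - 64 - 128 bytes for a 1K dnode):

                LinkEA (48+) + LOVEA (56+) + LMA/VBR (24 + 8) = ~136+ bytes of payload,
                comfortably within ~832 bytes of bonus space even with SA overhead.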


            adilger Andreas Dilger added a comment

            Alex, your comment is on an old version of the patch, and not on the main pull request (https://github.com/zfsonlinux/zfs/pull/3542), so I don't think Ned will be looking there. Also, hopefully you are not using that old version of the patch (8f9fdb228), but rather the newest patch (ba39766)?


            adilger Andreas Dilger added a comment

            Rereading the large dnode patch, it seems that the caller can specify the dnode size on a per-dnode basis, so ideally we can add support for this to the osd-zfs code; if not specified, it falls back to the dataset property. Is 1KB large enough to hold the dnode + LOVEA + linkEA + FID?
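
            For reference, the dataset-wide fallback mentioned here is the dnodesize property added by the pull request; assuming the usual zfs(8) syntax, it would be set as:

                zfs set dnodesize=1k <pool>/<dataset>    # or dnodesize=auto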


            bzzz Alex Zhuravlev added a comment

            Andreas, I've already made a comment on GitHub, with no reply so far. I hope Ned has seen it.

            So far I've tested large dnodes with ZPL only and noticed significant degradation, so I took a timeout hoping to see comments from Ned.
            I haven't tested Lustre with large dnodes.

            The patch allows asking for a dnode of a specific size, and I think we can do this given that we declare everything (including a LOVEA of known size) ahead of time; we can easily track this in the OSD.
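
            A sketch of that idea, assuming the OSD sums the xattr sizes it has declared (all names illustrative):

                /*
                 * Sketch: round the declared xattr bytes up to a supported
                 * dnode size (512 B base, power-of-two steps up to 16K).
                 */
                static int
                osd_pick_dnodesize(int declared_xattr_bytes)
                {
                    int need = 512 + declared_xattr_bytes;
                    int dnsize = 512;

                    while (dnsize < need && dnsize < 16 * 1024)
                        dnsize <<= 1;
                    return dnsize;
                }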


            adilger Andreas Dilger added a comment

            Alex, could you please post on the patch in GitHub so the LLNL folks can see it?

            Also, it isn't clear what the difference is between your two tests. In the first case you wrote that the create rate is down from 29k to 20k; is that the ZPL create rate? I don't expect this feature to help the non-Lustre case, since ZPL doesn't use SAs that can fit into the large dnode space, so it is just overhead.

            In the second case you wrote that the create rate is up from 13k to 20k when you shrink the LOVEA, so presumably this is Lustre, but without the large dnode patch?

            What is the performance of Lustre with a normal LOVEA size (1-4 stripes) + large dnodes? Presumably that would be 13k +/- some amount, not 29k +/- some amount?

            Also, my (vague) understanding of this patch is that it dynamically allocates space for the dnode, possibly using up space for the dnode numbers following it. Does this fail if the dnode is not declared large enough for all future SAs during the initial allocation? IIRC, the osd-zfs code stores the layout and link xattrs to the dnode in a separate operation, which may make the large dnode patch ineffective. It may also have problems with multiple threads allocating dnodes from the same block in parallel, since it doesn't know at dnode allocation time how large an SA space Lustre will eventually need. Maybe my understanding of how this feature was implemented is wrong?


    People

        bzzz Alex Zhuravlev
        adilger Andreas Dilger

        Votes: 0
        Watchers: 19
