[LU-8068] Large ZFS Dnode support Created: 21/Aug/15  Updated: 27/Feb/17  Resolved: 02/Jun/16

Status: Resolved
Project: Lustre
Component/s: None
Affects Version/s: None
Fix Version/s: Lustre 2.9.0

Type: Improvement Priority: Minor
Reporter: Andreas Dilger Assignee: Alex Zhuravlev
Resolution: Fixed Votes: 0
Labels: llnl

Issue Links:
Blocker
is blocking LU-7895 zfs metadata performance improvements Resolved
Duplicate
is duplicated by LU-8424 osd_object.c:1330:22: error: 'DN_MAX_... Resolved
Related
is related to LU-6483 Add xattrset to mdsrate Resolved
is related to LU-8124 MDT zpool capacity consumed at greate... Resolved
Rank (Obsolete): 9223372036854775807

 Description   

Unlanded patches exist in upstream ZFS to increase the dnode size; they need to be evaluated for their impact (hopefully an improvement) on Lustre metadata performance on ZFS MDTs:

https://github.com/zfsonlinux/zfs/pull/3542



 Comments   
Comment by Andreas Dilger [ 21/Aug/15 ]

Any performance testing of this patch should be done using a Lustre MDT rather than just a local ZFS filesystem with a standard create/stat/unlink workload. Otherwise, the large dnodes will just slow down metadata performance due to increased IO, and the overhead of xattrs and external spill blocks used by Lustre will not be measured.

It may be possible to use mdsrate --create --setxattr changes from LU-6483 (or equivalent) to test on a local ZFS filesystem, but this still needs an enhancement to allow storing smaller xattrs since --setxattr currently only stores a 4000-byte xattr. It needs to be enhanced to allow --setxattr=<size> to store an xattr of a specific size, say 384 bytes for ZFS with 1024-byte dnodes.
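
For illustration, a minimal sketch of what a --setxattr=<size> style option boils down to when testing on a local filesystem: storing a user xattr of a caller-chosen size (384 bytes by default here). The helper and xattr name below are hypothetical and are not mdsrate's actual code.

#include <stdio.h>
#include <stdlib.h>
#include <sys/xattr.h>

/* Hypothetical helper, not mdsrate's actual code: store a user xattr of a
 * caller-chosen size on 'path', as an mdsrate --setxattr=<size> option
 * would do for each created file. */
static int set_sized_xattr(const char *path, size_t size)
{
	char *buf = calloc(1, size);	/* payload contents don't matter */
	int rc;

	if (buf == NULL)
		return -1;
	/* "user.mdsrate" is an arbitrary name chosen for this sketch */
	rc = lsetxattr(path, "user.mdsrate", buf, size, 0);
	if (rc < 0)
		perror("lsetxattr");
	free(buf);
	return rc;
}

int main(int argc, char *argv[])
{
	size_t size = (argc > 2) ? strtoul(argv[2], NULL, 0) : 384;

	if (argc < 2) {
		fprintf(stderr, "usage: %s <file> [xattr-size]\n", argv[0]);
		return 1;
	}
	return set_sized_xattr(argv[1], size) ? 1 : 0;
}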

Comment by Joseph Gmitter (Inactive) [ 24/Aug/15 ]

Hi Jinshan,
Can you have a look at this topic?
Thanks.
Joe

Comment by Jinshan Xiong (Inactive) [ 11/Sep/15 ]

It looks like this feature has some conflicts with storing xattrs as system attributes, which blocks further performance benchmarking. I'm waiting for the author's response to move forward.

Comment by Alex Zhuravlev [ 22/Mar/16 ]

I tried this patch with createmany on a directly mounted ZFS; it degrades create performance from ~29K/sec to ~20K/sec. I'm not sure how quickly this degradation can be addressed, but in general the large dnode patch looks very important. To simulate it I tweaked the code to shrink the LOVEA to just a few bytes so that it fits in the bonus buffer, and this brought the creation rate from ~13K to ~20K in mds-survey.

Comment by Andreas Dilger [ 22/Mar/16 ]

Alex, could you please post a comment on the patch in GitHub so the LLNL folks can see it.

Also, it isn't clear what the difference is between your two tests. In the first case you wrote that the create rate is down from 29k to 20k; is that the ZPL create rate? I don't expect this feature to help the non-Lustre case, since ZPL doesn't use SAs that can fit into the large dnode space, so it is just overhead.

In the second case you wrote the create rate is up from 13k to 20k when you shrink the LOVEA, so presumably this is Lustre, but without the large dnode patch?

What is the performance with Lustre with normal LOVEA size (1-4 stripes) + large dnodes? Presumably that would be 13k +/- some amount, not 29k +/- some amount?

Also, my (vague) understanding of this patch is that it dynamically allocates space for the dnode, possibly using up space for dnode numbers following it? Does this fail if the dnode is not declared large enough for all future SAs during the initial allocation? IIRC, the osd-zfs code stores the layout and link xattrs to the dnode in a separate operation, which may make the large dnode patch ineffective. It may also have problems with multiple threads allocating dnodes from the same block in parallel, since it doesn't know at dnode allocation time how large the SA space Lustre eventually needs. Maybe my understanding of how this feature was implemented is wrong?

Comment by Alex Zhuravlev [ 22/Mar/16 ]

Andreas, I've already made a comment on GitHub; no reply so far. Hopefully Ned has seen it.

So far I've tested large dnodes with ZPL only and noticed a significant degradation, so I paused, hoping to see comments from Ned.
I haven't tested Lustre with large dnodes.

The patch allows the caller to ask for a dnode of a specific size, and I think we can do this given that we declare everything (including a LOVEA of known size) ahead of time.
We can easily track this in the OSD.

Comment by Andreas Dilger [ 22/Mar/16 ]

Rereading the large dnode patch, it seems that the caller can specify the dnode size on a per-dnode basis, so ideally we can add support for this to the osd-zfs code; if it is not specified, the dnode size is taken from the dataset property. Is 1KB large enough to hold the dnode + LOVEA + linkEA + FID?
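
If the final patch does allow a per-object dnode size at allocation time, the osd-zfs hook could be roughly as simple as the sketch below. The dmu_object_alloc_dnsize() name and argument order are assumptions based on the pull request being discussed, and the 1K constant is just the size debated here; neither should be read as the landed interface.

#include <sys/dmu.h>	/* ZFS DMU types and object-type constants */

/* Sketch only, not the landed osd-zfs code.  Assumes the large dnode
 * patch adds a dmu_object_alloc_dnsize() variant that takes an explicit
 * dnode size in bytes (name and signature to be confirmed against the
 * final pull request). */
#define OSD_DNODE_SIZE	1024	/* enough for LOVEA + linkEA + LMA, per the discussion above */

static uint64_t osd_alloc_plain_object(objset_t *os, int bonuslen,
				       dmu_tx_t *tx)
{
	return dmu_object_alloc_dnsize(os, DMU_OT_PLAIN_FILE_CONTENTS,
				       0,		/* default data block size */
				       DMU_OT_SA,	/* bonus buffer holds SAs */
				       bonuslen,
				       OSD_DNODE_SIZE,	/* per-object dnode size */
				       tx);
}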

Comment by Andreas Dilger [ 22/Mar/16 ]

Alex, your comment is on an old version of the patch, and not on the main pull request (https://github.com/zfsonlinux/zfs/pull/3542), so I don't think Ned will be looking there? Also, hopefully you are not using this old version of the patch (8f9fdb228), but rather the newest patch (ba39766)?

Comment by Alex Zhuravlev [ 22/Mar/16 ]

Yes, I was about to play with the code, but got confused by that performance issue. And yes, 1K should be more than enough: the linkEA would be 48+ bytes, the LOVEA is something like 56+ bytes, then LMA and VBR (which I'd hope we can put into the ZPL dnode, but in the worst case it's another 24+8 bytes).
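
Summing those estimates (all of them rough numbers from the comment above, not authoritative sizes), the Lustre SAs come to well under the extra bonus room a 1K dnode provides over a 512-byte one:

/* Rough budget based on the estimates above; illustrative only. */
enum {
	OSD_LINKEA_EST	= 48,	/* linkEA, single hard link */
	OSD_LOVEA_EST	= 56,	/* LOVEA, simple layout */
	OSD_LMA_EST	= 24,	/* LMA */
	OSD_VBR_EST	= 8,	/* VBR */
};

/* 48 + 56 + 24 + 8 = 136 bytes of Lustre SAs, far less than the roughly
 * 512 additional bytes a 1K dnode offers over a standard 512-byte dnode. */
#define OSD_SA_ESTIMATE	(OSD_LINKEA_EST + OSD_LOVEA_EST + \
			 OSD_LMA_EST + OSD_VBR_EST)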

Comment by Alex Zhuravlev [ 22/Mar/16 ]

Hmm, I was using the old version... let me try the new one. This will take some time, as the patch doesn't apply to 0.6.5.

Comment by Alex Zhuravlev [ 22/Mar/16 ]

clean zfs/master:
Created 1000000 in 29414ms in 1 threads - 33997/sec
Created 1000000 in 20045ms in 2 threads - 49887/sec
Created 1000000 in 19259ms in 4 threads - 51923/sec
Created 1000000 in 17284ms in 8 threads - 57856/sec

zfs/master + large dnodes:
Created 1000000 in 40618ms in 1 threads - 24619/sec
Created 1000000 in 28142ms in 2 threads - 35534/sec
Created 1000000 in 25731ms in 4 threads - 38863/sec
Created 1000000 in 25244ms in 8 threads - 39613/sec

Comment by Alex Zhuravlev [ 22/Mar/16 ]

Tried Lustre with that patch (on top of ZFS master):
before:
mdt 1 file 500000 dir 1 thr 1 create 21162.48 [ 18998.73, 22999.10]
after:
mdt 1 file 500000 dir 1 thr 1 create 18019.70 [ 15999.09, 19999.20]

osd-zfs was modified to ask for 1K dnodes, verified with zdb:
Object  lvl  iblk  dblk  dsize  dnsize  lsize  %full  type
 10000    1   16K   512      0      1K    512   0.00  ZFS plain file

Notice the zero dsize, meaning no spill block was allocated.

Comment by Alex Zhuravlev [ 28/Mar/16 ]

Ned refreshed the patch to address that performance issue and now it's doing much better.
First of all, I'm now able to complete some tests where I was previously getting OOMs (because of the huge memory consumption of the 8K spill blocks, I guess).
It now makes sense to benchmark on real storage, as the amount of IO with this patch is several times smaller:
1K per dnode vs. a 512-byte dnode + 8K spill, or roughly 976MB vs 8300MB per 1M dnodes.

Comment by Andreas Dilger [ 26/Apr/16 ]

The large dnode patch is blocked behind https://github.com/zfsonlinux/zfs/pull/4460, which addresses the performance problem that Alex and Ned identified, but currently that patch is only a workaround and needs to be improved before landing. I've described in that ticket what seems to be a reasonable approach for making a production-ready solution, but to summarize:

  • by default the dnode allocator should just use a counter that continues at the next file offset (as in the existing 4460 patch)
  • if dnodes are being unlinked, a (per-cpu?) counter of unlinked dnodes and the minimum unlinked dnode number should be tracked (these values could be racy since it isn't critical that their values be 100% accurate)
  • when the unlinked dnode counter exceeds some threshold (e.g. 4x number of inodes created in previous TXG, or 64x the number of dnodes that fit into a leaf block, or some tunable number of unlinked dnodes specified by userspace) then scanning should restart at the minimum unlinked dnode number instead of "0" to avoid scanning a large number of already-allocated dnode blocks

Alex, in order to move the large dnode patch forward, could you or Nathaniel work on an updated 4460 patch so that we can get on with landing it?
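
A rough sketch of the allocator behaviour those bullets describe follows; the names and data structure are invented for illustration and this is not the actual ZFS dnode allocator or the 4460 patch.

#include <stdint.h>

/* Hypothetical sketch of the allocation policy proposed above. */
struct dn_alloc_state {
	uint64_t next_object;		/* append cursor: next unused object number */
	uint64_t unlinked_count;	/* dnodes freed since the last rescan */
	uint64_t min_unlinked;		/* lowest freed object seen (UINT64_MAX if none) */
	uint64_t rescan_threshold;	/* e.g. 4x creates in the previous TXG */
};

/* Decide where the free-slot scan should start for the next allocation. */
static uint64_t dn_alloc_scan_start(struct dn_alloc_state *s)
{
	if (s->unlinked_count > s->rescan_threshold &&
	    s->min_unlinked != UINT64_MAX) {
		/* Enough holes have accumulated: restart the scan at the
		 * lowest freed dnode rather than at object 0, skipping the
		 * long run of fully allocated dnode blocks at the front. */
		uint64_t start = s->min_unlinked;

		s->unlinked_count = 0;
		s->min_unlinked = UINT64_MAX;
		return start;
	}
	/* Default: keep appending at the end of the object range. */
	return s->next_object;
}

/* Approximate (possibly racy) bookkeeping on unlink, per the proposal. */
static void dn_alloc_note_free(struct dn_alloc_state *s, uint64_t object)
{
	s->unlinked_count++;
	if (object < s->min_unlinked)
		s->min_unlinked = object;
}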

Comment by Ned Bass [ 20/May/16 ]

We're currently testing with the following patch to mitigate the performance impact of metadnode backfilling. It uses a naive heuristic (rescan after 4096 unlinks, at most once per txg), but it is simple and probably achieves 99% of the performance to be gained here.

https://github.com/LLNL/zfs/commit/050b0e69
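
For illustration only (hypothetical names, not the code in the LLNL commit), the gate that heuristic describes amounts to something like:

#include <stdint.h>
#include <stdbool.h>

/* Rescan for free dnode slots only after 4096 unlinks, and at most once
 * per txg; illustrative sketch of the heuristic described above. */
#define DN_RESCAN_UNLINKS	4096

static bool dn_should_rescan(uint64_t unlinked_since_rescan,
			     uint64_t current_txg, uint64_t *last_rescan_txg)
{
	if (unlinked_since_rescan < DN_RESCAN_UNLINKS)
		return false;
	if (*last_rescan_txg == current_txg)
		return false;	/* already rescanned in this txg */
	*last_rescan_txg = current_txg;
	return true;
}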

Comment by Gerrit Updater [ 20/May/16 ]

Ned Bass (bass6@llnl.gov) uploaded a new patch: http://review.whamcloud.com/20367
Subject: LU-8068 osd-zfs: large dnode compatibility
Project: fs/lustre-release
Branch: master
Current Patch Set: 1
Commit: f0d8afaec213a7f471c3b22b9940de5c5cd192e3

Comment by Gerrit Updater [ 02/Jun/16 ]

Oleg Drokin (oleg.drokin@intel.com) merged in patch http://review.whamcloud.com/20367/
Subject: LU-8068 osd-zfs: large dnode support
Project: fs/lustre-release
Branch: master
Current Patch Set:
Commit: 9765c6174ef580fb4deef4e7faea6d5ed634b00f

Comment by Peter Jones [ 02/Jun/16 ]

Landed for 2.9
