[LU-12336] Update ZFS Version to 0.8.2 Created: 24/May/19  Updated: 18/Feb/20  Resolved: 18/Feb/20

Status: Resolved
Project: Lustre
Component/s: None
Affects Version/s: Lustre 2.13.0
Fix Version/s: None

Type: Task Priority: Minor
Reporter: Nathaniel Clark Assignee: Nathaniel Clark
Resolution: Won't Fix Votes: 0
Labels: llnl, zfs

Attachments: PNG File ZFS_ on, ZFS_ off, ldiskfs_ on and ldiskfs_ off.png     PNG File blksz=512, blksz=8K and ldiskfs_ usec per OSD call.png     PNG File ldiskfs vs zfs_ usec per OSD call -- no CONFIG_DEBUG_PAGEALLOC and CONFIG_DEBUG_SLAB.png     PNG File zfs to ldiskfs.png     PNG File zfs vs ldiskfs_ usec per OSD call in sanity-benchmark.png    
Issue Links:
Blocker
is blocking LU-12637 Support RHEL 8.1 Resolved
is blocking LU-13178 Update ZFS Version to 0.8.3 Resolved
is blocked by LU-11170 sanity test 415 fails with 'rename t... Reopened
is blocked by LU-12383 lfs project inhert difference between... Resolved
is blocked by LU-12100 sanity-quota test_2: user create fail... Resolved
Related
is related to LU-2160 Implement ZFS dmu_tx_hold_append() de... Open
is related to LU-1941 ZFS FIEMAP support Open
is related to LU-12745 Lustre fails to compile against zfs d... Resolved
is related to LU-12830 RHEL8.3 and ZFS: oom on OSS Resolved
is related to LU-13122 osd-zfs to use 8K blocksize for llog ... Resolved
Rank (Obsolete): 9223372036854775807

 Description   

New Features

  • Native encryption #5769 - The encryption property enables the creation of encrypted filesystems and volumes. The aes-256-ccm algorithm is used by default. Per-dataset keys are managed with zfs load-key and associated subcommands.
  • Raw encrypted 'zfs send/receive' #5769 - The zfs send -w option allows an encrypted dataset to be sent and received to another pool without decryption. The received dataset is protected by the original user key from the sending side. This allows datasets to be efficiently backed up to an untrusted system without fear of the data being compromised.
  • Device removal #6900 - This feature allows single and mirrored top-level devices to be removed from the storage pool with zpool remove. All data is copied in the background to the remaining top-level devices and the pool capacity is reduced accordingly.
  • Pool checkpoints #7570 - The zpool checkpoint subcommand allows you to preserve the entire state of a pool and optionally revert back to that exact state. It can be thought of as a pool wide snapshot. This is useful when performing complex administrative actions which are otherwise irreversible (e.g. enabling a new feature flag, destroying a dataset, etc).
  • Pool TRIM #8419 - The zpool trim subcommand provides a way to notify the underlying devices which sectors are no longer allocated. This allows an SSD to more efficiently manage itself and helps prevent performance from degrading. Continuous background trimming can be enabled via the new autotrim pool property.
  • Pool initialization #8230 - The zpool initialize subcommand writes a pattern to all the unallocated space. This eliminates the first access performance penalty, which may exist on some virtualized storage (e.g. VMware VMDKs).
  • Project accounting and quota #6290 - This feature adds project-based usage accounting and quota enforcement to the existing space accounting and quota functionality. Project quotas add an additional dimension to traditional user/group quotas. The zfs project and zfs projectspace subcommands have been added to manage projects, set quota limits and report on usage.
  • Channel programs #6558 - The zpool program subcommand can be used to perform compound ZFS administrative actions via Lua scripts in a sandboxed environment (with time and memory limits).
  • Pyzfs #7230 - The new pyzfs library is intended to provide a stable interface for the programmatic administration of ZFS. This wrapper provides a one-to-one mapping for the libzfs_core API functions, but the signatures and types are more natural to Python.
  • Python 3 compatibility #8096 - The arcstat, arcsummary, and dbufstat utilities have been updated to be compatible with Python 3.
  • Direct IO #7823 - Adds support for Linux's direct IO interface.
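A few of the new subcommands listed above can be sketched in shell. This is an illustrative walkthrough only, assuming an existing pool named tank with a dataset tank/fs (hypothetical names, not tied to this ticket):

```shell
# Native encryption: create an encrypted dataset (aes-256-ccm by default)
# and load its key, e.g. after a pool import.
zfs create -o encryption=on -o keyformat=passphrase tank/secure
zfs load-key tank/secure

# Raw encrypted send: the stream stays encrypted end to end, so the
# receiving pool never needs the key.
zfs snapshot tank/secure@backup
zfs send -w tank/secure@backup | ssh backuphost zfs receive backuppool/secure

# Pool checkpoint: preserve the whole pool state before a risky change,
# then discard it, or rewind to it at import time.
zpool checkpoint tank
zpool checkpoint --discard tank
# (to rewind instead: zpool export tank; zpool import --rewind-to-checkpoint tank)

# TRIM: one-shot or continuous background trimming.
zpool trim tank
zpool set autotrim=on tank

# Project quotas: tag a directory tree with project ID 100 and cap its usage.
zfs project -p 100 -s /tank/fs/dir
zfs set projectquota@100=10G tank/fs
```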

Performance

  • Sequential scrub and resilver #6256 - When scrubbing or resilvering a pool the process has been split into two phases. The first phase scans the pool metadata in order to determine where the data blocks are stored on disk. This allows the second phase to issue scrub I/O as sequentially as possible, greatly improving performance.
  • Allocation classes #5182 - Allows a pool to include a small number of high-performance SSD devices that are dedicated to storing specific types of frequently accessed blocks (e.g. metadata, DDT data, or small file blocks). A pool can opt-in to this feature by adding a special or dedup top-level device.
  • Administrative commands #7668 - Improved performance due to targeted caching of the metadata required for administrative commands like zfs list and zfs get.
  • Parallel allocation #7682 - The allocation process has been parallelized by creating multiple "allocators" per-metaslab group. This results in improved allocation performance on high-end systems.
  • Deferred resilvers #7732 - This feature allows new resilvers to be postponed if an existing one is already in progress. By waiting for the running resilver to complete redundancy is restored as quickly as possible.
  • ZFS Intent Log (ZIL) #6566 - New log blocks are created and issued while there are still outstanding blocks being serviced by the storage, effectively reducing the overall latency observed by the application.
  • Volumes #8615 - When a pool contains a large number of volumes they are more promptly registered with the system and made available for use after a zpool import.
  • QAT #7295 #7282 #6767 - Support for accelerated SHA256 checksums, AES-GCM encryption, and the new QAT Intel(R) C62x Chipset / Atom(R) C3000 Processor Product Family SoC.
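The allocation classes feature above is opted into per pool by adding a special (or dedup) top-level vdev. A minimal sketch, assuming pool tank and spare SSDs sdx/sdy (hypothetical device names):

```shell
# Dedicate a mirrored SSD pair to metadata and small blocks.
zpool add tank special mirror sdx sdy

# Optionally also route small file blocks (here <= 32K) on a dataset
# to the special vdev, not just metadata.
zfs set special_small_blocks=32K tank/fs

# A dedup vdev works the same way for DDT data:
#   zpool add tank dedup mirror sdw sdz
```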

Changes in Behavior

  • Relaxed (ref)reservation constraints on volumes; they may now be set larger than the volume size.
  • The arcstat.py, arc_summary.py, and dbufstat.py commands have been renamed arcstat, arc_summary, and dbufstat respectively.
  • The SPL source is now included in the ZFS repository removing the need for separate packages.
  • The dedupditto pool property and zfs send -D option have been deprecated and will be removed in a future release.

Additional Information

  • Supported kernels - Compatible with 2.6.32 - 5.1* Linux kernels.
  • SIMD acceleration is currently not supported for Linux 5.0 and newer kernels.
  • Module options - The default values for the module options were selected to yield good performance for the majority of workloads and configurations. They should not need to be tuned for most systems but are available for performance analysis and tuning. See the zfs-module-parameters(5) man page for the complete list of the options and what they control.
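The module options mentioned above are visible at runtime under sysfs. A hedged sketch of inspecting and tuning them (the paths exist only on a host with the zfs module loaded; the 16 GiB ARC cap is just an example value):

```shell
# List every zfs module parameter with its current value.
grep -r . /sys/module/zfs/parameters/ 2>/dev/null

# Read and tune a single parameter, e.g. the ARC size cap in bytes (root only).
cat /sys/module/zfs/parameters/zfs_arc_max
echo 17179869184 > /sys/module/zfs/parameters/zfs_arc_max

# Persist the setting across reboots via modprobe configuration.
echo "options zfs zfs_arc_max=17179869184" > /etc/modprobe.d/zfs.conf
```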


 Comments   
Comment by Gerrit Updater [ 24/May/19 ]

Nathaniel Clark (nclark@whamcloud.com) uploaded a new patch: https://review.whamcloud.com/34951
Subject: LU-12336 build: Update ZFS version to 0.8.0
Project: fs/lustre-release
Branch: master
Current Patch Set: 1
Commit: b175c78c61d59740d3f5a219e0094dc6264908df

Comment by James A Simmons [ 24/Jun/19 ]

We are trying ZFS 0.8.1 with lustre and we are seeing:

yum install kmod-lustre-osd-zfs-2.12.2_30_g989217d_dirty-1.el7.x86_64.rpm
Loaded plugins: langpacks, search-disabled-repos
Examining kmod-lustre-osd-zfs-2.12.2_30_g989217d_dirty-1.el7.x86_64.rpm: kmod-lustre-osd-zfs-2.12.2_30_g989217d_dirty-1.el7.x86_64
Marking kmod-lustre-osd-zfs-2.12.2_30_g989217d_dirty-1.el7.x86_64.rpm to be installed
Resolving Dependencies
--> Running transaction check
---> Package kmod-lustre-osd-zfs.x86_64 0:2.12.2_30_g989217d_dirty-1.el7 will be installed
--> Processing Dependency: ksym(__cv_broadcast) = 0xb75ecbeb for package: kmod-lustre-osd-zfs-2.12.2_30_g989217d_dirty-1.el7.x86_64
--> Processing Dependency: ksym(arc_add_prune_callback) = 0x23573478 for package: kmod-lustre-osd-zfs-2.12.2_30_g989217d_dirty-1.el7.x86_64
--> Processing Dependency: ksym(arc_buf_size) = 0x3180449b for package: kmod-lustre-osd-zfs-2.12.2_30_g989217d_dirty-1.el7.x86_64
--> Processing Dependency: ksym(arc_remove_prune_callback) = 0x6f8b923b for package: kmod-lustre-osd-zfs-2.12.2_30_g989217d_dirty-1.el7.x86_64
--> Processing Dependency: ksym(dbuf_create_bonus) = 0x0d804452 for package: kmod-lustre-osd-zfs-2.12.2_30_g989217d_dirty-1.el7.x86_64

This doesn't happen with 0.7.13 and we have tried a few things but nothing seems to resolve this. Any ideas?

Comment by Patrick Farrell (Inactive) [ 24/Jun/19 ]

James,

There's no error there?  Can you give a little more output?

Comment by James A Simmons [ 24/Jun/19 ]

Actually the osd-zfs module will not install; it complains that the ksyms don't match. It can load if we install all the zfs debuginfo packages as well. We've never seen this behavior before, so our admins are confused.

Comment by Nathaniel Clark [ 24/Jun/19 ]

I just updated the patch to be for 0.8.1 and it compiles fine.  Did you build the rpms with ./configure --with-spec=redhat?
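For reference, the kmod build flow being suggested is roughly the following (hedged: the --with-zfs source path and the exact tree layout are assumptions; they vary by site):

```shell
# Build Lustre server rpms against an installed ZFS 0.8.x source tree,
# using the redhat spec so ksym dependencies are generated consistently
# with the zfs kmod packages.
cd lustre-release
sh autogen.sh
./configure --with-spec=redhat --with-zfs=/usr/src/zfs-0.8.1
make rpms
```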

 

Comment by James A Simmons [ 24/Jun/19 ]

I did not know about --with-spec. I will try that.

Comment by Nathaniel Clark [ 01/Aug/19 ]

ZFS 0.8.x seems to fail consistently in sanity-quota test_3/test_4a.  These failures are linked to LU-12100

Comment by Gerrit Updater [ 21/Aug/19 ]

Oleg Drokin (green@whamcloud.com) merged in patch https://review.whamcloud.com/34951/
Subject: LU-12336 build: Update ZFS version to 0.8.1
Project: fs/lustre-release
Branch: master
Current Patch Set:
Commit: dba32e417635359b1d68180b77193e1c9ddd1e8f

Comment by Peter Jones [ 21/Aug/19 ]

Landed for 2.13

Comment by Gerrit Updater [ 09/Sep/19 ]

James Nunez (jnunez@whamcloud.com) uploaded a new patch: https://review.whamcloud.com/36137
Subject: LU-12336 build: revert ZFS version to 0.8.1
Project: fs/lustre-release
Branch: master
Current Patch Set: 1
Commit: 07db6f79833906a2c5eaf39816a6cdf80c36a751

Comment by Gerrit Updater [ 11/Sep/19 ]

Oleg Drokin (green@whamcloud.com) merged in patch https://review.whamcloud.com/36137/
Subject: LU-12336 build: Revert Update ZFS version to 0.8.1
Project: fs/lustre-release
Branch: master
Current Patch Set:
Commit: 98e19e1fd2b8e2cb4b626f531694e51fe26f1c2b

Comment by Peter Jones [ 11/Sep/19 ]

Reverted from ZFS 0.8.1 back to 0.7.13 as default due to high occurrence of ZFS DNE failures.

Comment by Cory Spitz [ 11/Sep/19 ]

FYI, the revert to master has a confusing title. It reads, "LU-12336 build: Revert Update ZFS version to 0.8.1", which makes it sound like you are reverting to that version rather than reverting from it.

Comment by Patrick Farrell (Inactive) [ 11/Sep/19 ]

Doesn't confuse me - it's "Revert" plus the name of the previous patch. It's a standard format (generated by Gerrit).

Comment by Olaf Faaland [ 16/Sep/19 ]

Does someone know the LU- numbers for the DNE failures that are believed to be related to zfs-0.8.1?

Comment by James Nunez (Inactive) [ 16/Sep/19 ]

Olaf - Here are a few of the tickets for failures that are thought to be caused by, or that we saw much more frequently after, switching over to ZFS 0.8.1:

LU-11170, LU-12632, LU-12710, LU-4671, LU-12689, LU-12713 and LU-12706.

If I find more, I'll add the ticket numbers to the list.

Comment by Olaf Faaland [ 26/Sep/19 ]

Thank you James. I see Minh working on patching lbuild to build zfs-0.7 for RHEL 8. Does Whamcloud plan to issue a Lustre release for RHEL 8 based on zfs-0.7?

Comment by Andreas Dilger [ 27/Sep/19 ]

The 0.8.2 release was made today:

Supported Kernels

    Compatible with 2.6.32 - 5.3 Linux kernels

Changes

    Disabled resilver_defer feature leads to looping resilvers #9299 #9338
    Fix dsl_scan_ds_clone_swapped logic #9140 #9163
    Scrubbing root pools may deadlock on kernels without elevator_change() #9321
    QAT related bug fixes #9276 #9303
    kmodtool: depmod path #8724 #9310
    Fix /etc/hostid on root pool deadlock #9256 #9285
    BuildRequires libtirpc-devel needed for RHEL 8 #9289
    Fix zpool subcommands error message with some unsupported options #9270
    Fix zfs-dkms .deb package warning in prerm script #9271
    zvol_wait script should ignore partially received zvols #9260
    New service that waits on zvol links to be created #8975
    Always refuse receiving non-resume stream when resume state exists #9252
    Fix Intel QAT / ZFS compatibility on v4.7.1+ kernels #9268 #9269
    etc/init.d/zfs-functions.in: remove arch warning
    zfs_handle used after being closed/freed in change_one callback #9165
    Fix zil replay panic when TX_REMOVE followed by TX_CREATE #7151 #8910 #9123 #9145
    zfs_ioc_snapshot: check user-prop permissions on snapshotted datasets #9179 #9180
    Fix Plymouth passphrase prompt in initramfs script #9202
    Fix deadlock in 'zfs rollback' #9203
    Make slog test setup more robust #9194
    zfs-mount-genrator: dependencies should be space-separated #9174
    Linux 5.3: Fix switch() fall though compiler errors #9170
    Linux 5.3 compat: Makefile subdir-m no longer supported #9169
    Fix out-of-order ZIL txtype lost on hardlinked files #8769 #9061
    Increase default zcmd allocation to 256K #9084
    Improve performance by using dmu_tx_hold_*_by_dnode() #9081
    Fix channel programs on s390x #8992 #9080
    Race between zfs-share and zfs-mount services #9083
    Implement secpolicy_vnode_setid_retain() #9035 #9043
    zed crashes when devid not present #9054 #9060
    Don't directly cast unsigned long to void* #9065
    Fix module_param() type for zfs_read_chunk_size #9051
    Move some tests to cli_user/zpool_status #9057
    Race condition between spa async threads and export #9015 #9044
    hdr_recl calls zthr_wakeup() on destroyed zthr #9047
    Fix wrong comment on zcr_blksz_{min,max} #9052
    Retire unused spl_{mutex,rwlock}_{init_fini} #9029
    Linux 5.3 compat: retire rw_tryupgrade() #9029
    Linux 5.3 compat: rw_semaphore owner #9029
    Fix lockdep recursive locking false positive in dbuf_destroy #8984
    Add missing __GFP_HIGHMEM flag to vmalloc #9031
    Use zfsctl_snapshot_hold() wrapper #9039
    Minor style cleanup #9030
    Fix get_special_prop() build failure #9020
    systemd encryption key support #8750 #8848
    Drop redundant POSIX ACL check in zpl_init_acl() #9009
    Export dnode symbols #9027
    Ensure dsl_destroy_head() decrypts objsets #9021
    Disable unused pathname::pn_path* (unneeded in Linux) #9025
    Fixes: #8934 Large kmem_alloc #8934 #9011
    Fix ZTS killed processes detection #9003
    pkg-utils python sitelib for SLES15 #8969
    Fix race in parallel mount's thread dispatching algorithm #8450 #8833 #8878
    Fix dracut Debian/Ubuntu packaging #8990 #8991
    Remove VERIFY from dsl_dataset_crypt_stats() #8976
    Improve "Unable to automount" error message. #8959
    Check b_freeze_cksum under ZFS_DEBUG_MODIFY conditional #8979
    Fix error text for EINVAL in zfs_receive_one() #8977
    Don't use d_path() for automount mount point for chroot'd process #8903 #8966
    nopwrites on dmu_sync-ed blocks can result in a panic #8957
    Avoid extra taskq_dispatch() calls by DMU #8909
    -Y option for zdb is valid #8926
    Fix error message on promoting encrypted dataset #8905 #8935
    Fix out-of-tree build failures #8921 #8943
    dn_struct_rwlock can not be held in dmu_tx_try_assign() #8929
    Remove arch and relax version dependency #8914
    Add libnvpair to libzfs pkg-config #8919
    Let zfs mount all tolerate in-progress mounts #8881
    zstreamdump: add per-record-type counters and an overhead counter #8432
    Fix comments on zfs_bookmark_phys #8945
    Add SCSI_PASSTHROUGH to zvols to enable UNMAP support #8933
    Prevent pointer to an out-of-scope local variable #8924 #8940
    dedup=verify doesn't clear the blkptr's dedup flag #8936
    Update vdev_ops_t from illumos #8925
    Allow unencrypted children of encrypted datasets #8737 #8870
    Replace whereis with type in zfs-lib.sh #8920 #8938
    Use ZFS_DEV macro instead of literals #8912
    Fix memory leak in check_disk() #8897 #8911
    kmod-zfs-devel rpm should provide kmod-spl-devel #8930
    ZTS: Fix mmp_interval failure #8906
    Minimize aggsum_compare(&arc_size, arc_c) calls. #8901
    Python config cleanup #8895
    lz4_decompress_abd declared but not defined #8894
    panic in removal_remap test on 4K devices #8893
    compress metadata in later sync passes #8892
    Move write aggregation memory copy out of vq_lock #8890
    Restrict filesystem creation if name referred either '.' or '..' #8842 #8564
    ztest: dmu_tx_assign() gets ENOSPC in spa_vdev_remove_thread() #8889
    Fix lockdep warning on insmod #8868 #8884
    fat zap should prefetch when iterating #8862
    Target ARC size can get reduced to arc_c_min #8864
    Fix typo in vdev_raidz_math.c #8875 #8880
    Improve ZTS block_device_wait debugging #8839
    Block_device_wait does not return an error code #8839
    Remove redundant redundant remove #8839
    Fix logic error in setpartition function #8839
    Allow metaslab to be unloaded even when not freed from #8837
    Avoid updating zfs_gitrev.h when rev is unchanged #8860
    l2arc_apply_transforms: Fix typo in comment #8822
    Reduced IOPS when all vdevs are in the zfs_mg_fragmentation_threshold #8859
    Drop objid argument in zfs_znode_alloc() (sync with OpenZFS) #8841
    Remove vn_set_fs_pwd()/vn_set_pwd() (no need to be at / during insmod) #8826
    grammar: it is / plural agreement #8818
    Refactor parent dataset handling in libzfs zfs_rename() #8815
    Update comments to match code #8759
    Update descriptions for vnops #8767
    Drop local definition of MOUNT_BUSY #8765
    kernel timer API rework #8647
Comment by Gerrit Updater [ 27/Sep/19 ]

Nathaniel Clark (nclark@whamcloud.com) uploaded a new patch: https://review.whamcloud.com/36310
Subject: LU-12336 build: Update ZFS version to 0.8.2
Project: fs/lustre-release
Branch: master
Current Patch Set: 1
Commit: 605e397dcf130087b88fc21ac589a48a3b28221b

Comment by Peter Jones [ 04/Oct/19 ]

ofaaland no - we're just looking at ways to keep things moving along

Comment by Peter Jones [ 08/Oct/19 ]

Olaf

Below are the details of the testing we ran over the weekend that I was talking about. Does this give you some pointers as to the conditions under which ZFS 0.8.2 is slower than 0.7.x?

Peter

With the ZFS 0.8.2 support patch https://review.whamcloud.com/36310 applied on the master branch, I triggered 8 runs of each of the following patch review test sessions:
review-zfs
review-dne-zfs-part-1
review-dne-zfs-part-2
review-dne-zfs-part-3
review-dne-zfs-part-4

Test results showed that the following ZFS 0.8.1/0.8.2 specific failures occurred in the 8 runs:

sanity test 415 LU-11170 (occurred 5 times)
sanity test 43 LU-4671 (occurred 3 times)
sanity-hsm test 90 LU-12632 (occurred 3 times)
sanity-pfl test 20b LU-12572 (occurred twice)

Comment by Alex Zhuravlev [ 09/Jan/20 ]

in my case lots of tests run much slower, for example

with ldiskfs:
== sanity test 60a: llog_test run from kernel module and test llog_reader ============================ 07:59:15 (1578020355)
PASS 60a (43s)

with zfs:
PASS 60a (1153s)

Comment by Alex Zhuravlev [ 10/Jan/20 ]

during profiling I observed that some memory-related functions take a long time.
for example, kmem_cache_free(zio_cache, zio) was taking ~16 usec per call, which is incredibly long.
so I tried disabling two expensive options, CONFIG_DEBUG_SLAB and CONFIG_DEBUG_PAGEALLOC, and that helped to some extent.
check the graph attached.
ZFS is especially sensitive to this kind of thing given its volume of allocations (in short, N times more than ldiskfs).

Comment by Alex Zhuravlev [ 10/Jan/20 ]

even with debugging disabled, ZFS is still far behind ldiskfs on a few operations, like the short reads/writes used in llog and other places.

Comment by Andreas Dilger [ 10/Jan/20 ]

Alex, is it possible that ZFS is using a large blocksize for the llog files that is causing excessive read-modify-write for small writes to large blocks? We could consider explicitly setting the blocksize to be small for llog files. I believe that we already set mirror copies = 2 for llog files in ZFS.

Comment by Alex Zhuravlev [ 10/Jan/20 ]

actually it's a small blocksize, at least for llog - in many cases we read/modify the llog in 8K blocks while the default blocksize is 512 bytes, so ZFS has to lookup/create 16 dbufs..
I tried a trivial patch setting the blocksize to 8K for llog objects; now partial sanity/60a takes 52s with ZFS (30s with ldiskfs).
dbuf lookup is still very expensive - an order of magnitude slower compared to ldiskfs.
I guess that's because of the bh LRU that ldiskfs benefits from; it would be interesting to try something similar for ZFS.
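The dbuf arithmetic behind this comment can be checked directly: an 8K llog read-modify-write against the default 512-byte blocksize touches 16 dbufs, versus a single dbuf once the blocksize is raised to 8K:

```shell
io=8192   # llog I/O size discussed above

# dbufs touched per 8K operation at each blocksize
echo "blksz=512:  $((io / 512)) dbufs per op"
echo "blksz=8192: $((io / 8192)) dbuf per op"
```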

Comment by Alex Zhuravlev [ 10/Jan/20 ]

with another trivial patch to dmu_buf_hold_array_by_dnode() to create the zio only on real demand, full sanity/60a takes 81s
(104s initially, 90s with 8K blocksize, 63s with ldiskfs). going to push the patch into the ZFS repo for review..

Comment by Alex Zhuravlev [ 10/Jan/20 ]

tried with kernel debugging enabled and that takes it up to 240s... better than 1153s before, but something to take into consideration.

Comment by Andreas Dilger [ 11/Jan/20 ]

We don't use CONFIG_DEBUG_SLAB on production builds, so that is only an issue on your test system.

Comment by Andreas Dilger [ 11/Jan/20 ]

Please link the ZoL Github PR here when it is available.

Comment by Andreas Dilger [ 11/Jan/20 ]

It looks like we never got the optimized ZFS "hold for append" feature implemented. That could also improve llog performance if it were.

Comment by Alex Zhuravlev [ 13/Jan/20 ]

PR created https://github.com/zfsonlinux/zfs/pull/9836

Comment by Alex Zhuravlev [ 13/Jan/20 ]

Peter, I looked at 5 reports mentioned in LU-11170 (from serial consoles):
https://testing.whamcloud.com/test_sets/98efeea0-7f0b-11e8-8fe6-52540065bddc – v0.7.9-1
https://testing.whamcloud.com/test_sets/98efeea0-7f0b-11e8-8fe6-52540065bddc – same report
https://testing.whamcloud.com/test_sets/8de6a208-8945-11e8-9028-52540065bddc – v0.7.9-1
https://testing.whamcloud.com/test_sets/9bb5e252-8e9c-11e8-b0aa-52540065bddc – v0.7.9-1
https://testing.whamcloud.com/test_sets/029c73ea-8e9e-11e8-87f3-52540065bddc – v0.7.9-1

then LU-12632:
https://testing.whamcloud.com/test_sets/d52265ac-d409-11e9-9fc9-52540065bddc – v0.8.1-1
https://testing.whamcloud.com/test_sets/c12d9bb2-f96f-11e9-b62b-52540065bddc – v0.7.13-1
https://testing.whamcloud.com/test_sets/a70981de-0fb2-11ea-bbc3-52540065bddc – v0.7.13-1

and LU-12572:
https://testing.whamcloud.com/test_sets/2894009c-b56f-11e9-b023-52540065bddc – v0.7.13-1
https://testing.whamcloud.com/test_sets/b04ecaf6-c953-11e9-97d5-52540065bddc – v0.8.1-1
https://testing.whamcloud.com/test_sets/19cb1252-90b8-11e9-abe3-52540065bddc – v0.7.13-1
https://testing.whamcloud.com/test_sets/f5ff99e2-9a1c-11e9-b26a-52540065bddc – v0.7.13-1
https://testing.whamcloud.com/test_sets/5e5172b4-c8e8-11e9-90ad-52540065bddc – v0.8.1-1
https://testing.whamcloud.com/test_sets/d66394cc-abc3-11e9-a0be-52540065bddc – v0.7.13-1

I will trigger additional testing with 0.8.2 to see how it's doing..

Comment by Olaf Faaland [ 13/Jan/20 ]

thanks Alex

Comment by Olaf Faaland [ 13/Jan/20 ]

Alex,

Re: the graph with series labeled zfs/ldiskfs:on/off, what is on or off?  CONFIG_DEBUG_SLAB and CONFIG_DEBUG_PAGEALLOC?

Thanks

Comment by Alex Zhuravlev [ 13/Jan/20 ]

yes - "on" means DEBUG_SLAB and DEBUG_PAGEALLOC are enabled, "off" means they are disabled.

Comment by Alex Zhuravlev [ 14/Jan/20 ]

one interesting observation is that object destroy is way, way slower on ZFS, even with all debugging disabled - 5 times slower than ldiskfs's.
another interesting observation is how expensive the declarations are, in sanity/60a again:

decl create               520650 samples [usec] 6 283 3300193
create                    29 samples [usec] 11 92 909
decl write                4666640 samples [usec] 2 9796 14906209
write                     6205641 samples [usec] 0 2562 1803711

so for a real write taking ~0.3 usec, we declared a new potential llog object each time (each create declaration taking ~6.3 usec), with the write declaration itself taking ~3.2 usec. the overall time of the writes is 10% of the declaration time.. no jokes. and only 29 real creates in the end.
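The per-call averages quoted above follow from the sample lines, reading the columns as count, min, max, and total usec. A quick recomputation of total/count:

```shell
# columns from the profile: <op> <samples> [usec] <min> <max> <sum>
awk 'BEGIN {
    printf "decl create: %.1f usec/call\n", 3300193 / 520650
    printf "decl write:  %.1f usec/call\n", 14906209 / 4666640
    printf "write:       %.1f usec/call\n", 1803711 / 6205641
}'
# decl create: 6.3 usec/call
# decl write:  3.2 usec/call
# write:       0.3 usec/call
```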

Comment by Nathaniel Clark [ 29/Jan/20 ]

ZFS 0.8.3 was released 2020-01-23

Comment by Gerrit Updater [ 30/Jan/20 ]

Nathaniel Clark (nclark@whamcloud.com) uploaded a new patch: https://review.whamcloud.com/37373
Subject: LU-12336 build: Update ZFS version to 0.8.3
Project: fs/lustre-release
Branch: master
Current Patch Set: 1
Commit: 35180ae88fd06ff3cc4bf6c8f1513df20317abeb

Comment by Olaf Faaland [ 18/Feb/20 ]

Should this be closed due to being superseded by  LU-13178 ?

Comment by Peter Jones [ 18/Feb/20 ]

Yes I think so

Generated at Sat Feb 10 02:51:41 UTC 2024 using Jira 9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c.