Supported Kernels
- Compatible with 2.6.32 - 4.2 Linux kernels.
New Functionality
- Support for temporary mount options.
- Support for accessing the .zfs/snapshot directory over NFS.
- Support for estimating send stream size when source is a bookmark.
- Administrative commands are allowed to use reserved space, improving robustness.
- New notify ZEDLETs support email and pushbullet notifications.
- New keyword 'slot' for vdev_id.conf to control what is used as the slot number.
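As a minimal sketch, the new keyword can be used in /etc/zfs/vdev_id.conf as follows; the topology values here are placeholders for an actual SAS setup, and the full list of accepted slot sources is in vdev_id.conf(5):

```
# /etc/zfs/vdev_id.conf -- illustrative sketch
multipath     no
topology      sas_direct
phys_per_port 4

# New: select which element of the SAS identifier supplies the slot
# number (values such as bay, phy, id, and lun are accepted; see
# vdev_id.conf(5) for the complete list).
slot bay
```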
- New zpool export -a option unmounts and exports all imported pools.
- New zpool iostat -y omits the first report with statistics since boot.
- zdb can now open the root dataset.
- zdb can now print the number of ganged blocks.
- New zdb -ddddd option prints details of block pointer objects.
- Improved zdb -b performance.
- New zstreamdump -d prints contents of blocks.
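A hedged sketch of a few of the new options in use; these commands require a live pool and appropriate privileges, and the pool name tank, dataset fs, bookmark #before, and snapshot @now are hypothetical:

```sh
# Unmount and export every imported pool
zpool export -a

# Report only per-interval statistics, skipping the since-boot summary
zpool iostat -y 5

# Estimate the stream size of an incremental send from a bookmark
zfs send -nv -i tank/fs#before tank/fs@now

# Dump the contents of blocks in a send stream
zfs send tank/fs@now | zstreamdump -d
```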
New Feature Flags
- large_blocks - This feature allows the record size on a dataset to be set larger than 128KB. Block sizes from 512 bytes to 16MB are currently supported. The benefits of larger blocks, and thus larger I/O, need to be weighed against the cost of COWing a giant block to modify one byte. Additionally, very large blocks can have an impact on I/O latency, and also potentially on the memory allocator. Therefore, the record size cannot be set larger than zfs_max_recordsize (default 1MB); larger blocks can be created by raising this tuning. Pools with larger blocks can always be imported and used, regardless of this setting.
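For illustration, enabling the feature and using 1MB records might look like the following; the pool and dataset names are hypothetical, and enabling a feature flag is a one-way operation for the pool:

```sh
# Enable the feature on an existing pool (cannot be disabled afterward)
zpool set feature@large_blocks=enabled tank

# Allow and use 1M records on a dataset
zfs set recordsize=1M tank/media

# Record sizes above 1M additionally require raising the module cap,
# i.e. the zfs_max_recordsize module option.
```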
- filesystem_limits - This feature enables filesystem and snapshot limits. These limits can be used to control how many filesystems and/or snapshots can be created at the point in the tree on which the limits are set.
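A short sketch of setting limits on a delegated subtree; the names tank and tank/home and the limit values are hypothetical:

```sh
# Enable the feature on the pool (one-way operation)
zpool set feature@filesystem_limits=enabled tank

# The tree rooted at tank/home may hold at most 100 filesystems
# and 200 snapshots
zfs set filesystem_limit=100 tank/home
zfs set snapshot_limit=200 tank/home
```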
Performance
- Improved zvol performance on all kernels (>50% higher throughput, >20% lower latency)
- Improved zil performance on Linux 2.6.39 and earlier kernels (10x lower latency)
- Improved allocation behavior on mostly full SSD/file pools (5% to 10% improvement on 90% full pools)
- Improved performance when removing large files.
- Caching improvements (ARC):
- Better cached read performance due to reduced lock contention.
- Smarter heuristics for managing the total size of the cache and the distribution of data/metadata.
- Faster release of cached buffers in response to unexpected memory pressure.
Changes in Behavior
- Default reserved space was increased from 1.6% to 3.3% of total pool capacity. This percentage is controlled through the new spa_slop_shift module option; setting it to 6 will restore the previous percentage.
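The reservation is roughly capacity/2^spa_slop_shift, so a shift of 6 yields approximately 1.6%. As a sketch, it can be changed at runtime via sysfs or persistently via a modprobe configuration file (the file path is a common convention, not mandated):

```sh
# Runtime: restore the previous ~1.6% reservation
echo 6 > /sys/module/zfs/parameters/spa_slop_shift

# Persistent across module reloads
echo 'options zfs spa_slop_shift=6' >> /etc/modprobe.d/zfs.conf
```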
- Loading of the ZFS module stack is now handled by systemd or the sysv init scripts. Invoking the zfs/zpool commands will not cause the modules to be automatically loaded. The previous behavior can be restored by setting the ZFS_MODULE_LOADING=yes environment variable but this functionality will be removed in a future release.
- Unified SYSV and Gentoo OpenRC initialization scripts. The previous functionality has been split into zfs-import, zfs-mount, zfs-share, and zfs-zed scripts. This allows for independent control of the services and is consistent with the unit files provided for a systemd based system. Complete details of the functionality provided by the updated scripts can be found in the project documentation.
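As a hedged sketch of enabling the split services (exact unit and script names may vary by distribution; zfs-import-cache in particular is an assumption here):

```sh
# systemd-based systems
systemctl enable zfs-import-cache zfs-mount zfs-share zfs-zed

# Gentoo OpenRC
rc-update add zfs-import boot
rc-update add zfs-mount boot
```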
- Task queues are now dynamic and worker threads will be created and destroyed as needed. This allows the system to automatically tune itself to ensure the optimal number of threads is used for the active workload, which can result in a performance improvement.
- Task queue thread priorities were correctly aligned with the default Linux file system thread priorities. This allows ZFS to compete fairly with other active Linux file systems when the system is under heavy load.
- When compression=on, the default compression algorithm is lz4 as long as the feature is enabled; otherwise the default remains lzjb. Similarly, lz4 is now the preferred method for compressing metadata when available.
- The use of mkdir/rmdir/mv in the .zfs/snapshot directory has been disabled by default both locally and via NFS clients. The zfs_admin_snapshot module option can be used to re-enable this functionality.
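A brief sketch of re-enabling the old behavior via the new module option, either at runtime or persistently (the modprobe.d file path is a common convention, not mandated):

```sh
# Re-enable mkdir/rmdir/mv in .zfs/snapshot at runtime
echo 1 > /sys/module/zfs/parameters/zfs_admin_snapshot

# Or make it persistent across module reloads
echo 'options zfs zfs_admin_snapshot=1' >> /etc/modprobe.d/zfs.conf
```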
- LBA weighting is automatically disabled on files and SSDs ensuring the entire device is used fairly.
- iostat accounting on zvols running on kernels older than Linux 3.19 is no longer supported.
- The known issues preventing swap on zvols for Linux 3.9 and newer kernels have been resolved. However, deadlocks are still possible for older kernels.
Module Options
- Changed zfs_arc_c_min default from 4M to 32M to accommodate large blocks.
- Added metaslab_aliquot to control how many bytes are written to a top-level vdev before moving on to the next one. Increasing this may be helpful when using blocks larger than 1M.
- Added spa_slop_shift, see 'reserved space' comment in the 'changes to behavior' section.
- Added zfs_admin_snapshot, enable/disable the use of mkdir/rmdir/mv in .zfs/snapshot directory.
- Added zfs_arc_lotsfree_percent, throttle I/O when free system memory drops below this percentage.
- Added zfs_arc_num_sublists_per_state, used to allow more fine-grained locking.
- Added zfs_arc_p_min_shift, used to set a floor on arc_p.
- Added zfs_arc_sys_free, the target number of bytes the ARC should leave free.
- Added zfs_dbgmsg_enable, used to enable the 'dbgmsg' kstat.
- Added zfs_dbgmsg_maxsize, sets the maximum size of the dbgmsg buffer.
- Added zfs_max_recordsize, used to control the maximum allowed record size.
- Added zfs_arc_meta_strategy, used to select the preferred ARC reclaim strategy.
- Removed metaslab_min_alloc_size, it was unused internally due to prior changes.
- Removed zfs_arc_memory_throttle_disable, replaced by zfs_arc_lotsfree_percent.
- Removed zvol_threads, zvols no longer require a dedicated task queue.
- See zfs-module-parameters(5) for complete details on available module options.
Bug Fixes