[LU-7771] ZFS MDT running with high fragmentation Created: 10/Feb/16  Updated: 11/Feb/16  Resolved: 11/Feb/16

Status: Resolved
Project: Lustre
Component/s: None
Affects Version/s: Lustre 2.8.0
Fix Version/s: None

Type: Question/Request Priority: Minor
Reporter: Jeff Johnson (Inactive) Assignee: Andreas Dilger
Resolution: Not a Bug Votes: 0
Labels: zfs
Environment:

CentOS 6.6, Lustre 2.7.64, ZFS 0.6.5.3


Rank (Obsolete): 9223372036854775807

 Description   

After six months in production, the LFS MDT (only one MDT, no DNE) is running at a zpool fragmentation level of 86%.

The customer has concerns about the high fragmentation.

Q) Is there a long term risk, other than reduced performance, of running at a high fragmentation level?

Also, the only way I know of to clear the fragmentation is to quiesce the LFS, take the MDT offline, zfs send the MDT pool to a remote pool, destroy and recreate the MDT pool, and then zfs send the remote pool back to the local one.

Q) Is there another way of accomplishing this?

Running this process (MDT -> zfs send -> remote pool, destroy/recreate the MDT pool, remote pool -> zfs send -> new MDT pool) is akin to doing a block-level dd backup of an ldiskfs MDT. I am all for the conservative path (with user data), but I want to ask whether a better path has been developed to reach the same result.
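For clarity, a rough sketch of the procedure I have in mind is below; pool, dataset, and device names are made up, and this assumes the file system is quiesced and the MDT is unmounted first. If I understand the flags correctly, -p carries the dataset properties (lustre:svname and friends) along with the stream, which the MDT needs in order to mount again.

    # snapshot the MDT dataset and copy it, with its properties, to a scratch pool
    zfs snapshot mdtpool/mdt0@migrate
    zfs send -p mdtpool/mdt0@migrate | ssh backuphost zfs receive backuppool/mdt0

    # destroy and recreate the local pool (same vdev layout as before),
    # then stream the copy back and remount the MDT
    zpool destroy mdtpool
    zpool create mdtpool mirror sda sdb
    ssh backuphost zfs send -p backuppool/mdt0@migrate | zfs receive mdtpool/mdt0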

Thanks



 Comments   
Comment by Jeff Johnson (Inactive) [ 10/Feb/16 ]

I've come to learn that this is expected with ZFS MDT operation. The zfs send/recv process is the only way to remedy it, and other than performance degradation (less so with SSDs) it is not a data integrity risk.

Once large_dnode lands and is tested to production-safe levels, this will subside.

You can close this question.

Comment by Andreas Dilger [ 10/Feb/16 ]

The ZFS "fragmentation percentage" is a bit misleading since this doesn't indicate the fragmentation of the file data but rather the fragmentation of free space, see https://utcc.utoronto.ca/~cks/space/blog/solaris/ZFSZpoolFragmentationMeaning for details. A fragmentation score of 86% means the average free blocksize is about 32KB, though I'm not sure if that is mean or median.

IIRC, the largest blocks the MDT will ever allocate are 32KB (or possibly 64KB, I don't recall offhand) for ZAP leaf blocks and interior tree blocks, and dnode blocks are also 32KB, so a higher fragmentation score is expected for the MDT and is not in itself a sign of problems. With SSD storage for the MDT this is even less of a concern, though there is some overhead at the SSD level due to the larger erase block size and write amplification.

Is there noticeable performance degradation being observed, or is the concern only about the reported fragmentation percentage? Doing the send/recv is only going to resolve this issue for a short time and will require an outage (though it could be minimized with some preparation), so I don't think it is necessarily a valuable exercise. That said, I would recommend periodically making and keeping an MDT backup with zfs send for disaster recovery purposes.
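Something along these lines is usually sufficient for that; the dataset and host names below are only placeholders:

    # initial full copy of the MDT dataset to a backup pool
    zfs snapshot mdtpool/mdt0@backup-1
    zfs send -p mdtpool/mdt0@backup-1 | ssh backuphost zfs receive backuppool/mdt0

    # on each subsequent run, send only the changes since the previous snapshot
    zfs snapshot mdtpool/mdt0@backup-2
    zfs send -i @backup-1 mdtpool/mdt0@backup-2 | ssh backuphost zfs receive backuppool/mdt0

The incremental streams are typically small, so this can be scheduled fairly frequently.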

Comment by Jeff Johnson (Inactive) [ 10/Feb/16 ]

I think this is a reaction to seeing the zpool report a large fragmentation percentage, combined with the age-old wisdom that fragmentation is bad. No performance issues have been reported, and on SSDs the performance hit would be much less noticeable than with rotating disks.
