Details
- Type: Bug
- Resolution: Fixed
- Priority: Major
- Affects Version/s: Lustre 2.12.6
- Labels: None
- Environment: Tested on a single-node client/server setup running CentOS 7.5 with ZFS 0.7.13 and Lustre 2.12.6 branch/master. Also seen on CentOS 7.8 with Lustre 2.12.3 and 2.12.6 with ZFS 0.7.13.
- Severity: 3
Description
Changing the ZFS recordsize from 1M (the default) to 32K breaks the 'df' output: the 'Size', 'Used' and 'Avail' fields show wrong values, while 'lfs df' continues to report correct numbers. The breakage is visible almost immediately. Setting the recordsize back to 1M fixes the 'df' output again.
Steps to recreate:
$ df -h
$ cp <file> /mnt/lustre
$ df -h
$ zfs set recordsize=32768 gpool/data
$ df -h                                   /* Almost immediately starts showing wrong results, lfs df is good */
$ zfs set recordsize=1048576 gpool/data
$ df -h                                   /* Results are good again */
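Since 'df' derives its Size/Used/Avail columns from the statfs values of the mount point (roughly Size = f_blocks * f_frsize, Avail = f_bavail * f_frsize), dumping those raw fields before and after the recordsize change shows which values actually jump. The helper below is only an illustrative sketch, not part of the original report; the default /mnt/lustre path and the MiB conversion are assumptions.

/*
 * statvfs_dump.c (hypothetical helper): print the raw statvfs fields
 * that df uses for a given mount point.
 */
#include <stdio.h>
#include <sys/statvfs.h>

int main(int argc, char **argv)
{
        const char *path = argc > 1 ? argv[1] : "/mnt/lustre";
        struct statvfs st;

        if (statvfs(path, &st) != 0) {
                perror("statvfs");
                return 1;
        }

        /* raw fields as returned by the filesystem */
        printf("%s: f_bsize=%lu f_frsize=%lu f_blocks=%llu f_bfree=%llu f_bavail=%llu\n",
               path, st.f_bsize, st.f_frsize,
               (unsigned long long)st.f_blocks,
               (unsigned long long)st.f_bfree,
               (unsigned long long)st.f_bavail);
        /* the products df reports, converted to MiB */
        printf("Size  = %llu MiB\n", (unsigned long long)st.f_blocks * st.f_frsize >> 20);
        printf("Avail = %llu MiB\n", (unsigned long long)st.f_bavail * st.f_frsize >> 20);
        return 0;
}

Compile with e.g. 'gcc -o statvfs_dump statvfs_dump.c' and run it against /mnt/lustre, /mnt/zfsost and /mnt/zfsmdt before and after the 'zfs set recordsize' step.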
Details
# df -h
Filesystem                 Size  Used Avail Use% Mounted on
...
gpool/metadata              77M  3.0M   72M   5% /mnt/zfsmdt
gpool/data                  76M  3.0M   71M   5% /mnt/zfsost
192.168.50.72@tcp:/lustre   76M  3.0M   71M   5% /mnt/lustre
# lfs df -h
UUID                 bytes  Used Available Use% Mounted on
lustre-MDT0000_UUID  76.6M  3.0M     71.6M   5% /mnt/lustre[MDT:0]
lustre-OST0000_UUID  76.0M  3.0M     71.0M   5% /mnt/lustre[OST:0]
filesystem_summary:  76.0M  3.0M     71.0M   5% /mnt/lustre
Verify recordsize
# zfs get recordsize gpool/data
NAME        PROPERTY    VALUE  SOURCE
gpool/data  recordsize  1M     local
# cp configure /mnt/lustre
# ls -ali configure
670300 -rwxr-xr-x 1 root root 1346008 Mar 26 11:49 configure
# df -h
Filesystem                 Size  Used Avail Use% Mounted on
...
gpool/metadata              75M  3.0M   70M   5% /mnt/zfsmdt
gpool/data                  76M  5.0M   69M   7% /mnt/zfsost
192.168.50.72@tcp:/lustre   76M  5.0M   69M   7% /mnt/lustre

# lfs df -h
UUID                 bytes  Used Available Use% Mounted on
lustre-MDT0000_UUID  74.6M  3.0M     69.6M   5% /mnt/lustre[MDT:0]
lustre-OST0000_UUID  76.0M  5.0M     69.0M   7% /mnt/lustre[OST:0]
filesystem_summary:  76.0M  5.0M     69.0M   7% /mnt/lustre
Change the record size
# zfs set recordsize=32768 gpool/data
# df -h
...
gpool/metadata              75M  3.0M   70M   5% /mnt/zfsmdt
gpool/data                  77M  5.1M   70M   7% /mnt/zfsost
192.168.50.72@tcp:/lustre  2.4G  163M  2.2G   7% /mnt/lustre   <~~~ Bumps to 2.4GB
# lfs df -h
UUID                 bytes  Used Available Use% Mounted on
lustre-MDT0000_UUID  74.6M  3.0M     69.6M   5% /mnt/lustre[MDT:0]
lustre-OST0000_UUID  76.8M  5.1M     69.7M   7% /mnt/lustre[OST:0]
filesystem_summary:  76.8M  5.1M     69.7M   7% /mnt/lustre
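Note that the bogus client numbers are almost exactly 32x the correct ones, and 32 is the ratio of the two record sizes: 1048576 / 32768 = 32. Scaling the lfs df values by that factor reproduces the bad df output: 76.8M * 32 ~= 2.4G, 5.1M * 32 ~= 163M and 69.7M * 32 ~= 2.2G. This is only a reading of the numbers above, not a confirmed root cause, but it suggests the client is still applying the old 1M block size to block counts that are now expressed in 32K units (or vice versa).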
Attachments
Issue Links
- is related to LU-15853: /mnt/lustre path is hardcoded in sanity 104c (Resolved)