[LU-8123] MDT zpool capacity being consumed at a faster rate than expected Created: 10/May/16 Updated: 11/May/16 Resolved: 10/May/16
| Status: | Resolved |
| Project: | Lustre |
| Component/s: | None |
| Affects Version/s: | Lustre 2.7.0, Lustre 2.8.0, Lustre 2.9.0 |
| Fix Version/s: | None |
| Type: | Bug | Priority: | Minor |
| Reporter: | Andreas Dilger | Assignee: | Andreas Dilger |
| Resolution: | Duplicate | Votes: | 0 |
| Labels: | None |
| Environment: | ZFS MDT ashift=12 recordsize=4096 |
| Issue Links: | |
| Severity: | 3 |
| Rank (Obsolete): | 9223372036854775807 |
| Description |
While running mdtest to create zero-byte files on a new Lustre file system to benchmark the MDS, I noticed that after creating 600K zero-byte files only a small percentage of Lustre inodes had been used, yet the MDT zpool capacity was 65% used. The ratio of inodes used to capacity used not only seems way off, it appears the pool is on track to run out of space before Lustre thinks it is out of inodes. I know another large site is seeing similar behavior on a production Lustre file system.

In my case the MDT pool is built from five two-disk mirror vdevs, created as follows:

zpool create -o ashift=12 -O recordsize=4096 mdt.pool mirror A1 A2 mirror A3 A4 mirror A5 A6 mirror A7 A8 mirror A9 A10

I have also seen the behavior with the default recordsize. Maybe I am overlooking something, but it looks like capacity consumption is overtaking inode allocation. This could be an artifact of how ZFS reports capacity used, since Lustre hooks in below the ZFS POSIX layer, but from the cockpit it looks like my MDT fills up while Lustre still thinks inodes are available.
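For reference, a minimal sketch of the checks that surface this discrepancy (the client mount point and pool name below are assumptions, not the exact ones from this system):

# Lustre's view: inode usage on the MDT, as reported to clients
lfs df -i /mnt/lustre

# ZFS's view: space consumed in the MDT pool, run on the MDS itself
zpool list mdt.pool
zfs list -o name,used,avail,refer mdt.pool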
| Comments |
| Comment by Alex Zhuravlev [ 10/May/16 ] |
Any numbers?
| Comment by Alex Zhuravlev [ 10/May/16 ] |
Can you check with zdb how many bytes are spent on each dnode?
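A minimal sketch of such a check (the dataset name is an assumption):

# Dump the object table for the MDT dataset; the dsize column shows the
# bytes actually allocated to each object (dnode), lsize the logical size.
zdb -dd mdt.pool/mdt0

# Show full detail for a single object, including its indirect blocks:
zdb -ddddd mdt.pool/mdt0 <object-id>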
| Comment by Andreas Dilger [ 10/May/16 ] |
Duplicate of