[LU-9747] posix.sh needs to check for zfs on servers rather than clients Created: 07/Jul/17 Updated: 18/May/20 Resolved: 18/May/20 |
|
| Status: | Resolved |
| Project: | Lustre |
| Component/s: | None |
| Affects Version/s: | Lustre 2.10.0, Lustre 2.12.0, Lustre 2.13.0 |
| Fix Version/s: | None |
| Type: | Bug | Priority: | Minor |
| Reporter: | James Casper | Assignee: | James Nunez (Inactive) |
| Resolution: | Won't Fix | Votes: | 0 |
| Labels: | ZFS | ||
| Environment: |
full group zfs configs (el7+el7+zfs & el7+el7+zfs+dne) |
||
| Severity: | 3 |
| Rank (Obsolete): | 9223372036854775807 |
| Description |
|
Here's a recent example of posix being skipped on a zfs config: https://testing.hpdd.intel.com/test_sessions/f38b5926-8503-4f08-9e4b-9f7cd561db9d

It looks like the script is checking for zfs on the client:

    18 if [[ $(facet_fstype $SINGLEMDS) = zfs ]]; then
    19     BASELINE_FS=zfs
    20     ! which $ZFS $ZPOOL >/dev/null 2>&1 &&
    21         skip_env "need $ZFS and $ZPOOL commands" && exit 0

Line 20 should return false if run on the servers, but not on the clients:

From MDS/OST:

From clients:
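One way to address this (a sketch, not a committed fix) would be to run the command check on the server facet instead of the local client node. In the Lustre test framework that would mean wrapping the `which` call in `do_facet $SINGLEMDS "..."`. Since that requires a live test cluster, the standalone sketch below simulates only the decision logic locally with `command -v`; the `check_cmds` helper name is hypothetical, and the commands checked here are placeholders for `$ZFS` and `$ZPOOL`.

```shell
#!/bin/sh
# Hypothetical helper: return 0 only if every named command is available
# on this node. In posix.sh the same check would be executed on the MDS
# via do_facet $SINGLEMDS rather than on the client.
check_cmds() {
    for c in "$@"; do
        command -v "$c" >/dev/null 2>&1 || return 1
    done
}

# Placeholder commands stand in for $ZFS and $ZPOOL so the sketch runs
# anywhere; the skip message mirrors the one in posix.sh.
if check_cmds sh true; then
    echo "commands present: running baseline"
else
    echo "SKIP: need zfs and zpool commands"
fi
```

The key difference from the current script is *where* the check executes: `! which $ZFS $ZPOOL` on the client skips the test even when the servers have ZFS installed.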
|
| Comments |
| Comment by Andreas Dilger [ 07/Jul/17 ] |
|
The problem is that the POSIX test suite itself runs on the client node, and the intent is that Lustre+ZFS (or Lustre+ldiskfs) should work at least as well as a local ZFS (or ldiskfs) filesystem w.r.t. POSIX compliance. I don't know how ldiskfs/ext4 compares with ZFS in terms of POSIX compliance. If they are relatively close, we might consider removing the "what fstype is the server" check and always comparing with the local ldiskfs/ext4 results. If they are not similar, we could consider installing the ZFS RPMs onto the client node when running the posix test or full test session. |
| Comment by Gerrit Updater [ 06/Aug/19 ] |
|
James Nunez (jnunez@whamcloud.com) uploaded a new patch: https://review.whamcloud.com/35706 |
| Comment by James Nunez (Inactive) [ 18/May/20 ] |
|
We will not fix this issue because the POSIX test suite has been replaced with pjdfstest. |