[LU-3264] recovery-*-scale tests failed with FSTYPE=zfs and FAILURE_MODE=HARD Created: 02/May/13 Updated: 15/Aug/13 Resolved: 23/Jul/13 |
|
| Status: | Closed |
| Project: | Lustre |
| Component/s: | None |
| Affects Version/s: | Lustre 2.4.0 |
| Fix Version/s: | Lustre 2.4.1, Lustre 2.5.0 |
| Type: | Bug | Priority: | Critical |
| Reporter: | Jian Yu | Assignee: | Jian Yu |
| Resolution: | Fixed | Votes: | 0 |
| Labels: | zfs |
| Environment: | FSTYPE=zfs |
| Severity: | 3 |
| Rank (Obsolete): | 8083 |
| Description |
|
While running recovery-*-scale tests with FSTYPE=zfs and FAILURE_MODE=HARD under failover configuration, the tests failed as follows:

  Failing mds1 on wtm-9vm3
  + pm -h powerman --off wtm-9vm3
  Command completed successfully
  waiting ! ping -w 3 -c 1 wtm-9vm3, 4 secs left ...
  waiting ! ping -w 3 -c 1 wtm-9vm3, 3 secs left ...
  waiting ! ping -w 3 -c 1 wtm-9vm3, 2 secs left ...
  waiting ! ping -w 3 -c 1 wtm-9vm3, 1 secs left ...
  waiting for wtm-9vm3 to fail attempts=3
  + pm -h powerman --off wtm-9vm3
  Command completed successfully
  reboot facets: mds1
  + pm -h powerman --on wtm-9vm3
  Command completed successfully
  Failover mds1 to wtm-9vm7
  04:28:49 (1367234929) waiting for wtm-9vm7 network 900 secs ...
  04:28:49 (1367234929) network interface is UP
  CMD: wtm-9vm7 hostname
  mount facets: mds1
  Starting mds1: lustre-mdt1/mdt1 /mnt/mds1
  CMD: wtm-9vm7 mkdir -p /mnt/mds1; mount -t lustre lustre-mdt1/mdt1 /mnt/mds1
  wtm-9vm7: mount.lustre: lustre-mdt1/mdt1 has not been formatted with mkfs.lustre or the backend filesystem type is not supported by this tool
  Start of lustre-mdt1/mdt1 on mds1 failed 19

Maloo report: https://maloo.whamcloud.com/test_sets/ac7cbc10-b0e3-11e2-b2c4-52540035b04c |
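In short, the mount fails because the lustre-mdt1 pool was last imported on the failed node wtm-9vm3 and was never exported, so the failover node wtm-9vm7 cannot see the lustre-mdt1/mdt1 dataset at all, and mount.lustre reports it as unformatted. A minimal diagnostic sketch on the failover node (hostnames and pool name taken from the log above):

  # On wtm-9vm7: the pool is not imported, so its datasets do not exist locally
  zpool list                   # lustre-mdt1 is absent from the output
  zfs list lustre-mdt1/mdt1    # fails: dataset does not exist

  # Scan for importable pools on devices visible to this node;
  # lustre-mdt1 should be listed as last accessed by another system
  zpool import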
| Comments |
| Comment by Li Wei (Inactive) [ 02/May/13 ] |
|
(CC'ed Brian. How does LLNL implement failovers with ZFS?) The pool lustre-mdt1 needs to be imported via "zpool import -f ..." on wtm-9vm7. The tricky part, however, is how to prevent wtm-9vm3 from playing with the pool after rebooting. It might be doable by never caching Lustre pool configurations ("-o cachefile=none" at creation time), so that none of them will be automatically imported anywhere. It would be great if two nodes and a shared device were available for experiments. |
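A minimal sketch of the approach described above (the device path is illustrative; pool and mount point names follow the logs in this ticket):

  # At creation time: do not record the pool in any cache file, so no node
  # auto-imports it at boot
  zpool create -o cachefile=none lustre-mdt1 /dev/disk/by-id/SHARED-DEVICE

  # During failover, on the backup node: force the import even though the
  # pool was never cleanly exported by the failed node
  zpool import -f lustre-mdt1
  mount -t lustre lustre-mdt1/mdt1 /mnt/mds1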
| Comment by Jian Yu [ 02/May/13 ] |
Let me set up the test environment and do some experiments. |
| Comment by Andreas Dilger [ 02/May/13 ] |
|
This kind of problem is why we would want to have MMP (multiple mount protection) for ZFS, but that hasn't been developed yet. However, for the sake of this bug, we just need to fix the ZFS import problem so that our automated testing scripts work. |
| Comment by Brian Behlendorf [ 02/May/13 ] |
|
Until we have MMP for ZFS, we've resolved this issue by delegating full authority for starting/stopping servers to heartbeat. See the ZPOOL_IMPORT_ARGS='-f' line in the lustre/scripts/Lustre.ha_v2 resource script, which always forces the pool import. We also boot all of our nodes diskless, so they never have a persistent cache file and thus never get automatically imported. I admit it's a stopgap until we have real MMP, but in practice it's been working thus far. |
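As a rough illustration of that delegation (this is not the actual Lustre.ha_v2 contents; only the ZPOOL_IMPORT_ARGS='-f' setting is from the script, and the start function is a hypothetical sketch), an HA start action might look like:

  # Sketch of an HA resource start action, assuming heartbeat is the sole
  # authority for starting this server
  ZPOOL_IMPORT_ARGS='-f'

  start_mds() {
      # Always force the import: the pool was never cleanly exported by the
      # failed node, and diskless servers carry no zpool.cache to conflict
      zpool import $ZPOOL_IMPORT_ARGS lustre-mdt1 || return 1
      mkdir -p /mnt/mds1
      mount -t lustre lustre-mdt1/mdt1 /mnt/mds1
  }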
| Comment by Li Wei (Inactive) [ 03/May/13 ] |
|
Thanks, Brian. There's little info like this on the web. (Perhaps it would be worthwhile to add an FAQ entry on zfsonlinux.org sometime.) |
| Comment by Jian Yu [ 03/May/13 ] |
|
Patch for master branch is in http://review.whamcloud.com/6258. |
| Comment by Jian Yu [ 14/May/13 ] |
|
The patch was landed on the master branch. |
| Comment by Alex Zhuravlev [ 16/May/13 ] |
|
Can you confirm that the patch works on a local setup? |
| Comment by Alex Zhuravlev [ 16/May/13 ] |
|
With REFORMAT=y FSTYPE=zfs sh llmount.sh -v I'm getting:

  Format mds1: lustre-mdt1/mdt1
  Permanent disk data:
  mkfs_cmd = zpool create -f -O canmount=off lustre-mdt1 /tmp/lustre-mdt1 |
| Comment by Jian Yu [ 16/May/13 ] |
For "zpool import" command, if the -d option is not specified, the command will only search for devices in "/dev". However, for ZFS storage pool which has file-based virtual device, we need explicitly specify the search directory otherwise the import command will not find the device. The patch for master branch is in http://review.whamcloud.com/6358. |
| Comment by Jian Yu [ 21/May/13 ] |
The patch was landed on both the Lustre b2_4 and master branches. |
| Comment by Nathaniel Clark [ 23/May/13 ] |
|
Reworked patch with fixes merged in: |
| Comment by Nathaniel Clark [ 23/Jul/13 ] |
| Comment by Jian Yu [ 12/Aug/13 ] |
|
The patch needs to be back-ported to the Lustre b2_4 branch. |
| Comment by Jian Yu [ 15/Aug/13 ] |
|
The patch was landed on the Lustre b2_4 branch. |