[LU-13934] ldev.conf should consider multiple failover nodes Created: 27/Aug/20 Updated: 30/Sep/20 |
|
| Status: | Open |
| Project: | Lustre |
| Component/s: | None |
| Affects Version/s: | None |
| Fix Version/s: | None |
| Type: | Bug | Priority: | Major |
| Reporter: | Joe Grund | Assignee: | WC Triage |
| Resolution: | Unresolved | Votes: | 0 |
| Labels: | lsnapshot |
| Issue Links: |
|
| Severity: | 3 |
| Rank (Obsolete): | 9223372036854775807 |
| Description |
|
ldev.conf appears to allow only a primary node and a single failover node to be specified for each target (http://doc.lustre.org/lustre_manual.xhtml#dbdoclet.zfssnapshotConfig).
However, it's not necessarily the case that resources are limited to a primary/failover pair; there may be more than two possible placements.
For example, Exascaler HA is configured such that an OST could be mounted on any of four available VMs.
It seems that we should expand ldev.conf so that multiple failover nodes can be listed after the primary, as sketched below. |
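For reference, a current-style ldev.conf entry looks roughly like the first line below (host names, label, and pool name are made up for illustration, following the format in the manual section linked above). The second entry is only a sketch of one possible syntax for the proposed extension; it is not supported today:

```
# local   foreign/-        label           [md|zfs:]device-path
oss1      oss2             lustre-OST0000  zfs:ostpool/ost0

# Hypothetical extension (illustrative only): allow a comma-separated list
# of failover hosts after the primary, e.g. for a four-VM HA group.
oss1      oss2,oss3,oss4   lustre-OST0000  zfs:ostpool/ost0
```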
| Comments |
| Comment by Joe Grund [ 27/Aug/20 ] |
|
The concern here is what happens to snapshot commands if one of the Lustre targets cannot be found because it is mounted on a node other than the ones listed as primary/failover. This appears possible for every target besides the MGS, which is currently constrained to two VMs. |
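As a rough illustration of the concern (the host names, pool name, and filesystem name below are hypothetical, not taken from any real configuration): with only two hosts recorded in ldev.conf, an administrator would have to probe the rest of the HA group by hand to find where a target is actually imported before snapshot operations can be expected to find it.

```
# Hypothetical sketch: check each candidate VM in the HA group to see which
# one currently has the OST's zpool imported, since ldev.conf only records a
# primary and one failover host.
for host in vm1 vm2 vm3 vm4; do
    if ssh "$host" "zpool list -H -o name ostpool" >/dev/null 2>&1; then
        echo "ostpool (lustre-OST0000) is currently imported on $host"
    fi
done

# Snapshot creation itself (filesystem and snapshot names are examples):
# lctl snapshot_create -F testfs -n before-upgrade
```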