Details

    • Type: New Feature
    • Resolution: Fixed
    • Priority: Minor
    • Fix Version/s: Lustre 2.10.0
    • Component/s: None
    • Labels: None

    Description

      A new resource agent (RA) script for Pacemaker that manages ZFS pools and Lustre targets.

      This RA can import/export ZFS pools and mount/umount Lustre targets.

      pcs resource create <Resource Name> ocf:heartbeat:LustreZFS \
      pool="<ZFS Pool Name>" \
      volume="<ZFS Volume Name>" \
      mountpoint="<Mount Point>" \
      OCF_CHECK_LEVEL=10
      

      where:

      • pool is the name of the ZFS pool, created in advance
      • volume is the name of the volume created on the ZFS pool when the Lustre target was formatted (mkfs.lustre)
      • mountpoint is the mount point, created in advance on both Lustre servers
      • OCF_CHECK_LEVEL is optional and enables an extra monitor check on the status of the pool
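
      For example, a complete invocation might look like the following (the resource, pool, volume, and mount point names here are illustrative only, not taken from this issue):

      # Illustrative example: names are hypothetical. Run on one node
      # of the Pacemaker cluster; pcs propagates the configuration.
      pcs resource create lustre-OST0000 ocf:heartbeat:LustreZFS \
      pool="ostpool" \
      volume="ost0" \
      mountpoint="/lustre/ost0" \
      OCF_CHECK_LEVEL=10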

      The script should be installed in /usr/lib/ocf/resource.d/heartbeat/ on both Lustre servers with permissions 755.
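
      A minimal install sketch, assuming the script file is named LustreZFS to match the ocf:heartbeat:LustreZFS resource type (the attached RPM installs it automatically):

      # on both Lustre servers, as root
      install -m 755 LustreZFS /usr/lib/ocf/resource.d/heartbeat/LustreZFS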

      The script provides protection against double imports of the pools. To activate this functionality, it is important to configure hostid protection in ZFS using the genhostid command.
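
      The hostid setup is a one-time step per server; a sketch, assuming a RHEL/CentOS system where genhostid is available:

      # run once on each server; writes a random host identifier
      # to /etc/hostid so ZFS can tell the two servers apart
      genhostid
      hostid    # print the identifier to verify it is set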

      Default values:

      • no defaults

      Default timeout:

      • start timeout 300s
      • stop timeout 300s
      • monitor timeout 300s interval 20s
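
      These defaults can also be stated explicitly on the pcs command line using op options (a sketch; parameter placeholders as in the command above):

      pcs resource create <Resource Name> ocf:heartbeat:LustreZFS \
      pool="<ZFS Pool Name>" \
      volume="<ZFS Volume Name>" \
      mountpoint="<Mount Point>" \
      op start timeout=300s \
      op stop timeout=300s \
      op monitor timeout=300s interval=20s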

      Compatible and tested:

      • pacemaker 1.1.13
      • corosync 2.3.4
      • pcs 0.9.143
      • RHEL/CentOS 7.2

      Activity
            [LU-8455] Pacemaker script for Lustre and ZFS
            utopiabound Nathaniel Clark made changes -
            Fix Version/s New: Lustre 2.10.0 [ 12204 ]
            Resolution New: Fixed [ 1 ]
            Status Original: Open [ 1 ] New: Resolved [ 5 ]
            utopiabound Nathaniel Clark made changes -
            Link New: This issue is related to SPLN-2 [ SPLN-2 ]
            gabriele.paciucci Gabriele Paciucci (Inactive) made changes -
            Attachment New: Lustre-ZFS-RA-0.99.5-1.noarch.rpm [ 25505 ]
            jlanclos Jason Lanclos made changes -
            Link New: This issue is related to PEC-8 [ PEC-8 ]
            keith Keith Mannthey (Inactive) made changes -
            Link New: This issue is related to PEC-7 [ PEC-7 ]
            adilger Andreas Dilger made changes -
            Link New: This issue duplicates LU-8458 [ LU-8458 ]
            malkolm Malcolm Cowe (Inactive) made changes -
            Link New: This issue is related to PP-85 [ PP-85 ]
            gabriele.paciucci Gabriele Paciucci (Inactive) made changes -
            Attachment Original: LustreZFS-099 [ 22422 ]
            gabriele.paciucci Gabriele Paciucci (Inactive) made changes -
            Assignee Original: WC Triage [ wc-triage ] New: Gabriele Paciucci [ gabriele.paciucci ]
            gabriele.paciucci Gabriele Paciucci (Inactive) made changes -
            Description edited (duplicate of the description above; the edit added the "Compatible and tested" section)

            People

              Assignee: Gabriele Paciucci (Inactive)
              Reporter: Gabriele Paciucci (Inactive)
              Votes: 0
              Watchers: 15
