
Integrate ZFS zpool resilver status with OFD OS_STATE_DEGRADED flag

Details

    • Type: Improvement
    • Resolution: Fixed
    • Priority: Minor
    • Fix Version: Lustre 2.11.0
    • Lustre 2.11.0
    • 11749

    Description

      The OFD statfs() handler can optionally add an OS_STATE_DEGRADED flag to the statfs reply, which the MDS uses to help decide which OSTs to allocate new file objects from. Unless all other OSTs are also degraded, offline, or full, the DEGRADED OSTs will be skipped for newly created files.

      This avoids applications waiting on slow writes caused by the rebuild, long after those writes have completed on other healthy OSTs. It also prevents new writes from interfering with the OST rebuild process, so it is a double win.

      This was previously implemented as a /proc tunable suitable for mdadm or a hardware-RAID utility to set from userspace, but since ZFS RAID is in the kernel it should be possible to query this status directly from the kernel when the MDS statfs() request arrives.
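      For reference, the existing userspace tunable works along these lines (the OST target name below is only a placeholder):

        # Mark an OST as degraded while a RAID rebuild is in progress
        lctl set_param obdfilter.testfs-OST0000.degraded=1

        # Clear the flag once the rebuild has completed
        lctl set_param obdfilter.testfs-OST0000.degraded=0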


          Activity

            [LU-4277] Integrate ZFS zpool resilver status with OFD OS_STATE_DEGRADED flag
            pjones Peter Jones added a comment -

            Landed for 2.11

             


            gerrit Gerrit Updater added a comment -

            Oleg Drokin (oleg.drokin@intel.com) merged in patch https://review.whamcloud.com/30907/
            Subject: LU-4277 scripts: ofd status integrated with zpool status
            Project: fs/lustre-release
            Branch: master
            Current Patch Set:
            Commit: 8ef3ddd2f2798d04b495c8223673a38452ac5c99

            gerrit Gerrit Updater added a comment -

            Nathaniel Clark (nathaniel.l.clark@intel.com) uploaded a new patch: https://review.whamcloud.com/30907
            Subject: LU-4277 scripts: ofd status integrated with zpool status
            Project: fs/lustre-release
            Branch: master
            Current Patch Set: 1
            Commit: 2f58dcb71a6246ce29a3a548f9ccafd006b97d44

            pjones Peter Jones added a comment -

            Nathaniel

            Can you please see what is required to move this forward?

            Thanks

            Peter


            jsalians_intel John Salinas (Inactive) added a comment -

            The entries in /etc/zfs/zed.d are symlinks to /usr/libexec/zfs/zed.d/, for example:

              ls -lart /etc/zfs/zed.d/*notify*.sh
              lrwxrwxrwx. 1 root root 45 Apr 27 15:59 /etc/zfs/zed.d/scrub_finish-notify.sh -> /usr/libexec/zfs/zed.d/scrub_finish-notify.sh
              lrwxrwxrwx. 1 root root 48 Apr 27 15:59 /etc/zfs/zed.d/resilver_finish-notify.sh -> /usr/libexec/zfs/zed.d/resilver_finish-notify.sh
              lrwxrwxrwx. 1 root root 37 Apr 27 15:59 /etc/zfs/zed.d/data-notify.sh -> /usr/libexec/zfs/zed.d/data-notify.sh
              lrwxrwxrwx. 1 root root 44 Apr 27 15:59 /etc/zfs/zed.d/statechange-notify.sh -> /usr/libexec/zfs/zed.d/statechange-notify.sh

            adilger Andreas Dilger added a comment -

            Don,
            is there a directory where zedlet scripts can be installed (e.g. /etc/zfs/zed.d/) so that they are run automatically once installed?

            Then it should be straightforward to submit a patch (per the above process) to add your script as lustre/scripts/statechange-lustre.sh and install it into that directory via lustre/scripts/Makefile.am when ZFS is enabled:

             if ZFS_ENABLED
             sbin_SCRIPTS += zfsobj2fid
            
            +zeddir = $(sysconfdir)/zfs/zed.d
            +zed_SCRIPTS = statechange-lustre.sh
             endif
             :
             :
            +EXTRA_DIST += statechange-lustre.sh
            
            

            and then package it in lustre.spec.in and lustre-dkms.spec.in as part of the osd-zfs-mount RPM (Lustre userspace tools for ZFS-backed targets):

             %files osd-zfs-mount
             %defattr(-,root,root)
             %{_libdir}/@PACKAGE@/mount_osd_zfs.so
            +%{_sysconfdir}/zfs/zed.d/statechange-lustre.sh
            
            

            Now, when ZFS server support is installed, your zedlet will also be installed on all the servers, and should start to handle the degraded/offline events automatically.
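            As a rough illustration only (this is not the attached script), such a zedlet could look something like the sketch below. ZEVENT_POOL is one of the variables zed normally exports to zedlets, but the osd-zfs.*.mntdev lookup used here to map the pool onto Lustre targets is an assumption:

             #!/bin/bash
             # statechange-lustre.sh -- illustrative sketch only, not the attached zedlet
             #
             # zed runs this for vdev state-change events and exports the event
             # details in the environment (ZEVENT_POOL is assumed here).

             [ -n "$ZEVENT_POOL" ] || exit 0

             # Overall pool health, e.g. ONLINE or DEGRADED
             health=$(zpool list -H -o health "$ZEVENT_POOL") || exit 0

             case "$health" in
             DEGRADED) degraded=1 ;;
             ONLINE)   degraded=0 ;;
             *)        exit 0 ;;          # ignore other states in this sketch
             esac

             # Update every Lustre target backed by a dataset in this pool.
             # The osd-zfs.*.mntdev parameter (reporting "pool/dataset") is an
             # assumption in this sketch.
             for param in $(lctl list_param 'osd-zfs.*.mntdev' 2>/dev/null); do
                     dev=$(lctl get_param -n "$param")
                     [ "${dev%%/*}" = "$ZEVENT_POOL" ] || continue
                     target=${param#osd-zfs.}
                     target=${target%.mntdev}
                     lctl set_param obdfilter."$target".degraded="$degraded"
             done

             exit 0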


            dbrady Don Brady (Inactive) added a comment -

            Thanks Andreas for the feedback. I inadvertently attached my local copy used for testing, but I can provide the generic one. I'll also address the issues and repost an update. Is there an example license block I can refer to?


            adilger Andreas Dilger added a comment -

            It would be good to get the script in the form of a patch against the fs/lustre-release repo so that it can be reviewed properly. Some general comments first, however:

            • the license needs to be dual CDDL/GPL or possibly dual GPL/BSD so that there isn't any problem to package it with other GPL Lustre code (though technically GPL only affects distribution of binaries and not sources, it is better to avoid any uncertainty).
            • there are some pathnames that hard-code Don's home directory, which are not suitable for use in a script that is deployed in production. The location of the zfs, zpool, lctl and grep commands should be found in $PATH.
            • the ZFS pool GUID is hard-coded, or is that some sort of event GUID for the state change?
            • the echos are fine for a demo, but not suitable for production use if they are too noisy. Maybe a non-issue if this script is only run rarely.
            • should it actually be an error if the state change is not DEGRADED or ONLINE? I don't know what the impact of an error return from a zedlet is, so maybe a non-issue?
            • I don't know if it makes sense for set_degraded_state() to check the current state before setting the new state. This could introduce races rather than reduce them, and there isn't much more overhead in always (re)setting the state than in checking it first and then setting it (see the sketch after this comment).

            My thought is that the script would be installed as part of the ost-mount-zfs RPM in some directory (something like /etc/zfs/zed/zedlets.d/, akin to /etc/modprobe.d or /etc/logrotate.d) that is a place to drop zedlets that will be run (at least the next time zed is started) and that does not really need any kind of editing by the user to specify the Lustre targets. Then it would get events from the kernel when a zpool becomes degraded, update lctl obdfilter.$target.degraded for targets in that zpool, and do nothing for non-Lustre pools (e.g. the root pool for the OS).
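            A minimal sketch of the last review point above, reusing the set_degraded_state() name from the attached script and the obdfilter.<target>.degraded parameter discussed in this ticket (the wildcard target is just a placeholder):

             # Sketch only: set the flag unconditionally instead of read-then-write,
             # which avoids any check/set race
             set_degraded_state() {
                     local state=$1                      # 0 or 1
                     lctl set_param obdfilter.*.degraded=$state
             }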


            dbrady Don Brady (Inactive) added a comment -

            Attached a zedlet, statechange-lustre.sh, that will propagate degraded state changes from ZFS to Lustre.


            dbrady Don Brady (Inactive) added a comment -

            The degraded state is part of the vdev. Getting this information strictly through the spa interface would yield a ton of data (i.e. the entire config) and require nvlist parsing. A new API, something like spa_get_vdev_state(), to pull out the state of the root vdev would be required to get at this state in a simple manner.

            We can easily set the state as it changes using a zedlet. We now have a state-change event for all healthy<-->degraded vdev transitions that could be used to initiate a check of the pool state and post that state via lctl as you suggest above.
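            For example, the userspace side of that check is a single command (the pool name below is only a placeholder):

             # Print the overall pool health: ONLINE, DEGRADED, FAULTED, ...
             zpool list -H -o health ostpool0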


            People

              Assignee: utopiabound Nathaniel Clark
              Reporter: adilger Andreas Dilger
              Votes: 0
              Watchers: 12
