
Integrate ZFS zpool resilver status with OFD OS_STATE_DEGRADED flag

Details

    • Type: Improvement
    • Resolution: Fixed
    • Priority: Minor
    • Lustre 2.11.0
    • Lustre 2.11.0
    • 11749

    Description

      The OFD statfs() handler can optionally add an OS_STATE_DEGRADED flag to the statfs reply, which the MDS uses to help decide which OSTs to allocate new file objects from. Unless all other OSTs are also degraded, offline, or full, the DEGRADED OSTs will be skipped for newly created files.

      This avoids applications waiting on slow writes caused by the rebuild long after the same writes would have completed on healthy OSTs. It also keeps new writes from interfering with the OST rebuild process, so it is a double win.

      This was previously implemented as a /proc tunable suitable for mdadm or a hardware-RAID utility to set from userspace, but since ZFS RAID is in the kernel it should be possible to query this status directly from the kernel when the statfs() request from the MDS arrives.
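
      For illustration, a minimal sketch of that existing userspace path, assuming a hypothetical OST target name of testfs-OST0000 (a RAID monitoring hook would set the flag when a rebuild starts and clear it when it completes):

        # Mark the OST degraded while a rebuild is running, then clear it.
        # "testfs-OST0000" is a placeholder target name for illustration.
        lctl set_param obdfilter.testfs-OST0000.degraded=1   # rebuild started
        lctl set_param obdfilter.testfs-OST0000.degraded=0   # rebuild finished

        # The current value can be read back with:
        lctl get_param obdfilter.testfs-OST0000.degraded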

    Attachments

    Issue Links

    Activity

            [LU-4277] Integrate ZFS zpool resilver status with OFD OS_STATE_DEGRADED flag

            The /etc/zfs/zed.d entries are symlinks into /usr/libexec/zfs/zed.d/, for example:

              ls -lart /etc/zfs/zed.d/*notify.sh
              lrwxrwxrwx. 1 root root 45 Apr 27 15:59 /etc/zfs/zed.d/scrub_finish-notify.sh -> /usr/libexec/zfs/zed.d/scrub_finish-notify.sh
              lrwxrwxrwx. 1 root root 48 Apr 27 15:59 /etc/zfs/zed.d/resilver_finish-notify.sh -> /usr/libexec/zfs/zed.d/resilver_finish-notify.sh
              lrwxrwxrwx. 1 root root 37 Apr 27 15:59 /etc/zfs/zed.d/data-notify.sh -> /usr/libexec/zfs/zed.d/data-notify.sh
              lrwxrwxrwx. 1 root root 44 Apr 27 15:59 /etc/zfs/zed.d/statechange-notify.sh -> /usr/libexec/zfs/zed.d/statechange-notify.sh
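
            For reference, a hedged sketch of enabling an additional zedlet by hand follows the same pattern (the statechange-lustre.sh name comes from the proposal below; the zfs-zed service name may vary by distribution):

              # Enable a zedlet by symlinking it into ZED's configuration directory,
              # then restart the ZED daemon so it picks up the new script.
              ln -s /usr/libexec/zfs/zed.d/statechange-lustre.sh /etc/zfs/zed.d/
              systemctl restart zfs-zed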
            jsalians_intel John Salinas (Inactive) added a comment

            Don,
            is there a directory (e.g. /etc/zfs/zed.d/) where zedlet scripts can be installed so that they are run automatically once installed?

            Then it should be straightforward to submit a patch (per the above process) to add your script as lustre/scripts/statechange-lustre.sh and install it into that directory via lustre/scripts/Makefile.am when ZFS is enabled:

             if ZFS_ENABLED
             sbin_SCRIPTS += zfsobj2fid
            
            +zeddir = $(sysconfdir)/zfs/zed.d
            +zed_SCRIPTS = statechange-lustre.sh
             endif
             :
             :
            +EXTRA_DIST += statechange-lustre.sh
            
            

            and then package it in lustre.spec.in and lustre-dkms.spec.in as part of the osd-zfs-mount RPM (Lustre userspace tools for ZFS-backed targets):

             %files osd-zfs-mount
             %defattr(-,root,root)
             %{_libdir}/@PACKAGE@/mount_osd_zfs.so
            +%{_sysconfdir}/zfs/zed.d/statechange-lustre.sh
            
            

            Now, when ZFS server support is installed, your zedlet will also be installed on all the servers, and should start to handle the degraded/offline events automatically.
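
            For illustration only, a minimal sketch of what such a zedlet might look like, assuming ZED's ZEVENT_POOL and ZEVENT_VDEV_STATE_STR event variables and using the osd-zfs.*.mntdev parameter to map the affected pool to its Lustre targets (the mapping and parameter usage are assumptions to be checked against the actual script under review):

             #!/bin/sh
             # Sketch of a statechange zedlet: ZED runs it with the event details
             # in the environment. Only the pool-to-target mapping and flag update
             # are shown here.

             [ -n "${ZEVENT_POOL}" ] || exit 0

             case "${ZEVENT_VDEV_STATE_STR}" in
             DEGRADED|FAULTED) degraded=1 ;;
             ONLINE)           degraded=0 ;;
             *)                exit 0 ;;
             esac

             # Walk the Lustre targets, keep those backed by a dataset in the
             # affected pool, and update their "degraded" flag accordingly.
             for param in $(lctl list_param "osd-zfs.*.mntdev" 2>/dev/null); do
                     dataset=$(lctl get_param -n "${param}")
                     [ "${dataset%%/*}" = "${ZEVENT_POOL}" ] || continue
                     target=$(echo "${param}" | cut -d. -f2)
                     lctl set_param "obdfilter.${target}.degraded=${degraded}"
             done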

            adilger Andreas Dilger added a comment

            Thanks Andreas for the feedback. I inadvertently attached my local copy used for testing, but I can provide the generic one. I'll also address the issues and repost an update. Is there an example license block I can refer to?

            dbrady Don Brady (Inactive) added a comment

            It would be good to get the script in the form of a patch against the fs/lustre-release repo so that it can be reviewed properly. Some general comments first, however:

            • the license needs to be dual CDDL/GPL or possibly dual GPL/BSD so that there isn't any problem packaging it with other GPL Lustre code (though technically GPL only affects distribution of binaries and not sources, it is better to avoid any uncertainty).
            • there are some pathnames that hard-code Don's home directory, which are not suitable for use in a script that is deployed in production. The location of the zfs, zpool, lctl and grep commands should be found in $PATH.
            • the ZFS pool GUID is hard-coded, or is that some sort of event GUID for the state change?
            • the echos are fine for a demo, but not suitable for production use if they are too noisy. Maybe a non-issue if this script is only run rarely.
            • should it actually be an error if the state change is not DEGRADED or ONLINE? I don't know what the impact of an error return from a zedlet is, so maybe this is a non-issue?
            • I don't know if it makes sense for set_degraded_state() to check the current state before setting the new state. A check-then-set pattern could introduce races rather than reduce them, and always (re)setting the state costs little more than checking it first and then setting it (see the sketch at the end of this comment).

            My thought is that the script would be installed as part of the ost-mount-zfs RPM in some directory (something like /etc/zfs/zed/zedlets.d/, akin to /etc/modprobe.d or /etc/logrotate.d) that is a place to drop zedlets that will be run (at least the next time zed is started) and that do not really need any editing from the user to specify the Lustre targets. Then it would get events from the kernel when a zpool becomes degraded, update obdfilter.$target.degraded via lctl for targets in that zpool, and do nothing for non-Lustre pools (e.g. the root pool for the OS).
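
            As a sketch of the unconditional-set suggestion above (the function name matches the one under discussion; the target argument and the example call are illustrative only):

              # Always (re)set the flag rather than read-then-check-then-write;
              # this avoids the race for negligible extra cost.
              set_degraded_state() {
                      # $1 = Lustre target (e.g. testfs-OST0000), $2 = 1 (degraded) or 0 (healthy)
                      lctl set_param "obdfilter.$1.degraded=$2"
              }

              set_degraded_state testfs-OST0000 1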

            adilger Andreas Dilger added a comment

            Attached a zedlet, statechange-lustre.sh, that will propagate degraded state changes from zfs to Lustre.

            dbrady Don Brady (Inactive) added a comment

            The degraded state is part of the vdev. Getting this info strictly through the spa interface would yield a ton of data (i.e. the entire config) and require nvlist parsing. A new API, something like spa_get_vdev_state(), to pull out the state of the root vdev would be required to get at this state in a simple manner.

            We can easily set the state as it changes using a zedlet. We now have a state change event for all healthy <--> degraded vdev state transitions that could be used to initiate a check of the pool state and post that state via lctl, as you suggest above.
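
            A minimal sketch of that userspace check, assuming ZED's ZEVENT_POOL variable and a ${target} placeholder for the Lustre target backed by the pool:

              # On a statechange event, read the overall pool health and
              # translate it to the existing Lustre "degraded" flag.
              health=$(zpool list -H -o health "${ZEVENT_POOL}")
              case "${health}" in
              DEGRADED|FAULTED) lctl set_param "obdfilter.${target}.degraded=1" ;;
              ONLINE)           lctl set_param "obdfilter.${target}.degraded=0" ;;
              esac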

            dbrady Don Brady (Inactive) added a comment

            Don or Brian,
            is there some straightforward way for osd-zfs at the DMU level to determine if ZFS is currently degraded and/or doing a drive resilver operation, or would this need some new API to access this info from the vdev? If we had some mechanism to determine this easily, I think it would be straightforward for someone to add this functionality to Lustre. The alternative would be for ZED to set lctl set_param ofd.<ost>.degraded=1 from userspace when it detects a degraded device and/or when the device is undergoing resilvering, and to clear it afterward.
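
            For the userspace alternative, a hedged sketch of how ZED (or a periodic job) could detect a degraded or resilvering pool by parsing zpool status output; ${pool} and ${ost} are placeholders, and output parsing like this is fragile and shown only for illustration:

              if zpool status "${pool}" | grep -Eq 'state: DEGRADED|resilver in progress'; then
                      lctl set_param "ofd.${ost}.degraded=1"
              else
                      lctl set_param "ofd.${ost}.degraded=0"
              fi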

            adilger Andreas Dilger added a comment

            http://review.whamcloud.com/8378 is a basic patch to fix handling in the LOD code for DEGRADED and READONLY flags. It doesn't yet fix the osd-zfs code in udmu_objset_statfs() that should be setting the flags.

            adilger Andreas Dilger added a comment

            People

              utopiabound Nathaniel Clark
              adilger Andreas Dilger
              Votes: 0
              Watchers: 12
