Details

    • Type: Bug
    • Resolution: Fixed
    • Priority: Major
    • Affects Version/s: Lustre 2.4.0
    • Fix Version/s: Lustre 2.4.0
    • Component/s: Builds from the build server
    • 3
    • 7569

    Description

      While working on LU-3109, it became apparent that the ZFS version we package is old.

      Christopher Morrone added a comment - 05/Apr/13 1:04 AM
      
      rc10 is old, you should definitely upgrade to the latest rc of 0.6.0. 0.6.1 is a little TOO new, because packaging has changed there, and lustre will need a little tweaking to find the new paths and things automatically. You can build by hand by giving spl and zfs paths, but the latest 0.6.0 rc will just be easier.
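      For reference, a hand build against specific spl and zfs trees looks roughly like the sketch below; the paths are placeholders, and --with-spl/--with-zfs are the configure options Lustre uses to locate out-of-tree spl/zfs builds.

      # Sketch only: build Lustre by hand against locally built spl/zfs trees
      # (the paths below are placeholders for wherever those trees live).
      cd lustre
      sh autogen.sh
      ./configure --with-spl=/usr/src/spl-0.6.0 --with-zfs=/usr/src/zfs-0.6.0
      make rpm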
      
      

      It seems we need to stay current with the ZFS version.

      Attachments

        Activity

          [LU-3117] Build: ZFS version is old

          Nathaniel, think about it this way: You are modifying an rpm spec file, which means that you are in an rpm environment. However, your patch is explicitly to subvert the rpm way of building packages.

          I understand why you are trying to do this, and I can certainly commiserate. The fundamental problem is that the Intel Lustre build farm lacks any system to recognize and honor rpm dependencies.

          But while I understand, I don't feel like we should condone that bad behavior of the build farm by adjusting zfs to make it easier to behave badly.

          By Lustre 2.5, I very much hope to see the build farm improved to handle rpms in a more reasonable fashion. In the meantime, perhaps you can put the workaround at the source of the problem (i.e. the build farm).

          Why do you guys want to build spl/zfs at all? Why not simply install the spl/zfs packages? DKMS versions of the spl/zfs packages are available.

          morrone Christopher Morrone (Inactive) added a comment
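          A minimal sketch of the DKMS route Chris mentions, assuming the zfsonlinux EPEL repository described later in this ticket is already enabled and that the packages are named spl-dkms and zfs-dkms:

          # With the zfsonlinux repository enabled, the DKMS packages rebuild the
          # kernel modules automatically for each installed kernel.
          sudo yum install spl-dkms zfs-dkms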

          Set up a pull request for ZFS (https://github.com/zfsonlinux/zfs/pull/1413) to add the ability to override the spl directory passed to configure during rpm creation.

          utopiabound Nathaniel Clark added a comment
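          For illustration only, the override would be passed at rpm build time along these lines; the macro name here is a placeholder, and the real name is whatever the pull request above defines:

          # "spl_path" is a placeholder macro name; see the pull request for the
          # actual override added to the zfs spec files.
          rpmbuild --rebuild --define "spl_path /path/to/spl-0.6.1" zfs-0.6.1-1.src.rpm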

          The structure of the zfs spec files makes it hard to override the location of the spl directory during the build in zfs 0.6.1.

          utopiabound Nathaniel Clark added a comment

          I've worked with Chris to update the zfs & spl versions in the build system to 0.6.1. This will probably break builds until this patch is landed with a fix to the lbuild script for the new versions of zfs and spl.

          The change in the new version that breaks lbuild is the removal of all the autotools-generated files. The fix in the patch is to run autogen.sh before calling configure.

          utopiabound Nathaniel Clark added a comment
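          The lbuild change reduces to roughly the ordering below (a sketch with placeholder paths, shown for zfs; the same applies to spl):

          # 0.6.1 no longer ships the pre-generated autotools files, so they must
          # be regenerated before configure can run.
          cd zfs-0.6.1
          sh autogen.sh
          ./configure --with-spl=/path/to/spl-0.6.1
          make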
          behlendorf Brian Behlendorf added a comment - http://review.whamcloud.com/#change,5960
          behlendorf Brian Behlendorf added a comment - edited

          Two more thoughts related to this:

          • It would be ideal if you could add the ZFS EPEL repository to your builders (sketched after this list). This way, with minimal effort, you'll always be testing the latest tagged release. Just run 'yum update' and if there are new packages they will be built and installed.
          • Longer term if DKMS style packaging is added to Lustre it could be easily hosted in a yum repository like ZFS.
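          In practice that amounts to something like the following on each builder; the repository rpm URL is the one given in the comment below:

          # One-time setup: add the ZFS EPEL repository to the builder.
          sudo yum localinstall --nogpgcheck http://archive.zfsonlinux.org/epel/zfs-release-1-2.el6.noarch.rpm
          # After that, newly tagged spl/zfs releases are picked up automatically.
          sudo yum update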
          behlendorf Brian Behlendorf added a comment - edited

          The packaging changes which affect the Lustre build system are the only concern. My intention is to push a patch today which addresses these issues. If we can get this merged before 2.4 is tagged, then people will be able to run ZFS 0.6.1 with Lustre 2.4.0 easily. The intention is that RHEL/CentOS users can add ZFS+Lustre support as follows:

          # Install the ZFS EPEL repository and install the ZFS DKMS packages.
          sudo yum localinstall --nogpgcheck http://archive.zfsonlinux.org/epel/zfs-release-1-2.el6.noarch.rpm
          sudo yum install zfs zfs-devel
          
          # Build and install Lustre as usual; it will automatically detect the ZFS packages and enable OSD support.
          cd lustre
          sh autogen.sh
          ./configure
          make rpm
          

          Chris, Brian,
          is your concern with 0.6.1 only about the packaging changes, or are there other reasons not to go to 0.6.1? Anyone using Lustre+ZFS needs to do their own ZFS build and install, so they will likely be using 0.6.1 anyway. It makes sense for Nathaniel to resolve any issues with Lustre using 0.6.1 now, since the Lustre 2.4.0 release should use the 0.6.1 ZFS code.

          Are there any fixes after the 0.6.1 tag that we should include?

          adilger Andreas Dilger added a comment

          Nathaniel,
          Could you please have a look to see if you can upgrade this?
          Thank you!

          jlevi Jodi Levi (Inactive) added a comment

          People

            Assignee: utopiabound Nathaniel Clark
            Reporter: keith Keith Mannthey (Inactive)
            Votes: 0
            Watchers: 9

            Dates

              Created:
              Updated:
              Resolved: