[LU-1930] Fix Lustre build of backend file systems Created: 13/Sep/12  Updated: 24/Jun/13  Resolved: 26/Oct/12

Status: Resolved
Project: Lustre
Component/s: None
Affects Version/s: Lustre 2.4.0
Fix Version/s: Lustre 2.4.0

Type: Bug Priority: Minor
Reporter: James A Simmons Assignee: Minh Diep
Resolution: Fixed Votes: 0
Labels: build
Environment:

Any Lustre server using ZFS or ldiskfs


Issue Links:
Duplicate
Related
Severity: 3
Rank (Obsolete): 6325

 Description   

Several problems exist when building support for back-end file systems in Lustre. For example, mount_utils_zfs.c does not build against the zfs source tree; instead one has to install the zfs development rpm on the build box to get it to compile. While testing ZFS stand-alone, I also discovered that disabling the ldiskfs build fails in many ways, mostly because the ldiskfs build is controlled by SERVER instead of LDISKFS_ENABLE. The work here will try to address these issues.



 Comments   
Comment by James A Simmons [ 13/Sep/12 ]

Patch at http://review.whamcloud.com/3980

Comment by Peter Jones [ 13/Sep/12 ]

Thanks James!

Minh could you please comment on this one?

Comment by James A Simmons [ 14/Sep/12 ]

The patch does build on my side, so I looked at the Hudson logs to see why it fails there. The reason is the way Hudson builds zfs. For the lustre zfs user-land utilities, the build has hard-coded paths to where the zfs user-land headers would be; those are installed by zfs-devel*.rpm. As for the lustre zfs kernel-side code, you are using

--with-zfs=blah/blah/zfs-0.6.0-rc10/2.6.32-279.5.1.el6_lustre.gd5cec75.i686

which are only the kernel side headers.

With this patch I use --with-zfs=blah/blah/zfs-0.6.0-rc10 and the include directory "zfs-0.6.0-rc10/include", which has both the kernel and the user-space headers, so everything builds out of this one directory. The same goes for spl. If you use --with-zfs=blah/blah/zfs-0.6.0-rc10 on your side it should still build, because it will use the headers in the top-level include directory. I verified this before I started this patch.
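To make the difference concrete, here is a sketch of the two --with-zfs arguments being discussed. The paths and version strings are placeholders modelled on the example above, not the actual Hudson configuration:

```shell
# Hypothetical build location; adjust to your tree.
ZFS_SRC=$HOME/build/zfs-0.6.0-rc10

# Old scheme: point --with-zfs at the per-kernel build directory,
# which carries only the kernel-side headers, so the user-land
# utilities cannot find their headers.
OLD_WITH_ZFS="$ZFS_SRC/2.6.32-279.5.1.el6_lustre.gd5cec75.i686"

# New scheme (this patch): point --with-zfs at the top of the source
# tree; its include/ directory has both kernel and user-space headers.
NEW_WITH_ZFS="$ZFS_SRC"
ZFS_INCLUDE="$NEW_WITH_ZFS/include"

echo "./configure --with-zfs=$NEW_WITH_ZFS   # headers from $ZFS_INCLUDE"
```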

Comment by James A Simmons [ 14/Sep/12 ]

More digging led me to lbuild, where I can see how you are building things. You are using the development rpms from zfs/spl to do the build, but the approach I took was to use the source from the tarball directly. Which path should be taken?

Using the source tarball has the disadvantage that make rpm in the lustre tree will want to pull in all the zpl/zfs binaries as well. In that case you have to do something similar to what we do for ldiskfs, that is, remove the extra modules in the lustre.spec file. The advantage is one tree with all the headers we need to build both the zfs utilities and the kernel modules.
The advantage of the other approach, using the development header rpms, is that we avoid the issue of extra files wanting to go into the lustre rpms. The disadvantage is that we have to specify the path to the user-land headers from zfs-devel*.rpm, which would require adding another configure option.

Comment by Andreas Dilger [ 14/Sep/12 ]

Brian, any comments on how we might be able to allow Lustre to build with the ZFS/SPL code out of a build tree instead of from installed RPMs?

Comment by Brian Murrell (Inactive) [ 14/Sep/12 ]

Without having an unpacked {zfs,spl}-devel package and the source trees for each in front of me it's difficult to say definitively. I'm getting the impression, however, that there is less than a 1:1 mapping between file locations in the source tree and the way they are laid out in the tree that {zfs,spl}-devel lays down.

If that is the case, some more magic in autoconf is probably going to be needed to hint to make where things are. That said, I would not particularly advocate for more switches; rather, the autoconf logic behind the "--with-spl" and "--with-zfs" switches should figure out whether it is being pointed at an unpacked {zfs,spl}-devel tree or a source tree (or at nothing, with the implication that the installed {zfs,spl}-devel locations are to be used, i.e. /usr/include, etc.) and make arrangements for the Makefiles to find the bits and pieces they need accordingly.

There are a number of examples of this already. --with-o2ib is one example of a switch that can take many different kinds of arguments, with autoconf figuring out from context what to do.

Comment by Andreas Dilger [ 14/Sep/12 ]

I guess the other comment here is that extra -I paths are not harmful. For e2fsprogs, the --with-lustre= path adds both -I $LUSTRE/include/lustre and -I $LUSTRE/lustre/utils (defaulting to LUSTRE=/usr), so that it can work with both the install path and the build-from-tree path. This is also possible when linking to libraries, by adding multiple -L paths.
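As a quick illustration of the point about extra paths being harmless (LUSTRE and the exact directories here are taken from the e2fsprogs example, used as placeholders): the toolchain simply skips -I and -L directories that do not contain the requested file, so listing both the install-path and build-from-tree locations is safe:

```shell
# Default to the install prefix, as e2fsprogs does with LUSTRE=/usr.
LUSTRE=${LUSTRE:-/usr}

# Both candidate locations are listed; whichever one actually holds
# the headers/libraries wins, and the other is harmlessly ignored.
CPPFLAGS="-I $LUSTRE/include/lustre -I $LUSTRE/lustre/utils"
LDFLAGS="-L $LUSTRE/lib -L $LUSTRE/lustre/utils"

echo "CPPFLAGS=$CPPFLAGS"
```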

Comment by James A Simmons [ 28/Sep/12 ]

I moved your question to the JIRA ticket, Brian, and quoted what you stated. My response follows.

[ I hope you don't mind, James, I reformatted your comment to quote my words so that I could better understand who's saying what – Brian ]

Ideally there is one switch, --with-zfs. It has the following values:

1. yes

  • build using system installed zfs-devel package

This is the case of figuring out options 3 and 4 automatically.

2. no

  • not interested in any zfs support at all

No problem here.

3. /path/to/unpacked-devel

  • inspect /path/to/... to determine if it's an unpacked
    zfs-devel package and set variables accordingly to direct
    the compiler and linker to finding the products it needs
    under the /path/to/unpacked-devel path

In the case of using development rpms we have three rpms that are needed.

zfs-modules-devel-*.rpm - kernel module headers only for ZFS. Having --with-zfs point
at this allows the lustre zfs kernel modules to build successfully. There are no
user-land library headers in this rpm, so the lustre zfs utilities will fail to
build. The current solution is to hard-code the paths from the zfs-devel-*.rpm;
in that case the user has to be root on the build box to install that rpm to be
able to build lustre.

The kernel headers are in $ZFS and $SPL. No user-land library headers are
available.

zfs-devel-*.rpm - contains only user-land library headers. Pointing --with-zfs
at this will result in the zfs kernel modules failing to build. In fact it will
not even configure, since zfs_config.h (a kernel-side header) is missing from
this rpm. The lustre zfs utilities will build, though.

These headers are normally installed in

/usr/include/libspl
/usr/include/libzfs

4. /path/to/zfs/source/tree

  • inspect /path/to/... to determine if it's a zfs source
    tree and set variables accordingly to direct the
    compiler and linker to finding the products it needs
    under the /path/to/zfs/source/tree path

The source tree is the easy solution. In the zfs source tree you have both the
kernel and the user-land headers present in the same tree. Those directories are
-I $ZFS_DIR/lib/libspl/include -I $ZFS_DIR/include. Also note how different the
directory tree structure is from the rpms'.
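The two layouts side by side, as shell variables. The directory names are taken from the comments above; $ZFS_DIR is a placeholder for wherever the source checkout lives:

```shell
ZFS_DIR=$HOME/src/zfs-0.6.0-rc10    # hypothetical source checkout

# Source tree: everything lives under one top-level directory.
SRC_TREE_INCLUDES="-I $ZFS_DIR/lib/libspl/include -I $ZFS_DIR/include"

# Split -devel rpms: user-land headers land in fixed system
# locations, while the kernel headers come from the separate
# zfs-modules-devel tree.
DEVEL_RPM_USER_INCLUDES="-I /usr/include/libspl -I /usr/include/libzfs"

echo "$SRC_TREE_INCLUDES"
echo "$DEVEL_RPM_USER_INCLUDES"
```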

Comment by Brian Murrell (Inactive) [ 02/Oct/12 ]

So for case #3, yes, there are two include paths, one for user space and one for kernel space, so you need two variables inside of configure/Makefiles, one for each path. But once your algorithm for detecting which kind of argument was given to --with-{zfs,spl} detects that it was #3 (for bonus points, make it work when pointed at either of the -devel RPM paths), it can detect the other path simply by using "known locations" (i.e. by knowing where each of the -devel RPMs puts its files and testing whether they are present where we think they should be) and set the internal configure/make variables accordingly.
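A minimal sketch of that classification, assuming the marker files discussed in this ticket (zfs_config.h for a configured source tree, usr/include/libzfs for an unpacked zfs-devel rpm). This is not the actual Lustre autoconf logic, just the shape of it:

```shell
# Classify the argument given to --with-zfs. Prints one of:
# none, system, unpacked-devel, source-tree, unknown.
classify_with_zfs() {
    arg=$1
    case $arg in
        no)      echo none ;;    # case #2: no zfs support wanted
        yes|'')  echo system ;;  # case #1: installed zfs-devel, /usr/include
        *)
            if [ -e "$arg/zfs_config.h" ] || [ -e "$arg/include/zfs_config.h" ]; then
                echo source-tree          # case #4: zfs source tree
            elif [ -d "$arg/usr/include/libzfs" ]; then
                echo unpacked-devel       # case #3: rpm2cpio'd zfs-devel
            else
                echo unknown
            fi ;;
    esac
}
```

Once the kind of argument is known, configure can fill in the second include path from the known locations described above, instead of growing another switch.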

Comment by James A Simmons [ 04/Oct/12 ]

Okay, we can go this way. If it is case #3 I can test /usr/include for the zfs headers and, if they are not present there, test whether they are in the rpmbuild root directory.

Comment by Brian Murrell (Inactive) [ 04/Oct/12 ]

No, if #3 is used, then the user is specifically asking not to use /usr/include. /usr/include is where the headers are installed for, and should be used by, case #1. In case #3 the user has given a path to a usr/include that lives under some other directory, created by unpacking (e.g. with rpm2cpio) the zfs-devel RPM, i.e.:

cd ~/my/unpacked/zfs-devel && rpm2cpio ~/zfs-devel-*.rpm | cpio -id && configure --with-zfs=~/my/unpacked/zfs-devel

Comment by James A Simmons [ 26/Oct/12 ]

Patch has been landed. This ticket can be closed.

Comment by Peter Jones [ 26/Oct/12 ]

Landed for 2.4

Comment by James A Simmons [ 24/Jun/13 ]

Can this ticket be reopened? It appears this fix was removed, and now you can't build with ZFS anymore unless you have the zfs development headers installed on your system. Why was the patch removed?

Comment by Peter Jones [ 24/Jun/13 ]

Probably just a mistake. I suggest that you open a new ticket to track this rather than reopen this one because it spans release boundaries...

Generated at Sat Feb 10 01:20:56 UTC 2024 using Jira 9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c.