[LU-7601] lustre-initialization-1: mkfs.lustre: command not found Created: 10/Jun/14  Updated: 21/Jan/16  Resolved: 11/Jan/16

Status: Resolved
Project: Lustre
Component/s: None
Affects Version/s: None
Fix Version/s: Lustre 2.8.0

Type: Bug Priority: Blocker
Reporter: Andreas Dilger Assignee: Dmitry Eremin (Inactive)
Resolution: Fixed Votes: 0
Labels: triage

Issue Links:
Duplicate
is duplicated by LU-6054 lustre-initialization-1: mkfs.lustre:... Resolved
is duplicated by LU-6113 lustre-initialization-1: short descri... Resolved
is duplicated by LU-7526 lustre-initialization-1 lustre-initia... Resolved
Related
is related to LU-7601 lustre-initialization-1: mkfs.lustre:... Resolved
is related to LU-7679 auto-strengthen lustre[-client]-dkms ... Resolved
Severity: 3
Rank (Obsolete): 14250

 Description   

This issue was created by maloo for Andreas Dilger <andreas.dilger@intel.com>

This issue relates to the following test suite run: http://maloo.whamcloud.com/test_sets/3a80b878-eff3-11e3-a29d-52540035b04c.

It seems that the lustre-tests RPM is not being installed on the test nodes for some reason?

The sub-test lustre-initialization_1 failed with the following error in the autotest logs:

bash: line 0: cd: /usr/lib64/lustre/tests: No such file or directory
sh: mkfs.lustre: command not found

Info required for matching: lustre-initialization-1 lustre-initialization_1



 Comments   
Comment by Joshua Kugler (Inactive) [ 10/Jun/14 ]

What RPM should contain mkfs.lustre? What Distro(s) was this?

Comment by Andreas Dilger [ 15/Jul/14 ]

The "lustre" RPM holds all of the userspace tools. "lustre-modules" or "lustre-client-modules" hold the kernel modules.

The distro information is available on the maloo link provided.

I hit this again at: https://testing.hpdd.intel.com/test_sets/3f7c6628-0bce-11e4-a04d-5254006e85c2

Comment by Joshua Kugler (Inactive) [ 15/Jul/14 ]

That is really weird. I see this in the problem report:

bash: line 0: cd: /usr/lib64/lustre/tests: No such file or directory
sh: mkfs.lustre: command not found

Does this mean the lustre RPM is not being installed? Does nothing in lustre-modules or lustre-client-modules require the lustre RPM? Is that by design, or a bug?

Is /usr/lib64/lustre/tests also owned by the lustre RPM?

Comment by Andreas Dilger [ 15/Jul/14 ]

The lustre/tests directory is from the lustre-tests RPM. The lustre-tests RPM Requires: lustre and lustre-modules as one would expect, but nothing depends on it.

Comment by Joshua Kugler (Inactive) [ 16/Jul/14 ]

I just did an el6 install with
loadjenkinsbuild -n onyx-29vm1 -p test -d el6 -a x86_64 -j lustre-reviews -b 25190 -t server -i inkernel -v -o -r
and after install, I had the following packages:

# rpm -qa|grep lustre
lustre-osd-ldiskfs-2.6.50-2.6.32_431.20.3.el6_lustre.x86_64_g76c39dd.x86_64
kernel-2.6.32-431.20.3.el6_lustre.x86_64
kernel-headers-2.6.32-431.20.3.el6_lustre.x86_64
lustre-modules-2.6.50-2.6.32_431.20.3.el6_lustre.x86_64_g76c39dd.x86_64
lustre-tests-2.6.50-2.6.32_431.20.3.el6_lustre.x86_64_g76c39dd.x86_64
kernel-firmware-2.6.32-431.20.3.el6_lustre.x86_64
lustre-iokit-2.6.50-2.6.32_431.20.3.el6_lustre.x86_64_g76c39dd.x86_64
lustre-2.6.50-2.6.32_431.20.3.el6_lustre.x86_64_g76c39dd.x86_64

Both of the failed sessions above are EL6 x86_64, which is what I just did. I really don't know why those installs were any different; the autotest command line is, effectively, the same as what I did.

If you hit it again, I'll try to pull out more details.

Comment by Charlie Olmstead [ 05/Sep/14 ]

Please re-open if this happens again.

Comment by nasf (Inactive) [ 28/Oct/14 ]

Hit issue again:
https://testing.hpdd.intel.com/test_sets/cfa0bb12-5e87-11e4-a2a3-5254006e85c2

Comment by Andreas Dilger [ 09/Dec/14 ]

Hit also on 2014-12-05 https://testing.hpdd.intel.com/test_sets/a78c8af4-7cd0-11e4-bc8c-5254006e85c2

Comment by Andreas Dilger [ 11/Dec/14 ]

Hit again on 2014-12-10: https://testing.hpdd.intel.com/test_sets/b25b45ec-80ea-11e4-b2c2-5254006e85c2

Comment by Dmitry Eremin (Inactive) [ 19/Dec/14 ]

One more error: https://testing.hpdd.intel.com/test_logs/80723cb0-8744-11e4-b712-5254006e85c2

Comment by Charlie Olmstead [ 12/Jan/15 ]

Dmitry, this was caused by yum timing out on the install.

yum install -y kernel-2.6.32-431.29.2.el6_lustre.g50de7d6.x86_64 lustre-ldiskfs lustre-modules lustre lustre-tests
+ yum install -y kernel-2.6.32-431.29.2.el6_lustre.g50de7d6.x86_64 lustre-ldiskfs lustre-modules lustre lustre-tests
Loaded plugins: fastestmirror, security
http://10.1.0.10/cobbler/localmirror/hudson/lustre-reviews/29174/build_type-server_distro-el6_arch-x86_64_ib_stack-inkernel/repodata/repomd.xml: [Errno 12] Timeout on http://10.1.0.10/cobbler/localmirror/hudson/lustre-reviews/29174/build_type-server_distro-el6_arch-x86_64_ib_stack-inkernel/repodata/repomd.xml: (28, 'Operation too slow. Less than 1 bytes/sec transfered the last 30 seconds')
Trying other mirror.
Error: Cannot retrieve repository metadata (repomd.xml) for repository: lustre-build. Please verify its path and try again
/sbin/grubby --set-default=/boot/kernel-2.6.32-431.29.2.el6_lustre.g50de7d6.x86_64
+ /sbin/grubby --set-default=/boot/kernel-2.6.32-431.29.2.el6_lustre.g50de7d6.x86_64

Comment by Charlie Olmstead [ 12/Jan/15 ]

This is not an autotest issue; the causes range from Jenkins 502s (TEI-3005) to environmental/networking problems.

Comment by Bruno Faccini (Inactive) [ 07/Feb/15 ]

Seems I got a new occurrence with https://testing.hpdd.intel.com/test_sets/605dbd40-ad65-11e4-adac-5254006e85c2

Comment by Andreas Dilger [ 11/Feb/15 ]

Charlie, I think that all of these problems have a related root cause, namely that the current node-provisioning/lustre-initialization code is not robust in the face of network errors. Since we can't prevent the occurrence of network errors, something needs to be done at the autotest level to retry RPM installation and/or restart that test session on a different node so that these intermittent network errors are not visible to Maloo or Gerrit. That may slow down the start of one test session somewhat, but will save time in the long run by avoiding the need to resubmit all of the test sessions because of a single lustre-initialization failure.
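Such a retry could look like the following sketch; the helper name, attempt count, and delay are illustrative, not autotest's actual code:

```shell
# Hedged sketch of the retry idea above: re-run a package-install command a
# few times before giving up, so a transient network error does not fail the
# whole test session. "retry_install" and its limits are made-up names, not
# autotest's real helpers.
retry_install() {
    local max=5 i=0 delay=${RETRY_DELAY:-10}
    while (( i < max )); do
        "$@" && return 0            # success: stop retrying
        i=$((i + 1))
        echo "attempt $i/$max failed: $*" >&2
        sleep "$delay"
    done
    return 1                        # all attempts failed
}

# Example: retry_install yum install -y lustre lustre-modules lustre-tests
```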

Comment by Jian Yu [ 26/Mar/15 ]

While testing patch http://review.whamcloud.com/14133, I hit LU-6054, which is marked as a duplicate of this one, so I report the failure here:
https://testing.hpdd.intel.com/test_sets/4253d7a4-d325-11e4-94cf-5254006e85c2

Comment by Jian Yu [ 27/Mar/15 ]

More failures:
https://testing.hpdd.intel.com/test_sets/62f21a70-d438-11e4-a21e-5254006e85c2
https://testing.hpdd.intel.com/test_sets/e9b28b5c-eb1d-11e4-aa1a-5254006e85c2

Comment by Jinshan Xiong (Inactive) [ 22/Jul/15 ]

https://testing.hpdd.intel.com/test_sets/2b027154-302d-11e5-9b7c-5254006e85c2

Comment by James Nunez (Inactive) [ 07/Dec/15 ]

Another instance at https://testing.hpdd.intel.com/test_sessions/c12c0a68-92f8-11e5-913f-5254006e85c2

Comment by Charlie Olmstead [ 07/Dec/15 ]

Investigating latest instance

Comment by Charlie Olmstead [ 08/Dec/15 ]
[12/7/15, 4:25:36 PM] Charlie Olmstead: https://testing.hpdd.intel.com/test_sessions/c12c0a68-92f8-11e5-913f-5254006e85c2
failed with 
14:12:30:shadow-7vm12: mkfs.lustre FATAL: unhandled/unloaded fs type 5 'zfs'
[12/7/15, 4:35:09 PM] Minh Diep: let me check
[12/7/15, 4:41:19 PM] Minh Diep: Charlie, most likely this is Lustre issue. I suggest they install and verify this offline. I think this occured before

Please try a manual run to see if it occurs.

Comment by Sarah Liu [ 08/Dec/15 ]

ok, I will try it manually.

Comment by Sarah Liu [ 08/Dec/15 ]

Hello,

I tried manually and still hit this issue. Please let me know if you need any other info.

mkfs.lustre --mgs --fsname=lustre --mdt --index=0 --param=sys.timeout=20 --param=lov.stripesize=1048576 --param=lov.stripecount=0 --param=mdt.identity_upcall=/usr/sbin/l_getidentity --backfstype=zfs --device-size=3145728 --reformat lustre-mdt1/mdt1 /dev/sdb1
Failed to initialize ZFS library

mkfs.lustre FATAL: unhandled/unloaded fs type 5 'zfs'

mkfs.lustre FATAL: unable to prepare backend (22)
mkfs.lustre: exiting with 22 (Invalid argument)
[root@onyx-23 ~]# 
[root@onyx-23 ~]# rpm -qa|grep lustre
lustre-dkms-2.7.64-1.el6.noarch
lustre-iokit-2.7.64-2.6.32_573.8.1.el6_lustre.x86_64.x86_64
kernel-devel-2.6.32-573.8.1.el6_lustre.x86_64
lustre-osd-zfs-mount-2.7.64-2.6.32_573.8.1.el6_lustre.x86_64.x86_64
lustre-tests-2.7.64-2.6.32_573.8.1.el6_lustre.x86_64.x86_64
kernel-2.6.32-573.8.1.el6_lustre.x86_64
kernel-headers-2.6.32-573.8.1.el6_lustre.x86_64
lustre-2.7.64-2.6.32_573.8.1.el6_lustre.x86_64.x86_64
lustre-osd-zfs-2.7.64-2.6.32_573.8.1.el6_lustre.x86_64.x86_64
kernel-firmware-2.6.32-573.8.1.el6_lustre.x86_64
[root@onyx-23 ~]# 

command I used for provision is

[w3liu@ssh-2 ~]$ loadjenkinsbuild -n onyx-23 -j lustre-master -b 3264 -t server -i inkernel -d el6.7 -a x86_64 --profile test --packages="expect,lsof,curl,gcc,make,cvs,bc,byacc,posix,compat-glibc-headers" --reboot --powerup --usedkms
Comment by Sarah Liu [ 08/Dec/15 ]

Per the conversation with John Salinas, the node is running an unpatched kernel.

[12/8/15, 12:08:05 PM] John Salinas: let me track down which rpm is missing or bad
[12/8/15, 12:15:36 PM] John Salinas: It does not look like onyx-23 has a lustre patched kernel?  Is that expected
[12/8/15, 12:16:04 PM] sarah_lw liu: no
[12/8/15, 12:16:25 PM] John Salinas: [root@onyx-23 sys]# uname -r
2.6.32-573.8.1.el6.x86_64
[12/8/15, 12:16:52 PM] John Salinas: Is there a way we can see this build on jenkins?
[12/8/15, 12:18:31 PM] sarah_lw liu: this is the build https://build.hpdd.intel.com/job/lustre-master/3264/arch=x86_64,build_type=server,distro=el6.7,ib_stack=inkernel/
[12/8/15, 12:19:09 PM] sarah_lw liu: we also hit the same issue on a previous build #3252
[12/8/15, 12:22:12 PM] John Salinas: This looks like a mismatch: [root@onyx-23 sys]# find /lib -print |grep zfs |grep ko 
/lib/modules/2.6.32-573.8.1.el6_lustre.x86_64/extra/kernel/fs/lustre/osd_zfs.ko
[root@onyx-23 sys]# uname -a 
Linux onyx-23.onyx.hpdd.intel.com 2.6.32-573.8.1.el6.x86_64 #1 SMP Tue Nov 10 18:01:38 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
[12/8/15, 12:22:40 PM] John Salinas: so the osd-zfs kernel object is built only for Lustre but we are running a vanilla kernel so it cannot load
[12/8/15, 12:23:18 PM] John Salinas: that and I believe we are missing: /extra/zfs.ko

The node is not running the Lustre-patched kernel, which it should be:

[root@onyx-23 ~]# ls /lib/modules/
2.6.32-573.8.1.el6_lustre.x86_64  2.6.32-573.8.1.el6.x86_64
[root@onyx-23 ~]# uname -a
Linux onyx-23.onyx.hpdd.intel.com 2.6.32-573.8.1.el6.x86_64 #1 SMP Tue Nov 10 18:01:38 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
[root@onyx-23 ~]# 
Comment by Sarah Liu [ 08/Dec/15 ]

More failures; this issue also affects ldiskfs.

https://testing.hpdd.intel.com/test_sets/f362e37a-9a56-11e5-9d42-5254006e85c2
branch: lustre-reviews

22:08:23:onyx-41vm7: mkfs.lustre FATAL: unhandled/unloaded fs type 1 'ldiskfs'
22:08:23:onyx-41vm7: 
22:08:23:onyx-41vm7: mkfs.lustre FATAL: unable to prepare backend (22)
22:08:23:onyx-41vm7: mkfs.lustre: exiting with 22 (Invalid argument)

zfs instance:
branch: lustre-master
https://testing.hpdd.intel.com/test_sessions/41d46cfe-9dd6-11e5-8427-5254006e85c2

Comment by Minh Diep [ 09/Dec/15 ]

Since you are using DKMS, the node is not expected to run a Lustre-patched kernel. Please try this:

modprobe lustre

If it fails, check the console, or watch the end of the OS installation, to see whether the zfs, spl, and lustre builds succeeded.

HTH

Comment by James Nunez (Inactive) [ 11/Dec/15 ]

An ldiskfs case on master:
2015-12-10 12:56:32 - https://testing.hpdd.intel.com/test_sets/f8b5e42c-9f46-11e5-8d81-5254006e85c2

Comment by James Nunez (Inactive) [ 15/Dec/15 ]

Charlie - I think this was assigned to me accidentally. I'm not sure you are the correct person to work on this, but I've assigned it back to you.

Comment by Charlie Olmstead [ 15/Dec/15 ]

I assigned it to you James since you initiated it being re-opened.

Minh suspects this is a lustre issue and recommended a few steps above:

Since you are using DKMS, the node is not expected to run a Lustre-patched kernel. Please try this:
modprobe lustre
If it fails, check the console, or watch the end of the OS installation, to see whether the zfs, spl, and lustre builds succeeded.
HTH
Comment by James Nunez (Inactive) [ 16/Dec/15 ]

Sarah - Will you please run the test that Minh suggested, and Charlie copied in his last message, and report back on what you see?

Comment by Sarah Liu [ 16/Dec/15 ]

Sure, I will update the ticket

Comment by Sarah Liu [ 17/Dec/15 ]

Here is what I got

Failed to initialize ZFS library

mkfs.lustre FATAL: unhandled/unloaded fs type 5 'zfs'

mkfs.lustre FATAL: unable to prepare backend (22)
mkfs.lustre: exiting with 22 (Invalid argument)
[root@onyx-23 ~]# modprobe lustre
FATAL: Module lustre not found.
[root@onyx-23 ~]# 
Comment by Minh Diep [ 17/Dec/15 ]

let me log into onyx-23

Comment by Minh Diep [ 17/Dec/15 ]

Here is the hint in the console

Enabling /etc/fstab swaps: Adding 16465916k swap on /dev/sda2. Priority:-1 ext
ents:1 across:16465916k
[ OK ]^M^M
Entering non-interactive startup^M
Calling the system activity data collector (sadc)... ^M
FATAL: Module zfs not found.^M
with^M
'/kernel', '/updates', or '/extra' in record #0.^M
dkms.conf: Error! Directive 'DEST_MODULE_LOCATION' does not begin with^M
'/kernel', '/updates', or '/extra' in record #1.^M
dkms.conf: Error! Directive 'DEST_MODULE_LOCATION' does not begin with^M
'/kernel', '/updates', or '/extra' in record #2.^M
dkms.conf: Error! Directive 'DEST_MODULE_LOCATION' does not begin with^M
'/kernel', '/updates', or '/extra' in record #3.^M
dkms.conf: Error! Directive 'DEST_MODULE_LOCATION' does not begin with^M
'/kernel', '/updates', or '/extra' in record #4.^M
dkms.conf: Error! Directive 'DEST_MODULE_LOCATION' does not begin with^M
'/kernel', '/updates', or '/extra' in record #5.^M
dkms.conf: Error! Directive 'DEST_MODULE_LOCATION' does not begin with^M
'/kernel', '/updates', or '/extra' in record #6.^M
dkms.conf: Error! Directive 'DEST_MODULE_LOCATION' does not begin with^M
'/kernel', '/updates', or '/extra' in record #7.^M
dkms.conf: Error! Directive 'DEST_MODULE_LOCATION' does not begin with^M
'/kernel', '/updates', or '/extra' in record #8.^M

Please make sure zfs built correctly, i.e. run the zfs command.

Comment by Dmitry Eremin (Inactive) [ 25/Dec/15 ]

I think this is a TEI issue. Originally I thought that this happens because an autoconf variable is used without proper initialization. But looking into this more deeply, I think in this case we have an issue with an unresolved dependency. When the DKMS package is installed, it launches the configure script to initialize the autoconf variables. If this script fails for any reason, dkms.conf becomes broken and we get the error messages mentioned above. So, I'd like to understand the reason configure fails in order to resolve this ticket.

Can you help me find a machine that has this issue and has not yet been changed (re-installed), so I can log in and work out manually why the configure script fails?
I assume that just a few packages are missing or not yet installed.
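The broken-dkms.conf symptom (the DEST_MODULE_LOCATION errors in the console log above) can be checked for mechanically. A minimal sketch, assuming one DEST_MODULE_LOCATION[n]="..." entry per line in the generated file:

```shell
# Hedged sketch: reproduce DKMS's DEST_MODULE_LOCATION sanity check to spot a
# dkms.conf left broken by a failed configure run. Assumes one
# DEST_MODULE_LOCATION[n]="..." entry per line; real modules keep dkms.conf
# under /usr/src/<module>-<version>/.
check_dkms_conf() {
    local conf=$1 bad=0 value
    while IFS='=' read -r _ value; do
        value=${value//\"/}                  # strip surrounding quotes
        case $value in
            /kernel*|/updates*|/extra*) ;;   # destinations DKMS accepts
            *) echo "bad DEST_MODULE_LOCATION: '$value'"; bad=1 ;;
        esac
    done < <(grep '^DEST_MODULE_LOCATION' "$conf")
    return "$bad"
}
```

An autoconf variable left uninitialized by a failed configure produces an empty value, which this flags, matching the "does not begin with '/kernel', '/updates', or '/extra'" errors on the console.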

Comment by Andreas Dilger [ 05/Jan/16 ]

Minh, any chance to help Dmitry debug this failure?

Dmitry, is it possible to make a patch to the autotest or DKMS script that will print out the installed modules and/or other information needed to debug this failure?

Comment by Bruno Faccini (Inactive) [ 05/Jan/16 ]

It is true that any failure or missing piece causing the lustre-dkms RPM configure step to exit early is likely to cause such consequences later.
As part of a strengthening effort, I can also add the necessary error handling to catch configure errors in the lustre-dkms RPM scripts.

Comment by Bruno Faccini (Inactive) [ 05/Jan/16 ]

After getting hands-on access to an isolated node that suffered the same failure, I have found that the "configure" failure comes from the auto-generated "lustre-dkms_post-add.sh" script (from the latest LU-1032 patch, to handle the client case) constructing wrong configure parameters when the zfs/spl DKMS RPMs have only been added.
I will push a patch to fix this.
A workaround is to build/install the ZFS/SPL DKMS RPMs before installing the Lustre DKMS RPM.
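One way to verify that precondition is to confirm the spl/zfs DKMS modules have actually reached the "installed" state (not just "added") before lustre-dkms goes on. A minimal sketch; the line format assumed here is the "module, version[, kernel, arch]: state" form that DKMS 2.2 `dkms status` prints:

```shell
# Hedged sketch: fail fast if any spl/zfs DKMS module is still only "added"
# (registered but never built/installed), the state that broke the
# lustre-dkms configure step in this ticket. Assumes "dkms status" output of
# the form "module, version[, kernel, arch]: state". Reads lines from stdin.
dkms_all_installed() {
    local line state
    while IFS= read -r line; do
        [ -z "$line" ] && continue
        state=${line##*: }               # text after the final ": "
        if [ "$state" != "installed" ]; then
            echo "not ready: $line"
            return 1
        fi
    done
    return 0
}

# On a real node one might run (hypothetical usage):
#   { dkms status spl; dkms status zfs; } | dkms_all_installed \
#       || echo "build/install spl and zfs DKMS modules before lustre-dkms"
```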

Comment by Gerrit Updater [ 05/Jan/16 ]

Faccini Bruno (bruno.faccini@intel.com) uploaded a new patch: http://review.whamcloud.com/17829
Subject: LU-7601 build: fix typo for spl/zfs added case handler
Project: fs/lustre-release
Branch: master
Current Patch Set: 1
Commit: 554e31c64a2f8762598cfc0865772b818edcdce9

Comment by Gerrit Updater [ 11/Jan/16 ]

Oleg Drokin (oleg.drokin@intel.com) merged in patch http://review.whamcloud.com/17829/
Subject: LU-7601 build: fix typo for spl/zfs added case handler
Project: fs/lustre-release
Branch: master
Current Patch Set:
Commit: cb73cb5083ee2db5c6ed607c44e1002788b4eee6

Comment by Peter Jones [ 11/Jan/16 ]

Landed for 2.8. Let's track follow-on work to improve the robustness of this area under a new ticket.

Comment by Bruno Faccini (Inactive) [ 13/Jan/16 ]

Just to be complete about this ticket's problem, and even though we will "track follow-on work to improve robustness of this area under a new ticket": my patch fixes a problem/typo causing configure to fail when the [spl,zfs]-dkms packages are in the "added" state, in the DKMS sense. But the main problem is that these packages are still in this "added" state at the time of the configure step during the lustre-dkms install.
This should never happen, since both [spl,zfs]-dkms packages should be built/installed by their respective RPMs' post-install scripts.
Looking at an affected node's KickStart/install log, this could be linked to the following messages/errors:

............................
+ yum install -y kernel-2.6.32-573.8.1.el6.x86_64 kernel-devel-2.6.32-573.8.1.el6.x86_64
Loaded plugins: fastestmirror, security
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package kernel.x86_64 0:2.6.32-573.8.1.el6 will be installed
---> Package kernel-devel.x86_64 0:2.6.32-573.8.1.el6 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package            Arch         Version                    Repository     Size
================================================================================
Installing:
 kernel             x86_64       2.6.32-573.8.1.el6         updates        30 M
 kernel-devel       x86_64       2.6.32-573.8.1.el6         updates        10 M

Transaction Summary
================================================================================
Install       2 Package(s)

Total download size: 40 M
Installed size: 151 M
Downloading Packages:
--------------------------------------------------------------------------------
Total                                           4.2 MB/s |  40 MB     00:09
warning: rpmts_HdrFromFdno: Header V3 RSA/SHA1 Signature, key ID c105b9de: NOKEY
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
Importing GPG key 0xC105B9DE:
 Userid : CentOS-6 Key (CentOS 6 Official Signing Key) <centos-6-key@centos.org>
 Package: centos-release-6-7.el6.centos.12.3.x86_64 (@anaconda-CentOS-201508042137.x86_64/6.7)
 From   : /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
^M  Installing : kernel-devel-2.6.32-573.8.1.el6.x86_64                       1/2
^M  Installing : kernel-2.6.32-573.8.1.el6.x86_64                             2/2
^M  Verifying  : kernel-2.6.32-573.8.1.el6.x86_64                             1/2
^M  Verifying  : kernel-devel-2.6.32-573.8.1.el6.x86_64                       2/2

Installed:
  kernel.x86_64 0:2.6.32-573.8.1.el6  kernel-devel.x86_64 0:2.6.32-573.8.1.el6

Complete!
+ break
      yuminstall zfs-dkms spl-dkms zfs
+ yuminstall zfs-dkms spl-dkms zfs
+ local 'packages=zfs-dkms spl-dkms zfs'
+ local max=5
+ local i=0
+ ((  i < max  ))
+ yum install -y zfs-dkms spl-dkms zfs
Loaded plugins: fastestmirror, security
Setting up Install Process
Determining fastest mirrors
 * base: centos.mirror.constant.com
 * extras: repos.lax.quadranet.com
 * updates: mirrors.cmich.edu
Resolving Dependencies
--> Running transaction check
---> Package spl-dkms.noarch 0:0.6.5.3-1.el6 will be installed
--> Processing Dependency: dkms >= 2.2.0.2 for package: spl-dkms-0.6.5.3-1.el6.noarch
---> Package zfs.x86_64 0:0.6.5.3-1.el6 will be installed
--> Processing Dependency: spl = 0.6.5.3 for package: zfs-0.6.5.3-1.el6.x86_64
--> Processing Dependency: libzpool2 = 0.6.5.3 for package: zfs-0.6.5.3-1.el6.x86_64
--> Processing Dependency: libzfs2 = 0.6.5.3 for package: zfs-0.6.5.3-1.el6.x86_64
--> Processing Dependency: libuutil1 = 0.6.5.3 for package: zfs-0.6.5.3-1.el6.x86_64
--> Processing Dependency: libnvpair1 = 0.6.5.3 for package: zfs-0.6.5.3-1.el6.x86_64
--> Processing Dependency: libzpool.so.2()(64bit) for package: zfs-0.6.5.3-1.el6.x86_64
--> Processing Dependency: libzfs_core.so.1()(64bit) for package: zfs-0.6.5.3-1.el6.x86_64
--> Processing Dependency: libzfs.so.2()(64bit) for package: zfs-0.6.5.3-1.el6.x86_64
--> Processing Dependency: libuutil.so.1()(64bit) for package: zfs-0.6.5.3-1.el6.x86_64
--> Processing Dependency: libnvpair.so.1()(64bit) for package: zfs-0.6.5.3-1.el6.x86_64
---> Package zfs-dkms.noarch 0:0.6.5.3-1.el6 will be installed
--> Running transaction check
---> Package dkms.noarch 0:2.2.0.3-30.git.7c3e7c5.el6 will be installed
---> Package libnvpair1.x86_64 0:0.6.5.3-1.el6 will be installed
---> Package libuutil1.x86_64 0:0.6.5.3-1.el6 will be installed
---> Package libzfs2.x86_64 0:0.6.5.3-1.el6 will be installed
---> Package libzpool2.x86_64 0:0.6.5.3-1.el6 will be installed
---> Package spl.x86_64 0:0.6.5.3-1.el6 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package      Arch     Version                       Repository            Size
================================================================================
Installing:
 spl-dkms     noarch   0.6.5.3-1.el6                 lustre-build         449 k
 zfs          x86_64   0.6.5.3-1.el6                 lustre-build         323 k
 zfs-dkms     noarch   0.6.5.3-1.el6                 lustre-build         1.9 M
Installing for dependencies:
 dkms         noarch   2.2.0.3-30.git.7c3e7c5.el6    addon-epel6-x86_64    77 k
 libnvpair1   x86_64   0.6.5.3-1.el6                 lustre-build          27 k
 libuutil1    x86_64   0.6.5.3-1.el6                 lustre-build          32 k
 libzfs2      x86_64   0.6.5.3-1.el6                 lustre-build         113 k
 libzpool2    x86_64   0.6.5.3-1.el6                 lustre-build         401 k
 spl          x86_64   0.6.5.3-1.el6                 lustre-build          25 k

Transaction Summary
================================================================================
Install       9 Package(s)

Total download size: 3.3 M
Installed size: 16 M
Downloading Packages:
--------------------------------------------------------------------------------
Total                                           1.3 MB/s | 3.3 MB     00:02
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
^M  Installing : libuutil1-0.6.5.3-1.el6.x86_64                               1/9
^M  Installing : libnvpair1-0.6.5.3-1.el6.x86_64                              2/9
^M  Installing : libzpool2-0.6.5.3-1.el6.x86_64                               3/9
^M  Installing : dkms-2.2.0.3-30.git.7c3e7c5.el6.noarch                       4/9
^M  Installing : spl-dkms-0.6.5.3-1.el6.noarch                                5/9
Loading new spl-0.6.5.3 DKMS files...
It is likely that 2.6.32-573.el6.x86_64 belongs to a chroot's host
Building for 2.6.32-573.12.1.el6.x86_64 and 2.6.32-573.8.1.el6.x86_64                         <<<<<<<<<<<<<<<<<<<
/usr/sbin/dkms: line 1958: /dev/fd/62: No such file or directory
/usr/sbin/dkms: line 1890: /dev/fd/62: No such file or directory
warning: %post(spl-dkms-0.6.5.3-1.el6.noarch) scriptlet failed, exit status 1   <<<<<<< spl-dkms RPM post-install script failed, leading to its DKMS build/install steps not done
Non-fatal POSTIN scriptlet failure in rpm package spl-dkms-0.6.5.3-1.el6.noarch
^M  Installing : zfs-dkms-0.6.5.3-1.el6.noarch                                6/9
Loading new zfs-0.6.5.3 DKMS files...
It is likely that 2.6.32-573.el6.x86_64 belongs to a chroot's host
Building for 2.6.32-573.12.1.el6.x86_64 and 2.6.32-573.8.1.el6.x86_64                         <<<<<<<<<<<<<<<<<<<
/usr/sbin/dkms: line 1958: /dev/fd/62: No such file or directory
/usr/sbin/dkms: line 1890: /dev/fd/62: No such file or directory
warning: %post(zfs-dkms-0.6.5.3-1.el6.noarch) scriptlet failed, exit status 1   <<<<<<< zfs-dkms RPM post-install script failed, leading to its DKMS build/install steps not done
Non-fatal POSTIN scriptlet failure in rpm package zfs-dkms-0.6.5.3-1.el6.noarch
^M  Installing : spl-0.6.5.3-1.el6.x86_64                                     7/9
^M  Installing : libzfs2-0.6.5.3-1.el6.x86_64                                 8/9
^M  Installing : zfs-0.6.5.3-1.el6.x86_64                                     9/9
^M  Verifying  : spl-dkms-0.6.5.3-1.el6.noarch                                1/9
^M  Verifying  : dkms-2.2.0.3-30.git.7c3e7c5.el6.noarch                       2/9
^M  Verifying  : libzpool2-0.6.5.3-1.el6.x86_64                               3/9
^M  Verifying  : libuutil1-0.6.5.3-1.el6.x86_64                               4/9
^M  Verifying  : zfs-dkms-0.6.5.3-1.el6.noarch                                5/9
^M  Verifying  : spl-0.6.5.3-1.el6.x86_64                                     6/9
^M  Verifying  : zfs-0.6.5.3-1.el6.x86_64                                     7/9
^M  Verifying  : libzfs2-0.6.5.3-1.el6.x86_64                                 8/9
^M  Verifying  : libnvpair1-0.6.5.3-1.el6.x86_64                              9/9

Installed:
  spl-dkms.noarch 0:0.6.5.3-1.el6           zfs.x86_64 0:0.6.5.3-1.el6
  zfs-dkms.noarch 0:0.6.5.3-1.el6

Dependency Installed:
  dkms.noarch 0:2.2.0.3-30.git.7c3e7c5.el6   libnvpair1.x86_64 0:0.6.5.3-1.el6
  libuutil1.x86_64 0:0.6.5.3-1.el6           libzfs2.x86_64 0:0.6.5.3-1.el6
  libzpool2.x86_64 0:0.6.5.3-1.el6           spl.x86_64 0:0.6.5.3-1.el6

Complete!
+ break
      yuminstall lustre-dkms lustre-osd-zfs lustre lustre-tests
+ yuminstall lustre-dkms lustre-osd-zfs lustre lustre-tests
+ local 'packages=lustre-dkms lustre-osd-zfs lustre lustre-tests'
+ local max=5
+ local i=0
+ ((  i < max  ))
+ yum install -y lustre-dkms lustre-osd-zfs lustre lustre-tests
Loaded plugins: fastestmirror, security
Setting up Install Process
Loading mirror speeds from cached hostfile
 * base: centos.mirror.constant.com
 * extras: repos.lax.quadranet.com
 * updates: mirrors.cmich.edu
Resolving Dependencies
--> Running transaction check
---> Package lustre.x86_64 0:2.7.64-2.6.32_573.8.1.el6_lustre.gbd3d354.x86_64_g554e31c will be installed
--> Processing Dependency: lustre-osd-mount for package: lustre-2.7.64-2.6.32_573.8.1.el6_lustre.gbd3d354.x86_64_g554e31c.x86_64
--> Processing Dependency: libnetsnmpmibs.so.20()(64bit) for package: lustre-2.7.64-2.6.32_573.8.1.el6_lustre.gbd3d354.x86_64_g554e31c.x86_64
--> Processing Dependency: libnetsnmphelpers.so.20()(64bit) for package: lustre-2.7.64-2.6.32_573.8.1.el6_lustre.gbd3d354.x86_64_g554e31c.x86_64
--> Processing Dependency: libnetsnmpagent.so.20()(64bit) for package: lustre-2.7.64-2.6.32_573.8.1.el6_lustre.gbd3d354.x86_64_g554e31c.x86_64
--> Processing Dependency: libnetsnmp.so.20()(64bit) for package: lustre-2.7.64-2.6.32_573.8.1.el6_lustre.gbd3d354.x86_64_g554e31c.x86_64
---> Package lustre-dkms.noarch 0:2.7.64-1.el6 will be installed
--> Processing Dependency: /usr/bin/expect for package: lustre-dkms-2.7.64-1.el6.noarch
---> Package lustre-osd-zfs.x86_64 0:2.7.64-2.6.32_573.8.1.el6_lustre.gbd3d354.x86_64_g554e31c will be installed
---> Package lustre-tests.x86_64 0:2.7.64-2.6.32_573.8.1.el6_lustre.gbd3d354.x86_64_g554e31c will be installed
--> Processing Dependency: lustre-iokit for package: lustre-tests-2.7.64-2.6.32_573.8.1.el6_lustre.gbd3d354.x86_64_g554e31c.x86_64
--> Running transaction check
---> Package expect.x86_64 0:5.44.1.15-5.el6_4 will be installed
---> Package lustre-iokit.x86_64 0:2.7.64-2.6.32_573.8.1.el6_lustre.gbd3d354.x86_64_g554e31c will be installed
--> Processing Dependency: sg3_utils for package: lustre-iokit-2.7.64-2.6.32_573.8.1.el6_lustre.gbd3d354.x86_64_g554e31c.x86_64
---> Package lustre-osd-zfs-mount.x86_64 0:2.7.64-2.6.32_573.8.1.el6_lustre.gbd3d354.x86_64_g554e31c will be installed
---> Package net-snmp-libs.x86_64 1:5.5-54.el6_7.1 will be installed
--> Running transaction check
---> Package sg3_utils.x86_64 0:1.28-8.el6 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package              Arch   Version                         Repository    Size
================================================================================
Installing:
 lustre               x86_64 2.7.64-2.6.32_573.8.1.el6_lustre.gbd3d354.x86_64_g554e31c
                                                             lustre-build 569 k
 lustre-dkms          noarch 2.7.64-1.el6                    lustre-build  12 M
 lustre-osd-zfs       x86_64 2.7.64-2.6.32_573.8.1.el6_lustre.gbd3d354.x86_64_g554e31c
                                                             lustre-build  92 k
 lustre-tests         x86_64 2.7.64-2.6.32_573.8.1.el6_lustre.gbd3d354.x86_64_g554e31c
                                                             lustre-build 8.4 M
Installing for dependencies:
 expect               x86_64 5.44.1.15-5.el6_4               base         256 k
 lustre-iokit         x86_64 2.7.64-2.6.32_573.8.1.el6_lustre.gbd3d354.x86_64_g554e31c
                                                             lustre-build  42 k
 lustre-osd-zfs-mount x86_64 2.7.64-2.6.32_573.8.1.el6_lustre.gbd3d354.x86_64_g554e31c
                                                             lustre-build 7.7 k
 net-snmp-libs        x86_64 1:5.5-54.el6_7.1                updates      1.5 M
 sg3_utils            x86_64 1.28-8.el6                      base         500 k

Transaction Summary
================================================================================
Install       9 Package(s)

Total download size: 24 M
Installed size: 53 M
Downloading Packages:
--------------------------------------------------------------------------------
Total                                           4.2 MB/s |  24 MB     00:05
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
^M  Installing : lustre-osd-zfs-mount-2.7.64-2.6.32_573.8.1.el6_lustre.gbd3   1/9
^M  Installing : expect-5.44.1.15-5.el6_4.x86_64                              2/9
^M  Installing : lustre-dkms-2.7.64-1.el6.noarch                              3/9
Loading new lustre-2.7.64 DKMS files...
/usr/sbin/dkms: line 1958: /dev/fd/62: No such file or directory                         <<<<<<<<<<<<<<<<<<<  same error /dev/fd/62 problem than previously for [spl,zfs]-dkms
/usr/sbin/dkms: line 1890: /dev/fd/62: No such file or directory
/usr/sbin/dkms: line 1958: /dev/fd/62: No such file or directory
/usr/sbin/dkms: line 1890: /dev/fd/62: No such file or directory
configure: error: Kernel source  could not be found.
It is likely that 2.6.32-573.el6.x86_64 belongs to a chroot's host
Building for 2.6.32-573.12.1.el6.x86_64 and 2.6.32-573.8.1.el6.x86_64
/usr/sbin/dkms: line 1958: /dev/fd/62: No such file or directory
/usr/sbin/dkms: line 1890: /dev/fd/62: No such file or directory
warning: %post(lustre-dkms-2.7.64-1.el6.noarch) scriptlet failed, exit status 1 <<<<<< lustre-dkms RPM post-script also fails
Non-fatal POSTIN scriptlet failure in rpm package lustre-dkms-2.7.64-1.el6.noarch
^M  Installing : lustre-osd-zfs-2.7.64-2.6.32_573.8.1.el6_lustre.gbd3d354.x   4/9
^M  Installing : 1:net-snmp-libs-5.5-54.el6_7.1.x86_64                        5/9
^M  Installing : lustre-2.7.64-2.6.32_573.8.1.el6_lustre.gbd3d354.x86_64_g5   6/9
^M  Installing : sg3_utils-1.28-8.el6.x86_64                                  7/9
^M  Installing : lustre-iokit-2.7.64-2.6.32_573.8.1.el6_lustre.gbd3d354.x86   8/9
^M  Installing : lustre-tests-2.7.64-2.6.32_573.8.1.el6_lustre.gbd3d354.x86   9/9
^M  Verifying  : sg3_utils-1.28-8.el6.x86_64                                  1/9
^M  Verifying  : lustre-osd-zfs-2.7.64-2.6.32_573.8.1.el6_lustre.gbd3d354.x   2/9
^M  Verifying  : lustre-dkms-2.7.64-1.el6.noarch                              3/9
^M  Verifying  : 1:net-snmp-libs-5.5-54.el6_7.1.x86_64                        4/9
^M  Verifying  : lustre-2.7.64-2.6.32_573.8.1.el6_lustre.gbd3d354.x86_64_g5   5/9
^M  Verifying  : lustre-osd-zfs-mount-2.7.64-2.6.32_573.8.1.el6_lustre.gbd3   6/9
^M  Verifying  : lustre-iokit-2.7.64-2.6.32_573.8.1.el6_lustre.gbd3d354.x86   7/9
^M  Verifying  : expect-5.44.1.15-5.el6_4.x86_64                              8/9
^M  Verifying  : lustre-tests-2.7.64-2.6.32_573.8.1.el6_lustre.gbd3d354.x86   9/9

Installed:
  lustre.x86_64 0:2.7.64-2.6.32_573.8.1.el6_lustre.gbd3d354.x86_64_g554e31c
  lustre-dkms.noarch 0:2.7.64-1.el6
  lustre-osd-zfs.x86_64 0:2.7.64-2.6.32_573.8.1.el6_lustre.gbd3d354.x86_64_g554e31c
  lustre-tests.x86_64 0:2.7.64-2.6.32_573.8.1.el6_lustre.gbd3d354.x86_64_g554e31c

Dependency Installed:
  expect.x86_64 0:5.44.1.15-5.el6_4
  lustre-iokit.x86_64 0:2.7.64-2.6.32_573.8.1.el6_lustre.gbd3d354.x86_64_g554e31c
  lustre-osd-zfs-mount.x86_64 0:2.7.64-2.6.32_573.8.1.el6_lustre.gbd3d354.x86_64_g554e31c
  net-snmp-libs.x86_64 1:5.5-54.el6_7.1
  sg3_utils.x86_64 0:1.28-8.el6

Complete!
............................
Comment by Bruno Faccini (Inactive) [ 18/Jan/16 ]

Follow-on work for this ticket, to add and enhance error handling/reporting (mainly during the configure step) when the DKMS framework processes the lustre[-client]-dkms package content, is being tracked as part of LU-7679.

Generated at Sat Feb 10 02:10:18 UTC 2024 using Jira 9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c.