[LU-3450] lustre-ldiskfs upgrades don't get installed when upgrading from yum Created: 11/Jun/13  Updated: 28/Jun/13

Status: Open
Project: Lustre
Component/s: None
Affects Version/s: Lustre 2.3.0, Lustre 2.1.6
Fix Version/s: None

Type: Story Priority: Major
Reporter: John Spray (Inactive) Assignee: Minh Diep
Resolution: Unresolved Votes: 0
Labels: mq213

Issue Links:
Related
Rank (Obsolete): 8630

 Description   

To reproduce:

  • Install Lustre 2.1.x on server A
  • Create a yum repository serving Lustre 2.3.x packages on server B
  • Edit yum configuration on server A to point to new repo on server B
  • Run "yum update lustre" on server A

Expected outcome:

  • All the Lustre packages on Server A upgraded to the 2.3.x versions

Actual outcome:

  • All the Lustre packages except lustre-ldiskfs are upgraded – lustre-ldiskfs remains at the 2.1.x version, causing subsequent filesystem mounts to fail due to the old version of ldiskfs.

Background:

  • lustre-modules depends on lustre-backing-fs, which is provided by lustre-ldiskfs. However, there are no versions specified in this dependency chain, so when a 2.1.x lustre-ldiskfs is already installed, that satisfies the dependency of the lustre 2.3.x lustre-modules. As a result, yum sees no need to install the updated ldiskfs package.
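For illustration, the dependency chain described above would look roughly like this in the two spec files (these lines are a sketch, not copied from the real Lustre spec files):

    # ldiskfs specfile (illustrative): the capability is advertised unversioned
    Provides: lustre-backing-fs

    # lustre specfile (illustrative): lustre-modules requires the capability unversioned
    Requires: lustre-backing-fs

Because the Requires: carries no version, any installed package that provides lustre-backing-fs (including the stale 2.1.x lustre-ldiskfs) satisfies it, so yum leaves it in place.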

Suggested fix:

  • Make lustre-modules depend on a specific version of lustre-backing-fs or lustre-ldiskfs


 Comments   
Comment by Brian Murrell (Inactive) [ 11/Jun/13 ]

Just so that reproducing this doesn't get hung up in creating yum repos, etc., an equally valid reproducer is to simply upgrade a Lustre 2.1.x system to Lustre 2.3.0 but NOT include the lustre-ldiskfs RPM in the rpm upgrade operation. For example:

# rpm -Uvh lustre-2.3.0-2.6.32_279.5.1.el6_lustre.gb16fe80.x86_64.x86_64.rpm lustre-modules-2.3.0-2.6.32_279.5.1.el6_lustre.gb16fe80.x86_64.x86_64.rpm

That command will happily complete without error. The problem, however, is that, as John says, the lustre-ldiskfs left behind from the 2.1.5 installation simply doesn't work with Lustre 2.3.0, so the packaging needs to ensure that a compatible ldiskfs is pulled in (i.e. required).

This is probably as simple as adding a version to the "Provides: lustre-backing-fs" in the ldiskfs specfile and a corresponding version to the "Requires: lustre-backing-fs" that's in the lustre specfile. If you want to maintain some flexibility in matching ldiskfs to Lustre, you can use >= operators in the Requires:.
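A sketch of what that suggestion might look like in the two spec files (the %{version} usage and the 2.3.0 floor are illustrative assumptions, not the actual Lustre spec contents):

    # ldiskfs specfile (illustrative): advertise a versioned capability
    Provides: lustre-backing-fs = %{version}

    # lustre specfile (illustrative): require at least a compatible version
    Requires: lustre-backing-fs >= 2.3.0

With a versioned Requires:, yum can no longer treat the installed 2.1.x lustre-ldiskfs as satisfying the 2.3.x lustre-modules dependency, and will pull in the updated ldiskfs package during the upgrade.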

Comment by Peter Jones [ 11/Jun/13 ]

Minh

Could you please help with this one?

Thanks

Peter

Comment by Andreas Dilger [ 17/Jun/13 ]

This problem is unfortunately caused by the ldiskfs RPM not having changed its version for some time, and also by the Lustre code not depending on a particular ldiskfs RPM version (though increasing the version wouldn't have helped by itself).

For IEEL and 2.1.6 the ldiskfs build version needs to be modified (to 3.3.6 for 2.1.6, and 3.4.1 for IEEL), and then a versioned "Requires: lustre-ldiskfs =" dependency added for those versions in the lustre.spec.in file. See http://review.whamcloud.com/5938 for how this was done for 2.4.0.

Comment by Minh Diep [ 28/Jun/13 ]

patch for b2_1 http://review.whamcloud.com/#/c/6754/

Generated at Sat Feb 10 01:34:01 UTC 2024 using Jira 9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c.