Details
- Type: New Feature
- Resolution: Fixed
- Priority: Major
- Fix Version/s: Lustre 2.8.0
- Environment: IBM Power8 hardware. Currently Ubuntu; RHEL7.1 support will be added to this platform later.
Description
Currently working with Power8 client nodes running Ubuntu with a 3.13 kernel. Later the nodes will be moved to RHEL7.1 as support improves. This ticket tracks the work needed to build and run Lustre in that environment.
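A client-only build on such a node can be sketched roughly as follows. The repository URL and configure flags are standard for Lustre, but the exact paths and options used on these Power8 nodes are assumptions for illustration:

```shell
# Hedged sketch of a Lustre client-only build on a Power8/Ubuntu node.
# Paths and flags are illustrative assumptions, not the exact setup used here.
git clone git://git.whamcloud.com/fs/lustre-release.git
cd lustre-release
sh autogen.sh
# Build the client only, against the running kernel's headers.
./configure --disable-server \
    --with-linux=/lib/modules/"$(uname -r)"/build
make -j"$(nproc)"
```

The interesting part on this platform is not the commands but whether the tree compiles at all against a ppc64el kernel, which is what the linked build tickets (LU-8117, LU-8700) cover.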
Attachments
Issue Links
- is related to
  - LU-8485 workqueue overflows with mlx5 on power8 platforms. (Resolved)
  - LU-8693 ko2iblnd recieving IB_WC_MW_BIND_ERR errors. (Resolved)
  - LU-10752 Lustre rpm build issues due to improper lsvcgss packaging (Resolved)
  - LU-11453 sanity test 184a: Basic layout swap panics on Power8 (Resolved)
  - LU-6284 FLD read is not swabbed correctly (Resolved)
  - LU-8117 lustre-ppc fails to build on ppc64 el7 (Resolved)
  - LU-8776 fix weird inline definitions (Resolved)
  - LU-11246 New lustre e2fsprogs 1.44 issues (Closed)
  - LU-11278 LNet failures on Power8 (Closed)
  - LU-7321 LustreError: 61814:0:(ldlm_lockd.c:692:ldlm_handle_ast_error()) ### client (nid 172.20.17.9@o2ib500) returned -5 from glimpse AST (Closed)
  - LU-8567 mdc_reint.c:57:mdc_reint()) error in handling -17 encountered on power8 node (Closed)
  - LU-12419 ppc64le: "LNetError: RDMA has too many fragments for peer_ni" when reading two files (Closed)
  - LU-11200 Centos 8 arm64 server support (Resolved)
  - LU-11440 Make e2fsprogs-1.44.3-wc1 release (Resolved)
  - LU-10300 Can the Lustre 2.10.x clients support 64K kernel page? (Resolved)
- is related to
  - LU-10157 LNET_MAX_IOV hard coded to 256 (Resolved)
  - LU-8700 dkms fails to build lustre on Power8 due to llite_loop missing (Resolved)
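Several of the linked tickets (LU-10157, LU-10300, LU-12419) come down to page-size arithmetic: LNet describes a message as at most LNET_MAX_IOV fragments of one kernel page each, and that constant was hard-coded to 256, which only lines up with the 1 MiB LNet MTU on 4 KiB-page kernels. A minimal sketch of the mismatch on 64 KiB-page platforms such as Power8 (constants mirror the values quoted in the tickets, not a dump of the sources):

```python
# Sketch of the LNET_MAX_IOV / page-size mismatch behind LU-10157.
# Constants are as described in the tickets; treat them as illustrative.
LNET_MTU = 1 << 20        # 1 MiB: maximum LNet message payload
LNET_MAX_IOV = 256        # hard-coded fragment (page) count

for page_size in (4 << 10, 64 << 10):    # 4 KiB x86 vs 64 KiB Power8/ARM
    needed = LNET_MTU // page_size       # pages needed for a full message
    covered = LNET_MAX_IOV * page_size   # bytes 256 fragments can describe
    print(f"page={page_size >> 10}K: full message needs {needed} frags, "
          f"256 frags span {covered >> 20} MiB")
```

On 4 KiB pages the two limits coincide (256 x 4 KiB = 1 MiB); on 64 KiB pages a full message needs only 16 fragments while 256 fragments would describe 16 MiB, so sizing anything off the hard-coded 256 over-allocates by 16x and can exceed what an RDMA peer accepts, matching the "too many fragments" errors in LU-12419.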
Here are all the tests that fail on Power8 with a ZFS server backend:
sanity: FAIL: test_43A execute /lustre/lustre/d43A.sanity/f43A.sanity succeeded
sanity: FAIL: test_56j '/usr/bin/lfs find -type d /lustre/lustre/d56g.sanity' wrong: found 3, expected 4
sanity: FAIL: test_56o lfs find -mtime +0 /lustre/lustre/d56o.sanity: found 0 expect 4
sanity: FAIL: test_56p '/usr/bin/lfs find -uid 2004 /lustre/lustre/d56p.sanity' wrong: found 0, expected 3
sanity: FAIL: test_56q '/usr/bin/lfs find -gid 2647 /lustre/lustre/d56q.sanity' wrong: found 0, expected 3
sanity: FAIL: test_56r '/usr/bin/lfs find -size 5 -type f /lustre/lustre/d56r.sanity' wrong: found 0, expected 1
sanity: FAIL: test_56t '/usr/bin/lfs find -S 8M /lustre/lustre/d56t.sanity' wrong: found 0, expected 3
sanity: FAIL: test_56u '/usr/bin/lfs find -stripe-index 0 -type f /lustre/lustre/d56u.sanity' wrong: found 0, expected 12
sanity: FAIL: test_56wb file was not migrated to pool testpool
sanity: FAIL: test_56y search raid0: found 0 files != 2
sanity: FAIL: test_56ab >16M size files 0 isn't 3 as expected
sanity: FAIL: test_56ba lfs find -E 1M found 0 != 10 files
sanity: FAIL: test_56ca /usr/bin/lfs find --mirror-count 3 --type f /lustre/lustre/d56ca.sanity: 0 != 10 files
sanity: FAIL: test_77g write error: rc=1
sanity: FAIL: test_78 rdwr failed
sanity: FAIL: test_81a write should success, but failed for 28
sanity: FAIL: test_82 test_82 failed with 61
sanity: FAIL: test_103a permissions failed
sanity: FAIL: test_133e Bad write_bytes sum, expected 1376256, got 1409024
sanity: FAIL: test_133f proc file read failed
sanity: FAIL: test_155e dd of=/tmp/f155e.sanity bs=0 count=1k failed
sanity: FAIL: test_155f dd of=/tmp/f155f.sanity bs=0 count=1k failed
sanity: FAIL: test_155g dd of=/tmp/f155g.sanity bs=0 count=1k failed
sanity: FAIL: test_155h dd of=/tmp/f155h.sanity bs=0 count=1k failed
sanity: FAIL: test_241b test_241b failed with 1
sanity: FAIL: test_243 A group lock test failed
sanity: FAIL: test_255c Ladvise test 12, bad lock count, returned 1, actual 0
sanity: FAIL: test_270a file data is different
sanity: FAIL: test_270e lfs find -L: found 0, expected 20
sanity: FAIL: test_315 read is not accounted ()
Since both ARM and Power8 use the same kernel version, I expect the same failures there.