[LU-11310] support for SLES 15 Created: 06/Feb/18  Updated: 13/Aug/20  Resolved: 24/Mar/20

Status: Resolved
Project: Lustre
Component/s: None
Affects Version/s: None
Fix Version/s: Lustre 2.14.0

Type: Improvement Priority: Minor
Reporter: Brad Hoagland (Inactive) Assignee: Jian Yu
Resolution: Fixed Votes: 0
Labels: None

Attachments: Text File 0001-LDEV-645-build-add-support-to-build-for-SLES15.patch     Text File iversion.patch    
Issue Links:
Duplicate
duplicates LU-11295 Add ldiskfs patch series for SLES 15 Closed
Related
is related to LU-10560 Fixes for 4.14 kernel Resolved
is related to LU-13204 sanity test 100: netstat: command not... Resolved
is related to LU-12137 update client to use iterate_shared Resolved
is related to LU-13187 sanity test_129: current dir size 409... Resolved
is related to LU-13177 add e2fsprog support for SLES15SP1 Resolved
is related to LU-13405 kernel update [SLES15 SP1 4.12.14-197... Resolved
Rank (Obsolete): 9223372036854775807

 Description   

Ticket to coordinate testing of SLES 15:

Beta: January 23, 2018

GA: June 2018 

https://www.suse.com/releasenotes/x86_64/SUSE-SLES/15/



 Comments   
Comment by Bob Glossman (Inactive) [ 06/Feb/18 ]

Beta6 release for sles15 available since 2/6.
Earlier releases have been available for months.

Alpha & Beta releases are available at suse.com.
Anybody with a subscription can apply and get access; they aren't partner-only.

Comment by Bob Glossman (Inactive) [ 06/Feb/18 ]

some of the patches from LU-10560 are needed in order to build for sles15.
In particular this one: https://review.whamcloud.com/31153

Comment by Bob Glossman (Inactive) [ 06/Feb/18 ]

There are still some buffer-overflow diagnostics from snprintf() calls blocking the build.
These have probably always been there, but the more stringent checks in gcc7 now report them.

sles15 has gcc7.
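
For illustration only (the function and names below are made up, not taken from the Lustre sources), this is the class of snprintf() call that gcc7's -Wformat-truncation check flags when it can prove the formatted output may not fit the destination. Typical remedies are enlarging the buffer, checking for truncation, or (as seen in the EXTRA_CFLAGS later in this ticket) building with -Wno-format-truncation.

#include <stdio.h>

/* Illustrative only: with a fixed 16-byte buffer and an unbounded %s
 * argument, gcc7 can prove the output may be truncated and warns
 * (fatal if warnings are promoted to errors). */
void set_target_name(const char *fsname, int idx)
{
        char name[16];

        snprintf(name, sizeof(name), "%s-OST%04x", fsname, idx);
}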

Comment by Bob Glossman (Inactive) [ 06/Feb/18 ]

Can't find a suitable bit to use for EXT4_MOUNT_DIRDATA.
The values used in earlier SLES versions and in other distros are now all taken by upstream ext4 in sles15.

Comment by Bob Glossman (Inactive) [ 16/Feb/18 ]

Beta7 release is now available.

Comment by Bob Glossman (Inactive) [ 16/Feb/18 ]

all the ACL calls are failing. errors:

== sanity test 103a: acl test ======================================================================== 09:41:42 (1518802902)
Adding user daemon to group bin
Adding user daemon to group bin
performing cp ...
[3] $ umask 022 -- ok
[4] $ mkdir d -- ok
[5] $ cd d -- ok
[6] $ touch f -- ok
[7] $ setfacl -m u:bin:rw f -- failed
~                                     ? setfacl: f: Invalid argument           
[8] $ ls -l f | awk -- '{ print $1 }' -- failed
-rw-rw-r--+                           ? -rw-r--r--                             
[11] $ cp f g -- ok
[12] $ ls -l g | awk -- '{sub(/\./, "", $1); print $1 }' -- ok
[15] $ rm g -- ok
[16] $ cp -p f g -- ok
[17] $ ls -l f | awk -- '{ print $1 }' -- failed
-rw-rw-r--+                           ? -rw-r--r--                             
[20] $ mkdir h -- ok
[21] $ echo blubb > h/x -- ok
[22] $ cp -rp h i -- ok
[23] $ cat i/x -- ok
[26] $ rm -r i -- ok
[31] $ setfacl -R -m u:bin:rwx h -- failed
~                                     ? setfacl: h: Invalid argument           
~                                     ? setfacl: h/x: Invalid argument         
[32] $ getfacl --omit-header h/x -- failed
user::rw-                             ? getfacl: h/x: Invalid argument         
user:bin:rwx                          ? ~                                      
group::r--                            ? ~                                      
mask::rwx                             ? ~                                      
other::r--                            ? ~                                      
                                      ? ~                                      
[40] $ cp -rp h i -- ok
[41] $ getfacl --omit-header i/x -- failed
user::rw-                             ? getfacl: i/x: Invalid argument         
user:bin:rwx                          ? ~                                      
group::r--                            ? ~                                      
mask::rwx                             ? ~                                      
other::r--                            ? ~                                      
                                      ? ~                                      
[49] $ cd .. -- ok
[50] $ rm -r d -- ok
22 commands (16 passed, 6 failed)
 sanity test_103a: @@@@@@ FAIL: run_acl_subtest cp failed 
  Trace dump:
  = /usr/lib64/lustre/tests/test-framework.sh:5343:error()
  = /usr/lib64/lustre/tests/sanity.sh:8147:test_103a()
  = /usr/lib64/lustre/tests/test-framework.sh:5619:run_one()
  = /usr/lib64/lustre/tests/test-framework.sh:5658:run_one_logged()
  = /usr/lib64/lustre/tests/test-framework.sh:5457:run_test()
  = /usr/lib64/lustre/tests/sanity.sh:8198:main()
Dumping lctl log to /tmp/test_logs/2018-02-16/094127/sanity.test_103a.*.1518802905.log
Resetting fail_loc on all nodes...done.
FAIL 103a (6s)

Strongly suspect the kernel APIs for ACL access have changed significantly in a way that the current set of autoconf tests for XATTR handling doesn't adapt to.

Comment by Andreas Dilger [ 21/Feb/18 ]

Bob, the EXT4_MOUNT_DIRDATA flag is now part of the upstream kernel, so you could check what value is used there.

For the ACL issue, that needs strace or kernel debug logs to track what is going on. That should probably be put into a separate ticket. If it can be reproduced with Ubuntu or a vanilla kernel, it could be a regular LU ticket, otherwise an LDEV ticket for the time being.

Comment by Bob Glossman (Inactive) [ 21/Feb/18 ]

strace of failing getfacl:

# strace getfacl /mnt/lustre/f
   .
   .
lstat("/mnt/lustre/f", {st_mode=S_IFREG|0644, st_size=0, ...}) = 0
getxattr("/mnt/lustre/f", "system.posix_acl_access", 0x7fff66c5d9d0, 132) = -1 EINVAL (Invalid argument)
  .
  .
write(2, "getfacl: /mnt/lustre/f: Invalid "..., 41getfacl: /mnt/lustre/f: Invalid argument
) = 41
exit_group(1)                           = ?
+++ exited with 1 +++

A similar command on a non-lustre file shows the following:

# strace getfacl /home/bogl/fff
  .
  .
lstat("/home/bogl/fff", {st_mode=S_IFREG|0644, st_size=0, ...}) = 0
getxattr("/home/bogl/fff", "system.posix_acl_access", 0x7ffeeb001b00, 132) = -1 ENODATA (No data available)
stat("/home/bogl/fff", {st_mode=S_IFREG|0644, st_size=0, ...}) = 0
  .
  .
write(1, "# group: users\n", 15# group: users
)        = 15
write(1, "user::rw-\ngroup::r--\n", 21user::rw-
group::r--
) = 21
write(1, "other::r--\n", 11other::r--
)            = 11
write(1, "\n", 1
)                       = 1
exit_group(0)                           = ?
+++ exited with 0 +++
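
The two traces explain the failure mode: for a file with no stored ACL the xattr lookup is expected to fail with ENODATA, which getfacl treats as "no ACL, synthesize one from the mode bits", while EINVAL makes getfacl/setfacl give up with "Invalid argument". A rough sketch of the expected handler semantics (illustrative only, not Lustre code):

#include <linux/fs.h>
#include <linux/posix_acl.h>
#include <linux/posix_acl_xattr.h>
#include <linux/user_namespace.h>

/* Illustrative sketch: what a "system.posix_acl_access" getxattr path is
 * expected to return so userspace behaves as in the non-lustre trace. */
static ssize_t example_get_acl_xattr(struct inode *inode, void *buf,
                                     size_t size)
{
        struct posix_acl *acl = get_acl(inode, ACL_TYPE_ACCESS);
        int rc;

        if (!acl)
                return -ENODATA;  /* no ACL stored: getfacl falls back to mode bits */
        if (IS_ERR(acl))
                return PTR_ERR(acl);

        rc = posix_acl_to_xattr(&init_user_ns, acl, buf, size);
        posix_acl_release(acl);
        return rc;
}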

Will look into putting this issue in a ticket by itself.

re EXT4_MOUNT_DIRDATA I can't find it declared in our repo https://git.hpdd.intel.com/fs/linux-staging.git in either the master or staging-next branches.
Am I looking in the wrong place?

Comment by Andreas Dilger [ 22/Feb/18 ]

Sorry, it might not have landed yet. Search the linux-fsdevel archives for a patch from Artem to find the value used in his patch. It should be landing in the next kernel.

Comment by Bob Glossman (Inactive) [ 07/Mar/18 ]

RC1 for sles15 is now available on suse.com
kernel version bumped up to 4.12.14-13.5

Comment by Bob Glossman (Inactive) [ 07/Mar/18 ]

In this kernel the type of inode.i_version has changed to atomic64_t; it used to be a u64.

This causes compile errors like the following in lustre builds:

  CC [M]  /home/bogl/lustre-release/lustre/llite/dir.o
/home/bogl/lustre-release/lustre/llite/dir.c: In function ‘ll_iterate’:
/home/bogl/lustre-release/lustre/llite/dir.c:396:18: error: incompatible types when assigning to type ‘u64 {aka long long unsigned int}’ from type ‘atomic64_t {aka struct <anonymous>}’
  filp->f_version = inode->i_version;
                  ^
Comment by Bob Glossman (Inactive) [ 07/Mar/18 ]

As a very ugly workaround, the following one-line mod allows builds to finish:

--- a/lustre/llite/dir.c
+++ b/lustre/llite/dir.c
@@ -393,7 +393,7 @@ static int ll_readdir(struct file *filp, void *cookie, filldir_t filldir)
        filp->f_pos = pos;
 #endif
        ll_finish_md_op_data(op_data);
-       filp->f_version = inode->i_version;
+       filp->f_version = inode->i_version.counter;
 
 out:
        if (!rc)

This will of course break the build on any older distros.
Some autoconf support is needed to adapt.

Comment by Bob Glossman (Inactive) [ 08/Mar/18 ]

RC1 is now up on Intel mirror. Previously it was only available at suse.com.

Comment by Andreas Dilger [ 08/Mar/18 ]

Is there a macro in the upstream kernel that is used to get/set the inode version? That could be ported to Lustre and used in the code. Otherwise, we need a macro like "ll_get_inode_version()" that is conditional on whether i_version is atomic or not.

Comment by Bob Glossman (Inactive) [ 08/Mar/18 ]

Have attached a possible patch, somewhat along the lines of what Andreas suggested.
It does the right thing on sles15; I don't know for sure yet that it doesn't break other builds.
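
For reference, a minimal sketch of the shape such an accessor can take (the names and the configure define below are illustrative, not necessarily what the attached iversion.patch uses):

#include <linux/fs.h>

/* Sketch only: HAVE_ATOMIC64_INODE_I_VERSION is a hypothetical autoconf
 * define set when inode.i_version is an atomic64_t. */
#ifdef HAVE_ATOMIC64_INODE_I_VERSION
static inline u64 ll_get_inode_version(struct inode *inode)
{
        return atomic64_read(&inode->i_version);
}

static inline void ll_set_inode_version(struct inode *inode, u64 ver)
{
        atomic64_set(&inode->i_version, ver);
}
#else
static inline u64 ll_get_inode_version(struct inode *inode)
{
        return inode->i_version;
}

static inline void ll_set_inode_version(struct inode *inode, u64 ver)
{
        inode->i_version = ver;
}
#endif

With that in place, the dir.c workaround above becomes filp->f_version = ll_get_inode_version(inode); on both old and new kernels.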

Comment by Bob Glossman (Inactive) [ 23/Mar/18 ]

RC2 for sles15 is now available on suse.com
Not working at all for me; it fails to install.

Have submitted a bug report to SUSE:
https://bugzilla.suse.com/show_bug.cgi?id=1086669

 

Comment by Bob Glossman (Inactive) [ 23/Mar/18 ]

Nothing wrong with RC2; I had created my test VM with the wrong settings. Proceeding with early build and test.
Have marked the SUSE bugzilla report as closed.

Comment by Gerrit Updater [ 12/Apr/18 ]

Yang Sheng (yang.sheng@intel.com) uploaded a new patch: https://review.whamcloud.com/31976
Subject: LDEV-645 tests: for test.
Project: fs/lustre-release
Branch: master
Current Patch Set: 1
Commit: 3f8bc08642a95d224c06a1e2aaff56ce94b0d79f

Comment by Bob Glossman (Inactive) [ 12/Apr/18 ]

Server build fails in osd-ldiskfs.
struct bio has changed in sles15: bio.bi_bdev no longer exists.
Since this structure member is used in the osd-ldiskfs code, the build fails with errors like:

  CC [M]  /home/bogl/lustre-release/lustre/osd-ldiskfs/osd_io.o
/home/bogl/lustre-release/lustre/osd-ldiskfs/osd_io.c: In function osd_do_bio:
/home/bogl/lustre-release/lustre/osd-ldiskfs/osd_io.c:345:26: error: struct bio has no member named bi_bdev; did you mean bi_iter?
      bdev_get_queue(bio->bi_bdev);
                          ^~~~~~~
                          bi_iter
/home/bogl/lustre-release/lustre/osd-ldiskfs/osd_io.c:373:9: error: struct bio has no member named bi_bdev; did you mean bi_iter?
    bio->bi_bdev = inode->i_sb->s_bdev;
         ^~~~~~~
         bi_iter
make[6]: *** [scripts/Makefile.build:322:
/home/bogl/lustre-release/lustre/osd-ldiskfs/osd_io.o] Error 1
make[5]: *** [scripts/Makefile.build:593:
/home/bogl/lustre-release/lustre/osd-ldiskfs] Error 2
make[4]: *** [scripts/Makefile.build:593: /home/bogl/lustre-release/lustre]
Error 2
make[3]: *** [Makefile:1540: _module_/home/bogl/lustre-release] Error 2
make[3]: Leaving directory '/home/bogl/linux-4.12.14-16.4'
make[2]: *** [autoMakefile:1060: modules] Error 2
make[2]: Leaving directory '/home/bogl/lustre-release'
make[1]: *** [autoMakefile:601: all-recursive] Error 1
make[1]: Leaving directory '/home/bogl/lustre-release'
make: *** [autoMakefile:487: all] Error 2

Will need some autoconf changes to adapt to the changed struct bio.
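
The likely shape of the adaptation (a sketch only; the configure define and wrapper name are hypothetical, and it assumes kernels that dropped bi_bdev provide the upstream bio_set_dev() helper):

#include <linux/bio.h>
#include <linux/blkdev.h>
#include <linux/fs.h>

/* Sketch only: HAVE_BIO_BI_BDEV is a hypothetical autoconf define set
 * when struct bio still has the bi_bdev member. */
#ifdef HAVE_BIO_BI_BDEV
static inline void osd_bio_set_dev(struct bio *bio, struct block_device *bdev)
{
        bio->bi_bdev = bdev;
}
#else
static inline void osd_bio_set_dev(struct bio *bio, struct block_device *bdev)
{
        bio_set_dev(bio, bdev);
}
#endif

/* osd_io.c can then use osd_bio_set_dev(bio, inode->i_sb->s_bdev) and
 * take the request queue from the same block device with
 * bdev_get_queue(inode->i_sb->s_bdev), instead of from bio->bi_bdev. */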

Comment by Yang Sheng [ 13/Apr/18 ]

Hi, Bob,

Please reference https://review.whamcloud.com/31975

Thanks,
YangSheng

Comment by Bob Glossman (Inactive) [ 17/Apr/18 ]

RC3 for sles15 is now available on suse.com

Comment by Bob Glossman (Inactive) [ 18/Apr/18 ]

RC3 is now available on Intel mirror.
kernel version is 4.12.14-16-default.

Comment by Bob Glossman (Inactive) [ 18/Apr/18 ]

Server build with ldiskfs has suspicious-looking errors like:

Replacing 'ext4' with 'ldiskfs': xattr.h truncate.h mballoc.h fsmap.h extents_status.h ext4_jbd2.h ext4_extents.h ext4.h acl.h xattr_user.c xattr_trusted.c xattr_security.c xattr.c sysfs.c symlink.c super.c resize.c readpage.c page-io.c namei.c move_extent.c mmp.c migrate.c mballoc.c ioctl.c inode.c inline.c indirect.c ialloc.c hash.c fsync.c fsmap.c file.c extents_status.c extents.c ext4_jbd2.c dir.c block_validity.c bitmap.c balloc.c acl.c mmp.c htree_lock.c ext4_jbd2.h ext4_extents.h ext4.h ext4.h htree_lock.h
Making all in .
WARNING: "ldiskfs_get_inode_loc" [/tmp/rpmbuild-lustre-bogl-owrrznmL/BUILD/lustre-2.11.50_63_gd2c131e/lustre/osd-ldiskfs/osd_ldiskfs.ko] undefined!
Making all in lustre-iokit

and

make[3]: Entering directory '/home/bogl/lbuild_top/reused/usr/src/linux-4.12.14-16_lustre-obj/x86_64/default'
  Building modules, stage 2.
  MODPOST 29 modules
WARNING: "ldiskfs_get_inode_loc" [/tmp/rpmbuild-lustre-bogl-owrrznmL/BUILD/lustre-2.11.50_63_gd2c131e/lustre/osd-ldiskfs/osd_ldiskfs.ko] undefined!
make[3]: Leaving directory '/home/bogl/lbuild_top/reused/usr/src/linux-4.12.14-16_lustre-obj/x86_64/default'
make[3]: Entering directory '/tmp/rpmbuild-lustre-bogl-owrrznmL/BUILD/lustre-2.11.50_63_gd2c131e'

ldiskfs_get_inode_loc is used in osd-ldiskfs and isn't declared EXPORT_SYMBOL() anywhere.
Don't know why it has always worked before without errors, unless it's something new due to gcc7.
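
If the symbol really does need to be exported, the usual remedy is a one-line addition in the ldiskfs patch series (a sketch of the idea only, not the patch that was eventually used); since the build renames ext4 symbols to ldiskfs as shown above, it would be written against the ext4 name:

/* added after the ext4_get_inode_loc() definition in fs/ext4/inode.c;
 * the ext4 -> ldiskfs rename turns this into
 * EXPORT_SYMBOL(ldiskfs_get_inode_loc) */
EXPORT_SYMBOL(ext4_get_inode_loc);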

Comment by Bob Glossman (Inactive) [ 18/Apr/18 ]

Now that it is becoming possible to do ldiskfs builds for sles15, the need for lustre-enabled e2fsprogs is more urgent. The version of our lustre-enabled e2fsprogs is old compared to the native version on sles15: 1.42.13.wc6 vs. 1.43.8.

Comment by Bob Glossman (Inactive) [ 19/Apr/18 ]

Will attach a patch for our master-lustre branch of e2fsprogs that allows building the current, down-rev version of lustre e2fsprogs for sles15. Don't know if this is in fact usable on sles15, but at least it builds.

Note the new .spec file for sles15 is just a copy of the one for sles12 for now.

Comment by Bob Glossman (Inactive) [ 04/May/18 ]

RC4 for sles15 is now available on suse.com

Comment by Bob Glossman (Inactive) [ 07/May/18 ]

The recently landed mod https://review.whamcloud.com/#/c/31904, "LU-10886 build: fix warnings during autoconf", causes incorrect autoconf detection results when building on SLES 15 with gcc7.

In particular, gcc7 is intolerant of empty initializers like "{ }" in some cases.
One example of this causing wrong detection results can be seen in the check for HAVE_KTIME_TO_TIMESPEC64 in libcfs/autoconf/lustre-libcfs.m4. In the autoconf test function

AC_DEFUN([LIBCFS_KTIME_TO_TIMESPEC64],[
LB_CHECK_COMPILE([does function 'ktime_to_timespec64' exist],
ktime_to_timespec64, [
        #include <linux/hrtimer.h>
        #include <linux/ktime.h>
],[
        struct timespec64 ts;
        ktime_t now = { };

        ts = ktime_to_timespec64(now);
],[
        AC_DEFINE(HAVE_KTIME_TO_TIMESPEC64, 1,
                ['ktime_to_timespec64' is available])
])
]) # LIBCFS_KTIME_TO_TIMESPEC64

the test code is failing with an error like:

configure:16718: checking does function 'ktime_to_timespec64' exist
configure:16749: cp conftest.c build && make -d modules LDFLAGS= LD=/usr/x86_64-suse-linux/bin/ld -m elf_x86_64 CC=gcc -f /home/bogl/lustre-release/build/Makefile LUSTRE_LINUX_CONFIG=/home/bogl/linux-4.12.14-18/.config LINUXINCLUDE= -I/home/bogl/linux-4.12.14-18/arch/x86/include -Iinclude -Iarch/x86/include/generated -I/home/bogl/linux-4.12.14-18/include -Iinclude2 -I/home/bogl/linux-4.12.14-18/include/uapi -Iinclude/generated -I/home/bogl/linux-4.12.14-18/arch/x86/include/uapi -Iarch/x86/include/generated/uapi -I/home/bogl/linux-4.12.14-18/include/uapi -Iinclude/generated/uapi -include /home/bogl/linux-4.12.14-18/include/linux/kconfig.h -o tmp_include_depends -o scripts -o include/config/MARKER -C /home/bogl/linux-4.12.14-18 EXTRA_CFLAGS=-Werror-implicit-function-declaration -g -I/home/bogl/lustre-release/libcfs/include -I/home/bogl/lustre-release/lnet/include -I/home/bogl/lustre-release/lustre/include -Wno-format-truncation M=/home/bogl/lustre-release/build
/home/bogl/lustre-release/build/conftest.c: In function ‘main’:
/home/bogl/lustre-release/build/conftest.c:58:16: error: empty scalar initializer
  ktime_t now = { };
                ^
/home/bogl/lustre-release/build/conftest.c:58:16: note: (near initialization for ‘now’)
make[1]: *** [scripts/Makefile.build:335: /home/bogl/lustre-release/build/conftest.o] Error 1
make: *** [Makefile:1549: _module_/home/bogl/lustre-release/build] Error 2

In fact this kernel does have a ktime_to_timespec64() API and the test should succeed.
With the "= { };" initializer edited out of the test it does succeed.

Comment by Bob Glossman (Inactive) [ 07/May/18 ]

Now seeing failures in sanity test 103a, with errors like:

  .
  .
  .
[198] $ setfacl -m u:bin:rx e -- ok
[200] $ su bin -- ok
[201] $ echo e/* -- failed
e/h                                   ? e/*                                    
[208] $ touch e/i 2>&1 | sed -e "s/touch .*e\/i.*:/touch \'e\/i\':/" -- ok
[211] $ su -- ok
[212] $ setfacl -m u:bin:rwx e -- ok
[214] $ su bin -- ok
[215] $ echo i > e/i -- failed
~                                     ? e/i: Permission denied                 
[220] $ su -- ok
[221] $ touch g -- ok
[222] $ ln -s g l -- ok
[223] $ setfacl -m u:bin:rw l -- ok
[224] $ ls -l g | awk -- '{ print $1, $3, $4 }' -- ok
[234] $ mknod -m 0660 hdt b 91 64 -- ok
[235] $ mknod -m 0660 null c 1 3 -- ok
[236] $ mkfifo -m 0660 fifo -- ok
[238] $ su bin -- ok
[239] $ : < hdt -- ok
[241] $ : < null -- ok
[243] $ : < fifo -- ok
[246] $ su -- ok
[247] $ setfacl -m u:bin:rw hdt null fifo -- ok
[249] $ su bin -- ok
[250] $ : < hdt -- failed
hdt: No such device or address        ? hdt: Permission denied                 
[252] $ : < null -- failed
~                                     ? null: Permission denied                
[253] $ ( echo blah > fifo & ) ; cat fifo -- failed
blah                                  ? cat: fifo: Permission denied           
~                                     ? fifo: Permission denied                
[261] $ su -- ok
[262] $ mkdir -m 600 x -- ok
[263] $ chown daemon:daemon x -- ok
[264] $ echo j > x/j -- ok
[265] $ ls -l x/j | awk -- '{sub(/\./, "", $1); print $1, $3, $4 }' -- ok
[268] $ setfacl -m u:daemon:r x -- ok
[270] $ ls -l x/j | awk -- '{sub(/\./, "", $1); print $1, $3, $4 }' -- ok
[274] $ echo k > x/k -- ok
[277] $ chmod 750 x -- ok
[282] $ su -- ok
[283] $ cd .. -- ok
[284] $ rm -rf d -- ok
101 commands (96 passed, 5 failed)
 sanity test_103a: @@@@@@ FAIL: permissions failed 
  Trace dump:
  = /usr/lib64/lustre/tests/test-framework.sh:5738:error()
  = /usr/lib64/lustre/tests/sanity.sh:8366:test_103a()
  = /usr/lib64/lustre/tests/test-framework.sh:6019:run_one()
  = /usr/lib64/lustre/tests/test-framework.sh:6058:run_one_logged()
  = /usr/lib64/lustre/tests/test-framework.sh:5857:run_test()
  = /usr/lib64/lustre/tests/sanity.sh:8411:main()
Dumping lctl log to /tmp/test_logs/2018-05-07/133311/sanity.test_103a.*.1525725210.log
Resetting fail_loc on all nodes...done.
FAIL 103a (9s)
Comment by Bob Glossman (Inactive) [ 07/May/18 ]

Seeing errors reported during all installs of the built kmp rpms.
Examples:

# rpm -ivh lustre-kmp-default-2* lustre-osd-ldiskfs-kmp-default-2*
lustre-tests-kmp-default-2* lustre-osd-ldiskfs-mount-2* lustre-2*
lustre-iokit* lustre-tests-2*
Preparing...                          ################################# [100%]
Updating / installing...
   1:lustre-kmp-default-2.11.51_48_g34################################# [ 14%]
cat: write error: Broken pipe
cat: write error: Broken pipe
   2:lustre-osd-ldiskfs-mount-2.11.51_################################# [ 29%]
   3:lustre-osd-ldiskfs-kmp-default-2.################################# [ 43%]
cat: write error: Broken pipe
   4:lustre-2.11.51_48_g340f4d9_dirty-################################# [ 57%]
   5:lustre-tests-kmp-default-2.11.51_################################# [ 71%]
cat: write error: Broken pipe
cat: write error: Broken pipe
   6:lustre-iokit-2.11.51_48_g340f4d9_################################# [ 86%]
   7:lustre-tests-2.11.51_48_g340f4d9_################################# [100%]

Those "Broken pipe" errors don't seem to be fatal, but don't know where they are coming from. Suggest there may be some SLES specific .spec file flaws in package creation of .rpms with kernel modules in them. Not sure why they would only appear now in SLES 15.

Comment by Bob Glossman (Inactive) [ 08/May/18 ]

RC4 is now available on Intel mirror.
kernel version is 4.12.14-18-default.

Comment by Bob Glossman (Inactive) [ 07/Jun/18 ]

GMC for sles15 is now available on suse.com

Comment by Bob Glossman (Inactive) [ 07/Jun/18 ]

Due to recent changes in osd-ldiskfs, https://review.whamcloud.com/32621 is now needed for server builds on sles15.

This is a new mod in flight, not yet landed.

Comment by Bob Glossman (Inactive) [ 08/Jun/18 ]

GMC is now available on Intel mirror.
kernel version is 4.12.14-23-default.

Comment by Bob Glossman (Inactive) [ 08/Jun/18 ]

For unknown reasons, lustre rpms generated by lbuild are now broken. The kmp rpms have no ksym() or kernel() entries in their Provides lists and are therefore not installable, since they don't satisfy the Requires they should. Lustre rpms produced in a manual build are fine; they have full sets of Provides.

The install-time "Broken pipe" errors mentioned in earlier comments still happen.

Comment by Gerrit Updater [ 11/Jun/18 ]

Bob Glossman (bob.glossman@intel.com) uploaded a new patch: https://review.whamcloud.com/32693
Subject: LDEV-645 kernel: add sles15gmc support
Project: fs/lustre-release
Branch: master
Current Patch Set: 1
Commit: 7e1f2d3229f5795186b5236e9422815885cb9e2a

Comment by Chris Horn [ 14/Dec/18 ]

Is there any plan/ETA on adding SLES 15 to the build/test matrix?

Comment by Peter Jones [ 14/Dec/18 ]

Chris 

We haven't really talked about 2.13 and beyond in much detail at the LWG yet. Obviously we're still rather focused on closing out on 2.12 ATM but we could certainly discuss this at the next call in the new year.

Peter

Comment by Cory Spitz [ 25/Jul/19 ]

Cray has some additional patches in this area that could be landed, but that remains difficult while we lack SLES 15 servers in the test matrix.

We talked about it at today's LWG and decided that it would be best to keep SLES 15 servers out of the mix, as there is a non-trivial amount of work and resources needed to get it to fly.

Comment by Gerrit Updater [ 06/Sep/19 ]

Shaun Tancheff (stancheff@cray.com) uploaded a new patch: https://review.whamcloud.com/36094
Subject: LU-11310 ldiskfs: Support for sle 4.12 r23 and r25 releases
Project: fs/lustre-release
Branch: master
Current Patch Set: 1
Commit: 51e381b837f06ee305dd9072f1c23ae60ceca442

Comment by Cory Spitz [ 07/Sep/19 ]

The LWG is in the process of revamping the kernel support guidelines. So, as opposed to my comment from July 25, we'll be contributing these patches for review. However, per the new LWG policy under consideration, patches are contributed without obligation to fully vet any current or future test failures (and SLES 15 servers aren't in the auto-test framework anyway). We can provide test results as landing collateral, if needed.

Comment by Jian Yu [ 28/Jan/20 ]

The SUSE Linux Enterprise 15 SP1 kernel was updated to receive various security and bugfixes:
http://lists.suse.com/pipermail/sle-security-updates/2019-December/006269.html

Kernel version is 4.12.14-197.29.1. I'm updating https://review.whamcloud.com/32693 to support client first.

Comment by Jian Yu [ 09/Mar/20 ]

The SUSE Linux Enterprise 15 SP1 kernel was updated to receive various security and bugfixes:
http://lists.suse.com/pipermail/sle-security-updates/2020-March/006566.html

Kernel version is 4.12.14-197.34.1.

Comment by Gerrit Updater [ 11/Mar/20 ]

Oleg Drokin (green@whamcloud.com) merged in patch https://review.whamcloud.com/32693/
Subject: LU-11310 kernel: new kernel [SLES15 SP1 4.12.14-197.29]
Project: fs/lustre-release
Branch: master
Current Patch Set:
Commit: bb9eb8b09e951945304d818ae6e1a6e451353345

Comment by Gerrit Updater [ 24/Mar/20 ]

Oleg Drokin (green@whamcloud.com) merged in patch https://review.whamcloud.com/36094/
Subject: LU-11310 ldiskfs: Support for SUSE 15 GA and SP1
Project: fs/lustre-release
Branch: master
Current Patch Set:
Commit: 862e9bf632dc44f1102bfc2aef10504e506f1225

Comment by Peter Jones [ 24/Mar/20 ]

Looks like everything has landed for 2.14

Comment by Gerrit Updater [ 16/Apr/20 ]

Neil Brown (neilb@suse.de) uploaded a new patch: https://review.whamcloud.com/38256
Subject: LU-11310 ldiskfs: Repair for SUSE 15 GA and SP1
Project: fs/lustre-release
Branch: master
Current Patch Set: 1
Commit: 50abe7726f06a79abff2949bf695e3d7c82565cc

Comment by Gerrit Updater [ 17/Apr/20 ]

Shaun Tancheff (shaun.tancheff@hpe.com) uploaded a new patch: https://review.whamcloud.com/38262
Subject: LU-11310 build: SUSE ldiskfs for 4.12.14-197.37
Project: fs/lustre-release
Branch: master
Current Patch Set: 1
Commit: ec8e315732dd15e909293292157989b5fe29d92e

Comment by Gerrit Updater [ 14/May/20 ]

Oleg Drokin (green@whamcloud.com) merged in patch https://review.whamcloud.com/38256/
Subject: LU-11310 ldiskfs: Repair support for SUSE 15 GA and SP1
Project: fs/lustre-release
Branch: master
Current Patch Set:
Commit: 46ed28c0d10ab2edeb95e6e0f50b254fb98fa8c6

Comment by Gerrit Updater [ 15/May/20 ]

Neil Brown (neilb@suse.de) uploaded a new patch: https://review.whamcloud.com/38611
Subject: LU-11310 ldiskfs: Repair support for SUSE 15 again
Project: fs/lustre-release
Branch: master
Current Patch Set: 1
Commit: c7bca14141af167080981be2a83dc2d40542bc17

Comment by Gerrit Updater [ 02/Jun/20 ]

Oleg Drokin (green@whamcloud.com) merged in patch https://review.whamcloud.com/38611/
Subject: LU-11310 ldiskfs: Repair support for SUSE 15 again
Project: fs/lustre-release
Branch: master
Current Patch Set:
Commit: 35d4c683a3dad51450db3281c9331b876f8313c5

Comment by Gerrit Updater [ 05/Aug/20 ]

Neil Brown (neilb@suse.de) uploaded a new patch: https://review.whamcloud.com/39571
Subject: LU-11310 ldiskfs: Fix suse15/ext4-max-dir-size.patch
Project: fs/lustre-release
Branch: master
Current Patch Set: 1
Commit: fd4f0c97937a4a1f958307536db8ebbac37daec9

Comment by Gerrit Updater [ 13/Aug/20 ]

Oleg Drokin (green@whamcloud.com) merged in patch https://review.whamcloud.com/39571/
Subject: LU-11310 ldiskfs: Fix suse15/ext4-max-dir-size.patch
Project: fs/lustre-release
Branch: master
Current Patch Set:
Commit: 6349e7a61199befe64ffc0b3221446719b8311eb
