Lustre / LU-11173

kernel update [SLES12 SP3 4.4.140-94.42.1]

Details

    • Type: Bug
    • Resolution: Won't Fix
    • Priority: Minor

    Description

      The SUSE Linux Enterprise 12 SP3 kernel was updated to 4.4.140 to receive
      various security and bugfixes.

      The following security bugs were fixed:

      • CVE-2018-13053: The alarm_timer_nsleep function had an integer overflow
        via a large relative timeout because ktime_add_safe was not used
        (bnc#1099924)
      • CVE-2018-9385: Prevent overread of the "driver_override" buffer
        (bsc#1100491)
      • CVE-2018-13405: The inode_init_owner function allowed local users to
        create files with an unintended group ownership, allowing attackers to
        escalate privileges by making a plain file executable and SGID
        (bnc#1100416)
      • CVE-2018-13406: An integer overflow in the uvesafb_setcmap function
        could have resulted in local attackers being able to crash the kernel or
        potentially elevate privileges because kmalloc_array was not used
        (bnc#1100418)
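      The overflow class behind CVE-2018-13053 is easy to demonstrate outside
      the kernel. A sketch in plain shell with illustrative values (shell
      arithmetic is 64-bit signed like the kernel's s64 ktime, so naive
      addition wraps the same way, whereas ktime_add_safe clamps):

      ```shell
      KTIME_MAX=9223372036854775807          # 2^63 - 1 ns, the largest ktime value
      now=1000000000                         # illustrative "current time" in ns
      timeout=$(( KTIME_MAX - 1 ))           # huge attacker-chosen relative timeout

      # Naive addition wraps to a large negative value, so the timer misbehaves.
      unsafe=$(( now + timeout ))
      echo "unsafe expiry: $unsafe"

      # ktime_add_safe()-style fix: detect the overflow and clamp to KTIME_MAX.
      if [ "$timeout" -gt $(( KTIME_MAX - now )) ]; then
          safe=$KTIME_MAX
      else
          safe=$(( now + timeout ))
      fi
      echo "safe expiry:   $safe"
      ```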

      For fixed non-security bugs, please refer to:

      http://lists.suse.com/pipermail/sle-security-updates/2018-July/004305.html

Activity
            yujian Jian Yu added a comment -

            A newer SLES12 SP3 kernel update is being worked on in LU-11255. Let's close this ticket.


            gerrit Gerrit Updater added a comment -

            Jian Yu (yujian@whamcloud.com) uploaded a new patch: https://review.whamcloud.com/33054
            Subject: LU-11173 kernel: kernel update [SLES12 SP3 4.4.140-94.42]
            Project: fs/lustre-release
            Branch: b2_10
            Current Patch Set: 1
            Commit: a204e2d527d3b6ce588413fd422610c587a340a0

            simmonsja James A Simmons added a comment - edited

            Really? We still missed a dev_read_only case. That means this test fails for Ubuntu support, since Ubuntu doesn't have the dev_read_only patches.

            yujian Jian Yu added a comment -

            After removing dev_read_only-3.9.patch, sanity test 802 (simulate readonly device) failed as follows:

            Lustre: DEBUG MARKER: mkdir -p /mnt/lustre-mds1; mount -t lustre -o rdonly_dev  /dev/mapper/mds1_flakey /mnt/lustre-mds1
            LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
            format at osd_handler.c:7368:osd_mount doesn't end in newline
            Lustre: lustre-MDT0000-osd: not support dev_rdonly on this device
            LustreError: 14283:0:(obd_config.c:559:class_setup()) setup lustre-MDT0000-osd failed (-95)
            LustreError: 14283:0:(obd_mount.c:202:lustre_start_simple()) lustre-MDT0000-osd setup error -95
            LustreError: 14283:0:(obd_mount_server.c:1902:server_fill_super()) Unable to start osd on /dev/mapper/mds1_flakey: -95
            LustreError: 14283:0:(obd_mount.c:1599:lustre_fill_super()) Unable to mount  (-95)
            

            Maloo report: https://testing.whamcloud.com/test_sets/c973a2a4-a5f0-11e8-a5f2-52540065bddc
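
            The -95 in those LustreError lines is -EOPNOTSUPP: the osd rejects the rdonly_dev mount option because the unpatched kernel has no dev_read_only support. The errno mapping can be confirmed from the kernel UAPI headers (assumes a Linux host with the headers installed at the usual path):

            ```shell
            # 95 is EOPNOTSUPP in the generic errno table shipped with the kernel headers.
            grep -w EOPNOTSUPP /usr/include/asm-generic/errno.h
            ```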

            yujian Jian Yu added a comment -

            Thank you for the advice, Andreas.
            Do I understand correctly that, for the current SLES12 SP3 patch list, we need to keep the raid5-mmp patch and can remove the blkdev_tunables patch?


            adilger Andreas Dilger added a comment -

            Yes, the raid5-mmp patch didn't get accepted. While there is still a race condition with the patch applied, it is definitely much smaller than without the patch at all.

            In any case, we haven't supported Lustre+ldiskfs on MD RAID devices for a long time (the MD-RAID rebuild was too slow for very large filesystems or needed dedicated flash devices), and instead we tell people to use ZFS when they want software RAID. So in summary, I don't care a huge amount about that patch anymore.

            Note that there are a couple of other kernel patches in the RHEL7 series to improve performance and/or add functionality (mainly quota related), and a new patch incoming for the T10-PI API change. My position is that these are optional patches and people can use them if they want, but we won't accept "required" kernel patches anymore. The Lustre code has to be able to build against the vanilla kernel, possibly with some reduced functionality, and the ldiskfs module can be built/loaded independently of the main kernel. We've started building the client+server code against the unpatched kernel, and I think that should become part of the required builds for every review patch.

            We're getting closer on the remaining major ldiskfs features being included into upstream ext4 as well, and I'd be happy if we could move that further along. The main outlier at this point is the dir_data feature, and many of the remaining ext4 patches are for performance and adding exports to the code for osd-ldiskfs to use.


            People

              Assignee: Jian Yu
              Reporter: Jian Yu
              Votes: 0
              Watchers: 6