Lustre / LU-12830

RHEL8.3 and ZFS: oom on OSS


Details

    • Type: Bug
    • Resolution: Cannot Reproduce
    • Priority: Minor
    • Fix Version/s: None
    • Affects Version/s: Lustre 2.14.0, Lustre 2.15.0
    • Severity: 3

    Description

      This issue was created by maloo for jianyu <yujian@whamcloud.com>

      This issue relates to the following test suite run: https://testing.whamcloud.com/test_sets/1e6f3bc6-e5ef-11e9-b62b-52540065bddc

      test_bonnie failed with an OOM on the OSS:

      [16526.881544] Lustre: DEBUG MARKER: == sanity-benchmark test bonnie: bonnie++ ============================================================ 14:37:57 (1570027077)
      [16528.099983] Lustre: DEBUG MARKER: /usr/sbin/lctl mark min OST has 10511360kB available, using 3438712kB file size
      [16528.357585] Lustre: DEBUG MARKER: min OST has 10511360kB available, using 3438712kB file size
      [16567.214746] irqbalance invoked oom-killer: gfp_mask=0x200da, order=0, oom_score_adj=0
      [16567.215741] irqbalance cpuset=/ mems_allowed=0
      [16567.216221] CPU: 1 PID: 1179 Comm: irqbalance Kdump: loaded Tainted: P           OE  ------------   3.10.0-957.27.2.el7_lustre.x86_64 #1
      [16567.217451] Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011
      [16567.218033] Call Trace:
      [16567.218336]  [<ffffffffad565147>] dump_stack+0x19/0x1b
      [16567.218874]  [<ffffffffad55fb6a>] dump_header+0x90/0x229
      [16567.219422]  [<ffffffffad572b1f>] ? notifier_call_chain+0x4f/0x70
      [16567.220055]  [<ffffffffacec91c8>] ? __blocking_notifier_call_chain+0x58/0x70
      [16567.220779]  [<ffffffffacfbbaae>] check_panic_on_oom+0x2e/0x60
      [16567.221379]  [<ffffffffacfbbecb>] out_of_memory+0x23b/0x4f0
      [16567.221938]  [<ffffffffad56066e>] __alloc_pages_slowpath+0x5d6/0x724
      [16567.222585]  [<ffffffffacfc2524>] __alloc_pages_nodemask+0x404/0x420
      [16567.223225]  [<ffffffffad0128c5>] alloc_pages_vma+0xb5/0x200
      [16567.223840]  [<ffffffffad000b15>] __read_swap_cache_async+0x115/0x190
      [16567.224491]  [<ffffffffad000bb6>] read_swap_cache_async+0x26/0x60
      [16567.225104]  [<ffffffffad000c9c>] swapin_readahead+0xac/0x110
      [16567.225690]  [<ffffffffacfead92>] handle_pte_fault+0x812/0xd10
      [16567.226280]  [<fffffffface2a621>] ? __switch_to+0x151/0x580
      [16567.226858]  [<ffffffffacfed3ad>] handle_mm_fault+0x39d/0x9b0
      [16567.227444]  [<ffffffffacec6efd>] ? hrtimer_start_range_ns+0x1ed/0x3c0
      [16567.228100]  [<ffffffffad572603>] __do_page_fault+0x203/0x4f0
      [16567.228685]  [<ffffffffad5729d6>] trace_do_page_fault+0x56/0x150
      [16567.229287]  [<ffffffffad571f62>] do_async_page_fault+0x22/0xf0
      [16567.229890]  [<ffffffffad56e798>] async_page_fault+0x28/0x30
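
      The failing allocation happens while irqbalance is swapping a page back in, and the path runs through check_panic_on_oom, so the whole OSS was under memory pressure rather than a single Lustre thread failing. A minimal triage sketch, assuming a ZFS OSD as in the ticket title; the sysctl and kstat names below are standard kernel/ZFS knobs, not values captured from this run:

      # Illustrative OSS memory checks (not the exact autotest commands)
      sysctl vm.panic_on_oom vm.overcommit_memory vm.min_free_kbytes
      free -m
      grep -E '^(MemFree|SwapFree|Slab|SUnreclaim):' /proc/meminfo
      # ZFS ARC size vs. its configured ceiling
      awk '$1 == "size" || $1 == "c_max" {print $1, $3}' /proc/spl/kstat/zfs/arcstats
      cat /sys/module/zfs/parameters/zfs_arc_max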
      

      VVVVVVV DO NOT REMOVE LINES BELOW, Added by Maloo for auto-association VVVVVVV
      ost-pools test_23b - trevis-21vm3 crashed during ost-pools test_23b
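
      For a manual reproduction attempt outside autotest, the test is essentially a bonnie++ run on a client mount with the working-set size derived from the smallest OST. A rough equivalent is sketched below, assuming the default /mnt/lustre client mount and running as root; 3358 MB corresponds to the 3438712 kB reported above, and this is not the exact sanity-benchmark invocation:

      # Approximate manual reproduction from a Lustre client
      mkdir -p /mnt/lustre/d0.bonnie
      bonnie++ -d /mnt/lustre/d0.bonnie -s 3358 -u root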

Attachments

Issue Links

Activity

People

    Assignee: WC Triage (wc-triage)
    Reporter: Maloo (maloo)
    Votes: 0
    Watchers: 10

Dates

    Created:
    Updated:
    Resolved: