Project: Lustre

LU-10268: rcu_sched self-detected stall in lfsck

Details

    • Type: Bug
    • Resolution: Fixed
    • Priority: Major
    • Fix Version/s: Lustre 2.11.0, Lustre 2.10.3
    • None
    • Environment: toss 3.2-0rc8
      kernel-3.10.0-693.5.2.1chaos.ch6.x86_64
      lustre-2.8.0_13.chaos-1.ch6.x86_64

      See lustre-release-fe-llnl project in Gerrit
    • 3
    • 9223372036854775807

    Description

      lquake-MDT0001 ran out of space while multiple invocations of "lfs migrate --mdt-index XX" were running in parallel. Space was freed up by deleting snapshots, and then an "lctl lfsck_start --all" was invoked on the node hosting the MGS and MDT0000.

      After the layout portion of the lfsck completed and the namespace portion started, we began seeing console messages like this on the node hosting MDT0008:

      INFO: rcu_sched self-detected stall on CPU[ 1678.988863] INFO: rcu_sched detected stalls on CPUs/tasks: { 12} (detected by 2, t=600017 jiffies, g=17401, c=17400, q=850241)
      Task dump for CPU 12:
      lfsck_namespace R  running task        0 36441      2 0x00000088
       0000000000000000 ffff88807ffd8000 0000000000000000 0000000000000002
       ffff88807ffd8008 ffff883f00000141 ffff8840a7003f40 ffff88807ffd7000
       0000000000000010 0000000000000000 fffffffffffffff8 0000000000000001
      Call Trace:
       [<ffffffff8119649f>] ? __alloc_pages_nodemask+0x17f/0x470
       [<ffffffffc030e35d>] ? spl_kmem_alloc_impl+0xcd/0x180 [spl]
       [<ffffffffc030e35d>] ? spl_kmem_alloc_impl+0xcd/0x180 [spl]
       [<ffffffffc0315cb4>] ? xdrmem_dec_bytes+0x64/0xa0 [spl]
       [<ffffffff8119355e>] ? __rmqueue+0xee/0x4a0
       [<ffffffff811ad598>] ? zone_statistics+0x88/0xa0
       [<ffffffff81195e22>] ? get_page_from_freelist+0x502/0xa00
       [<ffffffffc0328a50>] ? nvs_operation+0xf0/0x2e0 [znvpair]
       [<ffffffff816c88d5>] ? mutex_lock+0x25/0x42
       [<ffffffff8119649f>] ? __alloc_pages_nodemask+0x17f/0x470
       [<ffffffff811dd008>] ? alloc_pages_current+0x98/0x110
       [<ffffffffc032afc2>] ? nvlist_lookup_common.part.71+0xa2/0xb0 [znvpair]
       [<ffffffffc032b4b6>] ? nvlist_lookup_byte_array+0x26/0x30 [znvpair]
       [<ffffffffc123d2f3>] ? lfsck_namespace_filter_linkea_entry.isra.64+0x83/0x180 [lfsck]
       [<ffffffffc124f4da>] ? lfsck_namespace_double_scan_one+0x3aa/0x19d0 [lfsck]
       [<ffffffffc08356d6>] ? dbuf_rele+0x36/0x40 [zfs]
       [<ffffffffc11f9c17>] ? osd_index_it_rec+0x1a7/0x240 [osd_zfs]
       [<ffffffffc1250ead>] ? lfsck_namespace_double_scan_one_trace_file+0x3ad/0x830 [lfsck]
       [<ffffffffc1254af5>] ? lfsck_namespace_assistant_handler_p2+0x795/0xa70 [lfsck]
       [<ffffffff811ec173>] ? kfree+0x133/0x170
       [<ffffffffc10283e8>] ? ptlrpc_set_destroy+0x208/0x4f0 [ptlrpc]
       [<ffffffffc1238afe>] ? lfsck_assistant_engine+0x13de/0x21d0 [lfsck]
       [<ffffffff816ca33b>] ? __schedule+0x38b/0x780
       [<ffffffff810c9de0>] ? wake_up_state+0x20/0x20
       [<ffffffffc1237720>] ? lfsck_master_engine+0x1370/0x1370 [lfsck]
       [<ffffffff810b4eef>] ? kthread+0xcf/0xe0
       [<ffffffff810b4e20>] ? insert_kthread_work+0x40/0x40
       [<ffffffff816d6818>] ? ret_from_fork+0x58/0x90
       [<ffffffff810b4e20>] ? insert_kthread_work+0x40/0x40
      
      

      Also, the lfsck_namespace process was reported as stuck by the NMI watchdog. The stacks all look like this:

      NMI watchdog: BUG: soft lockup - CPU#12 stuck for 22s! [lfsck_namespace:36441]
      Modules linked in: osp(OE) mdd(OE) lod(OE) mdt(OE) lfsck(OE) mgc(OE) osd_zfs(OE) lquota(OE) fid(OE) fld(OE) ptlrpc(OE) obdclass(OE) ko2iblnd(OE) lnet(OE) sha512_ssse3 sha512_generic crypto_null libcfs(OE) nfsv3 ib_ipoib rdma_ucm ib_ucm ib_uverbs ib_umad rdma_cm ib_cm iw_cm mlx5_ib iTCO_wdt iTCO_vendor_support ib_core sb_edac edac_core intel_powerclamp coretemp intel_rapl iosf_mbi kvm irqbypass mlx5_core pcspkr devlink joydev i2c_i801 ioatdma lpc_ich zfs(POE) zunicode(POE) zavl(POE) ses icp(POE) enclosure sg shpchp ipmi_si ipmi_devintf ipmi_msghandler acpi_power_meter acpi_cpufreq binfmt_misc zcommon(POE) znvpair(POE) spl(OE) msr_safe(OE) nfsd nfs_acl ip_tables rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver nfs lockd grace fscache dm_round_robin sd_mod crc_t10dif crct10dif_generic scsi_transport_iscsi dm_multipath mgag200 8021q i2c_algo_bit garp drm_kms_helper stp syscopyarea crct10dif_pclmul llc sysfillrect crct10dif_common mrp crc32_pclmul sysimgblt fb_sys_fops crc32c_intel ttm ghash_clmulni_intel ixgbe(OE) drm ahci mpt3sas aesni_intel mxm_wmi libahci dca lrw gf128mul glue_helper ablk_helper cryptd ptp raid_class libata i2c_core scsi_transport_sas pps_core wmi sunrpc dm_mirror dm_region_hash dm_log dm_mod
      CPU: 12 PID: 36441 Comm: lfsck_namespace Tainted: P           OEL ------------   3.10.0-693.5.2.1chaos.ch6.x86_64 #1
      Hardware name: Intel Corporation S2600WTTR/S2600WTTR, BIOS SE5C610.86B.01.01.0016.033120161139 03/31/2016
      task: ffff883f1eed3f40 ti: ffff883f12b04000 task.ti: ffff883f12b04000
      RIP: 0010:[<ffffffffc123d2f5>]  [<ffffffffc123d2f5>] lfsck_namespace_filter_linkea_entry.isra.64+0x85/0x180 [lfsck]
      RSP: 0018:ffff883f12b07ad0  EFLAGS: 00000246
      RAX: 0000000000000000 RBX: ffffffffc032b4b6 RCX: ffff887f19214971
      RDX: 0000000000000000 RSI: ffff883ef42f1010 RDI: ffff883f12b07ba8
      RBP: ffff883f12b07b18 R08: 0000000000000000 R09: 0000000000000025
      R10: ffff883ef42f1010 R11: 0000000000000000 R12: ffff883f12b07ab4
      R13: ffff883ef42f1040 R14: ffff887f1c31a7e0 R15: ffffffffc1282fa3
      FS:  0000000000000000(0000) GS:ffff887f7df00000(0000) knlGS:0000000000000000
      CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      CR2: 00007ffff7ad74f0 CR3: 0000000001a16000 CR4: 00000000001407e0
      DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
      DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
      Stack:
       ffff883f12b07ba8 ffff883ef42f1040 0000000000000001 ffff883f12b07b18
       ffff887f18d35ce8 ffff887f1c31a7e0 ffff883ef42f1000 ffff887f2c933c00
       ffff883ef42f1010 ffff883f12b07c18 ffffffffc124f4da ffffffffc08356d6
      Call Trace:
       [<ffffffffc124f4da>] lfsck_namespace_double_scan_one+0x3aa/0x19d0 [lfsck]
       [<ffffffffc08356d6>] ? dbuf_rele+0x36/0x40 [zfs]
       [<ffffffffc11f9c17>] ? osd_index_it_rec+0x1a7/0x240 [osd_zfs]
       [<ffffffffc1250ead>] lfsck_namespace_double_scan_one_trace_file+0x3ad/0x830 [lfsck]
       [<ffffffffc1254af5>] lfsck_namespace_assistant_handler_p2+0x795/0xa70 [lfsck]
       [<ffffffff811ec173>] ? kfree+0x133/0x170
       [<ffffffffc10283e8>] ? ptlrpc_set_destroy+0x208/0x4f0 [ptlrpc]
       [<ffffffffc1238afe>] lfsck_assistant_engine+0x13de/0x21d0 [lfsck]
       [<ffffffff816ca33b>] ? __schedule+0x38b/0x780
       [<ffffffff810c9de0>] ? wake_up_state+0x20/0x20
       [<ffffffffc1237720>] ? lfsck_master_engine+0x1370/0x1370 [lfsck]
       [<ffffffff810b4eef>] kthread+0xcf/0xe0
       [<ffffffff810b4e20>] ? insert_kthread_work+0x40/0x40
       [<ffffffff816d6818>] ret_from_fork+0x58/0x90
       [<ffffffff810b4e20>] ? insert_kthread_work+0x40/0x40
      Code: c7 47 10 00 00 00 00 45 31 e4 45 31 c0 4d 63 ce 66 0f 1f 44 00 00 4d 85 e4 74 41 41 0f b6 1c 24 41 0f b6 44 24 01 c1 e3 08 09 c3 <41> 39 de 41 89 5d 18 74 47 49 8b 4d 08 48 85 c9 0f 84 ad 00 00
      
      

      Attachments

        Activity


          John L. Hammond (john.hammond@intel.com) merged in patch https://review.whamcloud.com/30421/
          Subject: LU-10268 lfsck: postpone lfsck start until initialized
          Project: fs/lustre-release
          Branch: b2_10
          Current Patch Set:
          Commit: b1e6cdef3f28034f6d1c49e491fbb7837d388c22

          ofaaland Olaf Faaland added a comment -

          Peter,

          I see. It seems to me that use of "topllnl" might lead to mistakes, but I agree it can work if everyone knows the convention.

          Yes, I think the work for master is done.

          thanks,
          Olaf

          pjones Peter Jones added a comment - - edited

          Olaf

          I closed the ticket because the ticket itself is tracking the status for master - outstanding equivalent work against an older maintenance branch would still be tracked by the presence of the topllnl label. Is more work still needed for master?

          Peter

          ofaaland Olaf Faaland added a comment -

          Peter,
          I believe you closed this too early. Fan Yong said:

          One most possible case is that such linkEA was corrupted

          and so fsck is not safe to run without the patch she is backporting. Her backport isn't yet reviewed and merged, so we're not done yet, right?

          pjones Peter Jones added a comment -

          Landed for 2.11


          Oleg Drokin (oleg.drokin@intel.com) merged in patch https://review.whamcloud.com/30259/
          Subject: LU-10268 lfsck: postpone lfsck start until initialized
          Project: fs/lustre-release
          Branch: master
          Current Patch Set:
          Commit: f95ee72ab6ecffdaf6dd4f0202d954dfc45d0ba1

          pjones Peter Jones added a comment -

          Olaf

          node-provisioning failures indicate an issue in the auto test system rather than a problem with the patch itself. Changes were being made yesterday to split the tests into different test groups so perhaps that was the issue. Fan Yong has re-triggered the tests and they seem to be running ok now

          Peter

          ofaaland Olaf Faaland added a comment -

          Fan,
          I see that your backport wasn't tested because all of the tests failed in provisioning.


          ofaaland Olaf Faaland added a comment -

          I see that you did need to make changes. I'll try with your backport.

          ofaaland Olaf Faaland added a comment -

          I used the original commit from LU-8084 applied to master, https://review.whamcloud.com/#/c/19877/. I didn't see that you had started a backport. Did you find changes were required?


          People

            Assignee: yong.fan nasf (Inactive)
            Reporter: ofaaland Olaf Faaland
            Votes: 0
            Watchers: 4
