Lustre / LU-5040

kernel BUG at fs/jbd2/transaction.c:1033

Details


    Description

      The MDT crashed with:

      <4>------------[ cut here ]------------
      <2>kernel BUG at fs/jbd2/transaction.c:1033!
      [1]kdb> sr 8
      SysRq : Changing Loglevel
      Loglevel set to 8
      [1]kdb> sr p
      SysRq : Show Regs
      CPU 1
      Modules linked in: osp(U) lod(U) mdt(U) mgs(U) mgc(U) fsfilt_ldiskfs(U) osd_ldiskfs(U) ldiskfs(U) lquota(U) jbd2 mdd(U) lustre(U) lov(U) osc(U) mdc(U) fid(U) fld(U) ko2iblnd(U) ptlrpc(U) obdclass(U) lnet(U) lvfs(U) sha512_generic sha256_generic crc32c_intel libcfs(U) dm_round_robin scsi_dh_rdac lpfc(U) scsi_transport_fc scsi_tgt nfsd lockd nfs_acl auth_rpcgss exportfs sunrpc bonding 8021q garp stp llc ib_ucm(U) rdma_ucm(U) rdma_cm(U) iw_cm(U) ib_addr(U) ib_ipoib(U) ib_cm(U) ib_sa(U) ipv6 ib_uverbs(U) ib_umad(U) mlx4_ib(U) ib_mad(U) ib_core(U) dm_multipath tcp_bic power_meter dcdbas microcode iTCO_wdt iTCO_vendor_support shpchp mlx4_core(U) memtrack(U) ses enclosure sg tg3 hwmon ext3 jbd sd_mod crc_t10dif wmi megaraid_sas dm_mirror dm_region_hash dm_log dm_mod gru [last unloaded: scsi_wait_scan]

      Pid: 13917, comm: mdt_rdpg02_017 Not tainted 2.6.32-358.23.2.el6.20140115.x86_64.lustre241 #1 Dell Inc. PowerEdge R720/0VWT90
      RIP: 0010:[<ffffffffa0bd88ad>]  [<ffffffffa0bd88ad>] jbd2_journal_dirty_metadata+0x10d/0x150 [jbd2]
      RSP: 0018:ffff880f537198a0  EFLAGS: 00010246
      RAX: ffff880f88da9cc0 RBX: ffff880eb8352d08 RCX: ffff880bf382b610
      RDX: 0000000000000000 RSI: ffff880bf382b610 RDI: 0000000000000000
      RBP: ffff880f537198c0 R08: 2010000000000000 R09: f3ee8046d0a58402
      R10: 0000000000000001 R11: ffff880863dd6e10 R12: ffff880f4897f518
      R13: ffff880bf382b610 R14: ffff881007dcc800 R15: 0000000000000008
      FS:  00007fffedaf3700(0000) GS:ffff88084c400000(0000) knlGS:0000000000000000
      CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
      CR2: 000000000061c9b8 CR3: 0000000001a25000 CR4: 00000000000407e0
      DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
      DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
      Process mdt_rdpg02_017 (pid: 13917, threadinfo ffff880f53718000, task ffff880f5370aae0)
      Stack:
       ffff880eb8352d08 ffffffffa0ca92d0 ffff880bf382b610 0000000000000000
      <d> ffff880f53719900 ffffffffa0c680bb ffff880f537198f0 ffffffff810962ff
      <d> ffff8810213f3350 ffff880eb8352d08 0000000000000018 ffff880bf382b610
      Call Trace:
       [<ffffffffa0c680bb>] __ldiskfs_handle_dirty_metadata+0x7b/0x100 [ldiskfs]
       [<ffffffff810962ff>] ? wake_up_bit+0x2f/0x40
       [<ffffffffa0c9ea55>] ldiskfs_quota_write+0x165/0x210 [ldiskfs]
       [<ffffffff811e2221>] v2_write_file_info+0xa1/0xe0
       [<ffffffff811de328>] dquot_acquire+0x138/0x140
       [<ffffffffa0c9d5f6>] ldiskfs_acquire_dquot+0x66/0xb0 [ldiskfs]
       [<ffffffff811e029c>] dqget+0x2ac/0x390
       [<ffffffff811e0848>] dquot_initialize+0x98/0x240
       [<ffffffffa0c9d812>] ldiskfs_dquot_initialize+0x62/0xc0 [ldiskfs]
       [<ffffffffa0cf8d6f>] osd_attr_set+0x12f/0x540 [osd_ldiskfs]
       [<ffffffffa0eb15cb>] lod_attr_set+0x12b/0x450 [lod]
       [<ffffffffa0b6d411>] mdd_attr_set_internal+0x151/0x230 [mdd]
       [<ffffffffa0b706ea>] mdd_attr_set+0x107a/0x1390 [mdd]
       [<ffffffffa06fd011>] ? lustre_pack_reply_v2+0x1e1/0x280 [ptlrpc]
       [<ffffffffa0e0e182>] mdt_mfd_close+0x502/0x6e0 [mdt]
       [<ffffffffa0e0f73a>] mdt_close+0x67a/0xab0 [mdt]
       [<ffffffffa0de7ad7>] mdt_handle_common+0x647/0x16d0 [mdt]
       [<ffffffffa0e21635>] mds_readpage_handle+0x15/0x20 [mdt]
       [<ffffffffa070d3d8>] ptlrpc_server_handle_request+0x398/0xc60 [ptlrpc]
       [<ffffffffa04175de>] ? cfs_timer_arm+0xe/0x10 [libcfs]
       [<ffffffffa0428d9f>] ? lc_watchdog_touch+0x6f/0x170 [libcfs]
       [<ffffffffa0704739>] ? ptlrpc_wait_event+0xa9/0x290 [ptlrpc]
       [<ffffffff81055813>] ? __wake_up+0x53/0x70
       [<ffffffffa070e76e>] ptlrpc_main+0xace/0x1700 [ptlrpc]
       [<ffffffffa070dca0>] ? ptlrpc_main+0x0/0x1700 [ptlrpc]
       [<ffffffff8100c0ca>] child_rip+0xa/0x20
       [<ffffffffa070dca0>] ? ptlrpc_main+0x0/0x1700 [ptlrpc]
       [<ffffffffa070dca0>] ? ptlrpc_main+0x0/0x1700 [ptlrpc]
       [<ffffffff8100c0c0>] ? child_rip+0x0/0x20
      Code: c6 9c 03 00 00 4c 89 f7 e8 11 97 96 e0 48 8b 33 ba 01 00 00 00 4c 89 e7 e8 11 ec ff ff 4c 89 f0 66 ff 00 66 66 90 e9 73 ff ff ff <0f> 0b eb fe 0f 0b eb fe 0f 0b 66 
      

After recovery it crashed again in the same place.

AFTER RECOVERY

      Lustre: nbp7-MDT0000: recovery is timed out, evict stale exports
      Lustre: nbp7-MDT0000: disconnecting 30 stale clients
      LustreError: 5667:0:(mdt_lvb.c:157:mdt_lvbo_fill()) nbp7-MDT0000: expected 56 actual 0.
      Lustre: nbp7-MDT0000: Recovery over after 5:02, of 11832 clients 11802 recovered and 30 were evicted.
      ------------[ cut here ]------------
      kernel BUG at fs/jbd2/transaction.c:1033!
      

Rebooted and ran fsck.

Ran recovery; it crashed again in the same place.

Rebooted and mounted with abort_recov; no crash so far.
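The recovery steps described above correspond, in sketch form, to the commands below. The device and mount-point names are placeholders; `abort_recov` is the Lustre mount option that skips waiting for client recovery:

```shell
# Check and repair the ldiskfs backend of the target (device is a placeholder)
e2fsck -fy /dev/mdt_device

# Remount the target, aborting client recovery instead of waiting for it
mount -t lustre -o abort_recov /dev/mdt_device /mnt/lustre/mdt
```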

Activity
            pjones Peter Jones added a comment -

            Landed for 2.5.4 and 2.7


jaylan Jay Lan (Inactive) added a comment -

Thank you, Zhenyu, for the update. I will pick up the new patch set.
            bobijam Zhenyu Xu added a comment -

The patch has been updated based on the review results.


jaylan Jay Lan (Inactive) added a comment -

That is fine, Zhenyu.

Peter mentioned that we used to have too much information in JIRA, and that Intel therefore no longer logs Gerrit messages to JIRA.

We do not need messages about Jenkins, Autotest, or Maloo. A simple "Patch Set # uploaded" message to JIRA for every new patch set would be sufficient, and I do not consider it noisy. I think it could be implemented in your system.
            bobijam Zhenyu Xu added a comment -

Sorry about that; I forgot to update here. I have just updated it in Gerrit.


mhanafi Mahmoud Hanafi added a comment -

I think the LU should be updated whenever the provided patch is changed or updated.
            green Oleg Drokin added a comment -

Yes, unfortunately the 7/17 version crashes in almost exactly the same way (with a slightly different backtrace) in my testing, but the 7/25 version does not crash in my testing.
So please apply the newer patch.


jaylan Jay Lan (Inactive) added a comment -

We picked up the patch on 7/17. There is a newer version of the patch from 7/25 that we were not aware of.

mhanafi Mahmoud Hanafi added a comment -

And a second one crashed:

            Pid: 17618, comm: ll_ost03_056 Not tainted 2.6.32-358.23.2.el6.20140115.x86_64.lustre243 #1 SGI.COM SUMMIT/S2600GZ
            RIP: 0010:[<ffffffffa05038ad>]  [<ffffffffa05038ad>] jbd2_journal_dirty_metadata+0x10d/0x150 [jbd2]
            RSP: 0018:ffff881f7a1a9530  EFLAGS: 00010246
            RAX: ffff880b9a89d4c0 RBX: ffff881a9133aaf8 RCX: ffff882011fb7a20
            RDX: 0000000000000000 RSI: ffff882011fb7a20 RDI: 0000000000000000
            RBP: ffff881f7a1a9550 R08: 4010000000000000 R09: dfd00c8a5ed68802
            R10: 0000000000000001 R11: 0000000000000000 R12: ffff88118ba5e208
            R13: ffff882011fb7a20 R14: ffff881fe0bff800 R15: 0000000000001400
            FS:  00007fffedaf0700(0000) GS:ffff881078880000(0000) knlGS:0000000000000000
            CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
            CR2: 00000000006c9038 CR3: 0000000001a25000 CR4: 00000000000407e0
            DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
            DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
            Process ll_ost03_056 (pid: 17618, threadinfo ffff881f7a1a8000, task ffff881f7a196aa0)
            Stack:
             ffff881a9133aaf8 ffffffffa0bec510 ffff882011fb7a20 0000000000000000
            <d> ffff881f7a1a9590 ffffffffa0bab0bb ffff881f7a1a9580 ffffffff810962ff
            <d> ffff88201f19e250 ffff881a9133aaf8 0000000000000400 ffff882011fb7a20
            Call Trace:
             [<ffffffffa0bab0bb>] __ldiskfs_handle_dirty_metadata+0x7b/0x100 [ldiskfs]
             [<ffffffff810962ff>] ? wake_up_bit+0x2f/0x40
             [<ffffffffa0be1c85>] ldiskfs_quota_write+0x165/0x210 [ldiskfs]
             [<ffffffff811e28ae>] write_blk+0x2e/0x30
             [<ffffffff811e2e5a>] remove_free_dqentry+0x8a/0x140
             [<ffffffff811e3807>] do_insert_tree+0x317/0x3d0
             [<ffffffff811e3775>] do_insert_tree+0x285/0x3d0
             [<ffffffff811e3775>] do_insert_tree+0x285/0x3d0
             [<ffffffff811e3775>] do_insert_tree+0x285/0x3d0
             [<ffffffff811e39b8>] qtree_write_dquot+0xf8/0x150
             [<ffffffff811e2c2e>] ? qtree_read_dquot+0x5e/0x200
             [<ffffffff811e2100>] v2_write_dquot+0x30/0x40
             [<ffffffff811de2b0>] dquot_acquire+0xc0/0x140
             [<ffffffffa0be07f6>] ldiskfs_acquire_dquot+0x66/0xb0 [ldiskfs]
             [<ffffffff811e029c>] dqget+0x2ac/0x390
             [<ffffffff811e1b86>] dquot_transfer+0x116/0x620
             [<ffffffff811e09ab>] ? dquot_initialize+0x1fb/0x240
             [<ffffffffa0be0558>] ? __ldiskfs_journal_stop+0x68/0xa0 [ldiskfs]
             [<ffffffff811de4bc>] vfs_dq_transfer+0x6c/0xd0
             [<ffffffffa0c12128>] osd_quota_transfer+0xa8/0x160 [osd_ldiskfs]
             [<ffffffffa05e63ab>] ? lu_context_init+0xab/0x260 [obdclass]
             [<ffffffffa0c1109e>] ? osd_trans_exec_op+0x1e/0x2e0 [osd_ldiskfs]
             [<ffffffffa0c23432>] osd_attr_set+0x102/0x4e0 [osd_ldiskfs]
             [<ffffffffa0cca879>] dt_attr_set.clone.2+0x29/0xc0 [ofd]
             [<ffffffffa0cce362>] ofd_attr_set+0x522/0x6c0 [ofd]
             [<ffffffffa0cbfe2a>] ofd_setattr+0x69a/0xb80 [ofd]
             [<ffffffffa0c9bc1c>] ost_setattr+0x31c/0x990 [ost]
             [<ffffffffa0c9f746>] ost_handle+0x21e6/0x48e0 [ost]
             [<ffffffffa0494124>] ? libcfs_id2str+0x74/0xb0 [libcfs]
             [<ffffffffa077e3b8>] ptlrpc_server_handle_request+0x398/0xc60 [ptlrpc]
             [<ffffffffa04885de>] ? cfs_timer_arm+0xe/0x10 [libcfs]
             [<ffffffffa0499d6f>] ? lc_watchdog_touch+0x6f/0x170 [libcfs]
             [<ffffffffa0775719>] ? ptlrpc_wait_event+0xa9/0x290 [ptlrpc]
             [<ffffffff81063be0>] ? default_wake_function+0x0/0x20
             [<ffffffffa077f74e>] ptlrpc_main+0xace/0x1700 [ptlrpc]
             [<ffffffffa077ec80>] ? ptlrpc_main+0x0/0x1700 [ptlrpc]
             [<ffffffff8100c0ca>] child_rip+0xa/0x20
             [<ffffffffa077ec80>] ? ptlrpc_main+0x0/0x1700 [ptlrpc]
             [<ffffffffa077ec80>] ? ptlrpc_main+0x0/0x1700 [ptlrpc]
             [<ffffffff8100c0c0>] ? child_rip+0x0/0x20
            

mhanafi Mahmoud Hanafi added a comment -

We hit this bug on an OSS with the patch applied:

            -----------[ cut here ]------------
            kernel BUG at fs/jbd2/transaction.c:1033!
            BUG: unable to handle kernel paging request at fffffffffffffff8
            IP: [<ffffffff8145d81d>] kdb_bb+0x3bd/0x1290
            PGD 1a27067 PUD 1a28067 PMD 0 
            Oops: 0000 [#1] SMP 
            
            crash> bt
            PID: 8324   TASK: ffff880afaba4ae0  CPU: 11  COMMAND: "ll_ost03_000"
             #0 [ffff880afabaf340] machine_kexec at ffffffff81035e8b
             #1 [ffff880afabaf3a0] crash_kexec at ffffffff810c0492
             #2 [ffff880afabaf470] kdb_kdump_check at ffffffff812858d7
             #3 [ffff880afabaf480] kdb_main_loop at ffffffff81288ac7
             #4 [ffff880afabaf590] kdb_save_running at ffffffff81282c2e
             #5 [ffff880afabaf5a0] kdba_main_loop at ffffffff81463988
             #6 [ffff880afabaf5e0] kdb at ffffffff81285dc6
             #7 [ffff880afabaf650] report_bug at ffffffff812992b3
             #8 [ffff880afabaf680] die at ffffffff8100f2cf
             #9 [ffff880afabaf6b0] do_trap at ffffffff81542a34
            #10 [ffff880afabaf710] do_invalid_op at ffffffff8100cea5
            #11 [ffff880afabaf7b0] invalid_op at ffffffff8100be5b
                [exception RIP: jbd2_journal_dirty_metadata+269]
                RIP: ffffffffa0ca28ad  RSP: ffff880afabaf860  RFLAGS: 00010246
                RAX: ffff880bb027db80  RBX: ffff88072d37c468  RCX: ffff8805de35b748
                RDX: 0000000000000000  RSI: ffff8805de35b748  RDI: 0000000000000000
                RBP: ffff880afabaf880   R8: 9010000000000000   R9: fa03cbc04565d202
                R10: 0000000000000001  R11: 0000000000000000  R12: ffff8808062b9ba8
                R13: ffff8805de35b748  R14: ffff8805b6d0a800  R15: 0000000000000080
                ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
            #12 [ffff880afabaf888] __ldiskfs_handle_dirty_metadata at ffffffffa0d320bb [ldiskfs]
            #13 [ffff880afabaf8c8] osd_ldiskfs_write_record at ffffffffa0dce92c [osd_ldiskfs]
            #14 [ffff880afabaf958] osd_write at ffffffffa0dcf878 [osd_ldiskfs]
            #15 [ffff880afabaf998] dt_record_write at ffffffffa0638415 [obdclass]
            #16 [ffff880afabaf9c8] tgt_client_data_write at ffffffffa080dcac [ptlrpc]
            #17 [ffff880afabafa08] ofd_txn_stop_cb at ffffffffa0e96ad5 [ofd]
            #18 [ffff880afabafa68] dt_txn_hook_stop at ffffffffa0637f23 [obdclass]
            #19 [ffff880afabafa98] osd_trans_stop at ffffffffa0db0ca7 [osd_ldiskfs]
            #20 [ffff880afabafb18] ofd_trans_stop at ffffffffa0e96882 [ofd]
            #21 [ffff880afabafb28] ofd_attr_set at ffffffffa0e9b225 [ofd]
            #22 [ffff880afabafb88] ofd_setattr at ffffffffa0e8ce2a [ofd]
            #23 [ffff880afabafc18] ost_setattr at ffffffffa0e5dc1c [ost]
            #24 [ffff880afabafc78] ost_handle at ffffffffa0e61746 [ost]
            #25 [ffff880afabafdb8] ptlrpc_server_handle_request at ffffffffa07cf3b8 [ptlrpc]
            #26 [ffff880afabafeb8] ptlrpc_main at ffffffffa07d074e [ptlrpc]
            #27 [ffff880afabaff48] kernel_thread at ffffffff8100c0ca
            

jaylan Jay Lan (Inactive) added a comment -

Thanks, Zhenyu!

People

  bobijam Zhenyu Xu
  mhanafi Mahmoud Hanafi
  Votes: 0
  Watchers: 14