Lustre / LU-9818

replay-single test 29: lu_object_attr()) ASSERTION( ((o)->lo_header->loh_attr & LOHA_EXISTS) != 0 ) failed


Details

    • Type: Bug
    • Resolution: Duplicate
    • Priority: Minor
    • Severity: 3

    Description

      Only one such failure has been seen so far. This assertion has been hit before in other tickets, but the stack trace here appears to be unique, so I am filing a new ticket.
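
      For context, the assertion fires in lu_object_attr() (lu_object.h:862), which only returns an object's attribute word when the LOHA_EXISTS bit is set in its header, i.e. when the object is known to exist. Below is a minimal, self-contained userspace sketch of that check, reconstructed from the assertion text in the log; the struct layouts, the 0x0001 flag value, and main() are simplified stand-ins for illustration, not the actual Lustre definitions:

      #include <assert.h>
      #include <stdint.h>
      #include <stdio.h>

      /* Simplified stand-ins for the Lustre structures named in the log;
       * only the field and flag from the assertion text are modelled. */
      #define LOHA_EXISTS 0x0001              /* hypothetical flag value */

      struct lu_object_header {
              uint32_t loh_attr;              /* LOHA_* flags */
      };

      struct lu_object {
              struct lu_object_header *lo_header;
      };

      /* Sketch of the check at lu_object.h:862 (not verbatim source):
       * attributes may only be read from an object whose header carries
       * LOHA_EXISTS; in the kernel the failed LASSERT escalates to LBUG. */
      static uint32_t lu_object_attr(const struct lu_object *o)
      {
              assert(((o)->lo_header->loh_attr & LOHA_EXISTS) != 0);
              return o->lo_header->loh_attr;
      }

      int main(void)
      {
              struct lu_object_header hdr = { .loh_attr = 0 }; /* EXISTS clear */
              struct lu_object obj = { .lo_header = &hdr };

              /* Mirrors the failure mode in this ticket: asking for the
               * attributes of an object whose LOHA_EXISTS bit is not set
               * makes the assertion fire and aborts the process. */
              printf("attr = 0x%x\n", lu_object_attr(&obj));
              return 0;
      }

      In the trace below, the orph_cleanup_lu thread reaches this check via orph_declare_index_delete() during the mds_cleanup_orphans pass, so the orphan object evidently no longer carries LOHA_EXISTS in loh_attr by the time its attributes are read.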

      [69962.151605] Lustre: DEBUG MARKER: == replay-single test 29: open(O_CREAT), |X| unlink two, replay, close two (test mds_cleanup_orphans) ====================================================================================================== 08:57:55 (1501592275)
      [69963.207449] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000
      [69963.214497] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000
      [69971.727296] LustreError: 20636:0:(mgc_request.c:603:do_requeue()) failed processing log: -5
      [69989.855016] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180
      [69989.857644] Lustre: Skipped 16 previous similar messages
      [69991.061406] Lustre: DEBUG MARKER: centos-69.localnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
      [69991.289012] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
      [69992.926591] Lustre: lustre-OST0000: deleting orphan objects from 0x0:1315 to 0x0:1345
      [69992.926597] Lustre: lustre-OST0001: deleting orphan objects from 0x0:1283 to 0x0:1313
      [69992.969758] LustreError: 29133:0:(lu_object.h:862:lu_object_attr()) ASSERTION( ((o)->lo_header->loh_attr & LOHA_EXISTS) != 0 ) failed: 
      [69992.979293] LustreError: 29133:0:(lu_object.h:862:lu_object_attr()) LBUG
      [69992.980129] Pid: 29133, comm: orph_cleanup_lu
      [69992.980875] 
      Call Trace:
      [69992.982288]  [<ffffffffa027d7ce>] libcfs_call_trace+0x4e/0x60 [libcfs]
      [69992.983218]  [<ffffffffa027d85c>] lbug_with_loc+0x4c/0xb0 [libcfs]
      [69992.985387]  [<ffffffffa0901a59>] orph_declare_index_delete+0x409/0x450 [mdd]
      [69992.986129]  [<ffffffffa13b5429>] ? lod_trans_create+0x39/0x50 [lod]
      [69992.986803]  [<ffffffffa0901ed1>] orph_key_test_and_del+0x431/0xd20 [mdd]
      [69992.987471]  [<ffffffffa0902d25>] __mdd_orphan_cleanup+0x565/0x7e0 [mdd]
      [69992.990291]  [<ffffffffa09027c0>] ? __mdd_orphan_cleanup+0x0/0x7e0 [mdd]
      [69992.991410]  [<ffffffff810a2eba>] kthread+0xea/0xf0
      [69992.992168]  [<ffffffff810a2dd0>] ? kthread+0x0/0xf0
      [69992.992935]  [<ffffffff8170fb98>] ret_from_fork+0x58/0x90
      [69993.005696]  [<ffffffff810a2dd0>] ? kthread+0x0/0xf0
      [69993.006502] 
      [69993.007197] Kernel panic - not syncing: LBUG
      [69993.007936] CPU: 9 PID: 29133 Comm: orph_cleanup_lu Tainted: P           OE  ------------   3.10.0-debug #2
      [69993.009789] Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011
      [69993.010582]  ffffffffa029ced2 0000000091292d93 ffff8800a719fcb0 ffffffff816fd3e4
      [69993.026450]  ffff8800a719fd30 ffffffff816f8c34 ffffffff00000008 ffff8800a719fd40
      [69993.029288]  ffff8800a719fce0 0000000091292d93 0000000091292d93 ffff88033e52d948
      [69993.076412] Call Trace:
      [69993.077147]  [<ffffffff816fd3e4>] dump_stack+0x19/0x1b
      [69993.077788]  [<ffffffff816f8c34>] panic+0xd8/0x1e7
      [69993.078553]  [<ffffffffa027d874>] lbug_with_loc+0x64/0xb0 [libcfs]
      [69993.079210]  [<ffffffffa0901a59>] orph_declare_index_delete+0x409/0x450 [mdd]
      [69993.079900]  [<ffffffffa13b5429>] ? lod_trans_create+0x39/0x50 [lod]
      [69993.080558]  [<ffffffffa0901ed1>] orph_key_test_and_del+0x431/0xd20 [mdd]
      [69993.082775]  [<ffffffffa0902d25>] __mdd_orphan_cleanup+0x565/0x7e0 [mdd]
      [69993.083436]  [<ffffffffa09027c0>] ? orph_key_test_and_del+0xd20/0xd20 [mdd]
      [69993.084105]  [<ffffffff810a2eba>] kthread+0xea/0xf0
      [69993.084726]  [<ffffffff810a2dd0>] ? kthread_create_on_node+0x140/0x140
      [69993.085372]  [<ffffffff8170fb98>] ret_from_fork+0x58/0x90
      [69993.086314]  [<ffffffff810a2dd0>] ? kthread_create_on_node+0x140/0x140
      

      Crashdump and modules are in /exports/crashdumps/192.168.123.169-2017-08-01-08:58:32 on onyx-68.

      Build tag (centos7 chroot): master-20170801

    People

      Assignee: WC Triage (wc-triage)
      Reporter: Oleg Drokin (green)
      Votes: 0
      Watchers: 2
