Lustre / LU-5264

ASSERTION( info->oti_r_locks == 0 ) at OST umount

Details

    • Type: Bug
    • Resolution: Fixed
    • Priority: Minor
    • Fix Version: Lustre 2.8.0
    • Affects Version: Lustre 2.4.3
    • Environment: RHEL6 w/ kernel 2.6.32-431.17.1.el6.x86_64

    Description

      While stopping a Lustre filesystem, the following LBUG occurred on an OSS:

      ----8< ----
      LustreError: 4581:0:(osd_handler.c:5343:osd_key_exit()) ASSERTION( info->oti_r_locks == 0 ) failed:
      Lustre: server umount scratch3-OST0130 complete
      LustreError: 4581:0:(osd_handler.c:5343:osd_key_exit()) LBUG
      Pid: 4581, comm: ll_ost00_070

      Call Trace:
      [<ffffffffa0d8c895>] libcfs_debug_dumpstack+0x55/0x80 [libcfs]
      [<ffffffffa0d8ce97>] lbug_with_loc+0x47/0xb0 [libcfs]
      [<ffffffffa14da67b>] osd_key_exit+0x5b/0xc0 [osd_ldiskfs]
      [<ffffffffa0e5d9f8>] lu_context_exit+0x58/0xa0 [obdclass]
      [<ffffffffa0ffd749>] ptlrpc_main+0xa59/0x1700 [ptlrpc]
      [<ffffffffa0ffccf0>] ? ptlrpc_main+0x0/0x1700 [ptlrpc]
      [<ffffffff8100c20a>] child_rip+0xa/0x20
      [<ffffffffa0ffccf0>] ? ptlrpc_main+0x0/0x1700 [ptlrpc]
      [<ffffffffa0ffccf0>] ? ptlrpc_main+0x0/0x1700 [ptlrpc]
      [<ffffffff8100c200>] ? child_rip+0x0/0x20

      Kernel panic - not syncing: LBUG
      Pid: 4581, comm: ll_ost00_070 Tainted: G W --------------- 2.6.32-431.11.2.el6.Bull.48.x86_64 #1
      Call Trace:
      [<ffffffff81528393>] ? panic+0xa7/0x16f
      [<ffffffffa0d8ceeb>] ? lbug_with_loc+0x9b/0xb0 [libcfs]
      [<ffffffffa14da67b>] ? osd_key_exit+0x5b/0xc0 [osd_ldiskfs]
      [<ffffffffa0e5d9f8>] ? lu_context_exit+0x58/0xa0 [obdclass]
      [<ffffffffa0ffd749>] ? ptlrpc_main+0xa59/0x1700 [ptlrpc]
      [<ffffffffa0ffccf0>] ? ptlrpc_main+0x0/0x1700 [ptlrpc]
      [<ffffffff8100c20a>] ? child_rip+0xa/0x20
      [<ffffffffa0ffccf0>] ? ptlrpc_main+0x0/0x1700 [ptlrpc]
      [<ffffffffa0ffccf0>] ? ptlrpc_main+0x0/0x1700 [ptlrpc]
      [<ffffffff8100c200>] ? child_rip+0x0/0x20
      ----8< ----

      There were 15 OSTs mounted on the OSS. One umount process had completed, while the others were still running at crash time. All umount processes were running in parallel because of Shine (shine stop -f scratch3 -n @io):

      ----8< ----
      crash> ps | grep umount
      21639 21636 25 ffff88084445e080 IN 0.0 105176 760 umount
      21642 21638 25 ffff8807e0f86b40 IN 0.0 105176 760 umount
      21643 21637 26 ffff880b6a78e100 IN 0.0 105176 760 umount
      21646 21640 25 ffff880c789fcb40 IN 0.0 106068 756 umount
      21649 21644 2 ffff880f9c9740c0 IN 0.0 105176 760 umount
      21651 21645 17 ffff88083ecfb580 IN 0.0 106068 756 umount
      21653 21648 15 ffff880f97935580 IN 0.0 105176 760 umount
      21655 21650 6 ffff880fc5c554c0 IN 0.0 105176 760 umount
      21657 21652 25 ffff881076294080 IN 0.0 105176 756 umount
      21659 21654 19 ffff880f8f245500 IN 0.0 106068 760 umount
      21661 21656 11 ffff8807ec1214c0 IN 0.0 105176 764 umount
      21663 21660 30 ffff8808122a9500 IN 0.0 106068 764 umount
      21664 21658 3 ffff880b3d1d7500 IN 0.0 105176 764 umount
      21665 21662 5 ffff8807ec120a80 IN 0.0 106068 764 umount
      ----8< ----

      Backtrace of the process:

      ----8< ----
      crash> bt
      PID: 4581 TASK: ffff881004af1540 CPU: 8 COMMAND: "ll_ost00_070"
      #0 [ffff881004af3cb8] machine_kexec at ffffffff8103915b
      #1 [ffff881004af3d18] crash_kexec at ffffffff810c5e42
      #2 [ffff881004af3de8] panic at ffffffff8152839a
      #3 [ffff881004af3e68] lbug_with_loc at ffffffffa0d8ceeb [libcfs]
      #4 [ffff881004af3e88] osd_key_exit at ffffffffa14da67b [osd_ldiskfs]
      #5 [ffff881004af3e98] lu_context_exit at ffffffffa0e5d9f8 [obdclass]
      #6 [ffff881004af3eb8] ptlrpc_main at ffffffffa0ffd749 [ptlrpc]
      #7 [ffff881004af3f48] kernel_thread at ffffffff8100c20a
      crash> bt -l 4581
      PID: 4581 TASK: ffff881004af1540 CPU: 8 COMMAND: "ll_ost00_070"
      #0 [ffff881004af3cb8] machine_kexec at ffffffff8103915b
      /usr/src/debug/kernel-2.6/linux-2.6.32-431.11.2.el6.Bull.48.x86_64/arch/x86/kernel/machine_kexec_64.c: 336
      #1 [ffff881004af3d18] crash_kexec at ffffffff810c5e42
      /usr/src/debug/kernel-2.6/linux-2.6.32-431.11.2.el6.Bull.48.x86_64/kernel/kexec.c: 1106
      #2 [ffff881004af3de8] panic at ffffffff8152839a
      /usr/src/debug/kernel-2.6/linux-2.6.32-431.11.2.el6.Bull.48.x86_64/kernel/panic.c: 111
      #3 [ffff881004af3e68] lbug_with_loc at ffffffffa0d8ceeb [libcfs]
      /usr/src/debug/lustre-2.4.3/libcfs/libcfs/linux/linux-debug.c: 176
      #4 [ffff881004af3e88] osd_key_exit at ffffffffa14da67b [osd_ldiskfs]
      /usr/src/debug/lustre-2.4.3/lustre/osd-ldiskfs/osd_handler.c: 5345
      #5 [ffff881004af3e98] lu_context_exit at ffffffffa0e5d9f8 [obdclass]
      /usr/src/debug/lustre-2.4.3/lustre/obdclass/lu_object.c: 1662
      #6 [ffff881004af3eb8] ptlrpc_main at ffffffffa0ffd749 [ptlrpc]
      /usr/src/debug/lustre-2.4.3/lustre/ptlrpc/service.c: 2514
      #7 [ffff881004af3f48] kernel_thread at ffffffff8100c20a
      /usr/src/debug/kernel-2.6/linux-2.6.32-431.11.2.el6.Bull.48.x86_64/arch/x86/kernel/entry_64.S: 1235
      ----8< ----

      You can find attached:

      • dmesg.txt: console log from the crash
      • bt-all.merged.txt: merged 'foreach bt' output from the crash

      Attachments

        1. bt-all.merged.txt
          149 kB
        2. bt-f
          3 kB
        3. bt-F
          3 kB
        4. dmesg.txt
          212 kB

          Activity

            pjones Peter Jones added a comment -

            Landed for 2.8


            gerrit Gerrit Updater added a comment -

            Oleg Drokin (oleg.drokin@intel.com) merged in patch http://review.whamcloud.com/13103/
            Subject: LU-5264 obdclass: fix race during key quiescency
            Project: fs/lustre-release
            Branch: master
            Current Patch Set:
            Commit: 42fdf8355791cb682c6120f7950bb2ecd50f97aa

            bfaccini Bruno Faccini (Inactive) added a comment -

            Hmm, the master patch version has successfully passed auto-test, but it definitely still needs to pass the review step. I will try to get more involvement from the reviewers.

            bruno.travouillon Bruno Travouillon (Inactive) added a comment -

            Bruno,

            Can we go ahead and backport the patch to b2_5?

            bfaccini Bruno Faccini (Inactive) added a comment -

            Thanks Bruno, but I believe this could be the effect of the Shine tool running all umounts in parallel ...

            bruno.travouillon Bruno Travouillon (Inactive) added a comment -

            I mean 14 umount processes running for 15 OSTs previously mounted, i.e. 1 OST had been successfully unmounted before the first crash, and 5 OSTs before the second one.

            bruno.travouillon Bruno Travouillon (Inactive) added a comment -

            Hi Bruno,

            I was just looking at the dumps at the customer site. For the first crash, there were still 14 umount processes running for 15 OSTs. For the second, 3 umount processes were remaining for 8 OSTs.

            bfaccini Bruno Faccini (Inactive) added a comment -

            After more debugging on my own crash-dump, I think the problem comes from the fact that upon umount, presumably of the last device using the same OSD back-end, lu_context_key_quiesce() is run to prepare for module unload by removing all of the module's key references in any context linked on the lu_context_remembered list. Threads must therefore protect against such traversal processing when exiting from their context.
            The master patch to implement this is at http://review.whamcloud.com/13103.

            gerrit Gerrit Updater added a comment -

            Faccini Bruno (bruno.faccini@intel.com) uploaded a new patch: http://review.whamcloud.com/13103
            Subject: LU-5264 obdclass: fix race during key quiescency
            Project: fs/lustre-release
            Branch: master
            Current Patch Set: 1
            Commit: 66dcaa9824f4ae35fb375bc9f9da18c481b7a8f7

            bfaccini Bruno Faccini (Inactive) added a comment -

            I have spent some time this week doing more analysis on my own crash-dump, and I should now be able to push a patch soon to fix the suspected race during umounts.

            BTW, can you check in your on-site crash-dump/logs whether this was the last OST to be unmounted?

            bfaccini Bruno Faccini (Inactive) added a comment -

            Yes, this is strange, because you should have crashed with a Kernel Oops/BUG() instead of an LBUG ...
            But this may also be the result of a race where lc_value[21] was still populated when referenced, causing the LBUG thread to trigger the assert, while the concurrent/racing thread may then have zero'ed it ...
            BTW, the "good" news is that during the testing of another patch, where I was running MDT/OST mount/umounts in a loop, I triggered the same LBUG. And its first crash-analysis steps show the same strange behavior! More to come soon.

            People

              bfaccini Bruno Faccini (Inactive)
              bruno.travouillon Bruno Travouillon (Inactive)
