  Lustre / LU-19516

"BUG: scheduling while atomic: rmmod/..." upon ptlrpc module unload


Details

    • Type: Bug
    • Resolution: Unresolved
    • Priority: Medium

    Description

      The problem occurs almost every time we fully shut down Lustre on our client nodes.

      It is annoying because it creates false positives for our node health monitoring framework.

      To be better able to debug it, we set panic_on_warn=1 before unloading the modules, in order to get a crash-dump for post-mortem analysis.

      This way we can get the following panic stack:

      PID: 234890   TASK: ffff000112844c80  CPU: 1    COMMAND: "rmmod"
       #0 [ffff8000a8a4f1f0] crash_setup_regs at ffffc6bc6d4cc0e4
       #1 [ffff8000a8a4f270] panic at ffffc6bc6d362698
       #2 [ffff8000a8a4f300] check_panic_on_warn at ffffc6bc6d3629d4
       #3 [ffff8000a8a4f310] __schedule_bug at ffffc6bc6d3b5588
       #4 [ffff8000a8a4f320] schedule_debug at ffffc6bc6d3be544
       #5 [ffff8000a8a4f3a0] __schedule at ffffc6bc6ea394c8
       #6 [ffff8000a8a4f3f0] schedule at ffffc6bc6ea39c4c
       #7 [ffff8000a8a4f460] schedule_timeout at ffffc6bc6ea423c4
       #8 [ffff8000a8a4f4a0] __wait_for_common at ffffc6bc6ea3af88
       #9 [ffff8000a8a4f500] wait_for_completion_timeout at ffffc6bc6ea3b118
      #10 [ffff8000a8a4f520] cmd_exec at ffffc6bc110b9edc [mlx5_core]
      #11 [ffff8000a8a4f5e0] mlx5_cmd_do at ffffc6bc110baab0 [mlx5_core]
      #12 [ffff8000a8a4f610] mlx5_cmd_exec at ffffc6bc110bab54 [mlx5_core]
      #13 [ffff8000a8a4f670] mlx5_core_destroy_mkey at ffffc6bc110ce3a0 [mlx5_core]
      #14 [ffff8000a8a4f680] destroy_mkey at ffffc6bc13f104b0 [mlx5_ib]
      #15 [ffff8000a8a4f6d0] __mlx5_ib_dereg_mr at ffffc6bc13f146f4 [mlx5_ib]
      #16 [ffff8000a8a4f720] mlx5_ib_dereg_mr at ffffc6bc13f13868 [mlx5_ib]
      #17 [ffff8000a8a4f750] ib_dereg_mr_user at ffffc6bc11de4090 [ib_core]
      #18 [ffff8000a8a4f790] kiblnd_destroy_fmr_pool.constprop.0 at ffffc6bc15dea040 [ko2iblnd]
      #19 [ffff8000a8a4f810] kiblnd_net_fini_pools at ffffc6bc15dea608 [ko2iblnd]
      #20 [ffff8000a8a4f8e0] kiblnd_shutdown at ffffc6bc15df0b30 [ko2iblnd]
      #21 [ffff8000a8a4f940] lnet_shutdown_lndni at ffffc6bc158da7d4 [lnet]
      #22 [ffff8000a8a4f9c0] lnet_shutdown_lndnet at ffffc6bc158dab64 [lnet]
      #23 [ffff8000a8a4fa20] lnet_shutdown_lndnets at ffffc6bc158dacc4 [lnet]
      #24 [ffff8000a8a4fa50] LNetNIFini at ffffc6bc158dafa0 [lnet]
      #25 [ffff8000a8a4fa80] ptlrpc_exit_portals at ffffc6bc15bb7808 [ptlrpc]
      #26 [ffff8000a8a4faa0] ptlrpc_exit at ffffc6bc15c1d2d8 [ptlrpc]
      #27 [ffff8000a8a4fb10] __do_sys_delete_module at ffffc6bc6d48b518
      #28 [ffff8000a8a4fb40] __arm64_sys_delete_module at ffffc6bc6d48b6d4
      #29 [ffff8000a8a4fe30] invoke_syscall.constprop.0 at ffffc6bc6d2bb5c8
      #30 [ffff8000a8a4fe60] do_el0_svc at ffffc6bc6d2bb690
      #31 [ffff8000a8a4fe80] el0_svc at ffffc6bc6ea32cb4
      #32 [ffff8000a8a4fea0] el0t_64_sync_handler at ffffc6bc6ea33448
      #33 [ffff8000a8a4ffd8] el0t_64_sync at ffffc6bc6d291694
           PC: 0000edc8e06bce4c   LR: 0000c2bdd8b588c0   SP: 0000ffffc28ade30
          X29: 0000ffffc28ade30  X28: 0000c2be152f3620  X27: 0000000000000000
          X26: 0000ffffc28ade98  X25: 0000c2be152f02a0  X24: 0000ffffc28af656
          X23: 0000000000000001  X22: 0000000000000000  X21: 0000ffffc28ade90
          X20: 0000000000000000  X19: 0000c2be152f3620  X18: 0000000000000006
          X17: 0000edc8e06bce40  X16: 0000c2bdd8b7fcc0  X15: 0000000000000001
          X14: 0000000000000000  X13: 0000000000001010  X12: 0000ffffc28adc20
          X11: 00000000ffffffd8  X10: 0000000000000000   X9: 000000000000000a
           X8: 000000000000006a   X7: 0000000000000000   X6: 0000ffffc28acdc9
           X5: 0000000000002002   X4: 0000edc8e07151b0   X3: 0000000000000000
           X2: 0000000000000000   X1: 0000000000000800   X0: 0000c2be152f3688
          ORIG_X0: 0000c2be152f3688  SYSCALLNO: 6a  PSTATE: 00001000 

      According to crash-dump analysis (mainly full stack unwinding, including inlined functions/macros) and the associated code browsing, it appears that this regression was introduced by https://review.whamcloud.com/c/fs/lustre-release/+/59059/ for LU-18966.
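
      For context, with that change the sleeping teardown path ends up being reached with the pool-set spin-lock held, roughly like this for the FMR case (a simplified illustration, not a verbatim copy of the patch):

          spin_lock(&fps->fps_lock);
          /* both calls may sleep: ib_dereg_mr() -> mlx5_cmd_exec() ->
           * wait_for_completion_timeout(), as in the stack above */
          kiblnd_destroy_fmr_pool_list(&fps->fps_failed_pool_list);
          kiblnd_destroy_fmr_pool_list(&fps->fps_pool_list);
          spin_unlock(&fps->fps_lock);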

      I believe a better fix would be to not call kiblnd_destroy_pool_list()/kiblnd_destroy_fmr_pool_list() from kiblnd_fini_poolset()/kiblnd_fini_fmr_poolset() respectively with the [f]ps_lock spin-lock held, but instead to move the targeted lists onto a local list_head while holding the lock and only destroy them after the spin-lock has been released, because a lot of cleanup work may need to be done there, with scheduling possible (as in this stack), which triggers this warning.
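
      A minimal sketch of what I have in mind, for the FMR case (structure/field names as I recall them from o2iblnd; the "zombies" local list is only illustrative, not the actual patch):

          static void
          kiblnd_fini_fmr_poolset(struct kib_fmr_poolset *fps)
          {
                  LIST_HEAD(zombies);     /* local list filled under the lock */

                  spin_lock(&fps->fps_lock);
                  /* detach both pool lists while the spin-lock is held... */
                  list_splice_init(&fps->fps_failed_pool_list, &zombies);
                  list_splice_init(&fps->fps_pool_list, &zombies);
                  spin_unlock(&fps->fps_lock);

                  /* ...and only destroy them afterwards, where sleeping
                   * (e.g. in ib_dereg_mr()) is allowed */
                  if (!list_empty(&zombies))
                          kiblnd_destroy_fmr_pool_list(&zombies);
          }

      The same splice-then-destroy pattern would apply to kiblnd_fini_poolset() with ps_lock.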

      I am presently testing a patch and may be able to push it soon.


            People

              Assignee: bfaccini-nvda Bruno Faccini
              Reporter: bfaccini-nvda Bruno Faccini
              Votes: 0
              Watchers: 4
