
osc_cache.c:899:osc_extent_wait() timeout quite often

Details

    • Type: Bug
    • Resolution: Duplicate
    • Priority: Critical
    • Fix Version/s: None
    • Affects Version/s: Lustre 2.5.0, Lustre 2.4.2
    • Labels: None
    • Environment: RHEL6
    • Severity: 2
    • Rank: 12438

    Description

      We hit client hangs quite often on all of the login nodes, with the following Lustre error message printed. The client cannot recover until it is rebooted.

      Jan 22 17:23:23 ff01 kernel: LustreError: 84026:0:(osc_cache.c:899:osc_extent_wait()) extent ffff8831a49b0678@{[0 -> 0/255], [3|0|+|rpc|wihY|ffff88283005bc48], [4096|1|+||ffff8828fb76b228|256|ffff88319695e040]} home2-OST000b-osc-ffff883fdbbd8800: wait ext to 0 timedout, recovery in progress?
      

      Attachments

        1. lctl.dk.23.17.tgz
          1.40 MB
        2. lctl.dk.after.tgz
          885 kB
        3. lctl.dk1.tgz
          0.2 kB
        4. messages.after_call_trace
          1.75 MB
        5. messages.after_osc_msg
          2.51 MB
        6. messages.before_call_trace
          1.01 MB

        Issue Links

          Activity

            [LU-4552] osc_cache.c:899:osc_extent_wait() timeout quite often

            ihara Shuichi Ihara (Inactive) added a comment -

            Thanks, Bruno!
            After setting "echo 0 > /proc/fs/lustre/ldlm/namespaces/*/early_lock_cancel", the problem was not reproduced, so it looks like this is the same problem as LU-4300. We tried a couple of times, but nothing happened and the installer finished without errors.

            bfaccini Bruno Faccini (Inactive) added a comment -

            Hello Shuichi,
            After having a look at the back-traces (I still need to review the Lustre debug logs!), your problem seems similar to the one reported in LU-4300.
            Also, could you try to run the same 100% reproducer on a node where ELC has been disabled? I think this can be set with "echo 0 > /proc/fs/lustre/ldlm/namespaces/*/early_lock_cancel".

            ihara Shuichi Ihara (Inactive) added a comment -

            Here are backtraces from 1) before dumping the call trace, 2) after dumping the call trace, and 3) after the osc_extent_wait() timeout messages were printed.

            ihara Shuichi Ihara (Inactive) added a comment -

            Jinshan, Hongchao,
            No, this is not an RPM package, and it is not open-source software, so we can't share it. It's a Java-based installer. I don't think it does anything very specific, but I couldn't reproduce this problem any other way in our lab either.
            However, we can reproduce this problem 100% of the time with that Java installer at the customer site. If you want any additional information, please let me know. We will collect whatever information you want and reproduce the problem.
            hongchao.zhang Hongchao Zhang added a comment - - edited

            Hi Ihara,

            Do you use rpm as the software installer?
            I ran "while [ true ]; do du -a /mnt/lustre >/dev/null 2>&1 ; done &" in the background and continuously ran "rpm -ivh", but couldn't reproduce it. Both 2.5.0 and 2.4.2 were tested.

            Thanks
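            The reproduction attempt described in the comment above can be sketched as a standalone script. Everything here is an assumption for illustration: the mount point, the package directory, and the exact rpm invocation are placeholders, not the commands from the original test run.

```shell
#!/bin/sh
# Sketch of the reproducer attempt from the thread (hypothetical paths):
# run a du loop in the background while installing packages in the foreground.
MOUNT=${MOUNT:-/mnt/lustre}      # should point at a Lustre client mount
PKGS=${PKGS:-$MOUNT/pkgs}        # directory holding the RPMs to install

# Background metadata load: walk the whole tree in a loop, as in the comment.
while true; do
    du -a "$MOUNT" >/dev/null 2>&1
done &
DU_PID=$!

# Foreground load: install packages onto the same filesystem.
# (the original attempt used plain "rpm -ivh")
for pkg in "$PKGS"/*.rpm; do
    [ -e "$pkg" ] || continue    # skip if the glob matched nothing
    rpm -ivh "$pkg"
done

# Stop the background du loop once the install pass finishes.
kill "$DU_PID" 2>/dev/null || true
wait "$DU_PID" 2>/dev/null || true
```

            Per the thread, the contention that matters is the du loop and the installer running on the same client at the same time, so both loads must target the same mount.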

            jay Jinshan Xiong (Inactive) added a comment -

            Is it easy to reproduce? In that case, it would be a good idea to share the reproducer program with us.

            ihara Shuichi Ihara (Inactive) added a comment -

            Hi Jinshan,
            We don't have it yet, but we can reproduce the same problem and get a backtrace. I will collect them soon. Meanwhile, I'm uploading the Lustre debug log that we got.

            jay Jinshan Xiong (Inactive) added a comment -

            Will you show us the backtrace of all processes on the node when this occurs?

            ihara Shuichi Ihara (Inactive) added a comment -

            OK, reproduced many times. We confirmed this hang happens when a software installer (installing binaries to Lustre) is running and a "du" command is also running as a background job on the same client.
            We thought LU-4509 might be related to this issue, so we applied that patch and tested it.

            As far as we tested, the same hang eventually happened (the running installer and du commands hung). However, the filesystem was still accessible (it was possible to do IO to Lustre, and it worked), so this is a different situation from before applying the LU-4509 patches (without the patches, we couldn't do anything after the problem happened).

            However, we are still hitting application hangs (rather than node hangs) even with the LU-4509 patches applied.

            By the way, we saw the following call trace messages before the osc_extent_wait() timeout messages; this happens whether or not the LU-4509 patch is applied.

            Jan 29 03:30:51 ff04 kernel: INFO: task ldlm_bl_32:122671 blocked for more than 120 seconds.
            Jan 29 03:30:51 ff04 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
            Jan 29 03:30:51 ff04 kernel: ldlm_bl_32    D 000000000000000b     0 122671      2 0x00000000
            Jan 29 03:30:51 ff04 kernel: ffff883f1f083ce0 0000000000000046 0000000000000000 0000000100000020
            Jan 29 03:30:51 ff04 kernel: 52e7f60b0000000b 0000000000052daf 0001df2f00000000 0000015f00000000
            Jan 29 03:30:51 ff04 kernel: ffff883f1f081ab8 ffff883f1f083fd8 000000000000fb88 ffff883f1f081ab8
            Jan 29 03:30:51 ff04 kernel: Call Trace:
            Jan 29 03:30:51 ff04 kernel: [<ffffffff8150ed3e>] __mutex_lock_slowpath+0x13e/0x180
            Jan 29 03:30:51 ff04 kernel: [<ffffffff8150ebdb>] mutex_lock+0x2b/0x50
            Jan 29 03:30:51 ff04 kernel: [<ffffffffa0d4225f>] cl_lock_mutex_get+0x6f/0xd0 [obdclass]
            Jan 29 03:30:51 ff04 kernel: [<ffffffffa1286b9a>] osc_ldlm_blocking_ast+0x7a/0x350 [osc]
            Jan 29 03:30:51 ff04 kernel: [<ffffffffa0c75a81>] ? libcfs_debug_msg+0x41/0x50 [libcfs]
            Jan 29 03:30:51 ff04 kernel: [<ffffffffa0f06680>] ldlm_handle_bl_callback+0x130/0x400 [ptlrpc]
            Jan 29 03:30:51 ff04 kernel: [<ffffffffa0f06bb1>] ldlm_bl_thread_main+0x261/0x3c0 [ptlrpc]
            Jan 29 03:30:51 ff04 kernel: [<ffffffff81063310>] ? default_wake_function+0x0/0x20
            Jan 29 03:30:51 ff04 kernel: [<ffffffffa0f06950>] ? ldlm_bl_thread_main+0x0/0x3c0 [ptlrpc]
            Jan 29 03:30:51 ff04 kernel: [<ffffffff81096916>] kthread+0x96/0xa0
            Jan 29 03:30:51 ff04 kernel: [<ffffffff8100c0ca>] child_rip+0xa/0x20
            Jan 29 03:30:51 ff04 kernel: [<ffffffff81096880>] ? kthread+0x0/0xa0
            Jan 29 03:30:51 ff04 kernel: [<ffffffff8100c0c0>] ? child_rip+0x0/0x20
            
            pjones Peter Jones added a comment -

            Hongchao

            Could you please advise on this ticket?

            Thanks

            Peter


            People

              Assignee: hongchao.zhang Hongchao Zhang
              Reporter: ihara Shuichi Ihara (Inactive)
              Votes: 0
              Watchers: 9

              Dates

                Created:
                Updated:
                Resolved: