Details

    • Type: Bug
    • Resolution: Fixed
    • Priority: Critical
    • Lustre 2.7.0
    • Lustre 2.6.0
    • OpenSFS cluster with 2 MDSs (2 MDTs each) and 4 OSSs (2 OSTs each)
    • 3
    • 15018

    Description

      I was running the "Small files create performance impact by LFSCK" portion of the LFSCK Phase II test plan (LU-3423) and noticed that the speed-limit flag (-s) was not working as expected.

      I ran:

      # lctl lfsck_start -M scratch-MDT0000 -A --reset --type layout -s 1379
      Started LFSCK on the device scratch-MDT0000: scrub layout
      

      With the following results:

      # lctl get_param -n mdd.scratch-MDT0000.lfsck_layout
      
      name: lfsck_layout
      magic: 0xb173ae14
      version: 2
      status: completed
      flags:
      param: all_targets
      time_since_last_completed: 5 seconds
      time_since_latest_start: 2263 seconds
      time_since_last_checkpoint: 5 seconds
      latest_start_position: 0
      last_checkpoint_position: 241696769
      first_failure_position: 0
      success_count: 75
      repaired_dangling: 0
      repaired_unmatched_pair: 6400010
      repaired_multiple_referenced: 0
      repaired_orphan: 0
      repaired_inconsistent_owner: 0
      repaired_others: 0
      skipped: 0
      failed_phase1: 0
      failed_phase2: 0
      checked_phase1: 6912081
      checked_phase2: 0
      run_time_phase1: 2258 seconds
      run_time_phase2: 0 seconds
      average_speed_phase1: 3061 items/sec
      average_speed_phase2: 0 objs/sec
      real-time_speed_phase1: N/A
      real-time_speed_phase2: N/A
      current_position: N/A
      
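      As a rough sanity check on the numbers above (a minimal sketch built only from the lfsck_layout counters shown; the shell variables are mine), 6,912,081 objects at the requested limit of 1379 items/sec should take at least about 5000 seconds of phase-1 time, yet phase 1 finished in 2258 seconds at an average of 3061 items/sec, which is why the limit looks ignored:

      # Hypothetical check: compare the requested -s limit with the observed rate.
      requested_limit=1379
      checked=$(lctl get_param -n mdd.scratch-MDT0000.lfsck_layout | awk '/checked_phase1:/ {print $2}')
      runtime=$(lctl get_param -n mdd.scratch-MDT0000.lfsck_layout | awk '/run_time_phase1:/ {print $2}')
      echo "observed rate:    $((checked / runtime)) items/sec (limit was ${requested_limit})"
      echo "expected minimum: $((checked / requested_limit)) s, actual: ${runtime} s"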

      After that, I ran 'lctl lfsck_start' on the MDS with different values for the speed limit (-s). After a couple of these 'lctl lfsck_start' runs, LFSCK has been stuck in scanning-phase1 for roughly 16 hours. Currently, I see:

      status: scanning-phase1
      

      for both 'lctl get_param mdd.scratch-MDT0000.lfsck_layout' and 'lctl get_param mdd.scratch-MDT0000.lfsck_namespace'.
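
      One way to tell whether phase 1 is actually advancing rather than hung is to sample the counters a few seconds apart; a minimal sketch reusing the same get_param command shown above:

      # Sample the layout LFSCK status twice to see whether phase 1 is moving at all.
      for i in 1 2; do
          lctl get_param -n mdd.scratch-MDT0000.lfsck_layout |
              grep -E 'status|checked_phase1|current_position'
          sleep 10
      done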

      I ran

      # lctl lfsck_stop -M scratch-MDT0000
      

      but it hasn't returned in the past 30 minutes.

      Prior to running lfsck_stop, in dmesg on mds01 (where the 'lctl lfsck_start' and 'lctl lfsck_stop' commands were run) I saw the following:

      LNet: Service thread pid 32564 was inactive for 0.00s. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes:
      LNet: Service thread pid 32564 completed after 0.00s. This indicates the system was overloaded (too many service threads, or there were not enough hardware resources).
      Pid: 32564, comm: mdt02_003
      
      Call Trace:
       [<ffffffffa0576561>] ? libcfs_debug_msg+0x41/0x50 [libcfs]
       [<ffffffffa090ffc8>] ? ptlrpc_server_normal_pending+0x38/0xc0 [ptlrpc]
       [<ffffffffa0911565>] ptlrpc_wait_event+0x2c5/0x2d0 [ptlrpc]
       [<ffffffff81061d00>] ? default_wake_function+0x0/0x20
       [<ffffffffa091ad9f>] ptlrpc_main+0x84f/0x1980 [ptlrpc]
       [<ffffffffa091a550>] ? ptlrpc_main+0x0/0x1980 [ptlrpc]
       [<ffffffff8109abf6>] kthread+0x96/0xa0
       [<ffffffff8100c20a>] child_rip+0xa/0x20
       [<ffffffff8109ab60>] ? kthread+0x0/0xa0
       [<ffffffff8100c200>] ? child_rip+0x0/0x20
      
      LustreError: 20602:0:(lustre_idl.h:775:ostid_to_fid()) bad ost_idx, 0x200000000:731978560 ost_idx:4294936591
      LustreError: 20602:0:(lustre_idl.h:775:ostid_to_fid()) Skipped 3 previous similar messages
      LustreError: 20602:0:(lustre_idl.h:775:ostid_to_fid()) bad ost_idx, 0x400000000:901663376 ost_idx:4294936589
      LustreError: 20602:0:(lustre_idl.h:775:ostid_to_fid()) Skipped 3 previous similar messages
      LustreError: 20602:0:(lustre_idl.h:775:ostid_to_fid()) bad ost_idx, 0x400000000:901663376 ost_idx:4294936589
      LustreError: 20602:0:(lustre_idl.h:775:ostid_to_fid()) Skipped 7 previous similar messages
      LustreError: 20602:0:(lustre_idl.h:775:ostid_to_fid()) bad ost_idx, 0x400000000:901663376 ost_idx:4294936589
      LustreError: 20602:0:(lustre_idl.h:775:ostid_to_fid()) Skipped 11 previous similar messages
      LustreError: 20602:0:(lustre_idl.h:775:ostid_to_fid()) bad ost_idx, 0x500000000:901663376 ost_idx:4294936589
      LustreError: 20602:0:(lustre_idl.h:775:ostid_to_fid()) Skipped 19 previous similar messages
      LustreError: 26090:0:(lustre_idl.h:775:ostid_to_fid()) bad ost_idx, 0x400000000:1515870810 ost_idx:1515870810
      LustreError: 26090:0:(lustre_idl.h:775:ostid_to_fid()) Skipped 19 previous similar messages
      LustreError: 1262:0:(lustre_idl.h:775:ostid_to_fid()) bad ost_idx, 0x0:3564759104 ost_idx:4294936583
      LustreError: 1262:0:(lustre_idl.h:775:ostid_to_fid()) Skipped 3 previous similar messages
      INFO: task lfsck:2905 blocked for more than 120 seconds.
            Not tainted 2.6.32-431.20.3.el6_lustre.x86_64 #1
      "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
      lfsck         D 000000000000000a     0  2905      2 0x00000080
       ffff880e2a51f9e0 0000000000000046 0000000000000000 ffffffffa057bd35
       0000000100000000 ffffc90036b69030 0000000000000246 0000000000000246
       ffff881031249af8 ffff880e2a51ffd8 000000000000fbc8 ffff881031249af8
      Call Trace:
       [<ffffffffa057bd35>] ? cfs_hash_bd_lookup_intent+0x65/0x130 [libcfs]
       [<ffffffffa06cf4d8>] lu_object_find_at+0xa8/0x360 [obdclass]
       [<ffffffff81061d00>] ? default_wake_function+0x0/0x20
       [<ffffffffa06cf7cf>] lu_object_find_slice+0x1f/0x80 [obdclass]
       [<ffffffffa0f4494e>] lfsck_layout_scan_stripes+0x47e/0x1ad0 [lfsck]
       [<ffffffffa1093c24>] ? lod_xattr_get+0x154/0x640 [lod]
       [<ffffffffa0e16c68>] ? osd_object_read_unlock+0x88/0xd0 [osd_ldiskfs]
       [<ffffffffa0f5313b>] lfsck_layout_master_exec_oit+0x51b/0xc30 [lfsck]
       [<ffffffff8109b39c>] ? remove_wait_queue+0x3c/0x50
       [<ffffffffa0f2b780>] lfsck_exec_oit+0x70/0x9e0 [lfsck]
       [<ffffffff81061d00>] ? default_wake_function+0x0/0x20
       [<ffffffffa0f363da>] lfsck_master_oit_engine+0x41a/0x18b0 [lfsck]
       [<ffffffff81528cae>] ? thread_return+0x4e/0x760
       [<ffffffffa0f37bda>] lfsck_master_engine+0x36a/0x6f0 [lfsck]
       [<ffffffff81061d00>] ? default_wake_function+0x0/0x20
       [<ffffffffa0f37870>] ? lfsck_master_engine+0x0/0x6f0 [lfsck]
       [<ffffffff8109abf6>] kthread+0x96/0xa0
       [<ffffffff8100c20a>] child_rip+0xa/0x20
       [<ffffffff8109ab60>] ? kthread+0x0/0xa0
       [<ffffffff8100c200>] ? child_rip+0x0/0x20
      [The same hung-task message and backtrace for lfsck:2905 was repeated nine more times in dmesg; the duplicates are omitted here.]
      

      Attachments

        1. 07Aug_lfsck_log1.txt
          0.2 kB
        2. 07Aug_lfsck_log2.txt.gz
          0.2 kB
        3. 07Aug_messages.txt
          889 kB
        4. messages
          787 kB

        Issue Links

          Activity

            [LU-5395] lfsck_start not progressing

            The original LFSCK hang issues have been resolved by these patches:

            http://review.whamcloud.com/#/c/11304/
            http://review.whamcloud.com/#/c/11373/

            So I am closing this bug. If we hit other failures in the future, please open a new ticket.

            yong.fan nasf (Inactive) added a comment

            I suspect what is wrong on the OSTs is that I set 'fail_loc=0x1614' on them and that caused some problems.

            The OSS kernel logs were collected after all the LFSCK runs, so it may not be obvious where one LFSCK run ends and the next one starts. I do see several messages like the following in the OSS logs:

            00100000:10000000:7.0:1408057678.451385:0:25634:0:(lfsck_layout.c:6274:lfsck_orphan_it_init()) scratch-OST0003-osd: init the orphan iteration: rc = -3
            

            I've uploaded two logs to uploads: lfsck_log_oss1.txt contains the kernel log for OST0000 and OST0001, and lfsck_log_oss2.txt has the kernel log for OST0002 and OST0003.

            jamesanunez James Nunez (Inactive) added a comment

            There are a lot of failure messages like this one:
            "(lfsck_layout.c:2906:lfsck_layout_scan_orphan()) scratch-MDT0000-osd: layout LFSCK assistant finished the orphan scanning for OST0003: rc = -3"

            That means there were some failures on the OST side during the LFSCK. Do you have the Lustre logs from the OSTs? Thanks!

            yong.fan nasf (Inactive) added a comment

            Thank you for the explanation about what to expect with the different fail_loc values.

            I have uploaded two logs to /scratch/ftp/uploads/LU-5395, one for each of the LFSCK runs that returned with a status of "partial": lfsck_log_40.txt and lfsck_log_41.txt. Please let me know if the logs reveal anything interesting about why LFSCK could not complete.

            jamesanunez James Nunez (Inactive) added a comment

            It is good news that the original issues have been resolved.

            As for the "status: partial", it means that during the LFSCK running, not all the object can be verified, then the LFSCK cannot make sure whether there are some inconsistency or not. For example, during the LFSCK processing, the communication between the MDT and the OST is broken, then the LFSCK on the MDT cannot get related information from the OST for consistency verification, then some OST-ojects have to be skipped. Under such case, the LFSCK will be marked as partial.

            As for the message "lvbo_init failed for resource 0x6a083:0x0: rc = -2" during the dangling test, it is normal, because we inject error and cause related OST-objects are lost or not created, then if someone try to access related OST-objects before the LFSCK repairing them, then they will get the "-2" failure as you hit in the test.

            If we have the Lustre logs, then we can analysis it for the detailed reasons.

            About the error injection, the dangling reference "0x1610" is set on the OST when create, the multiple reference "0x1614" is set on the MDS when create. If you want to test, please apply the patches for the LU-5395. Thanks!

            yong.fan nasf (Inactive) added a comment
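
            Based on the explanation above, a minimal sketch of how the two injections would be armed and then cleared, following the placement described in the comment (run each set_param on the relevant server nodes):

            # Dangling references: arm 0x1610 on each OSS before creating the test files.
            lctl set_param fail_loc=0x1610

            # Multiple references: arm 0x1614 on the MDS before creating the test files.
            lctl set_param fail_loc=0x1614

            # Clear the fault injection once the test files exist.
            lctl set_param fail_loc=0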

            I’ve tested a build of master plus the latest versions of the two patches mentioned above. Things look good, but there are some issues that I’d like you to comment on.

            I ran the workload explained above, ran ‘lctl lfsck_start’, and everything completed as expected. I didn't see any hangs, and I don't see any messages about “bad ost_idx” in dmesg on the MDSs or the OSSs. So that is good news. It looks like these patches cleared up the issue I was seeing.

            Since this was successful, I decided to do exactly the same thing as above, but change one thing: the fail_loc value on all OSTs. I set fail_loc to “0x1614”, which should create multiple references. With this setting, I can’t get LFSCK to complete successfully. I see in the LFSCK output:

            status: partial
            flags: incomplete
            

            I started LFSCK twice and both times, it would not complete and would stop with “status: partial”. I have logs for both of these runs.

            I then tried to set fail_loc to “0x1610”, dangling references, but the script fails and I see the following in the OST dmesg:

            LustreError: 25382:0:(ldlm_resource.c:1150:ldlm_resource_get()) scratch-OST0000: lvbo_init failed for resource 0x6a083:0x0: rc = -2
            LustreError: 25382:0:(ldlm_resource.c:1150:ldlm_resource_get()) Skipped 46 previous similar messages
            Lustre: *** cfs_fail_loc=1610, val=0***
            LustreError: 25644:0:(ldlm_resource.c:1150:ldlm_resource_get()) scratch-OST0001: lvbo_init failed for resource 0x6a0e3:0x0: rc = -2
            LustreError: 25644:0:(ldlm_resource.c:1150:ldlm_resource_get()) Skipped 168 previous similar messages
            

            This looks like LU-5457.

            I plan to reinstall Lustre on these nodes and run the workloads that create the multiple references and dangling references. Should I use this series of patches or just master? Which would provide the most information? If I can recreate this problem, I’ll open a new ticket.

            Thanks.

            jamesanunez James Nunez (Inactive) added a comment

            Thanks James! Your log is very useful. I have found at least one piece of bad code in LFSCK that can cause the trouble message "lustre_idl.h:770:ostid_to_fid()) bad ost_idx...".

            I have updated the two patches: http://review.whamcloud.com/11373 and http://review.whamcloud.com/#/c/11304/. Would you please retry them? And please collect the same logs as before if you still hit the trouble message. Thanks!

            yong.fan nasf (Inactive) added a comment

            I've uploaded the logs to /scratch/ftp/uploads/LU-5395. Here is what I did yesterday and the logs that were taken:

            I started removing directories containing 100,000 files each and started running test 2.2 from the LFSCK test plan, which creates files with unmatched pairs. Then I started the kernel log and LFSCK:

            lctl clear
            lctl debug_daemon start /tmp/lfsck_log_1 1024
            lctl mark LFSCK_START
            lctl lfsck_start -M scratch-MDT0000 -A --reset --type layout -s 100
            

            I stopped LFSCK because the speed limit of 100 was too slow. Since the workload of removing files and creating files with unmatched pairs was still running, I stopped the kernel log (lfsck_log_1.txt in the uploads directory) and restarted the kernel logging because I didn't want to lose information from the log.

            lctl lfsck_stop -M scratch-MDT0000 -A
            lctl mark LFSCK_END
            lctl debug_daemon stop
            lctl clear
            lctl debug_daemon start /tmp/lfsck_log_2 1024
            lctl mark LFSCK_START
            lctl lfsck_start -M scratch-MDT0000 -A --reset --type layout
            

            Again, 'lfsck_start' completed, but the workload was still running. So I stopped the kernel log (lfsck_log_2.txt in uploads) and started the kernel log and 'lfsck_start' again:

            lctl mark LFSCK_END
            lctl debug_daemon stop
            lctl clear
            lctl debug_daemon start /tmp/lfsck_log_3 1024
            lctl mark LFSCK_START
            lctl lfsck_start -M scratch-MDT0000 -A --reset --type layout
            

            I did this two more times: stop the log, start the log, and start 'lctl lfsck_start'. Each time, lfsck_start completed with no hangs. The logs are lfsck_log_3.txt, lfsck_log_4.txt, and lfsck_log_5.txt in uploads. lfsck_log_4.txt and lfsck_log_5.txt capture one run of lfsck_start because I used the '-s 100' flag to see if I could get lfsck_start to hang with the speed-limit flag. It did not hang.
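
            Since the same capture sequence was repeated for every run, a small wrapper built from exactly the commands above could cut down on the copy/paste between runs (the script name and argument handling here are hypothetical):

            #!/bin/bash
            # Hypothetical wrapper (lfsck_capture.sh) around the capture sequence used above.
            # Usage: ./lfsck_capture.sh <run-number> [extra lfsck_start args, e.g. -s 100]
            run=$1; shift
            lctl clear
            lctl debug_daemon start /tmp/lfsck_log_${run} 1024
            lctl mark LFSCK_START
            lctl lfsck_start -M scratch-MDT0000 -A --reset --type layout "$@"
            # ... wait here for the run to finish or be stopped ...
            lctl mark LFSCK_END
            lctl debug_daemon stop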

            At this point, the file create and delete workload ended. I started one more 'lctl lfsck_start' to clean up any issues that were left behind in the file system. Since I couldn't get lfsck_start to hang, I thought I should try again, so I started creating 100,000 files in several directories again. While watching the status of LFSCK at /proc/fs/lustre/mdd/scratch-MDT0000/lfsck_layout, it looked like things hung. So I issued "echo t > /proc/sysrq-trigger"; the output is in the 'messages' file in uploads.

            Several minutes later, things started progressing again and lfsck_start finally completed. So I started the same workload of removing files and creating files with unmatched pairs and, while it was running, ran lfsck_start twice, with speed limits of 250 and 300. The logs for these runs are in uploads as lfsck_log_20.txt and lfsck_log_21.txt. In both cases, LFSCK completed.

            jamesanunez James Nunez (Inactive) added a comment

            So you still hit one LFSCK hang with the latest patches applied, right? Or was the hang temporary and the LFSCK eventually finished? If you have the kernel stack dump log ("echo t > /proc/sysrq-trigger") from that point in time, it would be very helpful. Thanks!

            yong.fan nasf (Inactive) added a comment
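
            For what it's worth, the requested stack dump can be captured in one shot, since sysrq-t writes to the kernel ring buffer (a minimal sketch; the output file name is arbitrary):

            echo t > /proc/sysrq-trigger          # dump all task stacks to the kernel log
            dmesg > /tmp/sysrq_stacks.txt         # save the ring buffer for upload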

            Yes, I have the logs and will upload them this morning.

            jamesanunez James Nunez (Inactive) added a comment

            People

              Assignee: yong.fan nasf (Inactive)
              Reporter: jamesanunez James Nunez (Inactive)
              Votes: 0
              Watchers: 6

              Dates

                Created:
                Updated:
                Resolved: