
[LU-1085] ASSERTION(cfs_atomic_read(&exp->exp_refcount) == 0) failed

Details


    Description

      We have multiple Lustre 2.1 OSS nodes crashing repeatedly during recovery. This is on our classified Lustre cluster, which was updated from 1.8 to 2.1 on Tuesday. The assertion in the summary is one observed symptom; we have also seen the following assertions appearing together.

      ASSERTION(cfs_list_empty(&imp->imp_zombie_chain)) failed
      ASSERTION(cfs_atomic_read(&exp->exp_refcount) == 0) failed

      We don't have backtraces for the assertions because STONITH kicked in before the crash dump completed.

      Other OSS nodes are crashing in kernel string-handling functions with stacks like:

      machine_kexec
      crash_kexec
      oops_end
      die
      do_general_protection
      general_protection
      (exception RIP: strlen+9)
      strlen
      string
      vsnprintf
      libcfs_debug_vmsg2
      _debug_req
      target_send_reply_msg
      target_send_reply
      ost_handle
      ptlrpc_main

      So it appears we are passing a bad string pointer to a debug message; the fault at strlen+9 indicates vsnprintf was expanding a "%s" argument whose pointer is bogus.

      Another stack trace:

      BUG: unable to handle kernel NULL pointer dereference at 000...38
      IP: [<fffffffa0a8706>] filter_export_stats_init+0x1f1/0x500 [obdfilter]

      machine_kexec
      crash_kexec
      oops_end
      no_context
      __bad_area_nosemaphore
      bad_area_nosemaphore
      __do_page_fault
      do_page_fault
      page_fault
      filter_reconnect
      target_handle_connect
      ost_handle
      ptlrpc_main

      We have multiple symptoms here that may or may not be due to the same bug. We may need to open a separate issue to track the root cause. Note that our branch contains LU-874 patches that touched the ptlrpc queue management code, so we should be on the lookout for any races introduced there. Also note that we can't send debug data from this system.


        Activity


          morrone Christopher Morrone (Inactive) added a comment -

          Ignore the comment about being in recovery. So far I don't think the logs show that.

          I've looked at a few nodes, and it looks like there is some kind of client timeout/eviction and then reconnection storm going on before the assertions.

          Generally there seem to be tens of thousands of "haven't heard from client X in 226 seconds. I think it's dead, and I am evicting it" messages. A couple of minutes later, clients begin reconnecting in droves. There is a mix of ost "connection from" and ost "Not available for connect" messages.

          The "haven't heard from client" and the client connect messages are both interleaved in the logs, and often repeated 30,000+ times (lustre squashes them into "previous similar messages" lines).

          And then we hit one of the two assertions listed at the beginning of this bug.

          Note that for several of the OSS nodes that I have looked at so far, the clients all seem to be from one particular cluster, which is running 1.8. (servers are all 2.1.0-24chaos).

          green Oleg Drokin added a comment -

          I wonder if that's related to LU-1166 to some degree.

          Unfortunately the current patch is not fully complete, but it does not make things worse.

          Any other messages before that crash that you can share with us?


          morrone Christopher Morrone (Inactive) added a comment -

          Apparently we hit this over 20 times on various OSS nodes across multiple filesystems over the weekend. Note that last week we re-enabled the OSS read cache (the write cache is still disabled).

          The admins tell me that this often hit while in (or perhaps shortly after) recovery. I'll need to look into why we were in recovery in the first place.


          nedbass Ned Bass (Inactive) added a comment -

          > Also, I guess I must ask: how was the original recovery triggered? Was it some unrelated crash, or was this a recovery test of some sort? (Purely for purposes of seeing if there is another bug at play here.)

          This was on a production system, not a recovery test.

          In the case of this stack trace the server had crashed about twenty minutes earlier with ASSERTION(cfs_atomic_read(&exp->exp_refcount) == 0).

          We are still trying to understand what led to that crash, and the others in LU-[1092,1093,1094], but the factors at play seem to be high server load, slow server response, and many clients dropping their connection to the server and reconnecting. There was a four hour window in which we had 16 crashes involving 5 OSS nodes. As mentioned above, things seemed to stabilize after we disabled the OSS read and writethrough caches.

          green Oleg Drokin added a comment -

          Also, I guess I must ask: how was the original recovery triggered? Was it some unrelated crash, or was this a recovery test of some sort? (Purely for purposes of seeing if there is another bug at play here.)


          nedbass Ned Bass (Inactive) added a comment -

          It was lustre/obdclass/genops.c:728:

           711 /* Export management functions */
           712 static void class_export_destroy(struct obd_export *exp)
           713 {
           714         struct obd_device *obd = exp->exp_obd;
           715         ENTRY;
           716 
           717         LASSERT_ATOMIC_ZERO(&exp->exp_refcount);
           718 
           719         CDEBUG(D_IOCTL, "destroying export %p/%s for %s\n", exp,
           720                exp->exp_client_uuid.uuid, obd->obd_name);
           721 
           722         LASSERT(obd != NULL);
           723 
           724         /* "Local" exports (lctl, LOV->{mdc,osc}) have no connection. */
           725         if (exp->exp_connection)
           726                 ptlrpc_put_connection_superhack(exp->exp_connection);
           727 
           728         LASSERT(cfs_list_empty(&exp->exp_outstanding_replies));
           729         LASSERT(cfs_list_empty(&exp->exp_uncommitted_replies));
           730         LASSERT(cfs_list_empty(&exp->exp_req_replay_queue));
           731         LASSERT(cfs_list_empty(&exp->exp_hp_rpcs));
           732         obd_destroy_export(exp);
           733         class_decref(obd, "export", exp);
           734 
           735         OBD_FREE_RCU(exp, sizeof(*exp), &exp->exp_handle);
           736         EXIT;
           737 }
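
          (For reference, if a dump is captured for one of these assertions, the non-empty list tripping one of the checks at lines 728-731 could presumably be confirmed with crash's struct command against the export address; <export addr> below is only a placeholder:)

           crash> struct obd_export.exp_outstanding_replies <export addr>
           crash> struct obd_export.exp_uncommitted_replies <export addr>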
          
          green Oleg Drokin added a comment -

          For the obd_zombid backtrace, what was the line number from which lbug_with_loc was called? I know you did not get the LASSERT message in the logs, but you can open obdclass.ko in gdb and run "l *(obd_zombie_impexp_cull+0x...)" (the +0x... value from the backtrace) and it will tell you the line number.
          Hm, I just noticed that neither obd_zombie_impexp_cull nor obd_zombie_impexp_thread has any LBUGs or LASSERTs.
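
          (For reference, a sketch of that gdb lookup against a debuginfo-matched obdclass.ko; the +0x1a2 offset is only a placeholder for the offset from the real backtrace:)

           $ gdb obdclass.ko
           (gdb) list *(obd_zombie_impexp_cull+0x1a2)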


          nedbass Ned Bass (Inactive) added a comment -

          Opened the following new issues:

          • LU-1092 NULL pointer dereference in filter_export_stats_init()
          • LU-1093 unable to handle kernel paging request in target_handle_connect()
          • LU-1094 general protection fault in _debug_req()

          nedbass Ned Bass (Inactive) added a comment -

          I am going to open separate issues for the different crashes we ran into yesterday. We can use this issue to track the obd_zombid crash.

          We disabled the OSS read and writethrough caches and have not had any crashes since then. Nearly every crash was preceded by hundreds of client reconnect attempts and hundreds of log messages of the form:

          LustreError: 14210:0:(genops.c:1270:class_disconnect_stale_exports()) ls5-OST0349: disconnect stale client [UUID]@<unknown>
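
          (For reference, a sketch of how the OSS caches are typically toggled; the obdfilter parameter names here are the standard read/writethrough cache tunables and are assumed, not taken from this system's logs:)

           # on each OSS, disable the read cache and the writethrough cache for all OSTs
           lctl set_param obdfilter.*.read_cache_enable=0
           lctl set_param obdfilter.*.writethrough_cache_enable=0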


          nedbass Ned Bass (Inactive) added a comment -

          I found a third incident of a panicked obd_zombid thread from yesterday. No ASSERT message was captured in the logs for this one, but we did get a complete crash dump. Here is the backtrace:

          machine_kexec
          crash_kexec
          panic
          lbug_with_loc
          obd_zombie_impexp_cull
          obd_zombie_impexp_thread
          kernel_thread


          nedbass Ned Bass (Inactive) added a comment -

          For the case where we got into trouble in _debug_req(), I did some digging in crash to see what state the ptlrpc_request was in. I dug up the request pointer address from the backtrace (let's call it <addr1> to save typing). Then, resolving some of the strings that get passed to libcfs_debug_vmsg2() from _debug_req(), I see:

          crash> struct ptlrpc_request.rq_import <addr1>
           rq_import = 0x0
          crash> struct ptlrpc_request.rq_export <addr1>
           rq_export = <addr2>
          crash> struct obd_export.exp_connection <addr2>
           exp_connection = 0x5a5a5a5a5a5a5a5a
          crash> struct obd_export.exp_client_uuid <addr2>
           exp_client_uuid = {
                  uuid = "ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ"
           }

          So the presence of the poison value and the bogus uuid suggests this export has already been destroyed and its memory freed (0x5a is the byte used to poison freed OBD allocations, and 0x5a is ASCII 'Z', which is why the uuid reads back as all 'Z's).

          For reference, here is a snippet from _debug_req() that uses these values:

          2271 void _debug_req(struct ptlrpc_request *req,
          2272                 struct libcfs_debug_msg_data *msgdata,
          2273                 const char *fmt, ... )
          2274 {       
          2275         va_list args;
          2276         va_start(args, fmt);
          2277         libcfs_debug_vmsg2(msgdata, fmt, args,
          2278                            " req@%p x"LPU64"/t"LPD64"("LPD64") o%d->%s@%s:%d/%d"
          2279                            " lens %d/%d e %d to %d dl "CFS_TIME_T" ref %d "
          2280                            "fl "REQ_FLAGS_FMT"/%x/%x rc %d/%d\n",
          2281                            req, req->rq_xid, req->rq_transno,
          2282                            req->rq_reqmsg ? lustre_msg_get_transno(req->rq_reqmsg) : 0,
          2283                            req->rq_reqmsg && req_ptlrpc_body_swabbed(req) ?
          2284                            lustre_msg_get_opc(req->rq_reqmsg) : -1, 
          2285                            req->rq_import ? obd2cli_tgt(req->rq_import->imp_obd) :
          2286                            req->rq_export ?
          2287                            (char*)req->rq_export->exp_client_uuid.uuid : "<?>",
          2288                            req->rq_import ?
          2289                            (char *)req->rq_import->imp_connection->c_remote_uuid.uuid :
          2290                            req->rq_export ?
          2291                            (char *)req->rq_export->exp_connection->c_remote_uuid.uuid : "<?>",
          

          People

            Assignee: green Oleg Drokin
            Reporter: nedbass Ned Bass (Inactive)
