[LU-1085] ASSERTION(cfs_atomic_read(&exp->exp_refcount) == 0) failed Created: 09/Feb/12  Updated: 30/Apr/12  Resolved: 30/Apr/12

Status: Resolved
Project: Lustre
Component/s: None
Affects Version/s: Lustre 2.1.1
Fix Version/s: None

Type: Bug Priority: Critical
Reporter: Ned Bass Assignee: Oleg Drokin
Resolution: Duplicate Votes: 0
Labels: paj
Environment:

RHEL 6.2
Our branch: https://github.com/chaos/lustre/commits/2.1.0-llnl


Severity: 3
Rank (Obsolete): 6467

 Description   

We have multiple Lustre 2.1 OSS nodes crashing repeatedly during recovery. This is on our classified Lustre cluster, which was updated from 1.8 to 2.1 on Tuesday. The summary describes only one observed symptom; we have also seen these assertions appearing together:

ASSERTION(cfs_list_empty(&imp->imp_zombie_chain)) failed
ASSERTION(cfs_atomic_read(&exp->exp_refcount) == 0) failed

We don't have backtraces for the assertions because STONITH kicked in before the crash dump completed.

Other OSS nodes are crashing in kernel string-handling functions with stacks like:

machine_kexec
crash_kexec
oops_end
die
do_general_protection
general_protection
(exception RIP: strlen+9)
strlen
string
vsnprintf
libcfs_debug_vmsg2
_debug_req
target_send_replay_msg
target_send_reply
ost_handle
ptlrpc_main

So it appears we are passing a bad value in a debug message.

Another stack trace:

BUG: unable to handle kernel NULL pointer dereference at 000...38
IP: [<fffffffa0a8706>] filter_export_stats_init+0x1f1/0x500 [obdfilter]

machine_kexec
crash_kexec
oops_end
no_context
__bad_area_nosemaphore
bad_area_nosemaphore
__do_page_fault
do_page_fault
page_fault
filter_reconnect
target_handle_connect
ost_handle
ptlrpc_main

We have multiple symptoms here that may or may not be due to the same bug. We may need to open a separate issue to track the root cause. Note that our branch contains LU-874 patches that touched the ptlrpc queue management code, so we should be on the lookout for any races introduced there. Also note we can't send debug data from this system.



 Comments   
Comment by Oleg Drokin [ 09/Feb/12 ]

Hm, I think we had some fixes in the zombie thread area.

Can you please see if this helps you by any chance: http://review.whamcloud.com/#change,1896,patchset=2

The export stats init crash I don't think I have seen before; I think most of that stuff was nailed in LU-106, see patches here: http://review.whamcloud.com/#change,326

Comment by Ned Bass [ 09/Feb/12 ]

Hi Oleg,

Can you please see if this helps you by any chance: http://review.whamcloud.com/#change,1896,patchset=2

Both nodes that ASSERTed had been up for a few days, so the mount thread had long since completed. Unless there could be some latent corruption from that race, it seems like that patch won't help.

I think most of that stuff was nailed in LU-106, see patches here: http://review.whamcloud.com/#change,326

Ah, we don't have the LU-106 patch in our branch. We'll pull it in.

Comment by Ned Bass [ 09/Feb/12 ]

For the case where we got into trouble in _debug_req(), I did some digging in crash to see what state the ptlrpc_request was in. I dug up the pointer address from the backtrace (let's call it <addr1> to save typing). Then, resolving some of the strings that get passed to libcfs_debug_vmsg2() from _debug_req(), I see:

crash> struct ptlrpc_request.rq_import <addr1>
 rq_import = 0x0
crash> struct ptlrpc_request.rq_export <addr1>
 rq_export = <addr2>
crash> struct obd_export.exp_connection <addr2>
 exp_connection = 0x5a5a5a5a5a5a5a5a
crash> struct obd_export.exp_client_uuid <addr2>
 exp_client_uuid = {
        uuid = "ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ"
 }

So the presence of the 0x5a poison pattern and the bogus uuid suggests this export has already been destroyed.

For reference, here is a snippet from _debug_req() that uses these values:

void _debug_req(struct ptlrpc_request *req,
                struct libcfs_debug_msg_data *msgdata,
                const char *fmt, ... )
{
        va_list args;
        va_start(args, fmt);
        libcfs_debug_vmsg2(msgdata, fmt, args,
                           " req@%p x"LPU64"/t"LPD64"("LPD64") o%d->%s@%s:%d/%d"
                           " lens %d/%d e %d to %d dl "CFS_TIME_T" ref %d "
                           "fl "REQ_FLAGS_FMT"/%x/%x rc %d/%d\n",
                           req, req->rq_xid, req->rq_transno,
                           req->rq_reqmsg ? lustre_msg_get_transno(req->rq_reqmsg) : 0,
                           req->rq_reqmsg && req_ptlrpc_body_swabbed(req) ?
                           lustre_msg_get_opc(req->rq_reqmsg) : -1,
                           req->rq_import ? obd2cli_tgt(req->rq_import->imp_obd) :
                           req->rq_export ?
                           (char*)req->rq_export->exp_client_uuid.uuid : "<?>",
                           req->rq_import ?
                           (char *)req->rq_import->imp_connection->c_remote_uuid.uuid :
                           req->rq_export ?
                           (char *)req->rq_export->exp_connection->c_remote_uuid.uuid : "<?>",
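
The key point is that exp_client_uuid.uuid and exp_connection->c_remote_uuid.uuid are handed to "%s" conversions, so vsnprintf() ends up calling strlen() on memory belonging to a freed export. Below is a minimal userspace sketch (not Lustre code; the buffer size and names are made up for illustration) of why a 0x5a-poisoned buffer sends strlen() off the end of the allocation:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define POISON_BYTE 0x5a   /* 'Z' -- matches exp_connection = 0x5a5a5a5a5a5a5a5a above */

int main(void)
{
        /* Simulate the uuid field of an obd_export after the export has
         * been freed: poisoning fills the memory with 0x5a and leaves no
         * NUL terminator anywhere in the allocation. */
        char *uuid = malloc(40);
        if (uuid == NULL)
                return 1;
        memset(uuid, POISON_BYTE, 40);

        /* No terminator exists within the allocation... */
        printf("NUL within allocation? %s\n",
               memchr(uuid, '\0', 40) ? "yes" : "no");

        /* ...so the strlen() that vsnprintf() performs for a "%s"
         * conversion scans past the end of the buffer until it happens to
         * hit a zero byte.  In the kernel that scan can cross into an
         * unmapped page -- the general protection fault at strlen+9 in
         * the backtrace above. */
        free(uuid);
        return 0;
}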
Comment by Ned Bass [ 10/Feb/12 ]

I found a third incident of a panicked obd_zombid thread from yesterday. No ASSERT message was captured in the logs for this one, but we did get a complete crash dump. Here is the backtrace:

machine_kexec
crash_kexec
panic
lbug_with_loc
obd_zombie_impexp_cull
obd_zombie_impexp_thread
kernel_thread

Comment by Ned Bass [ 10/Feb/12 ]

I am going to open separate issues for the different crashes we ran into yesterday. We can use this issue to track the obd_zombid crash.

We disabled the OSS read and writethrough caches and have not had any crashes since then. Nearly every crash was preceded by hundreds of client reconnect attempts and hundreds of log messages of the form:

LustreError: 14210:0:(genops.c:1270:class_disconnect_stale_exports()) ls5-OST0349: disconnect stale client [UUID]@<unknown>

Comment by Ned Bass [ 10/Feb/12 ]

Opened the following new issues:

  • LU-1092 NULL pointer dereference in filter_export_stats_init()
  • LU-1093 unable to handle kernel paging request in target_handle_connect()
  • LU-1094 general protection fault in _debug_req()
Comment by Oleg Drokin [ 10/Feb/12 ]

For the zombid backtrace, what was the line number from which lbug_with_loc was called? I know you did not get the LASSERT message in the logs, but you can run gdb on obdclass.ko and then "l *(obd_zombie_impexp_cull+0x...)" (the +0x... value from the backtrace) and it will tell you the line number.
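
For example, with a made-up offset of +0x2f8 standing in for the real value from your backtrace, the session would look something like:

gdb lustre/obdclass/obdclass.ko
(gdb) l *(obd_zombie_impexp_cull+0x2f8)
0x... is in obd_zombie_impexp_cull (lustre/obdclass/genops.c:NNN).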
Hm, I just noticed that neither obd_zombie_impexp_cull nor obd_zombie_impexp_thread has any LBUGs or LASSERTs.

Comment by Ned Bass [ 10/Feb/12 ]

It was lustre/obdclass/genops.c:728:

 711 /* Export management functions */
 712 static void class_export_destroy(struct obd_export *exp)
 713 {
 714         struct obd_device *obd = exp->exp_obd;
 715         ENTRY;
 716 
 717         LASSERT_ATOMIC_ZERO(&exp->exp_refcount);
 718 
 719         CDEBUG(D_IOCTL, "destroying export %p/%s for %s\n", exp,
 720                exp->exp_client_uuid.uuid, obd->obd_name);
 721 
 722         LASSERT(obd != NULL);
 723 
 724         /* "Local" exports (lctl, LOV->{mdc,osc}) have no connection. */
 725         if (exp->exp_connection)
 726                 ptlrpc_put_connection_superhack(exp->exp_connection);
 727 
 728         LASSERT(cfs_list_empty(&exp->exp_outstanding_replies));
 729         LASSERT(cfs_list_empty(&exp->exp_uncommitted_replies));
 730         LASSERT(cfs_list_empty(&exp->exp_req_replay_queue));
 731         LASSERT(cfs_list_empty(&exp->exp_hp_rpcs));
 732         obd_destroy_export(exp);
 733         class_decref(obd, "export", exp);
 734 
 735         OBD_FREE_RCU(exp, sizeof(*exp), &exp->exp_handle);
 736         EXIT;
 737 }
Comment by Oleg Drokin [ 10/Feb/12 ]

Also, I guess I must ask: how was the original recovery triggered? Some unrelated crash, or was this a recovery test of some sort? (Purely for purposes of seeing whether there is some other bug at play here.)

Comment by Ned Bass [ 13/Feb/12 ]

Also, I guess I must ask: how was the original recovery triggered? Some unrelated crash, or was this a recovery test of some sort? (Purely for purposes of seeing whether there is some other bug at play here.)

This was on a production system, not a recovery test.

In the case of this stack trace, the server had crashed about twenty minutes earlier with ASSERTION(cfs_atomic_read(&exp->exp_refcount) == 0).

We are still trying to understand what led to that crash, and the others in LU-[1092,1093,1094], but the factors at play seem to be high server load, slow server response, and many clients dropping their connection to the server and reconnecting. There was a four hour window in which we had 16 crashes involving 5 OSS nodes. As mentioned above, things seemed to stabilize after we disabled the OSS read and writethrough caches.

Comment by Christopher Morrone [ 10/Apr/12 ]

Apparently we hit this over 20 times on various OSS nodes of multiple filesystems over the weekend. Note that last week we re-enabled the OSS read cache (the write cache is still disabled).

The admins tell me that this often hit while in (or perhaps shortly after) recovery. I'll need to look into why we were in recovery in the first place.

Comment by Oleg Drokin [ 10/Apr/12 ]

I wonder if that's related to LU-1166 to some degree.

Unfortunately the current patch is not yet complete, but it does not make things worse.

Any other messages before that crash that you can share with us?

Comment by Christopher Morrone [ 10/Apr/12 ]

Ignore the comment about being in recovery. So far I don't think the logs show that.

I've looked at a few nodes, and it looks like there is some kind of client timeout/eviction and then reconnection storm going on before the assertions.

It looks like generally there are tens of thousands of "haven't heard from client X in 226 seconds. I think it's dead, and I am evicting it" messages. A couple of minutes later, clients begin reconnecting in droves. There is a mix of ost "connection from" and ost "Not available for connect" messages.

The "haven't heard from client" and the client connect messages are both interleaved in the logs, and often repeated 30,000+ times (lustre squashes them into "previous similar messages" lines).

And then we hit one of the two assertions listed at the beginning of this bug.

Note that for several of the OSS nodes that I have looked at so far, the clients all seem to be from one particular cluster, which is running 1.8. (servers are all 2.1.0-24chaos).

Comment by Oleg Drokin [ 17/Apr/12 ]

So with many more crashes, any luck in getting the backtraces for the first two assertions?
Any successful crash dumps?

There are two imp_zombie_chain assertions; do you know which one you hit: the one in obd_zombie_import_add or the one in class_import_put?

As for the second assertion, I can find no such assertion in the code. I checked out your tree and here's what I see:

Oleg-Drokins-MacBook-Pro-2:lustre green$ grep -r exp_refcount * | grep ASSERT
lustre/obdclass/genops.c:        LASSERT_ATOMIC_ZERO(&exp->exp_refcount);
lustre/obdclass/genops.c:        LASSERT_ATOMIC_GT_LT(&exp->exp_refcount, 0, 0x5a5a5a);
lustre/obdecho/echo_client.c:        LASSERT(cfs_atomic_read(&ec->ec_exp->exp_refcount) > 0);

The first one is actually defined in terms of LASSERTF, so I don't think it exactly matches your output.
Can you elaborate a bit more on where that might have come from?

Comment by Ned Bass [ 18/Apr/12 ]

So with many more crashes, any luck in getting the backtraces for the first two assertions? Any successful crash dumps?

I will review the logs and crash dumps and let you know.

There are two imp_zombie_chain assertions; do you know which one you hit: the one in obd_zombie_import_add or the one in class_import_put?

We hit the one in class_import_put.

As for the second assertion, I can find no such assertion in the code. I checked out your tree and here's what I see:

Here's the second one in our tree:

https://github.com/chaos/lustre/blob/2.1.1-3chaos/lustre/obdclass/genops.c#L717

Comment by Oleg Drokin [ 18/Apr/12 ]

LASSERT_ATOMIC_ZERO is defined in terms of LASSERTF internally:

#define LASSERT_ATOMIC_EQ(a, v)                                 \
do {                                                            \
        LASSERTF(cfs_atomic_read(a) == v,                       \
                 "value: %d\n", cfs_atomic_read((a)));          \
} while (0)
#define LASSERT_ATOMIC_ZERO(a)                  LASSERT_ATOMIC_EQ(a, 0)

What was the value reported?

Comment by Ned Bass [ 18/Apr/12 ]

The value reported has been either 1, 2, or 3.

Comment by Ned Bass [ 18/Apr/12 ]

Here's a log message and backtrace for the exp_refcount assertion.

LustreError: 24253:0:(genops.c:717:class_export_destroy()) ASSERTION(cfs_atomic_read(&exp->exp_refcount) == 0) failed: value: 1

COMMAND: "obd_zombid"
#0 machine_kexec
#1 crash_kexec
#2 panic
#3 lbug_with_loc
#4 obd_zombie_impexp_cull
#5 obd_zombie_impexp_thread
#6 kernel_thread

Comment by Ned Bass [ 18/Apr/12 ]

Here's a log message and backtrace for the cfs_list_empty assertion.

LustreError: 24458:0:(genops.c:931:class_import_put()) ASSERTION(cfs_list_empty(&imp->imp_zombie_chain)) failed

COMMAND: "ll_ost_54"
#0 machine_kexec
#1 crash_kexec
#2 panic
#3 lbug_with_loc
#4 libcfs_assertion_failed
#5 class_import_put
#6 client_destroy_import
#7 target_handle_connect
#8 ost_handle
#9 ptlrpc_main
#10 kernel_thread

Comment by Ned Bass [ 19/Apr/12 ]

Oleg,

Do you think the LU-1092 patch will help with these assertions? Mikhail made a comment to that effect in LU-1336.

http://git.whamcloud.com/?p=fs/lustre-release.git;a=commit;h=893cf2014a38c5bd94890d3522fafe55f024a958

Comment by Oleg Drokin [ 19/Apr/12 ]

Yes, this looks related.
Any chance you can try it?

Comment by Ned Bass [ 19/Apr/12 ]

Yes, we'll pull the patch in to our tree, and it will eventually get rolled out to our production systems.

Comment by Peter Jones [ 30/Apr/12 ]

Believed to be a duplicate of LU-1092
