[LU-2323] mds crash Created: 14/Nov/12 Updated: 06/Dec/12 Resolved: 06/Dec/12 |
|
| Status: | Resolved |
| Project: | Lustre |
| Component/s: | None |
| Affects Version/s: | Lustre 2.2.0 |
| Fix Version/s: | None |
| Type: | Bug | Priority: | Critical |
| Reporter: | ETHz Support (Inactive) | Assignee: | Niu Yawei (Inactive) |
| Resolution: | Fixed | Votes: | 0 |
| Labels: | server | ||
| Environment: |
[root@n-mds1 ~]# cat /proc/fs/lustre/version
[root@n-mds1 ~]# uname -r
[root@n-mds1 ~]# rpm -qa|grep lustre |
||
| Attachments: |
|
| Severity: | 1 |
| Epic: | metadata, server |
| Rank (Obsolete): | 5550 |
| Description |
|
We recently experienced two MDS crashes on our Lustre installation. I've attached the netconsole output of both crashes (that's all I got: there is nothing in the syslog, and I wasn't able to take a screenshot of the console output because the crashed MDS had already been power-cycled by its failover partner). |
| Comments |
| Comment by ETHz Support (Inactive) [ 14/Nov/12 ] |
|
netconsole output |
| Comment by ETHz Support (Inactive) [ 14/Nov/12 ] |
|
In terms of activity during the crashes, the MDS had memory available and the IB network carried little traffic. In both cases there was free memory: the crash yesterday didn't even have a full read cache, and the MDS was never swapping (it has a 50 GB swap partition). I also checked the system load and InfiniBand traffic: the MDS was doing almost nothing during the crash (load below 0.5, IB traffic ~100 KB/s). |
| Comment by Niu Yawei (Inactive) [ 14/Nov/12 ] |
2012-11-08T20:14:29+01:00 n-mds1 [<ffffffffa0dd4dbb>] osd_trans_stop+0xeb/0x390 [osd_ldiskfs]
2012-11-08T20:14:29+01:00 n-mds1 RSP <ffff880bd116da70>
2012-11-08T20:14:29+01:00 n-mds1 ---[ end trace 69a06040c21c938c ]---
2012-11-08T20:14:29+01:00 n-mds1 Kernel panic - not syncing: Fatal exception
2012-11-08T20:14:29+01:00 n-mds1 Pid: 3845, comm: mdt_52 Tainted: G D ---------------- 2.6.32-220.4.2.el6_lustre.x86_64 #1
2012-11-08T20:14:29+01:00 n-mds1 Call Trace:
2012-11-08T20:14:29+01:00 n-mds1 [<ffffffff814ec61a>] ? panic+0x78/0x143
2012-11-08T20:14:29+01:00 n-mds1 [<ffffffff814f07a4>] ? oops_end+0xe4/0x100
2012-11-08T20:14:29+01:00 n-mds1 [<ffffffff8100f26b>] ? die+0x5b/0x90
2012-11-08T20:14:29+01:00 n-mds1 [<ffffffff814f0312>] ? do_general_protection+0x152/0x160
2012-11-08T20:14:29+01:00 n-mds1 [<ffffffffa0ddaada>] ? osd_xattr_set+0x14a/0x1d0 [osd_ldiskfs]
2012-11-08T20:14:29+01:00 n-mds1 [<ffffffff814efae5>] ? general_protection+0x25/0x30
2012-11-08T20:14:30+01:00 n-mds1 [<ffffffffa0dd4dbb>] ? osd_trans_stop+0xeb/0x390 [osd_ldiskfs]
2012-11-08T20:14:30+01:00 n-mds1 [<ffffffffa0cde12a>] ? mdd_trans_stop+0x1a/0x20 [mdd]
2012-11-08T20:14:30+01:00 n-mds1 [<ffffffffa0cc1036>] ? mdd_attr_set+0xbf6/0x2030 [mdd]
2012-11-08T20:14:30+01:00 n-mds1 [<ffffffffa0677820>] ? ldlm_completion_ast+0x0/0x6d0 [ptlrpc]
2012-11-08T20:14:30+01:00 n-mds1 [<ffffffffa069ae3c>] ? lustre_msg_get_versions+0x6c/0xb0 [ptlrpc]
2012-11-08T20:14:30+01:00 n-mds1 [<ffffffffa0e12a1c>] ? cml_attr_set+0x6c/0x160 [cmm]
2012-11-08T20:14:30+01:00 n-mds1 [<ffffffffa0d4e578>] ? mdt_attr_set+0x268/0x4b0 [mdt]
2012-11-08T20:14:30+01:00 n-mds1 [<ffffffffa0d4eb0d>] ? mdt_reint_setattr+0x34d/0x1060 [mdt]
2012-11-08T20:14:30+01:00 n-mds1 [<ffffffffa0d48e7b>] ? mdt_reint_rec+0x4b/0xa0 [mdt]
2012-11-08T20:14:30+01:00 n-mds1 [<ffffffffa0d41069>] ? mdt_reint_internal+0x479/0x7b0 [mdt]
2012-11-08T20:14:30+01:00 n-mds1 [<ffffffffa0d413ee>] ? mdt_reint+0x4e/0xb0 [mdt]
2012-11-08T20:14:30+01:00 n-mds1 [<ffffffffa0d37b9d>] ? mdt_handle_common+0x74d/0x1400 [mdt]
2012-11-08T20:14:30+01:00 n-mds1 [<ffffffffa0d38925>] ? mdt_regular_handle+0x15/0x20 [mdt]
2012-11-08T20:14:30+01:00 n-mds1 [<ffffffffa06a6011>] ? ptlrpc_server_handle_request+0x3c1/0xcb0 [ptlrpc]
2012-11-08T20:14:30+01:00 n-mds1 [<ffffffffa04373ee>] ? cfs_timer_arm+0xe/0x10 [libcfs]
2012-11-08T20:14:30+01:00 n-mds1 [<ffffffffa0441e19>] ? lc_watchdog_touch+0x79/0x110 [libcfs]
2012-11-08T20:14:31+01:00 n-mds1 [<ffffffffa06a00e2>] ? ptlrpc_wait_event+0xb2/0x2c0 [ptlrpc]
2012-11-08T20:14:31+01:00 n-mds1 [<ffffffff810519c3>] ? __wake_up+0x53/0x70
2012-11-08T20:14:31+01:00 n-mds1 [<ffffffffa06a701f>] ? ptlrpc_main+0x71f/0x1210 [ptlrpc]
2012-11-08T20:14:31+01:00 n-mds1 [<ffffffffa06a6900>] ? ptlrpc_main+0x0/0x1210 [ptlrpc]
2012-11-08T20:14:31+01:00 n-mds1 [<ffffffff8100c14a>] ? child_rip+0xa/0x20
2012-11-08T20:14:31+01:00 n-mds1 [<ffffffffa06a6900>] ? ptlrpc_main+0x0/0x1210 [ptlrpc]
2012-11-08T20:14:31+01:00 n-mds1 [<ffffffffa06a6900>] ? ptlrpc_main+0x0/0x1210 [ptlrpc]
2012-11-08T20:14:31+01:00 n-mds1 [<ffffffff8100c140>] ? child_rip+0x0/0x20

In the 2.2 code, we added the following code in osd_xattr_set():

        /* version set is not real XATTR */
        if (strcmp(name, XATTR_NAME_VERSION) == 0) {
                /* for version we are just using xattr API but change inode
                 * field instead */
                LASSERT(buf->lb_len == sizeof(dt_obj_version_t));
                osd_object_version_set(env, dt, buf->lb_buf);
                return sizeof(dt_obj_version_t);
        }

We should probably check whether the "name" is NULL first. Alex, any thoughts? |
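For illustration, a minimal sketch of the guard being discussed (hypothetical, not a committed patch; the argument names follow the osd_xattr_set() snippet above, and whether to return an error or assert is exactly the open question):

        /* Hypothetical sketch only: bail out on a NULL xattr name before
         * the strcmp() against XATTR_NAME_VERSION in the snippet above.
         * The alternative discussed below is LASSERT(name != NULL) plus
         * fixing the offending caller. */
        if (name == NULL)
                return -EINVAL;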
| Comment by Alex Zhuravlev [ 14/Nov/12 ] |
|
Hmm, I'd say we should assert on name == NULL, and the caller must be fixed. |
| Comment by Adrian Ulrich (Inactive) [ 14/Nov/12 ] |
|
This has become quite serious for us right now: our filesystem is currently down. We have entered a reboot -> panic -> reboot -> panic cycle, and the MDS crashes with the same message as soon as we reboot it: |
| Comment by Johann Lombardi (Inactive) [ 14/Nov/12 ] |
|
Adrian, have you tried to remount the MDS with "-o abort_recov"? |
| Comment by Adrian Ulrich (Inactive) [ 14/Nov/12 ] |
|
No, but it seems to be 'stable' again. After the 4th crash I started a 'fsck -n /dev/mapper/...', got impatient and aborted it after ~10 minutes. After that I was able to start the MDS without any new crash. It looks like the ~10 minutes of downtime was enough to time out the 'evil' client/operation!? Is there any way to see which client causes the crash? |
| Comment by ETHz Support (Inactive) [ 14/Nov/12 ] |
|
Adrian, |
| Comment by Adrian Ulrich (Inactive) [ 14/Nov/12 ] |
|
Johann: Does the crash actually get triggered by a client calling setattr? One of our most frequent setattr callers is a 1.8.4 client (10.201.32.32) - could this client be the cause of the crash?

[root@n-mds1 exports]# grep setattr */stats | awk ' {print $2 " " $1}'|sort -n | tail -5 |
| Comment by nasf (Inactive) [ 15/Nov/12 ] |
|
I do not think a NULL "name" passed to osd_xattr_set() caused the failure. In this case the "name" comes from inside the MDS, not from the client, and I cannot find any internal caller passing a NULL "name". On the other hand, the other failure instances have different call traces. Adrian, had you upgraded anything on the system recently when you hit the failure? There is no evidence that an interoperability issue caused it, but we can try to locate the problem step by step. As a first step, please migrate the load from the non-2.2 clients above to standard 2.2 clients, especially the 1.8.4 client, because it is very old. I do not think we tested interoperation between a Lustre 2.2 server and Lustre 1.8.4 clients when we released Lustre 2.2, so it is the most suspicious. |
| Comment by Alex Zhuravlev [ 15/Nov/12 ] |
|
In both cases the very first messages were about an inability to add an llog record:

LustreError: 3809:0:(llog_cat.c:298:llog_cat_add_rec()) llog_write_rec -28: lh=ffff880482b99800

Probably it's a problem in the code handling this error. I think we should reproduce this locally. It also makes sense to ls the CONFIGS/ directory using ldiskfs or debugfs to see how much space can be freed after orphan cleanup. |
| Comment by nasf (Inactive) [ 15/Nov/12 ] |
|
Adrian, if you cannot retire the 1.8.4 clients, then please unmount them temporarily and try to reproduce the failure with the other clients. If it cannot be reproduced, then it is quite possibly related to an interoperability issue. |
| Comment by Adrian Ulrich (Inactive) [ 15/Nov/12 ] |
|
Yong Fan:
Alex: We have been getting these llog_write_rec errors for months (since June 2012), while our MDS only started to crash recently (also note that there are ~5 hours between the error and the crash). What does the llog_write_rec error mean anyway? I'll post the output of 'ls CONFIGS/' as soon as I have a chance to remount the volume via ldiskfs (i.e. on the next crash or on Monday evening) |
| Comment by Alex Zhuravlev [ 15/Nov/12 ] |
|
OK, good to know. llog_write_rec() returning -28 means the MDS was not able to transactionally write a record describing some update to an OST (removing an OST object, updating OST object attributes). Then could you attach osd_ldiskfs.ko, please? |
| Comment by Adrian Ulrich (Inactive) [ 15/Nov/12 ] |
|
Requested kernel module (md5sum = 6a07cbbb49f63ea0f6a5e6bc067bc7c9) |
| Comment by Adrian Ulrich (Inactive) [ 15/Nov/12 ] |
|
Alex: I attached the requested kernel module (or did you mean by 'attach' that I should insmod it?) |
| Comment by Alex Zhuravlev [ 15/Nov/12 ] |
|
No, I've got what I need, thanks. A bit of analysis here:

0000000000004d26 <osd_trans_stop+0x56> mov 0x50(%rbx),%r12

so rbx contains a pointer to oh:

(gdb) p/x sizeof(struct thandle)

0000000000004db3 <osd_trans_stop+0xe3> mov (%rbx),%rax

These lines implement:

RAX: 0006000100000002 is supposed to be ld_type (and 0x8(%rax) is ld_type->ldt_tags). IOW, the thandle was broken and pointing to garbage instead of a device. Now the question is what broke it. |
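As a reading aid for the analysis above, this is the pointer chain the crashed code was walking, written out as a hedged C sketch; the struct and field names are taken from the generic Lustre lu/dt device model, and their exact 2.2 offsets and register assignments are an assumption, not a claim about this build:

        /* Sketch only: how osd_trans_stop() reaches the device type behind a
         * transaction handle. If the thandle has been corrupted, th_dev no
         * longer points at a real dt_device, so ld_type is garbage and the
         * subsequent read of ldt_tags takes a general protection fault. */
        struct dt_device      *dev  = th->th_dev;
        struct lu_device_type *type = dev->dd_lu_dev.ld_type;
        __u32                  tags = type->ldt_tags;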
| Comment by ETHz Support (Inactive) [ 15/Nov/12 ] |
|
Adrian,

debugfs -c -R 'dump CONFIGS/ /tmp/config' /dev/<device> |
| Comment by Adrian Ulrich (Inactive) [ 16/Nov/12 ] |
|
I did this using a snapshot of the MDS (taken on 5 November). Output of CONFIGS/ via debugfs:

$ debugfs mds.dump |
| Comment by Zhenyu Xu [ 22/Nov/12 ] |
|
Adrian, did you have an opportunity to try excluding the 1.8.x clients, to check whether the MDS still crashes with only 2.x clients accessing it? |
| Comment by Adrian Ulrich (Inactive) [ 22/Nov/12 ] |
|
Well, the problem is that I cannot reproduce the crash, and I have not seen any new crashes since 14 November. (The crash was probably caused by a user job: there are ~800 users on our cluster, and I have no way to figure out which job triggered it.) But in any case: even if the crash was triggered by a 1.8.x client, it should get fixed, shouldn't it? And do we have any news about the llog_write_rec error? (Did the debugfs output help?) |
| Comment by Zhenyu Xu [ 22/Nov/12 ] |
|
Yes, even if it's a 1.8.x client problem we should fix it. The purpose of the question was to help narrow down which area to search for the root cause. I'm still investigating the llog part of the issue. |
| Comment by Zhenyu Xu [ 23/Nov/12 ] |
|
I think the "LustreError: 31980:0:(llog_cat.c:298:llog_cat_add_rec()) llog_write_rec -28: lh=ffff88042d450240" is a misleading message, the message only means the current log does not has enough space for the log record, it will create a new log for it later. int llog_cat_add_rec(struct llog_handle *cathandle, struct llog_rec_hdr *rec, struct llog_cookie *reccookie, void *buf) { struct llog_handle *loghandle; int rc; ENTRY; LASSERT(rec->lrh_len <= LLOG_CHUNK_SIZE); loghandle = llog_cat_current_log(cathandle, 1); if (IS_ERR(loghandle)) RETURN(PTR_ERR(loghandle)); /* loghandle is already locked by llog_cat_current_log() for us */ rc = llog_write_rec(loghandle, rec, reccookie, 1, buf, -1); if (rc < 0) CERROR("llog_write_rec %d: lh=%p\n", rc, loghandle); cfs_up_write(&loghandle->lgh_lock); if (rc == -ENOSPC) { /* to create a new plain log */ loghandle = llog_cat_current_log(cathandle, 1); if (IS_ERR(loghandle)) RETURN(PTR_ERR(loghandle)); rc = llog_write_rec(loghandle, rec, reccookie, 1, buf, -1); cfs_up_write(&loghandle->lgh_lock); } RETURN(rc); } |
| Comment by Niu Yawei (Inactive) [ 29/Nov/12 ] |
|
After checking the 2.2 code carefully, I found a culprit which can cause such memory corruption in mdd_declare_attr_set():

#ifdef CONFIG_FS_POSIX_ACL
if (ma->ma_attr.la_valid & LA_MODE) {
mdd_read_lock(env, obj, MOR_TGT_CHILD);
rc = mdo_xattr_get(env, obj, buf, XATTR_NAME_ACL_ACCESS,
BYPASS_CAPA);
mdd_read_unlock(env, obj);
if (rc == -EOPNOTSUPP || rc == -ENODATA)
rc = 0;
else if (rc < 0)
return rc;
Our intention here is to retrieve the xattr length, but we passed an uninitialized buffer to mdo_xattr_get() (we should pass NULL here)... |
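A minimal sketch of the kind of change described (hypothetical; the authoritative version is the patch linked in the next comment), assuming the usual lu_buf convention that a NULL lb_buf asks mdo_xattr_get() to return only the stored xattr size:

        /* Hypothetical sketch of the fix: query only the size of the ACL
         * xattr instead of handing mdo_xattr_get() an uninitialized buffer. */
        struct lu_buf size_buf = { .lb_buf = NULL, .lb_len = 0 };

        mdd_read_lock(env, obj, MOR_TGT_CHILD);
        rc = mdo_xattr_get(env, obj, &size_buf, XATTR_NAME_ACL_ACCESS,
                           BYPASS_CAPA);
        mdd_read_unlock(env, obj);
        if (rc == -EOPNOTSUPP || rc == -ENODATA)
                rc = 0;
        else if (rc < 0)
                return rc;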
| Comment by Niu Yawei (Inactive) [ 29/Nov/12 ] |
|
backport the memory corruption fix in mdd_declare_attr_set() to b2_2: http://review.whamcloud.com/4703 |
| Comment by Adrian Ulrich (Inactive) [ 30/Nov/12 ] |
|
Thanks for fixing this issue: We will upgrade our MDS as soon as a new build becomes available – or should we just upgrade to 2.3? |
| Comment by Peter Jones [ 30/Nov/12 ] |
|
Adrian, a build of the change backported to 2.2 already exists - http://build.whamcloud.com/job/lustre-reviews/10853/ - but it is still in the automated test queue at the moment. Lustre 2.3 is available now and has been thoroughly tested; it will of course include other content beyond just this one fix (both additional features and many other fixes). |
| Comment by Peter Jones [ 06/Dec/12 ] |
|
Adrian, have you decided which approach you will take - to patch 2.2 or upgrade to 2.3? Peter |
| Comment by Adrian Ulrich (Inactive) [ 06/Dec/12 ] |
|
Hello Peter, We will upgrade to 2.3 as soon as the next opportunity arises; you can therefore close this issue. Thanks and best regards, |
| Comment by Peter Jones [ 06/Dec/12 ] |
|
ok thanks Adrian! |