[LU-667] Experiencing sluggish, intermittently unresponsive, and OOM killed MDS nodes Created: 07/Sep/11 Updated: 05/Jan/12 Resolved: 05/Jan/12 |
|
| Status: | Resolved |
| Project: | Lustre |
| Component/s: | None |
| Affects Version/s: | None |
| Fix Version/s: | None |
| Type: | Bug | Priority: | Blocker |
| Reporter: | jbogden | Assignee: | Cliff White (Inactive) |
| Resolution: | Won't Fix | Votes: | 0 |
| Labels: | o2iblnd | ||
| Environment: |
StorP Storage Cluster: Dell R710 servers (20 OSS, 2 MDS), IB direct-connected DDN99k storage on the OSSes, FC direct-attached DDN EF3000 storage on the MDS nodes, 24GB per server, dual-socket 8-core Nehalem. StorP is dual-homed for Lustre clients with DDR IB and 10 Gigabit Ethernet via Chelsio T3 adapters. StorP is configured for failover MDS and OSS pairs with multipath. StorP is running TOSS 1.4-2 (chaos 4.4-2), which includes:
Multiple compute clusters interconnect to StorP via a set of IB(client)-to-IB(server) LNET routers and a set of IB(client)-to-10GigE(server) LNET routers. The IB-to-IB LNET routers serve <300 Lustre client nodes. The IB-to-10GigE routers serve ~2700 Lustre client nodes. |
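For context, routed LNET configurations of this kind are normally expressed through lnet module options on the routers and clients; the sketch below uses placeholder NIDs and network names, not the actual StorP settings:

# Illustrative /etc/modprobe.d/lustre.conf fragments -- NIDs and net names are placeholders only.
# On an IB(client)-to-IB(server) router, bridging the client fabric (o2ib1) to the server fabric (o2ib0):
options lnet networks="o2ib0(ib0),o2ib1(ib1)" forwarding="enabled"
# On a compute-cluster client behind that router, reaching the server fabric through the router NIDs:
options lnet networks="o2ib1(ib0)" routes="o2ib0 10.10.1.[1-4]@o2ib1"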
||
| Attachments: |
|
| Epic: | hang, lnet, metadata, server, timeout |
| Rank (Obsolete): | 6560 |
| Description |
|
We are experiencing major MDS problems that are greatly affecting the stability of our Lustre filesystem. We don't have any changes in the fundamental configuration or setup of our storage cluster to point the finger at.

The general symptoms are that the load on the active MDS node is unusually high and filesystem access hangs intermittently. Logged into the active MDS node, we noticed that the command line also intermittently hangs. We noticed that the ptlrpcd process was pegged at 100%+ CPU usage, followed by ~50% CPU usage for the kiblnd_sd_* processes. Furthermore, the iowait time is less than 1% while system time ranges from 25%-80%. It sort of appears that the active MDS is spinning as quickly as it can dealing with some kind of RPC traffic coming in over the IB LND. So far we haven't been able to isolate the traffic involved. In one isolation step we took all the LNET routers offline feeding in from the compute clusters, and the MDS was still churning vigorously in ptlrpcd and kiblnd processes.

Another symptom we are seeing now is that when an MDS node becomes active and starts trying to serve clients, we can watch the node rather quickly consume all available memory via slab allocations and then die an OOM death. Some other observations:
At this point we have been through 3 or more MDS failover sequences and we also rebooted all the StorP Lustre servers and restarted the filesystem cleanly to see if that would clean things up. We have syslog and Lustre debug message logs from various phases of debugging this. I'm not sure at this point what logs will be the most useful, but after I submit this issue I'll attach some files. |
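As a general triage note, the CPU pattern described above (ptlrpcd pegged, kiblnd_sd_* threads close behind, iowait near zero) can be watched with ordinary Linux tools; nothing below is Lustre-specific and the filter patterns are only examples:

# Batch-mode top, filtered to the Lustre service threads and the CPU summary line
top -b -d 5 | egrep 'ptlrpcd|kiblnd_sd|Cpu'
# Per-CPU breakdown of user/system/iowait time, sampled every 5 seconds (sysstat package)
mpstat -P ALL 5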
| Comments |
| Comment by Peter Jones [ 07/Sep/11 ] |
|
Cliff, could you please help out with this? Thanks, Peter |
| Comment by Cliff White (Inactive) [ 07/Sep/11 ] |
|
We need to know the exact version of Lustre you are running, and any patches applied. |
| Comment by jbogden [ 08/Sep/11 ] |
|
Complete set of syslogs for node "amds1", which shows a complete cycle of: boot -> takeover of Lustre MDS duties -> ptlrpcd and kiblnd continually ramping up load on the node -> node OOMs itself to death |
| Comment by jbogden [ 08/Sep/11 ] |
|
/proc/meminfo snapshot as amds1 allocates itself to death |
| Comment by jbogden [ 08/Sep/11 ] |
|
/proc/slabinfo snapshot at the same time as the /proc/meminfo snapshot |
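For completeness, matched snapshots like the two above can be captured over time with a small loop along these lines (the output location is only an example):

# Capture paired /proc/meminfo and /proc/slabinfo snapshots every 30 seconds
while true; do
    d=$(date +%Y%m%d-%H%M%S)
    cat /proc/meminfo  > /tmp/meminfo.$d
    cat /proc/slabinfo > /tmp/slabinfo.$d
    sleep 30
done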
| Comment by jbogden [ 08/Sep/11 ] |
|
Cliff, I just attached three files representative of the behavior we are seeing on the MDS node named 'amds1' (even though the meminfo and slabinfo files were slightly misnamed as amds2). We observed that the high MDS load started ramping up as soon as an MDS node booted, started up Lustre services for MDS duties, and finished reestablishing connectivity with Lustre clients.

As best I can tell, the lustre-1.8.5.0-3chaos version we are running seems to be Lustre-1.8.5.0-3 + five patches:
I'll attempt to get clarification on the version details. We (or at least I) didn't know about the 'timeouts' proc entries, so I don't have data from them yet, but we'll watch them and see if they are useful. |
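For reference, on 1.8 servers with adaptive timeouts the per-service 'timeouts' history is exported under /proc/fs/lustre; the exact paths depend on the services configured, so something like the following locates and dumps them (the path in the second command is only an example):

# Locate the adaptive-timeout history files exported by the Lustre services on this node
find /proc/fs/lustre -name timeouts
# Dump one of them, e.g. an MDS service entry (actual path will vary with the setup)
cat /proc/fs/lustre/mds/MDS/mds/timeouts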
| Comment by Christopher Morrone [ 08/Sep/11 ] |
|
I am a little confused. Are there a number of typos in that last comment? I am not aware of any tagged release numbered "1.8.5.0-3". 1.8.5.0-3chaos is 44 patches on top of 1.8.5. But the patch stack that you mention is EXACTLY the patch stack in the range 1.8.5.0-3chaos..1.8.5.0-5chaos. So it sounds like you are actually running 1.8.5.0-5chaos, not 1.8.5.0-3chaos. Is that correct? |
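(For anyone checking this locally, the patch stack between the two chaos tags can be listed straight from the git tree, assuming both tags are present:)

# List the patches between the 1.8.5.0-3chaos and 1.8.5.0-5chaos tags
git log --oneline 1.8.5.0-3chaos..1.8.5.0-5chaos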
| Comment by jbogden [ 08/Sep/11 ] |
|
Chris,

That is probably my bad in pulling the wrong changelog details. Here is exactly what we are running:

[root@amds1 ~]# rpm -qa | egrep 'lustre|chaos-kern'

Jeff |
| Comment by Christopher Morrone [ 08/Sep/11 ] |
|
According to Joe Mervini in a comment in
|
| Comment by jbogden [ 08/Sep/11 ] |
|
We have a good update about this issue. We seem to finally have stabilized our MDS functions on this Lustre filesystem. We believe that the root cause was almost pathologically bad usage of the filesystem by a single user.

The user was running serial batch jobs on the compute cluster connected via the IB<->IB routers. The user's directory tree looked like /gscratch2/joeuser/projectname/XXXX, where XXXX were directories that each contained a tree associated with one serial batch job. At the end of the batch job scripts the user does:

chgrp -R othergroup /gscratch2/joeuser/projectname

When we finally stumbled upon this, the user had 17 jobs concurrently doing chmod/chgrp on that directory tree. The /gscratch2/joeuser/projectname directory tree contains about 4.5 million files. So what we think was happening was just obscene Lustre DLM thrashing. I don't have a Lustre debug trace to prove it, but it makes sense from what I understand of how DLM goes about things (a rough back-of-the-envelope estimate is sketched below).

So maybe this isn't a bug per se, but it does raise several questions I can think of (I'm sure there are others as well). In no particular order:
|
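A rough back-of-the-envelope estimate of the load this usage pattern generates: 17 concurrent 'chgrp -R' jobs each walking ~4.5 million files is on the order of 17 x 4.5M ≈ 76M setattr RPCs to the MDS, with each setattr able to force DLM lock revocation and re-grant against whatever the other 16 jobs (and any other clients) hold on the same inodes. One obvious way to soften the pattern, sketched below with a hypothetical $JOB_DIR variable, is to scope the ownership change to the subtree a given job actually created rather than re-walking the whole project tree:

# At the end of each batch script: change ownership only on this job's own subtree,
# e.g. /gscratch2/joeuser/projectname/XXXX, instead of the entire 4.5M-file project tree.
# $JOB_DIR is hypothetical; the batch script would set it to the job's own directory.
chgrp -R othergroup "$JOB_DIR"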
| Comment by jbogden [ 08/Sep/11 ] |
|
I didn't explicitly state that when we shot that user's jobs in the head and prevented any new jobs of the user from running, we were able to stabilize the MDS behavior. We aren't quite sure why, when we took down the IB-to-IB routers initially, the MDS didn't stabilize. Subsequent to the IB-to-IB router shutdown, we did a full restart of all the Lustre server nodes, and that may have cleaned out some cruft that was confusing the issue initially... |
| Comment by Oleg Drokin [ 08/Sep/11 ] |
|
I am afraid there is no easy way to tell bad from good traffic in ldlm and act accordingly. Any client behavior that results in a lot of MDS threads being blocked could potentially lead to this sort of DoS. |
| Comment by Cliff White (Inactive) [ 05/Jan/12 ] |
|
I am going to close this issue, as there is not a fix at this time. Please re-open if you have further data or questions. |