[LU-667] Experiencing sluggish, intermittently unresponsive, and OOM killed MDS nodes Created: 07/Sep/11  Updated: 05/Jan/12  Resolved: 05/Jan/12

Status: Resolved
Project: Lustre
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Blocker
Reporter: jbogden Assignee: Cliff White (Inactive)
Resolution: Won't Fix Votes: 0
Labels: o2iblnd
Environment:

StorP Storage Cluster: Dell R710 servers (20 OSS, 2 MDS), IB direct connected DDN99k storage on OSSes, FC direct attached DDN EF3000 storage on MDS, 24GB per server, dual socket 8 core Nehalem. StorP is dual-homed for Lustre clients with DDR IB and 10 Gig Ethernet via Chelsio T3 adapters. StorP is configured for failover MDS and OSS pairs with multipath.

StorP is running TOSS 1.4-2 (chaos 4.4-2) which includes:
lustre-1.8.5.0-3chaos_2.6.18_105chaos.ch4.4
lustre-modules-1.8.5.0-3chaos_2.6.18_105chaos.ch4.4
chaos-kernel-2.6.18-105chaos

Multiple compute clusters interconnect to StorP via a set of IB(client)-to-IB(server) lnet routers and a set of IB(client)-to-10gig(server) lnet routers. The IB-to-IB lnet routers deal with <300 Lustre client nodes. The IB-to-10gig routers deal with ~2700 Lustre client nodes.


Attachments: amds1.syslog.gz, amds2-meminfo, amds2-slabinfo
Epic: hang, lnet, metadata, server, timeout
Rank (Obsolete): 6560

 Description   

We are experiencing major MDS problems that are greatly affecting the stability of our Lustre filesystem. There have been no changes to the fundamental configuration or setup of our storage cluster that we can point the finger at.

The general symptoms are that the load on the active MDS node is unusually high and filesystem access hangs intermittently. When logged into the active MDS node, we noticed that the command line also hangs intermittently. We noticed that the ptlrpcd process was pegged at 100%+ CPU usage, followed by ~50% CPU usage for the kiblnd_sd_* processes. Furthermore, the iowait time is less than 1% while system time ranges from 25%-80%. It sort of appears that the active MDS is spinning as quickly as it can dealing with some kind of RPC traffic coming in over the IB LND. So far we haven't been able to isolate the traffic involved. In one isolation step we took offline all the LNet routers feeding in from the compute clusters, and the MDS was still churning vigorously in the ptlrpcd and kiblnd processes.

Another symptom we are seeing now is that when an MDS node becomes active and starts trying to serve clients, we can watch the node rather quickly consume all available memory via slab allocations and then die an OOM death (a sketch of one way to watch that growth follows the observations below). Some other observations:

  • OSTs evicting the mdtlov over the IB path
  • fun 'sluggish network' log messages like: ===> Sep 7 12:39:21 amds1 LustreError: 13000:0:(import.c:357:ptlrpc_invalidate_import()) scratch2-OST0053_UUID: RPCs in "Unregistering" phase found (0). Network is sluggish? Waiting them to error out. <===
  • MGS evicting itself over localhost connection: ==> Sep 7 12:39:14 amds1 Lustre: MGS: client 8337bacb-6b62-b0f0-261d-53678e2e56a9 (at 0@lo) evicted, unresponsive for 227s <==
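
For anyone trying to reproduce this, memory growth like this can be watched with something as simple as the following (just a sketch; the snapshot interval and output filenames are arbitrary):

# snapshot memory and slab usage periodically while the MDS churns
while true; do
    { date; cat /proc/meminfo; }  >> /tmp/mds-meminfo.log
    { date; cat /proc/slabinfo; } >> /tmp/mds-slabinfo.log
    sleep 30
done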

At this point we have been through 3 or more MDS failover sequences and we also rebooted all the StorP Lustre servers and restarted the filesystem cleanly to see if that would clean things up.

We have syslog and Lustre debug message logs from various phases of debugging this. I'm not sure at this point what logs will be the most useful, but after I submit this issue I'll attach some files.



 Comments   
Comment by Peter Jones [ 07/Sep/11 ]

Cliff

Could you please help out with this.

Thanks

Peter

Comment by Cliff White (Inactive) [ 07/Sep/11 ]

We need to know the exact version of Lustre you are running, and any patches applied.
You should certainly have errors in syslog. If you have done a clean full filesystem restart, were you able to observe the start of the high MDS load? Logs from around that time might have useful information.
In addition, the MDS records some timeout data in the 'timeouts' files in /proc:
find /proc/fs/lustre -name timeouts
will show you the list; if you cat those files, the output will indicate which services are slow.
Look for large values in the 'worst' column.
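
For example, something along these lines will dump them all at once (just a sketch; the exact proc paths can vary a bit between versions):

for f in $(find /proc/fs/lustre -name timeouts); do
    echo "==== $f"
    cat "$f"
done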

Comment by jbogden [ 08/Sep/11 ]

Complete set of syslogs for node "amds1", which shows a complete cycle: boot -> take over Lustre MDS duties -> ptlrpcd and kiblnd continually ramp up load on the node -> node OOMs itself to death

Comment by jbogden [ 08/Sep/11 ]

/proc/meminfo snapshot as amds1 allocates itself to death

Comment by jbogden [ 08/Sep/11 ]

/proc/slabinfo snapshot taken at the same time as the /proc/meminfo snapshot

Comment by jbogden [ 08/Sep/11 ]

Cliff, I just attached three files representative of the behavior we are seeing on the MDS node named 'amds1' (even though the meminfo and slabinfo files were slightly misnamed as amds2). We observed that the high MDS load started ramping up as soon as an MDS node booted, started up Lustre services for MDS duties, and finished reestablishing connectivity with Lustre clients.

As best I can tell, the lustre-1.8.5.0-3chaos version we are running seems to be Lustre-1.8.5.0-3 + five patches:
ff2ef0c LU-337 Fix alloc mask in alloc_qinfo()
f9e0e36 LU-234 OOM killer causes node hang.
09eb8f9 LU-286 racer: general protection fault.
f5a9068 LU-274 Update LVB from disk when glimpse callback return error
c4d695f Add IP to error message when peer's IB port is not privileged.

I'll attempt to get clarification on the version details.

We (or at least I) didn't know about the 'timeouts' proc entries so I don't have data from them. But we'll watch them and see if they are useful.

Comment by Christopher Morrone [ 08/Sep/11 ]

I am a little confused. Are there a number of typos in that last comment? I am not aware of any tagged release numbered "1.8.5.0-3".

1.8.5.0-3chaos is 44 patches on top of 1.8.5.

But the patch stack that you mention is EXACTLY the patch stack in the range 1.8.5.0-3chaos..1.8.5.0-5chaos.

So it sounds like you are actually running 1.8.5.0-5chaos, not 1.8.5.0-3chaos. Is that correct?
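
For reference, that range is just what git shows between the chaos tags, something like the following (assuming those tags are present in your clone of the lustre tree):

git log --oneline 1.8.5.0-3chaos..1.8.5.0-5chaos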

Comment by jbogden [ 08/Sep/11 ]

Chris,

That is probably my bad in pulling the wrong changelog details. Here is exactly what we are running:

[root@amds1 ~]# rpm -qa | egrep 'lustre|chaos-kern'
lustre-1.8.5.0-3chaos_2.6.18_105chaos.ch4.4
lustre-modules-1.8.5.0-3chaos_2.6.18_105chaos.ch4.4
chaos-kernel-2.6.18-105chaos

Jeff

Comment by Christopher Morrone [ 08/Sep/11 ]

According to Joe Mervini in a comment in LU-659, the workload that triggered this issue was:

the MDS was getting buried by multiple user jobs (same user) that were running chgrp recursively on a directory with ~4.5M files/directories

Comment by jbogden [ 08/Sep/11 ]

We have a good update about this issue. We seem to have finally stabilized our MDS functions on this Lustre filesystem. We believe that the root cause was almost pathologically bad usage of the filesystem by a single user. The user was running serial batch jobs on the compute cluster connected via the IB<->IB routers. The user's directory tree looked like /gscratch2/joeuser/projectname/XXXX, where XXXX were directories each containing a tree associated with one serial batch job. At the end of the batch job scripts the user does:

chgrp -R othergroup /gscratch2/joeuser/projectname
chmod -R g=u-w /gscratch2/joeuser/projectname

When we finally stumbled upon this, the user had 17 jobs concurrently doing chmod/chgrp on that directory tree. The /gscratch2/joeuser/projectname directory tree contains about 4.5 million files.

So what we think was happening was just obscene Lustre DLM thrashing. I don't have a Lustre debug trace to prove it, but it makes sense from what I understand of how DLM goes about things.
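
One number that would probably have confirmed it (a rough idea, assuming the usual 1.8 proc layout on the servers) is the per-namespace DLM lock count:

# total DLM locks currently held per namespace on the MDS
cat /proc/fs/lustre/ldlm/namespaces/*/lock_count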

So maybe this isn't a bug per se, but it does raise several questions I can think of (I'm sure there are others as well). In no particular order:

  • Is there a way to protect the MDS from allocating itself to death under conditions like this? I'm guessing the ptlrpcd and kiblnd processes just slab-allocated the node to death trying to deal with all the DLM RPC traffic.
  • Is there a way to detect that behavior like this is occurring by inspecting sources of information on an MDS node? I'm guessing that we could probably figure it out now by poring through a Lustre debug log trace with at least 'dlmtrace' and 'rpctrace' enabled in lnet.debug (see the sketch after this list), but that isn't really an optimal production path for a normal systems administrator to have to follow.
  • Is there a way to protect the MDS from what is essentially denial-of-service behavior from a small number of clients?
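
For the record, the trace capture I have in mind is roughly the following (a sketch; assumes lctl set_param handles the debug mask this way on our 1.8 servers):

# add DLM and RPC tracing to the current debug mask on the MDS
lctl set_param debug="+dlmtrace +rpctrace"
# ...wait for or reproduce the load...
# dump the kernel debug buffer to a file for offline inspection
lctl dk /tmp/mds-debug.log
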
Comment by jbogden [ 08/Sep/11 ]

I didn't explicitly state that once we shot that user's jobs in the head and prevented any new jobs from that user from running, we were able to stabilize the MDS behavior. We aren't quite sure why the MDS didn't stabilize when we initially took down the IB-to-IB routers. Subsequent to the IB-to-IB router shutdown, we did a full restart of all the Lustre server nodes, and that may have cleaned out some cruft that was confusing the issue initially...

Comment by Oleg Drokin [ 08/Sep/11 ]

I am afraid there is no easy way to tell bad from good traffic in ldlm and act accordingly.

Any client behavior that results in a lot of blocked MDS threads could potentially lead to this sort of DoS.
The proper way to fix this is a bit of a redesign of the request-processing architecture to allow MDS threads to do other work once lock blocking is encountered, or, failing that, an unlimited number of MDT threads.
Another particular 1.8 scenario is a large job opening the same file with the O_CREAT flag set; that ends up blocking MDS threads on the same lock, and once the thread pool is exhausted, unrelated jobs take a hit. It's even worse in 1.8, as the same condition is triggered by having a lot of threads open files with O_CREAT in the same dir.

Comment by Cliff White (Inactive) [ 05/Jan/12 ]

I am going to close this issue, as there is not a fix at this time. Please re-open if you have further data or questions.
