
[LU-667] Experiencing sluggish, intermittently unresponsive, and OOM-killed MDS nodes

Details

    • Type: Bug
    • Resolution: Won't Fix
    • Priority: Blocker

    Description

      We are experiencing major MDS problems that are greatly affecting the stability of our Lustre filesystem. We have not made any changes to the fundamental configuration or setup of our storage cluster that we can point the finger at.

      The general symptoms are that the load on the active MDS node is unusually high and filesystem access hangs intermittently. Logged into the active MDS node, we noticed that the command line also intermittently hangs. The ptlrpcd process was pegged at 100%+ CPU usage, followed by ~50% CPU usage for the kiblnd_sd_* processes. Furthermore, the iowait time is less than 1% while system time ranges from 25%-80%. It appears that the active MDS is spinning as quickly as it can dealing with some kind of RPC traffic coming in over the IB LND. So far we haven't been able to isolate the traffic involved. In one isolation step we took all the LNet routers offline feeding in from the compute clusters, and the MDS was still churning vigorously in the ptlrpcd and kiblnd processes.

      Another symptom we are seeing now is that when an MDS node becomes active and starts trying to serve clients, we can watch the node rather quickly consume all available memory via slab allocations and then die an OOM death. Some other observations:

      • OSTs evicting the mdtlov over the IB path
      • fun 'sluggish network' log messages like: ===> Sep 7 12:39:21 amds1 LustreError: 13000:0:(import.c:357:ptlrpc_invalidate_import()) scratch2-OST0053_UUID: RPCs in "Unregistering" phase found (0). Network is sluggish? Waiting them to error out. <===
      • MGS evicting itself over localhost connection: ==> Sep 7 12:39:14 amds1 Lustre: MGS: client 8337bacb-6b62-b0f0-261d-53678e2e56a9 (at 0@lo) evicted, unresponsive for 227s <==

      At this point we have been through 3 or more MDS failover sequences and we also rebooted all the StorP Lustre servers and restarted the filesystem cleanly to see if that would clean things up.

      We have syslog and Lustre debug message logs from various phases of debugging this. I'm not sure at this point what logs will be the most useful, but after I submit this issue I'll attach some files.
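
      For reference, the attached meminfo and slabinfo files are simple point-in-time snapshots. Something along these lines is enough to keep capturing them while a node is allocating itself to death (a rough sketch; the node name and interval are only illustrative):

      while true; do
          ts=$(date +%Y%m%d-%H%M%S)
          # record overall memory usage and the per-cache slab breakdown
          cp /proc/meminfo  amds1-meminfo.$ts
          cp /proc/slabinfo amds1-slabinfo.$ts
          sleep 30
      done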

      Attachments

        1. amds1.syslog.gz
          38 kB
        2. amds2-meminfo
          0.8 kB
        3. amds2-slabinfo
          19 kB

        Issue Links

          Activity

            [LU-667] Experiencing sluggish, intermittently unresponsive, and OOM-killed MDS nodes
            jbogden jbogden added a comment -

            We have a good update about this issue. We seem to finally have stabilized our MDS functions on this Lustre filesystem. We believe that the root cause was almost pathologically bad usage of the filesystem by a single user. The user was running serial batch jobs on the compute cluster connected via the IB<->IB routers. The user's directory tree looked like /gscratch2/joeuser/projectname/XXXX, where XXXX were directories that each contained a tree associated with a serial batch job. At the end of the batch job scripts the user does:

            chgrp -R othergroup /gscratch2/joeuser/projectname
            chmod -R g=u-w /gscratch2/joeuser/projectname

            When we finally stumbled upon this, the user had 17 jobs concurrently doing chmod/chgrp on that directory tree. The /gscratch2/joeuser/projectname directory tree contains about 4.5 million files.

            So what we think was happening was just obscene Lustre DLM thrashing. With 17 jobs each walking roughly 4.5 million files and issuing a chgrp plus a chmod on every entry, that is on the order of 150 million setattr RPCs, each with its associated DLM lock traffic, arriving at the MDS over a short window. I don't have a Lustre debug trace to prove it, but it makes sense from what I understand of how the DLM goes about things.
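
            One way a sweep like this could be made much cheaper for the MDS is to run it as a single pass and only chgrp files that are not already in the target group, e.g. (a sketch only, reusing the path and group names from the job script above):

            find /gscratch2/joeuser/projectname ! -group othergroup -print0 \
                | xargs -0 -r chgrp othergroup

            with a single chmod pass from one job, rather than one pass per batch job.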

            So maybe this isn't a bug per se, but it does raise several questions I can think of (I'm sure there are others as well). In no particular order:

            • Is there a way to protect the MDS from allocating itself to death under conditions like this? I'm guessing the ptlrpcd and kiblnd processes just slab-allocated the node to death trying to deal with all the DLM RPC traffic.
            • Is there a way to detect that behavior like this is occurring by inspecting sources of information on an MDS node (see the sketch after this list)? I'm guessing that we could probably figure it out now by poring through a Lustre debug log trace with at least 'dlmtrace' and 'rpctrace' enabled in lnet.debug. But that isn't really an optimal production path for a normal systems administrator to have to follow.
            • Is there a way to protect the MDS from what is essentially denial-of-service behavior from a small number of clients?
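
            One low-tech starting point for the detection question above would be to poll the DLM lock counts and the largest slab caches and watch for steep growth. This is only a sketch, assuming the ldlm parameters and /proc/slabinfo layout normally present on a 1.8 MDS:

            while true; do
                date
                # total granted DLM locks across all namespaces on this server
                lctl get_param -n ldlm.namespaces.*.lock_count 2>/dev/null \
                    | awk '{sum += $1} END {print "total ldlm locks:", sum}'
                # five largest slab caches by object count
                tail -n +3 /proc/slabinfo | sort -rn -k3 | head -5
                sleep 10
            done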

            morrone Christopher Morrone (Inactive) added a comment -

            According to Joe Mervini in a comment in LU-659, the workload that triggered this issue was:

            the MDS was getting buried by multiple user jobs (same user) were running chgrp recursively on a directory with ~4.5M files/directories

            jbogden jbogden added a comment -

            Chris,

            That is probably my bad in pulling the wrong changelog details. Here is exactly what we are running:

            [root@amds1 ~]# rpm -qa | egrep 'lustre|chaos-kern'
            lustre-1.8.5.0-3chaos_2.6.18_105chaos.ch4.4
            lustre-modules-1.8.5.0-3chaos_2.6.18_105chaos.ch4.4
            chaos-kernel-2.6.18-105chaos

            Jeff


            morrone Christopher Morrone (Inactive) added a comment - edited

            I am a little confused. Are there a number of typos in that last comment? I am not aware of any tagged release numbered "1.8.5.0-3".

            1.8.5.0-3chaos is 44 patches on top of 1.8.5.

            But the patch stack that you mention is EXACTLY the patch stack in the range 1.8.5.0-3chaos..1.8.5.0-5chaos.

            So it sounds like you are actually running 1.8.5.0-5chaos, not 1.8.5.0-3chaos. Is that correct?
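
            (For anyone following along, that range can be listed with plain git, assuming a lustre clone that has the chaos tags present:

            git log --oneline 1.8.5.0-3chaos..1.8.5.0-5chaos
            )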

            jbogden jbogden added a comment -

            Cliff, I just attached three files representative of the behavior we are seeing on the MDS node named 'amds1' (even though the meminfo and slabinfo files were slightly misnamed as amds2). We observed that the high MDS load started ramping up as soon as an MDS node booted, started up Lustre services for MDS duties, and finished re-establishing connectivity with Lustre clients.

            As best I can tell, the lustre-1.8.5.0-3chaos version we are running seems to be Lustre-1.8.5.0-3 + five patches:
            ff2ef0c LU-337 Fix alloc mask in alloc_qinfo()
            f9e0e36 LU-234 OOM killer causes node hang.
            09eb8f9 LU-286 racer: general protection fault.
            f5a9068 LU-274 Update LVB from disk when glimpse callback return error
            c4d695f Add IP to error message when peer's IB port is not privileged.

            I'll attempt to get clarification on the version details.

            We (or at least I) didn't know about the 'timeouts' proc entries so I don't have data from them. But we'll watch them and see if they are useful.

            jbogden jbogden added a comment -

            /proc/slabinfo snapshot taken at the same time as the /proc/meminfo snapshot

            jbogden jbogden added a comment -

            /proc/meminfo snapshot as amds1 allocates itself to death

            jbogden jbogden added a comment -

            Complete set of syslogs for node "amds1", which shows a complete cycle of boot -> takeover of Lustre MDS duties -> ptlrpcd and kiblnd continually ramping up load on the node -> node OOMs itself to death


            cliffw Cliff White (Inactive) added a comment -

            We need to know the exact version of Lustre you are running, and any patches applied.
            You should certainly have errors in syslog. If you have done a clean full filesystem restart,
            were you able to observe the start of the high MDS load? Logs from around that time might
            have useful information.
            In addition, the MDS records some timeout data in the 'timeouts' files in /proc:
            find /proc/fs/lustre -name timeouts
            will show you the list; if you cat those files, they will indicate which services are slow.
            Look for large values in the 'worst' column.
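
            For example, to dump all of those files at once (a minimal sketch):

            for f in $(find /proc/fs/lustre -name timeouts); do
                echo "== $f"
                cat "$f"
            done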

            pjones Peter Jones added a comment -

            Cliff

            Could you please help out with this?

            Thanks

            Peter


            People

              Assignee: cliffw Cliff White (Inactive)
              Reporter: jbogden
              Votes: 0
              Watchers: 10
