[LU-5077] insanity test_1: out of memory on MDT in crypto_create_tfm() Created: 17/May/14  Updated: 11/Nov/14  Resolved: 21/Oct/14

Status: Resolved
Project: Lustre
Component/s: None
Affects Version/s: Lustre 2.6.0, Lustre 2.5.2, Lustre 2.5.3
Fix Version/s: None

Type: Bug Priority: Blocker
Reporter: Maloo Assignee: Niu Yawei (Inactive)
Resolution: Duplicate Votes: 0
Labels: None

Issue Links:
Duplicate
duplicates LU-5670 replay-vbr test_4c: MDS OOM Closed
is duplicated by LU-5121 replay-ost-single test_0b: OOM on the... Resolved
is duplicated by LU-5328 Failure on test suite replay-vbr test... Resolved
is duplicated by LU-5593 sanity: OOM during recovery Resolved
is duplicated by LU-5745 sanity test_17m: mds reboot from OOM ... Resolved
Related
is related to LU-5420 Failure on test suite sanity test_17m... Resolved
is related to LU-5079 conf-sanity test_47 timeout Resolved
is related to LU-4578 Early replies do not honor at_max Resolved
is related to LU-5121 replay-ost-single test_0b: OOM on the... Resolved
is related to LU-5595 replay-single test_38: MDS 1 OOM Resolved
Severity: 3
Rank (Obsolete): 14011

 Description   

This issue was created by maloo for wangdi <di.wang@intel.com>

This issue relates to the following test suite run: http://maloo.whamcloud.com/test_sets/4f1c047c-dd60-11e3-9f1e-52540035b04c.

The sub-test test_1 failed with the following error:

16:39:44:Lustre: DEBUG MARKER: == insanity test 1: MDS/MDS failure == 15:39:30 (1400279970)
16:39:44:Lustre: DEBUG MARKER: grep -c /mnt/mds1' ' /proc/mounts
16:39:44:Lustre: DEBUG MARKER: umount -d /mnt/mds1
16:39:44:Lustre: Failing over lustre-MDT0000
16:42:00:LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 10.1.4.136@tcp (no target). If you are running an HA pair check that the target is mounted on the other server.
16:42:00:LustreError: Skipped 2839 previous similar messages
16:42:00:Lustre: 11949:0:(client.c:1914:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1400279972/real 1400279972] req@ffff88007a72f000 x1468278858237216/t0(0) o251->MGC10.1.4.144@tcp@0@lo:26/25 lens 224/224 e 0 to 1 dl 1400279978 ref 2 fl Rpc:XN/0/ffffffff rc 0/-1
16:42:00:Lustre: 11949:0:(client.c:1914:ptlrpc_expire_one_request()) Skipped 12 previous similar messages
16:42:00:Lustre: server umount lustre-MDT0000 complete
16:42:00:Lustre: DEBUG MARKER: lsmod | grep lnet > /dev/null && lctl dl | grep ' ST '
16:42:00:Lustre: DEBUG MARKER: hostname
16:42:00:Lustre: DEBUG MARKER: mkdir -p /mnt/mds1
16:42:00:Lustre: DEBUG MARKER: test -b /dev/lvm-Role_MDS/P1
16:42:00:Lustre: DEBUG MARKER: mkdir -p /mnt/mds1; mount -t lustre -o user_xattr,acl /dev/lvm-Role_MDS/P1 /mnt/mds1
16:42:00:LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. quota=on. Opts:
16:42:00:LustreError: 12262:0:(mdt_coordinator.c:976:mdt_hsm_cdt_start()) lustre-MDT0000: cannot take the layout locks needed for registered restore: -2
16:42:00:Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 5 clients reconnect
16:42:00:Lustre: DEBUG MARKER: PATH=/usr/lib64/lustre/tests:/usr/lib/lustre/tests:/usr/lib64/lustre/tests:/opt/iozone/bin:/opt/iozone/bin:/usr/lib64/lustre/tests/mpi:/usr/lib64/lustre/tests/racer:/usr/lib64/lustre/../lustre-iokit/sgpdd-survey:/usr/lib64/lustre/tests:/usr/lib64/lustre/u
16:42:00:LNet: 12440:0:(debug.c:218:libcfs_debug_str2mask()) You are trying to use a numerical value for the mask - this will be deprecated in a future release.
16:42:00:LNet: 12440:0:(debug.c:218:libcfs_debug_str2mask()) Skipped 1 previous similar message
16:42:00:Lustre: DEBUG MARKER: e2label /dev/lvm-Role_MDS/P1 2>/dev/null
16:42:00:Lustre: mdt: This server is not able to keep up with request traffic (cpu-bound).
16:42:00:Lustre: 12222:0:(service.c:1511:ptlrpc_at_check_timed()) earlyQ=1 reqQ=0 recA=0, svcEst=21, delay=0(jiff)
16:42:00:Lustre: 12222:0:(service.c:1308:ptlrpc_at_send_early_reply()) @@@ Already past deadline (1s), not sending early reply. Consider increasing at_early_margin (5)? req@ffff88002ef5e400 x1468299694051176/t0(0) o400>ac3e68be-778c-5e81-3dfa-5ce31a3eb36d@10.1.4.142@tcp:0/0 lens 224/0 e 225451 to 0 dl 1400280113 ref 2 fl Complete:H/c0/ffffffff rc 0/-1
16:42:00:Lustre: 12229:0:(service.c:1308:ptlrpc_at_send_early_reply()) @@@ Already past deadline (1s), not sending early reply. Consider increasing at_early_margin (5)? req@ffff880022bb0000 x1468278870043488/t0(0) o400>lustre-MDT0001-mdtlov_UUID@10.1.4.140@tcp:0/0 lens 224/0 e 1032545 to 0 dl 1400280114 ref 2 fl Complete:/c0/ffffffff rc 0/-1
16:42:00:mdt_out00_000 invoked oom-killer: gfp_mask=0xd0, order=0, oom_adj=0, oom_score_adj=0
16:42:00:mdt_out00_000 cpuset=/ mems_allowed=0
16:42:00:Pid: 12229, comm: mdt_out00_000 Not tainted 2.6.32-431.17.1.el6_lustre.g684b38d.x86_64 #1
16:42:00:Call Trace:
16:42:00: [<ffffffff810d0211>] ? cpuset_print_task_mems_allowed+0x91/0xb0
16:42:00: [<ffffffff811225c0>] ? dump_header+0x90/0x1b0
16:42:00: [<ffffffff8129058d>] ? __bitmap_intersects+0x1d/0xa0
16:42:00: [<ffffffff8122781c>] ? security_real_capable_noaudit+0x3c/0x70
16:42:00: [<ffffffff81122a42>] ? oom_kill_process+0x82/0x2a0
16:42:00: [<ffffffff8112293e>] ? select_bad_process+0x9e/0x120
16:42:00: [<ffffffff81122e80>] ? out_of_memory+0x220/0x3c0
16:42:00: [<ffffffff8112f79f>] ? __alloc_pages_nodemask+0x89f/0x8d0
16:42:01: [<ffffffff8116e082>] ? kmem_getpages+0x62/0x170
16:42:01: [<ffffffff8116ec9a>] ? fallback_alloc+0x1ba/0x270
16:42:01: [<ffffffff8116e6ef>] ? cache_grow+0x2cf/0x320
16:42:01: [<ffffffff8116ea19>] ? ____cache_alloc_node+0x99/0x160
16:42:01: [<ffffffff8124bc3c>] ? crypto_create_tfm+0x3c/0xe0
16:42:01: [<ffffffff8116f7e9>] ? __kmalloc+0x189/0x220
16:42:01: [<ffffffff8124bc3c>] ? crypto_create_tfm+0x3c/0xe0
16:42:01: [<ffffffff812525d8>] ? crypto_init_shash_ops+0x68/0x100
16:42:01: [<ffffffff8124bd4a>] ? __crypto_alloc_tfm+0x6a/0x130
16:42:01: [<ffffffff8124c5ba>] ? crypto_alloc_base+0x5a/0xb0
16:42:01: [<ffffffffa04a2684>] ? cfs_percpt_unlock+0x24/0xb0 [libcfs]
16:42:01: [<ffffffffa048d2ca>] ? cfs_crypto_hash_alloc+0x7a/0x290 [libcfs]
16:42:01: [<ffffffffa048d5da>] ? cfs_crypto_hash_digest+0x6a/0xf0 [libcfs]
16:42:01: [<ffffffff8116f86c>] ? __kmalloc+0x20c/0x220
16:42:01: [<ffffffffa082bd73>] ? lustre_msg_calc_cksum+0xd3/0x130 [ptlrpc]
16:42:01: [<ffffffffa0865881>] ? null_authorize+0xa1/0x100 [ptlrpc]
16:42:01: [<ffffffffa0854a56>] ? sptlrpc_svc_wrap_reply+0x56/0x1c0 [ptlrpc]
16:42:01: [<ffffffffa08241ec>] ? ptlrpc_send_reply+0x1fc/0x7f0 [ptlrpc]
16:42:01: [<ffffffffa083b675>] ? ptlrpc_at_check_timed+0xc05/0x1360 [ptlrpc]
16:42:01: [<ffffffffa0832c09>] ? ptlrpc_wait_event+0xa9/0x2d0 [ptlrpc]
16:42:01: [<ffffffffa083cf68>] ? ptlrpc_main+0x1198/0x1980 [ptlrpc]
16:42:01: [<ffffffffa083bdd0>] ? ptlrpc_main+0x0/0x1980 [ptlrpc]
16:42:01: [<ffffffff8109ab56>] ? kthread+0x96/0xa0
16:42:01: [<ffffffff8100c20a>] ? child_rip+0xa/0x20
16:42:01: [<ffffffff8109aac0>] ? kthread+0x0/0xa0
16:42:01: [<ffffffff8100c200>] ? child_rip+0x0/0x20
16:42:01:Mem-Info:

test failed to respond and timed out

Info required for matching: insanity 1



 Comments   
Comment by Sarah Liu [ 23/Jun/14 ]

another instance:

https://maloo.whamcloud.com/test_sets/41a9daf4-f9de-11e3-9447-52540035b04c

Comment by nasf (Inactive) [ 30/Jun/14 ]

Another possible failure instance:

https://testing.hpdd.intel.com/test_sets/e0be2e84-ff23-11e3-b8ed-5254006e85c2

Comment by Andreas Dilger [ 11/Jul/14 ]

https://testing.hpdd.intel.com/test_sets/0048beca-07e1-11e4-9ea6-5254006e85c2 reports slab_unreclaimable:1741644kB of 2097136kB
https://testing.hpdd.intel.com/test_sets/e0be2e84-ff23-11e3-b8ed-5254006e85c2 reports slab_unreclaimable:1663660kB of 2097136kB

So it seems we are suffering from some kind of slab cache leak on both the OST and the MDT. While the cfs_crypto_hash_alloc() call makes it easier to identify this case, it isn't at all clear that this would be the source of the actual leak. We need to run some longer tests on a system to see where the slab cache entries are being leaked.

Comment by Andreas Dilger [ 28/Jul/14 ]

If this problem happens again, and it is possible to log into the node, then capturing the output of /proc/slabinfo would be very useful for debugging. Otherwise, getting a crashdump of the MDS would allow information about what is consuming the memory to be extracted from the crashdump.

Comment by Jian Yu [ 29/Jul/14 ]

Here is a test only patch to run insanity on master branch: http://review.whamcloud.com/11268
And here is a test only patch to run insanity on master branch with commit ff8087 reverted: http://review.whamcloud.com/11269

Testing kept hitting LU-5429, which blocks running the insanity tests with MDSCOUNT=4.

Comment by Andreas Dilger [ 30/Jul/14 ]

It seems that almost all of the recent test failures marked LU-5077 were actually caused by the three different versions of the LU-5420 patch that are being tested, and not the original LU-5077 problem. I've re-annotated them all with LU-5420.

Comment by Di Wang [ 31/Jul/14 ]

Hmm, this seems to occur more often on master recently:

https://testing.hpdd.intel.com/sub_tests/4e152edc-185c-11e4-be2f-5254006e85c2
https://testing.hpdd.intel.com/test_sets/64d04970-185e-11e4-9c4c-5254006e85c2

Comment by Jian Yu [ 21/Aug/14 ]

Lustre Build: https://build.hpdd.intel.com/job/lustre-b2_5/83/
Distro/Arch: RHEL6.5/x86_64
MDSCOUNT=2

replay-vbr test 5c hit the same OOM failure on the MDS:
https://testing.hpdd.intel.com/test_sets/fb694c46-28f9-11e4-8b4b-5254006e85c2

By searching on Maloo, I found one instance on Lustre 2.5.2:
https://testing.hpdd.intel.com/test_sets/4bb8fc7e-018f-11e4-b80d-5254006e85c2

And the following patch was also in Lustre 2.5.2:

commit	9aacef521e971dff6097a4ae335bd840521be5dd
LU-4877 mdt: tolerate zero md_packed in mdt_fix_reply

Comment by Peter Jones [ 09/Sep/14 ]

Niu

Could you please help with this issue?

Thanks

Peter

Comment by Andreas Dilger [ 09/Sep/14 ]

It looks like all of the cases where this is failing are calling into the stack ending in crypto_create_tfm():

out_of_memory+0x220/0x3c0
__alloc_pages_nodemask+0x89f/0x8d0
kmem_getpages+0x62/0x170
fallback_alloc+0x1ba/0x270
cache_grow+0x2cf/0x320
____cache_alloc_node+0x99/0x160
crypto_create_tfm+0x3c/0xe0
__kmalloc+0x189/0x220
crypto_create_tfm+0x3c/0xe0
crypto_init_shash_ops+0x68/0x100
__crypto_alloc_tfm+0x6a/0x130
crypto_alloc_base+0x5a/0xb0
cfs_percpt_unlock+0x24/0xb0 [libcfs]
cfs_crypto_hash_alloc+0x7a/0x290 [libcfs]
cfs_crypto_hash_digest+0x6a/0xf0 [libcfs]
lustre_msg_calc_cksum+0xd3/0x130 [ptlrpc]
null_authorize+0xa1/0x100 [ptlrpc]
sptlrpc_svc_wrap_reply+0x56/0x1c0 [ptlrpc]
ptlrpc_send_reply+0x1fc/0x7f0 [ptlrpc]
ptlrpc_at_check_timed+0xc05/0x1360 [ptlrpc]
ptlrpc_wait_event+0xa9/0x2d0 [ptlrpc]
ptlrpc_main+0x1198/0x1980 [ptlrpc]

The allocation in crypto_create_tfm() is being done using GFP_KERNEL, so it may fail a bit earlier than necessary.

Since this code is only ever going to use CRC32, and is only used for early replies in an interrupt context, it might make more sense to use the kernel library crc32 routines instead of going through the overhead of setting up the cryptoAPI context. The message size is just the ptlrpc_body (only about 100 bytes) and early replies should only be used in relatively rare cases, so getting hardware assist for that is probably not going to affect performance either way. That could eliminate a large amount of code and complexity and also remove this kind of allocation failure.

Comment by Niu Yawei (Inactive) [ 25/Sep/14 ]

Here is a test only patch to run insanity on master branch: http://review.whamcloud.com/11268
And here is a test only patch to run insanity on master branch with commit ff8087 reverted: http://review.whamcloud.com/11269
Testing kept hitting LU-5429, which blocks running the insanity tests with MDSCOUNT=4.

Hi, Yujian
I saw lots of OOM failures in replay-vbr and replay-single; it seems we could test replay-vbr instead of insanity to work around LU-5420 and LU-5429. What do you think?

Comment by Jian Yu [ 25/Sep/14 ]

Here is a test only patch to run replay-vbr on master branch: http://review.whamcloud.com/12064
And here is a test only patch to run replay-vbr on master branch with commit ff8087 reverted: http://review.whamcloud.com/12065

Comment by Jian Yu [ 30/Sep/14 ]

Test results in http://review.whamcloud.com/12065 showed that with commit ff8087 reverted the failure still occurred. So, commit ff8087 is not the culprit.

Comment by Andreas Dilger [ 30/Sep/14 ]

It would be worthwhile to try a simple patch to replace the cryptoAPI code with a direct call to the crc32() library functions in the kernel.

The other alternative is to add more debugging to this code so that we get good logs when a test fails. It would be good to know how often this code is being called and how large the allocation is. It should only be called in rare cases, because it is only used for early replies when the client might time out waiting for a busy server.

There has to be some strange reason why these tests always OOM in crypto_alloc_tfm() instead of in some random place. Is it because some calculation is going badly and it is trying to allocate a huge amount of memory? Is this function being called so much more often than others that it is just failing here more often due to statistics?
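
As a rough illustration of the kind of instrumentation suggested above (a hypothetical counter, not an actual posted patch), something like the following could be dropped into ptlrpc_at_send_early_reply() to make the call rate and message size visible in the logs:

	/* Hypothetical debug sketch, not from any landed patch: count how often
	 * early replies are generated and report the request message length so
	 * the rate shows up in the debug log before an eventual OOM. */
	static atomic_t early_reply_count = ATOMIC_INIT(0);

	if ((atomic_inc_return(&early_reply_count) % 1000) == 0)
		CDEBUG(D_WARNING, "%d early replies sent, reqmsg len %d\n",
		       atomic_read(&early_reply_count), req->rq_reqlen);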

Comment by James A Simmons [ 01/Oct/14 ]

Please don't remove the crypto wrapper in favor of direct crc32 access. Some platforms like ARM and PowerPC don't have hardware-accelerated crc32c but do have other crypto algorithms that are accelerated.

Comment by Dmitry Eremin (Inactive) [ 02/Oct/14 ]

I think the cryptoAPI code is not the issue here. It's just the last memory allocation before the OOM.

Comment by Andreas Dilger [ 02/Oct/14 ]

Dmitry, the question is why crypto_alloc_tfm() is always in the callpath that runs out of memory. If this was just a general shortage of memory, one would expect that memory allocation failures would happen in random places in the code instead of always in this same call stack. I don't think it is called frequently enough to just hit it due to statistical reasons.

Comment by Andreas Dilger [ 02/Oct/14 ]

James, as explained earlier in this ticket, in this particular instance Lustre is only using CRC32 to compute the checksum of the early reply message and does not need to support different checksum types. It is only checksumming 184 bytes of data that is sent if an RPC would otherwise have timed out (i.e. after tens or hundreds of seconds), so it is largely irrelevant whether ARM/PPC or even x86 have acceleration for CRC32 in this case. It may even be faster to call the crc32_le() function directly rather than going to external cryptoAPI code with hardware acceleration for such small messages.

It isn't clear to me yet why this code is failing in this particular case, but simplifying this code is never a bad thing, and maybe removing the cryptoAPI usage will tell us more about why/how it is running out of memory (i.e. do we start to fail in the crc32_le() or some other part of this same stack, or stop failing entirely).

The relevant commit that added the cryptoAPI code to lustre_msg_calc_cksum() was http://review.whamcloud.com/3098 (commit 353e3c2f8d9). It looks like the compatibility code for the older Lustre 1.6 message format could just be deleted at this point, leaving something like:

__u32 lustre_msg_calc_cksum(struct lustre_msg *msg)
{
        switch (msg->lm_magic) {
        case LUSTRE_MSG_MAGIC_V2: {
                struct ptlrpc_body *pb = lustre_msg_ptlrpc_body(msg);
                LASSERTF(pb != NULL, "invalid msg %p: no ptlrpc body!\n", msg);
                return crc32_le(~(__u32)0, (unsigned char *)pb,
                                lustre_msg_buflen(msg, MSG_PTLRPC_BODY_OFF));
        }
        default:
                CERROR("incorrect message magic: %08x\n", msg->lm_magic);
                return 0;
        }
}

Comment by nasf (Inactive) [ 04/Oct/14 ]

Another failure instance: replay-single test_48 timed out, with the MDS hitting out_of_memory with a similar stack.
https://testing.hpdd.intel.com/sub_tests/fe690dc4-4b3a-11e4-b999-5254006e85c2

Comment by Niu Yawei (Inactive) [ 08/Oct/14 ]

Hi, I'm just back from vacation. I'll try to compose a patch soon to add some debug code and to use the direct crc32 function.

Another thing that confuses me is why all of the OOMs happened just after recovery completed. What kind of operation immediately after recovery caused the OOM?

Comment by Niu Yawei (Inactive) [ 08/Oct/14 ]

patch of using crc32_le() directly instead of crypto API: http://review.whamcloud.com/#/c/12230

Comment by Andreas Dilger [ 13/Oct/14 ]

I wonder if this relates to LU-5727?

Comment by Jian Yu [ 13/Oct/14 ]

On master branch:

commit bfae5a4e4a37e4057200bcc70dbff14e18a00797
Author: Jinshan Xiong <jinshan.xiong@intel.com>
Date:   Thu Feb 6 23:03:25 2014 -0800

    LU-4300 ldlm: ELC picks locks in a safer policy
    
    Change the policy of ELC to pick locks that have no dirty pages,
    no page in writeback state, and no locked pages.
$ git revert bfae5a4e4a37e4057200bcc70dbff14e18a00797
error: could not revert bfae5a4... LU-4300 ldlm: ELC picks locks in a safer policy
hint: after resolving the conflicts, mark the corrected paths
hint: with 'git add <paths>' or 'git rm <paths>'
hint: and commit the result with 'git commit'

We cannot simply revert the patch. I'm looking at the conflicts.

Comment by Jian Yu [ 13/Oct/14 ]

Here is a test only patch to run replay-vbr on master branch: http://review.whamcloud.com/12286
And here is a test only patch to run replay-vbr on master branch with commit bfae5a4 reverted: http://review.whamcloud.com/12290

Comment by Andreas Dilger [ 14/Oct/14 ]

Yu Jian, it isn't clear why you made the patch http://review.whamcloud.com/12290 to revert commit bfae5a4 'fortestonly'. If this fixes the problem then we would want to land this patch on master.

Comment by Jian Yu [ 14/Oct/14 ]

Hi Andreas, I just updated the commit message to remove the 'fortestonly' parameter. Let's wait for test results.

Comment by John Hammond [ 15/Oct/14 ]

There are tons of crashdumps for this on shadow. If anyone needs help working with one then please ping me.

/export/scratch/dumps/shadow-14vm12.shadow.whamcloud.com/10.1.4.170-2014-10-14-03:24:41/vmcore-dmesg.txt
        <6>Lustre: Lustre: Build Version: jenkins-arch=x86_64,build_type=server,distro=el6,ib_stack=inkernel-27476-g51a843b-PRISTINE-2.6.32-431.29.2.el6_lustre.g01c1143.x86_64

        51a843b http://review.whamcloud.com/#/c/9720/5 LU-0000 osd: remove unused ->do_data_get() method
        <4>Lustre: DEBUG MARKER: == sanity test 103b: MDS mount option 'noacl' == 03:24:10 (1413257050)
        MDT OOM

        https://build.hpdd.intel.com/job/lustre-reviews/27476
crash> log
...
Lustre: DEBUG MARKER: == sanity test 103b: MDS mount option 'noacl' == 03:24:10 (1413257050)
Lustre: DEBUG MARKER: grep -c /mnt/mds1' ' /proc/mounts
Lustre: DEBUG MARKER: umount -d /mnt/mds1
Lustre: Failing over lustre-MDT0000
LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 10.1.4.169@tcp (no target). If you are
 running an HA pair check that the target is mounted on the other server.
LustreError: Skipped 54 previous similar messages
Lustre: 8142:0:(client.c:1934:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sen
t 1413257051/real 1413257051]  req@ffff88005ccdf400 x1481901673134392/t0(0) o251->MGC10.1.4.170@tcp@0@lo:26/25
 lens 224/224 e 0 to 1 dl 1413257057 ref 2 fl Rpc:XN/0/ffffffff rc 0/-1
Lustre: server umount lustre-MDT0000 complete
Lustre: DEBUG MARKER: lsmod | grep lnet > /dev/null && lctl dl | grep ' ST '
Lustre: DEBUG MARKER: mkdir -p /mnt/mds1
Lustre: DEBUG MARKER: test -b /dev/lvm-Role_MDS/P1
Lustre: DEBUG MARKER: mkdir -p /mnt/mds1; mount -t lustre -o noacl                                 /dev/lvm-Ro
le_MDS/P1 /mnt/mds1
LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. quota=on. Opts: 
LustreError: 11-0: lustre-MDT0000-lwp-MDT0000: Communicating with 0@lo, operation mds_connect failed with -11.
LustreError: Skipped 3 previous similar messages
Lustre: DEBUG MARKER: PATH=/usr/lib64/lustre/tests:/usr/lib/lustre/tests:/usr/lib64/lustre/tests:/opt/iozone/b
in:/opt/iozone/bin:/usr/lib64/lustre/tests/mpi:/usr/lib64/lustre/tests/racer:/usr/lib64/lustre/../lustre-iokit
/sgpdd-survey:/usr/lib64/lustre/tests:/usr/lib64/lustre/u
Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 7 clients reconnect
Lustre: DEBUG MARKER: e2label /dev/lvm-Role_MDS/P1 2>/dev/null
Lustre: DEBUG MARKER: lctl set_param -n mdt.lustre*.enable_remote_dir=1
Lustre: lustre-MDT0000: Denying connection for new client lustre-MDT0000-lwp-MDT0002_UUID (at 10.1.4.166@tcp),
 waiting for all 7 known clients (3 recovered, 0 in progress, and 0 evicted) to recover in 1:03
Lustre: Skipped 181 previous similar messages
Lustre: mdt: This server is not able to keep up with request traffic (cpu-bound).
Lustre: Skipped 2 previous similar messages
Lustre: 10761:0:(service.c:1487:ptlrpc_at_check_timed()) earlyQ=1 reqQ=0 recA=0, svcEst=16, delay=0(jiff)
Lustre: 10761:0:(service.c:1487:ptlrpc_at_check_timed()) Skipped 2 previous similar messages
Lustre: 10761:0:(service.c:1284:ptlrpc_at_send_early_reply()) @@@ Already past deadline (-1s), not sending ear
ly reply. Consider increasing at_early_margin (5)?  req@ffff8800614d5400 x1481689374041584/t0(0) o400->ad370be
7-de00-0a17-c4a8-41a66f6ba69e@10.1.4.160@tcp:0/0 lens 224/0 e 391777 to 0 dl 1413257076 ref 2 fl Complete:H/c0
/ffffffff rc 0/-1
Lustre: 10761:0:(service.c:1284:ptlrpc_at_send_early_reply()) Skipped 4 previous similar messages
sh invoked oom-killer: gfp_mask=0x280da, order=0, oom_adj=0, oom_score_adj=0
sh cpuset=/ mems_allowed=0
Pid: 14821, comm: sh Not tainted 2.6.32-431.29.2.el6_lustre.g01c1143.x86_64 #1
Call Trace:
 [<ffffffff810d07b1>] ? cpuset_print_task_mems_allowed+0x91/0xb0
 [<ffffffff81122b80>] ? dump_header+0x90/0x1b0
 [<ffffffff81122cee>] ? check_panic_on_oom+0x4e/0x80
 [<ffffffff811233db>] ? out_of_memory+0x1bb/0x3c0
 [<ffffffff8112fd5f>] ? __alloc_pages_nodemask+0x89f/0x8d0
 [<ffffffff81167dea>] ? alloc_pages_vma+0x9a/0x150
 [<ffffffff81149e0c>] ? do_wp_page+0x52c/0x920
 [<ffffffff8114a669>] ? __do_fault+0x469/0x530
 [<ffffffff8114a9fd>] ? handle_pte_fault+0x2cd/0xb00
 [<ffffffff8114f0bf>] ? vma_adjust+0x11f/0x5e0
 [<ffffffff8114b45a>] ? handle_mm_fault+0x22a/0x300
 [<ffffffff8104a8d8>] ? __do_page_fault+0x138/0x480
 [<ffffffff811519f5>] ? do_mmap_pgoff+0x335/0x380
 [<ffffffff8152f25e>] ? do_page_fault+0x3e/0xa0
 [<ffffffff8152c615>] ? page_fault+0x25/0x30
Mem-Info:
Node 0 DMA per-cpu:
CPU    0: hi:    0, btch:   1 usd:   0
CPU    1: hi:    0, btch:   1 usd:   0
Node 0 DMA32 per-cpu:
CPU    0: hi:  186, btch:  31 usd: 164
CPU    1: hi:  186, btch:  31 usd:  30
active_anon:949 inactive_anon:1003 isolated_anon:0
 active_file:6388 inactive_file:6311 isolated_file:32
 unevictable:0 dirty:0 writeback:1023 unstable:0
 free:13255 slab_reclaimable:2483 slab_unreclaimable:423051
 mapped:16 shmem:13 pagetables:614 bounce:0
Node 0 DMA free:8336kB min:332kB low:412kB high:496kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15348kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:7408kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes
lowmem_reserve[]: 0 2004 2004 2004
Node 0 DMA32 free:44852kB min:44720kB low:55900kB high:67080kB active_anon:3796kB inactive_anon:4012kB active_file:25552kB inactive_file:25244kB unevictable:0kB isolated(anon):0kB isolated(file):128kB present:2052308kB mlocked:0kB dirty:0kB writeback:4092kB mapped:64kB shmem:52kB slab_reclaimable:9932kB slab_unreclaimable:1684712kB kernel_stack:1744kB pagetables:2456kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 0 0
Node 0 DMA: 0*4kB 1*8kB 1*16kB 1*32kB 1*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 2*2048kB 1*4096kB = 8312kB
Node 0 DMA32: 3298*4kB 833*8kB 378*16kB 162*32kB 51*64kB 19*128kB 6*256kB 3*512kB 1*1024kB 0*2048kB 1*4096kB = 44976kB
1157 total pagecache pages
1025 pages in swap cache
Swap cache stats: add 3261, delete 2236, find 0/0
Free swap  = 4115720kB
Total swap = 4128764kB
524284 pages RAM
43695 pages reserved
192 pages shared
461953 pages non-shared
[ pid ]   uid  tgid total_vm      rss cpu oom_adj oom_score_adj name
[  365]     0   365     2700        0   1     -17         -1000 udevd
[  998]     0   998     6910        1   1     -17         -1000 auditd
[ 1014]     0  1014    62272      141   1       0             0 rsyslogd
[ 1043]     0  1043     2705        1   0       0             0 irqbalance
[ 1057]    32  1057     4744        1   1       0             0 rpcbind
[ 1075]    29  1075     5837        1   1       0             0 rpc.statd
[ 1188]    81  1188     6436        1   1       0             0 dbus-daemon
[ 1203]     0  1203    53901        1   1       0             0 ypbind
[ 1267]     0  1267     1020        0   0       0             0 acpid
[ 1276]    68  1276    10467        1   1       0             0 hald
[ 1277]     0  1277     5081        1   0       0             0 hald-runner
[ 1309]     0  1309     5611        1   1       0             0 hald-addon-inpu
[ 1319]    68  1319     4483        1   0       0             0 hald-addon-acpi
[ 1355]     0  1355    26827        0   0       0             0 rpc.rquotad
[ 1359]     0  1359     5414        0   0       0             0 rpc.mountd
[ 1394]     0  1394     6291        1   0       0             0 rpc.idmapd
[ 1425]   498  1425    57322        1   1       0             0 munged
[ 1440]     0  1440    16653        0   0     -17         -1000 sshd
[ 1448]     0  1448     5545       20   0       0             0 xinetd
[ 1472]     0  1472    22316       23   0       0             0 sendmail
[ 1480]    51  1480    20180       14   0       0             0 sendmail
[ 1502]     0  1502    29325        8   1       0             0 crond
[ 1513]     0  1513     5385        0   0       0             0 atd
[ 1526]     0  1526     1020        1   1       0             0 agetty
[ 1528]     0  1528     1016        1   0       0             0 mingetty
[ 1530]     0  1530     1016        1   1       0             0 mingetty
[ 1532]     0  1532     1016        1   0       0             0 mingetty
[ 1534]     0  1534     1016        1   1       0             0 mingetty
[ 1536]     0  1536     2663        0   1     -17         -1000 udevd
[ 1537]     0  1537     2699        0   0     -17         -1000 udevd
[ 1538]     0  1538     1016        1   1       0             0 mingetty
[ 1540]     0  1540     1016        1   1       0             0 mingetty
[ 2182]    38  2182     7686        2   0       0             0 ntpd
[14883]     0 14883     4872       57   0       0             0 anacron
[ 5591]     0  5591    26515       50   0       0             0 run-parts
[ 5602]     0  5602    26515       45   1       0             0 makewhatis.cron
[ 5603]     0  5603    26486       40   1       0             0 awk
[ 5608]     0  5608    26515       69   1       0             0 makewhatis
[13754]     0 13754    28217      220   1       0             0 find
[13755]     0 13755    26682      245   0       0             0 awk
[14821]     0 14821     2274       22   1       0             0 sh
Kernel panic - not syncing: Out of memory: system-wide panic_on_oom is enabled
crash> kmem -i
              PAGES        TOTAL      PERCENTAGE
 TOTAL MEM   480589       1.8 GB         ----
      FREE    13539      52.9 MB    2% of TOTAL MEM
      USED   467050       1.8 GB   97% of TOTAL MEM
    SHARED      161       644 KB    0% of TOTAL MEM
   BUFFERS      125       500 KB    0% of TOTAL MEM
    CACHED       19        76 KB    0% of TOTAL MEM
      SLAB   425257       1.6 GB   88% of TOTAL MEM

TOTAL SWAP  1032191       3.9 GB         ----
 SWAP USED     3261      12.7 MB    0% of TOTAL SWAP
 SWAP FREE  1028930       3.9 GB   99% of TOTAL SWAP

slab-512 is the problem:

crash> kmem -s | awk '$4 != 0'
CACHE            NAME                 OBJSIZE  ALLOCATED     TOTAL  SLABS  SSIZE
ffff8800790b3c80 mdd_obj                   80         29       912     19     4k
ffff880079123c40 lod_obj                  120         29       672     21     4k
ffff88006b493b80 mdt_obj                  248         29       352     22     4k
ffff88005f1a3ac0 lqe_kmem                 200          4        19      1     4k
ffff88006a803a00 ccc_session_kmem         224          4        34      2     4k
ffff88006b4f39c0 ccc_thread_kmem          360          8        22      2     4k
ffff88007a543900 vvp_session_kmem          96          4        40      1     4k
ffff880069a838c0 vvp_thread_kmem          376          8        20      2     4k
ffff88006a823600 lov_session_kmem         400          4        20      2     4k
ffff880057e535c0 lov_thread_kmem          384          8        20      2     4k
ffff88006c7834c0 ldlm_locks               512        116       315     45     4k
ffff88006be13480 ldlm_resources           320         17        96      8     4k
ffff880059083440 ptlrpc_cache            1024         76       236     59     4k
ffff88006ac233c0 cl_env_kmem              168          1        23      1     4k
ffff88007ab332c0 ll_obd_dev_cache        3592         20        20     20     4k
ffff88006ad83280 ldiskfs_inode_cache     1048        252       261     87     4k
ffff880060433180 ldiskfs_prealloc_space   112          1        34      1     4k
ffff88005aa33080 jbd2_revoke_table         16          2       202      1     4k
ffff88007ac62e40 nfs_write_data           960         36        36      9     4k
ffff88007b142dc0 nfs_inode_cache         1048          2        36     12     4k
ffff880037fc2d40 fscache_cookie_jar        80          4        48      1     4k
ffff88007a532d00 rpc_buffers             2048          8         8      4     4k
ffff880079ef2cc0 rpc_tasks                256          8        15      1     4k
ffff88007a4e2c80 rpc_inode_cache          832         22        24      6     4k
ffff88007b0d2c40 fib6_nodes                64         22        59      1     4k
ffff88007b612c00 ip6_dst_cache            384         13        20      2     4k
ffff88007d132bc0 ndisc_cache              256          1        15      1     4k
ffff88007d062b40 RAWv6                   1024          5         8      2     4k
ffff88007d7f2ac0 UDPv6                   1024         12        16      4     4k
ffff88007d7c2a80 tw_sock_TCPv6            320         72        96      8     4k
ffff88007d752a00 TCPv6                   1920         12        14      7     4k
ffff880037fe2980 ext3_inode_cache         800        362      1115    223     4k
ffff880037fd2940 ext3_xattr                88          1        44      1     4k
ffff880037f228c0 journal_head             112         16       476     14     4k
ffff880037f12880 revoke_table              16          2       202      1     4k
ffff880037d92800 scsi_sense_cache         128          2        30      1     4k
ffff880037d827c0 scsi_cmd_cache           256          2        15      1     4k
ffff880037b22600 dm_target_io              24       1536      1872     13     4k
ffff880037b125c0 dm_io                     40       1536      1748     19     4k
ffff8800378d2440 mqueue_inode_cache       896          1         4      1     4k
ffff8800378923c0 hugetlbfs_inode_cache    608          1         6      1     4k
ffff880037882380 dquot                    256          2        30      2     4k
ffff880037842280 inotify_inode_mark_entry 112        115       136      4     4k
ffff880037832240 dnotify_mark_entry       112          6        34      1     4k
ffff880037822200 dnotify_struct            32          6       112      1     4k
ffff880079341f40 uid_cache                128          7        30      1     4k
ffff88007d831f00 UNIX                     832         39        63      7     8k
ffff88007e311e40 tcp_bind_bucket           64        387       590     10     4k
ffff88007e2c1d00 ip_fib_hash               72         10        53      1     4k
ffff88007e211cc0 ip_dst_cache             384         30        50      5     4k
ffff88007e201c80 arp_cache                256          8        15      1     4k
ffff88007e1e1c00 RAW                      832          3         9      1     8k
ffff88007e1d1bc0 UDP                      896         16        24      6     4k
ffff88007e1c1b80 tw_sock_TCP              256        366       495     33     4k
ffff88007e1a1b00 TCP                     1728         30        32      8     8k
ffff88007e171ac0 eventpoll_pwq             72          5        53      1     4k
ffff88007e161a80 eventpoll_epi            128          5        30      1     4k
ffff88007e001a40 sgpool-256              8192          2         2      2     8k
ffff88007e7f1a00 sgpool-128              4096          2         2      2     4k
ffff88007e7e19c0 sgpool-64               2048          2         2      1     4k
ffff88007e7d1980 sgpool-32               1024          2         4      1     4k
ffff88007e7c1940 sgpool-16                512          2         8      1     4k
ffff88007e7b1900 sgpool-8                 256          2        15      1     4k
ffff88007e701840 blkdev_queue            2864         31        32     16     8k
ffff88007e6d1800 blkdev_requests          352         43       132     12     4k
ffff88007e6c17c0 blkdev_ioc                80          1        48      1     4k
ffff88007e681700 bio-0                    192       1123      1480     74     4k
ffff88007e6716c0 biovec-256              4096         98        98     98     4k
ffff88007e6315c0 bip-256                 4224          2         2      2     8k
ffff88007e5a1440 sock_inode_cache         704        136       155     31     4k
ffff88007e591400 skbuff_fclone_cache      512          3       343     49     4k
ffff88007e5813c0 skbuff_head_cache        256         66       225     15     4k
ffff88007e571380 file_lock_cache          176          9        44      2     4k
ffff88007e531300 shmem_inode_cache        784        674       680    136     4k
ffff88007e4412c0 Acpi-Operand              72       1204      1219     23     4k
ffff88007e4011c0 Acpi-Namespace            40        400       460      5     4k
ffff88007eff1180 task_delay_info          112        219       340     10     4k
ffff88007efe1140 taskstats                328          1        12      1     4k
ffff88007efd1100 proc_inode_cache         656         29       552     92     4k
ffff88007efc10c0 sigqueue                 160          1       144      6     4k
ffff88007efb1080 bdev_cache               832         35        36      9     4k
ffff88007ef81040 sysfs_dir_cache          144       8229      8235    305     4k
ffff88007ef71000 mnt_cache                256         31        45      3     4k
ffff88007ef60fc0 filp                     192        627      1260     63     4k
ffff88007ee50f80 inode_cache              592       4474      4986    831     4k
ffff88007ec30f40 dentry                   192       5616      9520    476     4k
ffff88007ec00e80 selinux_inode_security    72       7254      7314    138     4k
ffff88007f3f0e40 radix_tree_node          560        112       959    137     4k
ffff88007f3e0e00 key_jar                  192          6        20      1     4k
ffff88007f3d0dc0 buffer_head              104      12751     19277    521     4k
ffff88007f3b0d40 vm_area_struct           200       1981      2451    129     4k
ffff88007f3a0d00 mm_struct               1408         42        75     15     8k
ffff88007f390cc0 fs_cache                  64         84       236      4     4k
ffff88007f380c80 files_cache              704         43       110     10     8k
ffff88007f370c40 signal_cache            1088        208       238     34     8k
ffff88007f360c00 sighand_cache           2112        208       240     80     8k
ffff88007f350bc0 task_xstate              512         52       112     14     4k
ffff88007f340b80 task_struct             2656        218       252     84     8k
ffff88007f330b40 cred_jar                 192        288       580     29     4k
ffff88007f320b00 anon_vma_chain            48       1944      2541     33     4k
ffff88007f310ac0 anon_vma                  40       1526      1840     20     4k
ffff88007f300a80 pid                      128        227       330     11     4k
ffff88007fab0a00 numa_policy              136          1        28      1     4k
ffff88007fa809c0 idr_layer_cache          544        280       301     43     4k
ffff88007f990640 size-65536             65536          1         1      1    64k
ffff88007f9705c0 size-32768             32768          2         2      2    32k
ffff88007f950540 size-16384             16384         22        31     31    16k
ffff88007f9304c0 size-8192               8192        685       689    689     8k
ffff88007f910440 size-4096               4096        489       631    631     4k
ffff88007f8f03c0 size-2048               2048       1168      1230    615     4k
ffff88007f8d0340 size-1024               1024       1149      1412    353     4k
ffff88007f8b02c0 size-512                 512    2823631   2830768 353846     4k
ffff88007f890240 size-256                 256        430       615     41     4k
ffff88007f8701c0 size-192                 192       2496      4100    205     4k
ffff88007f840100 size-64                   64      11484     13098    222     4k
ffff88007f810080 size-128                 128    1768713   1773240  59108     4k
ffff88007f800040 size-32                   32      20003     22512    201     4k
ffffffff81ac5420 kmem_cache             32896        243       243    243    64k
crash> ps | grep -v IN
   PID    PPID  CPU       TASK        ST  %MEM     VSZ    RSS  COMM
      0      0   0  ffffffff81a8d020  RU   0.0       0      0  [swapper]
      0      0   1  ffff88007e509540  RU   0.0       0      0  [swapper]
     12      2   1  ffff88007e51b500  RU   0.0       0      0  [events/1]
     39      2   1  ffff880079337540  RU   0.0       0      0  [kswapd0]
   1018      1   1  ffff8800378b8ae0  RU   0.0  249088    564  rsyslogd
   2820      2   1  ffff88007dae4040  RU   0.0       0      0  [socknal_reaper]
>  2821      2   0  ffff88007db24aa0  RU   0.0       0      0  [socknal_sd00_00]
   2822      2   0  ffff880079050080  RU   0.0       0      0  [socknal_sd00_01]
  10770      2   1  ffff88007b642aa0  RU   0.0       0      0  [mdt_out00_000]
  10891      2   0  ffff88007982c080  UN   0.0       0      0  [tgt_recov]
> 14821  13755   1  ffff88007b6b4080  RU   0.0    9096     88  sh
crash> bt 12
PID: 12     TASK: ffff88007e51b500  CPU: 1   COMMAND: "events/1"
 #0 [ffff88007e51dd70] schedule at ffffffff815296c0
 #1 [ffff88007e51de38] worker_thread at ffffffff81094aac
 #2 [ffff88007e51dee8] kthread at ffffffff8109abf6
 #3 [ffff88007e51df48] kernel_thread at ffffffff8100c20a
crash> bt 39
PID: 39     TASK: ffff880079337540  CPU: 1   COMMAND: "kswapd0"
 #0 [ffff880079339b00] schedule at ffffffff815296c0
 #1 [ffff880079339bc8] schedule_timeout at ffffffff8152a532
 #2 [ffff880079339c78] io_schedule_timeout at ffffffff8152918f
 #3 [ffff880079339ca8] congestion_wait at ffffffff81142df0
 #4 [ffff880079339d08] balance_pgdat at ffffffff8113c2c8
 #5 [ffff880079339e28] kswapd at ffffffff8113c494
 #6 [ffff880079339ee8] kthread at ffffffff8109abf6
 #7 [ffff880079339f48] kernel_thread at ffffffff8100c20a
crash> bt 1018
PID: 1018   TASK: ffff8800378b8ae0  CPU: 1   COMMAND: "rsyslogd"
 #0 [ffff880079aebd38] schedule at ffffffff815296c0
 #1 [ffff880079aebe00] do_syslog at ffffffff81073731
 #2 [ffff880079aebe80] kmsg_read at ffffffff81200142
 #3 [ffff880079aebea0] proc_reg_read at ffffffff811f431e
 #4 [ffff880079aebef0] vfs_read at ffffffff81189a95
 #5 [ffff880079aebf30] sys_read at ffffffff81189bd1
 #6 [ffff880079aebf80] system_call_fastpath at ffffffff8100b072
    RIP: 00007fa949ea875d  RSP: 00007fa947a7cd38  RFLAGS: 00000202
    RAX: 0000000000000000  RBX: ffffffff8100b072  RCX: 0000000000000010
    RDX: 0000000000000fff  RSI: 00007fa949086780  RDI: 0000000000000003
    RBP: 0000000000000000   R8: 00007fa94a952760   R9: 0000000000000000
    R10: 64657070696b5320  R11: 0000000000000293  R12: 00007fa947a7d9c0
    R13: 00007fa94a7501c0  R14: 00007fa94a967910  R15: 00007fa94a7501c0
    ORIG_RAX: 0000000000000000  CS: 0033  SS: 002b
crash> bt 2820
PID: 2820   TASK: ffff88007dae4040  CPU: 1   COMMAND: "socknal_reaper"
 #0 [ffff88006be0fcc0] schedule at ffffffff815296c0
 #1 [ffff88006be0fd88] schedule_timeout at ffffffff8152a532
 #2 [ffff88006be0fe38] ksocknal_reaper at ffffffffa0a23c82 [ksocklnd]
 #3 [ffff88006be0fee8] kthread at ffffffff8109abf6
 #4 [ffff88006be0ff48] kernel_thread at ffffffff8100c20a
crash> bt 2821
PID: 2821   TASK: ffff88007db24aa0  CPU: 0   COMMAND: "socknal_sd00_00"
 #0 [ffff880002207e90] crash_nmi_callback at ffffffff8102fee6
 #1 [ffff880002207ea0] notifier_call_chain at ffffffff8152f315
 #2 [ffff880002207ee0] atomic_notifier_call_chain at ffffffff8152f37a
 #3 [ffff880002207ef0] notify_die at ffffffff810a11de
 #4 [ffff880002207f20] do_nmi at ffffffff8152cfdb
 #5 [ffff880002207f50] nmi at ffffffff8152c8a0
    [exception RIP: cp_interrupt+110]
    RIP: ffffffffa00f0e4e  RSP: ffff880002203e78  RFLAGS: 00010246
    RAX: 0000000000000000  RBX: ffff880037d5c6e0  RCX: 0000000000000000
    RDX: ffffc900009d4000  RSI: ffff880037d5c020  RDI: ffff880037d5c6f0
    RBP: ffff880002203ed8   R8: ffff880065efc000   R9: ffff880065efde10
    R10: 0000000000000000  R11: 0000000000000100  R12: 0000000000000000
    R13: 0000000000000000  R14: 000000000000000b  R15: ffff880037d5c020
    ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0000
--- <NMI exception stack> ---
 #6 [ffff880002203e78] cp_interrupt at ffffffffa00f0e4e [8139cp]
 #7 [ffff880002203ee0] handle_IRQ_event at ffffffff810e70b0
 #8 [ffff880002203f30] handle_fasteoi_irq at ffffffff810e9b38
 #9 [ffff880002203f60] handle_irq at ffffffff8100faf9
#10 [ffff880002203f80] do_IRQ at ffffffff81532ddc
--- <IRQ stack> ---
#11 [ffff880065efdd68] ret_from_intr at ffffffff8100b9d3
    [exception RIP: _spin_lock_bh+31]
    RIP: ffffffff8152c21f  RSP: ffff880065efde10  RFLAGS: 00000206
    RAX: 0000000000000000  RBX: ffff880065efde20  RCX: 0000000000000120
    RDX: 0000000000000000  RSI: 0000000000000000  RDI: ffffc9000183b000
    RBP: ffffffff8100b9ce   R8: ffff88005b13c200   R9: 00000000000000c0
    R10: 0000000000000000  R11: 0000000000000100  R12: ffff880065efddf0
    R13: ffff880079af9980  R14: ffff88005b13c200  R15: ffffffffa04a1c64
    ORIG_RAX: ffffffffffffffc4  CS: 0010  SS: 0000
#12 [ffff880065efde28] ksocknal_scheduler at ffffffffa0a22105 [ksocklnd]
#13 [ffff880065efdee8] kthread at ffffffff8109abf6
#14 [ffff880065efdf48] kernel_thread at ffffffff8100c20a
crash> bt 2822
PID: 2822   TASK: ffff880079050080  CPU: 0   COMMAND: "socknal_sd00_01"
 #0 [ffff880065effad0] schedule at ffffffff815296c0
 #1 [ffff880065effb98] __cond_resched at ffffffff810695fa
 #2 [ffff880065effbb8] _cond_resched at ffffffff8152a100
 #3 [ffff880065effbc8] __kmalloc at ffffffff8116fde0
 #4 [ffff880065effc18] ksocknal_alloc_tx at ffffffffa0a1ea10 [ksocklnd]
 #5 [ffff880065effc38] ksocknal_send at ffffffffa0a23019 [ksocklnd]
 #6 [ffff880065effcb8] lnet_ni_send at ffffffffa0540b7b [lnet]
 #7 [ffff880065effcd8] lnet_post_send_locked at ffffffffa0543ad4 [lnet]
 #8 [ffff880065effd28] lnet_return_tx_credits_locked at ffffffffa0543f1f [lnet]
 #9 [ffff880065effd58] lnet_msg_decommit at ffffffffa053a110 [lnet]
#10 [ffff880065effd88] lnet_finalize at ffffffffa053af8b [lnet]
#11 [ffff880065effdf8] ksocknal_tx_done at ffffffffa0a206b9 [ksocklnd]
#12 [ffff880065effe28] ksocknal_scheduler at ffffffffa0a220fd [ksocklnd]
#13 [ffff880065effee8] kthread at ffffffff8109abf6
#14 [ffff880065efff48] kernel_thread at ffffffff8100c20a
crash> bt 10770
PID: 10770  TASK: ffff88007b642aa0  CPU: 1   COMMAND: "mdt_out00_000"
 #0 [ffff880070c8b260] schedule at ffffffff815296c0
 #1 [ffff880070c8b328] __cond_resched at ffffffff810695fa
 #2 [ffff880070c8b348] _cond_resched at ffffffff8152a100
 #3 [ffff880070c8b358] shrink_page_list.clone.3 at ffffffff81139212
 #4 [ffff880070c8b4a8] shrink_inactive_list at ffffffff8113a1f3
 #5 [ffff880070c8b658] shrink_mem_cgroup_zone at ffffffff8113aa56
 #6 [ffff880070c8b708] shrink_zone at ffffffff8113ac5a
 #7 [ffff880070c8b788] do_try_to_free_pages at ffffffff8113ae75
 #8 [ffff880070c8b828] try_to_free_pages at ffffffff8113b542
 #9 [ffff880070c8b8c8] __alloc_pages_nodemask at ffffffff8112f93e
#10 [ffff880070c8ba08] kmem_getpages at ffffffff8116e6d2
#11 [ffff880070c8ba38] fallback_alloc at ffffffff8116f2ea
#12 [ffff880070c8bab8] ____cache_alloc_node at ffffffff8116f069
#13 [ffff880070c8bb18] kmem_cache_alloc_trace at ffffffff8116f8d7
#14 [ffff880070c8bb68] LNetPut at ffffffffa0545b9a [lnet]
#15 [ffff880070c8bbc8] ptl_send_buf at ffffffffa08172b0 [ptlrpc]
#16 [ffff880070c8bc78] ptlrpc_send_reply at ffffffffa081788b [ptlrpc]
#17 [ffff880070c8bcf8] ptlrpc_at_check_timed at ffffffffa082ee85 [ptlrpc]
#18 [ffff880070c8bda8] ptlrpc_main at ffffffffa0830808 [ptlrpc]
#19 [ffff880070c8bee8] kthread at ffffffff8109abf6
#20 [ffff880070c8bf48] kernel_thread at ffffffff8100c20a
crash> bt 10891
PID: 10891  TASK: ffff88007982c080  CPU: 0   COMMAND: "tgt_recov"
 #0 [ffff88005a8fbce0] schedule at ffffffff815296c0
 #1 [ffff88005a8fbda8] target_recovery_overseer at ffffffffa07ddf5d [ptlrpc]
 #2 [ffff88005a8fbe48] target_recovery_thread at ffffffffa07e5406 [ptlrpc]
 #3 [ffff88005a8fbee8] kthread at ffffffff8109abf6
 #4 [ffff88005a8fbf48] kernel_thread at ffffffff8100c20a
crash> bt 14821
PID: 14821  TASK: ffff88007b6b4080  CPU: 1   COMMAND: "sh"
 #0 [ffff88004009d820] machine_kexec at ffffffff81038f3b
 #1 [ffff88004009d880] crash_kexec at ffffffff810c5f42
 #2 [ffff88004009d950] panic at ffffffff81529003
 #3 [ffff88004009d9d0] check_panic_on_oom at ffffffff81122d1c
 #4 [ffff88004009da00] out_of_memory at ffffffff811233db
 #5 [ffff88004009da80] __alloc_pages_nodemask at ffffffff8112fd5f
 #6 [ffff88004009dbc0] alloc_pages_vma at ffffffff81167dea
 #7 [ffff88004009dc10] do_wp_page at ffffffff81149e0c
 #8 [ffff88004009dcb0] handle_pte_fault at ffffffff8114a9fd
 #9 [ffff88004009dd90] handle_mm_fault at ffffffff8114b45a
#10 [ffff88004009de00] __do_page_fault at ffffffff8104a8d8
#11 [ffff88004009df20] do_page_fault at ffffffff8152f25e
#12 [ffff88004009df50] page_fault at ffffffff8152c615
    RIP: 00007fadf361011b  RSP: 00007fffe990c2f8  RFLAGS: 00010246
    RAX: 0000000000000004  RBX: 00000000756e6547  RCX: 000000006c65746e
    RDX: 0000000049656e69  RSI: 00007fadf35f1000  RDI: 00007fadf3dc42a0
    RBP: 00007fffe990c3f0   R8: 000000000034e6b4   R9: 000000000000003d
    R10: 00007fffe990c130  R11: 00007fffe990c130  R12: 00007fadf35f4c48
    R13: 00007fadf360f970  R14: 0000000000000025  R15: 00007fadf397f018
    ORIG_RAX: ffffffffffffffff  CS: 0033  SS: 002b

Comment by nasf (Inactive) [ 16/Oct/14 ]

After reverting the LU-4300 patch, we still hit the same OOM; here is an instance:
https://testing.hpdd.intel.com/test_sets/53b10dca-5489-11e4-84d2-5254006e85c2

Comment by Sarah Liu [ 16/Oct/14 ]

another instance seen in replay-vbr:

https://testing.hpdd.intel.com/test_sets/473c9eda-5428-11e4-abcf-5254006e85c2

Comment by John Hammond [ 17/Oct/14 ]

I think these OOMs are due to lnet+ptlrpc and not to LDLM. In the crash dumps I find there are a trivial number of locks on the MDT, but lnet_memused may be as high as 700 MB. Running replay-single 0c (one place where we see OOMs) under trace, I see that ptlrpc_send_early_reply() is being called 1000s of times a second to handle the same three OBD_PINGs:

awk -F: '$4 ~ /1413555299/' 8.dk
00000100:00000010:1.0:1413555299.934838:0:13846:0:(service.c:1335:ptlrpc_at_send_early_reply()) kmalloced 'reqmsg': 224 at ffff88006dc987c0.
02000000:00000010:1.0:1413555299.934839:0:13846:0:(sec_null.c:327:null_alloc_rs()) kmalloced 'rs': 488 at ffff880068e26200.
00000100:00000010:1.0:1413555299.934841:0:13846:0:(service.c:1394:ptlrpc_at_send_early_reply()) kfreed 'reqmsg': 224 at ffff88006dc987c0.
00000100:00000010:1.0:1413555299.934842:0:13846:0:(client.c:414:ptlrpc_request_cache_free()) slab-freed '(req)': 984 at ffff880032b64400.
00000100:00001000:1.0:1413555299.934843:0:13846:0:(service.c:1276:ptlrpc_at_send_early_reply()) @@@ sending early reply (deadline +0s, margin -21s) for 21+30  req@ffff880066e91400 x1482175522279684/t0(0) o400->lustre-MDT0002-mdtlov_UUID@10.1.5.202@tcp:0/0 lens 224/0 e 27720 to 0 dl 1413555299 ref 2 fl Complete:/c0/ffffffff rc 0/-1
00000100:00001000:1.0:1413555299.934845:0:13846:0:(import.c:1570:at_measured()) add 21 to ffff88007cab0150 time=21 v=21 (21 0 0 0)
00000100:00000010:1.0:1413555299.934846:0:13846:0:(client.c:408:ptlrpc_request_cache_alloc()) slab-alloced 'req': 984 at ffff880032b64400.
00000100:00000010:1.0:1413555299.934847:0:13846:0:(service.c:1335:ptlrpc_at_send_early_reply()) kmalloced 'reqmsg': 224 at ffff88006dc987c0.
02000000:00000010:1.0:1413555299.934848:0:13846:0:(sec_null.c:327:null_alloc_rs()) kmalloced 'rs': 488 at ffff880068e25e00.
00000100:00000010:1.0:1413555299.934850:0:13846:0:(service.c:1394:ptlrpc_at_send_early_reply()) kfreed 'reqmsg': 224 at ffff88006dc987c0.
00000100:00000010:1.0:1413555299.934851:0:13846:0:(client.c:414:ptlrpc_request_cache_free()) slab-freed '(req)': 984 at ffff880032b64400.
00000100:00001000:1.0:1413555299.934852:0:13846:0:(service.c:1476:ptlrpc_at_check_timed()) timeout in +0s, asking for 30 secs on 2 early replies
00000100:00001000:1.0:1413555299.934853:0:13846:0:(service.c:1276:ptlrpc_at_send_early_reply()) @@@ sending early reply (deadline +0s, margin -21s) for 21+30  req@ffff880066e91400 x1482175522279684/t0(0) o400->lustre-MDT0002-mdtlov_UUID@10.1.5.202@tcp:0/0 lens 224/0 e 27721 to 0 dl 1413555299 ref 2 fl Complete:/c0/ffffffff rc 0/-1
00000100:00001000:1.0:1413555299.934855:0:13846:0:(import.c:1570:at_measured()) add 21 to ffff88007cab0150 time=21 v=21 (21 0 0 0)
00000100:00000010:1.0:1413555299.934856:0:13846:0:(client.c:408:ptlrpc_request_cache_alloc()) slab-alloced 'req': 984 at ffff880032b64400.
00000100:00000010:1.0:1413555299.934856:0:13846:0:(service.c:1335:ptlrpc_at_send_early_reply()) kmalloced 'reqmsg': 224 at ffff88006dc987c0.
02000000:00000010:1.0:1413555299.934857:0:13846:0:(sec_null.c:327:null_alloc_rs()) kmalloced 'rs': 488 at ffff880068e25a00.
00000100:00000010:1.0:1413555299.934860:0:13846:0:(service.c:1394:ptlrpc_at_send_early_reply()) kfreed 'reqmsg': 224 at ffff88006dc987c0.
00000100:00000010:1.0:1413555299.934860:0:13846:0:(client.c:414:ptlrpc_request_cache_free()) slab-freed '(req)': 984 at ffff880032b64400.
...
[root@shadow-37vm1 ~]# awk -F: '$4 ~ /1413555299/' 8.dk | grep -c ptlrpc_at_send_early_reply
13749

In all the cases the new request deadline is the same as the old deadline.

[root@shadow-37vm1 ~]# awk -F: '$4 ~ /1413555299/' 8.dk | grep -c null_alloc_rs
4583
[root@shadow-37vm1 ~]# awk -F: '$4 ~ /1413555299/' 8.dk | grep -c null_free_rs
2580
[root@shadow-37vm1 ~]# awk -F: '$4 ~ /1413555300/' 8.dk | grep -c null_alloc_rs
95
[root@shadow-37vm1 ~]# awk -F: '$4 ~ /1413555300/' 8.dk | grep -c null_free_rs
49217
[root@shadow-37vm1 ~]# 

Perhaps we should look at reverting or adjusting:

commit 8b2f9c0e40822dd847494f27d9ef4c6858fee78c
Author: Chris Horn <hornc@cray.com>
Date:   Mon Feb 3 15:24:01 2014 -0600

    LU-4578 ptlrpc: Early replies need to honor at_max
    
    When determining whether an early reply can be sent the server will
    calculate the new deadline based on an offset from the request
    arrival time. However, when actually setting the new deadline
    the server offsets the current time. This can result in deadlines
    being extended more than at_max seconds past the request arrival
    time. Instead, the server should offset the arrival time when updating
    its request timeout.
    
    When a client receives an early reply it doesn't know the server side
    arrival time so we use the original sent time as an approximation.
    
    Signed-off-by: Chris Horn <hornc@cray.com>
    Change-Id: I1b7b04bd20ea5d471ab3777f987e89add744b813
    Reviewed-on: http://review.whamcloud.com/9100
    Tested-by: Jenkins
    Reviewed-by: James Simmons <uja.ornl@gmail.com>
    Tested-by: Maloo <hpdd-maloo@intel.com>
    Reviewed-by: Christopher J. Morrone <chris.morrone.llnl@gmail.com>
    Reviewed-by: Oleg Drokin <oleg.drokin@intel.com>

...

@@ -1344,7 +1352,6 @@ static int ptlrpc_at_send_early_reply(struct ptlrpc_request *req)
                        RETURN(-ETIMEDOUT);
                }
        }
-       newdl = cfs_time_current_sec() + at_get(&svcpt->scp_at_estimate);
 
        reqcopy = ptlrpc_request_cache_alloc(GFP_NOFS);
        if (reqcopy == NULL)
@@ -1390,13 +1397,14 @@ static int ptlrpc_at_send_early_reply(struct ptlrpc_request *req)
 
         rc = ptlrpc_send_reply(reqcopy, PTLRPC_REPLY_EARLY);
 
-        if (!rc) {
-                /* Adjust our own deadline to what we told the client */
-                req->rq_deadline = newdl;
-                req->rq_early_count++; /* number sent, server side */
-        } else {
-                DEBUG_REQ(D_ERROR, req, "Early reply send failed %d", rc);
-        }
+       if (!rc) {
+               /* Adjust our own deadline to what we told the client */
+               req->rq_deadline = req->rq_arrival_time.tv_sec +
+                                  at_get(&svcpt->scp_at_estimate);
+               req->rq_early_count++; /* number sent, server side */
+       } else {
+               DEBUG_REQ(D_ERROR, req, "Early reply send failed %d", rc);
+       }
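
With this hunk the recomputed deadline is rq_arrival_time + at_estimate, so if that sum is already in the past the deadline never advances, and the request is immediately past its deadline again on the next ptlrpc_at_check_timed() pass. A toy simulation of that arithmetic, using the numbers from the trace above (illustration only, not Lustre code):

#include <stdio.h>

/* Toy simulation (not Lustre source) of the deadline update before and
 * after the LU-4578 change, using the values from the trace above:
 * the request arrived at t=1413555278 and at_estimate is stuck at 21s. */
int main(void)
{
	long arrival = 1413555278, estimate = 21, now = 1413555299;
	int pass;

	for (pass = 0; pass < 3; pass++) {
		long old_dl = now + estimate;      /* pre-patch: always in the future */
		long new_dl = arrival + estimate;  /* post-patch: fixed, equal to now */

		printf("pass %d: now=%ld old-style dl=%ld new-style dl=%ld\n",
		       pass, now, old_dl, new_dl);
		/* new_dl never moves past "now", so another early reply (and
		 * another reply buffer allocation) follows on the next check. */
	}
	return 0;
}
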
Comment by Andreas Dilger [ 17/Oct/14 ]

That the node is looping on early replies makes sense, since that is the main codepath that invokes crypto_alloc_tfm().

Comment by Andreas Dilger [ 17/Oct/14 ]

I've created http://review.whamcloud.com/12330 to revert this patch.

Comment by John Hammond [ 17/Oct/14 ]

My local testing shows that running with http://review.whamcloud.com/#/c/12330/ eliminates the looping in early reply.

Comment by James A Simmons [ 17/Oct/14 ]

Instead of reverting it, can we try the patch for LU-5079 - http://review.whamcloud.com/#/c/11213? It should address the problem with the LU-4578 patch.

Comment by Chris Horn [ 17/Oct/14 ]

I agree with James that the patch for LU-5079 should resolve this issue.

Comment by James A Simmons [ 17/Oct/14 ]

I'd hate to see it reverted, since the original patch helped reduce a lot of the client evictions I saw earlier. Reverting would cause just as much damage in my testing.

Comment by Andreas Dilger [ 21/Oct/14 ]

This was addressed by landing http://review.whamcloud.com/11213 instead of reverting the 9100 patch.

Comment by Sarah Liu [ 11/Nov/14 ]

Seen in interop testing with a 2.6.0 server and a master client:
https://testing.hpdd.intel.com/test_sets/8d7a5190-68fe-11e4-9444-5254006e85c2
