[LU-4754] MDS large amount of slab usage Created: 12/Mar/14  Updated: 15/Mar/14  Resolved: 12/Mar/14

Status: Resolved
Project: Lustre
Component/s: None
Affects Version/s: Lustre 2.5.1
Fix Version/s: None

Type: Bug Priority: Minor
Reporter: James Beal Assignee: WC Triage
Resolution: Duplicate Votes: 0
Labels: None
Environment:

Linux lustre-utils01 3.8.0-37-generic #53~precise1-Ubuntu SMP Wed Feb 19 21:37:54 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux


Issue Links:
Duplicate
duplicates LU-4053 client leaking objects/locks during IO Resolved
Related
is related to LU-4053 client leaking objects/locks during IO Resolved
Severity: 3
Rank (Obsolete): 13083

 Description   

We have a virtual machine which runs various admin functions, one of which is forcing group ownership in certain directories.

This may be related to LU-2613.

The behaviour has improved: what happens now is that slab usage increases until we get the following messages in /var/log/kernel:

Mar 12 09:16:32 lustre-utils01 kernel: [153486.607665] LustreError: 11-0: lus13-MDT0000-mdc-ffff8804145e5000: Communicating with 172.17.117.153@tcp, operation mds_close failed with -107.
Mar 12 09:16:32 lustre-utils01 kernel: [153486.608011] Lustre: lus13-MDT0000-mdc-ffff8804145e5000: Connection to lus13-MDT0000 (at 172.17.117.153@tcp) was lost; in progress operations using this service will wait for recovery to complete
Mar 12 09:16:32 lustre-utils01 kernel: [153486.609483] LustreError: 167-0: lus13-MDT0000-mdc-ffff8804145e5000: This client was evicted by lus13-MDT0000; in progress operations using this service will fail.
Mar 12 09:16:33 lustre-utils01 kernel: [153486.972130] LustreError: 27292:0:(file.c:174:ll_close_inode_openhandle()) lus13-clilmv-ffff8804145e5000: inode [0x2000067f4:0xcf79:0x0] mdc close failed: rc = -5
Mar 12 09:16:33 lustre-utils01 kernel: [153486.972388] LustreError: 27292:0:(file.c:174:ll_close_inode_openhandle()) Skipped 90999 previous similar messages
Mar 12 09:16:33 lustre-utils01 kernel: [153487.479055] LustreError: 24002:0:(file.c:174:ll_close_inode_openhandle()) lus13-clilmv-ffff8804145e5000: inode [0x200006759:0x9ecd:0x0] mdc close failed: rc = -108
Mar 12 09:16:33 lustre-utils01 kernel: [153487.479273] LustreError: 24002:0:(file.c:174:ll_close_inode_openhandle()) Skipped 5037 previous similar messages
Mar 12 09:16:34 lustre-utils01 kernel: [153488.487923] LustreError: 28028:0:(file.c:174:ll_close_inode_openhandle()) lus13-clilmv-ffff8804145e5000: inode [0x200006397:0xf7e7:0x0] mdc close failed: rc = -108
Mar 12 09:16:34 lustre-utils01 kernel: [153488.488135] LustreError: 28028:0:(file.c:174:ll_close_inode_openhandle()) Skipped 9000 previous similar messages
Mar 12 09:16:36 lustre-utils01 kernel: [153490.496356] LustreError: 28925:0:(file.c:174:ll_close_inode_openhandle()) lus13-clilmv-ffff8804145e5000: inode [0x2000074b3:0x19e1a:0x0] mdc close failed: rc = -108
Mar 12 09:16:36 lustre-utils01 kernel: [153490.496569] LustreError: 28925:0:(file.c:174:ll_close_inode_openhandle()) Skipped 18377 previous similar messages
Mar 12 09:16:40 lustre-utils01 kernel: [153494.504137] LustreError: 24884:0:(file.c:174:ll_close_inode_openhandle()) lus13-clilmv-ffff8804145e5000: inode [0x200000404:0x47b1:0x0] mdc close failed: rc = -108
Mar 12 09:16:40 lustre-utils01 kernel: [153494.504365] LustreError: 24884:0:(file.c:174:ll_close_inode_openhandle()) Skipped 36626 previous similar messages
Mar 12 09:16:48 lustre-utils01 kernel: [153502.506987] LustreError: 28928:0:(file.c:174:ll_close_inode_openhandle()) lus13-clilmv-ffff8804145e5000: inode [0x200000667:0x13997:0x0] mdc close failed: rc = -108
Mar 12 09:16:48 lustre-utils01 kernel: [153502.507204] LustreError: 28928:0:(file.c:174:ll_close_inode_openhandle()) Skipped 68840 previous similar messages
Mar 12 09:17:04 lustre-utils01 kernel: [153518.507317] LustreError: 3503:0:(file.c:174:ll_close_inode_openhandle()) lus13-clilmv-ffff8804145e5000: inode [0x200000712:0x8c0f:0x0] mdc close failed: rc = -108
Mar 12 09:17:04 lustre-utils01 kernel: [153518.507569] LustreError: 3503:0:(file.c:174:ll_close_inode_openhandle()) Skipped 169045 previous similar messages
Mar 12 09:17:32 lustre-utils01 kernel: [153546.597304] LustreError: 1033:0:(mdc_locks.c:904:mdc_enqueue()) lus13-MDT0000-mdc-ffff8804145e5000: ldlm_cli_enqueue failed: rc = -108
Mar 12 09:17:32 lustre-utils01 kernel: [153546.597482] LustreError: 1033:0:(mdc_locks.c:904:mdc_enqueue()) Skipped 173 previous similar messages
Mar 12 09:17:32 lustre-utils01 kernel: [153546.597618] LustreError: 1033:0:(file.c:3196:ll_inode_revalidate_fini()) lus13: revalidate FID [0x1900001:0xe5eff204:0x0] error: rc = -108
Mar 12 09:17:32 lustre-utils01 kernel: [153546.597792] LustreError: 1033:0:(file.c:3196:ll_inode_revalidate_fini()) Skipped 232 previous similar messages
Mar 12 09:17:33 lustre-utils01 kernel: [153546.859806] LustreError: 3527:0:(mdc_locks.c:904:mdc_enqueue()) lus13-MDT0000-mdc-ffff8804145e5000: ldlm_cli_enqueue failed: rc = -108
Mar 12 09:17:33 lustre-utils01 kernel: [153546.859985] LustreError: 3527:0:(mdc_locks.c:904:mdc_enqueue()) Skipped 20 previous similar messages
Mar 12 09:17:33 lustre-utils01 kernel: [153546.860122] LustreError: 3527:0:(file.c:3196:ll_inode_revalidate_fini()) lus13: revalidate FID [0x1900001:0xe5eff204:0x0] error: rc = -108
Mar 12 09:17:33 lustre-utils01 kernel: [153546.860284] LustreError: 3527:0:(file.c:3196:ll_inode_revalidate_fini()) Skipped 20 previous similar messages
Mar 12 09:17:33 lustre-utils01 kernel: [153547.378561] LustreError: 3573:0:(mdc_locks.c:904:mdc_enqueue()) lus13-MDT0000-mdc-ffff8804145e5000: ldlm_cli_enqueue failed: rc = -108
Mar 12 09:17:33 lustre-utils01 kernel: [153547.378740] LustreError: 3573:0:(mdc_locks.c:904:mdc_enqueue()) Skipped 45 previous similar messages
Mar 12 09:17:33 lustre-utils01 kernel: [153547.378966] LustreError: 3573:0:(file.c:3196:ll_inode_revalidate_fini()) lus13: revalidate FID [0x1900001:0xe5eff204:0x0] error: rc = -108
Mar 12 09:17:33 lustre-utils01 kernel: [153547.379129] LustreError: 3573:0:(file.c:3196:ll_inode_revalidate_fini()) Skipped 45 previous similar messages
Mar 12 09:17:34 lustre-utils01 kernel: [153548.399065] LustreError: 3677:0:(mdc_locks.c:904:mdc_enqueue()) lus13-MDT0000-mdc-ffff8804145e5000: ldlm_cli_enqueue failed: rc = -108
Mar 12 09:17:34 lustre-utils01 kernel: [153548.399243] LustreError: 3677:0:(mdc_locks.c:904:mdc_enqueue()) Skipped 103 previous similar messages
Mar 12 09:17:34 lustre-utils01 kernel: [153548.399380] LustreError: 3677:0:(file.c:3196:ll_inode_revalidate_fini()) lus13: revalidate FID [0x1900001:0xe5eff204:0x0] error: rc = -108
Mar 12 09:17:34 lustre-utils01 kernel: [153548.399541] LustreError: 3677:0:(file.c:3196:ll_inode_revalidate_fini()) Skipped 103 previous similar messages
Mar 12 09:17:36 lustre-utils01 kernel: [153550.497531] LustreError: 3503:0:(file.c:174:ll_close_inode_openhandle()) lus13-clilmv-ffff8804145e5000: inode [0x200006038:0x306:0x0] mdc close failed: rc = -108
Mar 12 09:17:36 lustre-utils01 kernel: [153550.497748] LustreError: 3503:0:(file.c:174:ll_close_inode_openhandle()) Skipped 246788 previous similar messages
Mar 12 09:17:37 lustre-utils01 kernel: [153550.950531] Lustre: lus13-MDT0000-mdc-ffff8804145e5000: Connection restored to lus13-MDT0000 (at 172.17.117.153@tcp)

This is an atop -l output before the eviction.

ATOP - lustre-utils01 2014/03/12 09:03:14 ------ 10s elapsed
PRC | sys 5.25s | user 3.48s | #proc 224 | #zombie 0 | #exit 444 |
CPU | sys 53% | user 42% | irq 5% | idle 0% | wait 0% |
CPL | avg1 11.58 | avg5 13.64 | avg15 9.28 | csw 158679 | intr 33749 |
MEM | tot 15.7G | free 647.2M | cache 680.5M | buff 14.8M | slab 13.3G |
SWP | tot 37.0G | free 37.0G | | vmcom 407.8M | vmlim 44.8G |
LVM | --dev64-root | busy 0% | read 0 | write 32 | avio 0.12 ms |
DSK | sda | busy 0% | read 0 | write 4 | avio 1.00 ms |
NET | transport | tcpi 31250 | tcpo 31103 | udpi 0 | udpo 0 |
NET | network | ipi 31250 | ipo 31103 | ipfrw 0 | deliv 31250 |
NET | site ---- | pcki 31255 | pcko 31103 | si 19 Mbps | so 18 Mbps |

PID SYSCPU USRCPU VGROW RGROW RDDSK WRDSK ST EXC S CPU CMD 1/42
27448 0.29s 2.57s 2108K 2072K 0K 108K -- - R 29% cf-agent
27183 1.76s 0.03s 0K 0K 0K 0K -- - S 18% lfs
1999 1.21s 0.00s 0K 0K 0K 0K -- - S 12% socknal_sd00_0
27439 0.76s 0.00s 0K 0K 0K 0K -- - S 8% ll_sa_27183
2003 0.39s 0.00s 0K 0K 0K 0K -- - S 4% ptlrpcd_0
2004 0.37s 0.00s 0K 0K 0K 0K -- - S 4% ptlrpcd_1
28583 0.03s 0.06s 0K 0K - - NE 0 E 1% <lspci>
28533 0.02s 0.06s 0K 0K - - NE 0 E 1% <lspci>
28633 0.03s 0.05s 0K 0K - - NE 0 E 1% <lspci>
28708 0.02s 0.06s 0K 0K - - NE 0 E 1% <lspci>
28758 0.03s 0.05s 0K 0K - - NE 0 E 1% <lspci>

And after:

ATOP - lustre-utils01 2014/03/12 09:28:14 ------ 10s elapsed
PRC | sys 0.07s | user 0.02s | #proc 214 | #zombie 0 | #exit 0 |
CPU | sys 0% | user 0% | irq 0% | idle 99% | wait 0% |
CPL | avg1 0.01 | avg5 5.18 | avg15 13.98 | csw 650 | intr 670 |
MEM | tot 15.7G | free 7.1G | cache 316.9M | buff 10.4M | slab 7.2G |
SWP | tot 37.0G | free 37.0G | | vmcom 388.7M | vmlim 44.8G |
LVM | --dev64-root | busy 0% | read 0 | write 3 | avio 0.00 ms |
DSK | sda | busy 0% | read 0 | write 2 | avio 0.00 ms |
NET | transport | tcpi 288 | tcpo 366 | udpi 1 | udpo 1 |
NET | network | ipi 289 | ipo 367 | ipfrw 0 | deliv 289 |
NET | site ---- | pcki 294 | pcko 367 | si 93 Kbps | so 106 Kbps |

PID SYSCPU USRCPU VGROW RGROW RDDSK WRDSK ST EXC S CPU CMD 1/1
4270 0.03s 0.02s 0K 0K 0K 0K -- - R 1% atop
492 0.01s 0.00s 0K 0K 0K 0K -- - S 0% rsyslogd
1987 0.01s 0.00s 0K 0K 0K 0K -- - S 0% ntpd
1817 0.01s 0.00s 0K 0K 0K 0K -- - S 0% nrpe
1999 0.01s 0.00s 0K 0K 0K 0K -- - S 0% socknal_sd00_0
1839 0.00s 0.00s 0K 0K 0K 0K -- - S 0% snmpd
260 0.00s 0.00s 0K 0K 0K 4K -- - S 0% jbd2/dm-0-8

The script that we run to provoke the error is:

#!/bin/bash

# run from /etc/cron.d/lustre_scratch107 on isg-disc-mon-03
if [ ! -d /lustre/scratch113/._DO_NOT_DELETE_THIS_FILE ] ; then
    echo "Scratch113 not mounted"
    exit 1
fi

project_path=/lustre/scratch113/projects
team_path=/lustre/scratch113/teams

teams=( anderson barrett barroso carter deloukas durbin hgi hurles mcginnis palotie sandhu soranzo tyler-smith zeggini )
groups=( team152 team143 team35 team70 team147 team118 hgi team29 team111 team128 team149 team151 team19 team144 )

projects=`lfs find --maxdepth 0 ${project_path}/* --print0 | xargs -L 1 -0 basename`

test ${#groups[*]} -eq ${#teams[*]} || ( echo "count of groups and teams do not match" && exit 1 )

for project in ${projects}; do lfs find ${project_path}/${project} --print0 ! --group ${project} | xargs -0 -r chgrp -h ${project} ; done
for i in `seq 0 $((${#groups[*]}-1))`; do lfs find ${team_path}/${teams[${i}]}/* --print0 ! --group ${groups[${i}]} | xargs -0 -r chgrp -h ${groups[${i}]}; done

lfs find ${project_path} ${team_path} -type d -print0 | xargs -0 -r stat --printf="%n\0%a\n" | gawk 'BEGIN {FS="\0"; ORS="\0";} !and(rshift(strtonum("0"$2),10),1) {print $1}' | xargs -0 -r chmod g+s

(cd /lustre/scratch113/sinbin/ ; for i in *; do j=`echo $i | sed -e 's/.gz$//'`; chown $j $i ; chgrp -h hgi $i; chmod 750 $i ; done )

# RT 304761 fix ownership on actual directory
chmod 755 /lustre/scratch113/sinbin/
chgrp -h hgi /lustre/scratch113/sinbin/
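For reference, the gawk filter in the script tests bit 10 of the octal mode from "stat %a", i.e. the setgid bit (02000), and prints only directories where it is clear. A minimal standalone sketch of the same arithmetic (has_setgid is an illustrative name, not part of the cron job):

```shell
#!/bin/sh
# Mirror the gawk test !and(rshift(strtonum("0"$2),10),1): the mode from
# "stat %a" is octal, and bit 10 is the setgid bit (02000).
has_setgid() {
    mode=$((0$1))            # leading 0 forces octal, like strtonum("0"$2)
    echo $(( (mode >> 10) & 1 ))
}
has_setgid 2755   # setgid set    -> prints 1
has_setgid 755    # setgid clear  -> prints 0
```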

And the display as it dies is:

+ for project in '${projects}'
+ xargs -0 -r chgrp -h cichlid
+ lfs find /lustre/scratch113/projects/cichlid --print0 '!' --group cichlid
error: find failed for cichlid.
+ for project in '${projects}'
+ xargs -0 -r chgrp -h cloten
+ lfs find /lustre/scratch113/projects/cloten --print0 '!' --group cloten
error opening /lustre/scratch113/projects/cloten: Cannot send after transport endpoint shutdown (108)
llapi_semantic_traverse: Failed to open '/lustre/scratch113/projects/cloten': Cannot send after transport endpoint shutdown (108)
error: find failed for cloten.

I note that "echo 3 > /proc/sys/vm/drop_caches" has the following effect:

MEM | tot 15.7G | free 7.1G | cache 317.3M | buff 10.8M | slab 7.2G |
echo 3 > /proc/sys/vm/drop_caches
MEM | tot 15.7G | free 14.5G | cache 65.2M | buff 3.9M | slab 128.3M |
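A quick way to quantify that effect without atop is to read the slab counters from /proc/meminfo before and after the drop (a sketch; slab_report is an illustrative helper name, and the drop_caches write itself needs root):

```shell
#!/bin/sh
# Report total, reclaimable, and unreclaimable slab from /proc/meminfo,
# for comparison before and after "echo 3 > /proc/sys/vm/drop_caches"
# (run "sync" first so dirty data is flushed before dropping caches).
slab_report() {
    awk '/^(Slab|SReclaimable|SUnreclaim):/ { printf "%-14s %8d kB\n", $1, $2 }' /proc/meminfo
}
slab_report
```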



 Comments   
Comment by Andreas Dilger [ 12/Mar/14 ]

It looks like this is the same issue as LU-4053. There are a number of patches in progress under that bug, and on the bugs linked to it.

Are you able to test with the current master branch on the client and MDS to see if this problem has been fixed? Also, it would be useful to get the "slabtop" output to see which slabs are consuming the most memory.
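One rough way to capture that non-interactively is to rank caches by total size straight from /proc/slabinfo, which is essentially what slabtop computes (a sketch; top_slabs is an illustrative helper, the field layout assumes slabinfo version 2.1, and reading /proc/slabinfo typically requires root):

```shell
#!/bin/sh
# Rank slab caches by total size (num_objs * objsize, in kB), roughly
# matching slabtop's CACHE SIZE column.
top_slabs() {
    # $1: slabinfo file (defaults to /proc/slabinfo); skip the two header lines
    awk 'NR > 2 { printf "%10d kB  %s\n", $3 * $4 / 1024, $1 }' "${1:-/proc/slabinfo}" |
        sort -rn | head -10
}
if [ -r /proc/slabinfo ]; then top_slabs; fi
```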

Comment by James Beal [ 14/Mar/14 ]

I will put effort into testing the latest master on a test system to see if the problem is fixed; however, this will take some time. I hope to have it completed by Wednesday or Thursday of next week.

Comment by James Beal [ 15/Mar/14 ]

I have a master-branch client running against a 2.2 server, and I think the problem is less severe than it was; however, the client was still evicted by the server because of memory pressure. I will try to get a master server setup available next week.

This is a selection of slabtop outputs over time; they stop around midnight, and the eviction happened at 5am this morning.

Active / Total Objects (% used) : 22898565 / 22901648 (100.0%)
Active / Total Slabs (% used) : 1355602 / 1355602 (100.0%)
Active / Total Caches (% used) : 86 / 144 (59.7%)
Active / Total Size (% used) : 9605680.88K / 9607574.83K (100.0%)
Minimum / Average / Maximum Object : 0.01K / 0.42K / 8.00K

OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME
3224064 3224064 100% 0.06K 50376 64 201504K kmalloc-64
2324504 2324504 100% 0.50K 290563 8 1162252K ldlm_locks
1568096 1568096 100% 0.12K 49003 32 196012K kmalloc-128
1554888 1554888 100% 0.31K 129574 12 518296K ldlm_resources
1544774 1544774 100% 0.18K 70217 22 280868K vm_area_struct
1538640 1538640 100% 0.21K 85480 18 341920K cl_lock_kmem
1517184 1517184 100% 1.00K 189648 8 1517184K kmalloc-1024
852432 852432 100% 0.19K 40592 21 162368K dentry
796467 796467 100% 0.19K 37927 21 151708K kmalloc-192
782820 782820 100% 0.20K 39141 20 156564K lovsub_object_kmem

Active / Total Objects (% used) : 26842576 / 26844943 (100.0%)
Active / Total Slabs (% used) : 1591226 / 1591226 (100.0%)
Active / Total Caches (% used) : 86 / 144 (59.7%)
Active / Total Size (% used) : 11217926.46K / 11219307.17K (100.0%)
Minimum / Average / Maximum Object : 0.01K / 0.42K / 8.00K

OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME
3814528 3814528 100% 0.06K 59602 64 238408K kmalloc-64
2727792 2727792 100% 0.50K 340974 8 1363896K ldlm_locks
1878528 1878528 100% 0.12K 58704 32 234816K kmalloc-128
1850860 1850860 100% 0.18K 84130 22 336520K vm_area_struct
1844712 1844712 100% 0.21K 102484 18 409936K cl_lock_kmem
1836384 1836384 100% 0.31K 153032 12 612128K ldlm_resources
1755792 1755792 100% 1.00K 219474 8 1755792K kmalloc-1024
974715 974715 100% 0.19K 46415 21 185660K dentry
937980 937980 100% 0.20K 46899 20 187596K lovsub_object_kmem
937976 937976 100% 0.30K 72152 13 288608K osc_object_kmem

Active / Total Objects (% used) : 26521167 / 27132364 (97.7%)
Active / Total Slabs (% used) : 1611072 / 1611072 (100.0%)
Active / Total Caches (% used) : 86 / 144 (59.7%)
Active / Total Size (% used) : 11297748.74K / 11430286.98K (98.8%)
Minimum / Average / Maximum Object : 0.01K / 0.42K / 8.00K

OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME
3712704 3494655 94% 0.06K 58011 64 232044K kmalloc-64
2606680 2471004 94% 0.50K 325835 8 1303340K ldlm_locks
1856288 1773437 95% 0.12K 58009 32 232036K kmalloc-128
1804776 1749724 96% 0.31K 150398 12 601592K ldlm_resources
1792428 1739736 97% 0.18K 81474 22 325896K vm_area_struct
1784808 1733567 97% 0.21K 99156 18 396624K cl_lock_kmem
1687128 1686634 99% 1.00K 210891 8 1687128K kmalloc-1024
1081052 1080475 99% 0.55K 77218 14 617744K radix_tree_node
983801 983801 100% 1.19K 75677 13 1210832K lustre_inode_cache

Active / Total Objects (% used) : 40846299 / 41527575 (98.4%)
Active / Total Slabs (% used) : 2204382 / 2204382 (100.0%)
Active / Total Caches (% used) : 86 / 144 (59.7%)
Active / Total Size (% used) : 12058511.40K / 12446860.35K (96.9%)
Minimum / Average / Maximum Object : 0.01K / 0.30K / 8.00K

OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME
6956800 6859865 98% 0.06K 108700 64 434800K kmalloc-64
4736928 4706877 99% 0.12K 148029 32 592116K kmalloc-128
4333032 4304267 99% 0.18K 196956 22 787824K vm_area_struct
4320126 4298127 99% 0.21K 240007 18 960028K cl_lock_kmem
3003780 3003780 100% 0.20K 150189 20 600756K lovsub_object_kmem
3003767 3003767 100% 0.30K 231059 13 924236K osc_object_kmem
2962024 2938588 99% 0.50K 370253 8 1481012K ldlm_locks
2276664 2261996 99% 0.31K 189722 12 758888K ldlm_resources
1148316 1077936 93% 1.00K 143540 8 1148320K kmalloc-1024
1073917 1045776 97% 1.19K 82609 13 1321744K lustre_inode_cache
1052672 943241 89% 0.01K 2056 512 8224K kmalloc-8
1031100 991959 96% 0.09K 24550 42 98200K kmalloc-96
991512 981232 98% 0.22K 55084 18 220336K ccc_object_kmem
990981 980745 98% 0.23K 58293 17 233172K posix_timers_cache
879774 869997 98% 0.19K 41894 21 167576K kmalloc-192
621568 553136 88% 0.03K 4856 128 19424K kmalloc-32
598112 548540 91% 4.00K 74764 8 2392448K kmalloc-4096
574497 569315 99% 0.19K 27357 21 109428K dentry
496002 482323 97% 0.10K 12718 39 50872K buffer_head
154904 114338 73% 0.50K 19363 8 77452K kmalloc-512
150374 148488 98% 0.55K 10741 14 85928K radix_tree_node
40137 38435 95% 0.89K 2361 17 37776K ext4_inode_cache
31280 25454 81% 2.00K 3910 8 62560K kmalloc-2048
22536 22536 100% 0.11K 626 36 2504K sysfs_dir_cache
12070 11277 93% 0.05K 142 85 568K shared_policy_node
11508 11508 100% 0.55K 822 14 6576K inode_cache
9044 9018 99% 8.00K 2261 4 72352K kmalloc-8192
5632 5632 100% 0.02K 22 256 88K kmalloc-16
4608 4608 100% 0.02K 18 256 72K ext4_io_page
4200 4200 100% 0.07K 75 56 300K Acpi-ParseExt
4182 4182 100% 0.04K 41 102 164K Acpi-Namespace
3264 3264 100% 0.06K 51 64 204K anon_vma
3200 3200 100% 0.03K 25 128 100K extent_status
2752 2504 90% 0.25K 172 16 688K kmalloc-256
2483 2456 98% 0.61K 191 13 1528K proc_inode_cache
1984 1984 100% 0.06K 31 64 124K ext4_free_data
864 864 100% 0.63K 72 12 576K shmem_inode_cache
808 808 100% 1.00K 101 8 808K nfs_inode_cache
756 720 95% 0.11K 21 36 84K journal_head
624 624 100% 0.10K 16 39 64K blkdev_ioc
576 576 100% 0.16K 24 24 96K cl_env_kmem
527 527 100% 0.45K 31 17 248K vvp_thread_kmem
520 520 100% 0.38K 52 10 208K bip-16

Active / Total Objects (% used) : 50884965 / 52222588 (97.4%)
Active / Total Slabs (% used) : 2691588 / 2691588 (100.0%)
Active / Total Caches (% used) : 86 / 144 (59.7%)
Active / Total Size (% used) : 12689583.46K / 13205036.09K (96.1%)
Minimum / Average / Maximum Object : 0.01K / 0.25K / 8.00K

OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME
9305216 9237069 99% 0.06K 145394 64 581576K kmalloc-64
7381632 7381632 100% 0.12K 230676 32 922704K kmalloc-128
6077280 6077280 100% 0.18K 276240 22 1104960K vm_area_struct
6070878 6070878 100% 0.21K 337271 18 1349084K cl_lock_kmem
4640580 4640580 100% 0.20K 232029 20 928116K lovsub_object_kmem
4640571 4640571 100% 0.30K 356967 13 1427868K osc_object_kmem
3448296 3286986 95% 0.50K 431037 8 1724148K ldlm_locks
3127980 3030264 96% 0.31K 260665 12 1042660K ldlm_resources
906752 645940 71% 0.01K 1771 512 7084K kmalloc-8
802032 694466 86% 0.09K 19096 42 76384K kmalloc-96
792584 764218 96% 1.19K 60968 13 975488K lustre_inode_cache
733446 730776 99% 0.22K 40747 18 162988K ccc_object_kmem
732870 730260 99% 0.23K 43110 17 172440K posix_timers_cache
664684 545323 82% 1.00K 83086 8 664688K kmalloc-1024
594720 378223 63% 0.19K 28320 21 113280K kmalloc-192
459776 296816 64% 0.03K 3592 128 14368K kmalloc-32
435903 397622 91% 0.10K 11177 39 44708K buffer_head
387176 345630 89% 4.00K 48397 8 1548704K kmalloc-4096
381948 381948 100% 0.19K 18188 21 72752K dentry
263776 263324 99% 0.50K 32972 8 131888K kmalloc-512
135296 111865 82% 0.55K 9664 14 77312K radix_tree_node
99872 99872 100% 2.00K 12484 8 199744K kmalloc-2048
39491 34932 88% 0.89K 2323 17 37168K ext4_inode_cache
22536 22536 100% 0.11K 626 36 2504K sysfs_dir_cache
12070 11277 93% 0.05K 142 85 568K shared_policy_node
11508 11508 100% 0.55K 822 14 6576K inode_cache
9044 9018 99% 8.00K 2261 4 72352K kmalloc-8192
5632 5632 100% 0.02K 22 256 88K kmalloc-16
4608 4608 100% 0.02K 18 256 72K ext4_io_page
4200 4200 100% 0.07K 75 56 300K Acpi-ParseExt
4182 4182 100% 0.04K 41 102 164K Acpi-Namespace
3264 3264 100% 0.06K 51 64 204K anon_vma
3200 3200 100% 0.03K 25 128 100K extent_status
2768 2625 94% 0.25K 173 16 692K kmalloc-256
2496 2496 100% 0.61K 192 13 1536K proc_inode_cache
1984 1984 100% 0.06K 31 64 124K ext4_free_data
864 864 100% 0.63K 72 12 576K shmem_inode_cache
808 808 100% 1.00K 101 8 808K nfs_inode_cache
756 720 95% 0.11K 21 36 84K journal_head
663 663 100% 0.10K 17 39 68K blkdev_ioc
600 600 100% 0.16K 25 24 100K cl_env_kmem
595 595 100% 0.45K 35 17 280K vvp_thread_kmem
594 534 89% 0.35K 54 11 216K ccc_thread_kmem

Active / Total Objects (% used) : 51890949 / 54021919 (96.1%)
Active / Total Slabs (% used) : 2770129 / 2770129 (100.0%)
Active / Total Caches (% used) : 86 / 144 (59.7%)
Active / Total Size (% used) : 12550476.12K / 13248357.77K (94.7%)
Minimum / Average / Maximum Object : 0.01K / 0.25K / 8.00K

OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME
9526208 9309363 97% 0.06K 148847 64 595388K kmalloc-64
7958624 7889903 99% 0.12K 248707 32 994828K kmalloc-128
6397490 6313277 98% 0.18K 290795 22 1163180K vm_area_struct
6389262 6307025 98% 0.21K 354959 18 1419836K cl_lock_kmem
5259020 5259020 100% 0.20K 262951 20 1051804K lovsub_object_kmem
5259020 5259020 100% 0.30K 404540 13 1618160K osc_object_kmem
3401320 3042848 89% 0.50K 425165 8 1700660K ldlm_locks
3009912 2867227 95% 0.31K 250826 12 1003304K ldlm_resources
858624 533651 62% 0.01K 1677 512 6708K kmalloc-8
699006 549886 78% 0.09K 16643 42 66572K kmalloc-96
693069 668194 96% 1.19K 53313 13 853008K lustre_inode_cache
640638 637521 99% 0.22K 35591 18 142364K ccc_object_kmem
639387 637015 99% 0.23K 37611 17 150444K posix_timers_cache
572348 434761 75% 1.00K 71544 8 572352K kmalloc-1024
539658 307478 56% 0.19K 25698 21 102792K kmalloc-192
442752 248124 56% 0.03K 3459 128 13836K kmalloc-32
397917 363898 91% 0.10K 10203 39 40812K buffer_head
345504 304895 88% 4.00K 43188 8 1382016K kmalloc-4096
313696 310694 99% 0.50K 39212 8 156848K kmalloc-512
312291 308709 98% 0.19K 14871 21 59484K dentry
118128 118128 100% 2.00K 14766 8 236256K kmalloc-2048
113498 91936 81% 0.55K 8107 14 64856K radix_tree_node
34782 29996 86% 0.89K 2046 17 32736K ext4_inode_cache
22536 22536 100% 0.11K 626 36 2504K sysfs_dir_cache
12070 11277 93% 0.05K 142 85 568K shared_policy_node
11508 11508 100% 0.55K 822 14 6576K inode_cache
9044 9018 99% 8.00K 2261 4 72352K kmalloc-8192
5632 5632 100% 0.02K 22 256 88K kmalloc-16
4608 4608 100% 0.02K 18 256 72K ext4_io_page
4200 4200 100% 0.07K 75 56 300K Acpi-ParseExt
4182 4182 100% 0.04K 41 102 164K Acpi-Namespace
3264 3264 100% 0.06K 51 64 204K anon_vma
3200 3200 100% 0.03K 25 128 100K extent_status
2704 2535 93% 0.25K 169 16 676K kmalloc-256
2496 2482 99% 0.61K 192 13 1536K proc_inode_cache
1984 1984 100% 0.06K 31 64 124K ext4_free_data
864 864 100% 0.63K 72 12 576K shmem_inode_cache
808 808 100% 1.00K 101 8 808K nfs_inode_cache
756 720 95% 0.11K 21 36 84K journal_head
663 663 100% 0.10K 17 39 68K blkdev_ioc
600 600 100% 0.16K 25 24 100K cl_env_kmem
595 595 100% 0.45K 35 17 280K vvp_thread_kmem
594 534 89% 0.35K 54 11 216K ccc_thread_kmem

Active / Total Objects (% used) : 56083059 / 57762936 (97.1%)
Active / Total Slabs (% used) : 2918205 / 2918205 (100.0%)
Active / Total Caches (% used) : 86 / 144 (59.7%)
Active / Total Size (% used) : 12561871.41K / 13356838.08K (94.0%)
Minimum / Average / Maximum Object : 0.01K / 0.23K / 8.00K

OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME
10089600 10012388 99% 0.06K 157650 64 630600K kmalloc-64
9196576 9153937 99% 0.12K 287393 32 1149572K kmalloc-128
7127868 7083890 99% 0.18K 323994 22 1295976K vm_area_struct
7121340 7077428 99% 0.21K 395630 18 1582520K cl_lock_kmem
6515704 6515704 100% 0.30K 501208 13 2004832K osc_object_kmem
6515700 6515700 100% 0.20K 325785 20 1303140K lovsub_object_kmem
3032256 2904468 95% 0.50K 379032 8 1516128K ldlm_locks
2895816 2815928 97% 0.31K 241318 12 965272K ldlm_resources
669696 294734 44% 0.01K 1308 512 5232K kmalloc-8
475761 456502 95% 1.19K 36597 13 585552K lustre_inode_cache
474692 282660 59% 1.00K 59337 8 474696K kmalloc-1024
469602 294410 62% 0.09K 11181 42 44724K kmalloc-96
441432 439559 99% 0.22K 24524 18 98096K ccc_object_kmem
440606 438840 99% 0.23K 25918 17 103672K posix_timers_cache
399546 263033 65% 0.19K 19026 21 76104K kmalloc-192
376704 189987 50% 0.03K 2943 128 11772K kmalloc-32
352911 320981 90% 0.10K 9049 39 36196K buffer_head
350840 332443 94% 0.50K 43855 8 175420K kmalloc-512
289680 194750 67% 4.00K 36210 8 1158720K kmalloc-4096
216678 216213 99% 0.19K 10318 21 41272K dentry
106120 105046 98% 2.00K 13265 8 212240K kmalloc-2048
73514 52834 71% 0.55K 5251 14 42008K radix_tree_node
29019 22179 76% 0.89K 1707 17 27312K ext4_inode_cache
22572 22572 100% 0.11K 627 36 2508K sysfs_dir_cache
11900 10400 87% 0.05K 140 85 560K shared_policy_node
11508 11508 100% 0.55K 822 14 6576K inode_cache
10100 10100 100% 8.00K 2525 4 80800K kmalloc-8192
5632 5632 100% 0.02K 22 256 88K kmalloc-16
4608 4608 100% 0.02K 18 256 72K ext4_io_page
4200 4200 100% 0.07K 75 56 300K Acpi-ParseExt
4182 4182 100% 0.04K 41 102 164K Acpi-Namespace
3456 3456 100% 0.06K 54 64 216K anon_vma
3200 3200 100% 0.03K 25 128 100K extent_status
2784 2618 94% 0.25K 174 16 696K kmalloc-256
2509 2479 98% 0.61K 193 13 1544K proc_inode_cache
1984 1984 100% 0.06K 31 64 124K ext4_free_data
864 864 100% 0.63K 72 12 576K shmem_inode_cache
824 824 100% 1.00K 103 8 824K nfs_inode_cache
756 720 95% 0.11K 21 36 84K journal_head
702 702 100% 0.10K 18 39 72K blkdev_ioc
629 629 100% 0.45K 37 17 296K vvp_thread_kmem
624 624 100% 0.16K 26 24 104K cl_env_kmem
624 592 94% 2.48K 52 12 1664K osc_thread_kmem

Active / Total Objects (% used) : 59444818 / 60656059 (98.0%)
Active / Total Slabs (% used) : 3055836 / 3055836 (100.0%)
Active / Total Caches (% used) : 86 / 144 (59.7%)
Active / Total Size (% used) : 12840918.79K / 13460827.62K (95.4%)
Minimum / Average / Maximum Object : 0.01K / 0.22K / 8.00K

OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME
10928576 10908110 99% 0.06K 170759 64 683036K kmalloc-64
9973440 9949875 99% 0.12K 311670 32 1246680K kmalloc-128
7705060 7689714 99% 0.18K 350230 22 1400920K vm_area_struct
7698348 7683359 99% 0.21K 427686 18 1710744K cl_lock_kmem
6927580 6927580 100% 0.20K 346379 20 1385516K lovsub_object_kmem
6927570 6927570 100% 0.30K 532890 13 2131560K osc_object_kmem
3241856 3234526 99% 0.50K 405232 8 1620928K ldlm_locks
3150036 3144155 99% 0.31K 262503 12 1050012K ldlm_resources
513536 161322 31% 0.01K 1003 512 4012K kmalloc-8
377072 356692 94% 0.50K 47134 8 188536K kmalloc-512
351388 156344 44% 1.00K 43924 8 351392K kmalloc-1024
343317 333202 97% 1.19K 26409 13 422544K lustre_inode_cache
335307 187089 55% 0.19K 15967 21 63868K kmalloc-192
327474 164763 50% 0.09K 7797 42 31188K kmalloc-96
324468 319310 98% 0.22K 18026 18 72104K ccc_object_kmem
321878 318717 99% 0.23K 18934 17 75736K posix_timers_cache
299169 282635 94% 0.10K 7671 39 30684K buffer_head
243584 133850 54% 0.03K 1903 128 7612K kmalloc-32
217064 136455 62% 4.00K 27133 8 868256K kmalloc-4096
168315 168046 99% 0.19K 8015 21 32060K dentry
117368 117368 100% 2.00K 14671 8 234736K kmalloc-2048
41076 29216 71% 0.55K 2934 14 23472K radix_tree_node
22572 22572 100% 0.11K 627 36 2508K sysfs_dir_cache
21267 15954 75% 0.89K 1251 17 20016K ext4_inode_cache
11815 9954 84% 0.05K 139 85 556K shared_policy_node
11536 11536 100% 0.55K 824 14 6592K inode_cache
10100 10100 100% 8.00K 2525 4 80800K kmalloc-8192
5632 5632 100% 0.02K 22 256 88K kmalloc-16
4608 4608 100% 0.02K 18 256 72K ext4_io_page
4200 4200 100% 0.07K 75 56 300K Acpi-ParseExt
4182 4182 100% 0.04K 41 102 164K Acpi-Namespace
3456 3456 100% 0.06K 54 64 216K anon_vma
3200 3200 100% 0.03K 25 128 100K extent_status
2784 2596 93% 0.25K 174 16 696K kmalloc-256
2561 2487 97% 0.61K 197 13 1576K proc_inode_cache
1984 1984 100% 0.06K 31 64 124K ext4_free_data
864 864 100% 0.63K 72 12 576K shmem_inode_cache
824 824 100% 1.00K 103 8 824K nfs_inode_cache
756 720 95% 0.11K 21 36 84K journal_head
702 702 100% 0.10K 18 39 72K blkdev_ioc
629 629 100% 0.45K 37 17 296K vvp_thread_kmem
624 624 100% 0.16K 26 24 104K cl_env_kmem
624 587 94% 2.48K 52 12 1664K osc_thread_kmem

Active / Total Objects (% used) : 61353266 / 63140934 (97.2%)
Active / Total Slabs (% used) : 3187106 / 3187106 (100.0%)
Active / Total Caches (% used) : 86 / 144 (59.7%)
Active / Total Size (% used) : 13112865.12K / 13732388.54K (95.5%)
Minimum / Average / Maximum Object : 0.01K / 0.22K / 8.00K

OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME
11841024 11571775 97% 0.06K 185016 64 740064K kmalloc-64
10501024 10324504 98% 0.12K 328157 32 1312628K kmalloc-128
8131156 7986184 98% 0.18K 369598 22 1478392K vm_area_struct
8118000 7979773 98% 0.21K 451000 18 1804000K cl_lock_kmem
6939100 6936332 99% 0.20K 346955 20 1387820K lovsub_object_kmem
6936319 6936319 100% 0.30K 533563 13 2134252K osc_object_kmem
3672272 3589757 97% 0.50K 459034 8 1836136K ldlm_locks
3571836 3497306 97% 0.31K 297653 12 1190612K ldlm_resources
416768 105379 25% 0.01K 814 512 3256K kmalloc-8
372608 357711 96% 0.50K 46576 8 186304K kmalloc-512
302628 130488 43% 1.00K 37829 8 302632K kmalloc-1024
296374 265104 89% 1.19K 22798 13 364768K lustre_inode_cache
265713 172109 64% 0.19K 12653 21 50612K kmalloc-192
265518 263746 99% 0.22K 14751 18 59004K ccc_object_kmem
264877 263423 99% 0.23K 15581 17 62324K posix_timers_cache
246414 114477 46% 0.09K 5867 42 23468K kmalloc-96
228735 210882 92% 0.10K 5865 39 23460K buffer_head
181760 121161 66% 0.03K 1420 128 5680K kmalloc-32
169760 120055 70% 4.00K 21220 8 679040K kmalloc-4096
154266 153263 99% 0.19K 7346 21 29384K dentry
114952 113950 99% 2.00K 14369 8 229904K kmalloc-2048
33082 29829 90% 0.55K 2363 14 18904K radix_tree_node
22572 22572 100% 0.11K 627 36 2508K sysfs_dir_cache
16643 12592 75% 0.89K 979 17 15664K ext4_inode_cache
11536 11536 100% 0.55K 824 14 6592K inode_cache
10625 8029 75% 0.05K 125 85 500K shared_policy_node
10100 10100 100% 8.00K 2525 4 80800K kmalloc-8192
5632 5632 100% 0.02K 22 256 88K kmalloc-16
4608 4608 100% 0.02K 18 256 72K ext4_io_page
4200 4200 100% 0.07K 75 56 300K Acpi-ParseExt
4182 4182 100% 0.04K 41 102 164K Acpi-Namespace
3456 3456 100% 0.06K 54 64 216K anon_vma
3200 3200 100% 0.03K 25 128 100K extent_status
2800 2679 95% 0.25K 175 16 700K kmalloc-256
2535 2509 98% 0.61K 195 13 1560K proc_inode_cache
1984 1984 100% 0.06K 31 64 124K ext4_free_data
864 864 100% 0.63K 72 12 576K shmem_inode_cache
824 824 100% 1.00K 103 8 824K nfs_inode_cache
756 720 95% 0.11K 21 36 84K journal_head
702 702 100% 0.10K 18 39 72K blkdev_ioc
629 629 100% 0.45K 37 17 296K vvp_thread_kmem
624 624 100% 0.16K 26 24 104K cl_env_kmem
624 583 93% 2.48K 52 12 1664K osc_thread_kmem

Active / Total Objects (% used) : 62668295 / 63614558 (98.5%)
Active / Total Slabs (% used) : 3216258 / 3216258 (100.0%)
Active / Total Caches (% used) : 86 / 144 (59.7%)
Active / Total Size (% used) : 13360759.30K / 13824015.77K (96.6%)
Minimum / Average / Maximum Object : 0.01K / 0.22K / 8.00K

OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME
11851840 11812392 99% 0.06K 185185 64 740740K kmalloc-64
10598752 10577501 99% 0.12K 331211 32 1324844K kmalloc-128
8170866 8150475 99% 0.18K 371403 22 1485612K vm_area_struct
8163720 8143662 99% 0.21K 453540 18 1814160K cl_lock_kmem
7123700 7121509 99% 0.20K 356185 20 1424740K lovsub_object_kmem
7121569 7121489 99% 0.30K 547813 13 2191252K osc_object_kmem
3668888 3651775 99% 0.50K 458611 8 1834444K ldlm_locks
3583632 3566598 99% 0.31K 298636 12 1194544K ldlm_resources
372272 362950 97% 0.50K 46534 8 186136K kmalloc-512
365568 103842 28% 0.01K 714 512 2856K kmalloc-8
297164 127455 42% 1.00K 37146 8 297168K kmalloc-1024
295750 268318 90% 1.19K 22750 13 364000K lustre_inode_cache
267030 267030 100% 0.22K 14835 18 59340K ccc_object_kmem
266458 266458 100% 0.23K 15674 17 62696K posix_timers_cache
254163 160921 63% 0.19K 12103 21 48412K kmalloc-192
233016 112322 48% 0.09K 5548 42 22192K kmalloc-96
220272 204001 92% 0.10K 5648 39 22592K buffer_head
176512 121715 68% 0.03K 1379 128 5516K kmalloc-32
164640 117104 71% 4.00K 20580 8 658560K kmalloc-4096
154581 154216 99% 0.19K 7361 21 29444K dentry
115264 115097 99% 2.00K 14408 8 230528K kmalloc-2048
32886 32346 98% 0.55K 2349 14 18792K radix_tree_node
22572 22572 100% 0.11K 627 36 2508K sysfs_dir_cache
15810 11801 74% 0.89K 930 17 14880K ext4_inode_cache
11536 11536 100% 0.55K 824 14 6592K inode_cache
10455 7887 75% 0.05K 123 85 492K shared_policy_node
10100 10100 100% 8.00K 2525 4 80800K kmalloc-8192
5632 5632 100% 0.02K 22 256 88K kmalloc-16
4608 4608 100% 0.02K 18 256 72K ext4_io_page
4200 4200 100% 0.07K 75 56 300K Acpi-ParseExt
4182 4182 100% 0.04K 41 102 164K Acpi-Namespace
3456 3456 100% 0.06K 54 64 216K anon_vma
3200 3200 100% 0.03K 25 128 100K extent_status
2992 2885 96% 0.25K 187 16 748K kmalloc-256
2613 2613 100% 0.61K 201 13 1608K proc_inode_cache
1984 1984 100% 0.06K 31 64 124K ext4_free_data
864 864 100% 0.63K 72 12 576K shmem_inode_cache
824 824 100% 1.00K 103 8 824K nfs_inode_cache
756 720 95% 0.11K 21 36 84K journal_head
702 702 100% 0.10K 18 39 72K blkdev_ioc
629 629 100% 0.45K 37 17 296K vvp_thread_kmem
624 624 100% 0.16K 26 24 104K cl_env_kmem
624 584 93% 2.48K 52 12 1664K osc_thread_kmem

Active / Total Objects (% used) : 64609815 / 66092315 (97.8%)
Active / Total Slabs (% used) : 3344079 / 3344079 (100.0%)
Active / Total Caches (% used) : 86 / 144 (59.7%)
Active / Total Size (% used) : 13781601.98K / 14303296.28K (96.4%)
Minimum / Average / Maximum Object : 0.01K / 0.22K / 8.00K

OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME
12343168 12119132 98% 0.06K 192862 64 771448K kmalloc-64
11131936 10984646 98% 0.12K 347873 32 1391492K kmalloc-128
8497324 8377189 98% 0.18K 386242 22 1544968K vm_area_struct
8487936 8371662 98% 0.21K 471552 18 1886208K cl_lock_kmem
7470160 7454679 99% 0.20K 373508 20 1494032K lovsub_object_kmem
7467850 7454531 99% 0.30K 574450 13 2297800K osc_object_kmem
3807376 3723671 97% 0.50K 475922 8 1903688K ldlm_locks
3728436 3644444 97% 0.31K 310703 12 1242812K ldlm_resources
383832 380292 99% 0.50K 47979 8 191916K kmalloc-512
317440 102690 32% 0.01K 620 512 2480K kmalloc-8
294086 275496 93% 1.19K 22622 13 361952K lustre_inode_cache
289044 130822 45% 1.00K 36131 8 289048K kmalloc-1024
276048 274301 99% 0.22K 15336 18 61344K ccc_object_kmem
275468 273737 99% 0.23K 16204 17 64816K posix_timers_cache
236229 172276 72% 0.19K 11249 21 44996K kmalloc-192
214830 112807 52% 0.09K 5115 42 20460K kmalloc-96
168448 125066 74% 0.03K 1316 128 5264K kmalloc-32
158624 121083 76% 4.00K 19828 8 634496K kmalloc-4096
155148 151849 97% 0.19K 7388 21 29552K dentry
119792 118351 98% 2.00K 14974 8 239584K kmalloc-2048
117897 98553 83% 0.10K 3023 39 12092K buffer_head
36372 34830 95% 0.55K 2598 14 20784K radix_tree_node
22572 22572 100% 0.11K 627 36 2508K sysfs_dir_cache
15096 10947 72% 0.89K 888 17 14208K ext4_inode_cache
11536 11536 100% 0.55K 824 14 6592K inode_cache
10285 7715 75% 0.05K 121 85 484K shared_policy_node
10100 10100 100% 8.00K 2525 4 80800K kmalloc-8192
5632 5632 100% 0.02K 22 256 88K kmalloc-16
4608 4608 100% 0.02K 18 256 72K ext4_io_page
4200 4200 100% 0.07K 75 56 300K Acpi-ParseExt
4182 4182 100% 0.04K 41 102 164K Acpi-Namespace
3456 3456 100% 0.06K 54 64 216K anon_vma
3200 3200 100% 0.03K 25 128 100K extent_status
2784 2652 95% 0.25K 174 16 696K kmalloc-256
2561 2475 96% 0.61K 197 13 1576K proc_inode_cache
1984 1984 100% 0.06K 31 64 124K ext4_free_data
864 864 100% 0.63K 72 12 576K shmem_inode_cache
824 824 100% 1.00K 103 8 824K nfs_inode_cache
756 720 95% 0.11K 21 36 84K journal_head
702 702 100% 0.10K 18 39 72K blkdev_ioc
629 629 100% 0.45K 37 17 296K vvp_thread_kmem
624 624 100% 0.16K 26 24 104K cl_env_kmem
624 584 93% 2.48K 52 12 1664K osc_thread_kmem

Active / Total Objects (% used) : 66366086 / 67550379 (98.2%)
Active / Total Slabs (% used) : 3406050 / 3406050 (100.0%)
Active / Total Caches (% used) : 86 / 144 (59.7%)
Active / Total Size (% used) : 13909420.38K / 14316399.86K (97.2%)
Minimum / Average / Maximum Object : 0.01K / 0.21K / 8.00K

OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME
12250752 11967284 97% 0.06K 191418 64 765672K kmalloc-64
11859008 11647845 98% 0.12K 370594 32 1482376K kmalloc-128
8742844 8598380 98% 0.18K 397402 22 1589608K vm_area_struct
8729100 8592133 98% 0.21K 484950 18 1939800K cl_lock_kmem
8449340 8449340 100% 0.20K 422467 20 1689868K lovsub_object_kmem
8449337 8449337 100% 0.30K 649949 13 2599796K osc_object_kmem
3434880 3373863 98% 0.50K 429360 8 1717440K ldlm_locks
3317136 3292795 99% 0.31K 276428 12 1105712K ldlm_resources
444184 433975 97% 0.50K 55523 8 222092K kmalloc-512
238764 111854 46% 1.00K 29846 8 238768K kmalloc-1024
217061 212292 97% 1.19K 16697 13 267152K lustre_inode_cache
216450 212822 98% 0.22K 12025 18 48100K ccc_object_kmem
215594 212226 98% 0.23K 12682 17 50728K posix_timers_cache
189042 137698 72% 0.19K 9002 21 36008K kmalloc-192
142336 106544 74% 0.03K 1112 128 4448K kmalloc-32
129880 102900 79% 4.00K 16235 8 519520K kmalloc-4096
129570 128815 99% 0.19K 6170 21 24680K dentry
101816 100897 99% 2.00K 12727 8 203632K kmalloc-2048
100893 79168 78% 0.10K 2587 39 10348K buffer_head
33012 16735 50% 0.09K 786 42 3144K kmalloc-96
26624 18242 68% 0.01K 52 512 208K kmalloc-8
23562 23562 100% 0.55K 1683 14 13464K radix_tree_node
22608 22608 100% 0.11K 628 36 2512K sysfs_dir_cache
11696 6076 51% 0.89K 688 17 11008K ext4_inode_cache
11494 10467 91% 0.55K 821 14 6568K inode_cache
10100 10100 100% 8.00K 2525 4 80800K kmalloc-8192
8075 3858 47% 0.05K 95 85 380K shared_policy_node
5632 5632 100% 0.02K 22 256 88K kmalloc-16
4608 4608 100% 0.02K 18 256 72K ext4_io_page
4200 4200 100% 0.07K 75 56 300K Acpi-ParseExt
4182 4182 100% 0.04K 41 102 164K Acpi-Namespace
3392 3067 90% 0.06K 53 64 212K anon_vma
3200 3200 100% 0.03K 25 128 100K extent_status
2768 2504 90% 0.25K 173 16 692K kmalloc-256
2535 2437 96% 0.61K 195 13 1560K proc_inode_cache
1984 1984 100% 0.06K 31 64 124K ext4_free_data
864 864 100% 0.63K 72 12 576K shmem_inode_cache
824 824 100% 1.00K 103 8 824K nfs_inode_cache
756 720 95% 0.11K 21 36 84K journal_head
702 702 100% 0.10K 18 39 72K blkdev_ioc
629 629 100% 0.45K 37 17 296K vvp_thread_kmem
624 624 100% 0.16K 26 24 104K cl_env_kmem
624 594 95% 2.48K 52 12 1664K osc_thread_kmem

Active / Total Objects (% used) : 66336663 / 68128984 (97.4%)
Active / Total Slabs (% used) : 3446333 / 3446333 (100.0%)
Active / Total Caches (% used) : 86 / 144 (59.7%)
Active / Total Size (% used) : 13919700.20K / 14376913.70K (96.8%)
Minimum / Average / Maximum Object : 0.01K / 0.21K / 8.00K

OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME
12579136 12140219 96% 0.06K 196549 64 786196K kmalloc-64
12013952 11682051 97% 0.12K 375436 32 1501744K kmalloc-128
8746672 8490175 97% 0.18K 397576 22 1590304K vm_area_struct
8734824 8483932 97% 0.21K 485268 18 1941072K cl_lock_kmem
8246550 8246550 100% 0.30K 634350 13 2537400K osc_object_kmem
8246540 8246540 100% 0.20K 412327 20 1649308K lovsub_object_kmem
3734952 3595155 96% 0.50K 466869 8 1867476K ldlm_locks
3664284 3527808 96% 0.31K 305357 12 1221428K ldlm_resources
443368 421802 95% 0.50K 55421 8 221684K kmalloc-512
213811 206672 96% 1.19K 16447 13 263152K lustre_inode_cache
211518 206783 97% 0.22K 11751 18 47004K ccc_object_kmem
210511 206223 97% 0.23K 12383 17 49532K posix_timers_cache
194296 107261 55% 1.00K 24287 8 194296K kmalloc-1024
171717 139872 81% 0.19K 8177 21 32708K kmalloc-192
133854 131854 98% 0.19K 6374 21 25496K dentry
123776 103967 83% 0.03K 967 128 3868K kmalloc-32
111640 98276 88% 4.00K 13955 8 446560K kmalloc-4096
98384 98384 100% 2.00K 12298 8 196768K kmalloc-2048
94185 62732 66% 0.10K 2415 39 9660K buffer_head
27216 27216 100% 0.55K 1944 14 15552K radix_tree_node
22608 22608 100% 0.11K 628 36 2512K sysfs_dir_cache
21504 16937 78% 0.01K 42 512 168K kmalloc-8
11494 10466 91% 0.55K 821 14 6568K inode_cache
10100 10100 100% 8.00K 2525 4 80800K kmalloc-8192
6392 2426 37% 0.89K 376 17 6016K ext4_inode_cache
5632 5632 100% 0.02K 22 256 88K kmalloc-16
5586 4216 75% 0.09K 133 42 532K kmalloc-96
4845 1937 39% 0.05K 57 85 228K shared_policy_node
4608 4608 100% 0.02K 18 256 72K ext4_io_page
4200 4200 100% 0.07K 75 56 300K Acpi-ParseExt
4182 4182 100% 0.04K 41 102 164K Acpi-Namespace
3392 3254 95% 0.06K 53 64 212K anon_vma
3200 3200 100% 0.03K 25 128 100K extent_status
2848 2549 89% 0.25K 178 16 712K kmalloc-256
2509 2439 97% 0.61K 193 13 1544K proc_inode_cache
1984 1984 100% 0.06K 31 64 124K ext4_free_data
864 864 100% 0.63K 72 12 576K shmem_inode_cache
824 824 100% 1.00K 103 8 824K nfs_inode_cache
756 720 95% 0.11K 21 36 84K journal_head
702 702 100% 0.10K 18 39 72K blkdev_ioc
629 629 100% 0.45K 37 17 296K vvp_thread_kmem
624 624 100% 0.16K 26 24 104K cl_env_kmem
624 598 95% 2.48K 52 12 1664K osc_thread_kmem

And one final snapshot, taken after the eviction, with the job attempting to restart itself afterwards.

Active / Total Objects (% used) : 49123575 / 54155998 (90.7%)
Active / Total Slabs (% used) : 2688713 / 2688713 (100.0%)
Active / Total Caches (% used) : 86 / 144 (59.7%)
Active / Total Size (% used) : 10955267.49K / 11690747.15K (93.7%)
Minimum / Average / Maximum Object : 0.01K / 0.22K / 8.00K

OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME
11287936 9151406 81% 0.06K 176374 64 705496K kmalloc-64
9460160 8156221 86% 0.12K 295630 32 1182520K kmalloc-128
6699880 6151693 91% 0.18K 304540 22 1218160K vm_area_struct
6667344 6146409 92% 0.21K 370408 18 1481632K cl_lock_kmem
5386420 5377912 99% 0.20K 269321 20 1077284K lovsub_object_kmem
5377905 5377905 100% 0.30K 413685 13 1654740K osc_object_kmem
3216544 3036219 94% 0.50K 402068 8 1608272K ldlm_locks
3094272 2927971 94% 0.31K 257856 12 1031424K ldlm_resources
328045 319771 97% 1.19K 25235 13 403760K lustre_inode_cache
309042 299995 97% 0.22K 17169 18 68676K ccc_object_kmem
308108 299421 97% 0.23K 18124 17 72496K posix_timers_cache
284928 271396 95% 0.50K 35616 8 142464K kmalloc-512
248600 233616 93% 1.00K 31075 8 248600K kmalloc-1024
226695 198060 87% 0.19K 10795 21 43180K kmalloc-192
202272 173452 85% 0.09K 4816 42 19264K kmalloc-96
202240 181651 89% 0.01K 395 512 1580K kmalloc-8
190428 188036 98% 0.19K 9068 21 36272K dentry
165376 149827 90% 0.03K 1292 128 5168K kmalloc-32
154924 151726 97% 0.55K 11066 14 88528K radix_tree_node
153736 145800 94% 4.00K 19217 8 614944K kmalloc-4096
69440 65483 94% 2.00K 8680 8 138880K kmalloc-2048
32292 31360 97% 0.10K 828 39 3312K buffer_head
22608 22608 100% 0.11K 628 36 2512K sysfs_dir_cache
10100 10100 100% 8.00K 2525 4 80800K kmalloc-8192
8232 8179 99% 0.55K 588 14 4704K inode_cache
5632 5632 100% 0.02K 22 256 88K kmalloc-16
4608 4608 100% 0.02K 18 256 72K ext4_io_page
4200 4200 100% 0.07K 75 56 300K Acpi-ParseExt
4182 4182 100% 0.04K 41 102 164K Acpi-Namespace
3200 3200 100% 0.03K 25 128 100K extent_status
2880 2880 100% 0.06K 45 64 180K anon_vma
2576 2301 89% 0.25K 161 16 644K kmalloc-256
2418 2418 100% 0.61K 186 13 1488K proc_inode_cache
1984 1984 100% 0.06K 31 64 124K ext4_free_data
1955 1556 79% 0.05K 23 85 92K shared_policy_node
1921 1828 95% 0.89K 113 17 1808K ext4_inode_cache
864 864 100% 0.63K 72 12 576K shmem_inode_cache
848 848 100% 1.00K 106 8 848K nfs_inode_cache
756 720 95% 0.11K 21 36 84K journal_head
741 741 100% 0.10K 19 39 76K blkdev_ioc
672 672 100% 0.16K 28 24 112K cl_env_kmem
636 593 93% 2.48K 53 12 1696K osc_thread_kmem
630 587 93% 0.39K 63 10 252K lov_session_kmem
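Reading the growth out of these dumps by eye is tedious. As a minimal sketch (not part of the ticket; it assumes slabtop-style columns exactly as captured above), a small script can parse two snapshots and report which caches grew the most:

```python
def parse_slabtop(text):
    """Parse slabtop-style lines into {cache_name: (objs, cache_kb)}.

    Expected columns: OBJS ACTIVE USE OBJ_SIZE SLABS OBJ/SLAB CACHE_SIZE NAME
    """
    caches = {}
    for line in text.strip().splitlines():
        fields = line.split()
        if len(fields) != 8:
            continue  # skip headers, summaries, blank lines
        objs = int(fields[0])
        cache_kb = int(fields[6].rstrip("K"))
        caches[fields[7]] = (objs, cache_kb)
    return caches

def growth(before, after, top=5):
    """Return caches sorted by cache-size growth in KiB, largest first."""
    deltas = []
    for name, (objs_a, kb_a) in after.items():
        objs_b, kb_b = before.get(name, (0, 0))
        deltas.append((kb_a - kb_b, objs_a - objs_b, name))
    deltas.sort(reverse=True)
    return deltas[:top]

# Two pairs of lines lifted from the snapshots above (first and third dump).
SNAP1 = """\
8163720 8143662 99% 0.21K 453540 18 1814160K cl_lock_kmem
7121569 7121489 99% 0.30K 547813 13 2191252K osc_object_kmem
"""
SNAP2 = """\
8487936 8371662 98% 0.21K 471552 18 1886208K cl_lock_kmem
7467850 7454531 99% 0.30K 574450 13 2297800K osc_object_kmem
"""

for kb, objs, name in growth(parse_slabtop(SNAP1), parse_slabtop(SNAP2)):
    print(f"{name}: +{objs} objects, +{kb} KiB")
```

Run against the full dumps, the Lustre client caches (cl_lock_kmem, osc_object_kmem, lovsub_object_kmem, ldlm_locks) dominate the growth, which is consistent with the client-side lock/object leak tracked in LU-4053.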

Generated at Sat Feb 10 01:45:33 UTC 2024 using Jira 9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c.