Details
Type: Bug
Resolution: Not a Bug
Priority: Major
Environment:
Server: 2.1.4, centos 6.3
Client: 2.1.5, sles11sp1
Description
We have an ongoing problem with unreclaimable slab memory stuck in Lustre. It differs from LU-2613 in that unmounting the Lustre filesystem did not release the stuck memory. We also tried lflush as well as the write technique suggested by Niu Yawei in LU-2613 at 15/Jan/13 8:54 AM; none of these worked for us (see the sketch after the slabtop output below).
This is an ongoing problem and has caused a lot of problems on our production systems.
I will append /proc/meminfo and 'slabtop' output below. Let me know what other information you need.
bridge2 /proc # cat meminfo
MemTotal: 65978336 kB
MemFree: 4417544 kB
Buffers: 7804 kB
Cached: 183036 kB
SwapCached: 6068 kB
Active: 101840 kB
Inactive: 183404 kB
Active(anon): 83648 kB
Inactive(anon): 13036 kB
Active(file): 18192 kB
Inactive(file): 170368 kB
Unevictable: 3480 kB
Mlocked: 3480 kB
SwapTotal: 2000052 kB
SwapFree: 1669420 kB
Dirty: 288 kB
Writeback: 0 kB
AnonPages: 92980 kB
Mapped: 16964 kB
Shmem: 136 kB
Slab: 57633936 kB
SReclaimable: 1029472 kB
SUnreclaim: 56604464 kB
KernelStack: 5280 kB
PageTables: 15928 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 34989220 kB
Committed_AS: 737448 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 2348084 kB
VmallocChunk: 34297775112 kB
HardwareCorrupted: 0 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 7104 kB
DirectMap2M: 67100672 kB
bridge2 /proc #
bridge2 ~ # slabtop --once
Active / Total Objects (% used) : 2291913 / 500886088 (0.5%)
Active / Total Slabs (% used) : 170870 / 14351991 (1.2%)
Active / Total Caches (% used) : 151 / 249 (60.6%)
Active / Total Size (% used) : 838108.56K / 53998141.57K (1.6%)
Minimum / Average / Maximum Object : 0.01K / 0.11K / 4096.00K
OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME
133434868 41138 0% 0.04K 1450379 92 5801516K lovsub_page_kmem
124369720 77440 0% 0.19K 6218486 20 24873944K cl_page_kmem
115027759 41264 0% 0.05K 1493867 77 5975468K lov_page_kmem
77597568 41174 0% 0.08K 1616616 48 6466464K vvp_page_kmem
44004405 38371 0% 0.26K 2933627 15 11734508K osc_page_kmem
1558690 9106 0% 0.54K 222670 7 890680K radix_tree_node
1435785 457262 31% 0.25K 95719 15 382876K size-256
991104 24455 2% 0.50K 123888 8 495552K size-512
591420 573510 96% 0.12K 19714 30 78856K size-128
583038 507363 87% 0.06K 9882 59 39528K size-64
399080 4356 1% 0.19K 19954 20 79816K cred_jar
112112 81796 72% 0.03K 1001 112 4004K size-32
106368 106154 99% 0.08K 2216 48 8864K sysfs_dir_cache
89740 26198 29% 1.00K 22435 4 89740K size-1024
87018 1601 1% 0.62K 14503 6 58012K proc_inode_cache
53772 2845 5% 0.58K 8962 6 35848K inode_cache
44781 44746 99% 8.00K 44781 1 358248K size-8192
42700 28830 67% 0.19K 2135 20 8540K dentry
38990 2213 5% 0.79K 7798 5 31192K ext3_inode_cache
25525 24880 97% 0.78K 5105 5 20420K shmem_inode_cache
23394 16849 72% 0.18K 1114 21 4456K vm_area_struct
22340 6262 28% 0.19K 1117 20 4468K filp
20415 19243 94% 0.25K 1361 15 5444K skbuff_head_cache
19893 2152 10% 0.20K 1047 19 4188K ll_obdo_cache
15097 15006 99% 4.00K 15097 1 60388K size-4096
14076 1837 13% 0.04K 153 92 612K osc_req_kmem
12696 1448 11% 0.04K 138 92 552K lovsub_req_kmem
11684 1444 12% 0.04K 127 92 508K lov_req_kmem
10028 1477 14% 0.04K 109 92 436K ccc_req_kmem
9750 3000 30% 0.12K 325 30 1300K nfs_page
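For reference, below is a hedged sketch of the kind of reclaim nudge mentioned at the top of this description, assuming the "write technique" from LU-2613 refers to writing to /proc/sys/vm/drop_caches; the program is just a minimal C equivalent of "echo 3 > /proc/sys/vm/drop_caches" and is illustrative rather than the exact steps from that ticket.

#include <stdio.h>

int main(void)
{
    /* Must be run as root. */
    FILE *f = fopen("/proc/sys/vm/drop_caches", "w");

    if (!f) {
        perror("open /proc/sys/vm/drop_caches");
        return 1;
    }

    /* 1 = drop page cache, 2 = drop dentries/inodes (reclaimable slab),
     * 3 = both. */
    fputs("3\n", f);

    if (fclose(f) != 0) {
        perror("write /proc/sys/vm/drop_caches");
        return 1;
    }
    return 0;
}

As noted above, none of these attempts brought the Slab/SUnreclaim figures down for us.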
Attachments
Issue Links
- is duplicated by LU-4053 client leaking objects/locks during IO (Resolved)
The slab memory is accounted under SUnreclaim when the slab cache is created without the SLAB_RECLAIM_ACCOUNT flag. The cl/lov/osc page slabs are created without this flag, so they show up under SUnreclaim. I think adding the flag and a shrinker callback won't help, because the problem now is that the slab cache isn't being reaped, not that the slab objects aren't being freed.
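For reference, a minimal sketch of how that accounting split arises (assuming a 2.6-era kernel; the cache and function names here are illustrative, not the actual Lustre call sites): pages backing a cache created with SLAB_RECLAIM_ACCOUNT are counted under SReclaimable, while everything else lands in SUnreclaim.

#include <linux/module.h>
#include <linux/slab.h>

/* Pages backing this cache show up under SReclaimable in /proc/meminfo. */
static struct kmem_cache *acct_cache;

/* Pages backing this cache show up under SUnreclaim, which is how the
 * cl/lov/osc page caches behave. */
static struct kmem_cache *noacct_cache;

static int __init slab_acct_demo_init(void)
{
    acct_cache = kmem_cache_create("demo_reclaim_acct", 256, 0,
                                   SLAB_RECLAIM_ACCOUNT, NULL);
    noacct_cache = kmem_cache_create("demo_no_acct", 256, 0, 0, NULL);
    if (!acct_cache || !noacct_cache) {
        if (acct_cache)
            kmem_cache_destroy(acct_cache);
        if (noacct_cache)
            kmem_cache_destroy(noacct_cache);
        return -ENOMEM;
    }
    return 0;
}

static void __exit slab_acct_demo_exit(void)
{
    kmem_cache_destroy(acct_cache);
    kmem_cache_destroy(noacct_cache);
}

module_init(slab_acct_demo_init);
module_exit(slab_acct_demo_exit);
MODULE_LICENSE("GPL");

Either way, the flag only changes which counter the pages are reported under; it does not by itself make the allocator hand the pages back.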
Right, it's not a memory leak problem; all the slab memory will be freed after unloading the Lustre modules (see Jay's previous comment).
I don't think it's a Lustre problem: the slab objects are already freed and put back into the slab cache after umount, so the problem is that the kernel didn't reap the slab cache for some reason (actually, I don't know how to reap a slab cache proactively on a 2.6 kernel).
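For completeness, the only per-cache reaping interface I'm aware of on 2.6 kernels is in-kernel: kmem_cache_shrink() releases a cache's completely free slabs back to the page allocator, but it takes the kmem_cache pointer, so only the code that owns the cache can call it; there is no per-cache knob from userspace. A hedged sketch (the helper name is illustrative):

#include <linux/slab.h>

/*
 * Illustrative helper: ask the allocator to release any slabs in 'cache'
 * that contain only free objects.  Returns 0 if every slab in the cache
 * could be released.
 */
static int demo_reap_cache(struct kmem_cache *cache)
{
    return kmem_cache_shrink(cache);
}

Beyond that, the SLAB allocator only trims free slabs gradually from its periodic reap work, which would be consistent with SUnreclaim staying high here even though the objects themselves are free.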