Details
- Type: Bug
- Resolution: Cannot Reproduce
- Priority: Critical
- None
- Lustre 2.4.2
- None
- Environment: vanilla 2.6.32.61, lustre 2.4.2, Hardware: Dual Xeon L5640 / 24G RAM
- 4
- 13030
Description
On our MDS, we seem to have a memory leak: buffer cache that is unreclaimable. Our workload is extremely metadata intensive, so the MDS is under constant heavy load.
After a fresh reboot the buffer cache fills up quickly. After a while RAM is exhausted and the machine starts swapping, essentially bringing Lustre to a halt
(clients disconnect, lock failures, etc.).
The strange thing is that
$ echo 3 > /proc/sys/vm/drop_caches
frees only part of the allocated buffer cache, and after a while the unreclaimable part fills RAM completely, leading to the swap disaster.
Setting /proc/sys/vm/vfs_cache_pressure above 100 doesn't help, and
a large value in /proc/sys/vm/min_free_kbytes is happily ignored.
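To see how much of that memory the VM is even allowed to reclaim, the SReclaimable/SUnreclaim split in /proc/meminfo can be checked before and after dropping caches. A minimal read-only sketch (field names are standard /proc/meminfo counters on 2.6.32):

```shell
# SReclaimable is slab memory the VM may reclaim under pressure;
# SUnreclaim is the part that drop_caches cannot free. A steadily
# growing SUnreclaim would match the behaviour described above.
grep -E '^(Buffers|Cached|Slab|SReclaimable|SUnreclaim):' /proc/meminfo
```

Running this before and after `echo 3 > /proc/sys/vm/drop_caches` shows whether the stuck memory is counted as unreclaimable slab or sits outside slab accounting entirely.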
Also strange: after unmounting all Lustre targets and even unloading the Lustre kernel modules, the kernel still reports the previously allocated buffer cache as used memory, even though the buffer cache itself is then shown as close to zero. So it seems we have a big memory leak.
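One rough way to quantify such a leak after unmount is to subtract free memory, page cache, and slab from MemTotal; a large remainder (well beyond normal kernel text, page tables, vmalloc, etc.) is consistent with pages leaked outside the usual accounting. A sketch, using only standard /proc/meminfo fields:

```shell
# Sum up the memory that is neither free nor attributable to page
# cache or slab. Note this is approximate: legitimate kernel
# allocations (vmalloc, percpu, page tables) also land here.
awk '/^MemTotal:|^MemFree:|^Buffers:|^Cached:|^Slab:/ {a[$1]=$2}
     END { printf "unaccounted: %d kB\n",
           a["MemTotal:"]-a["MemFree:"]-a["Buffers:"]-a["Cached:"]-a["Slab:"] }' /proc/meminfo
```

Comparing this figure before mounting Lustre and after unmounting it (with the modules unloaded) would show whether the "used" memory really is unaccounted for.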
Attachments
Issue Links
- is related to: LU-4053 client leaking objects/locks during IO (Resolved)