Details
- Type: Technical task
- Resolution: Unresolved
- Priority: Blocker
Description
On the client side, WBC borrows heavily from the design and implementation of Linux/tmpfs. It uses a virtual in-memory subtree to represent a directory that is cached entirely on the client.
All inodes and directory entries (dentry for short) are stored in memory and managed by the Linux VFS layer; each is a common VFS data structure with its own private data attached.
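A minimal sketch of that layout, assuming the tmpfs pattern of embedding the generic VFS inode inside a private per-filesystem structure; the names wbc_inode_info, wi_flags, and wbc_i() are illustrative placeholders, not actual Lustre identifiers:

    /* Each in-memory MemFS/WBC inode embeds the generic VFS inode; the
     * surrounding structure carries the WBC-specific private data. */
    #include <linux/fs.h>
    #include <linux/list.h>

    struct wbc_inode_info {
    	struct inode		wi_vfs_inode;	/* generic VFS inode, embedded */
    	unsigned int		wi_flags;	/* e.g. "covered by an EX WBC lock" */
    	struct list_head	wi_dirty_item;	/* linkage for deferred flush to the MDT */
    };

    /* Recover the private info from the VFS inode handed back by the VFS layer. */
    static inline struct wbc_inode_info *wbc_i(struct inode *inode)
    {
    	return container_of(inode, struct wbc_inode_info, wi_vfs_inode);
    }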
When a file is created under a directory protected by an EX WBC lock, the client only creates the corresponding in-memory inode and dentry, then pins the dentry in the dentry cache by taking an extra reference on the dentry object.
When the file is unlinked, that reference is put. The in-memory dentry and inode are thus released when the last reference on the dentry object is dropped.
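A minimal sketch of this pin/unpin protocol, modeled on tmpfs (mm/shmem.c), where creation takes an extra dentry reference to keep the dentry/inode pair in cache and unlink drops it; wbc_create, wbc_unlink, and wbc_new_inode are hypothetical names, and the signatures are simplified relative to real inode_operations callbacks:

    #include <linux/fs.h>
    #include <linux/dcache.h>

    static int wbc_create(struct inode *dir, struct dentry *dentry, umode_t mode)
    {
    	/* wbc_new_inode() is a hypothetical helper allocating the in-memory inode. */
    	struct inode *inode = wbc_new_inode(dir->i_sb, dir, mode);

    	if (!inode)
    		return -ENOSPC;

    	d_instantiate(dentry, inode);
    	dget(dentry);		/* extra count: pin the dentry in the dcache */
    	return 0;
    }

    static int wbc_unlink(struct inode *dir, struct dentry *dentry)
    {
    	drop_nlink(d_inode(dentry));
    	dput(dentry);		/* drop the pin taken at create time */
    	return 0;
    }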
Also similar to Linux/tmpfs, file data is written directly into the page cache and pinned there.
To prevent exhausting all virtual memory on the client, MemFS should allow an administrator to specify an upper bound on cache size in two respects (see the sketch after this list):
- page cache size for caching file data;
- the maximum number of inodes.
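A minimal sketch of how these two limits could be accounted, following the tmpfs pattern of per-superblock counters (max_blocks/max_inodes in shmem_sb_info); the structure and function names below are placeholders, not existing MemFS code:

    #include <linux/spinlock.h>
    #include <linux/errno.h>

    struct memfs_sb_info {
    	spinlock_t	lock;
    	unsigned long	max_pages;	/* cap on page-cache pages for file data */
    	unsigned long	used_pages;
    	unsigned long	max_inodes;	/* cap on the number of cached inodes */
    	unsigned long	free_inodes;
    };

    /* Called before allocating a new in-memory inode; fails with -ENOSPC
     * once the administrator-configured inode limit is reached. */
    static int memfs_reserve_inode(struct memfs_sb_info *sbinfo)
    {
    	int rc = 0;

    	spin_lock(&sbinfo->lock);
    	if (sbinfo->max_inodes && !sbinfo->free_inodes)
    		rc = -ENOSPC;
    	else if (sbinfo->max_inodes)
    		sbinfo->free_inodes--;
    	spin_unlock(&sbinfo->lock);
    	return rc;
    }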
Discussion:
Should we add a memory cache limit?
Or should we not limit the memory cache at all and instead let the VM reclaim the WBC caches (inode/dentry/page caches) automatically? If we do not limit it, this will break the max_cached_mb (llite.*.max_cached_mb) memory-usage limit in Lustre...
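For reference, a minimal sketch of the "let the VM reclaim it" alternative: register a shrinker so that memory pressure, rather than a hard limit, trims the WBC caches. This uses the standard Linux shrinker callbacks (registration details vary across kernel versions); wbc_cache_count() and wbc_cache_trim() are hypothetical helpers for WBC-specific accounting and eviction:

    #include <linux/shrinker.h>

    static unsigned long wbc_shrink_count(struct shrinker *s,
    				      struct shrink_control *sc)
    {
    	/* Report how many cached WBC objects are currently reclaimable. */
    	return wbc_cache_count();		/* hypothetical helper */
    }

    static unsigned long wbc_shrink_scan(struct shrinker *s,
    				     struct shrink_control *sc)
    {
    	/* Evict up to sc->nr_to_scan cached objects and report how many were freed. */
    	return wbc_cache_trim(sc->nr_to_scan);	/* hypothetical helper */
    }

    static struct shrinker wbc_shrinker = {
    	.count_objects	= wbc_shrink_count,
    	.scan_objects	= wbc_shrink_scan,
    	.seeks		= DEFAULT_SEEKS,
    };

    /* Register at mount/setup time and unregister on teardown. */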
Issue Links
- is related to LU-10938 Metadata writeback cache support (Open)