[LU-12534] Add delete_pagevec_from_page_cache Created: 11/Jul/19 Updated: 09/Sep/19 |
|
| Status: | Open |
| Project: | Lustre |
| Component/s: | None |
| Affects Version/s: | None |
| Fix Version/s: | None |
| Type: | Improvement | Priority: | Minor |
| Reporter: | Patrick Farrell (Inactive) | Assignee: | WC Triage |
| Resolution: | Unresolved | Votes: | 0 |
| Labels: | None | ||
| Rank (Obsolete): | 9223372036854775807 |
| Description |
|
When doing shared-file reads, we back up in page discard (this can also happen when writing a file, but it is less of an issue):
- 32.79%  0.10%  ior  [kernel.kallsyms]  [k] osc_page_init
   - 32.69% osc_page_init
      - 32.22% osc_lru_alloc
         - 31.58% osc_lru_reclaim
            - 31.57% osc_lru_shrink
               - 30.24% discard_pagevec
                  - 27.27% cl_page_discard
                     - 27.22% vvp_page_discard
                        - 26.55% delete_from_page_cache
                           + 23.77% _raw_qspin_lock_irq
                           + 1.58% mem_cgroup_uncharge_cache_page
                             1.03% __delete_from_page_cache
This is seen when doing the IO500 hard read with more data than will fit in cache on a node (and many threads). The proposed delete_pagevec_from_page_cache() will be very similar to the existing delete_from_page_cache():

void delete_from_page_cache(struct page *page)
{
        struct address_space *mapping = page->mapping;
        void (*freepage)(struct page *);

        BUG_ON(!PageLocked(page));

        freepage = mapping->a_ops->freepage;
        spin_lock_irq(&mapping->tree_lock);
        __delete_from_page_cache(page, NULL);
        spin_unlock_irq(&mapping->tree_lock);
        mem_cgroup_uncharge_cache_page(page);

        if (freepage)
                freepage(page);
        page_cache_release(page);
}
EXPORT_SYMBOL(delete_from_page_cache);
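A minimal sketch of what the batched variant might look like, assuming the same 3.x-era interfaces quoted above and that every page in the pagevec belongs to the same mapping; the signature and loop structure here are assumptions for illustration, not an existing upstream API:

/*
 * Sketch only: batched counterpart to delete_from_page_cache() that takes
 * mapping->tree_lock once for the whole pagevec instead of once per page.
 * Assumes all pages in @pvec belong to @mapping.
 */
#include <linux/mm.h>
#include <linux/pagemap.h>
#include <linux/pagevec.h>
#include <linux/memcontrol.h>

void delete_pagevec_from_page_cache(struct address_space *mapping,
                                    struct pagevec *pvec)
{
        void (*freepage)(struct page *) = mapping->a_ops->freepage;
        int i;

        /* One tree_lock round trip for the whole batch. */
        spin_lock_irq(&mapping->tree_lock);
        for (i = 0; i < pagevec_count(pvec); i++) {
                struct page *page = pvec->pages[i];

                BUG_ON(!PageLocked(page));
                __delete_from_page_cache(page, NULL);
        }
        spin_unlock_irq(&mapping->tree_lock);

        /* Per-page teardown that does not need the tree lock. */
        for (i = 0; i < pagevec_count(pvec); i++) {
                struct page *page = pvec->pages[i];

                mem_cgroup_uncharge_cache_page(page);
                if (freepage)
                        freepage(page);
                page_cache_release(page);
        }
}

In the discard path profiled above, discard_pagevec() under osc_lru_shrink() already has the pages gathered into a batch, so the caller could presumably group them by mapping and hand each group over in a single call.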
Taking mapping->tree_lock once per batch instead of once per page should offer a decent speed up, given that most of the time above is spent in _raw_qspin_lock_irq under delete_from_page_cache. |
| Comments |
| Comment by Patrick Farrell (Inactive) [ 11/Jul/19 ] |
|
Turns out this is actually not easily possible: some of the required functions aren't exported to modules. Notably, __delete_from_page_cache() and mem_cgroup_uncharge_cache_page() are not available. Let's stick this on the back burner, then... something to try to get in upstream when we find the time. |