Details
- Bug
- Resolution: Unresolved
- Minor
- 13144
Description
The gnilnd vmap checksum path uses a per-CPU buffer as a scratch array for page pointers, which is then passed to vmap():
    if ((odd || *kgnilnd_tunables.kgn_vmap_cksum) && nkiov > 1) {
            struct page **pages = kgnilnd_data.kgn_cksum_map_pages[get_cpu()];

            LASSERTF(pages != NULL, "NULL pages for cpu %d map_pages 0x%p\n",
                     get_cpu(), kgnilnd_data.kgn_cksum_map_pages);

            CDEBUG(D_BUFFS, "odd %d len %u offset %u nob %u\n",
                   odd, kiov[0].kiov_len, offset, nob);

            for (i = 0; i < nkiov; i++) {
                    pages[i] = kiov[i].kiov_page;
            }

            addr = vmap(pages, nkiov, VM_MAP, PAGE_KERNEL);
This is wrong: in thread context, if another thread is scheduled onto the same CPU before vmap() completes, the page pointers in the per-CPU scratch buffer can be overwritten, and the checksum will be computed over the wrong pages.
Because this option is disabled by default, I think it's a low-priority issue.
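The hazard can be illustrated outside the kernel. The userspace sketch below is not the kgnilnd code; it only models one possible fix, giving each caller a private pointer array (instead of a shared per-CPU scratch buffer) so a concurrently scheduled caller cannot clobber the entries between filling the array and consuming it. The names worker, NPAGES, and the pointer-value encoding are all hypothetical stand-ins.

    #include <assert.h>
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>

    #define NPAGES   4
    #define NTHREADS 8

    /* Each worker fills a private, stack-allocated pointer array. Because no
     * other thread shares this storage, a reschedule between filling the
     * array and consuming it (the window the bug report describes for the
     * shared per-CPU kgn_cksum_map_pages buffer) cannot corrupt it. */
    static void *worker(void *arg)
    {
            long id = (long)arg;
            void *pages[NPAGES];    /* private scratch, one per caller */
            int i;

            for (i = 0; i < NPAGES; i++)
                    pages[i] = (void *)(id * NPAGES + i); /* stand-in for kiov_page */

            /* simulate being scheduled out before the consumer (vmap in the
             * real code) runs */
            sched_yield();

            for (i = 0; i < NPAGES; i++)
                    assert(pages[i] == (void *)(id * NPAGES + i));
            return NULL;
    }

    int main(void)
    {
            pthread_t t[NTHREADS];
            long i;

            for (i = 0; i < NTHREADS; i++)
                    pthread_create(&t[i], NULL, worker, (void *)i);
            for (i = 0; i < NTHREADS; i++)
                    pthread_join(t[i], NULL);
            printf("ok\n");
            return 0;
    }

In the kernel itself the equivalent fixes would be to keep preemption disabled across both the fill loop and the vmap() call (get_cpu() without a matching put_cpu() until after vmap()), or to allocate a per-call array; the sketch above models the latter.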