From: Anton Blanchard

Paul wrote a patch to use some of the rmap infrastructure to flush TLB
entries on ppc64.  When testing it we found a problem in vmalloc where it
sets up the pte -> address mapping incorrectly.  We clear the top bits of
the address but then forget to pass in the full address to
pte_alloc_kernel.  The end result is that the address in page->index is
truncated.

I fixed it in a similar way to how zeromap_pmd_range etc. do it.  I'm
guessing no one uses the rmap hooks on vmalloc pages yet, so we haven't
seen this problem.

---

 mm/vmalloc.c |    5 +++--
 1 files changed, 3 insertions(+), 2 deletions(-)

diff -puN mm/vmalloc.c~vmalloc-address-offset-fix mm/vmalloc.c
--- 25/mm/vmalloc.c~vmalloc-address-offset-fix	2004-01-20 20:34:28.000000000 -0800
+++ 25-akpm/mm/vmalloc.c	2004-01-20 20:34:28.000000000 -0800
@@ -114,15 +114,16 @@ static int map_area_pmd(pmd_t *pmd, unsi
 			unsigned long size, pgprot_t prot,
 			struct page ***pages)
 {
-	unsigned long end;
+	unsigned long base, end;
 
+	base = address & PGDIR_MASK;
 	address &= ~PGDIR_MASK;
 	end = address + size;
 	if (end > PGDIR_SIZE)
 		end = PGDIR_SIZE;
 
 	do {
-		pte_t * pte = pte_alloc_kernel(&init_mm, pmd, address);
+		pte_t * pte = pte_alloc_kernel(&init_mm, pmd, base + address);
 		if (!pte)
 			return -ENOMEM;
 		if (map_area_pte(pte, address, end - address, prot, pages))
_
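
For reference, the base/offset pattern this fix borrows from zeromap_pmd_range
looks roughly like the sketch below.  The function name is a placeholder and
the loop tail (the address advance and termination test) is filled in from the
usual pmd-walk pattern rather than taken from the hunk above, so treat it as an
illustration, not a verbatim excerpt.  The point is that the PGDIR-aligned base
is saved before the low bits are masked off, and the full virtual address
(base + address) is what reaches pte_alloc_kernel, so rmap's page->index is set
from the real address while the walk itself still works with the offset inside
the current pgd entry.

/*
 * Illustrative only: a pmd-range walker using the same base/offset
 * trick as zeromap_pmd_range.  "example_map_pmd" is a made-up name;
 * the body mirrors map_area_pmd after the fix above.
 */
static int example_map_pmd(pmd_t *pmd, unsigned long address,
			   unsigned long size, pgprot_t prot,
			   struct page ***pages)
{
	unsigned long base, end;

	base = address & PGDIR_MASK;	/* remember the high bits */
	address &= ~PGDIR_MASK;		/* offset within this pgd entry */
	end = address + size;
	if (end > PGDIR_SIZE)
		end = PGDIR_SIZE;

	do {
		/* pass the full address so page->index is not truncated */
		pte_t *pte = pte_alloc_kernel(&init_mm, pmd, base + address);
		if (!pte)
			return -ENOMEM;
		if (map_area_pte(pte, address, end - address, prot, pages))
			return -ENOMEM;
		address = (address + PMD_SIZE) & PMD_MASK;
		pmd++;
	} while (address < end);

	return 0;
}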