From: Hugh Dickins

Move page_add_anon_rmap's BUG_ON(page_mapping(page)) inside the rmap_lock
(well, might as well just check mapping if !mapcount then): if this page is
being mapped or unmapped on another cpu at the same time, page_mapping's
PageAnon(page) and page->mapping are volatile.

But page_mapping(page) is used more widely: I've a nasty feeling that
clear_page_anon, page_add_anon_rmap and/or page_mapping need barriers added
(also in 2.6.6 itself).

---

 25-akpm/mm/rmap.c |    2 +-
 1 files changed, 1 insertion(+), 1 deletion(-)

diff -puN mm/rmap.c~rmap-9-page_add_anon_rmap-bug-fix mm/rmap.c
--- 25/mm/rmap.c~rmap-9-page_add_anon_rmap-bug-fix	2004-05-03 20:12:00.744524664 -0700
+++ 25-akpm/mm/rmap.c	2004-05-03 20:12:00.747524208 -0700
@@ -215,10 +215,10 @@ void fastcall page_add_anon_rmap(struct
 	struct mm_struct *mm, unsigned long address)
 {
 	BUG_ON(PageReserved(page));
-	BUG_ON(page_mapping(page));
 
 	rmap_lock(page);
 	if (!page->mapcount) {
+		BUG_ON(page->mapping);
 		SetPageAnon(page);
 		page->index = address & PAGE_MASK;
 		page->mapping = (void *) mm;	/* until next patch */
_
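
For illustration, here is a minimal userspace sketch of the locking pattern
the patch adopts.  This is not kernel code: struct obj and the names
add_rmap/remove_rmap/mapping/mapcount are hypothetical stand-ins for struct
page, page_add_anon_rmap, the unmap path, page->mapping and page->mapcount,
with a pthread mutex playing the role of rmap_lock.  The point it shows is
that the "unmapped pages have no mapping" invariant only holds while the
lock is held, so the assertion must sit inside it.

/*
 * Sketch only: the invariant "mapcount == 0 implies mapping == NULL"
 * is maintained under o.lock, so it may only be asserted under o.lock.
 * Build with: cc -pthread demo.c
 */
#include <assert.h>
#include <pthread.h>
#include <stdio.h>

struct obj {
	pthread_mutex_t lock;	/* plays the role of rmap_lock(page) */
	void *mapping;		/* plays the role of page->mapping */
	int mapcount;		/* plays the role of page->mapcount */
};

static struct obj o = { .lock = PTHREAD_MUTEX_INITIALIZER };

/* Analogue of the fixed page_add_anon_rmap(): check under the lock. */
static void add_rmap(void *mm)
{
	/* assert(o.mapping == NULL) here, outside the lock, could fire
	 * spuriously while the other thread has the object mapped. */
	pthread_mutex_lock(&o.lock);
	if (o.mapcount == 0) {
		assert(o.mapping == NULL);	/* safe: lock held */
		o.mapping = mm;			/* first mapper installs */
	}
	o.mapcount++;
	pthread_mutex_unlock(&o.lock);
}

/* Analogue of the unmap side: last unmapper clears the association. */
static void remove_rmap(void)
{
	pthread_mutex_lock(&o.lock);
	if (--o.mapcount == 0)
		o.mapping = NULL;
	pthread_mutex_unlock(&o.lock);
}

static void *worker(void *arg)
{
	for (int i = 0; i < 1000000; i++) {
		add_rmap(arg);
		remove_rmap();
	}
	return NULL;
}

int main(void)
{
	pthread_t a, b;
	int ma, mb;	/* dummy "mm" identities */

	pthread_create(&a, NULL, worker, &ma);
	pthread_create(&b, NULL, worker, &mb);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	printf("final mapcount=%d mapping=%p\n", o.mapcount, o.mapping);
	return 0;
}

With the old ordering (the check before taking the lock), the assertion
could trip merely because the other thread was mid-map or mid-unmap; after
the move, it can only fire on a genuinely inconsistent state.  Note the
sketch does not address the separate barrier question the changelog raises
for lock-free readers of page->mapping.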