From 4d7354a9447283d80737265b50812b451f40c4d7 Mon Sep 17 00:00:00 2001
From: Peter Zijlstra
Date: Fri, 3 Jul 2009 08:30:37 -0500
Subject: [PATCH] x86: decouple pagefault-logic in highmem/kmap

commit 4beedc411ebd0cd5d294acfe9b33e8c2ad7dc598 in tip.

With the separation of pagefault_{disable,enable}() from the
preempt_count a previously overlooked dependency became painfully
clear. kmap_atomic() is per cpu and relies not only on disabling
the pagefault handler, but really needs preemption disabled too.

Make this explicit now - so that we can change pagefault_disable().

Signed-off-by: Peter Zijlstra
Signed-off-by: Ingo Molnar
Signed-off-by: Thomas Gleixner
Signed-off-by: Paul Gortmaker
---
 arch/x86/mm/highmem_32.c |    2 ++
 1 files changed, 2 insertions(+), 0 deletions(-)

diff --git a/arch/x86/mm/highmem_32.c b/arch/x86/mm/highmem_32.c
index c838ac8..76e0544 100644
--- a/arch/x86/mm/highmem_32.c
+++ b/arch/x86/mm/highmem_32.c
@@ -33,6 +33,7 @@ void *kmap_atomic_prot(struct page *page, enum km_type type, pgprot_t prot)
 	unsigned long vaddr;
 
 	/* even !CONFIG_PREEMPT needs this, for in_atomic in do_page_fault */
+	preempt_disable();
 	pagefault_disable();
 
 	if (!PageHighMem(page))
@@ -74,6 +75,7 @@ void kunmap_atomic(void *kvaddr, enum km_type type)
 	}
 
 	pagefault_enable();
+	preempt_enable();
 }
 
 /*
-- 
1.7.0.4
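
For context (not part of the patch), here is a minimal sketch of how a caller
typically uses the 2.6-era kmap_atomic() API that this change affects. The
helper name copy_from_highpage() and the choice of the KM_USER0 slot are
illustrative assumptions, not code from the tree:

/*
 * Usage sketch only: a hypothetical helper copying data out of a
 * (possibly highmem) page with the enum km_type based kmap_atomic() API.
 */
#include <linux/highmem.h>
#include <linux/mm.h>
#include <linux/string.h>

static void copy_from_highpage(struct page *page, void *dst, size_t len)
{
	void *vaddr;

	/*
	 * kmap_atomic() hands back a mapping in a per-CPU fixmap slot
	 * (indexed via smp_processor_id()), so the caller must not migrate
	 * to another CPU between map and unmap.  After this patch that
	 * guarantee comes from the explicit preempt_disable() inside
	 * kmap_atomic_prot() rather than as a side effect of
	 * pagefault_disable().
	 */
	vaddr = kmap_atomic(page, KM_USER0);
	memcpy(dst, vaddr, len);
	kunmap_atomic(vaddr, KM_USER0);
}

Because the slot index depends on the current CPU, the mapping is only valid
while the task stays put; making preempt_disable()/preempt_enable() explicit
preserves that guarantee once pagefault_disable() no longer manipulates the
preempt_count.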