From: Anton Blanchard

When trawling through code I noticed something that could be dodgy: if cpu_vm_mask ever got out of sync, all manner of weird things can happen. I'm not sure if there is a bug here yet - you'd have to fork (which zeros cpu_vm_mask) and somehow reuse the same mm (so that prev == next is true). Anyway, here's the patch; I'll probably play it safe and merge it.

---

 25-akpm/include/asm-ppc64/mmu_context.h |    3 ++-
 1 files changed, 2 insertions(+), 1 deletion(-)

diff -puN include/asm-ppc64/mmu_context.h~ppc64-cpu_vm_mask-fix include/asm-ppc64/mmu_context.h
--- 25/include/asm-ppc64/mmu_context.h~ppc64-cpu_vm_mask-fix	Tue Feb  3 13:18:12 2004
+++ 25-akpm/include/asm-ppc64/mmu_context.h	Tue Feb  3 13:18:12 2004
@@ -156,6 +156,8 @@ static inline void switch_mm(struct mm_s
 	: : );
 #endif /* CONFIG_ALTIVEC */

+	cpu_set(smp_processor_id(), next->cpu_vm_mask);
+
 	/* No need to flush userspace segments if the mm doesn't change */
 	if (prev == next)
 		return;
@@ -164,7 +166,6 @@ static inline void switch_mm(struct mm_s
 		flush_slb(tsk, next);
 	else
 		flush_stab(tsk, next);
-	cpu_set(smp_processor_id(), next->cpu_vm_mask);
 }

 #define deactivate_mm(tsk,mm)	do { } while (0)
_