From: Andi Kleen

Fix a long-standing race in x86-64 SMP TLB handling.  When an mm is freed
while another thread exits to a lazy TLB thread (like idle), the freed user
page tables would still be kept loaded in the idle thread.  When an
interrupt then does a prefetch on NULL, the CPU would try to follow the
stale entry and read random data.  This could lead to machine checks on
Opterons in some cases.

Credit goes to some unnamed debugging wizards at AMD who described the
problem.  All blame to me.  I did the fix based on their description.

Signed-off-by: Andi Kleen
Signed-off-by: Andrew Morton
---

 25-akpm/include/asm-x86_64/mmu_context.h |    5 +++--
 1 files changed, 3 insertions(+), 2 deletions(-)

diff -puN include/asm-x86_64/mmu_context.h~x86_64-fix-flush-race-on-context-switch include/asm-x86_64/mmu_context.h
--- 25/include/asm-x86_64/mmu_context.h~x86_64-fix-flush-race-on-context-switch	2005-01-16 01:15:32.715364896 -0800
+++ 25-akpm/include/asm-x86_64/mmu_context.h	2005-01-16 01:15:32.719364288 -0800
@@ -51,9 +51,10 @@ static inline void switch_mm(struct mm_s
 			out_of_line_bug();
 		if(!test_and_set_bit(cpu, &next->cpu_vm_mask)) {
 			/* We were in lazy tlb mode and leave_mm disabled
-			 * tlb flush IPI delivery. We must flush our tlb.
+			 * tlb flush IPI delivery. We must reload CR3
+			 * to make sure to use no freed page tables.
 			 */
-			local_flush_tlb();
+			asm volatile("movq %0,%%cr3" :: "r" (__pa(next->pgd)) : "memory");
 			load_LDT_nolock(&next->context, cpu);
 		}
 	}
_