From: Zwane Mwaikambo

This is a patch to make the MTRR initialisation more conformant with what is
stated in volume 3 of the IA-32 Intel Architecture Software Developer's
Manual (10-36, Memory Cache Control).  The most notable change is entering
the no-fill cache mode before clearing the PGE bit in cr4.  Intel also
states that we should do the TLB flush via the cr3 register shuffle.

If there is a problem with the patch please don't hesitate to beat me
vigorously with a clue-by-four.  It has been tested on a 3x Pentium 133,
an 8x PIII Xeon 700, a 1x Celeron 550 and a 32x PIII 500 NUMAQ (hardware
courtesy of OSDL).

 arch/i386/kernel/cpu/mtrr/generic.c |   16 ++++++++++------
 1 files changed, 10 insertions(+), 6 deletions(-)

diff -puN arch/i386/kernel/cpu/mtrr/generic.c~mtrr-init-ordering-fixes arch/i386/kernel/cpu/mtrr/generic.c
--- 25/arch/i386/kernel/cpu/mtrr/generic.c~mtrr-init-ordering-fixes	2003-08-10 14:23:49.000000000 -0700
+++ 25-akpm/arch/i386/kernel/cpu/mtrr/generic.c	2003-08-10 14:23:49.000000000 -0700
@@ -8,6 +8,7 @@
 #include
 #include
 #include
+#include <asm/tlbflush.h>
 #include "mtrr.h"
 
 struct mtrr_state {
@@ -241,18 +242,20 @@ static void prepare_set(void)
 	   more invasive changes to the way the kernel boots  */
 	spin_lock(&set_atomicity_lock);
 
+	/* Enter the no-fill (CD=1, NW=0) cache mode and flush caches. */
+	cr0 = read_cr0() | 0x40000000;	/* set CD flag */
+	wbinvd();
+	write_cr0(cr0);
+	wbinvd();
+
 	/* Save value of CR4 and clear Page Global Enable (bit 7) */
 	if ( cpu_has_pge ) {
 		cr4 = read_cr4();
 		write_cr4(cr4 & (unsigned char) ~(1 << 7));
 	}
 
-	/* Disable and flush caches. Note that wbinvd flushes the TLBs as
-	   a side-effect */
-	cr0 = read_cr0() | 0x40000000;
-	wbinvd();
-	write_cr0(cr0);
-	wbinvd();
+	/* Flush all TLBs via a mov %cr3, %reg;  mov %reg, %cr3 */
+	__flush_tlb();
 
 	/* Save MTRR state */
 	rdmsr(MTRRdefType_MSR, deftype_lo, deftype_hi);
@@ -265,6 +268,7 @@ static void post_set(void)
 {
 	/* Flush caches and TLBs */
 	wbinvd();
+	__flush_tlb();
 
 	/* Intel (P6) standard MTRRs */
 	wrmsr(MTRRdefType_MSR, deftype_lo, deftype_hi);
_
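
For readers who want the intended ordering in one place, here is a rough
sketch (not the patch itself) of what prepare_set(), the MTRR programming
and post_set() add up to after this change.  It reuses the kernel-internal
helpers that already appear in the diff (read_cr0()/write_cr0(),
read_cr4()/write_cr4(), wbinvd(), __flush_tlb(), rdmsr()/wrmsr(),
cpu_has_pge, set_atomicity_lock); the combined function name
mtrr_update_sketch(), the elided range programming and the restore/unlock
tail are illustrative guesses based on the prepare/post pairing, not code
taken from the patch.

static void mtrr_update_sketch(void)		/* hypothetical name */
{
	unsigned long cr0, cr4 = 0;
	unsigned int deftype_lo, deftype_hi;

	spin_lock(&set_atomicity_lock);

	/* Enter no-fill (CD=1, NW=0) cache mode, flushing caches
	   around the cr0 write. */
	cr0 = read_cr0() | 0x40000000;		/* set CD, leave NW clear */
	wbinvd();
	write_cr0(cr0);
	wbinvd();

	/* Only then clear Page Global Enable (cr4 bit 7), so global
	   TLB entries can actually be flushed. */
	if (cpu_has_pge) {
		cr4 = read_cr4();
		write_cr4(cr4 & ~(1UL << 7));
	}

	/* Explicit TLB flush: __flush_tlb() reloads cr3
	   (mov %cr3, %reg; mov %reg, %cr3). */
	__flush_tlb();

	/* Save the default memory type, then program the new
	   fixed/variable ranges (details elided in this sketch). */
	rdmsr(MTRRdefType_MSR, deftype_lo, deftype_hi);
	/* ... write the MTRR ranges here ... */

	/* Flush caches and TLBs again, re-enable the MTRRs and
	   restore cr4/cr0 (assumed inverse of the setup above). */
	wbinvd();
	__flush_tlb();
	wrmsr(MTRRdefType_MSR, deftype_lo, deftype_hi);
	write_cr0(read_cr0() & ~0x40000000UL);	/* clear CD */
	if (cpu_has_pge)
		write_cr4(cr4);

	spin_unlock(&set_atomicity_lock);
}

The point of the ordering, as the manual describes it, is that the caches
are already in no-fill mode before PGE is touched, and the TLB flush is an
explicit cr3 reload rather than a side effect of wbinvd.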