From: john stultz

This patch tries to resolve issues caused by running the TSC-based lost tick compensation code on CPUs that change frequency (SpeedStep, etc). Should the CPU be in slow mode when calibrate_tsc() executes, the kernel will calibrate too few cycles per tick. Later, when the CPU speeds up, the kernel will start noting that too many cycles have passed since the last interrupt. Since lost ticks can legitimately occur, the lost tick compensation code then tries to fix this by incrementing jiffies. Thus every tick we end up incrementing jiffies many times, causing timers to expire too quickly and time to rush ahead.

This patch detects when there have been 100 consecutive interrupts where we had to compensate for lost ticks. If this occurs, we spit out a warning and fall back to using the PIT as a time source. I've tested this on my SpeedStep-enabled laptop with success, and other laptop users seeing this problem have reported it works for them. Also, to ensure we don't fall back to the slower PIT too quickly, I tested the code on a system I have that loses ~30 ticks about every second, and it can still manage to use the TSC as a good time source.
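For illustration, here is a minimal standalone sketch of the heuristic that compiles in userspace. The names here (mark_offset, LOST_TICK_LIMIT, use_tsc) are made up for the example and are not the kernel's; only the counter/fallback logic mirrors what mark_offset_tsc() does in the patch below.

/*
 * Userspace sketch of the lost-tick sanity check. Hypothetical names;
 * only the counter/fallback logic matches the patch.
 */
#include <stdio.h>

#define LOST_TICK_LIMIT 100	/* consecutive compensations tolerated */

static int lost_count;		/* consecutive interrupts with lost ticks */
static int use_tsc = 1;		/* 1 = TSC timesource, 0 = PIT fallback */

/* Called once per timer interrupt with the number of ticks that
 * appear to have elapsed since the previous interrupt. */
static void mark_offset(int lost)
{
	if (lost >= 2) {
		/* Compensated for lost ticks; remember that we had to. */
		if (lost_count++ > LOST_TICK_LIMIT && use_tsc) {
			printf("Losing too many ticks, falling back to PIT\n");
			use_tsc = 0;	/* the patch calls clock_fallback() here */
		}
	} else {
		/* A clean interrupt resets the streak, so a machine that
		 * only loses ticks now and then keeps using the TSC. */
		lost_count = 0;
	}
}

int main(void)
{
	int i;

	/* Occasional lost ticks: the counter keeps resetting, TSC survives. */
	for (i = 0; i < 300; i++)
		mark_offset(i % 10 == 0 ? 3 : 1);
	printf("occasional losses -> use_tsc = %d\n", use_tsc);

	/* Miscalibrated TSC: every interrupt looks short, so after ~100
	 * consecutive compensations we fall back. */
	for (i = 0; i < 300; i++)
		mark_offset(5);
	printf("constant losses   -> use_tsc = %d\n", use_tsc);
	return 0;
}

The else branch is what keeps a box that merely drops ticks now and then (like the ~30 ticks/second machine mentioned above) on the TSC: the streak counter resets on every clean interrupt, so only a persistent miscalibration trips the fallback.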
 arch/i386/kernel/timers/timer.c     |   10 ++++++++++
 arch/i386/kernel/timers/timer_tsc.c |   14 +++++++++++++-
 include/asm-i386/timer.h            |    5 +++++
 3 files changed, 28 insertions(+), 1 deletion(-)

diff -puN arch/i386/kernel/timers/timer.c~lost-tick-speedstep-fix arch/i386/kernel/timers/timer.c
--- 25/arch/i386/kernel/timers/timer.c~lost-tick-speedstep-fix	2003-06-23 21:58:53.000000000 -0700
+++ 25-akpm/arch/i386/kernel/timers/timer.c	2003-06-23 22:10:00.000000000 -0700
@@ -29,6 +29,16 @@ static int __init clock_setup(char* str)
 }
 __setup("clock=", clock_setup);
 
+
+/*
+ * The chosen timesource has been found to be bad. Fall back to a known good
+ * timesource (the PIT)
+ */
+void clock_fallback(void)
+{
+	timer = &timer_pit;
+}
+
 /* iterates through the list of timers, returning the first
  * one that initializes successfully.
  */
diff -puN arch/i386/kernel/timers/timer_tsc.c~lost-tick-speedstep-fix arch/i386/kernel/timers/timer_tsc.c
--- 25/arch/i386/kernel/timers/timer_tsc.c~lost-tick-speedstep-fix	2003-06-23 21:58:53.000000000 -0700
+++ 25-akpm/arch/i386/kernel/timers/timer_tsc.c	2003-06-23 22:05:52.000000000 -0700
@@ -124,6 +124,7 @@ static void mark_offset_tsc(void)
 	int countmp;
 	static int count1 = 0;
 	unsigned long long this_offset, last_offset;
+	static int lost_count = 0;
 
 	write_lock(&monotonic_lock);
 	last_offset = ((unsigned long long)last_tsc_high<<32)|last_tsc_low;
@@ -178,9 +179,20 @@ static void mark_offset_tsc(void)
 	delta += delay_at_last_interrupt;
 	lost = delta/(1000000/HZ);
 	delay = delta%(1000000/HZ);
-	if (lost >= 2)
+	if (lost >= 2) {
 		jiffies += lost-1;
+		/* sanity check to ensure we're not always losing ticks */
+		if (lost_count++ > 100) {
+			printk(KERN_WARNING "Losing too many ticks!\n");
+			printk(KERN_WARNING "TSC cannot be used as a "
+				"timesource. (Are you using SpeedStep?)\n");
+			printk(KERN_WARNING "Falling back to a sane "
+				"timesource.\n");
+			clock_fallback();
+		}
+	} else
+		lost_count = 0;
 
 	/* update the monotonic base value */
 	this_offset = ((unsigned long long)last_tsc_high<<32)|last_tsc_low;
 	monotonic_base += cycles_2_ns(this_offset - last_offset);
diff -puN include/asm-i386/timer.h~lost-tick-speedstep-fix include/asm-i386/timer.h
--- 25/include/asm-i386/timer.h~lost-tick-speedstep-fix	2003-06-23 22:06:49.000000000 -0700
+++ 25-akpm/include/asm-i386/timer.h	2003-06-23 22:10:18.000000000 -0700
@@ -25,4 +25,9 @@ extern struct timer_opts* select_timer(v
 
 /* Modifiers for buggy PIT handling */
 extern int pit_latch_buggy;
+
+extern struct timer_opts *timer;
+
+void clock_fallback(void);
+
 #endif
_