author     Andrew Morton <akpm@osdl.org>              2004-05-14 05:49:27 -0700
committer  Linus Torvalds <torvalds@ppc970.osdl.org>  2004-05-14 05:49:27 -0700
commit     1b9407d76c0cb2e46ae7abc53de35fa91626bd7d (patch)
tree       14bf1173e51b21f6154c136b494b7815f0827c21 /kernel
parent     496dc9b420ac2db0e639aca9b668de2a695d3277 (diff)
download   history-1b9407d76c0cb2e46ae7abc53de35fa91626bd7d.tar.gz
[PATCH] Add del_single_shot_timer()
From: Geoff Gustafson <geoff@linux.jf.intel.com>,
"Chen, Kenneth W" <kenneth.w.chen@intel.com>,
Ingo Molnar <mingo@elte.hu>,
me.
The big-SMP guys are seeing high CPU load due to del_timer_sync()'s
inefficiencies.  The callers are fs/aio.c and schedule_timeout().
We note that neither of these callers' timer handlers actually re-adds the
timer - they are single-shot.
So we don't need all that complexity in del_timer_sync() - we can just run
del_timer() and, if that worked, we know the timer is dead.
Add del_singleshot_timer_sync(), export it to modules and use it in AIO and
schedule_timeout().
(these numbers are for an earlier patch, but they'll be close)

                  32p      4p
  Before:
    Warm cache 29,000     505
    Cold cache 37,800   1,220

  After:
    Warm cache     95      88
    Cold cache  1,800     140

[Measurements are CPU cycles spent in a call to del_timer_sync(), averaged
over 1000 calls.  32p is 16-node NUMA; 4p is SMP.]
(I cleaned up a few things and added some commentary)
Diffstat (limited to 'kernel')

 kernel/timer.c | 42 ++++++++++++++++++++++++++++++++++++++----
 1 file changed, 38 insertions(+), 4 deletions(-)
diff --git a/kernel/timer.c b/kernel/timer.c
index 5bc2d78ba903a..d28aecec0be12 100644
--- a/kernel/timer.c
+++ b/kernel/timer.c
@@ -317,10 +317,16 @@ EXPORT_SYMBOL(del_timer);
  *
  * Synchronization rules: callers must prevent restarting of the timer,
  * otherwise this function is meaningless. It must not be called from
- * interrupt contexts. Upon exit the timer is not queued and the handler
- * is not running on any CPU.
+ * interrupt contexts. The caller must not hold locks which would prevent
+ * completion of the timer's handler. Upon exit the timer is not queued and
+ * the handler is not running on any CPU.
  *
  * The function returns whether it has deactivated a pending timer or not.
+ *
+ * del_timer_sync() is slow and complicated because it copes with timer
+ * handlers which re-arm the timer (periodic timers). If the timer handler
+ * is known to not do this (a single shot timer) then use
+ * del_singleshot_timer_sync() instead.
  */
 int del_timer_sync(struct timer_list *timer)
 {
@@ -348,8 +354,36 @@ del_again:

 	return ret;
 }
-
 EXPORT_SYMBOL(del_timer_sync);
+
+/***
+ * del_singleshot_timer_sync - deactivate a non-recursive timer
+ * @timer: the timer to be deactivated
+ *
+ * This function is an optimization of del_timer_sync for the case where the
+ * caller can guarantee the timer does not reschedule itself in its timer
+ * function.
+ *
+ * Synchronization rules: callers must prevent restarting of the timer,
+ * otherwise this function is meaningless. It must not be called from
+ * interrupt contexts. The caller must not hold locks which would prevent
+ * completion of the timer's handler. Upon exit the timer is not queued and
+ * the handler is not running on any CPU.
+ *
+ * The function returns whether it has deactivated a pending timer or not.
+ */
+int del_singleshot_timer_sync(struct timer_list *timer)
+{
+	int ret = del_timer(timer);
+
+	if (!ret) {
+		ret = del_timer_sync(timer);
+		BUG_ON(ret);
+	}
+
+	return ret;
+}
+EXPORT_SYMBOL(del_singleshot_timer_sync);
 #endif

 static int cascade(tvec_base_t *base, tvec_t *tv, int index)
@@ -1109,7 +1143,7 @@ fastcall signed long __sched schedule_timeout(signed long timeout)
 	add_timer(&timer);
 	schedule();

-	del_timer_sync(&timer);
+	del_singleshot_timer_sync(&timer);

 	timeout = expire - jiffies;