| field | value | date |
|---|---|---|
| author | Ingo Molnar <mingo@elte.hu> | 2005-01-07 21:49:19 -0800 |
| committer | Linus Torvalds <torvalds@evo.osdl.org> | 2005-01-07 21:49:19 -0800 |
| commit | 3365d1671c8f5f1ede7a07dcc632e70a385f27ad (patch) | |
| tree | 29f45f5f9e71f435215172663ced84b8049e15bd /kernel | |
| parent | 38e387ee01e5a57cd3ed84062930997b87fa3896 (diff) | |
| download | history-3365d1671c8f5f1ede7a07dcc632e70a385f27ad.tar.gz | |
[PATCH] preempt cleanup
This is another generic fallout from the voluntary-preempt patchset: a
cleanup of the cond_resched() infrastructure, in preparation for the
latency-reduction patches. The changes:
- uninline cond_resched() - this makes the footprint smaller,
  especially once the number of cond_resched() points increases.
- add a 'was rescheduled' return value to cond_resched(). This makes it
  symmetric to cond_resched_lock(); the later latency-reduction patches
  rely on being able to tell whether any preemption occurred.
- make cond_resched() more robust by using the same mechanism as
  preempt_schedule(): PREEMPT_ACTIVE. This preserves the task's
  state - e.g. if the task is in TASK_ZOMBIE but gets preempted via
  cond_resched() just prior to scheduling off, then this approach
  preserves TASK_ZOMBIE.
- the patch also adds need_lockbreak(), which critical sections can use
  to detect lock-break requests.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Diffstat (limited to 'kernel')
-rw-r--r-- kernel/sched.c | 23
1 file changed, 17 insertions, 6 deletions
```diff
diff --git a/kernel/sched.c b/kernel/sched.c
index 1a05ab700f2330..a55af06f9976b1 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -3433,13 +3433,25 @@ asmlinkage long sys_sched_yield(void)
 	return 0;
 }
 
-void __sched __cond_resched(void)
+static inline void __cond_resched(void)
 {
-	set_current_state(TASK_RUNNING);
-	schedule();
+	do {
+		preempt_count() += PREEMPT_ACTIVE;
+		schedule();
+		preempt_count() -= PREEMPT_ACTIVE;
+	} while (need_resched());
+}
+
+int __sched cond_resched(void)
+{
+	if (need_resched()) {
+		__cond_resched();
+		return 1;
+	}
+	return 0;
 }
 
-EXPORT_SYMBOL(__cond_resched);
+EXPORT_SYMBOL(cond_resched);
 
 /*
  * cond_resched_lock() - if a reschedule is pending, drop the given lock,
@@ -3462,8 +3474,7 @@ int cond_resched_lock(spinlock_t * lock)
 	if (need_resched()) {
 		_raw_spin_unlock(lock);
 		preempt_enable_no_resched();
-		set_current_state(TASK_RUNNING);
-		schedule();
+		__cond_resched();
 		spin_lock(lock);
 		return 1;
 	}
```