author	Andrew Morton <akpm@osdl.org>	2004-05-09 23:29:30 -0700
committer	Linus Torvalds <torvalds@ppc970.osdl.org>	2004-05-09 23:29:30 -0700
commit	85841fc043bfac3ebecd9d3181b496a2dadc1283 (patch)
tree	0011bd7e8b5f1047bf24815b00e159a531aeed69 /kernel
parent	8c8cfc36d9ec9e9cd6a440fd7bf8b5404bd11635 (diff)
download	history-85841fc043bfac3ebecd9d3181b496a2dadc1283.tar.gz
[PATCH] sched: reduce idle time
From: Nick Piggin <nickpiggin@yahoo.com.au>

It makes NEWLY_IDLE balances cause find_busiest_group() to return the
busiest available group even if there isn't an imbalance.  Basically -
try a bit harder to prevent schedule() emptying the runqueue.

It is quite aggressive, but that isn't so bad because we don't (by
default) do NEWLY_IDLE balancing across NUMA nodes, and NEWLY_IDLE
balancing is always restricted to cache_hot tasks.

It picked up a little bit of idle time that dbt2-pgsql was seeing...
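For illustration, a minimal user-space sketch of the new out_balanced
fallback (a hedged sketch, not kernel code: struct group, pick_busiest(),
the enum ordering and the load numbers are made-up stand-ins; only the
condition itself mirrors the patch below):

/*
 * Sketch of the patched fallback in find_busiest_group().  Before the
 * patch, any non-NOT_IDLE balance also required max_load to exceed
 * SCHED_LOAD_SCALE; after it, a NEWLY_IDLE balance takes the busiest
 * group unconditionally and moves one task to keep the CPU busy.
 */
#include <stdio.h>
#include <stddef.h>

#define SCHED_LOAD_SCALE 128UL	/* illustrative stand-in value */

enum idle_type { NOT_IDLE, IDLE, NEWLY_IDLE };

struct group { const char *name; unsigned long load; };

/* Returns the group to pull from, or NULL when we stay balanced. */
static struct group *pick_busiest(struct group *busiest, enum idle_type idle,
				  unsigned long max_load,
				  unsigned long *imbalance)
{
	if (busiest && (idle == NEWLY_IDLE ||
			(idle == IDLE && max_load > SCHED_LOAD_SCALE))) {
		*imbalance = 1;		/* move a single task */
		return busiest;
	}
	return NULL;
}

int main(void)
{
	struct group g = { "node0", 100 };	/* below SCHED_LOAD_SCALE */
	unsigned long imbalance = 0;

	/* IDLE balance: 100 <= 128, so no group is returned. */
	printf("IDLE:       %s\n",
	       pick_busiest(&g, IDLE, g.load, &imbalance) ? g.name : "(balanced)");
	/* NEWLY_IDLE balance: pulls from the busiest group anyway. */
	printf("NEWLY_IDLE: %s\n",
	       pick_busiest(&g, NEWLY_IDLE, g.load, &imbalance) ? g.name : "(balanced)");
	return 0;
}

With these stand-in numbers the IDLE case stays balanced while the
NEWLY_IDLE case pulls from the group anyway - exactly the extra
aggression the message describes.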
Diffstat (limited to 'kernel')
-rw-r--r--	kernel/sched.c	3
1 files changed, 2 insertions, 1 deletions
diff --git a/kernel/sched.c b/kernel/sched.c
index e1d1eebf840f10..cf210a3f8c8e81 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -1630,7 +1630,8 @@ nextgroup:
 	return busiest;
 
 out_balanced:
-	if (busiest && idle != NOT_IDLE && max_load > SCHED_LOAD_SCALE) {
+	if (busiest && (idle == NEWLY_IDLE ||
+			(idle == IDLE && max_load > SCHED_LOAD_SCALE)) ) {
 		*imbalance = 1;
 		return busiest;
 	}