author      Andrew Morton <akpm@osdl.org>                2004-05-14 20:13:06 -0700
committer   Linus Torvalds <torvalds@ppc970.osdl.org>    2004-05-14 20:13:06 -0700
commit      632eb5ca925ff5f830ed85e45c12b2a22bb8e1d9 (patch)
tree        ee6b3bab0a7b37aba3755a0ebf01e6978ea305d6 /kernel
parent      44069c37b6e110092b1836fc50c64374b4f5129e (diff)
download    history-632eb5ca925ff5f830ed85e45c12b2a22bb8e1d9.tar.gz
[PATCH] sched: less locking in balancing
From: Nick Piggin <nickpiggin@yahoo.com.au>
Analysis and basic idea from Suresh Siddha <suresh.b.siddha@intel.com>
"This small change in load_balance() brings the performance back upto base
scheduler(infact I see a ~1.5% performance improvement now). Basically
this fix removes the unnecessary double_lock.."
Workload is SpecJBB on 16-way Altix.
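
For context on what the quoted double_lock refers to: double_lock_balance() must end up
holding both this_rq->lock (already held by the caller) and busiest->lock without risking
an ABBA deadlock against another CPU balancing in the opposite direction. The sketch below
is an approximation of the 2.6-era helper, written as an illustration rather than copied
from this tree, so the details should be checked against the actual kernel/sched.c:

/*
 * Illustrative sketch (not taken verbatim from this tree) of the
 * 2.6-era double_lock_balance(): entered with this_rq->lock held,
 * it must also acquire busiest->lock.  If the trylock fails, it
 * drops this_rq->lock and retakes both in address order so that two
 * CPUs balancing toward each other cannot deadlock.
 */
static inline void double_lock_balance(runqueue_t *this_rq, runqueue_t *busiest)
{
	if (unlikely(!spin_trylock(&busiest->lock))) {
		if (busiest < this_rq) {
			spin_unlock(&this_rq->lock);
			spin_lock(&busiest->lock);
			spin_lock(&this_rq->lock);
		} else
			spin_lock(&busiest->lock);
	}
}

Skipping this call when busiest has at most one runnable task avoids touching the remote
runqueue's lock at all on that path, which is where the measured gain comes from.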
Diffstat (limited to 'kernel')
-rw-r--r--   kernel/sched.c   18
1 file changed, 13 insertions, 5 deletions
diff --git a/kernel/sched.c b/kernel/sched.c
index 3e94e3db459566..58f87452217913 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -1685,12 +1685,20 @@ static int load_balance(int this_cpu, runqueue_t *this_rq,
 		goto out_balanced;
 	}
 
-	/* Attempt to move tasks */
-	double_lock_balance(this_rq, busiest);
-
-	nr_moved = move_tasks(this_rq, this_cpu, busiest, imbalance, sd, idle);
+	nr_moved = 0;
+	if (busiest->nr_running > 1) {
+		/*
+		 * Attempt to move tasks. If find_busiest_group has found
+		 * an imbalance but busiest->nr_running <= 1, the group is
+		 * still unbalanced. nr_moved simply stays zero, so it is
+		 * correctly treated as an imbalance.
+		 */
+		double_lock_balance(this_rq, busiest);
+		nr_moved = move_tasks(this_rq, this_cpu, busiest,
+					imbalance, sd, idle);
+		spin_unlock(&busiest->lock);
+	}
 	spin_unlock(&this_rq->lock);
-	spin_unlock(&busiest->lock);
 
 	if (!nr_moved) {
 		sd->nr_balance_failed++;
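
Read straight through rather than as a hunk, the balancing section of load_balance() after
this patch becomes roughly the following (reconstructed from the diff above; the rest of
the function is elided):

	nr_moved = 0;
	if (busiest->nr_running > 1) {
		/*
		 * Attempt to move tasks. If find_busiest_group has found
		 * an imbalance but busiest->nr_running <= 1, the group is
		 * still unbalanced. nr_moved simply stays zero, so it is
		 * correctly treated as an imbalance.
		 */
		double_lock_balance(this_rq, busiest);
		nr_moved = move_tasks(this_rq, this_cpu, busiest,
					imbalance, sd, idle);
		spin_unlock(&busiest->lock);
	}
	spin_unlock(&this_rq->lock);

The busiest runqueue's lock is now taken only when there is actually more than one task
available to pull, which is the double locking the quoted analysis identifies as
unnecessary in the common case.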