Against 2.4.19-pre7-ac2.  145, 165, and 175 are against 2.4.19-pre7-ac4.
166, 185, and 190 are against 2.4.19-pre8-ac5.  200, 210, 220, and 230
are against 2.4.20-pre5-ac4.

110-remove-wake-up-sync.patch
    We no longer need sync wakeups, as the load balancer handles the
    case fine.  Remove wake_up_sync and friends and the sync flag in
    the __wake_up method.

120-need_resched-abstraction.patch
    Abstract away access to need_resched into set_need_resched, etc.

130-frozen-lock.patch
    Fix a scheduler deadlock on some platforms.  I'll let DaveM (the
    author) explain:

    Some platforms need to grab mm->page_table_lock during
    switch_mm().  On the other hand, code like swap_out() in
    mm/vmscan.c needs to hold mm->page_table_lock during wakeups,
    which needs to grab the runqueue lock.  This creates a conflict,
    and the resolution chosen here is to not hold the runqueue lock
    during context_switch().  The implementation is specifically a
    "frozen" state implemented as a spinlock, which is held around
    the context_switch() call.  This allows the runqueue lock to be
    dropped during this time yet prevents another cpu from running
    the "not switched away from yet" task.

140-sched_yield.patch
    Optimize sched_yield.

145-more-sched_yield.patch
    More abstractions to yield().

150-need_resched-check.patch
    A new task can become runnable during schedule().  We always want
    to return from the scheduler with the highest-priority task
    running, so we should check need_resched before returning to see
    if we should rerun ourselves through schedule().  This used to be
    in the scheduler but was removed and then readded.

160-maxrtprio-1.patch
    Clean up assumptions about the value of MAX_RT_PRIO.  No change
    to object code; just replace magic numbers with defines.

165-maxrtprio.patch
    Separate the notion of "maximum real-time priority" from "maximum
    user-space real-time priority" via the MAX_RT_PRIO and
    MAX_USER_RT_PRIO defines.
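The MAX_RT_PRIO vs. MAX_USER_RT_PRIO split can be sketched roughly as
below.  This is an illustration only, not the patch itself: the values
and the rt_task_prio() helper are hypothetical stand-ins for the real
defines in sched.h.

```c
#include <assert.h>

/* Hypothetical values mirroring the split described above.  User
 * space can request real-time priorities only below
 * MAX_USER_RT_PRIO; the kernel's internal ceiling, MAX_RT_PRIO, may
 * now be raised independently. */
#define MAX_USER_RT_PRIO    100
#define MAX_RT_PRIO         MAX_USER_RT_PRIO

/* The non-RT range (the 40 nice levels) sits above the RT range. */
#define MAX_PRIO            (MAX_RT_PRIO + 40)

/* A priority denotes a real-time task iff it falls below
 * MAX_RT_PRIO. */
static int rt_task_prio(int prio)
{
    return prio < MAX_RT_PRIO;
}
```

With the two defines separated, code that validates user-supplied
priorities can test against MAX_USER_RT_PRIO while array sizing and
rt_task()-style checks use MAX_RT_PRIO.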
166-maxrtprio.patch
    Further clean up the code and move the defines to sched.h.

170-migration_thread.patch
    Backport of the migration_thread migration code from 2.5.  This
    includes my interrupt-off bugfix and wli's new migration_init
    code.  The migration_thread code allows arch-independent task
    migration via set_cpus_allowed and allows the creation of things
    like task CPU affinity interfaces.

175-updated_migration_init.patch
    Rewrite of migration_init using Erich Focht's simpler method of
    using the initial migration_thread to migrate any future threads.
    Also includes a fix for arches where logical != physical CPU
    mapping.

180-misc-stuff.patch
    Lots of misc stuff, almost entirely invariant and trivial
    cleanups.  Specifically:
      - rename lock_task_rq -> task_rq_lock
      - rename unlock_task_rq -> task_rq_unlock
      - clean up lock_task_rq
      - list_del_init -> list_del fix in dequeue_task
      - comment cleanups and additions
      - load_balance fixes and cleanups
      - simple optimization (rt_task -> policy != SCHED_OTHER)

185-more-misc-stuff.patch
    More misc. cleanups and improvements:
      - Move sched_find_first_bit from mmu_context.h to bitops.h, as
        in 2.5.  Why it was ever in mmu_context.h is beyond me.
      - Remove the RUN_CHILD_FIRST cruft from kernel/fork.c.  It is
        pretty clear this works great; we do not need the ifdefs.
      - Add comments to the top of kernel/sched.c to briefly explain
        the new scheduler design, give credit, and update the
        copyright.
      - set_cpus_allowed optimization from Mike Kravetz: we do not
        need to invoke the migration_threads if the task is not
        running; just update task->cpu.

190-documentation.patch
    Add sched-coding.txt, a description of scheduler methods and
    locking rules, and sched-design.txt, Ingo's original lkml email
    detailing the goals, design, and implementation of the scheduler.

200-sched-yield.patch
    Fix sched_yield for good.  Seriously.

210-sched-comments.patch
    Glorious comments everywhere.

220-task_cpu.patch
    "set_task_cpu()" and "task_cpu()" abstraction.

230-sched-misc.patch
    Misc. and trivial cleanups.
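The set_task_cpu()/task_cpu() abstraction from 220 can be sketched as
follows.  This is a simplified illustration, not the kernel code: the
struct and field shown are stand-ins, and the point is only that all
readers and writers of a task's CPU assignment go through accessors
rather than poking a field directly.

```c
#include <assert.h>

/* Hypothetical stand-in for the kernel's task_struct; only the field
 * relevant to the abstraction is shown. */
struct task_struct {
    unsigned int cpu;   /* CPU this task is assigned to */
};

/* Read which CPU a task is on. */
static inline unsigned int task_cpu(const struct task_struct *p)
{
    return p->cpu;
}

/* Assign a task to a CPU.  Centralizing the write here lets later
 * code (e.g. migration or per-arch layouts) change the storage
 * without touching every caller. */
static inline void set_task_cpu(struct task_struct *p, unsigned int cpu)
{
    p->cpu = cpu;
}
```

Hiding the field behind accessors is what makes cleanups like the
set_cpus_allowed optimization in 185 (just update the task's CPU when
it is not running) a one-line change at the call site.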