Because keventd is a resource which is shared between unrelated parts of the
kernel, it is possible for one person's workqueue handler to accidentally call
another person's flush_scheduled_work().  thockin managed it by calling
mntput() from a workqueue handler.  It deadlocks.

It's simple enough to fix: teach flush_scheduled_work() to go direct when it
discovers that the calling thread is the one which should be running the work.

Note that this can cause recursion.  The depth of that recursion is equal to
the number of currently-queued works which themselves want to call
flush_scheduled_work().  If this ever exceeds three I'll eat my hat.

---

 25-akpm/kernel/workqueue.c |    8 ++++++++
 1 files changed, 8 insertions(+)

diff -puN kernel/workqueue.c~flush_scheduled_work-deadlock-fix kernel/workqueue.c
--- 25/kernel/workqueue.c~flush_scheduled_work-deadlock-fix	Fri Mar 12 15:38:14 2004
+++ 25-akpm/kernel/workqueue.c	Fri Mar 12 15:38:14 2004
@@ -229,6 +229,14 @@ void fastcall flush_workqueue(struct wor
 			continue;
 		cwq = wq->cpu_wq + cpu;
+		if (cwq->thread == current) {
+			/*
+			 * Probably keventd trying to flush its own queue.
+			 * So simply run it by hand rather than deadlocking.
+			 */
+			run_workqueue(cwq);
+			continue;
+		}
 		spin_lock_irq(&cwq->lock);
 		sequence_needed = cwq->insert_sequence;
_
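
For illustration only, here is a minimal userspace sketch of the same idea
using POSIX threads.  It is not the kernel implementation: the names
(workqueue, wq_schedule, wq_flush, wq_run) are made up for this example.  A
single worker thread drains a FIFO of work items; flush normally waits for
the queue to empty, but when called from the worker thread itself it runs
the pending work directly, exactly the "go direct" check the patch adds.

```c
/*
 * Userspace analogue of the flush_scheduled_work() self-flush fix.
 * All identifiers here are illustrative, not kernel API.
 */
#include <pthread.h>
#include <stdlib.h>

struct work {
	void (*fn)(void *);
	void *arg;
	struct work *next;
};

struct workqueue {
	pthread_mutex_t lock;
	pthread_cond_t more;	/* signalled when work is queued */
	pthread_cond_t done;	/* signalled when the queue drains */
	struct work *head, **tail;
	int pending;		/* queued plus currently-running items */
	pthread_t thread;
};

/* Pop one item; caller holds wq->lock.  Returns NULL if empty. */
static struct work *wq_pop(struct workqueue *wq)
{
	struct work *w = wq->head;
	if (w) {
		wq->head = w->next;
		if (!wq->head)
			wq->tail = &wq->head;
	}
	return w;
}

/* Run queued items until the queue is empty (cf. run_workqueue()). */
static void wq_run(struct workqueue *wq)
{
	pthread_mutex_lock(&wq->lock);
	for (;;) {
		struct work *w = wq_pop(wq);
		if (!w)
			break;
		pthread_mutex_unlock(&wq->lock);
		w->fn(w->arg);	/* run without the lock held */
		free(w);
		pthread_mutex_lock(&wq->lock);
		if (--wq->pending == 0)
			pthread_cond_broadcast(&wq->done);
	}
	pthread_mutex_unlock(&wq->lock);
}

static void *wq_worker(void *arg)
{
	struct workqueue *wq = arg;
	for (;;) {
		pthread_mutex_lock(&wq->lock);
		while (!wq->head)
			pthread_cond_wait(&wq->more, &wq->lock);
		pthread_mutex_unlock(&wq->lock);
		wq_run(wq);
	}
	return NULL;
}

void wq_schedule(struct workqueue *wq, void (*fn)(void *), void *arg)
{
	struct work *w = malloc(sizeof(*w));
	w->fn = fn;
	w->arg = arg;
	w->next = NULL;
	pthread_mutex_lock(&wq->lock);
	*wq->tail = w;
	wq->tail = &w->next;
	wq->pending++;
	pthread_cond_signal(&wq->more);
	pthread_mutex_unlock(&wq->lock);
}

void wq_flush(struct workqueue *wq)
{
	if (pthread_equal(pthread_self(), wq->thread)) {
		/*
		 * We ARE the worker: waiting for ourselves would
		 * deadlock, so run the work by hand instead —
		 * the cwq->thread == current check from the patch.
		 */
		wq_run(wq);
		return;
	}
	pthread_mutex_lock(&wq->lock);
	while (wq->pending)
		pthread_cond_wait(&wq->done, &wq->lock);
	pthread_mutex_unlock(&wq->lock);
}

void wq_init(struct workqueue *wq)
{
	pthread_mutex_init(&wq->lock, NULL);
	pthread_cond_init(&wq->more, NULL);
	pthread_cond_init(&wq->done, NULL);
	wq->head = NULL;
	wq->tail = &wq->head;
	wq->pending = 0;
	pthread_create(&wq->thread, NULL, wq_worker, wq);
}
```

With the pthread_equal() branch removed, a handler that calls wq_flush()
blocks forever on `done` waiting for itself, which is precisely the mntput()
deadlock described above.  The recursion noted in the changelog shows up
here too: each handler that flushes re-enters wq_run() one level deeper.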