author    Andrea Arcangeli <aarcange@redhat.com>  2020-02-03 12:31:56 -0500
committer Andrea Arcangeli <aarcange@redhat.com>  2023-11-11 22:03:37 -0500
commit    e9720aeb58674c615f9aa3c4edb96b6777b7748b (patch)
tree      33d430bb7b05c9551d12457aacbbac7153dc7d11
parent    4ea2a9ff98ac80bc096728c922ccf70c6d2ff7a1 (diff)
mm: use_mm: fix for arches checking mm_users to optimize TLB flushes
alpha, ia64, mips, powerpc, sh and sparc rely on a check of mm->mm_users to know whether they can skip some remote TLB flushes for single threaded processes.

Most callers of kthread_use_mm() invoke mmget_not_zero() or get_task_mm() beforehand to ensure the mm stays alive between kthread_use_mm() and kthread_unuse_mm(). Some callers, however, don't increase mm_users and instead rely on the serialization in __mmput() to keep the mm alive across that window. Not increasing mm_users during kthread_use_mm() is unsafe with the aforementioned arch TLB flush optimizations: mm_users stays at 1, so the process still looks single threaded while the kthread operates on its mm, and remote TLB flushes that the kthread's CPU depends on may be skipped.

So either mmget()/mmput() has to be added to the problematic callers of kthread_use_mm()/kthread_unuse_mm(), or we can embed them in kthread_use_mm()/kthread_unuse_mm() themselves, which is more robust.

Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
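Illustration (not part of the patch): a minimal sketch of the two caller patterns the message describes. The example_use_*() functions and the user-memory access placeholders are hypothetical; kthread_use_mm(), kthread_unuse_mm(), mmget_not_zero() and mmput() are the real kernel APIs discussed above, and the mm_users check in the comment is a paraphrase of the arch-side optimization, not code from any one arch.

	#include <linux/kthread.h>
	#include <linux/sched/mm.h>

	/* Safe pattern: pin mm_users for the whole use_mm window. */
	static void example_use_safe(struct mm_struct *mm)
	{
		if (!mmget_not_zero(mm))	/* mm already exiting, bail out */
			return;
		kthread_use_mm(mm);
		/* ... access user memory through mm ... */
		kthread_unuse_mm(mm);
		mmput(mm);
	}

	/*
	 * Unsafe pattern before this fix: mm_users is never elevated, so
	 * an otherwise single threaded process still shows mm_users == 1
	 * while this kthread operates on its mm, and arch code shaped
	 * like the (paraphrased) check below may skip remote TLB flushes
	 * that the kthread's CPU actually needs:
	 *
	 *	if (atomic_read(&mm->mm_users) == 1 && current->mm == mm)
	 *		local_flush_only();
	 */
	static void example_use_unsafe(struct mm_struct *mm)
	{
		kthread_use_mm(mm);	/* relied only on __mmput() serialization */
		/* ... access user memory through mm ... */
		kthread_unuse_mm(mm);
	}

With the patch applied, kthread_use_mm() itself takes the mmget() reference and kthread_unuse_mm() drops it, so the second pattern becomes as safe as the first.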
-rw-r--r--	kernel/kthread.c	3
1 file changed, 3 insertions(+), 0 deletions(-)
diff --git a/kernel/kthread.c b/kernel/kthread.c
index e319a1b62586e9..22641a70cc0313 100644
--- a/kernel/kthread.c
+++ b/kernel/kthread.c
@@ -1359,6 +1359,7 @@ void kthread_use_mm(struct mm_struct *mm)
 		mmgrab(mm);
 		tsk->active_mm = mm;
 	}
+	mmget(mm);
 	tsk->mm = mm;
 	membarrier_update_current_mm(mm);
 	switch_mm_irqs_off(active_mm, mm, tsk);
@@ -1399,6 +1400,8 @@ void kthread_unuse_mm(struct mm_struct *mm)
 	force_uaccess_end(to_kthread(tsk)->oldfs);
+	mmput(mm);
+
 	task_lock(tsk);
 	/*
 	 * When a kthread stops operating on an address space, the loop