author	Hugh Dickins <hugh@veritas.com>	2004-11-18 22:54:47 -0800
committer	Linus Torvalds <torvalds@ppc970.osdl.org>	2004-11-18 22:54:47 -0800
commit	b6de40561faafcc3f44e1acfa942c46ac8d5ad6e (patch)
tree	bbf68b8a377c7cc4c604e3df37756ac69a4ac8e3 /mm
parent	2f8b7fcfca8b477d91d216f79625cdb3aa92ec5c (diff)
download	history-b6de40561faafcc3f44e1acfa942c46ac8d5ad6e.tar.gz
[PATCH] mlock-vs-VM_IO hang fix
With Andrea Arcangeli <andrea@novell.com>

Fix a hang which occurs when mlock() encounters a mapping of /dev/mem.  These have VM_IO set.  follow_page() keeps returning zero (not a valid pfn) and handle_mm_fault() keeps on returning VM_FAULT_MINOR (there's a pte there), so get_user_pages() locks up.

The patch changes get_user_pages() to just bale out when it hits a VM_IO region.  make_pages_present() is taught to ignore the resulting -EFAULT.

We still have two bugs:

a) If a process has a VM_IO vma, get_user_pages() will bale early, without having considered the vmas at higher virtual addresses.  As do_mlock() also walks the vma list this bug is fairly benign, but get_user_pages() is doing the wrong thing there.

b) The `len' argument to get_user_pages() should be long, not int.  We presently have a 16TB limit on 64-bit.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
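As a rough userspace reproducer (a sketch, not part of the patch: it assumes root privileges, a pre-fix kernel, and a mappable /dev/mem window such as the legacy VGA hole at offset 0xA0000), the hang described above could be triggered along these lines:

	/* Hypothetical reproducer sketch: mlock() a /dev/mem mapping (VM_IO).
	 * On a kernel without this fix, the mlock() call never returns. */
	#include <fcntl.h>
	#include <stdio.h>
	#include <sys/mman.h>
	#include <unistd.h>

	int main(void)
	{
		int fd = open("/dev/mem", O_RDONLY);
		if (fd < 0) {
			perror("open /dev/mem");
			return 1;
		}
		/* 0xA0000 is assumed to be a mappable legacy I/O window. */
		void *p = mmap(NULL, 4096, PROT_READ, MAP_SHARED, fd, 0xA0000);
		if (p == MAP_FAILED) {
			perror("mmap");
			return 1;
		}
		/* Before the fix: get_user_pages() spins and this call hangs.
		 * After the fix: the VM_IO range is skipped and mlock() returns. */
		if (mlock(p, 4096) != 0)
			perror("mlock");
		munlock(p, 4096);
		munmap(p, 4096);
		close(fd);
		return 0;
	}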
Diffstat (limited to 'mm')
-rw-r--r--	mm/memory.c	2
-rw-r--r--	mm/mlock.c	3
2 files changed, 3 insertions(+), 2 deletions(-)
diff --git a/mm/memory.c b/mm/memory.c
index 2c2ff72c118ef5..0a80f55f9f2ad7 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -761,7 +761,7 @@ int get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
 			continue;
 		}
-		if (!vma || (pages && (vma->vm_flags & VM_IO))
+		if (!vma || (vma->vm_flags & VM_IO)
 				|| !(flags & vma->vm_flags))
 			return i ? : -EFAULT;
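For context on the hang described in the changelog, the fault-retry loop inside get_user_pages() of this era had roughly the shape sketched below. This is a heavily simplified reconstruction, not a verbatim excerpt of mm/memory.c; the point is only that a VM_IO pte has no struct page behind it, so neither exit condition is ever reached before this patch.

	/* Simplified sketch of the pre-fix get_user_pages() retry loop
	 * (illustrative reconstruction, not a verbatim excerpt). */
	while (!(page = follow_page(mm, start, write))) {
		/* VM_IO mapping: no valid pfn/struct page behind the pte,
		 * so follow_page() returns zero every time ... */
		switch (handle_mm_fault(mm, vma, start, write)) {
		case VM_FAULT_MINOR:
			/* ... yet the pte is present, so the fault is
			 * "satisfied" trivially and we retry forever. */
			tsk->min_flt++;
			break;
		case VM_FAULT_MAJOR:
			tsk->maj_flt++;
			break;
		default:
			/* error paths (SIGBUS, OOM, ...) bail out */
			return i ? : -EFAULT;
		}
	}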
diff --git a/mm/mlock.c b/mm/mlock.c
index 9cdfcf0036fede..4a00277487bad9 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -47,7 +47,8 @@ static int mlock_fixup(struct vm_area_struct * vma,
 	pages = (end - start) >> PAGE_SHIFT;
 	if (newflags & VM_LOCKED) {
 		pages = -pages;
-		ret = make_pages_present(start, end);
+		if (!(newflags & VM_IO))
+			ret = make_pages_present(start, end);
 	}
 	vma->vm_mm->locked_vm -= pages;
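The mlock.c side matters because make_pages_present() reaches get_user_pages() with pages == NULL, which is why the old `(pages && ...)` guard removed in memory.c above never fired on the mlock path. The helper of this era looked roughly like the sketch below (a simplified reconstruction under that assumption, not a verbatim excerpt):

	/* Approximate shape of make_pages_present() in this era
	 * (illustrative reconstruction, not a verbatim excerpt). */
	int make_pages_present(unsigned long addr, unsigned long end)
	{
		struct vm_area_struct *vma = find_vma(current->mm, addr);
		int write = (vma->vm_flags & VM_WRITE) != 0;
		int len = (end - addr) >> PAGE_SHIFT;
		int ret;

		/* pages == NULL, vmas == NULL: mlock only needs the ptes
		 * populated, not the struct page pointers handed back. */
		ret = get_user_pages(current, current->mm, addr, len,
				     write, 0, NULL, NULL);
		if (ret < 0)
			return ret;
		return ret == len ? 0 : -1;
	}

With get_user_pages() now returning -EFAULT for VM_IO regions, mlock_fixup() simply skips the call for such vmas instead of looping.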