author		Zou Nanhai <nanhai.zou@intel.com>	2004-11-25 00:00:28 -0800
committer	Linus Torvalds <torvalds@ppc970.osdl.org>	2004-11-25 00:00:28 -0800
commit		3b5390826a85bad36012fe78c3052794ae418e54 (patch)
tree		a04d9e6cbded625b86a6ca41e3f1fc8d3152a936 /mm
parent		3b720a8b5ce6be05e3308bc77ae77387e416e20d (diff)
download	history-3b5390826a85bad36012fe78c3052794ae418e54.tar.gz
[PATCH] ia64/x86_64/s390 overlapping vma fix
IA64 is also vulnerable to the huge-vma-in-executable bug in 64-bit ELF support: it inserts a vma for the zero page without checking for overlap, so a user can construct an ELF binary with a section starting at 0x0 to trigger this BUG().  However, I think it is safe to check for overlap before we actually insert a vma into the vma list.  I also feel that checking for vma overlap at every call site is unnecessary, because insert_vm_struct will check it again, so the check would be duplicated.  It is better to have insert_vm_struct return a value and let the caller check whether it succeeded.

Here is a patch against 2.6.10-rc2-mm3.  I have tested it on i386, x86_64 and ia64 machines.

Signed-off-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Zou Nan hai <Nanhai.zou@intel.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
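For illustration only, a minimal sketch of how a caller that previously relied on insert_vm_struct() hitting BUG() on overlap might handle the new return value.  The function name install_zero_page_vma() and the allocation/setup details are assumptions made for this sketch, not part of the patch or of any particular arch's loader.

/*
 * Hedged sketch, not part of this patch: a caller such as an ELF
 * loader setting up a zero-page vma must now check the return value
 * of insert_vm_struct() and back out cleanly on overlap instead of
 * relying on the old BUG().  install_zero_page_vma() and the exact
 * vma setup below are illustrative assumptions.
 */
static int install_zero_page_vma(struct mm_struct *mm)
{
	struct vm_area_struct *vma;
	int ret;

	vma = kmem_cache_alloc(vm_area_cachep, SLAB_KERNEL);
	if (!vma)
		return -ENOMEM;

	memset(vma, 0, sizeof(*vma));
	vma->vm_mm = mm;
	vma->vm_start = 0;
	vma->vm_end = PAGE_SIZE;
	vma->vm_page_prot = PAGE_READONLY;

	down_write(&mm->mmap_sem);
	ret = insert_vm_struct(mm, vma);	/* now returns -ENOMEM on overlap */
	up_write(&mm->mmap_sem);

	if (ret) {
		kmem_cache_free(vm_area_cachep, vma);
		return ret;
	}
	return 0;
}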
Diffstat (limited to 'mm')
-rw-r--r--	mm/mmap.c	5
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/mm/mmap.c b/mm/mmap.c
index b55f5f534b656f..54f6a2f9966ff0 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1871,7 +1871,7 @@ void exit_mmap(struct mm_struct *mm)
* and into the inode's i_mmap tree. If vm_file is non-NULL
* then i_mmap_lock is taken here.
*/
-void insert_vm_struct(struct mm_struct * mm, struct vm_area_struct * vma)
+int insert_vm_struct(struct mm_struct * mm, struct vm_area_struct * vma)
{
struct vm_area_struct * __vma, * prev;
struct rb_node ** rb_link, * rb_parent;
@@ -1894,8 +1894,9 @@ void insert_vm_struct(struct mm_struct * mm, struct vm_area_struct * vma)
}
__vma = find_vma_prepare(mm,vma->vm_start,&prev,&rb_link,&rb_parent);
if (__vma && __vma->vm_start < vma->vm_end)
- BUG();
+ return -ENOMEM;
vma_link(mm, vma, prev, rb_link, rb_parent);
+ return 0;
}
/*