| author | Andrew Morton <akpm@osdl.org> | 2004-05-22 08:01:46 -0700 |
| --- | --- | --- |
| committer | Linus Torvalds <torvalds@ppc970.osdl.org> | 2004-05-22 08:01:46 -0700 |
| commit | 6bccf794bf7f3161152ef4204c13e45c3d0fe372 (patch) | |
| tree | 7eaf39f4e022be9d5a1dc86f650e3257a1f16fa1 /kernel | |
| parent | 123e4df7e093329599a75ad8ad0eed9ebbd9aa27 (diff) | |
| download | history-6bccf794bf7f3161152ef4204c13e45c3d0fe372.tar.gz | |
[PATCH] rmap 10 add anonmm rmap
From: Hugh Dickins <hugh@veritas.com>
Hugh's anonmm object-based reverse mapping scheme for anonymous pages. We
have not yet decided whether to adopt this scheme, or Andrea's more advanced
anon_vma scheme. anonmm is easier for me to merge quickly, to replace the
pte_chain rmap taken out in the previous patch; a patch to install Andrea's
anon_vma will follow in due course.
Why build up and tear down chains of pte pointers for anonymous pages, when a
page can only appear at one particular address, in a restricted group of mms
that might share it? (Except: see next patch on mremap.)
Introduce struct anonmm per mm to track anonymous pages, all forks from one
exec sharing the same bundle of linked anonmms. Anonymous pages originate in
one mm, but may later be forked into another mm of the bundle. fork.c makes
callouts to rmap.c to allocate, dup and exit the anonmm structure, which is
kept private to rmap.c.
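The shape of this bookkeeping can be sketched in userspace C. The field and function names below are assumptions based on the description above, not the patch's actual definitions (those live in mm/rmap.c of this series): all mms forked from one exec share a single head with a count and a list, and each mm carries its own anonmm linked into that list.

```c
#include <stdlib.h>

/* Hypothetical sketch of the anonmm bundle: one shared head per exec,
 * one anonmm per mm, all members linked into the head's list. */
struct list { struct list *next, *prev; };

struct anonhd {                 /* shared head, one per exec "bundle" */
	int count;              /* number of anonmms in the bundle    */
	struct list list;       /* list of member anonmms             */
};

struct anonmm {                 /* private to one mm_struct */
	struct anonhd *head;
	struct list   node;
};

static void list_init(struct list *l) { l->next = l->prev = l; }
static void list_add(struct list *n, struct list *h) {
	n->next = h->next; n->prev = h;
	h->next->prev = n; h->next = n;
}
static void list_del(struct list *n) {
	n->prev->next = n->next; n->next->prev = n->prev;
}

/* exec: start a fresh bundle with one member */
struct anonmm *exec_rmap_sketch(void) {
	struct anonhd *hd = malloc(sizeof(*hd));
	struct anonmm *a  = malloc(sizeof(*a));
	hd->count = 1;
	list_init(&hd->list);
	a->head = hd;
	list_add(&a->node, &hd->list);
	return a;
}

/* fork: the child joins the parent's bundle */
struct anonmm *dup_rmap_sketch(struct anonmm *parent) {
	struct anonmm *a = malloc(sizeof(*a));
	a->head = parent->head;
	a->head->count++;
	list_add(&a->node, &a->head->list);
	return a;
}

/* exit: leave the bundle; the last member out frees the head */
void exit_rmap_sketch(struct anonmm *a) {
	struct anonhd *hd = a->head;
	list_del(&a->node);
	free(a);
	if (--hd->count == 0)
		free(hd);
}
```

This is only the lifetime plumbing; the point of the scheme is that, given any member anonmm and the page's one virtual address, rmap can walk the bundle's list to find every mm that might map the page.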
From: Hugh Dickins <hugh@veritas.com>
Two concurrent exits (of the last two mms sharing the anonhd) can race:

1. The first exit_rmap brings anonhd->count down to 2, then is preempted (at
   the spin_unlock) by the second.
2. The second exit_rmap brings anonhd->count down to 1, sees that it is 1,
   and frees the anonhd (without making any change to anonhd->count itself).
3. That cpu goes on to do something new which reallocates the old anonhd as
   a new struct anonmm (probably not a head, in which case count will start
   at 1).
4. The first exit_rmap resumes after its spin_unlock, sees anonhd->count 1,
   and frees "anonhd" again.
5. The memory is then used for something else; a later exit_rmap list_del
   finds the list corrupt.
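The bug is the classic "decrement, drop the lock, then re-read the count to decide whether to free" pattern. The safe shape makes "did I take the count to zero?" a single step, so exactly one exiting task frees the head. A userspace illustration of that pattern with C11 atomics (this sketches the general technique, not the kernel's code, which uses its own atomic_t and spinlock primitives):

```c
#include <stdatomic.h>
#include <stdlib.h>

struct head { atomic_int count; };

struct head *head_get(struct head *h) {
	atomic_fetch_add(&h->count, 1);
	return h;
}

/* Returns 1 if this caller dropped the last reference (and freed h).
 * fetch_sub returns the *old* value, so only the one caller that takes
 * the count from 1 to 0 sees 1 here -- no window in which two exiting
 * tasks can both observe count == 1 and free the head twice. */
int head_put(struct head *h) {
	if (atomic_fetch_sub(&h->count, 1) == 1) {
		free(h);
		return 1;
	}
	return 0;
}
```

The same discipline holds under a spinlock instead of atomics: decrement and test inside one critical section, and never touch the object after the decision to free has been made by anyone.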
Diffstat (limited to 'kernel')
-rw-r--r-- | kernel/fork.c | 18 |
1 file changed, 16 insertions, 2 deletions
```diff
diff --git a/kernel/fork.c b/kernel/fork.c
index 3f0ddd189004bb..47b8f8e6f78730 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -34,6 +34,7 @@
 #include <linux/ptrace.h>
 #include <linux/mount.h>
 #include <linux/audit.h>
+#include <linux/rmap.h>
 
 #include <asm/pgtable.h>
 #include <asm/pgalloc.h>
@@ -419,9 +420,14 @@ struct mm_struct * mm_alloc(void)
 	mm = allocate_mm();
 	if (mm) {
 		memset(mm, 0, sizeof(*mm));
-		return mm_init(mm);
+		mm = mm_init(mm);
+		if (mm && exec_rmap(mm)) {
+			mm_free_pgd(mm);
+			free_mm(mm);
+			mm = NULL;
+		}
 	}
-	return NULL;
+	return mm;
 }
 
 /*
@@ -448,6 +454,7 @@ void mmput(struct mm_struct *mm)
 		spin_unlock(&mmlist_lock);
 		exit_aio(mm);
 		exit_mmap(mm);
+		exit_rmap(mm);
 		mmdrop(mm);
 	}
 }
@@ -551,6 +558,12 @@ static int copy_mm(unsigned long clone_flags, struct task_struct * tsk)
 	if (!mm_init(mm))
 		goto fail_nomem;
 
+	if (dup_rmap(mm, oldmm)) {
+		mm_free_pgd(mm);
+		free_mm(mm);
+		goto fail_nomem;
+	}
+
 	if (init_new_context(tsk,mm))
 		goto fail_nocontext;
 
@@ -1262,4 +1275,5 @@ void __init proc_caches_init(void)
 	mm_cachep = kmem_cache_create("mm_struct",
 			sizeof(struct mm_struct), 0,
 			SLAB_HWCACHE_ALIGN|SLAB_PANIC, NULL, NULL);
+	init_rmap();
 }
```