author	Andrew Morton <akpm@osdl.org>	2004-05-22 08:07:26 -0700
committer	Linus Torvalds <torvalds@ppc970.osdl.org>	2004-05-22 08:07:26 -0700
commit	16ceff2d5dc9f0347ab5a08abff3f4647c2fee04 (patch)
tree	91bea65a789e1c8c766831d0925fc10c51ca922f /kernel
parent	b124bc14b39502c8b46e3af5b12f821670e82298 (diff)
[PATCH] rmap 22 flush_dcache_mmap_lock
From: Hugh Dickins <hugh@veritas.com>
arm and parisc __flush_dcache_page have been scanning the i_mmap(_shared) list
without locking or disabling preemption.  That may be even more unsafe now that
it's a prio tree instead of a list.
It looks like we cannot use i_shared_lock for this protection: most uses of
flush_dcache_page are okay, and only one would need lock ordering fixed
(get_user_pages holds page_table_lock across flush_dcache_page); but there are
a few (e.g. in net and ntfs) which look as if they're using it in I/O
completion - and it would be restrictive to disallow it there.
So, on arm and parisc only, define flush_dcache_mmap_lock(mapping) as
spin_lock_irq(&(mapping)->tree_lock); on i386 (and other arches left to the
next patch) define it away to nothing; and use where needed.
While updating locking hierarchy in filemap.c, remove two layers of the fossil
record from add_to_page_cache comment: no longer used for swap.
I believe all the #includes will work out, but have only built i386. I can
see several things about this patch which might cause revulsion: the name
flush_dcache_mmap_lock? the reuse of the page radix_tree's tree_lock for this
different purpose? spin_lock_irqsave instead? can't we somehow get
i_shared_lock to handle the problem?
Diffstat (limited to 'kernel')
-rw-r--r--	kernel/fork.c	2
1 file changed, 2 insertions, 0 deletions
diff --git a/kernel/fork.c b/kernel/fork.c
index 3eb6ca91d29ac..ef85a909e171f 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -332,7 +332,9 @@ static inline int dup_mmap(struct mm_struct * mm, struct mm_struct * oldmm)
 			/* insert tmp into the share list, just after mpnt */
 			spin_lock(&file->f_mapping->i_mmap_lock);
+			flush_dcache_mmap_lock(mapping);
 			vma_prio_tree_add(tmp, mpnt);
+			flush_dcache_mmap_unlock(mapping);
 			spin_unlock(&file->f_mapping->i_mmap_lock);
 		}