author:    Andrew Morton <akpm@digeo.com>  2003-04-20 00:28:04 -0700
committer: Linus Torvalds <torvalds@home.transmeta.com>  2003-04-20 00:28:04 -0700
commit:    efbb77b282f9173b4c183aa8ae30772bcb5e580f (patch)
tree:      60ab96a0a8ce2f2a8399333fa16b15cd403f3f54 /ipc
parent:    bb4552505ff39c9f3dd978c143fa5d258cfd3bd6 (diff)
download:  history-efbb77b282f9173b4c183aa8ae30772bcb5e580f.tar.gz
[PATCH] shmdt() speedup
From: William Lee Irwin III <wli@holomorphy.com>
Micro-optimize sys_shmdt(). Knowledge of the vma's being searched can be
exploited to restrict the search space:
(1) shm mappings always start their lives at file offset 0, so only
vma's above shmaddr need be considered. find_vma() can be used
to seek to the proper position in mm->mmap in O(lg(n)) time.
(2) The search is for a vma which could be a fragment of a broken-up
shm mapping, which would have been created starting at shmaddr
with vm_pgoff 0 and then continued no further into userspace
than shmaddr + size. So after having found an initial vma, find
the size of the shm segment it maps to calculate an upper bound
to the virtualspace that needs to be searched.
(3) mremap() would have caused the original checks to miss vma's mapping
the shm segment if shmaddr were the original address at which
the shm segments were attached. This does no better and no worse
than the original code in that situation.
(4) If the chain of references in vma->vm_file->f_dentry->d_inode->i_size
is not guaranteed by refcounting and/or the shm code then this is
oopsable; AFAICT an inode is always allocated.
Diffstat (limited to 'ipc')
-rw-r--r--  ipc/shm.c  59
1 file changed, 52 insertions(+), 7 deletions(-)
diff --git a/ipc/shm.c b/ipc/shm.c
index e9726493790029..c822dc7872f59d 100644
--- a/ipc/shm.c
+++ b/ipc/shm.c
@@ -737,21 +737,66 @@ out:
  * detach and kill segment if marked destroyed.
  * The work is done in shm_close.
  */
-asmlinkage long sys_shmdt (char *shmaddr)
+asmlinkage long sys_shmdt(char *shmaddr)
 {
 	struct mm_struct *mm = current->mm;
-	struct vm_area_struct *shmd, *shmdnext;
+	struct vm_area_struct *vma, *next;
+	unsigned long addr = (unsigned long)shmaddr;
+	loff_t size = 0;
 	int retval = -EINVAL;
 
 	down_write(&mm->mmap_sem);
-	for (shmd = mm->mmap; shmd; shmd = shmdnext) {
-		shmdnext = shmd->vm_next;
-		if ((shmd->vm_ops == &shm_vm_ops || (shmd->vm_flags & VM_HUGETLB))
-		    && shmd->vm_start - (shmd->vm_pgoff << PAGE_SHIFT) == (ulong) shmaddr) {
-			do_munmap(mm, shmd->vm_start, shmd->vm_end - shmd->vm_start);
+
+	/*
+	 * If it had been mremap()'d, the starting address would not
+	 * match the usual checks anyway. So assume all vma's are
+	 * above the starting address given.
+	 */
+	vma = find_vma(mm, addr);
+
+	while (vma) {
+		next = vma->vm_next;
+
+		/*
+		 * Check if the starting address would match, i.e. it's
+		 * a fragment created by mprotect() and/or munmap(), or
+		 * otherwise it starts at this address with no hassles.
+		 */
+		if ((vma->vm_ops == &shm_vm_ops || is_vm_hugetlb_page(vma)) &&
+		    (vma->vm_start - addr)/PAGE_SIZE == vma->vm_pgoff) {
+
+			size = vma->vm_file->f_dentry->d_inode->i_size;
+			do_munmap(mm, vma->vm_start, vma->vm_end - vma->vm_start);
+			/*
+			 * We discovered the size of the shm segment, so
+			 * break out of here and fall through to the next
+			 * loop that uses the size information to stop
+			 * searching for matching vma's.
+			 */
 			retval = 0;
+			vma = next;
+			break;
 		}
+		vma = next;
 	}
+
+	/*
+	 * We need look no further than the maximum address a fragment
+	 * could possibly have landed at. Also cast things to loff_t to
+	 * prevent overflows and make comparisons vs. equal-width types.
+	 */
+	while (vma && (loff_t)(vma->vm_end - addr) <= size) {
+		next = vma->vm_next;
+
+		/* finding a matching vma now does not alter retval */
+		if ((vma->vm_ops == &shm_vm_ops || is_vm_hugetlb_page(vma)) &&
+		    (vma->vm_start - addr)/PAGE_SIZE == vma->vm_pgoff)
+
+			do_munmap(mm, vma->vm_start, vma->vm_end - vma->vm_start);
+		vma = next;
+	}
+
 	up_write(&mm->mmap_sem);
 	return retval;
 }