author     Peter Xu <peterx@redhat.com>  2023-01-04 17:52:05 -0500
committer  Greg Kroah-Hartman <gregkh@linuxfoundation.org>  2023-01-24 07:24:36 +0100
commit     3b8ede66658cd83db17a370152535e464786c14e (patch)
tree       44d9100329f642f87ad9954c9f4410e659b8efb3 /mm
parent     8d6a675cd744c1af2688778fba2404fdf9737ddd (diff)
download   linux-3b8ede66658cd83db17a370152535e464786c14e.tar.gz
mm/hugetlb: pre-allocate pgtable pages for uffd wr-protects
commit fed15f1345dc8a7fc8baa81e8b55c3ba010d7f4b upstream.

Userfaultfd-wp uses pte markers to mark wr-protected pages for both shmem and hugetlb. Shmem has pre-allocation ready for markers, but the hugetlb path was overlooked.

Fix this by calling huge_pte_alloc() when the initial pgtable walk fails to find the huge ptep. It is possible for huge_pte_alloc() to fail under high memory pressure; in that case, stop the loop immediately and fail silently. This is not the most ideal solution, but it matches what we do for shmem, and it avoids the splat in dmesg.

Link: https://lkml.kernel.org/r/20230104225207.1066932-2-peterx@redhat.com
Fixes: 60dfaad65aa9 ("mm/hugetlb: allow uffd wr-protect none ptes")
Signed-off-by: Peter Xu <peterx@redhat.com>
Reported-by: James Houghton <jthoughton@google.com>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Acked-by: David Hildenbrand <david@redhat.com>
Acked-by: James Houghton <jthoughton@google.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Axel Rasmussen <axelrasmussen@google.com>
Cc: Muchun Song <songmuchun@bytedance.com>
Cc: Nadav Amit <nadav.amit@gmail.com>
Cc: <stable@vger.kernel.org> [5.19+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
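For context, hugetlb_change_protection() runs with uffd_wp set when userspace write-protects a registered hugetlb range through userfaultfd. Below is a minimal userspace sketch of that sequence, not part of the patch: the 2 MB mapping size LEN, the error handling, and the assumption that a free huge page is available are illustrative, and it requires a kernel with hugetlb uffd-wp support (5.19+).

/*
 * Minimal userspace sketch (illustrative only, not part of this patch):
 * write-protect an anonymous hugetlb mapping via userfaultfd, which
 * exercises hugetlb_change_protection() with uffd_wp set.
 */
#include <fcntl.h>
#include <linux/userfaultfd.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

#define LEN (2UL * 1024 * 1024)	/* one 2 MB huge page (assumption) */

int main(void)
{
	int uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);
	if (uffd < 0) {
		perror("userfaultfd");
		return 1;
	}

	struct uffdio_api api = { .api = UFFD_API };
	if (ioctl(uffd, UFFDIO_API, &api)) {
		perror("UFFDIO_API");
		return 1;
	}

	void *addr = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
			  MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
	if (addr == MAP_FAILED) {
		perror("mmap(MAP_HUGETLB)");
		return 1;
	}

	/* Register the hugetlb range for write-protect tracking. */
	struct uffdio_register reg = {
		.range = { .start = (unsigned long)addr, .len = LEN },
		.mode  = UFFDIO_REGISTER_MODE_WP,
	};
	if (ioctl(uffd, UFFDIO_REGISTER, &reg)) {
		perror("UFFDIO_REGISTER");
		return 1;
	}

	/*
	 * Write-protect the still-unpopulated range.  With no huge ptep
	 * present yet, the kernel must pre-allocate pgtable pages so it has
	 * somewhere to install the pte markers -- the case fixed here.
	 */
	struct uffdio_writeprotect wp = {
		.range = { .start = (unsigned long)addr, .len = LEN },
		.mode  = UFFDIO_WRITEPROTECT_MODE_WP,
	};
	if (ioctl(uffd, UFFDIO_WRITEPROTECT, &wp)) {
		perror("UFFDIO_WRITEPROTECT");
		return 1;
	}

	printf("hugetlb range wr-protected via uffd-wp\n");
	munmap(addr, LEN);
	close(uffd);
	return 0;
}

The UFFDIO_WRITEPROTECT call on a not-yet-faulted range is what reaches the huge_pte_offset() == NULL path changed by the hunk below.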
Diffstat (limited to 'mm')
-rw-r--r--  mm/hugetlb.c | 13 +++++++++++--
1 file changed, 11 insertions(+), 2 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 3fc5a214f0b99a..52475c4262e45e 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6604,8 +6604,17 @@ unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
 		spinlock_t *ptl;
 		ptep = huge_pte_offset(mm, address, psize);
 		if (!ptep) {
-			address |= last_addr_mask;
-			continue;
+			if (!uffd_wp) {
+				address |= last_addr_mask;
+				continue;
+			}
+			/*
+			 * Userfaultfd wr-protect requires pgtable
+			 * pre-allocations to install pte markers.
+			 */
+			ptep = huge_pte_alloc(mm, vma, address, psize);
+			if (!ptep)
+				break;
 		}
 		ptl = huge_pte_lock(h, mm, ptep);
 		if (huge_pmd_unshare(mm, vma, address, ptep)) {