author		Zi Yan <ziy@nvidia.com>	2023-09-13 16:12:47 -0400
committer	Andrew Morton <akpm@linux-foundation.org>	2023-10-04 10:32:29 -0700
commit		8db0ec791f7788cd21e7f91ee5ff42c1c458d0e7 (patch)
tree		b3785e94b17152afa828497619f1f069e46718a9	/fs/hugetlbfs/inode.c
parent		1640a0ef80f6d572725f5b0330038c18e98ea168 (diff)
download	linux-8db0ec791f7788cd21e7f91ee5ff42c1c458d0e7.tar.gz
fs: use nth_page() in place of direct struct page manipulation
When dealing with hugetlb pages, struct page is not guaranteed to be
contiguous on SPARSEMEM without VMEMMAP.  Use nth_page() to handle it
properly.

Without the fix, a wrong subpage might be checked for HWPoison, causing the
wrong number of bytes of a page to be copied to user space.  No bug is
reported.  The fix comes from code inspection.

Link: https://lkml.kernel.org/r/20230913201248.452081-5-zi.yan@sent.com
Fixes: 38c1ddbde6c6 ("hugetlbfs: improve read HWPOISON hugepage")
Signed-off-by: Zi Yan <ziy@nvidia.com>
Reviewed-by: Muchun Song <songmuchun@bytedance.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Mike Rapoport (IBM) <rppt@kernel.org>
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
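For context: nth_page() exists because plain pointer arithmetic on struct page
is only valid when the memmap is virtually contiguous.  At the time of this
commit the macro was defined in include/linux/mm.h along the following lines
(a sketch from the kernel headers of that era, not a verbatim quote):

	/*
	 * On SPARSEMEM without VMEMMAP, the struct pages backing a compound
	 * page may sit in different, non-adjacent memory sections, so stepping
	 * to the next subpage must go through the pfn rather than the pointer.
	 */
	#if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
	#define nth_page(page, n)	pfn_to_page(page_to_pfn((page)) + (n))
	#else
	#define nth_page(page, n)	((page) + (n))
	#endif

On configurations with a vmemmap (or FLATMEM), nth_page(page, n) compiles
down to (page) + (n), so the fix costs nothing there.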
Diffstat (limited to 'fs/hugetlbfs/inode.c')
-rw-r--r--	fs/hugetlbfs/inode.c	4
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index 316c4cebd3f3de..60fce26ff9378d 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -295,7 +295,7 @@ static size_t adjust_range_hwpoison(struct page *page, size_t offset, size_t byt
 	size_t res = 0;
 
 	/* First subpage to start the loop. */
-	page += offset / PAGE_SIZE;
+	page = nth_page(page, offset / PAGE_SIZE);
 	offset %= PAGE_SIZE;
 	while (1) {
 		if (is_raw_hwpoison_page_in_hugepage(page))
@@ -309,7 +309,7 @@ static size_t adjust_range_hwpoison(struct page *page, size_t offset, size_t byt
 			break;
 		offset += n;
 		if (offset == PAGE_SIZE) {
-			page++;
+			page = nth_page(page, 1);
 			offset = 0;
 		}
 	}
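For readers without the tree at hand, the complete helper after this patch
reads roughly as follows; the lines outside the two hunks above are
reconstructed from fs/hugetlbfs/inode.c around this commit, so treat them as
a sketch rather than a verbatim quote:

	static size_t adjust_range_hwpoison(struct page *page, size_t offset,
					    size_t bytes)
	{
		size_t n = 0;
		size_t res = 0;

		/* First subpage to start the loop. */
		page = nth_page(page, offset / PAGE_SIZE);
		offset %= PAGE_SIZE;
		while (1) {
			if (is_raw_hwpoison_page_in_hugepage(page))
				break;

			/* Safe to read n bytes without touching the hwpoison subpage. */
			n = min(bytes, (size_t)PAGE_SIZE - offset);
			res += n;
			bytes -= n;
			if (!bytes || !n)
				break;
			offset += n;
			if (offset == PAGE_SIZE) {
				page = nth_page(page, 1);
				offset = 0;
			}
		}

		return res;
	}

Both changed sites advance through the compound page's subpages; with
nth_page() the walk stays correct even when consecutive struct pages cross a
memory-section boundary.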