author     Yajun Deng <yajun.deng@linux.dev>            2023-06-19 10:34:06 +0800
committer  Andrew Morton <akpm@linux-foundation.org>    2023-06-23 16:59:27 -0700
commit     61167ad5fecdeaa037f3df1ba354dddd5f66a1ed (patch)
tree       b0e08754150ba76f267e9181f2a6173f2d99776c /include/linux/mm.h
parent     9883c7f84053cec2826ca3c56254601b5ce9cdbe (diff)
download   linux-61167ad5fecdeaa037f3df1ba354dddd5f66a1ed.tar.gz
mm: pass nid to reserve_bootmem_region()
early_pfn_to_nid() is called frequently in init_reserved_page(); it returns the node id of the PFN. These PFNs probably come from the same memory region, so they share the same node id, and it is not necessary to call early_pfn_to_nid() for each PFN. Pass nid to reserve_bootmem_region() and drop the call to early_pfn_to_nid() in init_reserved_page(). Also, set the nid on all reserved pages beforehand, as some reserved memory regions may not have a nid set yet.

The function that benefits most is memmap_init_reserved_pages(), when CONFIG_DEFERRED_STRUCT_PAGE_INIT is enabled. The following data was measured on an x86 machine with 190GB of RAM.

before: memmap_init_reserved_pages()  67ms

after:  memmap_init_reserved_pages()  20ms

Link: https://lkml.kernel.org/r/20230619023406.424298-1-yajun.deng@linux.dev
Signed-off-by: Yajun Deng <yajun.deng@linux.dev>
Reviewed-by: Mike Rapoport (IBM) <rppt@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
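For illustration only, here is a minimal sketch (not the exact mm/mm_init.c hunk, which is outside this diffstat) of how the reserving loop can resolve the node id once per memblock region and hand it to the new reserve_bootmem_region() signature, instead of looking it up for every PFN:

	/*
	 * Hedged sketch: resolve the nid once per reserved memblock region
	 * and pass it down, rather than calling early_pfn_to_nid() for each
	 * PFN. Helper names follow the memblock API; the exact body of the
	 * upstream memmap_init_reserved_pages() may differ.
	 */
	static void __init memmap_init_reserved_pages(void)
	{
		struct memblock_region *region;
		phys_addr_t start, end;
		int nid;

		for_each_reserved_mem_region(region) {
			/* one nid lookup per region, not per page */
			nid = memblock_get_region_node(region);
			start = region->base;
			end = start + region->size;

			reserve_bootmem_region(start, end, nid);
		}
	}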
Diffstat (limited to 'include/linux/mm.h')
-rw-r--r--  include/linux/mm.h  3
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index cf43deb25553d..9ecb8b9c07f6b 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2940,7 +2940,8 @@ extern unsigned long free_reserved_area(void *start, void *end,
extern void adjust_managed_page_count(struct page *page, long count);
-extern void reserve_bootmem_region(phys_addr_t start, phys_addr_t end);
+extern void reserve_bootmem_region(phys_addr_t start,
+				   phys_addr_t end, int nid);
/* Free the reserved page into the buddy system, so it gets managed. */
static inline void free_reserved_page(struct page *page)
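For completeness, a hedged sketch of the definition side (in mm/mm_init.c, outside this diffstat): reserve_bootmem_region() can now forward the caller-supplied nid into per-page initialization instead of letting init_reserved_page() redo early_pfn_to_nid() for every PFN. This is simplified for illustration; the upstream body also deals with deferred struct page initialization and is not reproduced verbatim.

	/*
	 * Hedged sketch: forward the nid provided by the caller to the
	 * per-page init path. Simplified relative to the real function.
	 */
	void __meminit reserve_bootmem_region(phys_addr_t start,
					      phys_addr_t end, int nid)
	{
		unsigned long start_pfn = PFN_DOWN(start);
		unsigned long end_pfn = PFN_UP(end);
		unsigned long pfn;

		for (pfn = start_pfn; pfn < end_pfn; pfn++) {
			if (pfn_valid(pfn)) {
				struct page *page = pfn_to_page(pfn);

				/* nid passed through, no early_pfn_to_nid() */
				init_reserved_page(pfn, nid);

				/* struct page is not visible yet, non-atomic set is fine */
				__SetPageReserved(page);
			}
		}
	}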