.. _memory_allocation:

=======================
Memory Allocation Guide
=======================

Linux provides a variety of APIs for memory allocation. You can
allocate small chunks using `kmalloc` or `kmem_cache_alloc` families,
large virtually contiguous areas using `vmalloc` and its derivatives,
or you can directly request pages from the page allocator with
`alloc_pages`. It is also possible to use more specialized allocators,
for instance `cma_alloc` or `zs_malloc`.

Most of the memory allocation APIs use GFP flags to express how that
memory should be allocated. The GFP acronym stands for "get free
pages", the underlying memory allocation function.

The diversity of the allocation APIs combined with the numerous GFP
flags makes the question "How should I allocate memory?"
not that easy to answer, although very likely you should use::

  kzalloc(<size>, GFP_KERNEL);

Of course there are cases when other allocation APIs and different GFP
flags must be used.

Get Free Page flags
===================

The GFP flags control the allocator's behavior. They tell which memory
zones can be used, how hard the allocator should try to find free
memory, whether the memory can be accessed by userspace, etc. The
:ref:`Documentation/core-api/mm-api.rst <mm-api-gfp-flags>` provides
reference documentation for the GFP flags and their combinations and
here we briefly outline their recommended usage:

* Most of the time ``GFP_KERNEL`` is what you need.
  Memory for the kernel data structures, DMAable memory, inode cache,
  all these and many other allocation types can use ``GFP_KERNEL``.
  Note that using ``GFP_KERNEL`` implies ``__GFP_RECLAIM``, which
  means that direct reclaim may be triggered under memory pressure;
  the calling context must be allowed to sleep.

* If the allocation is performed from an atomic context, e.g. an
  interrupt handler, use ``GFP_NOWAIT``. This flag prevents direct
  reclaim and IO or filesystem operations. Consequently, under memory
  pressure a ``GFP_NOWAIT`` allocation is likely to fail. Users of
  this flag need to provide a suitable fallback to cope with such
  failures where appropriate.

* If you think that accessing memory reserves is justified and the
  kernel will be stressed unless the allocation succeeds, you may use
  ``GFP_ATOMIC``.

* Untrusted allocations triggered from userspace should be subject to
  kmem accounting and must have the ``__GFP_ACCOUNT`` bit set. There
  is the handy ``GFP_KERNEL_ACCOUNT`` shortcut for ``GFP_KERNEL``
  allocations that should be accounted.

* Userspace allocations should use either of the ``GFP_USER``,
  ``GFP_HIGHUSER`` or ``GFP_HIGHUSER_MOVABLE`` flags. The longer the
  flag name the less restrictive it is.

  ``GFP_HIGHUSER_MOVABLE`` does not require that the allocated memory
  will be directly accessible by the kernel and implies that the data
  is movable.

  ``GFP_HIGHUSER`` means that the allocated memory is not movable, but
  it is not required to be directly accessible by the kernel. An
  example may be a hardware allocation that maps data directly into
  userspace but has no addressing limitations.

  ``GFP_USER`` means that the allocated memory is not movable and it
  must be directly accessible by the kernel.
You may notice that quite a few allocations in the existing code
specify ``GFP_NOIO`` or ``GFP_NOFS``. Historically, they were used to
prevent recursion deadlocks caused by direct memory reclaim calling
back into the FS or IO paths and blocking on already held resources.
Since 4.12 the preferred way to address this issue is to use the new
scope APIs described in
:ref:`Documentation/core-api/gfp_mask-from-fs-io.rst <gfp_mask_from_fs_io>`.

Other legacy GFP flags are ``GFP_DMA`` and ``GFP_DMA32``. They are
used to ensure that the allocated memory is accessible by hardware
with limited addressing capabilities. So unless you are writing a
driver for a device with such restrictions, avoid using these flags.
And even for hardware with such restrictions it is preferable to use
the `dma_alloc*` APIs.

GFP flags and reclaim behavior
------------------------------

Memory allocations may trigger direct or background reclaim and it is
useful to understand how hard the page allocator will try to satisfy a
given request.

* ``GFP_KERNEL & ~__GFP_RECLAIM`` - optimistic allocation without
  _any_ attempt to free memory at all. The most lightweight mode,
  which doesn't even kick the background reclaim. Should be used
  carefully because it might deplete the memory and the next user
  might hit the more aggressive reclaim.

* ``GFP_KERNEL & ~__GFP_DIRECT_RECLAIM`` (or ``GFP_NOWAIT``) -
  optimistic allocation without any attempt to free memory from the
  current context, but it can wake kswapd to reclaim memory if the
  zone is below the low watermark. Can be used from either atomic
  contexts or when the request is a performance optimization and there
  is another fallback for a slow path.

* ``(GFP_KERNEL|__GFP_HIGH) & ~__GFP_DIRECT_RECLAIM`` (aka
  ``GFP_ATOMIC``) - non-sleeping allocation with an expensive fallback
  so it can access some portion of memory reserves. Usually used from
  interrupt/bottom-half context with an expensive slow path fallback.

* ``GFP_KERNEL`` - both background and direct reclaim are allowed and
  the **default** page allocator behavior is used. That means that
  non-costly allocation requests are basically no-fail, but there is
  no guarantee of that behavior, so failures have to be checked
  properly by callers (e.g. an OOM killer victim is currently allowed
  to fail).
* ``GFP_KERNEL | __GFP_NORETRY`` - overrides the default allocator
  behavior and all allocation requests fail early rather than cause
  disruptive reclaim (one round of reclaim in this implementation).
  The OOM killer is not invoked.

* ``GFP_KERNEL | __GFP_RETRY_MAYFAIL`` - overrides the default
  allocator behavior and all allocation requests try really hard. The
  request will fail if the reclaim cannot make any progress. The OOM
  killer won't be triggered.

* ``GFP_KERNEL | __GFP_NOFAIL`` - overrides the default allocator
  behavior and all allocation requests will loop endlessly until they
  succeed. This might be really dangerous, especially for larger
  orders.
Selecting memory allocator
==========================

The most straightforward way to allocate memory is to use a function
from the kmalloc() family. To be on the safe side it's best to use
routines that set the memory to zero, like kzalloc(). If you need to
allocate memory for an array, there are the kmalloc_array() and
kcalloc() helpers. The helpers struct_size(), array_size() and
array3_size() can be used to safely calculate object sizes without
overflowing.

The maximal size of a chunk that can be allocated with `kmalloc` is
limited.
The actual limit depends on the hardware and the kernel configuration,
but it is a good practice to use `kmalloc` for objects smaller than
page size.

The address of a chunk allocated with `kmalloc` is aligned to at least
ARCH_KMALLOC_MINALIGN bytes. For sizes which are a power of two, the
alignment is also guaranteed to be at least the respective size. For
other sizes, the alignment is guaranteed to be at least the largest
power-of-two divisor of the size.

Chunks allocated with kmalloc() can be resized with krealloc().
Similarly to kmalloc_array(), a helper for resizing arrays is provided
in the form of krealloc_array().

For large allocations you can use vmalloc() and vzalloc(), or directly
request pages from the page allocator.
The memory allocated by `vmalloc` and related functions is not
physically contiguous.

If you are not sure whether the allocation size is too large for
`kmalloc`, it is possible to use kvmalloc() and its derivatives. It
will try to allocate memory with `kmalloc` and if the allocation fails
it will be retried with `vmalloc`. There are restrictions on which GFP
flags can be used with `kvmalloc`; please see the kvmalloc_node()
reference documentation. Note that `kvmalloc` may return memory that
is not physically contiguous.

If you need to allocate many identical objects you can use the slab
cache allocator.
The cache should be set up with kmem_cache_create() or
kmem_cache_create_usercopy() before it can be used. The second
function should be used if a part of the cache might be copied to
userspace. After the cache is created, kmem_cache_alloc() and its
convenience wrappers can allocate memory from that cache.

When the allocated memory is no longer needed it must be freed.

Objects allocated by `kmalloc` can be freed by `kfree` or `kvfree`.
Objects allocated by `kmem_cache_alloc` can be freed with
`kmem_cache_free`, `kfree` or `kvfree`, where the latter two might be
more convenient thanks to not needing the kmem_cache pointer.

The same rules apply to the _bulk and _rcu flavors of the freeing
functions.

Memory allocated by `vmalloc` can be freed with `vfree` or `kvfree`.
Memory allocated by `kvmalloc` can be freed with `kvfree`.
Caches created by `kmem_cache_create` should be freed with
`kmem_cache_destroy`, but only after all the allocated objects have
been freed.